As Hurricane Irene rumbles through the Atlantic Ocean, it needs fuel to sustain itself. Warm water is the main fuel, and there is plenty of it right now, as there usually is this time of year.
The map above shows sea surface temperatures (SST) in the Atlantic Ocean, Gulf of Mexico, and the Caribbean Sea on August 23, 2011. The measurements come from the Advanced Microwave Scanning Radiometer (AMSR-E) on NASA’s Aqua satellite and the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments on both the Terra and Aqua satellites. The satellites measure the temperature of the top millimeter of the ocean.
Waters typically need to be above 27.8 degrees Celsius (82 Fahrenheit) to properly fuel tropical storms with warm, moist air. Red, orange, and yellow colors depict waters above the 27.8 degree mark. The warmer the water, the more intense the storm can grow, if upper level wind patterns cooperate. In the map above, such waters dominate the Gulf of Mexico and tropical Atlantic in late August 2011. They also run up the southeastern coast of the United States, following the Gulf Stream to Cape Hatteras before giving way to slightly cooler waters (shades of blue) in the Middle and North Atlantic.
As of 5 p.m. Eastern Daylight Time on August 24, 2011, the NOAA National Hurricane Center reported Irene had maximum sustained winds of 195 kilometers (120 miles) per hour and was located at 23.1 degrees North and 74.7 degrees West, about 45 kilometers (30 miles) east-southeast of Long Island in the Bahamas. The forecasted path had the hurricane sweeping over nearly all Bahaman islands, then turning toward the North Carolina coast and eventually New England. Forecasts are updated roughly every six hours.
Irene is the first hurricane of the Atlantic season, and potentially the first to make landfall in the United States in several years.
- National Hurricane Center. (2011, August 24). Hurricane Irene. National Weather Service. Accessed August 24, 2011.
- NASA Earth Observatory. (n.d.). Global Maps: Sea Surface Temperature. Accessed August 24, 2011.
NASA Earth Observatory image by Jesse Allen, using merged AMSR-E/MODIS data processed and provided by Chelle Gentemann and Frank Wentz, Remote Sensing Systems. Caption by Michael Carlowicz.
- Aqua - AMSR-E
common name: robber flies
scientific name: Asilidae (Insecta: Diptera: Asilidae)
Introduction - Distribution - Description - Biology - Behavior - Key to the Subfamilies of Florida Asilidae - Selected References
The robber flies are an abundant and diverse family (Asilidae) known for their predatory behavior. Asilidae diversity can be attributed to their broad distribution, as most species tend to occupy a selective niche. As their common name implies, robber flies have voracious appetites and feed on a vast array of other arthropods, which may help to maintain a healthy balance between insect populations in various habitats (Joern and Rudd 1982, Shurovnekov 1962). Asilidae adults attack wasps, bees, dragonflies, grasshoppers, other flies, and some spiders. Robber flies are particularly abundant in arid and sunny habitats, which are optimal conditions in which to observe their many morphs and behaviors.
Figure 1. Adult female Dysmachus trigonus, a robber fly. Photograph by Fritz Geller-Grimm.
Figure 2. Adult Dioctria media Banks, a robber fly. Photograph by Ken Gray, Oregon State University.
The Asilidae enjoy a worldwide distribution, with some groups limited to certain regions (Hull 1962). For instance, the subfamily Megapodinae is unique to the Neotropical region. Large island chains tend to encompass abundant asilid faunas, particularly those south of Asia. By contrast, smaller islands such as the Hawaiian chain have no indigenous or introduced species (Hull 1962). The majority of robber fly species are found in dry, sandy conditions, as confirmed by the diversity of species found in such locales. Some species are well adapted to desert climates, where they are known to thermoregulate in response to temperature variations throughout the day (O'Neill et al. 1988, Morgan and Shelly 1988, O'Neill and Kemp 1990). Few species occur in woodland areas, and those that do tend to aggregate along the edges, near grasslands. In Florida, all four subfamilies of Asilidae (Asilinae, Dasypogoninae, Laphriinae, and Leptogastrinae) are present. Within these subfamilies, the following genera are known to exist in Florida:
Figure 3. The "Florida bee killer," Mallophora bomboides (Wiedemann), with honey bee prey. Image taken near Wimauma, Florida. Photograph by Nancy West, University of Florida.
Figure 4. Adult Stenopogon sp., a robber fly. Photograph by Ken Gray, Oregon State University.
Asilidae are a family of true flies belonging to the superfamily Asiloidea within the suborder Brachycera. To date, there are approximately 7,003 described species of Asilidae distributed throughout the world (Geller-Grimm 2008). There are nearly 1,000 North American species of robber flies, with more than 100 species occurring in Florida. Loew was perhaps the most influential dipterist to contribute information to the study of robber flies, describing several species and more than 80 genera. Other mid-nineteenth century contributors include Macquart, Walker, Rondani, and Bigot. Later, dipterists in the 1900's became specialists of robber flies in particular locales, most notably Curran and Bromley in North America.
All robber flies have a characteristic divot on top of the head, which is located between their especially prominent compound eyes. In general, adult Asilidae have an elongate body with a tapered abdomen. However, some species are stout and hairy, mimicking bumble bees, and still others may be slender and have a damsel fly appearance. Adults range in size from small (3 mm) to very large (over 50 mm), averaging 9 to 15 mm in length (Wood 1981). Robber flies have long, strong legs that are bristled to aid in prey capture. Sexual dimorphisms are not extreme, although females tend to have slightly broader abdomens than males. Most robber flies have a brown, gray, or black coloration.
Figure 5. Adult Dasyllis haemorrhoa, a robber fly mimic of Euglossa dimidiata (Hymenoptera) in Brazil. Photograph by Fritz Geller-Grimm.
Figure 6. Adult Proctacanthus occidentalis Hine, a robber fly. Photograph by Ken Gray, Oregon State University.
Female Asilidae deposit whitish-colored eggs on low-lying plants and grasses, or in crevices within soil, bark, or wood. Egg-laying habits depend on the species and their specific habitat; most species lay their eggs in masses, which are then covered with a chalky protective coating. Robber fly larvae live in the soil or in various other decaying organic materials that occur in their environment. Larvae are also predatory, feeding on eggs, larvae, or other soft-bodied insects. Robber flies overwinter as larvae and pupate in the soil. Pupae migrate to the soil surface and emerge as adults, often leaving behind their pupal casing. Complete development ranges from one to three years, depending on species and environmental conditions. Theodor (1980) proposed that larval growth is accelerated in warmer regions and that many Asilidae species live no longer than one year.
Figure 7. Larva of an unidentified laphriine robber fly. Photograph by Stephen W. Bullington.
Figure 8. Exuviae of an unidentified laphriine robber fly. Photograph by Stephen W. Bullington.
Robber flies are opportunistic predators, their diets often reflecting prey availability in a particular habitat. Shelly (1986) reported that in the nine Neotropical Asilidae species he studied, more than 85% of the diet consisted of insects from the orders Diptera, Coleoptera, Hymenoptera, Homoptera, and Lepidoptera. Furthermore, larger species tended to consume a greater diversity of prey taxa. Robber flies generally establish a perching zone in which to locate potential prey. Perching height varies by species, but generally occurs in open, sunny locations. Asilidae seize their prey in flight and inject their victims with saliva containing neurotoxic and proteolytic enzymes. This injection, inflicted by their modified mouthparts (hypopharynx), rapidly immobilizes prey and digests bodily contents. The robber fly soon has access to a liquid meal, which is generally consumed upon returning to a perched position.
Robber flies exhibit minimal courtship behavior. Instead, the male pounces on the female much like an act of prey acquisition. Copulation is accomplished in a tail-to-tail fashion with the male and female genitalia interlocked. Flight is not completely inhibited during mating.
Figure 9. Robber fly, Stenopogon sp., with an antlion, Palpares libelluloides, prey. Photograph by Mike Taylor.
Figure 10. Mated pair of Dasypogon diadema. Photograph by Fritz Geller-Grimm.
1. Marginal cell open . . . . . 2
1'. Marginal cell closed . . . . . 3
2. Palpi one-jointed; small, slender species; antennae with slender terminal arista . . . . . Leptogastrinae
2'. Palpi two-jointed; antennae with or without a thickened terminal style . . . . . Dasypogoninae
3. Antennae with or without a terminal style, never a terminal arista; palpi two-jointed . . . . . Laphriinae
3'. Antennae with slender terminal arista; palpi one-jointed . . . . . Asilinae
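For readers who like to see a dichotomous key as the small decision procedure it is, the four couplets above can be encoded directly. A minimal sketch (the character tests are paraphrased from the key; this is an illustration, not a substitute for careful use of the key itself):

```python
def florida_asilid_subfamily(marginal_cell_open, palpi_joints, terminal_arista):
    """Walk the four couplets of the key above to a subfamily name."""
    if marginal_cell_open:                        # couplet 1
        if palpi_joints == 1 and terminal_arista:
            return "Leptogastrinae"               # couplet 2
        return "Dasypogoninae"                    # couplet 2'
    if palpi_joints == 2 and not terminal_arista:
        return "Laphriinae"                       # couplet 3
    return "Asilinae"                             # couplet 3'

# Example: marginal cell open, one-jointed palpi, slender terminal arista
print(florida_asilid_subfamily(True, 1, True))    # -> Leptogastrinae
```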
- Barnes JK. (August 2003). Robber Flies. University of Arkansas Arthropod Museum. (26 September 2012).
- Bromley SW. 1950. Florida Asilidae (Diptera) with descriptions of one new species. Annals of the Entomological Society of America 43: 227-239.
- Bullington SW. (July 2008). The Laphriini pages. (26 September 2012).
- Cannings RA. (1998). Robber flies (Insecta: Diptera: Asilidae). (26 September 2012).
- Fasulo TR. (2002). Beneficial Insects 1 and Beneficial Insects 2. Bug Tutorials. University of Florida/IFAS. CD-ROM. SW 153.
- Geller-Grimm F. (January 2008). Robber flies (Asilidae). (26 September 2012).
- Hull FM. 1962. Robber flies of the world. Bulletin of the United States National Museum 224: 1-907.
- Joern A, Rudd NT. 1982. Impact of predation by the robber fly Proctacanthus milbertii (Diptera: Asilidae) on grasshopper (Orthoptera: Acrididae) populations. Oecologia 55: 42-46.
- Mahr S. 1999. Know your friends: robber flies. Midwest Biological Control News 6: 1-2.
- O'Neill KM, Kemp WP, Johnson KA. 1988. Behavioral thermoregulation in three species of robber flies (Diptera: Asilidae: Efferia). Animal Behavior 39: 181-191.
- O'Neill KM, Kemp WP. 1990. Behavioral responses of the robber fly Stenopogon inquinatus (Diptera: Asilidae) to variation in the thermal environment. Environmental Entomology 19: 459-464.
- Morgan KR, Shelly TE. 1988. Body temperature regulation in desert robber flies (Diptera: Asilidae). Ecological Entomology 13: 419-428.
- Shelly TE. 1986. Rates of prey consumption by Neotropical robber flies (Diptera: Asilidae). Biotropica 18: 166-170.
- Shurovnekov BG. 1962. Field entomophagous predators (Coleoptera, Carabidae, and Diptera, Asilidae) and factors determining their efficiency. Entomological Review 41: 476-485.
- Theodor O. 1980. Diptera: Asilidae. Fauna Palestina: Insecta II. The Israel Academy of Sciences and Humanities, Jerusalem. 446 pp.
- Wood GC. 1981. Asilidae. In McAlpine JF, Peterson BV, Shewell GE, Teskey HJ, Vockeroth JR, Wood DM. (Editors): Manual of Nearctic Diptera. Vol. 1 - Research Branch, Agriculture Canada, Monographs 27: 549-573; Ottawa.
In this class activity, students will analyze data on rainfall and dust levels in Africa in order to investigate two competing hypotheses about how climate affects human evolution. They will apply their understanding of orbital forcing factors and the use of proxy variables in order to develop a research plan they could use to study African climate change. This document serves as an instructor guide; student handouts are provided in accompanying documents.
|Download PDF|File size|
|---|---|
|Africa Teach Guide|304.8 KB|
variables in string.count
__peter__ at web.de
Fri Jun 4 08:54:23 CEST 2004
> Can you use variables in string.count or string.find? I tried, and
> IDLE gives me errors.
> Here's the code:
I assume you omitted
list = ["a", "b", "c"]
> while list:
> letter = raw_input("What letter? ")
Strings are immutable, i. e. they cannot be modified. Therefore the above cannot change letter in place; you have to bind the result back to a name:
letter = string.lower(letter)
or even better:
letter = letter.lower()
The string module is rarely needed these days as most functions are also
available as string methods.
> guess = string.count(letter, list)
To check if a string (letter) is in a list, use the "in" operator:
if letter not in list:
    print "Try again!"
> if guess == -1:
> print "Try again!"
> the error is:
> TypeError: expected a character buffer object
Note that list is also a Python builtin and should not be used as a variable
name, to avoid confusing the reader. For a complete list of builtins, type dir(__builtins__) in the interpreter.
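Putting Peter's suggestions together, a self-contained modern version of the guessing loop might look like this (Python 3; the variable names are illustrative, not the original poster's):

```python
# "letters" avoids shadowing the builtin name "list".
letters = ["a", "b", "c"]

while letters:                               # loop until every letter is guessed
    letter = input("What letter? ").lower()  # str.lower() returns a new string
    if letter in letters:                    # membership test via the "in" operator
        letters.remove(letter)
        print("Got it! %d left." % len(letters))
    else:
        print("Try again!")
```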
More information about the Python-list
Endospores and Radiation
Country: United States
Date: November 2008
What properties of endospore make them resistant to UV radiation?
UV photons are not energetic enough to penetrate the spore coating to "hit" the spore's DNA. UV kills by damaging DNA, causing mutations.
Ron Baker, Ph.D.
Click here to return to the Molecular Biology Archives
Update: June 2012
This topic stresses mechanical energy, potential and kinetic, and also describes conversion between types of energy (while conserving the total amount), units, and the special position of heat.
Part of a high school course on astronomy, Newtonian mechanics and spaceflight
by David P. Stern
This lesson plan supplements: "Energy," section #15 http://www.phy6.org/stargaze/Senergy.htm
Goals: The student will learn about
Terms: Energy (potential, kinetic, conservation of), pendulum, joule, calorie, second law of thermodynamics. (kilojoule, kilocalories)
Stories and extras: The energy content of a candy bar. And of TNT
Starting the lesson:
Today we will study energy, so we might just as well start by asking--"What is energy?"
If someone gives the formal definition "ability to do work" ask:
We could redefine it as "overcoming resistance over a distance"--for instance, lifting a brick (against gravity) from the floor to the table, or dragging it along the floor (against friction), and then work equals resistance times distance.
Any of these could also be done by a machine, so for a start we will simply say "energy is anything that can run a machine."
--Light--that was what the electricity in the lightbulb was converted to
--Sound--that was what the electricity in the radio was converted to
--Chemical energy--when you ate breakfast, it gave you strength.
--Heat--if you cooked your food, or heated the house.
--Nuclear energy--if you enjoyed sunlight, because the Sun gets its energy by combining atomic nuclei of hydrogen to helium, deep inside it.
That energy becomes heat, and heat causes light to come out.
And you know you can trade one kind against the other: rolling down a hill, you lose height as you gain speed, and that speed helps you get up the next hill (the same in a roller coaster).
Guiding questions and additional tidbits with suggested answers.
-- When an object falls down from a height h meters, what is the relation between h and its final velocity v, in meters per second?
--What is interesting about this relation?
[The teacher may note that while the final speed is the same, the time taken to reach bottom isn't.]
-- Is something kept constant in this motion?
-- Is this the energy? (No) Why?
E = mgh + (1/2)mv2
We have not yet defined mass; for the time being it is understood to be "the amount of matter in motion," which we can measure by weighing.
In all our calculations involving Newtonian mechanics (unless explicitly stated otherwise) the so-called MKS units are used--distances in Meters, masses in Kilograms, time in Seconds, and all derived units based on these three. In those units g = 9.81 and energy comes out in joules. Whenever other units are given, be sure to convert them to MKS! One spacecraft sent towards Mars was lost, because engineers giving the command for a final crucial rocket burn got their units confused.
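These relations are easy to check numerically. A minimal sketch in MKS units (the mass and height are arbitrary example values):

```python
import math

g = 9.81    # m/s^2, MKS units as used throughout
m = 2.0     # kg (arbitrary example mass)
h = 5.0     # m  (arbitrary example height)

v = math.sqrt(2 * g * h)      # the fall-speed relation above: v = sqrt(2gh)
E_top = m * g * h             # all potential energy before the fall
E_bottom = 0.5 * m * v**2     # all kinetic energy at the bottom
print(v, E_top, E_bottom)     # E_top equals E_bottom: total energy is conserved
```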
--How does a pendulum or a swing demonstrate the conservation of total energy?
--How does a roller-coaster demonstrate it?
--What is work W? How much work is performed in lifting a mass m by a height h?
W = mgh
--If m is in kilograms, h in meters, g = 9.81 meter/sec2, in what units is W, as given by the above formula?
--You have climbed to the second floor, raising yourself by 9 feet, (1 ft=30.5 cm = 0.305 meter). You weigh 150 pounds (1 pound = 0.454 kg). How much work did you perform?
h = 9*0.305 = 2.745 meters, m = 150*0.454 = 68.1 kg. If g = 9.81 m/s2, then
W = mgh = 1833.8 joule
--Into what form of energy did this work go?
--From what form of energy did it come?
Suppose you have eaten one square of chocolate weighing 4 grams (1/8 of a bar weighing one ounce). The chocolate contains 2 grams cocoa fat, providing 9 calories per gram (typical for fats), and 2 grams sugar, a carbohydrate with 4 calories per gram, for a total of 18 + 8 = 26 calories. These are "kilocalories" of 4180 joule each, so that piece of chocolate has given you the equivalent of 108,680 joules. If your body can convert it to muscle power with an efficiency of 10% = 0.1, you get 10,868 joules of usable work from that piece of candy, enough to climb 10,868/1833.8 or about 6 floors.
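The arithmetic in this example can be reproduced line by line (a sketch; the 10% muscle efficiency is the assumption made in the text):

```python
kcal = 2 * 9 + 2 * 4            # 2 g fat at 9 kcal/g + 2 g sugar at 4 kcal/g
joules = kcal * 4180            # 1 kilocalorie = 4180 joule
usable = 0.10 * joules          # assume 10% conversion to muscle work
work_per_floor = 68.1 * 9.81 * 2.745   # W = mgh from the stair-climbing example
print(kcal, joules, usable, usable / work_per_floor)
# -> 26 kcal, 108680 J, 10868 J, about 6 floors
```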
--You jump down from the height of one floor. With what speed v do you hit the ground?
In miles-per hour (1 mile = 1609 meters). From v2 = 2gh, with h = 2.745 meters: v = 7.3387 meters/second, so
v = 7.3387*3600 = 26,419 meters/hour = 16.4 miles/hour.
--Even a hospital patient lying in bed all day needs to eat. Why?
On the table of energy conversions, which form is converted into which:
-- In an electric fan?
--In an elevator winch?
--Can we convert it back when the elevator descends?
--In a light emitting diode?
--Why did we say "light emitting diode" and not "electric lightbulb"?
--In a car battery?
-- Can it be converted back to chemical energy?
-- In a rocket nozzle?
Heat to kinetic energy. We will later see that the converging-diverging design of the rocket nozzle is very efficient in converting heat to kinetic energy.
Has anyone here read "October Sky", or seen the film? It is a true story of high school boys building and flying rockets, and after they discovered the proper design of the nozzle, their rockets flew much higher. The conversion is never complete--heat can never be completely converted to mechanical energy--but the nozzle comes fairly close to the theoretical limit.
--In quicklime? What happens there?
[This question may not be meaningful enough to students living in a big city.]
For making mortar, builders slake the quicklime with water. It heats up, returning its chemical energy to heat.
-- How do spacecraft get their electric energy?
Around the outer planets, sunlight is too dim to provide enough energy in this way. Spacecraft that fly there, e.g. Voyagers 1-2 and Pioneers 10-11, use radioactive sources which generate heat, and thermocouples (special junctions of different metals) convert it to electricity.
The Russians experimented with small nuclear reactors on spacecraft. One crashed into a lake in Canada, contaminating it with radioactivity and creating great resentment. No such reactors are being flown any more.
-- How is mechanical power defined? What are its units?
-- Your electric bill charges you a certain price per kilowatt-hour (kwh). What do kilowatt-hours measure?
--Food energy is measured in calories. How many calories does a gram of sugar contain?
A "small" calorie contains 4.18 joule, a "kilocalorie" has 4180 joule, and a gram of sugar--as in a piece of candy--has 16720 joule.
--How about other foods?
Fats have about 9 cal/gr., alcohol about 7
--Any materials contain more energy?
--How does TNT (tri-nitro-toluene) compare?
-- Seward, the port at the end of the Alaska Railroad, has steep Mt. Marathon towering just behind it, to a height of about 900 meters. Every 4th of July a footrace is held, from the town to the top of Mt. Marathon and (with a lot of sliding!) back. The current record is 43 minutes and a fraction.
-- Why do we often say "energy is lost as heat"?
--What does the second law of thermodynamics say?
[Optional: The fraction of heat energy which can be converted to other forms depends on the temperature at which the heat is provided.
The unrecovered heat is changed to a lower temperature, and the fraction we recover depends on these two temperatures--the one at which the heat is received, and the one at which the remainder can be dumped.
A power station needs not only a supply of hot steam, but also a way of dumping the heat left at the end of the cycle. Steam locomotives dumped their spent steam into the air, and therefore needed a great amount of of water, carried in their tenders. Electric power stations (of any kind) recycle their steam and cool it with air in cooling towers, like those of 3-mile island which (for some reason) became a symbol of nuclear energy. Other power stations use nearby lakes and rivers to cool and condense their steam, and steamships (of course!) do so with seawater.]
Author and Curator: Dr. David P. Stern
Mail to Dr.Stern: audavstern("at" symbol)erols.com .
Last updated: 10-8-2004
Previous posts in the series:
Part 9 of the original series focuses in on Constructor Injection, which is one method of doing Dependency Injection (the other is Setter Injection, which we’ll get to). The reason to use Constructor or Setter Injection is a bit subjective, but (to me) boils down to whether you want the parameters to be mandatory and how many parameters you have. We’re not here to debate, though, we’re here to copy Alex’s hard work….
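For readers new to the pattern itself, here is a minimal language-agnostic sketch of constructor injection (written in Python rather than the C# of the original series; all names are illustrative only):

```python
class UpperCaseEncoder:
    def encode(self, message):
        return message.upper()

class Sender:
    # Constructor injection: the dependency arrives as a constructor
    # parameter, so a Sender can never be built without an encoder.
    def __init__(self, encoder):
        self._encoder = encoder

    def send(self, message):
        print(self._encoder.encode(message))

# The container's job (Windsor, in Alex's series) is to pick an encoder
# component and call this constructor for you.
Sender(UpperCaseEncoder()).send("Howzit going at Microsoft?")
```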
Here’s the interface Alex defines:
And the two encoders:
Let’s check out the binsor:
Notice we didn’t specify a value for the “_encoder” parameter, so Windsor will just plug in the first one it finds. And, finally, the Program:
Running this as is, gives us:
GOWZIQ EOIME YQ KICROSOFQ?
So, what if we want to send an unencrypted message? Well, we can specify which IEncoder component to wire to our sender, like so:
See? We refer to it by the name we gave the component. Running it now, gives us:
Howzit going at Microsoft?
(I bet he’s doing great…)
Setter injection…coming up!
How old is your mummy?
Radiocarbon dating and the secrets it reveals
All living things, including us, contain carbon atoms. As long as we are alive we are constantly exchanging carbon atoms with our environment. We take in carbon atoms in our food and release them by exhaling carbon dioxide into the air. Some of these carbon atoms are radioactive (about one in every million million).
The radioactive carbon is called carbon-14. Some of the carbon-14 atoms decay while they are inside us, but carbon-14 atoms in our food are continually replacing them. This means that as long as we keep eating, the amount of carbon-14 in our bodies remains roughly constant.
Where does carbon-14 come from?
Carbon-14 forms in the upper atmosphere from nitrogen by the action of cosmic rays, mainly from the Sun. The carbon-14 combines with oxygen to form carbon dioxide. This is then taken in by plants to make food, a process called photosynthesis. We eat the plants, or the animals that ate the plants, and that's how carbon-14 gets into us.
When a plant dies, photosynthesis stops. When an animal dies, it stops eating. In both cases the exchange of carbon with the outside world stops. The carbon-14 already inside decays, but is no longer being replaced. This means that the longer something is dead, the less carbon-14 it contains. Every 'half-life' (5,730 years for carbon-14) the amount decreases by half as shown in the graph.
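The decay law described above is simple enough to compute directly. A sketch using the 5,730-year half-life quoted in the text:

```python
import math

HALF_LIFE = 5730.0   # years, half-life of carbon-14

def fraction_remaining(age_years):
    """Fraction of the original carbon-14 left after a given time."""
    return 0.5 ** (age_years / HALF_LIFE)

def age_from_fraction(fraction):
    """Radiocarbon age implied by a measured carbon-14 fraction."""
    return -HALF_LIFE * math.log2(fraction)

print(fraction_remaining(5730))   # 0.5   -- one half-life
print(age_from_fraction(0.25))    # 11460 -- two half-lives
```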
In 1947, US scientist Willard F Libby and his team used the amount of carbon-14 in ancient remains to determine their age. At first the technique was used on remains of a known age such as bread that had been buried in Pompeii during the eruption of Mount Vesuvius in AD79, to see if it gave accurate results.
It did. They then carried out tests on older remains, such as wood from the funerary boat of Egyptian Pharaoh Sesostris III, wrappings from a mummy believed to be Cleopatra, and charcoal from a campfire at Stonehenge. Radiocarbon dating was a great success, and in 1960 Libby was awarded the Nobel Prize in Chemistry.
The most famous radiocarbon dating of all was of the Turin Shroud, a religious relic believed to be the cloth that Jesus was wrapped in after he died. It has a ghostly outline of a man with injuries to the head, chest, wrists and ankles. Samples of the shroud, together with samples from other relics were given to three different laboratories, one in Arizona, one in Oxford and one in Zurich. Each lab was given four samples with numbers, but they didn't know which was from the Turin Shroud.
In 1989, their findings were published in the journal Nature. All three labs confirmed the Turin Shroud as being 600-700 years old. It was not the cloth used to wrap Jesus - it was too new by about 1,300 years. When the same method was used on the famous Dead Sea scrolls, they were found to be over 2,000 years old.
Carbon dating method and its limitations
Cleaning the sample and then burning it to produce carbon dioxide is how most radiocarbon dating is done. The amount of radiation coming from the carbon dioxide is then measured with sensitive instruments to determine the amount of carbon-14 present. A new method that uses accelerated mass spectrometry (AMS) makes it possible to count individual atoms of carbon-14, which means that much smaller samples can be used.
Radiocarbon dating cannot be used to date very recent materials, because not enough of the carbon-14 has decayed. In fact, radiocarbon dates are followed by the letter BP ('before present', meaning 1950). Objects about 50,000 years old or more contain so little carbon-14 that it is difficult to measure the radiation. In recent years, AMS has been used to date objects as old as 100,000 years but there is much debate about the accuracy of these dates.
Radiocarbon dating has advanced hugely since Libby pioneered it. One of the biggest problems was the assumption that the amount of carbon-14 in the atmosphere has been roughly constant for the last 50,000 years. We now know it hasn't been, due to changes in solar activity, volcanic eruptions and nuclear weapons testing. However, through comparisons with tree rings (dendrochronology), ice cores and coral samples, corrections can be made so that 'radiocarbon dates' can be converted into 'calendar dates'.
Next time: What exactly is bandwidth, and why is it so important?
All rights reserved 2007
Last modified: July 9, 2007
The diode as an energy-controlled, not a charge-controlled device.
The traditional theory of operation of the diode is, for me, one of the many casualties of advances in electromagnetic theory during the last 25 years.
Whilst at Motorola, Phoenix, in 1964, work on the problem of how to interconnect high-speed (one nanosecond) logic gates led me to the same general conclusion as had been reached (unknown by me until 1972) by Oliver Heaviside a century before when he tackled the problem of how to improve undersea telegraphy from Newcastle to Denmark.
"... [The electric and magnetic fields] are supposed to be set up by the current in the wire. We reverse this; the current in the wire is set up by the energy current through the medium around it. The sum of the electric and magnetic energies is the energy....
".... A line of energy-current is perpendicular to the electric and magnetic force...."
Our conclusion was that what he called "energy current" travelling down between the two conductors guided by them as a train is guided by two rails, was the important feature of signalling, and not the electric charge and electric current in or on the wires. Twenty years later my view hardened when I came across the Catt Anomaly.
Let us deliver a 1ns-wide pulse down a long transmission line terminated by a diode (Fig.67). When the pulse reaches the diode, it does not carry any charge with it. Catt's Anomaly shows that charge could not have travelled fast enough to keep up with the pulse, which travels at the speed of light. If we are agreed that the diode will respond (for instance 'start to conduct') after a delay which is small (say 100ps) compared with the time delay down the transmission line delivering the pulse, then it must be responding to the energy current, that is, the TEM wave or pulse approaching it in between the two conductors. This TEM pulse enters directly into the side of the crucial interface or surface between the p-region and the n-region which together make up the diode.
Note the phrase on page 30 col.2; "Nothing ever traverses a capacitor from one plate to the other". Applied to the diode, this seems to say that nothing travels across the junction from the p-region to the n-region, or vice versa. The only travel is along the surface between the two regions, in a direction at right angles to the generally supposed direction of movement.
When the leading edge of the pulse reaches the near edge of the diode, it finds a change in characteristic impedance. As a result, most of it is reflected, but a small portion continues forward to the right, down the very narrow transmission line made by the surface between the p and n regions. It is possible that the effective dielectric constant is large so that the velocity of propagation, v = c/√ε, from left to right along the p-n interface is very slow. At the speed of light in a vacuum, the round trip across the p-n interface of a diode a tenth of an inch wide would be 20 picoseconds, but since the effective ε will be bigger than for a vacuum, the delay will be greater.
When the step reaches the right-hand edge of the diode, it sees an open circuit and reflects back toward the left, so that the total voltage across the junction doubles. When it gets back to the front (left-hand) end, it reflects toward the right again (except for the very small portion which escapes across the Zo mismatch back into the transmission line leading away to the left). By this mechanism of zig-zag repeated reflections across the diode, the amount of energy (current) in the p-n surface increases in a series of diminishing steps which approximates to an exponential (Fig.66). When the energy density builds up beyond some critical level (0.7v), there is a 'snap', and the later advancing energy current sees a short circuit, and reflects with inversion.
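This charging-by-reflections picture is easy to model as a toy bounce diagram. A sketch (the impedances are arbitrary assumed values, not measured diode parameters):

```python
Z0, Z1 = 50.0, 5.0           # assumed: feed-line and p-n interface impedances (ohms)
V_in = 1.0                   # incident step amplitude on the feed line

tau = 2 * Z1 / (Z0 + Z1)     # amplitude transmitted into the low-impedance interface
rho = (Z0 - Z1) / (Z0 + Z1)  # reflection seen looking back into the feed line

wave = tau * V_in            # travelling wave inside the interface
v_far = 0.0                  # voltage at the open (far) end
for trip in range(1, 9):
    v_far += 2 * wave        # incident wave plus its open-circuit reflection add
    wave *= rho              # amplitude left after re-reflection at the near end
    print(trip, round(v_far, 4))
# v_far rises in diminishing steps toward 2*V_in (the open-circuit voltage),
# approximating the exponential build-up of Fig. 66.
```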
Since no charge has been introduced into the p-n interface, it is totally inappropriate to explain the mechanism of the diode in terms of extra electrons. The explanation must be novel, in terms of the amount of electromagnetic energy present; that a level in excess of some critical value (0.7v) causes the TEM wave travelling down the p-n interface to see a change in what is ahead of it, from open circuit to short circuit. That is, beyond that critical amplitude the p-n interface cannot accept more energy and rejects it.
This developing analysis of the behaviour of a diode is totally at odds with the traditional view, based on electrons, holes, energy barriers across the p-n interface that charges are trying to climb up. Why does this earlier theory succeed in correlating at all with experimental results?
"... if in conversation you insisted that your elder daughter was identical to your younger daughter, whereas in fact their "equality" only related to their parentage, every conclusion that followed this absurd assertion would not necessarily be absurd. For instance, if you knew the address of one daughter you might therefore know the address of the other. In the same way, it is possible for 'valid' results to come from absurd postulates (like the absurd postulate that a diode is full of particles called electrons buzzing around trying to climb hills [in the wrong direction at the wrong speed]).
"... the two non-identical daughters might have the same address. It is these 'echoes of truth' which masquerade as scientific truth today" (ref. ll vol 1, p15).
False theories, like the theory that the diode is a device controlled by charge, exist in the real world, and so are influenced, or somewhat directed, by the imperatives of the real world in which they find themselves, at least when it comes to the moment of truth: the checking of theory against experimental result.
The Catt Question. (Was Catt's Anomaly.)
Traditionally, when a TEM step (i.e. a logic transition from low to high) travels through a vacuum from left to right (Fig.1), guided by two conductors (the signal line and the 0v line), there are four factors which make up the wave: (1) electric current in the conductors, (2) magnetic field, or flux, surrounding the conductors, (3) electric charge on the surface of the conductors, (4) electric field, or flux, in the vacuum terminating on the charge.
The key to grasping the anomaly is to concentrate on the electric charge on the bottom conductor. During the next 1 nanosecond, the step advances one foot to the right. During this time, extra negative charge appears on the surface of the bottom conductor in the next one foot length, to terminate the lines (tubes) of flux which now exist between the top (signal) conductor and the bottom conductor.
Where does this new charge come from? Not from the upper conductor, because by definition, displacement current is not the flow of real charge. Not from somewhere to the left, because such charge would have to travel at the speed of light in a vacuum. (This last sentence is what those "disciplined in the art" cannot grasp, although paradoxically it is obvious to the untutored mind.) A central feature of conventional theory is that the drift velocity of electric current is slower than the speed of light.
For further information on the Catt Anomaly, see letters in the following issues of Wireless World: Aug 81, Aug 82, Oct 82, Dec 82, Jan 83.
Heaviside and the Catt Anomaly.
Oliver Heaviside did his work on Energy Current too early to discern the Catt Anomaly. The idea that electric current comprised electrons was still only an "ingenious .... theory [by] J. J. Thomson" in the 1905 Harmsworth Encyclopaedia, p2184. So the firm conviction that the electrons which comprised electric current had mass came too late for Heaviside.
This section was first published in Electronics and Wireless World, September 1987, p903.
i.e. the Poynting Vector.
These insights will meet the same indifference as was discussed in Footnote 24, p13.
I agree with L. Turin that this is simplistic, since the forward voltage drop of a diode is not sharp, and follows a law which includes electric current and temperature. The purpose of this section is to take the first step away from the conventional theory, which is obviously balderdash. Since it was first published in 1987 it has excited no comment, not even a riposte.
WHAT are you thinking about? Which memory are you reliving right now? You may think that only you can answer, but by combining brain scans with pattern-detection software, neuroscientists are prying open a window into the human mind.
In the last few years, patterns in brain activity have been used to successfully predict what pictures people are looking at, their location in a virtual environment or a decision they are poised to make. The most recent results show that researchers can now recreate moving images that volunteers are viewing - and even make educated guesses at which event they are remembering.
Last week at the Society for Neuroscience meeting in Chicago, Jack Gallant, a leading "neural decoder" at the University of California, Berkeley, presented one of the field's most impressive results yet. He and colleague Shinji Nishimoto showed that they could create a crude ...
A method of obtaining an indicatrix or a scattering diagram of the earth’s surface is developed. When we regard the earth’s surface as a kind of irregular surface, the indicatrix of scattered radiation is a way of effective representation of its surface roughness. The indicatrices of radiation over the sands of a seashore, a downtown area of Tokyo, and some of its suburban areas are obtained from Landsat MSS data. The radiant intensity decreases within the range of 32° to 65° of scattering angles in accordance with the degree of urbanization. Experimental results obtained by a remote sensing simulator are compared with those of Landsat MSS data analysis.
© 1983 Optical Society of America
Hiroshi Okayama and Iwao Ogura, "Indicatrices of the earth’s surface reflection from Landsat MSS data," Appl. Opt. 22, 3652-3656 (1983)
Dolly the sheep was the first mammal to be successfully cloned from adult cells, by a team of researchers at the Roslin Institute in Edinburgh. Dolly had been cloned from a mammary gland (the organs that produce milk in mammals' breasts). Dolly was named after the singer Dolly Parton.
Sheep can reach an age of about 12 years, but Dolly was put down at the age of 6 because she had a serious lung disease. At the autopsy, scientists discovered that she had suffered from arthritis as well. Some scientists speculate that her illnesses and early death were related to the fact that she was produced from an adult cell - in a sense, Dolly's DNA was older than she was. However, these questions are still unresolved.
Bibliography: S Franklin, Dolly Mixtures. The Remaking of Genealogy (Durham NC: Duke University Press, 2007)
Cell: Basic unit of all living organisms, it can reproduce itself exactly.
Arthritis: Inflammation of joints; swelling, pain and decreased mobility are typical symptoms.
DNA: Deoxyribonucleic acid. The material of all living organisms, it stores the information, or blueprints, about every cell and is located in the genes. It is made up of two strands which form a double helix and are linked with hydrogen bonds. It was first described in 1953 by Francis Crick and James Watson.
The advancing limit of Arctic ice, having in its train an endless procession of masses drifting down from the North, reaches the northern average limit of the Gulf Stream in the month of April, and having spread itself along this line both East and West of the 50th meridian of longitude, the ice disintegrates and rapidly disappears. Still, after reaching this limit of southward movement, many bergs, on account of their deep immersion, find their way to the westward even within the current of the Gulf Stream.
The locality in which ice of all kinds is apt to be found during the months of April, May, and June lies between latitude 42 degrees 45 minutes and longitude 47 degrees 52 minutes west of Greenwich. Here the Gulf Stream and the Labrador current meet; here the movement of the ice is influenced sometimes by the one and sometimes by the other of these currents; and here in latitude 41 degrees 46 minutes and longitude 50 degrees 14 minutes the "Titanic" came to grief.
A huge iceberg, in the shape of a ship’s sail, floats in the North Atlantic, in a photo published in 1912. Credit: Scientific American, April 27, 1912
The Menace of the Iceberg.
It is the huge mass of an iceberg that the mariner has most to fear. While it may vary in size, an ordinary iceberg will measure from 60 to 100 feet to the top of its walls, and it will have spires or pinnacles towering from 200 to 250 feet above a base that may be from 300 to 500 yards in length. Only one-eighth or one-ninth of the entire mass lies above water. Mass, let it be borne in mind, is a different quantity from height. Hence, the statement sometimes found in books that the depth of a berg under water may be from eight to nine times the height above water is incorrect. It is possible to have a berg as high out of water as it is deep below the surface; for if we imagine a large, solid lump of any regular shape, which has a very small sharp high pinnacle in the center, the height above water can easily equal the depth below. The Hydrographic Office has recorded the case of a berg, grounded in the Strait of Belle Isle in sixteen fathoms of water, that had a thin spire about 100 feet in height. Often the bergs are so nicely balanced that the slightest melting of their surfaces causes a shifting of the center of gravity and a consequent turning over of 'the mass into a new position.
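The one-eighth to one-ninth figure follows directly from Archimedes' principle: a floating berg displaces its own weight of seawater. A sketch with typical textbook densities (the values are assumptions, not from the article):

```python
rho_ice = 917.0        # kg/m^3, typical glacial ice
rho_seawater = 1025.0  # kg/m^3, typical North Atlantic seawater

submerged = rho_ice / rho_seawater   # weight of berg = weight of displaced water
above = 1.0 - submerged
print(round(submerged, 3), round(above, 3))
# -> about 0.895 submerged, 0.105 above: roughly one-ninth of the mass shows
```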
Disintegration occurs very rapidly. On the coast of Labrador in July and August, when bergs are packed thickly together, the noise of rupture is often deafening. When they are frozen, the temperature is very low, so that on exposure to a thawing temperature the tension of the exterior differs from that of the interior. In other words, the berg becomes like a huge Prince Rupert's drop, which, as every one knows, is a drop formed by allowing molten glass to fall into cold water. It is said that the concussion of gun fire will sometimes break up a berg, so unequal is the tension within and without. During the day, water, the result of melting, finds its way into crevices. At night it freezes, expands, and splits the berg. The greater the splitting action the more rapid is disintegration, because new surface is exposed. Were it not for these circumstances, large bergs would remain intact years before they melted completely away.
The Queer Shapes of Icebergs.
Not only is the huge mass of an iceberg a source of danger, but its eccentric shape is as well. The weird pinnacles, spires, domes, minarets, and peaks, that remind one of castles fashioned by some genius for the pleasure of some whimsical fairy princess, find their counterpart in unseen, outlying spurs that project under water and that are fully as dangerous as any reef. The United States Hydrographic Office has called attention to the accident sustained by the British steamship "Nessmore," which ran into a berg and stove in her bows. When she was docked a long score was found extending from abreast her fore rigging all the way aft, just above the keel. Four frames were broken, and the plates were almost cut through. As there was clear water between the ship and the berg after the first collision, it was evident that the ship had struck a projecting spur after her helm had been put over.
Have you ever wondered what makes a bubble form? The secret to making bubbles is surface tension. Adding soap (such as the kind you use to wash dishes in the sink) to water changes the surface tension of that water, and this creates a great solution to make bubbles from.
If you try to make bubbles using normal water, you will quickly see that it doesn't work very well. This is because the surface tension—the forces holding the molecules of a liquid together—of water is too high. When detergent is added to water, it lowers the surface tension so that bubbles can form. Add other things, such as corn syrup or glycerin, to improve the bubbles. Which solution will create the best bubbles?
In a container of water, the tiny water molecules are attracted to each other, which means that they're constantly pulling on each other. At the surface of the water, these water molecules are attracted to the water molecules around and below them. But they have no water molecules above them to be attracted to (since it is just air up there). This is what creates the force known as surface tension. The water molecules at the surface of the water do not want to move up, away from other water molecules to which they are attracted. This gives plain old water a high surface tension. In fact, it's too high to allow big bubbles to form.
When a soapy dish detergent is added to water, it lowers the surface tension so that bubbles can form. The detergent molecules increase the distance between water molecules and reduce those molecules' ability to interact with each other. This decreases the pull—or attraction–that the water molecules exert on each other, lowering the surface tension of the solution. Other substances, such as corn syrup or glycerin, can be added to the solution of water and detergent to make even better bubbles.
• Three large cups or jars with a wide opening
• One-cup measuring cup
• Tablespoon measurer
• Three cups of distilled water (which can be purchased at the supermarket)
• Liquid dishwashing soap (for example, Dawn or Joy brand)
• Glycerin, small bottle (available at a drugstore or pharmacy)
• Light corn syrup
• Three pipe cleaners
• Permanent marker
• Label the three cups "Detergent Only," "Glycerin," and "Corn Syrup," respectively.
• To all three cups, add one cup of distilled water.
• To the "Detergent Only" cup, add an extra one tablespoon of water.
• Make a pipe cleaner wand by pinching a pipe cleaner in the middle and bending half of it into a circle, twisting a little bit of the end to secure it. Make two more pipe cleaner wands this way, making sure their diameters are all the same.
• To all three cups, add two tablespoons of detergent. Mix the detergent in each cup with a spoon. You should see small bubbles forming as you mix in the detergent. Why do you think you need detergent in every solution?
• To the "Glycerin" cup, add one tablespoon of glycerin. And to the "Corn Syrup" cup, add one tablespoon of corn syrup. What is the consistency of the glycerin and corn syrup? Does one seem more viscous (thick and sticky) than the other, or do they have about the same viscosity? Mix the contents of the "Glycerin" and "Corn Syrup" cups.
• Blow a bubble from one of the solutions (outside is best, but over a kitchen sink or any other place that can stand to get a little sticky is okay). Try to catch the bubble on your wand and time how long the bubble lasts before it pops. This can be difficult to do, so you may need to practice it first. Also, it might be helpful to have another person time you.
• Catch and time at least four bubbles from each solution and write the times down. Calculate the average bubble life span for each solution. To do this, add the recorded times for each bubble type separately, then divide each total by the number of times you recorded that bubble (for example, if your "Detergent Only" bubble times were 5.1 seconds, 4.5 seconds, 5.2 seconds and 5.7 seconds, the average would be 5.1 seconds); see the short calculation sketch after this list. Which solution makes bubbles that last the longest? Which solution makes the shortest-lived bubbles? Why do you think this is? Did some solutions make larger bubbles than others?
• Extra: How does the concentration of glycerin or corn syrup in the bubble solution change how long the bubbles last? You can try this activity again, using different amounts of glycerin or corn syrup in the solutions. How little is too little, and how much is too much to add? Can you make a bubble solution that results in bubbles that last longer than the ones in the original activity?
• Extra: Do bubbles always make a spherical shape? Try twisting pipe cleaners into different shapes, such as stars, squares or triangles. What shape are the bubbles made from these differently shaped pipe cleaner wands?
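The averaging in the timing step is just a mean over each solution's recorded times. A sketch using the example numbers from the text (the glycerin and corn syrup values are hypothetical placeholders for your own measurements):

```python
times = {
    "Detergent Only": [5.1, 4.5, 5.2, 5.7],   # example values from the text
    "Glycerin": [6.0, 6.4, 5.8, 6.2],          # hypothetical measurements
    "Corn Syrup": [7.1, 6.9, 7.4, 7.0],        # hypothetical measurements
}

for solution, recorded in times.items():
    average = sum(recorded) / len(recorded)
    print(solution, round(average, 1), "seconds")
# "Detergent Only" averages 5.1 seconds, as in the worked example above
```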
Cerium, Ce, is a metallic element, which is found in the lanthanide series of inner transition metals in Group IIIa of the periodic table.
- Atomic Number : 58
- Relative Atomic Mass : 140.12
- Melting Point : 800 degC
- Boiling Point : 3000 degC
- Relative Density : 6.7
Cerium was discovered by M. H. Klaproth in 1803 AD and by Berzelius (1804).
Cerium, Ce, occurs in Allanite, Bastnasite, Cerite and Monazite.
- Cerium has four naturally occurring isotopes: Cerium-136, Cerium-138, Cerium-140, and Cerium-142. Fifteen radioisotopes have been identified.
- Cerium is a gray lustrous malleable metal.
- Cerium is oxidised readily in air.
- Cerium is attacked readily by acids and alkalis.
- Cerium has a variable electronic configuration, as its inner 4f level is of almost the same energy as its outer 5d level.
Cerium is used
- in lighter flints,
- in the glass industry, in the form of its oxide, and
- in volumetric analysis, in the form of ceric sulphate.
Hypertext Copyright (c) 2000 Donal O'Leary. All Rights Reserved.
Marcasite (FeS2) has the same formula as pyrite but a different structure. The term "marcasite" is often used to refer to pyrite as a gem stone, and this usage actually predates the mineralogical usage of the name "marcasite." Marcasite is notorious for crumbling in collections because the sulfur reacts with atmospheric moisture. The structure consists of iron atoms surrounded by octahedra of sulfur, each joined to a neighbor by sharing vertices.
Above is a single unit cell of marcasite and below is a view of the lattice. Note the nearly planar arrangement of sulfur atoms shown in blue.
Below is a view perpendicular to the views above. At top is a view of three octahedra and below that is a side view showing the vertical chains of edge-joined octahedra.
A view of the kinked plane of sulfur atoms shown in blue above. The arrangement is approximately close packed with rows of iron atoms (green) alternating on each side of the sheet. On the left side, triangles with iron atoms above the layer are yellow and those with iron atoms below are in green. Iron atoms on top are brown.
Created 21 March 2011, last update 21 March 2011
Not an official UW Green Bay site
An artist's depiction of what the scene must have looked like when the NASA’s Galileo spacecraft approached Amalthea, moon of Jupiter.
Click on image for full size
Courtesy of NASA
A Jupiter moon that has more holes than Swiss cheese!
News story originally written on December 11, 2002
Amalthea is a small reddish moon of Jupiter. It is not made of Swiss cheese, but it does seem to be full of holes. This makes it very light! This new information came from the Galileo spacecraft last week.
Scientists think that the rock that makes up Amalthea is broken into many large boulders. Long ago, the moon may have been one piece, but then it was broken into pieces by collisions. The pieces are just barely touching each other leaving many empty gaps in the moon.
It’s been a month since the Tohoku earthquake and tsunami rattled then swamped northern Honshu, and Japan continues to be shaken by sizeable aftershocks. A magnitude 7.1 shock last Thursday initially set off further tsunami alerts, but the rupture turned out to be 50 km beneath the seafloor, too deep to pose a significant threat. Even more media attention was focussed on a magnitude 6.6 on Monday, which cut off the power at the crippled Fukushima nuclear plant for almost an hour: a serious worry given the ongoing problems with cooling the damaged reactors at this plant. A further magnitude 6 aftershock earlier today also prompted an evacuation.
A magnitude 6 earthquake is not insignificant, of course, but these particular aftershocks are particularly ill-located with respect to Fukushima. As the map below shows, they are part of a cluster of quite shallow (10-20 km) tremors that is centred only about 50 km to the southwest of the reactors. As events in Christchurch taught us, when it comes to the strength of shaking, a weaker earthquake can be pretty damaging if you’re parked right next door to the rupture, and the Fukushima plant is close enough to get shaken around pretty strongly by events of this size.
The focal mechanisms from these events are interesting: Monday’s magnitude 6.6 appears to show northeast-southwest extension, and today’s magnitude 6 appears to be mainly due to strike-slip faulting, but also with a hint of extension. All of the aftershocks in this cluster are firmly located in the over-riding plate of the subduction zone that ruptured last month; they are too shallow, and too far to the west of the plate boundary, to have occurred on the subduction interface itself. Remember that during the main shock, the Japanese mainland moved east and down as elastic strain built up across the plate boundary since it last ruptured was released. Another way of thinking about this is that prior to the earthquake, the east coast of Japan was braced against a strong, locked, subduction thrust. Now that buttress has been removed, Japan is spreading and settling outwards. Extension, some of which will be accommodated by faulting, is a natural consequence of this.
These earthquakes are therefore well within the realm of expected aftershock behaviour following a large earthquake on a nearby subduction zone. If you relied only on the media coverage, you might think there had been a ramping up of activity in the past few days, but it is more likely just that the location of these earthquakes – and their impact on the ongoing crisis at Fukushima – means that they have just been more widely noticed than tremors of a similar size offshore. From my initial assessment of aftershock activity, we should be expecting a magnitude 6-7 aftershock every 30 hours or so on average at this point, and they'll continue to be a weekly hazard for several more months to come. Hopefully, not all of them will be so unfortunately located.
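A back-of-the-envelope way to produce figures like "every 30 hours or so" is the modified Omori law for aftershock rates. A sketch (the constants are generic illustrative values, not the author's fitted parameters for this sequence):

```python
# Modified Omori law: rate(t) = K / (c + t)**p  events per day above some
# magnitude cutoff, at time t (in days) after the mainshock.
K, c, p = 25.0, 0.5, 1.1    # assumed parameters for M >= 6 aftershocks

def rate_per_day(t_days):
    return K / (c + t_days) ** p

t = 30.0                    # roughly a month after the mainshock
r = rate_per_day(t)
print(round(r, 2), "M6+ events per day")
print(round(24 / r), "hours between events, on average")
# The rate decays roughly as 1/t, so these events fade over months, not days.
```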
This module seeks to explain the bonding of the 4 Hydrogen atoms to the 1 Carbon atom in the molecule CH4 (methane),using the molecular orbital theory. Molecular orbital theory describes orbitals that are formed with the interaction of the atomic orbitals of given atoms. These orbitals are spread out over the entire molecule and electrons fill these orbitals in accordance with the aufbau principle.
Various concepts explain the molecular orbital theory in the bonding in methane, including character tables, symmetry, LGOs (ligand group orbital approach), and a qualitative MO diagram.
The Symmetry of CH4
- CH4 belongs to the Td point group, whose 24 symmetry operations comprise E, 8 C3, 3 C2, 6 S4, and 6 σd (dihedral mirror plane) operations. Using the character table for the Td point group,
Character Table for the Td Point Group
| Td | E | 8C3 | 3C2 | 6S4 | 6σd |
| A1 | 1 | 1 | 1 | 1 | 1 |
| A2 | 1 | 1 | 1 | -1 | -1 |
| E | 2 | -1 | 2 | 0 | 0 |
| T1 | 3 | 0 | -1 | 1 | -1 |
| T2 | 3 | 0 | -1 | -1 | 1 |
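The character table can be put to work with the reduction formula n(Γi) = (1/h) Σ g(R) χ(R) χi(R). A small sketch (the characters 4, 1, 0, 0, 2 are the standard unmoved-orbital counts for the four H 1s orbitals under each class of operation) shows that the hydrogen ligand group orbitals span A1 + T2, which is what the qualitative MO diagram below is built from.

```python
# Reduce the representation spanned by the four H 1s orbitals in Td.
# Class order: E, 8C3, 3C2, 6S4, 6sigma_d; g holds the class sizes.
g = [1, 8, 3, 6, 6]
h = sum(g)                          # group order = 24
irreps = {
    "A1": [1,  1,  1,  1,  1],
    "A2": [1,  1,  1, -1, -1],
    "E":  [2, -1,  2,  0,  0],
    "T1": [3,  0, -1,  1, -1],
    "T2": [3,  0, -1, -1,  1],
}
gamma_H = [4, 1, 0, 0, 2]           # characters for the four H 1s orbitals

for name, chi in irreps.items():
    n = sum(gi * c1 * c2 for gi, c1, c2 in zip(g, gamma_H, chi)) // h
    if n:
        print(f"{n} x {name}")      # prints: 1 x A1 and 1 x T2
```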
Molecular Orbital Diagram for CH4
- Housecroft, C; Sharpe, A. (2008). Bonding in Polyatomic Molecules. In: Pearson Education Limited Inorganic Chemistry. Edinburgh Gate: Pearson Education Limited. pgs. 33, 130-131.
What are ligand group orbitals, and how are they used in MO theory in polyatomic molecules?
How many vibrational modes and IR/Raman stretches are there in CH4?
What are the major differences between VB theory and MO theory applied to polyatomic molecules?
| <urn:uuid:625bad12-40a5-4ff5-ac9d-4a7979542c12> | 2.984375 | 523 | Documentation | Software Dev. | 52.448636
What's a kilowatt?
A kilowatt, or kW for short, is a unit of power that measures the rate of energy conversion. The average annual electrical energy consumption of a household in the United States is about 8,900 kilowatt-hours.
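To see how a system's kilowatt rating relates to kilowatt-hours of energy, here is a quick sketch; the 15 percent capacity factor is an illustrative assumption, since real solar and wind output varies widely by site.

```python
# How a power rating (kW) relates to energy (kWh) over a year.
rated_kw = 10.0
capacity_factor = 0.15          # assumed average output / rated output; site-dependent
hours_per_year = 8760

annual_kwh = rated_kw * capacity_factor * hours_per_year
households = annual_kwh / 8900  # 8,900 kWh: the average U.S. household figure above
print(f"{annual_kwh:,.0f} kWh per year, about {households:.1f} average households")
```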
"Watt" does this mean for the environment? Every year, a 10 kW solar or wind system is reducing greenhouse gas emissions significantly. That’s like eliminating 953 gallons of gasoline, or recycling 2.9 tons of waste instead of sending it to a landfill. | <urn:uuid:edfec0ab-b560-4574-b39a-c12371c33098> | 3.140625 | 112 | Knowledge Article | Science & Tech. | 56.164118 |
Quiz: What You Don't Know About Energy
Photograph by Michael Melford
How much do you know about the power that keeps you cool in summer, cooks your meals every evening, and illuminates this computer screen?
How much energy from a coal power plant makes it to customers as electricity?
- A quarter
- A third
About a third of the energy from a steam coal power plant makes it to customers as electricity. A very efficient type of natural gas plant, called combined cycle, typically converts about 40 percent of the fuel into electricity, with advanced units capable of 60 percent efficiency.
What standard homebuilding practice guarantees large heating and cooling losses?
- Locating heating units in basements
- Installing double-paned windows
- Using fiberglass insulation
- Hiding ductwork in attics and crawlspaces
Hiding ductwork in attics and crawlspaces is a sure way to lose air. Scientists at Department of Energy labs have shown that 40 percent of heated or cooled air supply is lost through duct leaks in unconditioned spaces.
What country is the premier source of steel for nuclear power plants?
Japan Steel Works long was the only foundry that could forge a reactor vessel in one piece, seen as essential to reduce risk of leaks. But China and Russia now have the capacity, and new facilities are under construction.
Estimates of what U.S. fuel resource have grown 40 percent or more since 2006?
- Natural gas
Estimates of U.S. natural gas have grown 40 percent or more since 2006, thanks to the success of hydraulic fracturing technology, making huge reserves now recoverable.
What global treaty has slashed world greenhouse gas emissions by 11 gigatons a year?
- The Kyoto Protocol
- The United Nations Framework Convention on Climate Change
- The Montreal Protocol
- The United Nations Convention on the Law of the Sea
The 1990 Montreal Protocol, in phasing out ozone-depleting substances with high global warming potential, had the side benefit of eliminating the equivalent of 11 gigatons a year of carbon dioxide emissions by 2010, about five to six times the target set for the first period of the Kyoto Protocol on climate change.
Does ethanol always deliver lower fuel economy than gasoline?
Yes. Because of ethanol’s lower energy content, it takes 1.03 gallons (3.90 liters) of E10 (the most common blend, which is gasoline with 10 percent ethanol) to travel as far as 1 gallon (3.79 liters) of regular gasoline. It takes 1.33 gallons (5.03 liters) if the blend is increased to 85 percent ethanol.
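The 1.03 figure follows directly from the fuels' energy content. Here is a sketch using nominal heating values (assumed textbook numbers, roughly 114,100 BTU per gallon for gasoline and 76,100 for ethanol, which are not from the quiz itself). A literal 85 percent blend works out to about 1.39 gallons, so the quiz's 1.33 presumably reflects real-world E85, which often contains less than 85 percent ethanol.

```python
# Gallons of an ethanol blend needed to match the energy in 1 gallon of gasoline.
GASOLINE_BTU = 114_100   # nominal BTU per gallon (assumed textbook value)
ETHANOL_BTU = 76_100     # nominal BTU per gallon (assumed textbook value)

def gallons_equivalent(ethanol_fraction):
    blend_btu = (1 - ethanol_fraction) * GASOLINE_BTU + ethanol_fraction * ETHANOL_BTU
    return GASOLINE_BTU / blend_btu

print(f"E10: {gallons_equivalent(0.10):.2f} gal")   # ~1.03
print(f"E85: {gallons_equivalent(0.85):.2f} gal")   # ~1.39 for a literal 85% blend
```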
Are there any choices for electricity generation that don’t rely on water?
No. Coal, nuclear, natural gas, and solar thermal plants all produce electricity by boiling water into steam to turn turbines. Wind and solar photovoltaic power use much less water, but regular water use is still needed to keep the blades and panels clean, or else their performance degrades.
What solution is said to have potential for 57 percent of needed carbon cuts by 2030?
- Space-based solar systems
- Energy efficiency
- Carbon capture and storage
The International Energy Agency's scenario for stabilizing the atmospheric concentration of carbon dioxide at 450 parts per million would get 57 percent of the cuts from efficiency gains.
Do any cars run on coal?
No cars run directly on coal, but about 30 percent of South Africa's transportation needs are met by a diesel fuel processed from coal. When oil prices soared, there was a frenzy of interest in coal-to-liquids in the U.S., although there are concerns about its high carbon intensity.
Can solar power provide electricity when the sun isn’t shining?
Yes, often. Many solar energy systems have battery storage, allowing users to draw energy at night. Large solar thermal power projects are coming online that use molten salt to store energy for use after dark.
Excellent work. You are an energy star and know about power in its many forms.
You’re off to a good start. Recharge and take the quiz again to improve your score.
Your energy knowledge needs some work. Drill deep and try again.
Don't worry, energy knowledge is a renewable resource. Try learning more about it at The Great Energy Challenge, and then retake the quiz to see how much you’ve learned.
Special Report: The Great Shale Gas Rush
The shale gas industry maintains that it protects drinking water and land. But mistrust has been sown in rural communities.
The industry promises jobs to a state badly in need of an economic boost, but the work so far isn't where you might expect it to be.
Track the growing mark that energy companies have etched on Pennsylvania since first producing natural gas from shale. | <urn:uuid:1a6fa46b-57a7-425b-b1a7-b0df5778b9e0> | 3.46875 | 1,013 | Q&A Forum | Science & Tech. | 50.500933 |
Termites' crystal backpacks help them go out with bang
When defending their colony, some termites "explode", releasing chemicals that injure intruders.
A previously unknown crystal structure has been discovered that raises the toxicity of their chemical weapons.
As worker termites grow older, they become less able to perform their duties.
Yet this newly discovered structure allows ageing workers to better defend their colony. The research was published today in Science.
When faced with a threat, many termite species employ a type of altruistic suicide known as "autothysis" in order to deter attackers. (Source: http://www.bbc.co.uk/news/science-environment-19001083) | <urn:uuid:26401631-0907-4a37-b975-95f6f0be0814> | 3.296875 | 141 | Comment Section | Science & Tech. | 42.159296
Science Fair Project Encyclopedia
Inclination is one of the six orbital parameters describing the shape and orientation of a celestial orbit. It is the angular distance of the orbital plane from the plane of reference (usually the primary's equator or the ecliptic), normally stated in degrees.
In the solar system, the inclination (i in figures 1 and 2, below) of the orbit of a planet is defined as the angle between the plane of the orbit of the planet, and the ecliptic, which is the orbit of Earth. It could be measured with respect to another plane, such as the Sun's equator, Jupiter's orbital plane, or some such, but the ecliptic is more practical for Earth-bound observers.
The inclination of orbits of natural or artificial satellites is measured relative to the equatorial plane of the body they orbit (the equatorial plane is the plane perpendicular to the axis of rotation of the central body):
- an inclination of 0 degrees means the orbiting body orbits the planet in its equatorial plane, in the same direction as the planet rotates;
- an inclination of 90 degrees indicates a polar orbit, in which the spacecraft passes over the north and south poles of the planet; and
- an inclination of 180 degrees indicates a retrograde equatorial orbit.
For the Moon however, this leads to a rapidly varying quantity and it makes more sense to measure it with respect to the ecliptic (i.e. the plane of the orbit that Earth and Moon track together around the Sun), a fairly constant quantity.
The inclination of distant objects, such as a binary star, is defined as the angle between the normal to the orbital plane and the direction to the observer, since no other reference is available. Binary stars with inclination close to 90 degrees (edge-on) are often eclipsing.
In astrodynamics inclination can be computed as follows:

i = arccos( hz / |h| )

where:
- hz is the z-component of h,
- h is the orbital momentum vector perpendicular to the orbital plane.
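As a minimal sketch of that computation from a state vector (the example position and velocity are made-up numbers, not real ephemeris data):

```python
import math

def inclination_deg(r, v):
    """Orbital inclination from position r and velocity v (same frame, consistent units)."""
    # Specific orbital angular momentum h = r x v, perpendicular to the orbital plane.
    hx = r[1] * v[2] - r[2] * v[1]
    hy = r[2] * v[0] - r[0] * v[2]
    hz = r[0] * v[1] - r[1] * v[0]
    h = math.sqrt(hx * hx + hy * hy + hz * hz)
    return math.degrees(math.acos(hz / h))

# A made-up state: position along x, velocity split equally between y and z,
# which tilts the orbital plane 45 degrees from the equator.
print(inclination_deg(r=(7000.0, 0.0, 0.0), v=(0.0, 5.3, 5.3)))   # 45.0
```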
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | <urn:uuid:09fbf52e-2e78-4be8-b76e-f44283b28b34> | 3.9375 | 443 | Knowledge Article | Science & Tech. | 31.915528
...which is defined at its base by the discontinuity in seismic wave behaviour, as cited above.) They ride on a weak, perhaps partially molten, layer of the upper mantle called the asthenosphere. Slow convection currents deep within the mantle generated by radioactive heating of the interior drive lateral movements of the plates (and the continents on top of them) at a rate of several centimetres...
| <urn:uuid:239cb42a-7c70-4c14-9e92-05a4ee64de71> | 3.921875 | 137 | Truncated | Science & Tech. | 50.165878
Contact: Marie Guma-Diaz
University of Miami
Caption: Certain kinds of male birds gather on small communal patches of ground called leks to perform their courtship dances, and whom they choose to display alongside matters. A new study by University of Miami evolutionary biologist J. Albert Uy and his collaborators finds that some male birds are better at attracting females if they gather with close male kin than in the company of distant relatives. The findings provide an intriguing account of why individuals help each other, especially when cooperating can be costly.
Credit: J. Albert Uy, associate professor of Biology in the College of Arts and Sciences at the University of Miami
Usage Restrictions: This video may only be used to accompany the press release “Why Birds of a Feather Lek Together”
Related news release: Why birds of a feather lek together | <urn:uuid:0e45d146-216a-40c7-87b0-6420c038bf90> | 3.53125 | 181 | Truncated | Science & Tech. | 25.4405 |
Approximately 64 acres of the Park are wooded, of which approximately 46 acres are young deciduous forest (less than 50 years old). The IMA’s Horticulture Department has collected tree and understory canopy data with the purpose of understanding the benefits of the Park within its urban environment.
In recent years the IMA has made a commitment to the Indianapolis community to become a more conscientious steward of the environment in pursuit of fulfilling the museum’s mission. One tool in that effort is i-Tree.
The intention of i-Tree is to allow communities and other users to assess their current urban forest cover, create awareness and educational opportunities, and guide application for better management of those trees. It has frequently been applied on a city-wide scale, but can also analyze an entire state’s urban forest. The results are based on field data collected from random plots, accounting for tree species, height, trunk diameter, and canopy characteristics. The data are then entered into the Urban Forest Effects (UFORE) analysis model, which calculates the amount of air pollution removed, the carbon sequestered and stored by the trees, and the sustained economic benefits.
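For a feel for the per-tree arithmetic inside such models, here is a toy sketch. The allometric coefficients and sample measurements are made-up placeholders, and the 0.5 carbon fraction is a common convention; UFORE itself uses species-specific equations and several correction factors.

```python
# Toy per-tree carbon estimate from trunk diameter (DBH).
A, B = 0.1, 2.4            # placeholder coefficients: dry biomass (kg) = A * DBH**B
CARBON_FRACTION = 0.5      # common convention: roughly half of dry biomass is carbon

def tree_carbon_kg(dbh_cm):
    return A * dbh_cm ** B * CARBON_FRACTION

sample_plot_dbh = [12.0, 25.0, 40.0, 8.0]   # hypothetical field plot, DBH in cm
plot_carbon = sum(tree_carbon_kg(d) for d in sample_plot_dbh)
print(f"carbon stored on this sample plot: {plot_carbon:.0f} kg")
```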
In evaluating the entire IMA campus, we found that an estimated total of 750 tons of carbon is sequestered annually by our tree cover, and over 10 tons of pollutants are removed from the air. | <urn:uuid:bb34cb41-1eee-469a-ab48-4c36af407fa8> | 3 | 272 | Knowledge Article | Science & Tech. | 26.142955 |
Technology alone won't help the world turn away from fossil fuel-based energy sources, says University of Oregon sociologist Richard York. In a newly published paper, York argues for a shift in political and economic policies to embrace the concept that continued growth in energy consumption is not sustainable.
Many nations, including the United States, are actively pursuing technological advances to reduce the use of fossil fuels and potentially mitigate human contributions to climate change. The approach of the Intergovernmental Panel on Climate Change assumes alternative energy sources -- nuclear, wind and hydro -- will displace fossil fuel consumption one-for-one. This approach, York argues, ignores "the complexity of human behavior."
Based on a four-model study of electricity used in some 130 countries in the past 50 years, York found that it took more than 10 units of electricity produced from non-fossil sources -- nuclear, hydropower, geothermal, wind, biomass and solar -- to displace a single unit of fossil fuel-generated electricity.
"When you see growth in nuclear power, for example, it doesn't seem to affect the rate of growth of fossil fuel-generated power very much," said York, a professor in the sociology department and environmental studies program. He also presented two models on total energy use. "When we looked at total energy consumption, we found a little more displacement, but still, at best, it took four to five units of non-fossil fuel energy to displace one unit produced with fossil fuel."
For the paper -- published online March 18 by the journal Nature Climate Change -- York analyzed data from the World Bank's world development indicators gathered from around the world. To control for a variety of variables of economics, demographics and energy sources, data were sorted and fed into the six statistical models.
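York's headline numbers are regression coefficients from cross-national panel models. As a stylized sketch of that kind of analysis (the file, column names, and control variables here are hypothetical, and this is not York's actual specification), one might fit a country fixed-effects model and read off the displacement coefficient:

```python
# Sketch: how much does one unit of non-fossil electricity displace fossil
# electricity? Fit a country fixed-effects panel regression; the data file
# and all column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wdi_electricity_panel.csv")   # hypothetical panel: country, year, ...

model = smf.ols(
    "fossil_kwh_pc ~ nonfossil_kwh_pc + gdp_pc + urban_pct + C(country)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["country"]})

# A coefficient near -1 would mean full one-for-one displacement; York's
# electricity result corresponds to something closer to -0.1.
print(model.params["nonfossil_kwh_pc"])
```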
Admittedly, York said, energy-producing technologies based on solar, wind and waves are relatively new and may yet provide viable alternative sources as they are developed.
"I'm not saying that, in principle, we can't have displacement with these new technologies, but it is interesting that so far it has not happened," York said. "One reason the results seem surprising is that we, as societies, tend to see demand as an exogenous thing that generates supply, but supply also generates demand. Generating electricity creates the potential to use that energy, so creating new energy technologies often leads to yet more energy consumption."
Related to this issue, he said, was the development of high-efficiency automobile engines and energy-efficient homes. These improvements reduced energy consumption in some respects but also allowed for the production of larger vehicles and bigger homes. The net result was that total energy consumption often did not decrease dramatically with the rising efficiency of technologies.
"In terms of governmental policies, we need to be thinking about social context, not just the technology," York said. "We need to be asking what political and economic factors are conducive to seeing real displacement. Just developing non-fossil fuel sources doesn't in itself tend to reduce fossil fuel use a lot -- not enough. We need to be thinking about suppressing fossil fuel use rather than just coming up with alternatives alone."
The findings need to become part of the national discussion, says Kimberly Andrews Espy, vice president for research and innovation at the UO. "Research from the social sciences is often lost in the big picture of federal and state policymaking," she said. "If we are to truly solve the challenges our environment is facing in the future, we need to consider our own behaviors and attitudes."
About the University of Oregon
The University of Oregon is among 108 institutions chosen from 4,633 U.S. universities for top-tier designation of "Very High Research Activity" in the 2010 Carnegie Classification of Institutions of Higher Education. The UO also is one of two Pacific Northwest members of the Association of American Universities.
Source: Richard York, associate professor of sociology and environmental studies, 541-346-5064, firstname.lastname@example.org
York faculty page: http://sociology.uoregon.edu/faculty/york.php
Department of Sociology: http://sociology.uoregon.edu/
Environmental Studies Program: http://envs.uoregon.edu/
UO Science on Facebook: http://www.facebook.com/UniversityOfOregonScience
Note: The University of Oregon is equipped with an on-campus television studio with satellite uplink capacity, and a radio studio with an ISDN phone line for broadcast-quality radio interviews. Call the Media Contact above to begin the process.
Jim Barlow | Source: EurekAlert!
Further information: www.uoregon.edu
This morning at 05:45 CEST, the earth trembled beneath the Sea of Okhotsk in the northwest Pacific. The quake, with a magnitude of 8.2, took place at an exceptional depth of 605 kilometers.
Because of the great depth of the earthquake a tsunami is not expected and there should also be no major damage due to shaking.
Professor Frederik Tilmann of the GFZ German Research Centre for Geosciences: "The epicenter is exceptionally deep, far below the earth's crust in the mantle. Such strong ...
The Ring Nebula's distinctive shape makes it a popular illustration for astronomy books. But new observations by NASA's Hubble Space Telescope of the glowing gas shroud around an old, dying, sun-like star reveal a new twist.
"The nebula is not like a bagel, but rather, it's like a jelly doughnut, because it's filled with material in the middle," said C. Robert O'Dell of Vanderbilt University in Nashville, Tenn.
He leads a research team that used Hubble and several ground-based telescopes to obtain the best view yet of ...
New indicator molecules visualise the activation of auto-aggressive T cells in the body as never before
Biological processes are generally based on events at the molecular and cellular level. To understand what happens in the course of infections, diseases or normal bodily functions, scientists would need to examine individual cells and their activity directly in the tissue.
The development of new microscopes and fluorescent dyes in ...
A fried breakfast food popular in Spain provided the inspiration for the development of doughnut-shaped droplets that may provide scientists with a new approach for studying fundamental issues in physics, mathematics and materials.
The doughnut-shaped droplets, a shape known as toroidal, are formed from two dissimilar liquids using a simple rotating stage and an injection needle. About a millimeter in overall size, the droplets are produced individually, their shapes maintained by a surrounding springy material made of polymers.
Droplets in this toroidal shape made ...
Fraunhofer FEP will present a novel roll-to-roll manufacturing process for high-barrier and functional films for flexible displays at the SID Display Week 2013 in Vancouver – the international showcase for the display industry.
Displays that are flexible and paper-thin at the same time?! What might still seem like science fiction will be a major topic at the SID Display Week 2013, currently taking place in Vancouver, Canada.
High manufacturing cost and a short lifetime are still a major obstacle on ...
| <urn:uuid:afeb7bd2-653b-4517-8b2d-a660cf96c62b> | 3.1875 | 1,604 | Content Listing | Science & Tech. | 40.352889
Can you imagine being able to witness the start of the enzymatic process that ensures oxygen is carried throughout your blood stream? Can you fathom being able to see the first structural movement of a protein in a timescale of a hundred trillionths of a second? Michael Fayer can. And because this seemingly mild-mannered physical chemist has such an active imagination, he has been able to develop laser techniques that can answer piercing biological questions that have eluded us for decades.
The author of two books, including the brand-new "Absolutely Small: How Quantum Theory Explains our Everyday World," Fayer is the David Mulvane Ehrsam and Edward Curtis Franklin Professor of Chemistry at Stanford University. His PhD is from UC Berkeley, and he is a member of the National Academy of Sciences and the American Academy of Arts and Sciences. Fayer is also a Fellow of the APS, the OSA, and the Royal Academy of Chemistry.
Fayer's research focuses on observing incredibly fast chemical processes as they happen. For example, on the picosecond scale, "proteins are constantly undergoing structural fluctuations," he says. "This is what allows the proteins do the various chemical (enzymatic) processes that make life possible. We can use lasers to study the fast motions of proteins."
Fayer and his team use ultrafast, femtosecond (10^-15 second) lasers to conduct nonlinear spectroscopic experiments in order to follow the motion of not only proteins but other molecules as well. The advanced lasers chart the course of these rapid chemical processes by recording spectra, in many cases multi-dimensional infrared spectra, as a function of time during the course of the important chemical and physical events.
Fayer's laser experiments have enabled us to better understand water dynamics. "How does water behave? It has such different behavior than other liquids," he notes. Unlike other solidified fluids, ice floats. This is odd behavior for a liquid. But Fayer has applied a laser technique, involving ultrafast laser pulses to examine the vibrational echoes of water in two dimensions, allowing him to study the hydrogen bond network in water and explain how this vital substance maneuvers through various materials. "What water does dynamically tells us how processes work where water is the most common molecule involved," he elucidates. "Up until recently we couldn't do this. Using ultrafast two dimensional, nonlinear (optics), we can (finally) characterize water."
"This work is important in biology, (because) protein folding depends on how water can reorganize around the protein," he continues. Finally, thanks to these laser techniques, scientists can scrutinize the water's dynamics at the surfaces of model cell membranes or in nanoscopic environments found in biological systems that allow cells to function properly. The research also opens conduits of understanding in areas of geology and even the study of fuel cell membranes, where water has to conduct protons through the membranes, he adds.
With the advent of ultrafast nonlinear infrared spectroscopy, we "can examine particular groups of atoms, which are handles for opening up our understanding of molecular dynamical processes," says Fayer. And as more and more laser systems become turnkey and easier to use by non-experts, he predicts the techniques and results of laser research will expand exponentially. The laser pioneer believes that "in the same manner that sophisticated nuclear magnetic resonance (NMR) instruments permeate wide areas of science, ultrafast multi-dimensional infrared spectroscopy will become a common place scientific tool." | <urn:uuid:83f6e7fa-0be0-4ade-a861-5da068a6346d> | 3.515625 | 724 | Knowledge Article | Science & Tech. | 30.033553 |
gethostid() and sethostid() respectively get or set a unique 32-bit identifier for the current machine. The 32-bit identifier is intended to be unique among all Unix systems in existence. This normally resembles the Internet address for the local machine, as returned by gethostbyname(3), and thus usually never needs to be set.

The sethostid() call is restricted to the superuser.

gethostid() returns the 32-bit identifier for the current host as set by sethostid(). On success, sethostid() returns 0; on error, -1 is returned, and errno is set to indicate the error.
sethostid() can fail with the following errors:

EACCES The caller did not have permission to write to the file used to store the host ID.

EPERM The calling process's effective user or group ID is not the same as its corresponding real ID.
4.2BSD; these functions were dropped in 4.4BSD.
In the glibc implementation, the hostid is stored in the file /etc/hostid. (In glibc versions before 2.2, the file /var/adm/hostid was used.)

In the glibc implementation, if gethostid() cannot open the file containing the host ID, then it obtains the hostname using gethostname(2), passes that hostname to gethostbyname_r(3) in order to obtain the host's IPv4 address, and returns a value obtained by bit-twiddling the IPv4 address. (This value may not be unique.)
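Python's standard library does not wrap this call, but on a glibc system it can be reached through ctypes. A minimal sketch (Linux-only, assuming libc.so.6 is present; error handling omitted):

```python
# Read the 32-bit host identifier via glibc's gethostid(3).
import ctypes

libc = ctypes.CDLL("libc.so.6")          # assumes a typical Linux/glibc system
libc.gethostid.restype = ctypes.c_long

hostid = libc.gethostid() & 0xFFFFFFFF   # mask down to the 32-bit identifier
print(f"host id: {hostid:08x}")
```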
It is impossible to ensure that the identifier is globally unique. | <urn:uuid:26e12b27-fa62-4a9e-bd19-302056af8842> | 2.9375 | 284 | Documentation | Software Dev. | 58.4075 |
Real leaves are natural energy factories that can split water molecules and create hydrogen ions. Scientists have long tried to copy the molecules involved in creating hydrogen, but a Chinese team took a different tack by creating an artificial structure based on natural leaves as templates. Early tests have shown that the artificial leaves could soak up twice as much light and produce three times as much energy as the real thing, New Scientist reports.
Five amazing, clean technologies that will set us free, in this month's energy-focused issue. Also: how to build a better bomb detector, the robotic toys that are raising your children, a human catapult, the world's smallest arcade, and much more. | <urn:uuid:0e327f57-0ad1-4db9-9391-1a17f386c3cc> | 3.46875 | 134 | Content Listing | Science & Tech. | 40.208119 |
From W3C Wiki
A "Node ID," is a way to conveniently identify nodes within the confines of a single file.
Outside of the file, the nodeID doesn't mean anything; you just see BlankNodes. Within the file, you can refer back to the node with a simple name.
You create a nodeID with something like the following form:

<rdf:Description rdf:nodeID="node1"> ... </rdf:Description>

You refer back to it with something like the following form:

<ex:someProperty rdf:nodeID="node1"/>
I'm researching the concept of a "Node ID", also known as a "bNode."
According to link...
...is something to do with the node ID.
I suppose it's a short convenient string name for a node.
This, then, would be an RDF "Description" class instance, and we're calling it "node1."
If you wanted to talk about that node, you'd say...
link includes some notes on nodeID. It seems to confirm what I think.
Okay, I'm writing this up. If someone can correct this, it'd be great.
- is a bNode the same thing as a NodeId?
- is it only part of RdfXmlSyntax, or does this concept exist in the larger world of RDF formats, such as NotationThree?
W3C had something to say about blank nodes:
...so, it does seem that the blank node concept is the same as the nodeID.
And again, from the page, it notes that "Thing" is invisible from outside the file. Hence, "blank."
There also seems to be a way to make something like a NodeId, but visible from the outside world, using rdf:about instead of rdf:nodeID. | <urn:uuid:7e0183ec-50ab-46c8-90ed-5c5095872b8c> | 2.703125 | 356 | Knowledge Article | Software Dev. | 76.056634
The above plot is average annual temperature for the selected station. Switch stations by using the map below. In grey is historical data from the Global Historical Climatology Network (GHCN). The post-1900 trend is in black. GHCN includes approximately 6,000 temperature stations across the globe, and is the primary source for temperature data for global temperature records. Records in this GHCN database date back to as early as 1700. There are some gaps in the GHCN data, and when this occurs, the legend will say "N/A."
In red and orange are the projections from global climate models under two IPCC greenhouse gas emissions scenarios: high (red, A1B) and low (orange, B1). The result is an average of many global climate simulations, which can be found here. The low emissions scenario projects a 1.8°C global rise in temperature through 2100, while the high emissions scenario projects a 2.8°C global rise in average temperature through 2100. Note: The high emissions scenario is not the highest in the IPCC scenarios. The highest scenario is A1F1, which projects up to 6.4°C warming, globally. Our current emissions are on a pace greater than A1B, and close to A1F1.
* Snowfall data only available for stations in the U.S.
About Local Climate Change
People all across the globe are feeling the impacts of climate change in their back yards. The Local Climate Change product is a way to see changes that are happening in your region. Use the tabs to browse historical and projected changes in temperature, precipitation, and snow. We plan to add more data later this year, including drought, growing season, sea level rise, and extreme weather. | <urn:uuid:272fca0a-524b-4acf-8f07-54320ae355f2> | 3.03125 | 356 | Knowledge Article | Science & Tech. | 53.874241
Lectures - Monday and Wednesday, 11:00 AM - 12:15 PM
Lab - Tuesday, 4:10 PM -7 PM
Last week you explored the geographical variations of Earth's albedo, reflected solar radiation, and Earth's radiation received by satellites from cloud free areas of the Earth's surface. The patterns you observed were controlled by the curvature of the earth, variations in seasonal radiation received from the sun, and varying properties of the Earth's surface. This week we will explore the effect of clouds on these patterns. Accordingly, the data sets you will look at first this week are those under the category "total" (this link will open a new window with these data).
We will want to compare some of the cloud free data sets from last week's lab with the total datasets used this week. These comparisons, which we recommend you do when looking at the albedo data sets, will allow you to differentiate between reflectivity that is caused by clouds and that which is caused by Earth surface properties such as ice and snow. As you look at these data, note what areas of the Earth clouds persistently cover and what areas are generally cloud free. The reasons for these patterns will become clear as the course continues.
Go to the window you opened earlier that contains the total fields. Look at the January map of total albedo (adjust your graphical interface so continental outlines are shown, set your colorscale range from 0 to 100 and then choose the option of "colors | contours"). Access clear-sky albedo from last week's lab in your other opened window (make the same display choices as above).
Task I: Comparing between clear-sky and total.
Go back to the windows that contain the total and clear-sky data and navigate to the shortwave fields. We will now calculate the globally averaged amount of reflected solar radiation in the clear-sky case and in the total to evaluate the effect of clouds on the radiation budget. To do that, click on the "expert mode" link in the upper right corner of the window. We will first average all the data over time to look at the annual average. In the expert mode window type the following line of text below the text which is already present in the window:

[T] average
Now click the "OK" button to the right of the expert window. This tells the software to average the data over all time slices. Each point in space is averaged separately. View this field to look at the annual average total reflected shortwave radiation. Do the same with the clear-sky field. As with the albedo fields, a comparison between the total and clear-sky data highlights the role of clouds in the short wavelength budget.
Now return to expert mode and continue typing underneath the first line you entered:
Y cosd mul
This multiplies every grid point in space by the cosine of the latitude angle (Y is the latitude angle in degrees, and cosd is a calculation of cosine when the angle is given in degrees). We need to do that so that our grid points will be properly weighted with respect to the geographical areas they represent as there are more grid points per unit area in the high latitudes than in the tropics. Then type:
[X Y] average
Cick the "OK" button again. The viewer will return a single number just below the expert window (in bold letters). That number is the amount of reflected shortwave energy averaged over the entire globe in W/m2. Do the same operation with clear-sky shortwave radiation.
Task 2: Record the global annual averages for both the total and clear-sky shortwave radiation. (Results) How do you explain the difference between these numbers in relation to cloud cover? If cloud cover increases, how will this difference change? (Discussion)
Open a new browser window with the total longwave radiation dataset (hold the apple/command key down when you click here). In the expert mode calculate the annual mean outgoing longwave (that is, type "[T] average" and click OK). In the other window, go back to the view of the global mean reflected radiation. Display the figures side-by-side.
The net radiation is the difference between the radiation coming into the Earth from the sun and the energy radiated by the Earth to space. For the planet as a whole what comes in must equal what goes back out; however, more radiation comes in at the tropics than goes out in these latitudes and more is radiated from the higher latitudes (north and south of the tropics) than comes in. This difference provides the energy to drive the circulation of the atmosphere and ocean.
Task 4: Calculate the global annual average net radiation (use the expert mode again). (Results) What percentage of incoming solar radiation at the top of the atmosphere (So/4 = 342 Wm-2) is the global annual average net radiation? (Results) (You should find one value for global annual average net radiation using the same technique as in Section B.)
In order to try to understand how clouds affect the Earth's radiation budget, ERBE scientists calculated the difference between the clear-sky fields and the total fields. The result is often referred to as cloud-forcing. The fields in this dataset show how much clouds affect the amount of radiation available to Earth by comparing the data from the same locations during cloudy and non cloudy days. This calculation cannot be done in some places due to insufficient data.
Go to the annual average net cloud forcing dataset. Here you can see that clouds may locally warm or cool a given region. The degree of cloud cover at given location is necessary but not sufficient to determine the cloud forcing. Compare the region over the western tropical Pacific (near Indonesia and northern Australia) to the North Pacific (between Japan and Alaska), two areas with extensive cloud cover.
Task 5: What is the net cloud forcing in these two areas? (Results) How does this relate to the balance between shortwave reflection and longwave emission? (Discussion)
The Stefan-Boltzmann Law relates the amount of longwave radiation emitted by a black body to its temperature. Thus we can calculate the effective temperature of Earth as determined by its total longwave radiation emitted into space, and compare it with the surface temperature. To do that let us go back to the total longwave radiation dataset (you might want to close all other windows first).
In the expert mode we can calculate the temperature corresponding to the outgoing longwave radiation by first dividing by the Stefan-Boltzmann constant and then taking the square root of the result twice. We do that as follows:
X Y 1 SM121
This code converts the results to °C and adds some smoothing in space. View the results in colors and contours.
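The same Stefan-Boltzmann inversion is easy to check offline. A sketch in Python (without the smoothing step):

```python
# Effective emission temperature from outgoing longwave radiation:
# OLR = sigma * T**4  =>  T = (OLR / sigma) ** 0.25
SIGMA = 5.670e-8                     # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_temp_c(olr_wm2):
    return (olr_wm2 / SIGMA) ** 0.25 - 273.15

# A global mean OLR of roughly 235 W/m^2 gives about -19 C,
# far colder than the mean surface temperature of roughly 15 C.
print(f"{effective_temp_c(235.0):.1f} C")
```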
Now open the JONES surface temperature climatology dataset. These temperature data were carefully compiled from land station measurements and from ship observations made from 1854 through 1994.
We can use these temperatures to calculate the amount of longwave radiation emitted from the Earth's surface, and compare that to the ERBE measurements of what is emitted into space. Go back to the longwave radiation dataset and in the expert mode replace the Stefan-Boltzmann calculation with a calculation of the annual averaged longwave radiation using "[T] average" as described before. Then go to the Jones surface temperature dataset and use the Stefan-Boltzmann Law to calculate the longwave radiation emitted from the surface assuming that surface emissivity is 1 (not accurate, but sufficient for our purpose). To do that, type the following into the expert window:
dup mul dup mul
The command "dup" duplicates the data which is then multiplied by the original using "mul". This is done twice to create the 4th power of temperature in K.
Task 7: Compare the annual averaged longwave radiation coming from the surface with that going out to space. Which is larger? What does the difference represent? Where is the effect (difference) largest. Where is it smallest?
In the previous section, you used the Stefan-Boltzmann law to calculate the effective temperature of a planet heated by the sun. The same reasoning applies to ordinary paper heated by a lamp. In this experiment you will use a desk lamp, a two channel thermometer and two pieces of paper, one black and one white. Both pieces of paper can be treated as black bodies with different albedos. Calculated albedos are written on the pieces of paper. You will do the experiment on both pieces of paper at the same time under the same conditions. Tape one of the thermometer sensors to each piece of paper. Place the papers close to each other under the desk lamp and turn the lamp on. You will find that the temperatures of both papers immediately rise. Why? After a few minutes, the temperatures stop growing and stay constant, which means the papers have reached an equilibrium state. Now write down those steady state temperatures. Why is there a big difference? Use the Stefan-Boltzmann law to calculate the energy emitted by the lamp in two ways: one using the thermal equilibrium of the white paper, the other for the black paper. Are they the same?
It is often very useful to describe a light-emitting body in terms of its emission spectrum, that is, the partitioning of energy among the different frequencies (or wavelengths) composing the light. For example, the sun emits most of its energy in the visible part of the spectrum, centered on wavelengths around 500 nm (green). This means that its emission spectrum shows a bump at these wavelengths, where its brightness is at a maximum. A spectroscope is a device that allows you to see the spectrum of the incoming lights. Point the spectroscope at two different light sources, a fluorescent light (ceiling light) and a halogen light (desk lamp). You will notice that the different colors are not present at the same brightness. Sketch a qualitative spectrum plot (intensity versus wavelength) for each light sources. Make sure to label both axes and use the spectral chart to convert the colors to wavelengths in microns. Compare the two spectra. Are both continuous? Where are peak intensities?
Write a lab report (as per the Lab Report Format) summarizing the major findings of your investigation. In addition, explain the following questions in your discussion section:
Report the results of your hands-on experiments.
July 9, 2007
To report problems, email webmaster. | <urn:uuid:68e4f164-2234-4021-9240-9e1892abd3f2> | 3.65625 | 2,156 | Tutorial | Science & Tech. | 47.572116 |
The Convention on the Prohibition of Military or Any Other Hostile Use of Environmental Modification Techniques (Geneva: 18 May 1977, Entered into force: 5 October 1978) prohibits "widespread, long-lasting or severe effects as the means of destruction, damage or injury". However it has been argued that this permits "local, non-permanent changes".
Prior to the Geneva Convention, the United States used weather warfare in the Vietnam War. Under the auspices of the Air Weather Service, the United States' Operation Popeye used cloud seeding over the Ho Chi Minh Trail, increasing rainfall by an estimated thirty percent during 1967 and 1968. It was hoped that the increased rainfall would reduce the rate of infiltration down the trail.
A research paper produced for the United States Air Force written in 1996 speculates about the future use of nano-technology to produce "artificial weather": clouds of microscopic computer particles, all communicating with each other to form an intelligent fog that could be used for various purposes. "Artificial weather technologies do not currently exist. But as they are developed, the importance of their potential applications rises rapidly." Weather modification technologies are described as "a force multiplier with tremendous power that could be exploited across the full spectrum of war-fighting environments."
See also
- Environmental Warfare and Climate Change Michel Chossudovsky.
- Transcript of the US Senate Hearing on Weather Modification of March 20, 1974
- House, Tamzy J., et al. "Weather as a Force Multiplier: Owning the Weather in 2025". US Air Force. Retrieved April 17, 2012.
- Non Lethal Warfare Proposal:Weather Modification, The Sunshine Project
| <urn:uuid:b3a24cda-8ca4-46f6-a1e5-5ea43a6eb2cf> | 3.3125 | 387 | Knowledge Article | Science & Tech. | 31.683288
When we get the signal that we hit the atmosphere, we're already on the surface... yet we don't know in what condition. So you are sitting there going "our rover right now is on the surface of Mars... dead or alive." And then you live the next seven minutes living in this delayed time.
It's like the NBC delay, but worse, and because of physics. Watch him explain the whole process and get ready for your head to explode.
This is the same phenomenon that allows us to see the Pillars of Creation using Hubble even though they may have been destroyed thousands of years ago. Since light has to travel through space, everything is delayed.
In fact, everything you look at, this screen, your desk or your sofa, the view from your window, your own hands, is already in the past. Only a tiny fraction of a second, but the reality you see now is already gone when it hits your eyes.
The same happens with those radio signals. Since they have to travel through space, there's a communication delay between Earth and Mars. It's about fourteen minutes one way right now, more than enough to drive everyone at JPL crazy for seven minutes. Starting with Thoma, who was both the Mars Curiosity's Descent Stage Lead Structures & Configuration Engineer (the team behind the famous sky crane that everyone marvels at) and the Mechanical Lead for Assembly, Test, and Launch Operations for the mission.
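The delay itself is just distance divided by the speed of light. A quick sketch (the Earth-Mars distance on landing night is an approximate, assumed figure, not an official ephemeris value):

```python
# One-way signal delay between Earth and Mars.
C_KM_S = 299_792.458            # speed of light, km/s
distance_km = 248_000_000       # approximate Earth-Mars distance at landing (assumed)

delay_s = distance_km / C_KM_S
print(f"one-way delay: {delay_s / 60:.1f} minutes")   # ~13.8 minutes
```

So the whole seven-minute descent is over before the "entered the atmosphere" signal even reaches Earth.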
Godspeed Curiosity! And go faster faster, damn radio waves.
I'm going to live blog the landing live from the JPL later tonight, starting around 12:30am ET/9:30pm PT.
Gizmodo is covering the Mars Curiosity rover live from the Jet Propulsion Laboratory in Pasadena, California. Check all the articles here. | <urn:uuid:347b6a64-063c-48a9-8526-fa052ade14dc> | 2.890625 | 359 | Personal Blog | Science & Tech. | 64.223064 |
Investigate this balance which is marked in halves. If you had a
weight on the left-hand 7, where could you hang two weights on the
right to make it balance?
Investigate what happens when you add house numbers along a street
in different ways.
48 is called an abundant number because it is less than the sum of
its factors (without itself). Can you find some more abundant numbers?
If the answer's 2010, what could the question be?
In this section from a calendar, put a square box around the 1st,
2nd, 8th and 9th. Add all the pairs of numbers. What do you notice
about the answers?
What happens when you add the digits of a number then multiply the
result by 2 and you keep doing this? You could try for different
numbers and different rules.
Explore Alex's number plumber. What questions would you like to ask? What do you think is happening to the numbers?
Well now, what would happen if we lost all the nines in our number
system? Have a go at writing the numbers out in this way and have a
look at the multiplications table.
Start with four numbers at the corners of a square and put the
total of two corners in the middle of that side. Keep going... Can
you estimate what the size of the last four numbers will be?
This problem is based on the story of the Pied Piper of Hamelin. Investigate the different numbers of people and rats there could have been if you know how many legs there are altogether!
Which times on a digital clock have a line of symmetry? Which look
the same upside-down? You might like to try this investigation and
Find out why these matrices are magic. Can you work out how they were made? Can you make your own Magic Matrix?
Place the numbers 1 to 10 in the circles so that each number is the
difference between the two numbers just below it.
Peter, Melanie, Amil and Jack received a total of 38 chocolate
eggs. Use the information to work out how many eggs each person
Max and Mandy put their number lines together to make a graph. How
far had each of them moved along and up from 0 to get the counter
to the place marked?
How could you put eight beanbags in the hoops so that there are
four in the blue hoop, five in the red and six in the yellow? Can
you find all the ways of doing this?
Winifred Wytsh bought a box each of jelly babies, milk jelly bears,
yellow jelly bees and jelly belly beans. In how many different ways
could she make a jolly jelly feast with 32 legs?
You have 5 darts and your target score is 44. How many different
ways could you score 44?
The value of the circle changes in each of the following problems.
Can you discover its value in each problem?
Use the information to work out how many gifts there are in each
Can you design a new shape for the twenty-eight squares and arrange
the numbers in a logical way? What patterns do you notice?
Cassandra, David and Lachlan are brothers and sisters. They range
in age between 1 year and 14 years. Can you figure out their exact
ages from the clues?
There are 4 jugs which hold 9 litres, 7 litres, 4 litres and 2
litres. Find a way to pour 9 litres of drink from one jug to
another until you are left with exactly 3 litres in three of the jugs.
There are 44 people coming to a dinner party. There are 15 square
tables that seat 4 people. Find a way to seat the 44 people using
all 15 tables, with no empty places.
There are 78 prisoners in a square cell block of twelve cells. The
clever prison warder arranged them so there were 25 along each wall
of the prison block. How did he do it?
Complete these two jigsaws then put one on top of the other. What
happens when you add the 'touching' numbers? What happens when you
change the position of the jigsaws?
You have two egg timers. One takes 4 minutes exactly to empty and
the other takes 7 minutes. What times in whole minutes can you
measure and how?
Annie cut this numbered cake into 3 pieces with 3 cuts so that the
numbers on each piece added to the same total. Where were the cuts
and what fraction of the whole cake was each piece?
Using the statements, can you work out how many of each type of
rabbit there are in these pens?
Cherri, Saxon, Mel and Paul are friends. They are all different
ages. Can you find out the age of each friend using the clues?
These sixteen children are standing in four lines of four, one
behind the other. They are each holding a card with a number on it.
Can you work out the missing numbers?
Can you put plus signs in so this is true? 1 2 3 4 5 6 7 8 9 = 99
How many ways can you do it?
This magic square has operations written in it, to make it into a
maze. Start wherever you like, go through every cell and go out a
total of 15!
In a Magic Square all the rows, columns and diagonals add to the 'Magic Constant'. How would you change the magic constant of this square?
Arrange eight of the numbers between 1 and 9 in the Polo Square
below so that each side adds to the same total.
Can you arrange 5 different digits (from 0 - 9) in the cross in the
There are three buckets each of which holds a maximum of 5 litres.
Use the clues to work out how much liquid there is in each bucket.
Zumf makes spectacles for the residents of the planet Zargon, who
have either 3 eyes or 4 eyes. How many lenses will Zumf need to
make all the different orders for 9 families?
Write the numbers up to 64 in an interesting way so that the shape they make at the end is interesting, different, more exciting ... than just a square.
On the planet Vuv there are two sorts of creatures. The Zios have 3 legs and the Zepts have 7 legs. The great planetary explorer Nico counted 52 legs. How many Zios and how many Zepts were there?
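This one is a small Diophantine equation, 3z + 7p = 52, and one way to explore it is a brute-force search that lists every combination:

```python
# Zios have 3 legs, Zepts have 7; find all non-negative counts totalling 52 legs.
for zepts in range(52 // 7 + 1):
    remainder = 52 - 7 * zepts
    if remainder % 3 == 0:
        print(f"{remainder // 3} Zios and {zepts} Zepts")
# -> 15 Zios and 1 Zept, 8 Zios and 4 Zepts, 1 Zio and 7 Zepts
```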
This problem is based on a code using two different prime numbers
less than 10. You'll need to multiply them together and shift the
alphabet forwards by the result. Can you decipher the code?
The clockmaker's wife cut up his birthday cake to look like a clock
face. Can you work out who received each piece?
Add the sum of the squares of four numbers between 10 and 20 to the
sum of the squares of three numbers less than 6 to make the square
of another, larger, number.
Find at least one way to put in some operation signs (+ - x ÷)
to make these digits come to 100.
Find out what a Deca Tree is and then work out how many leaves
there will be after the woodcutter has cut off a trunk, a branch, a
twig and a leaf.
A group of children are using measuring cylinders but they lose the
labels. Can you help relabel them?
Tell your friends that you have a strange calculator that turns
numbers backwards. What secret number do you have to enter to make
141 414 turn around?
Can you make square numbers by adding two prime numbers together?
Ten cards are put into five envelopes so that there are two cards in each envelope. The sum of the numbers inside it is written on each envelope. What numbers could be inside the envelopes?
A game for 2 players. Practises subtraction or other maths | <urn:uuid:8138d692-9e71-4f45-a7df-4f30ebe6d373> | 4.25 | 1,656 | Content Listing | Science & Tech. | 75.523 |
Figure 1. Two records of climate change over the last glacial cycle. The oxygen isotopes of the ocean reflect both sea level and temperature changes. The oxygen isotopes of Greenland ice reflect the temperature of precipitation above the core site.
Paleoclimate and Deep-Sea Corals
California Institute of Technology
When many of us think of 'climate change', we envision various catastrophic scenarios in our near future. From rising carbon dioxide levels in the atmosphere, changing vegetation growing patterns, to melting ice sheets and rising sea level, there is uncertainty about our climatic future over the coming years, decades and centuries. In paleoclimatology we use the fact that all of these things have happened in the past to better understand why and if they might happen in the future. One of the goals of this NOAA Ocean Exploration program is to use deep-sea corals to help unravel the climate shifts of the recent geologic past.
Climate on Different Timescales
What might seem like the distant past to some, can seem like recent history to others. The last 120,000 years of climate history contains a single glacial to interglacial cycle. 20,000 years ago the ice sheets that covered Canada and Northern Europe were at their maximum extent (the Last Glacial Maximum, or LGM). By ~8,000 years ago they were gone. The last time it was as warm and ice free as the last 8,000 years was ~120,000 years ago (Figure 1). All of these changes are paced by variations in the earth's orbit around the sun. Operating at 20, 40 and 100 thousand year periods, these Milankovitch cycles set the clock for glacial to interglacial climate change. These can seem like very long periods relative to our lifetimes, but they are relatively short compared to all of earth history. Of the ~4.5 billion years earth has been around, the last ~2.7 million years is only one of a small number of times the earth has been glaciated, or less than one-tenth of one percent of the time.
Figure 2. Deep ocean circulation patterns in the modern Atlantic and during the Last Glacial Maximum. For our purposes the most important difference between these two maps is the larger influence of Antarctic Bottom Water at the latitude of the New England Seamounts during the glacial period.
The imprint of these climate cycles on sea level, mean temperature, ice extent and ocean/atmosphere circulation is huge. Figure 1 shows that at the LGM sea level was ~120 meters lower than today. However, in the early 1990's we were treated to a paleoclimate surprise. Records from the Greenland ice sheet published in 1992 show that the earth's orbit is not the only story. In Figure 1 we compare the temperature changes recorded by the oxygen isotopes in the Greenland ice with the sea level and temperature record from deep-sea benthic foraminifera. While the Milankovitch cycles are clear in the marine record, there are many, many more climate shifts in the ice data. Because the ice core is a nearly perfect recorder of time (ocean sediments, on the other hand, are mixed by benthic organisms), we can see that large amplitude and rapid changes in the temperature over Greenland are a characteristic feature of the last ice age. However, the last ~10,000 years have been relatively warm and stable. Understanding this 'sub-Milankovitch' climate variability, and its absence in the Holocene, is a major thrust in modern paleoclimate research.
The Deep Ocean 's Place in Climate Change
Figure 3. A fossil Desmophylum dianthus skeleton from the New England Seamounts. This sample is about 5cm tall and lived for ~50-100 years about 15,000 years ago.
The effects of glacial cycles are also seen in the deep ocean. Figure 2 shows the major deep-water masses of the modern and LGM Atlantic Ocean. Because denser water must always lie below less dense water, the ocean can be thought of as a 'stratified bathtub'. New deep waters can only punch through this stratification at special places in the high latitudes where seasonal cooling and salinification make surface waters that are denser than the water below. The water mass that forms around Greenland is called 'North Atlantic Deep Water' (NADW) and the water that forms around Antarctica is called 'Antarctic Bottom Water' (AABW). Today at the latitude of the New England Seamounts the whole water column is filled with NADW. At the LGM there is a large component of AABW below about ~2000 meters. As the deep ocean contains all of the mass, carbon and thermal inertia of the ocean-atmosphere system, changes in its circulation pattern can have profound effects on global climate.
This view of the past ocean was made from measuring the isotopic chemistry of calcium carbonate shells of benthic foraminifera that grew during the LGM. Deep-sea corals from the New England Seamounts can do the same thing, with several key advantages. The uranium-rich aragonite skeletons of Desmophyllum dianthus are ideal for measuring precise and accurate ages (Figure 3). In addition, each coral lives for about 100 years and precipitates a skeleton that can't be mixed up in time. Like the ice cores, these corals can record very rapid events in the circulation of the deep ocean. While foraminifera have been very good at recording the glacial-interglacial changes in deep ocean patterns, we need the corals to find the rapid changes, seen in the ice cores, in the deep sea.
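Those ages come from uranium-series decay in the aragonite. A heavily simplified sketch of the age relation, assuming no initial thorium and ignoring the 234U excess (real coral dating corrects for both):

```python
import math

# Simplified U-Th age: activity(230Th) / activity(238U) = 1 - exp(-lam * t)
HALF_LIFE_230TH = 75_584.0                  # years, approximate literature value
LAM = math.log(2) / HALF_LIFE_230TH

def age_years(th_u_activity_ratio):
    return -math.log(1.0 - th_u_activity_ratio) / LAM

# An activity ratio of ~0.13 corresponds to a deglacial-age coral:
print(f"{age_years(0.13):,.0f} years")      # ~15,000 years
```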
Climate's Effect on Populations
Figure 4. The North Atlantic Ocean distribution of the planktonic foraminifera N. pachyderma (sin) at the Last Glacial Maximum (LGM). Compared to the modern distribution (dashed line near Greenland), this 'foram' occupied a much wider geographic range at the LGM. This data set shows that the polar front moved very far to the south and traveled more zonally across the LGM Atlantic. It is an excellent example of how marine populations can 'feel' climate change.
Another aspect of our paleoclimate research on this expedition involves the study of populations in space and time. One of the main results from the 1970s-era CLIMAP project (Climate: Long-range Investigation, Mapping, and Prediction) was the realization that marine populations 'feel' climate change in a diagnostic way. Figure 4 shows the results of counting the relative abundance of the planktonic foraminifera N. pachyderma (sin) in modern (dashed line near Iceland) and LGM-age (shaded area) sediments. It is clear that this cool-water-loving species greatly expanded its geographic range during the last glacial period. Cooler waters north of the polar front extended much further south at the LGM and were much more zonal. Clearly the surface plankton 'feel' climate change. We have preliminary evidence from our last cruise to the central New England Seamounts in 2003 that D. dianthus populations also feel climate changes in the deep ocean. Fossil skeletons of this species are much more likely to be from glacial periods than from interglacial ones. Within a glacial period they are also more likely to be from times of the 'cold mode' of rapid climate change than from the 'warm mode' (see Figure 1). By traversing the whole seamount chain on this expedition we hope to collect the material needed to explore this intriguing distribution in both space and time.
Question: How does a Stirling generator work?
Answer: To understand a Stirling generator you first have to understand how a Stirling engine works. A Stirling engine is built around a closed chamber containing a gas (usually just air) that is heated at one end and cooled at the other. As the gas heats it expands, and as it cools it contracts. This cycle of expansion and contraction pushes and pulls on pistons that drive mechanical work. Now that you have the basics, imagine a wheel rotated by those pistons. A basic electrical generator uses coils of wire rotating within a magnetic field to produce electricity. If you were to wrap the rotating wheel with copper wire and suspend it between magnets, you would have a Stirling engine generator. It really is as simple as that. Below is a video showing this setup, except that the magnet is rotated within the wiring instead of the other way around. The spinning part in the center between the two wheels is a magnet, and the red surrounding it is the wiring.
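To put rough numbers on this, here is a short Python sketch of an ideal Stirling cycle with perfect regeneration. Every parameter value is made up for illustration, and a real engine delivers far less because of friction and imperfect heat transfer:

import math

R = 8.314                        # gas constant, J/(mol K)
n = 0.01                         # moles of gas sealed in the chamber
T_hot, T_cold = 600.0, 300.0     # heater and cooler temperatures, K
compression_ratio = 2.0          # V_max / V_min

# Net work per cycle: expansion at T_hot minus compression at T_cold
work_per_cycle = n * R * (T_hot - T_cold) * math.log(compression_ratio)
efficiency = 1.0 - T_cold / T_hot   # ideal (Carnot) upper limit

cycles_per_second = 10.0
print(f"work per cycle:   {work_per_cycle:.1f} J")
print(f"mechanical power: {work_per_cycle * cycles_per_second:.0f} W")
print(f"ideal efficiency: {efficiency:.0%}")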
Other questions related to misc energy:
- How many amps can a bike produce?
- How much does an electricity-generating bike cost?
- How is energy created from plastics?
- How does a Stirling generator work?
- Is crude oil a renewable energy resource?
- Ways to power a generator with weights?
- Where to buy bicycle powered water pump?
- What type of waste can be converted to energy?
- What are the disadvantages of using human power?
- What plans are there for the future of renewable energy sources?
- What is solid waste made up of?
- What are the renewable energy resources?
- What is butanol fuel?
- What kind of energy is used on a bicycle? | <urn:uuid:5e4bd3f4-0778-4674-bae4-818e87aa2cf1> | 3.859375 | 357 | Q&A Forum | Science & Tech. | 49.096127 |
Melting Point, the temperature at which a substance passes from the solid to the liquid state. Each chemical element has a specific melting point. The melting points of the elements range from -453.5 °F (-269.7 °C) for helium to 6,740.6 °F (3,727 °C) for carbon (graphite). Chemical compounds also have specific melting points, but some compounds decompose before the necessary temperature is reached.
Mixtures of substances, such as butter or paraffin, do not have specific melting points, but melt within a range of temperatures. Each substance in a mixture retains its own melting point. Since the substances with lower melting points melt before those with higher melting points, a mixture tends to become soft before changing into a liquid.
The freezing point, or the temperature at which liquids become solids, occurs at the same temperature as the melting point. The difference between the two is that the temperature is rising when the melting point is reached, and falling when the freezing point is reached.
A substance that is melting stays at the same temperature (its melting point) as long as any of it remains in the solid state. For example, if ice is melted in a glass, both the ice and the water stay at 32 °F (0 °C) until all of the ice is melted. As soon as there is only water in the glass, the temperature can begin to rise.
Pressure affects melting points. Most substances expand as they melt. An increase of pressure retards the melting of these substances. For such substances, increasing the pressure raises the melting point. A few substances, ice for example, contract as they melt. An increase of pressure makes it easier for the substance to melt, and therefore lowers the melting point.
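The direction of the pressure effect follows from the Clausius-Clapeyron relation, dT/dP = T * dv / L, where dv is the change in specific volume on melting and L is the latent heat of fusion. A short Python sketch for ice, using standard handbook values for water at 0 °C:

T = 273.15               # melting point, K
L = 333_600.0            # latent heat of fusion, J/kg
v_liquid = 1.0 / 999.8   # specific volume of water, m^3/kg
v_solid = 1.0 / 916.7    # specific volume of ice, m^3/kg

# Negative because ice contracts on melting (v_liquid < v_solid)
dT_dP = T * (v_liquid - v_solid) / L          # K per Pa
print(f"{dT_dP * 101_325 * 1000:.1f} mK per atmosphere")   # about -7.5 mK/atm

For a substance that expands on melting, dv is positive and the same formula gives a melting point that rises with pressure.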
For the melting points of specific chemical elements, see articles on the elements in question. | <urn:uuid:fa397d33-655e-473d-afc8-731404fd4a1b> | 4.34375 | 378 | Knowledge Article | Science & Tech. | 59.145887 |
Differences between Bacterial and Eukaryotic Transcription
Transcription occurs in all organisms, but there are several points on which bacteria and eukaryotes differ. In bacteria, the enzyme RNA polymerase begins by binding to the promoter in DNA; the core promoter elements lie at -35 and -10 base pairs upstream of the transcription start site. This RNA polymerase is responsible for the production of mRNA. Unlike bacteria, with just one kind of RNA polymerase, eukaryotes have three, each with a different role in transcription, named RNA polymerase I, II and III respectively. RNA polymerase I transcribes rRNA. RNA polymerase II transcribes all protein-coding genes, whose final transcript is mRNA, along with some snRNAs. RNA polymerase III produces small functional RNAs (e.g., tRNA, snRNA, 5S rRNA). In bacteria, RNA polymerase associates with a sigma factor in order to bind the promoter; the sigma factor helps locate the -35 and -10 elements. Eukaryotic polymerases cannot recognize the promoter region on their own because they have no sigma factors. In place of sigma factors, eukaryotes use transcription factors.
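As a toy illustration of how promoter elements can be located in sequence, here is a Python sketch that scans a DNA string for exact matches to the bacterial -35 (TTGACA) and -10 (TATAAT) consensus elements. The sequence is invented, and real promoter-finding tools score mismatches and spacing rather than demanding exact matches:

CONSENSUS = {"-35": "TTGACA", "-10": "TATAAT"}

def find_elements(dna):
    dna = dna.upper()
    hits = {}
    for name, motif in CONSENSUS.items():
        hits[name] = [i for i in range(len(dna) - len(motif) + 1)
                      if dna[i:i + len(motif)] == motif]
    return hits

# Made-up promoter with a typical 17-bp spacer between the two elements
seq = "GGCTTGACATGGCGGTGCTATCGGTTTATAATGCACTG"
print(find_elements(seq))   # {'-35': [3], '-10': [26]}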
These factors are proteins that bind to the promoter and recruit the RNA polymerase to the correct location. Many RNA polymerase II promoters contain a TATA box, located about 25 base pairs upstream of the transcription start site; the TATA box is analogous to the bacterial -10 element. The protein that mediates this binding step is the TATA-binding protein, also known as TBP. This protein binds to the TATA box and interacts with the minor groove of the DNA. For termination, bacteria use two mechanisms, called Rho-independent (intrinsic) termination and Rho-dependent termination. Rho-independent (intrinsic) termination requires a G-C-rich hairpin loop, while Rho-dependent termination requires destabilization of the hybrid between the template strand and the mRNA. In eukaryotes termination is more complicated and involves polyadenylation.
Polyadenylation mainly affects precursor mRNA. Multiple adenosine monophosphates are added to the 3' end of the RNA, with pyrophosphate released at each step. This polyadenylation process occurs in association with the spliceosomal machinery. Unlike in bacteria, in eukaryotes the DNA template is not fully exposed, because it is wound around nucleosomes. Further differences arise in the process of translation. Eukaryotic systems are complex enough that working out every process and the role every protein plays is difficult. These are some of the differences between bacterial and eukaryotic transcription.
By David L. Brown
A couple of years ago I privately predicted that the Arctic polar ice cap would melt much more quickly than scientists were willing to predict. I noted the trend and projected ice loss using common sense, and by factoring in the effect of high-albedo ice and snow being replaced by low-albedo open sea water. Albedo is a measure of the amount of sunlight reflected from the surface of celestial objects, including the Earth. It is estimated that open water absorbs 7 or 8 times more heat from the Sun than snow and ice, which reflect much of the solar energy back into space.
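The '7 or 8 times' figure follows directly from typical albedo values. A quick Python check, with albedos taken as generic textbook numbers rather than measurements:

insolation = 200.0        # W/m^2, illustrative average sunlight
albedo_snow_ice = 0.87    # fresh snow and ice reflect most sunlight
albedo_open_water = 0.07  # dark ocean water reflects very little

absorbed_ice = (1 - albedo_snow_ice) * insolation    # 26 W/m^2
absorbed_sea = (1 - albedo_open_water) * insolation  # 186 W/m^2
print(f"open water absorbs {absorbed_sea / absorbed_ice:.1f}x more")   # ~7.2x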
At that time in October, 2005, scientists were predicting the ice would not disappear for a hundred years. I boldly predicted a complete meltdown of the Arctic ice within as little as ten years, which would place it in about 2015. It wasn’t long before events began to move in the direction I had predicted.
Here are some excerpts from what I wrote in a posting made about one year ago on September 16, 2006 titled “Meltdown of Arctic Ice Continues to Accelerate”:
We have written before about the observed shrinking of the Arctic ice cap, and Star Phoenix Base has contended that the rate of disappearance is accelerating and threatens to become a runaway meltdown. Our reasoning for this pessimistic position is based on several factors. First, temperatures in the Arctic are rising faster than in temperate or tropical regions. In Alaska, for example, average temperatures already have risen by as much as seven degrees F. In the far north of Canada, Alaska, and Siberia, permafrost is melting and thus lending irony to its name, since it is no longer “perma.” That melting is releasing greenhouse gases long sequestered in the frozen bogs, adding impetus to the warming trend.
Now there is evidence that our speculations were spot on, as the British say. According to the National Aeronautics and Space Administration, the wintertime extent of Arctic ice has decreased by six percent in each of the last two years. That compares with a previously observed rate of decrease of only 1.5 percent per decade, or a mere 0.15 percent per year. It is based on those previous rates that scientists as recently as one year ago were predicting that the Arctic ice cap would not completely disappear for about 100 years.
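To see what such an acceleration implies, here is a back-of-the-envelope Python comparison of how long the winter ice extent would take to halve if each rate simply kept compounding. This is arithmetic on the two quoted rates, not a climate model:

import math

def years_to_halve(loss_rate_per_year):
    return math.log(0.5) / math.log(1.0 - loss_rate_per_year)

print(f"old rate (0.15%/yr): {years_to_halve(0.0015):.0f} years")   # ~462 years
print(f"new rate (6%/yr):    {years_to_halve(0.06):.0f} years")     # ~11 years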
I analyzed the situation in October, 2005 and made the prediction in private correspondence that it would happen much faster, perhaps in as little as ten years. I later published some of this analysis on this weblog. Some earlier postings on this subject can be found in this site’s archives, particularly the articles “Bad News for Polar Bears — The Big Thaw,” and “Catastrophic Loss of Arctic Ice in Store.” You can find these and other articles by using the keyword search function on the sidebar.
In other postings I have noted the fact that the Arctic ice cover should not be viewed only from the standpoint of its area. It is a relatively thin layer of ice floating in the Arctic Ocean, and that layer of ice has grown significantly thinner. In other words, the total volume of ice per unit of area is also declining, perhaps rapidly so. Add that to the albedo effect and you have the makings of a runaway meltdown.
Now the latest news from the National Snow and Ice Data Center (NSIDC) in Boulder, Colorado, as reported today by CNN.com (read it here) in an article titled “Arctic Sea Ice Cover at Record Low”:
Ice cover in the Arctic Ocean, long held to be an early warning of a changing climate, has shattered the all-time low record this summer…It is possible that Arctic sea ice could decline even further this year before the onset of winter.
Mark Serreze, senior research scientist at NSIDC, termed the decline “astounding.”
“It’s almost an exclamation point on the pronounced ice loss we’ve seen in the past 30 years,” he said.
Most researchers had anticipated that the complete disappearance of the Arctic ice pack during summer months would happen after the year 2070, he said, but now, “losing summer sea ice cover by 2030 is not unreasonable.”
Here is a map from the NSIDC web site showing the extent of Arctic ice as of three days ago. The purple line indicates the “normal” or median historic limits and the white area is the actual ice coverage. As can be seen, at this time the Northwest Passage is still wide open according to the NSIDC site, which you can view here.
My projections from a year ago were based on an accelerating loss of ice, and the new data appears to confirm my analysis. (Disclaimer: I am not a climate scientist but am well-read on the subject, seem to be imbued with some common sense, and am not constrained by political correctness or fear of job security.) | <urn:uuid:4c81b6ac-7cba-46fd-b8e5-590bbda9f0af> | 3.265625 | 1,033 | Personal Blog | Science & Tech. | 46.510548 |
According to a recent paper, human actions may have begun warming Earth's climate much earlier than previously expected. An article to be published in Geophysical Research Letters, and widely reported in the media, argues that around 15,000 years ago early hunters were a major factor in driving mammoths to extinction. Supposedly, this die-off had the side effect of heating up the planet. This is an interesting conjecture, since a letter just published in Nature Geoscience reaches the opposite conclusion regarding climate and the mammoths' decline. This mammoth confusion illustrates the uncertain and even contradictory evidence that abounds in climate science.
In a new study, “Biophysical feedbacks between the Pleistocene mega-fauna extinction and climate: The first human-induced global warming?,” Chris Doughty, Adam Wolf, and Chris Field—all from the Carnegie Institution for Science—present an hypothesis explaining how neolithic hunters triggered global warming thousands of years before the invention of agriculture. You might recognize Field as the co-chair of the IPCC working group on impacts, adaptation and vulnerability. Here he participates in what The Economist called “some serious boffinry.”
Supposedly, the demise of leaf-chomping woolly mammoths at the hands of Homo sapiens contributed to the spread of dwarf birch trees in and around the Arctic. This proliferation of previously suppressed birch trees darkened the largely barren, reflective landscape and accelerated temperature rise across the polar north.
The northward spread of vegetation affected the climate because of the albedo effect: white snow and ice was replaced with darker land surfaces that absorbed more sunlight and created a self-repeating warming cycle. In the scenario proposed by Doughty et al., the process would have added to natural climate change, making it harder for mammoths to cope, and helping the birch spread further.
A human getting ready to change the climate.
“A lot of people still think that people are unable to affect the climate even now, even when there are more than 6 billion people,” says lead author Doughty, “even when we had populations orders of magnitude smaller than we do now, we still had a big impact.” Of course, the end of the last glacial period was already under way when the extinction of woolly mammoths began. The deglaciation was marked by a worldwide rise in temperatures and the dramatic retreat of glaciers that once covered much of the Northern Hemisphere, but this did not happen in a smooth upward shift in temperature.
In a letter to Nature Geoscience, Felisa A. Smith, Scott M. Elliott and S. Kathleen Lyons comment on the same human impact on the large animals of the late Ice Age. In "Methane emissions from extinct megafauna," Smith et al. reach a very different conclusion than Doughty and colleagues. Herbivores produce methane as a by-product of fermentation during digestion. Today, enteric emissions by domestic livestock are an important contributor to greenhouse gas concentrations, representing ~20% of annual methane emissions. Given that methane is ~30 times more potent a GHG than CO2, gas from large wild animals could have had a significant impact on climate. According to the authors:
About 13,400 years ago, the Americas were heavily populated with large-bodied herbivores such as mammoths, camelids and giant ground sloths; the megaherbivore assemblage was richer than in present-day Africa. However, by 11,500 years ago and within 1,000 years of the arrival of humans in the New World, 80% of these large-bodied mammals were extinct. The eradication of megafauna had marked effects on terrestrial communities, including changes in vegetative structure and reorganization of food webs. Here, we suggest that the extinction also had profound effects on methane emissions and atmospheric methane concentrations, with potential implications for abrupt climate change during the Younger Dryas cold event.
Smith et al. calculated the amount of methane that would have been released by the large herbivores around the time humans arrived in America and found that the elimination of those creatures at the hands of paleo-hunters could have led directly to global cooling. Supporting their hypothesis, the letter authors note that ice-core records of methane concentration reveal an abrupt drop at the onset of the Younger Dryas cold event, about 12,800 years ago. The drop seems to be in sync with the extinction of New World megafauna (see the figure below).
Not that the impact of either scenario would have been that dramatic. “We're not saying this was a big effect,” Chris Field of the pro-warming study said, “about 0.2 degrees C (0.36 degrees F) of regional warming is the part that is likely due to humans.” They qualify their results, saying that “the point of the paper isn't that this is a big effect. But it's a human effect.”
Even so, Field and colleagues argue, the evidence of an even earlier human-made global climate impact suggests the Anthropocene could have started much earlier. The Anthropocene is the controversial name some scientists give to the time of humanity's overwhelming impact on nature (see “A Brave New Epoch?”). The authors' results, “suggest the human influence on climate began even earlier than previously believed, and that the onset of the Anthropocene should be extended back many thousands of years.”
Smith et al. are a bit more assertive, stating "the 185 to 245 ppbv methane drop observed at the Younger Dryas stadial is associated with a temperature shift of 9 to 12 °C." But like all good scientists, they too are hedging their bets: "The attribution and magnitude of the Younger Dryas temperature shift, however, remain unclear. Nevertheless, our calculations suggest that decreased methane emissions caused by the extinction of the New World megafauna could have played a role in the Younger Dryas cooling event."
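For a sense of scale, the sketch below converts that 185 to 245 ppbv drop into a radiative forcing change using the simplified IPCC (TAR-era) expression for methane, dF = 0.036 * (sqrt(M) - sqrt(M0)) W/m^2 with M in ppb. The ~650 ppbv pre-drop baseline is an assumption made here for illustration, and the small N2O overlap correction is ignored:

import math

M0 = 650.0   # assumed pre-Younger-Dryas methane concentration, ppbv
for drop in (185.0, 245.0):
    dF = 0.036 * (math.sqrt(M0 - drop) - math.sqrt(M0))
    print(f"drop of {drop:.0f} ppbv -> {dF:+.2f} W/m^2")
# roughly -0.14 to -0.19 W/m^2: a modest, but not negligible, cooling push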
Did the spread of darker vegetation, made possible to the elimination of browsing mammoths and other large herbivores, cause global warming or did the elimination of enteric methane emissions, also due to the demise of ice age megafauna, cause the abrupt cooling of the Younger Dryas? Interestingly there is a chance that both hypotheses are correct.
Given that methane has a relatively short lifespan in the atmosphere, on the order of a decade, it is possible that the cooling effect postulated by Smith et al. could have taken place fairly rapidly, leading to cooling over the short term. The change in albedo due to changing vegetation could have taken longer to transpire, contributing to the general warming trend after the Younger Dryas cold snap. Of course, both ideas may be wrong or the magnitude of their impact on climate negligible.
There is overwhelming evidence that the Younger Dryas was caused by a disruption of the Atlantic thermohaline circulation, probably due to a massive influx of glacial meltwater from the North American ice sheet. The trigger for this was most probably changes in insolation due to Earth's orbital cycles. And, given that thawing permafrost soils release large volumes of methane, CO2 and nitrous oxide (see "High nitrous oxide production from thawing permafrost," in the May 2010 Nature Geoscience), the lack of digestive emanations from the departed megafauna may have been quickly compensated for.
So who is correct? Sorry, no consensus has been reached—and herein lies an important lesson for all those following the anthropogenic global warming controversy.
Was the climate altered by lack of grazing or lack of gas?
This blog previously reported on the unsettled science regarding the extinction of the dinosaurs at the end of the Cretaceous Period, some 65 million years ago. In “Chicxulub Redux: A Lesson In How Science Works,” it was revealed that scientists still cannot agree about what killed the dinosaurs and caused one of the greatest mass extinctions in Earth's history. One might conclude that 65 million years was a long time ago and that some confusion is to be expected. But, remember, this was the most recent extinction event, and it is well documented in the rock record. Still, scientists cannot come to agreement on the events that transpired.
The events of the deglaciation that led to the Holocene warming are much more recent, lying only 10-15 thousand years in the past. Yet, much like the controversy surrounding the dinosaur extinction, scientists cannot agree on what happened during the most recent transition from glacial to interglacial conditions. Was it caused in part by the demise of the mammoths and other megafauna? It isn't even certain that humans were primarily responsible for the extinction of the mammoths and their companions. If science cannot tell us what happened in the recent past, events that led to our current temperate climate, why should we believe predictions of future conditions 100 or a 1000 years from now?
As these examples show, two teams of scientists can study the same chain of events and then publish scholarly papers in respected journals in which diametrically opposed conclusions are reached. The facts regarding anthropogenic global warming are at least as contradictory and unclear. This is why the proclamations by the IPCC should be taken with a small grain of salt. The simple truth is that science, and climate science in particular, does not know all the answers. An honest scientist will admit that progress means going from being wrong to being less wrong, and that little in science is really settled.
Scientists do not agree about the extinction of the dinosaurs 65 million years ago, what effect the disappearance of the late ice age megafauna had on climate 12,000 years ago, or even how CO2 is sequestered in Earth's oceans today. What triggered the end of the last glacial period is hotly debated in scientific circles, yet we are told there is unanimity of opinion, a consensus with regard to climate change.
“Don't blame me, I'm outa' here.”
Scientists are a bunch of bickering, willful and self-deluding individuals, just like the rest of us. If you have any doubt, witness the attempts to belittle skeptics by AGW believers. Climate science has devolved into the politics of personal destruction and the primary reason the debate has grown so nasty is because the science is so scanty, the evidence so lacking.
There are no authoritative answers regarding the dinosaurs' downfall or the disappearance of late ice age megafauna and how that event affected climate. And to blame our ancient ancestors for global warming or cooling more than 10,000 years ago smacks of hubris and self-deception. If climate science cannot accurately identify the causes of climate change in the past there is no chance it can predict the future. As a scientist, I will wait for more convincing evidence about the dinosaurs, the mammoths and global warming.
Be safe, enjoy the interglacial and stay skeptical.
[ Thanks to Pat Kerr, one of our Canadian readers, for suggesting this topic for an article. I appreciate feedback on our books and the web site, and welcome suggestions for future article topics. ] | <urn:uuid:805cc394-3ee5-45e0-bb7d-37876beed0df> | 4 | 2,322 | Personal Blog | Science & Tech. | 39.279707 |
Snapshot Issue 48 October 2008
One of Nature's wonders is to produce light. Fireflies flutter and flicker in the night while other creatures are flashing light in the depths of the ocean. Brief and relatively intense flashes are used by some to ward off predators, catch prey or even seduce a future partner. This fascinating phenomenon is the achievement of a number of proteins, amongst which is GFP, otherwise known as Green Fluorescent Protein.
GFP was discovered by a Japanese scientist in the 1960s whilst carrying out research on jellyfish, in particular Aequorea victoria. Aequorea victoria haunts the North West Pacific, where it flashes green light when the nearby tranquillity of the sea water is perturbed. The green flashes emerge from the tips of its tentacles, which turned out to be the home of GFP. When this luminescent protein was discovered, however, it raised more questions than it answered. As a result, it was overshadowed and forgotten for the best part of thirty years.
A new interest in GFP arose when its 3D structure was solved. And not surprisingly: GFP sports an almost perfect barrel shape. Besides its astonishing regularity, this very compact structure protects the fluorescent core from chemical damage.
How does GFP produce such an extraordinary property of emitting fluorescent light? On its own. And spontaneously. No other enzyme is necessary, just a little bit of oxygen. And this is what titillated the imagination of biologists, who soon realised that if the GFP gene is introduced into a cell, you can then follow that cell's progress. Furthermore, not only is GFP non-toxic but its fluorescence is not dangerous for the cell. Today, researchers have even found ways of modifying GFP in such a way that it can emit other colours such as blue, yellow or a yellowy red!
Without a doubt, GFP has revolutionized the study of biological processes in cells and living beings. Scientists can follow almost any cell through time, such as the development of an embryo, for example, or the progression of a tumour. So it hardly comes as a surprise to learn that GFP was the object of this year's Nobel prize in chemistry. Almost 50 years after its discovery. It's never too late…
The French edition of this column is available in this month's Snapshots from Prolune.
Common Lisp the Language, 2nd Edition
Some type specifier lists denote specializations of data types named by symbols. These specializations may be reflected by more efficient representations in the underlying implementation. As an example, consider the type (array short-float). Implementation A may choose to provide a specialized representation for arrays of short floating-point numbers, and implementation B may choose not to.
If you should want to create an array for the express purpose of holding only short-float objects, you may optionally specify to make-array the element type short-float. This does not require make-array to create an object of type (array short-float); it merely permits it. The request is construed to mean ``Produce the most specialized array representation capable of holding short-floats that the implementation can provide.'' Implementation A will then produce a specialized array of type (array short-float), and implementation B will produce an ordinary array of type (array t).
If one were then to ask whether the array were actually of type (array short-float), implementation A would say ``yes,'' but implementation B would say ``no.'' This is a property of make-array and similar functions: what you ask for is not necessarily what you get.
Types can therefore be used for two different purposes: declaration and discrimination. Declaring to make-array that elements will always be of type short-float permits optimization. Similarly, declaring that a variable takes on values of type (array short-float) amounts to saying that the variable will take on values that might be produced by specifying element type short-float to make-array. On the other hand, if the predicate typep is used to test whether an object is of type (array short-float), only objects actually of that specialized type can satisfy the test; in implementation B no object can pass that test.
X3J13 voted in January 1989 (ARRAY-TYPE-ELEMENT-TYPE-SEMANTICS) to eliminate the differing treatment of types when used ``for discrimination'' rather than ``for declaration'' on the grounds that implementors have not treated the distinction consistently and (which is more important) users have found the distinction confusing.
As a consequence of this change, the behavior of typep and subtypep on array and complex type specifiers must be modified. See the descriptions of those functions. In particular, under their new behavior, implementation B would say ``yes,'' agreeing with implementation A, in the discussion above.
Note that the distinction between declaration and discrimination remains useful, if only so that we may remark that the specialized (list) form of the function type specifier may still be used only for declaration and not for discrimination.
X3J13 voted in June 1988 (FUNCTION-TYPE) to clarify that while the specialized form of the function type specifier (a list of the symbol function possibly followed by argument and value type specifiers) may be used only for declaration, the symbol form (simply the name function) may be used for discrimination.
The valid list-format names for data types are as follows:
(array integer 3)         ;Three-dimensional arrays of integers
(array integer (* * *))   ;Three-dimensional arrays of integers
(array * (4 5 6))         ;4-by-5-by-6 arrays
(array character (3 *))   ;Two-dimensional arrays of characters
                          ; that have exactly three rows
(array short-float ())    ;Zero-rank arrays of short-format
                          ; floating-point numbers
Note that (array t) is a proper subset of (array *). The reason is that (array t) is the set of arrays that can hold any Common Lisp object (the elements are of type t, which includes all objects). On the other hand, (array *) is the set of all arrays whatsoever, including, for example, arrays that can hold only characters. Now (array character) is not a subset of (array t); the two sets are in fact disjoint because (array character) is not the set of all arrays that can hold characters but rather the set of arrays that are specialized to hold precisely characters and no other objects. To test whether an array foo can hold a character, one should not use
(typep foo '(array character))

but rather

(subtypep 'character (array-element-type foo))
X3J13 voted in January 1989 (ARRAY-TYPE-ELEMENT-TYPE-SEMANTICS) to change typep and subtypep so that the specialized array type specifier means the same thing for discrimination as for declaration: it encompasses those arrays that can result by specifying element-type as the element type to the function make-array. Under this interpretation (array character) might be the same type as (array t) (although it also might not be the same). See upgraded-array-element-type. However,
(typep foo '(array character))
is still not a legitimate test of whether the array foo can hold a character; one must still say
(subtypep 'character (array-element-type foo))
to determine that question.
X3J13 also voted in January 1989 (DECLARE-ARRAY-TYPE-ELEMENT-REFERENCES) to specify that within the lexical scope of an array type declaration, it is an error for an array element, when referenced, not to be of the exact declared element type. A compiler may, for example, treat every reference to an element of a declared array as if the reference were surrounded by a the form mentioning the declared array element type (not the upgraded array element type). Thus
(defun snarf-hex-digits (the-array)
  (declare (type (array (unsigned-byte 4) 1) the-array))
  (do ((j (- (length the-array) 1) (- j 1))
       (val 0 (logior (ash val 4) (aref the-array j))))
      ((< j 0) val)))
may be treated as
(defun snarf-hex-digits (the-array)
  (declare (type (array (unsigned-byte 4) 1) the-array))
  (do ((j (- (length the-array) 1) (- j 1))
       (val 0 (logior (ash val 4)
                      (the (unsigned-byte 4) (aref the-array j)))))
      ((< j 0) val)))
The declaration amounts to a promise by the user that the aref will never produce a value outside the interval 0 to 15, even if in that particular implementation the array element type (unsigned-byte 4) is upgraded to, say, (unsigned-byte 8). If such upgrading does occur, then values outside that range may in fact be stored in the-array, as long as the code in snarf-hex-digits never sees them.
As a general rule, a compiler would be justified in transforming
(aref (the (array elt-type ...) a) ...)

into

(the elt-type (aref (the (array elt-type ...) a) ...))
It may also make inferences involving more complex functions, such as position or find. For example, find applied to an array always returns either nil or an object whose type is the element type of the array.
(vector double-float)   ;Vectors of double-format
                        ; floating-point numbers
(vector * 5)            ;Vectors of length 5
(vector t 5)            ;General vectors of length 5
(vector (mod 32) *)     ;Vectors of integers between 0 and 31
X3J13 voted in March 1988 (FUNCTION-TYPE-KEY-NAME) to specify that, in a function type specifier, an argument type specifier following &key must be a list of two items, a keyword and a type specifier. The keyword must be a valid keyword-name symbol that may be supplied in the actual arguments of a call to the function, and the type specifier indicates the permitted type of the corresponding argument value. (The keyword-name symbol is typically a keyword, but another X3J13 vote (KEYWORD-ARGUMENT-NAME-PACKAGE) allows it to be any symbol.) Furthermore, if &allow-other-keys is not present, the set of keyword-names mentioned in the function type specifier may be assumed to be exhaustive; for example, a compiler would be justified in issuing a warning for a function call using a keyword argument name not mentioned in the type declaration for the function being called. If &allow-other-keys is present in the function type specifier, other keyword arguments may be supplied when calling a function of the indicated type, and if supplied such arguments may possibly be used.
A declaration specifier of the form
(ftype (function (arg1-type arg2-type ... argn-type) value-type) fname)
implies that any function call of the form
(fname arg1 arg2 ...)
within the scope of the declaration can be treated as if it were rewritten to use the-forms in the following manner:
(the value-type
     (fname (the arg1-type arg1)
            (the arg2-type arg2)
            ...
            (the argn-type argn)))
That is, it is an error for any of the actual arguments not to be of its specified type arg-type or for the result not to be of the specified type value-type. (In particular, if any argument is not of its specified type, then the result is not guaranteed to be of the specified type-if indeed a result is returned at all.)
Similarly, a declaration specifier of the form
(type (function (arg1-type arg2-type ... argn-type) value-type) var)
is interpreted to mean that any reference to the variable var will find that its value is a function, and that it is an error to call this function with any actual argument not of its specified type arg-type. Also, it is an error for the result not to be of the specified type value-type. For example, a function call of the form
(funcall var arg1 arg2 ...)
could be rewritten to use the-forms as well. If any argument is not of its specified type, then the result is not guaranteed to be of the specified type-if indeed a result is returned at all.
Thus, a type or ftype declaration specifier describes type requirements imposed on calls to a function as opposed to requirements imposed on the definition of the function. This is analogous to the treatment of type declarations of variables as imposing type requirements on references to variables, rather than on the contents of variables. See the vote of X3J13 on type declaration specifiers in general, discussed in section 9.2.
In the same manner as for variable type declarations in general, if two or more of these declarations apply to the same function call (which can occur if declaration scopes are suitably nested), then they all apply; in effect, the types for each argument or result are intersected. For example, the code fragment
(locally (declare (ftype (function (biped) digit) butcher-fudge))
  (locally (declare (ftype (function (featherless) opposable)
                           butcher-fudge))
    (butcher-fudge sam)))
may be regarded as equivalent to
(the opposable (the digit (butcher-fudge (the featherless (the biped sam)))))

or, equivalently,
(the (and opposable digit) (butcher-fudge (the (and featherless biped) sam)))
That is, sam had better be both featherless and a biped, and the result of butcher-fudge had better be both opposable and a digit; otherwise the code is in error. Therefore a compiler may generate code that relies on these type assumptions, for example. | <urn:uuid:ab31bcf0-147a-42af-8c7c-60f5cb2866b1> | 3.3125 | 2,508 | Documentation | Software Dev. | 41.41698 |
In the center of star-forming region 30 Doradus lies a huge cluster of the largest, hottest, most massive stars known. These stars, known collectively as star cluster R136, were captured above in visible light by the newly installed Wide Field Camera 3 peering through the recently refurbished Hubble Space Telescope. Gas and dust clouds in 30 Doradus, also known as the Tarantula Nebula, have been sculpted into elongated shapes by powerful winds and ultraviolet radiation from these hot cluster stars. The 30 Doradus Nebula lies within a neighboring galaxy known as the Large Magellanic Cloud and is located a mere 170,000 light-years away.
Credit: NASA, ESA, & F. Paresce (INAF-IASF), R. O'Connell (U. Virginia), & the HST WFC3 Science Oversight Committee | <urn:uuid:b03bb948-6b98-48fe-9c40-7c2968bb9a5d> | 3.546875 | 173 | Truncated | Science & Tech. | 45.24878 |
Introduction to Stars, Galaxies, & the Universe
Prof. Richard Pogge, MTWThF 9:30
(Credit: Hubble Space Telescope)
Unit 3: "Death & Transfiguration":
The Endpoints of Stellar Evolution
We continue our exploration of Stellar Evolution that we began in Unit 2 by turning to the question of how stars die.
Massive stars end their lives in spectacular supernova explosions wherein they briefly outshine entire galaxies full of billions of stars, and the ashes of their final moments seed the interstellar medium with heavy elements from which to build the next generation of stars. The remnant core of such stars is left behind as either a neutron star, a few of which are visible as rapidly spinning pulsars, or as a black hole in which matter is crushed to such high densities that its gravity warps space and time around it and nothing, not even light, may escape. Smaller stars like our Sun will gently shrug off their envelopes over the course of many thousands of years, leaving behind a white dwarf core no larger than the Earth, but containing nearly 60% of the mass of the Sun. With no sources of energy other than leftover heat from bygone days, it will gently fade into the night.
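The claim that nothing, not even light, escapes rests on the Schwarzschild radius, R_s = 2GM/c^2, the size below which a mass becomes a black hole. A quick Python sketch for stellar-mass remnants:

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg

def schwarzschild_radius_km(mass_in_suns):
    return 2 * G * (mass_in_suns * M_SUN) / c**2 / 1000.0

for m in (1, 3, 10):
    print(f"{m:>2} solar masses -> {schwarzschild_radius_km(m):.1f} km")
# 1 -> ~3.0 km, 3 -> ~8.9 km, 10 -> ~29.5 km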
Stellar Evolution plays out on time scales measured in millions and billions of years, so how do we know we are right? The crucial test of stellar evolution comes from observations of the H-R diagrams of star clusters, putting our ideas on a firm observational basis.
- Supernovae (Jan 30)
- Extreme Stars:
White Dwarfs & Neutron Stars (Jan 31)
- Black Holes (Feb 1)
- Testing Stellar Evolution (Feb 2)
- Quiz 2: Friday, February 3 (in class)
Associated Readings in Universe are listed at the top of each of the lectures.
Updated: 2006 January 29
Copyright © Richard W. Pogge, All Rights Reserved
On April 19th, 2012, Climate Central's Ben Strauss testified before the Senate committee on Energy and Natural Resources on the impact of sea level rise.
Sample coverage of the national report called Surging Seas, which has recalculated the risk of flooding to coastal communities across America.
As the land loses its struggle with the sea, it's time for new tools to learn just how climate change and sea level rise will affect the coastal U.S.
The currents around the equator in the Pacific Ocean are cooler than average this year, which means we are experiencing the phenomenon known as La Niña. This can bring good weather conditions, or poor ones, depending on where you live and your point of view. Dr. Heidi Cullen explains.
Some in Florida who lived through Katrina now are preparing for climate change-related disasters they fear could be more damaging than a hurricane.
Dr. Jørgen Peder Steffensen explained the goals of the North Greenland Eemian Ice Drilling project and what the ice cores can tell us about our climate history.
NOAA's annual climate report shows we live in a warming world.
Cape Verde storms have a long time to develop and intensify. | <urn:uuid:08717bb4-d6e3-49d9-8ced-4a9482f487bf> | 3 | 251 | Content Listing | Science & Tech. | 52.436729 |
For ornithologists, conservationists, and backyard birders, it would be a dream come true. After more than 60 years of presumed extinction, the ivory-billed woodpecker was reported spotted by expert birders in a dense Arkansas swamp. It appeared that "the Grail Bird," a ghostly symbol of one of America's ravaged natural habitats, had returned from the past.
Within days Sara Barker '94, a project leader at the Johnson Center for Birds and Biodiversity at Cornell University, had begun recruiting the team that would search for and collect evidence that the bird still existed. Barker's first challenge: to lure the best birders and scientists to the Arkansas Mississippi Delta without ever uttering these words: "ivory-billed woodpecker."
"I had to convince seventeen people that they wanted to go work down in Arkansas on a 'biodiversity project,'— Barker said, back at Cornell this summer. "I couldn't tell them [in advance] what we were doing. I said, 'An inventory of bottomland hardwood swamp and bottomland hardwood forest.'—
An understatement, but true. Fourteen months later, the search teams (including Barker, an energetic former Colby ski racer) had indeed done an exhaustive—and exhausting—inventory of the wild and primeval swamps in southeastern Arkansas. They'd encountered herons, warblers, owls, flying squirrels, ducks, many poisonous snakes—and at least one ivory-billed woodpecker.
The news was big, and not just in the bird world. The official Cornell Ornithology Lab paper breaking the ivory-bill discovery was the cover story of the prestigious journal Science. The report made the front page of The New York Times and countless other newspapers around the country, was featured on National Public Radio, and was heralded at a strobe-popping press conference in Washington, D.C. Nature Conservancy President Steven McCormick began his column in last summer's magazine with these words: "We've found the bird."
So just how big a deal is this really? "I think this probably is the most exciting [bird-related] story of the last fifty years," said Herb Wilson, the Leslie Brainerd Arey Chair in Bioscience and a nationally known ornithologist.
All > Science > Weather
- A watch is used when the risk of a hazardous weather or hydrologic event has increased significantly, but its occurrence, location, and/or timing is still uncertain. It is intended to provide enough lead time so that those who need to set their plans in motion can do so.
NOAA National Weather Service - Cite This Source - This Definition
- Blue Watch or Blue Box, Enhanced Wording, Red Watch or Red Box, SEL, Severe Thunderstorm Watch, Tornado Watch, Watch Box, Watch Cancellation, Watch Status Reports, Winter Storm Watch | <urn:uuid:219dd61a-59e0-4d10-a926-7ec4344daaf6> | 3.15625 | 123 | Structured Data | Science & Tech. | 22.885478 |
Astronomers have discovered the largest known structure in the universe, a clump of active galactic cores that stretches 4 billion light-years from end to end.
The structure is a large quasar group (LQG), a collection of extremely luminous galactic nuclei powered by supermassive central black holes. This particular group is so large that it challenges modern cosmological theory, researchers said.
Quasars are the brightest objects in the universe. For decades, astronomers have known that they tend to assemble in huge groups, some of which are more than 600 million light-years wide.
But the record-breaking quasar group, which Clowes and his team spotted in data gathered by the Sloan Digital Sky Survey, is on another scale altogether. The newfound LQG is composed of 73 quasars and spans about 1.6 billion light-years in most directions, though it is 4 billion light-years across at its widest point.
To put that mind-boggling size into perspective, the disk of the Milky Way galaxy, home of Earth's solar system, is about 100,000 light-years wide. And the Milky Way is separated from its nearest galactic neighbor, Andromeda, by about 2.5 million light-years.
Figure 7.8: Simplified principal feedback loops active in El Niño-Southern Oscillation (ENSO). The fast loop (right) gives rise to an instability responsible for the development of an El Niño; the slow loop (left) tends to dampen and reverse the anomalies, so that together these processes excite oscillations.
The strongest natural fluctuation of climate on interannual time-scales is the El Niño-Southern Oscillation (ENSO) phenomenon, and ENSO-like fluctuations also dominate decadal time-scales (sometimes referred to as the Pacific decadal oscillation). ENSO originates in the tropical Pacific but affects climate conditions globally. The importance of changes in ENSO as the climate changes, and its potential role in possible abrupt shifts, have only recently been appreciated. Observations and modelling of ENSO are addressed in Chapters 2, 8 and 9; here the underlying processes are discussed. Observational and modelling results suggest that more frequent or stronger ENSO events are possible in the future. Because social and ecological systems are particularly vulnerable to rapid changes in climate, for the next decades these may prove of greater consequence than a gradual rise in mean temperature.
ENSO is generated by ocean-atmosphere interactions internal to the tropical Pacific and overlying atmosphere. Positive temperature anomalies in the eastern equatorial Pacific (characteristic of an El Niño event) reduce the normally large sea surface temperature difference across the tropical Pacific. As a consequence, the trade winds weaken, the Southern Oscillation index (defined as the sea level pressure difference between Tahiti and Darwin) becomes anomalously negative, and sea level falls in the west and rises in the east by as much as 25 cm as warm waters extend eastward along the equator. At the same time, these weakened trades reduce the upwelling of cold water in the eastern equatorial Pacific, thereby strengthening the initial positive temperature anomaly. The weakened trades also cause negative off-equatorial thermocline depth anomalies in the central and western Pacific. These anomalies propagate westward to Indonesia, where they are reflected and propagate eastward along the equator. Thus some time after their generation, these negative anomalies cause the temperature anomaly in the east to decrease and change sign. The combination of the tropical air-sea instability and the delayed negative feedback due to sub-surface ocean dynamics can give rise to oscillations (for a summary of theories see Neelin et al., 1998). Two of these feedbacks are schematically illustrated in Figure 7.8. Beyond influencing tropical climate, ENSO seems to have a global influence: during and following El Niño, the global mean surface temperature increases as the ocean transfers heat to the atmosphere (Sun and Trenberth, 1998).
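The interplay of the fast positive feedback and the delayed negative feedback sketched in Figure 7.8 can be captured in a single delay-differential equation. Below is a schematic Python sketch in the spirit of the Suarez-Schopf delayed oscillator, dT/dt = T - T^3 - alpha * T(t - delta); the parameter values are illustrative choices, not tuned to observations:

ALPHA, DELTA = 0.75, 6.0   # delayed-feedback strength and delay (nondimensional)
DT, STEPS = 0.01, 20_000   # Euler time step and number of steps

lag = int(DELTA / DT)
T = [0.1] * (lag + 1)      # history buffer seeded with a small warm anomaly
for _ in range(STEPS):
    T_now, T_delayed = T[-1], T[-1 - lag]
    dTdt = T_now - T_now**3 - ALPHA * T_delayed
    T.append(T_now + DT * dTdt)

# The anomaly repeatedly changes sign: a self-sustained warm/cold oscillation
flips = sum(1 for a, b in zip(T[lag:], T[lag + 1:]) if a * b < 0)
print(f"sign changes: {flips}")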
Box 7.2: Changes in natural modes of the climate system.
Observed changes in climate over the Northern Hemisphere in winter reveal large warming over the main continental areas and cooling over the North Pacific and North Atlantic. This "cold ocean-warm land" pattern has been shown to be linked to changes in the atmospheric circulation, and, in particular, to the tendency in the past few decades for the North Atlantic Oscillation (NAO) to be in its positive phase. Similarly, the Pacific-North American (PNA) teleconnection pattern has been in a positive phase in association with a negative Southern Oscillation index or, equivalently, the tendency for El Niño-Southern Oscillation (ENSO) to prefer the warm El Niño phase following the 1976 climate shift (Chapter 2). Because of the differing heat capacities of land and ocean, the "cold ocean-warm land" pattern has amplified the Northern Hemisphere warming.

A fingerprint of global warming from climate models run with increasing greenhouse gases indicates greater temperature increases over land than over the oceans, mainly from thermodynamic (heat capacity and moisture) effects. This anthropogenic signal is therefore very similar to that observed, although an in-depth analysis of the processes involved shows that the dynamical effects from atmospheric circulation changes are also important. In other words, the detection of the anthropogenic signal is potentially masked or modified by the nature of the observed circulation changes, at least in the northern winter season. The detection question can be better resolved if other seasons are also analysed (Chapter 12). Attribution of the cause of the observed changes requires improved understanding of the origin of the changes in atmospheric circulation. In particular, are the observed changes in ENSO and the NAO (and other modes) perhaps a consequence of global warming itself?
There is no simple answer to this question at present. Because the natural response of the atmosphere to warming (or indeed to any forcing) is to change large-scale waves, some regions will warm while others cool more than the hemispheric average, and counterintuitive changes can be experienced locally. Indeed, there are preferred modes of behaviour of the atmospheric circulation, sometimes manifested as preferred teleconnection patterns (see this chapter), that arise from the planetary waves in the atmosphere and the distribution of land, high topography, and ocean. Often these modes are demonstrably natural modes of either the atmosphere alone or the coupled atmosphere-ocean system. As such, it is also natural for modest changes in atmospheric forcing to project onto changes in these modes, through changes in their frequency and preferred sign, and the evidence suggests that changes can occur fairly abruptly. This is consistent with known behaviour of non-linear systems, where a slow change in forcing or internal mechanisms may not evoke much change in behaviour until some threshold is crossed, at which time an abrupt switch occurs. The best known example is the evidence for a series of abrupt climate changes in the palaeoclimate record, apparently partly in response to slow changes in sea level and the orbit of the Earth around the Sun (Milankovitch changes, see Chapter 2).

There is increasing evidence that the observed changes in the NAO may well be, at least in part, a response of the system to observed changes in sea surface temperatures, and there are some indications that the warming of tropical oceans is a key part of this (see this chapter for more detail). ENSO is not simulated well enough in global climate models to have confidence in projected changes with global warming (Chapter 8). It is likely that changes in ENSO will occur, but their nature, how large and rapid they will be, and their implications for regional climate change around the world are quite uncertain and vary from model to model (see this chapter and Chapter 9). On time-scales of centuries, the continuing increase of greenhouse gases in the atmosphere may cause the climate system to cross a threshold associated with the Atlantic thermohaline circulation: beyond this threshold a permanent shut-down of the thermohaline circulation results (see this chapter and Chapter 9).
Therefore, climate change may manifest itself both as shifting means and as a changing preference for specific regimes, as evidenced by the observed trend toward positive values for the last 30 years in the NAO index and the climate "shift" in the tropical Pacific about 1976. While coupled models simulate features of observed natural climate variability such as the NAO and ENSO, suggesting that many of the relevant processes are included in the models, further progress is needed to depict these natural modes accurately. Moreover, because ENSO and NAO are key determinants of regional climate change, and they can possibly result in abrupt changes, there has been an increase in uncertainty in those aspects of climate change that critically depend on regional changes.
Figure 7.9: Darwin Southern Oscillation Index (SOI) represented as monthly surface pressure anomalies in hPa. Data cover the period from January 1882 to December 1998. Base period climatology computed from the period January 1882 to December 1981. The step function fit is illustrative only, to highlight a possible shift around 1976 to 1977.
The shifts in the location of the organised rainfall in the tropics and the associated latent heat release alters the heating patterns of the atmosphere which forces large-scale waves in the atmosphere. These establish teleconnections, especially the PNA and the southern equivalent, the Pacific South American (PSA) pattern, that extend into mid-latitudes altering the winds and changing the jet stream and storm tracks (Trenberth et al., 1998), with ramifications for weather patterns and societal impacts around the world.
Another related feedback occurs in the sub-tropics. The normally cold waters off the western coasts of continents (such as California and Peru) encourage the development of extensive low stratocumulus cloud decks which block the Sun, and this helps keep the ocean cold. A warming of the waters, such as during El Niño, eliminates the cloud deck and leads to further sea surface warming through solar radiation. Kitoh et al. (1999) found that this mechanism could lead to interannual variations in the Pacific Ocean without involving equatorial ocean dynamics. Currently, stratocumulus decks are not well simulated in coupled models, resulting in significant deviations of SST from the observed (see Chapter 8, Figure 8.1).
Indices of ENSO for the past 120 years (Figure 7.9), indicate that there is considerable variability in the ENSO cycle in the modern record. This variability has been variously attributed to: (i) stochastic forcing due to weather and other high-frequency "noise", and the Madden-Julian intra-seasonal oscillation in particular; (ii) deterministic chaos arising from internal non-linearities of the tropical Pacific ENSO system; (iii) forcing within the climate system but external to the tropical Pacific, and (iv) changes in exogenous forcing (see Neelin et al., 1998 and references therein). Palaeo-proxies, archaeological evidence, and instrumental data (see Chapter 2) all indicate variations in ENSO behaviour over the past centuries, and throughout the Holocene. Much of this variability appears to be internal to the Earth's climate system, but there is evidence that the rather weak forcing due to orbital variations may be responsible for a systematic change to weaker ENSO cycles in the mid-Holocene (Sandweiss et al., 1996; Clement et al., 1999; Rodbell et al., 1999). However, it appears that the character of ENSO can change on a much faster time-scale than that of small amplitude insolation change imposed by the Earth's varying orbit. The inference to be drawn from observed ENSO variability is that small forcings are able to cause large alterations in the behaviour of this non-linear system.
Continues on next page
Other reports in this collection | <urn:uuid:06774405-c13b-40bd-a1e5-dc2b8a9a5823> | 3.890625 | 2,249 | Academic Writing | Science & Tech. | 22.92836 |
Length: males 15-20 mm, females 18-27 mm
Tipula ultima is on the wing in September and October. Its range extends from Saskatchewan to Newfoundland, south to Florida and Louisiana, and west to Oklahoma and Wyoming.
Taber (2009) suggests that this species is associated with dying bracken ferns (Pteridium aquilinum), and that it is cryptically protected by its resemblance to the fern in color. Also, the wing shape of the fly matches closely the shape of an individual lobe on a frond of bracken.
Look for Tipula ultima in woodlands and forest edges. The species is also attracted to lights.
Gelhaus reports that the larvae have been collected from a variety of damp habitats near streams (Gelhaus, 1986).
Dr. Chen Young (1981) reports collecting 60 larvae at a Kansas location:
"They were found in saturated, organic mud in a shaded seepage area. Larvae were found at the surface or only a few mm beneath it, with their spiracular discs open to the air above them. They moved freely through the wet soil. These larvae were 18-25 mm in length and about 3 mm in breadth."
Young reported that the larvae grew to 36-41 mm in length in the laboratory and then prepared for pupation by moving to drier soil. Several centimeters below the soil surface, the larvae became inactive for about seven weeks, then moved to a vertical position just under the surface of the soil and pupated. The pupae measured 23-30 mm, with the female pupae invariably larger than the male pupae. The pupal stage lasted about eight days, with males emerging a couple of days ahead of the females.
Nocturnal mating followed emergence, and ovipositing came five days after mating. The female laid about 150 eggs a few mm below the surface in sandy soil.
Insects of West Virginia | <urn:uuid:8293bdde-cf11-4f41-82ee-959285ca78ae> | 2.828125 | 400 | Knowledge Article | Science & Tech. | 56.39212 |
Potential Problems When Low and High Voltage Meet
This is especially true for the two methods I focus on, i.e. Capillary Electrophoresis (CE) and Electrochemical Detection (ECD). Each of them simply performs, in theory, much better as a miniaturised system. To understand better where the problem lies, let me give you a brief sketch of what CE and ECD are, what they can be used for, and how this is done.
CE and other separation techniques are commonly used in analytical chemistry to separate complex samples and thereby simplify measurements on one or more components of the sample. A typical example can be found in the pharmaceutical industry, where separation techniques are often used to separate drugs from possible contaminants or by-products.
The main feature of a CE apparatus is the capillary, a thin tube with a typical diameter of 0.05 mm or less. This tube is filled with a salt solution, called the electrolyte, that provides electrical conductivity. Each end of the tube is immersed in a small beaker that also contains this electrolyte and a high-voltage electrode. For analysis, a very small amount of the sample, about 1/2000 of the volume of a raindrop, is introduced into one end of the capillary, and a high voltage, some tens of thousands of volts, is applied to the electrodes. The electrolyte then starts travelling through the capillary. The components of the sample start to travel too, each of them at a characteristic speed. Eventually the sample separates into small groups of identical components that leave the capillary at its other end, sorted by their characteristic speed. Where they elute from the capillary, one has to use some detection technique to identify and quantify them.
ECD is one of those techniques. The set-up for ECD consists of a device called a potentiostat and of three electrodes which are immersed in the beaker with electrolyte at the end of the CE capillary. One of the electrodes, the reference electrode, senses the electrical potential of the surrounding electrolyte. The electrical potential of the second electrode, the working electrode, is then set by the potentiostat to a chosen value. If the potential of the working electrode is sufficiently positive, some types of compounds nearby will attach to it and deliver an electron; if the potential is sufficiently negative, the working electrode can pass an electron to some types of compounds. In both cases an electron moves, i.e. a current flows. The function of the third electrode, the counter electrode, is to close the circuit and provide a path for any current generated at the working electrode. This current is my analytical signal and is proportional to the concentration of the electron-delivering or electron-accepting compounds.
Compounds that can deliver or accept electrons in this way are said to be electroactive. Among them one finds many important substance groups like DNA, neurotransmitters, amino acids and heavy metals.
Thus my research so far has been concerned with the high-voltage effect of CE on ECD rather than with miniaturising both systems to make a neat "Lab-on-a-chip". However, it has turned out that the relative positions of the working and reference electrodes along the path that the high-voltage current takes outside the capillary are responsible for these effects. Briefly, the disturbing effects can be eliminated simply by placing the working and reference electrodes very close to each other. Unfortunately, "very close" in this context means less than the diameter of the capillary, i.e. 0.05 mm. That is where micromachining comes into the picture. I won't discuss here how this works. However, now I am back at the initial task, the Lab-on-a-chip, but not because of the benefits of miniaturisation as anticipated initially: CE and ECD simply cannot be combined other than by miniaturisation.
Furthermore, it appears that the high-voltage field in CE can be used for some other interesting features related to ECD. Finding these, and learning how to control the interactions between the electric fields of some tens of thousands of volts used in CE and the potentials of some hundreds of millivolts used in ECD, is the topic of my further research.
| <urn:uuid:8256865a-4985-4169-9438-b4ea2d7eacba> | 2.75 | 871 | Academic Writing | Science & Tech. | 37.91192 |
A Boeing Delta 2 rocket carrying the first MER spacecraft, named Spirit, lifts off from Cape Canaveral in Florida on June 10, 2003.
Image courtesy NASA.
Mars Exploration Rovers Launched
News story originally written on July 10, 2003
The Mars Exploration Rovers (MER) mission is now underway. Twin robotic vehicles bound for Mars were successfully launched from Cape Canaveral in Florida. The rovers, which will explore the geology of Mars in search of signs of water, are scheduled to land on Mars in January 2004.
The first MER spacecraft, named Spirit, blasted off on June 10, 2003. It is slated to land within Gusev Crater on Mars. The second MER, called Opportunity, lifted off July 7, 2003. It will explore the Meridiani Planum region of The Red Planet. Both spacecraft rode into space atop Boeing Delta II launch vehicles.
Shop Windows to the Universe Science Store!
Our online store includes fun classroom activities for you and your students. Issues of NESTA's quarterly journal, The Earth Scientist, are also full of classroom activities on different topics in Earth and space science!
You might also be interested in:
How did life evolve on Earth? The answer to this question can help us understand our past and prepare for our future. Although evolution provides credible and reliable answers, polls show that many people turn away from science, seeking other explanations with which they are more comfortable....more
In the past few decades, the Russian and American space agencies have sent many spacecraft to Mars. Some have been a great success while others didn't even make it into space! In 1998, Japan also joined...more
Both Mars Exploration Rovers (MER) were launched from Cape Canaveral, Florida, during the summer of 2003. The first, Spirit, blasted off on June 10. The second, Opportunity, was launched on July 7. After...more
NASA scientists were unable to detect a signal from Pioneer 10 when they tried to contact the spacecraft on February 7, 2003. They believe Pioneer 10's radioisotope power supply no longer generates enough...more
The Mars Exploration Rovers (MER) mission is now underway. Twin robotic vehicles bound for Mars were successfully launched from Cape Canaveral in Florida. The rovers, which will explore the geology of...more
1998 was a very full year when it came to space exploration and history making. In the blast-from-the-past department, John Glenn received another go for a launch aboard Space Shuttle Discovery. After...more
Something new and exciting is happening at Windows to the Universe! Windows scientists say they discovered twelve new stars, including one that is the second brightest in the night sky! They decided to...more
The following is Andy Thomas's last letter to those on Earth. The subject -- a view from space...As I have orbited around the Earth, I have spoken to many amateur radio operators as well as television...more | <urn:uuid:a1263b31-9605-45d4-b651-9d90cf09de57> | 3.234375 | 613 | Content Listing | Science & Tech. | 55.459583 |
Please can you explain what Parameters are?
In simple terms, the parameter of a curve defines a 0-n floating point value indicating point positions along the curve.
Let's use a polyline as an example of what to expect from a parameter value.
Suppose we have a polyline which has 5 points, and each of the 4 line segments is different in length. If we call AcDbCurve::getStartParam() we will be returned a value of 0; if we call AcDbCurve::getEndParam() we will get a value of 4; and if we extract the 2nd polyline point and call AcDbCurve::getParamAt() we will be returned a parameter value of 1. So you can see that, in the case of a polyline, the parameter values directly represent each of the start and end points of the polyline.
Parameters are very powerful: for instance, I can extract the halfway point between p1 and p2 by calling AcDbCurve::getPointAtParam(0.5), or the point a third of the way between p3 and p4 by calling AcDbCurve::getPointAtParam(2.33333).
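To make this concrete, here is a minimal ObjectARX-style sketch of these calls. It assumes an already-opened AcDbCurve pointer (for instance the 5-point polyline above, whose parameters run 0 to 4); results come back through reference arguments, error statuses are ignored for brevity, and the point-to-parameter query appears as getParamAtPoint in the ObjectARX headers.

```cpp
#include "dbcurve.h"   // AcDbCurve
#include "gepnt3d.h"   // AcGePoint3d

void inspectCurve(AcDbCurve* pCurve)
{
    // Parameter range: 0 at the polyline's start, 4 at its end.
    double startParam = 0.0, endParam = 0.0;
    pCurve->getStartParam(startParam);
    pCurve->getEndParam(endParam);

    // Point halfway along the first segment (parameter 0.5).
    AcGePoint3d halfway;
    pCurve->getPointAtParam(0.5, halfway);

    // And back again: the parameter at that point is ~0.5.
    double param = 0.0;
    pCurve->getParamAtPoint(halfway, param);
}
```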
Another point to mention is that parameter values, although they define a 0-n floating point value indicating point positions on a curve, are designed to best fit the needs of the curve being implemented. For example, the polyline above uses parameter values as start and end point positions, which is a very efficient way to use parameters for a polyline and indeed makes logical sense; but for a circle the parameter values correspond to a radian increment around the circle. So, for a polyline the parameter value is non-uniform, whereas the circle uses a uniform parameter value. | <urn:uuid:7ca502e0-30e8-47d4-832f-09b141619c60> | 2.890625 | 358 | Q&A Forum | Software Dev. | 42.684167 |
From my basic understanding of white holes, they are said to be the other end of a black hole, right? A black hole sucks matter in and spews it out into another universe via a wormhole. I believe that idea was thought up by Schwarzschild? And the wormhole joining the two separate universes is known as the Einstein-Rosen bridge?
If a black hole is emptying its matter into another universe, wouldn't it quickly disappear unless it was constantly being fed large amounts of matter? And how could the event horizon grow and make SMBHs if this matter was being ejected constantly?
Or does any of this even matter? I have seen a few other articles that say white holes can't exist anyway because they violate the second law of thermodynamics. | <urn:uuid:78cceaec-3e58-4d7d-9add-d221f77d33ba> | 2.765625 | 156 | Comment Section | Science & Tech. | 55.357631 |
This Demonstration simulates a random decay/degradation process. It could represent the decay of radioactive nuclei or the degradation of a therapeutic drug in the human body. Individual decays or degradation events are visualized via the thin cyan lines. The red line is the mean of all the events. The blue line is a simple exponential function, characterized by the input decay/degradation rate. The green line is an exponential function, characterized by a maximum-likelihood estimate of the decay/degradation rate, calculated using the measured decay/degradation times. For a large number of events, the red, green, and blue curves coincide. | <urn:uuid:0ae836b0-c098-48d1-b66a-8e50185c7a30> | 2.6875 | 134 | Knowledge Article | Science & Tech. | 26.992059 |
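The logic the Demonstration above describes can be sketched in a few lines; this is not its source, and the rate, event count, and time grid are assumptions for illustration. For exponential decay, the maximum-likelihood estimate of the rate is simply the reciprocal of the mean measured decay time.

```python
import numpy as np

rate = 0.5                    # true decay/degradation rate (assumed)
n_events = 1000
rng = np.random.default_rng(0)

# Individual decay times (each thin cyan line ends at one of these).
decay_times = rng.exponential(scale=1.0 / rate, size=n_events)

# MLE of the rate from the measured times: 1 / mean(t).
rate_mle = 1.0 / decay_times.mean()

t = np.linspace(0.0, 10.0, 200)
surviving = np.array([(decay_times > ti).mean() for ti in t])  # "red" mean of events
true_curve = np.exp(-rate * t)                                 # "blue" input-rate curve
mle_curve = np.exp(-rate_mle * t)                              # "green" MLE curve
print(f"true rate = {rate}, MLE estimate = {rate_mle:.3f}")
```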
<architecture> A virtual machine implementation approach, used to speed up execution of byte-code programs. To execute a program unit such as a method or a function, the virtual machine compiles its bytecodes into (hardware) machine code. The translated code is also placed in a cache, so that next time that unit's machine code can be executed immediately, without repeating the translation.
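A schematic sketch of that translate-once-and-cache loop, with illustrative names only (this models the idea in Python rather than showing any real VM's internals):

```python
translation_cache = {}

def compile_bytecodes(unit):
    # Stand-in for the real byte-code -> machine-code translator;
    # the print makes the one-time translation cost visible.
    print(f"translating {unit} ...")
    return lambda: f"result of {unit}"

def execute(unit):
    native = translation_cache.get(unit)
    if native is None:              # first call: translate and cache
        native = compile_bytecodes(unit)
        translation_cache[unit] = native
    return native()                 # later calls run the cached code

execute("Foo.bar")   # translated, then executed
execute("Foo.bar")   # executed straight from the cache
```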
This technique was pioneered by the commercial Smalltalk implementation currently known as VisualWorks, in the early 1980s. Currently it is also used by some implementations of the Java Virtual Machine under the name JIT (Just In Time compilation).
[Peter L. Deutsch and Alan Schiffman. "Efficient Implementation of the Smalltalk-80 System", 11th Annual Symposium on Principles of Programming Languages, Jan 1984, pp. 297-302].
| <urn:uuid:a4293ef1-f1a5-41b1-8ed4-3c8a7bc17a70> | 2.953125 | 212 | Knowledge Article | Software Dev. | 26.922333 |
Genus: Colony of 4 or 16 cells arranged nearly parallel, within a gelatinous sheath, planktonic; cell body elongated spindle-shaped, slightly curved; a single chloroplast plate-like, with a pyrenoid (Guide book to the "Photomicrographs of the Freshwater Algae", 1998).
Species: Colonies of 4, 8 or 16 cells embedded in a gelatinous matrix; four cells arranging parallel; cell body needle-shaped, 20-60 μm long, 1.5-5 μm wide; (Illustrations of The Japanese Fresh-water Algae, 1977).
Images of collecting locality:
No.5, Kashibaru marsh, Karatsu city, Saga Pref., Japan, November 22, 2006 by Y. Tsukii | <urn:uuid:c33dc365-f53d-4baf-9ffc-c373f35e80d7> | 2.953125 | 169 | Knowledge Article | Science & Tech. | 50.879442 |
People hate aphids because they're incredibly successful pests. Aphids love our vegetable gardens and our flower beds. They hang out under leaves, safe from the sun, and drain a plant's energy by drinking the sap. They can also transmit all kinds of viruses. Honeydew, or an aphid's sugary excrement, coats the leaves and provides the perfect substrate for the growth of sooty mold, which blocks sunlight from chlorophyll. Aphids can reproduce sexually AND asexually--and often--so they can create huge populations in a very short time. On top of all this, they're great communicators and have figured out a way to defend themselves against predation. You might say they have a trick up their cornicles…
Aphid cornicles are paired tube-like structures located on the fifth or sixth abdomen segment that look like little exhaust pipes. They can be cylindrical, conical, or some even have setae (little bristle-like structures). Cornicles are sclerotized (hardened). They vary in length depending on the aphid species, and some species can even elongate their cornicles by flexing muscles in the abdomen. Regardless of size, length, or color, cornicles are morphological characteristics that all aphids share.
These hollow tubes secrete droplets of fluid when the aphid contracts tiny muscles located in its abdomen, under the cornicle. The contraction forces the fluid from specialized sacs that line the inside of the cornicle. For years, scientists thought this liquid was honeydew; however it was later discovered that honeydew is a waste product that exits the aphid's body through its digestive system. It’s now understood that this second liquid actually contains lipids, hemolymph, and a substance called an alarm pheromone.
Alarm pheromones are chemical cues that function as a type of “emergency broadcast system” that other members of a group can detect. It’s given off most often when there is a predator nearby. The aphid alarm pheromone is a compound called E-β-farnesene. A bit of chemistry: farnesene has six different forms. Each form has a different function. The one found in aphids, E-β-farnesene, is one of the only forms found in nature.
An aphid only produces E-β-farnesene after it’s been attacked. Once the alarm pheromone is released, any aphid within detection distance will stop feeding and walk or fall off the leaf it’s on. In species with long cornicles, the aphids will flex their abdomens and smear the pheromone onto the predator in the moments before death. This action ensures that wherever the predator goes on the plant, the other aphids know before it even arrives! It’s totally sneaky. This behavior allows the other aphids in the cluster more time to escape predation.
Even though all aphids can produce this pheromone, it is produced more frequently in immature aphids than in adults. According to a study done by Mondor et al. at Simon Fraser University in 2000, when attacked, aphid nymphs produce E-β-farnesene 100% of the time. Adults, on the other hand, only produce the pheromone about 50% of the time. You’d think that all developmental stages would want to survive, so why wouldn’t they produce equal amounts of the pheromone? Think of it in the context of aphid behavior. Adult aphids are more likely and better able to disperse over a plant than the immature stages. The nymphs tend to cluster along plant leaf veins, creating almost a smorgasbord for predators. So the more readily an immature aphid alerts the rest of the clustered group, the more members of that group will survive.
Production of an alarm pheromone is an adaptation of aphids that effectively counterbalances the fact they are slow and delicious to predators. It’s a tough life out there for animals so small, but these defensive mechanisms give them the edge they need to survive in a big world.
Want to know more? More in-depth information can be found in the references we used for this article:
Mondor, E., S. Baird, K. Slessor, and B. Roitberg. 2000. Ontogeny of alarm pheromone secretion in pea aphid, Acyrthosiphon pisum. Journal of Chemical Ecology 26(12): 2875-2882.
Mondor, E. and B. Roitberg. 2002. Pea aphid, Acyrthosiphon pisum, cornicle ontogeny as an adaptation to preferential predation risk. Canadian Journal of Zoology 80: 2131-2136.
Users will produce and analyze graphs showing water temperature, salinity, density, and chlorophyll concentration for 2004 at four buoy locations in the Gulf of Maine. The multi-year graph of chlorophyll concentration below serves to illustrate the distinct fall and spring phytoplankton blooms that occur there each year.
The Gulf of Maine is outlined in the red box in the image below. The image shows chlorophyll levels recorded by the MODIS (Moderate Resolution Imaging Spectroradiometer) sensor onboard the Terra satellite on April 23, 2003. The dark red pixels show areas where chlorophyll levels were at or above 10.0 mg/cubic meter.
After completing this chapter, students will be able to:
- explain the ecological importance of phytoplankton;
- describe the components that influence a phytoplankton bloom;
- interpret satellite images in order to correlate buoy data;
- use the scientific process to predict the onset of the spring bloom based on background data;
- download and analyze graphs of oceanographic buoy data; and
- identify geographic features in the Gulf of Maine.
The following National Science Education Standards are supported by this chapter:
- 12ASI1.1 Identify questions and concepts that guide scientific investigations. Students should form a testable hypothesis and demonstrate the logical connections between the scientific concepts guiding a hypothesis and the design of an experiment. They should demonstrate appropriate procedures, a knowledge base, and conceptual understanding of scientific investigations.
- 12ASI1.5 Recognize and analyze alternative explanations and models. This aspect of the standard emphasizes the critical abilities of analyzing an argument by reviewing current scientific understanding, weighing the evidence, and examining the logic so as to decide which explanations and models are best. In other words, although there may be several plausible explanations, they do not all have equal weight. Students should be able to use scientific criteria to find the preferred explanations.
- 12CLS4.4 Living organisms have the capacity to produce populations of infinite size, but environments and resources are finite. This fundamental tension has profound effects on the interactions between organisms. | <urn:uuid:b2ab7279-0f31-4c9d-9e99-34dfb758653c> | 3.75 | 435 | Tutorial | Science & Tech. | 20.425031 |
This tree diagram shows the relationships between several groups of organisms.
The root of the current tree connects the organisms featured in this tree to their containing group and the rest of the Tree of Life. The basal branching point in the tree represents the ancestor of the other groups in the tree. This ancestor diversified over time into several descendent subgroups, which are represented as internal nodes and terminal taxa to the right.
You can click on the root to travel down the Tree of Life all the way to the root of all Life, and you can click on the names of descendent subgroups to travel up the Tree of Life all the way to individual species.
Rhyacophilidae is a relatively large family, originally established by Stephens (1836). At one time the family included also Glossosomatidae and Hydrobiosidae and other taxa, but its definition has progressively become more restricted. Evolutionary relationships of the family were discussed by Ross (1956) and the family was the subject of a large revision by Schmid (1970). The family is predominantly north temperate and is found in North America, Europe, and Asia, but also extends into India and the tropical areas of southeastern Asia. Currently most of the diversity is included in a single genus, Rhyacophila Pictet, the largest genus in Trichoptera, with over 700 species and additional ones regularly being described. In addition to the landmark works of Ross and Schmid on Rhyacophila, Prather and Morse (2001) studied the phylogeny of the R. invaria group from eastern North America and Mey (1999) investigated the biogeography of Southeast Asian members of the genus. Other genera include Himalopsyche Banks (ca. 50 species, predominantly in the eastern Palaearctic and Oriental regions, but with 1 species from western North America), Philocrena Lepneva (1 species from Georgia, western Palaearctic), and Fansipangana Mey (a single species recently described from Vietnam).
The family is 1 of 2 (the other being Hydrobiosidae) that includes species that are free-living and predaceous as larvae, constructing a domed pupal chamber of rocks at maturity. As the etymology of the family name indicates, the larvae frequent cool, fast flowing rivers and streams. Larvae in the genus Himalopsyche, and some in the genus Rhyacophila, possess abdominal and thoracic gills, quite different from those in Integripalpia or Hydropsychidae. (From Holzenthal et al. 2007a)
Ivanov (2002, see also Ivanov & Sukatcheva 2002) proposed that Rhyacophilidae and Hydrobiosidae are sister taxa allied to the Annulipalpia. Combined molecular and morphological data place Rhyacophilidae in a clade that includes the other "spicipalpian" families (Hydrobiosidae, Glossosomatidae, and Hydroptilidae) and Integripalpia (Kjer et al., 2001; 2002; Holzenthal et al., 2007b), although its position within this clade is unstable.
Holzenthal R.W., Blahnik, R.J., Prather, A.L., and Kjer K.M. 2007a. Order Trichoptera Kirby 1813 (Insecta), Caddisflies. In: Zhang, Z.-Q., and Shear, W.A. (Eds). 2007 Linneaus Tercentenary: Progress in Invertebrate Taxonomy. Zootaxa. 58 pp. 1668:639-698
Holzenthal R.W., Blahnik, R.J., Kjer K.M and Prather, A.L. 2007b. An update on the phylogeny of Caddisflies (Trichoptera). Proceedings of the XIIth International Symposium on Trichoptera. Bueno-Soria, R. Barba-Alvearz and B. Armitage (Eds). pp. 143-153. The Caddis Press.
Ivanov, V.D. & Melnitsky, S.I. (2002) Structure of pheromone glands in Trichoptera. Nova Supplementa Entomologica (Proceedings of the 10th International Symposium on Trichoptera), 15, 17–28.
Ivanov, V.D. & Sukatcheva, I.D. (2002) Order Trichoptera Kirby, 1813. The caddisflies (=Phryganeida Latreille, 1810). In: Rasnitsyn, A.P. & Quicke, D.L.J. (Eds.) History of Insects. Kluwer Academic Publishers, Dordrecht, The Netherlands, pp. 199–222.
Kjer, K.M., Blahnik, R.J. & Holzenthal, R.W. (2001) Phylogeny of Trichoptera (caddisflies): characterization of signal and noise within multiple datasets. Systematic Biology, 50, 781–816.
Kjer, K.M., Blahnik, R.J. & Holzenthal, R.W. (2002) Phylogeny of caddisflies (Insecta, Trichoptera). Zoologica Scripta, 31, 83–91.
Mey, W. (1999) Origin and formation of the distributional patterns of Rhyacophila species in the islands of South-East Asia. Senckenbergiana Biologica, 78, 193–203.
Prather, A.L. & Morse, J.C. (2001) Eastern Nearctic Rhyacophila species, with a revision of the Rhyacophila invaria group (Trichoptera: Rhyacophilidae). Transactions of the American Entomological Society, 127, 85–166.
Ross, H.H. (1956) Evolution and Classification of the Mountain Caddisflies. University of Illinois Press, Urbana, 213 pp.
Schmid, F. (1970) Le genre Rhyacophila et la famille des Rhyacophilidae (Trichoptera). Memoires de la Société Entomologique du Canada, 66, 1–230.
Stephens, J.F. (1836) Illustrations of British Entomology; or a Synopsis of Indigenous Insects: Containing their Generic and Specific Distinctions; with an Account of their Metamorphoses, Times of Appearance, Localities, Food, and Economy, as far as Practicable. Mandibulata. Vol. VI. [Trichoptera, pages 146–208]. Baldwin and Cradock, London, 240 pp.
Rutgers University, New Brunswick, New Jersey, USA
Correspondence regarding this page should be directed to Karl Kjer at
Page copyright © 2010 Karl Kjer
Page: Tree of Life Rhyacophilidae. Authored by Karl Kjer. The TEXT of this page is licensed under the Creative Commons Attribution-NonCommercial License - Version 3.0. Note that images and other media featured on this page are each governed by their own license, and they may or may not be available for reuse. Click on an image or a media link to access the media data window, which provides the relevant licensing information. For the general terms and conditions of ToL material reuse and redistribution, please see the Tree of Life Copyright Policies.
- First online 17 July 2010
- Content changed 20 July 2010
Citing this page:
Kjer, Karl. 2010. Rhyacophilidae. Version 20 July 2010 (under construction). http://tolweb.org/Rhyacophilidae/14586/2010.07.20 in The Tree of Life Web Project, http://tolweb.org/ | <urn:uuid:37dc1321-50fb-4362-9e19-e9a98cc1cea6> | 3.21875 | 1,678 | Knowledge Article | Science & Tech. | 50.56697 |
PARTITION TABLE — Specifies that a table is partitioned and which is the partitioning column.
PARTITION TABLE table-name ON COLUMN column-name
Partitioning a table specifies that different records are stored in different unique partitions, based on the value of the specified column. The table table-name and column column-name must be valid, declared elements in the current DDL file or VoltDB generates an error when compiling the schema.
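For instance, a hypothetical schema (table and column names are invented for the example):

```sql
CREATE TABLE votes (
    phone_number BIGINT NOT NULL,  -- partitioning column, so declared NOT NULL
    state        VARCHAR(2),
    contestant   INTEGER
);
PARTITION TABLE votes ON COLUMN phone_number;
```

Records are then stored in different partitions based on the value of phone_number.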
For a table to be partitioned, the partitioning column must be declared as NOT NULL. If you do not declare a partitioning column of a table in the DDL, the table is assumed to be a replicated table. | <urn:uuid:d7cd221e-f8e0-48b1-97c7-5ba150c40f0a> | 3.03125 | 140 | Documentation | Software Dev. | 33.727667 |
Science Fair Project Encyclopedia
Binomial name: Globicephala melas
Long-finned Pilot Whale range
A Pilot Whale is one of two species of cetacean in the genus Globicephala. The genus is part of the oceanic dolphin family (Delphinidae) although their behaviour is closer to that of the larger whales. The two species are the Long-finned Pilot Whale and the Short-finned Pilot Whale. The two are not readily distinguished at sea and are typically just known simply as Pilot Whales. They and other large members of the dolphin family are also known as blackfish.
Pilot Whales are jet black or a very dark grey colour. The dorsal fin is set forward on the back and sweeps back. The body is elongated but stocky near the tail fin.
The differences in appearance of the two species are quite subtle and where their distributions overlap it is generally not possible to tell the species apart at sea. On land specimens may be distinguished (perhaps unsurprisingly!) by the length of flipper, the number of teeth and the shape of the skull: the Short-finned has a more bulbous head particularly in older males; the Long-finned is squarer, and the forehead is more likely to overhang the mouth. G. macrorhynchus was described, from skeletal materials only, by John Edward Gray in 1846. He presumed from the skeleton that the whale had a large beak ("macrorhynchus" in Latin).
Birth weight is about 60 kg. Adult weight varies from 1,000 to 3,000 kg. They may be between four and seven metres in length. Life span is about 45 years in males and 60 years in females for both species.
Both species live in groups of about 10 to 30 in number. They are quite active and will frequently lobtail, spyhop and approach boats.
Pilot Whales feed predominantly on squid. Tuna and Pilot Whales are frequently found in the same area. This is probably because they share a common diet (squid) rather than because the Pilot Whale feeds on tuna. Pilot Whales are more susceptible than most species to beaching. It is possible that squid spawning close to shore attract Pilot Whales and cause them to beach.
Population and distribution
The Long-finned species prefers slightly cooler waters than the Short-finned and is divided into two populations. The larger group is found in a circumpolar band in the Southern Ocean running from approximately 20° S to 65° S. It may be sighted off the coasts of Chile, Argentina, South Africa, Australia and New Zealand. There are estimated to be in excess of 200,000 individuals in this group. The second population is much smaller and inhabits the North Atlantic Ocean, in a band that runs from South Carolina in the United States across to the Azores and Morocco at its southern edge, and from Newfoundland to Greenland, Iceland and northern Norway at its northern. It is also present in the western half of the Mediterranean Sea.
The Short-finned species is more populous. It is found in temperate and tropical waters of the Indian, Atlantic and Pacific Oceans. Its population overlaps slightly with the Long-finned Species in the western Atlantic. There are 150,000 individuals in the eastern tropical Pacific Ocean. There are estimated to be more than 30,000 animals in the western Pacific, off the coast of Japan.
Both species prefer deep water.
See also: Whaling in the Faroe Islands
The Long-finned Pilot Whale has traditionally been killed by whalers by the process of "driving" - where many fishermen and boats surround a school of whales and slowly force them to shore, killing them. This practice was common in both the nineteenth and twentieth centuries, declining only in the 1990s. In the 1980s around 2,500 individuals were killed each year in this manner. Currently only the Faroe Islands operates such a cull - killing around 1,000 animals each year. In the Southern Hemisphere there has been much less human interference than in the north - there are some reports of a whaling drive off the Falkland Islands but details are sketchy. It is unlikely to affect the stability of the southern population, which seems to be secure.
The Short-finned Pilot Whale has also been hunted for many centuries, particularly by Japanese whalers. In the mid-1980s the annual Japanese kill was about 2,300 animals. This had decreased to about 400 per year by the 1990s. Killing by harpoon is still relatively common in the Lesser Antilles, Indonesia and Sri Lanka. Due to poor record-keeping it is not known how many kills are made each year, and what effect this has on the local population, although the global effect is probably absorbable.
Both species are killed in their hundreds or perhaps thousands in longline and gillnet fisheries each year.
- National Audubon Society Guide to Marine Mammals of the World ISBN 0375411410
- Encyclopedia of Marine Mammals ISBN 0125513402
- Whales, Dolphins and Porpoises, Mark Carwardine, ISBN 0751327816
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | <urn:uuid:c47a6e67-1e41-46dd-9249-4fbb5c2b89a3> | 3.515625 | 1,134 | Knowledge Article | Science & Tech. | 52.535293 |
Definition: The class Compare_to_less<F> is used to convert a functor which returns a Comparison_result to a predicate (returning bool): it will return true iff the return value of F is SMALLER.
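A minimal usage sketch; the comparator, the use of int, and the header choice are assumptions made for illustration, so check the CGAL manual for the exact include:

```cpp
#include <CGAL/enum.h>              // CGAL::Comparison_result, SMALLER, ...
#include <CGAL/function_objects.h>  // assumed home of Compare_to_less

// A three-valued comparator of the kind the adaptor expects.
struct Cmp {
    CGAL::Comparison_result operator()(int a, int b) const {
        if (a < b) return CGAL::SMALLER;
        if (a > b) return CGAL::LARGER;
        return CGAL::EQUAL;
    }
};

int main() {
    // compare_to_less generates the Compare_to_less<Cmp> predicate.
    CGAL::Compare_to_less<Cmp> less = CGAL::compare_to_less(Cmp());
    return less(1, 2) ? 0 : 1;  // true, since Cmp(1, 2) == CGAL::SMALLER
}
```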
The class is used in conjunction with the compare_to_less function; see there for an explanation of how exactly the functors are composed and for the type of the composed functor. | <urn:uuid:bf3f4c4f-cf57-4448-952e-2d8f9bb0b957> | 3.375 | 88 | Documentation | Software Dev. | 28.854286 |
Algae grows on a particular rock at the bed of a lake. The volume V of the algae on this rock is the square root of the surface area of the rock which is covered by water. The surface area S covered by water, in turn, can be expressed through the function

S(R) = 50R / (R + 1)

where R is the amount of rain (in mm) which falls throughout the year.
(a) Express the volume V of the algae as a function of the amount of rainfall R.
(b) Determine the volume V of the algae if the rainfall for the year is 20 mm. | <urn:uuid:329b47ae-e0aa-4b74-9d13-528cbbec21fc> | 3.671875 | 139 | Q&A Forum | Science & Tech. | 80.76417 |
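Under the stated reading that V is the square root of S, a worked solution to the problem above:

```latex
\text{(a)}\quad V(R) = \sqrt{S(R)} = \sqrt{\frac{50R}{R+1}}
\qquad
\text{(b)}\quad V(20) = \sqrt{\frac{50 \cdot 20}{20+1}} = \sqrt{\frac{1000}{21}} \approx 6.9
```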
A lightyear is the distance that light travels in one year. Since there are various definitions for the length of a year, there are correspondingly slightly different values for a lightyear. One lightyear corresponds to about 9.461e15 m, 5.879e12 mi, or 63239.7 AU, or 0.3066 pc.
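A quick sanity check of those figures (a sketch assuming the Julian year of 365.25 days; the small gap from the quoted 63239.7 AU comes from the choice of year definition):

```python
c = 299_792_458               # speed of light in m/s (exact)
year_s = 365.25 * 86_400      # Julian year in seconds
ly_m = c * year_s             # one lightyear in metres

print(f"{ly_m:.4e} m")                     # ~9.4607e+15
print(f"{ly_m / 1609.344:.3e} mi")         # ~5.879e+12
print(f"{ly_m / 1.495978707e11:.1f} AU")   # ~63241.1
print(f"{ly_m / 3.0857e16:.4f} pc")        # ~0.3066
```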
The metre is a unit of length in the metric system, and is the base unit of length in the International System of Units (SI).
As the base unit of length in the SI and other m.k.s. systems (based around metres, kilograms and seconds) the metre is used to help derive other units of measurement such as the newton, for force. | <urn:uuid:7e193096-18c0-4bc7-bda5-170d9d7a9d13> | 3.484375 | 149 | Knowledge Article | Science & Tech. | 75.243455 |
Unknown triangles 2: cosine rule
Added by Kadie Armstrong on Feb 6, 2008
A graphic showing three different triangles, two with one unidentified side length and one with an unidentified angle. Ask pupils to use the cosine rule to obtain the unknown values using the information given.
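For reference, the relationship the pupils would apply, in its side-finding and angle-finding forms (a standard statement, not taken from the resource itself):

```latex
c^{2} = a^{2} + b^{2} - 2ab\cos C
\qquad\text{and}\qquad
\cos C = \frac{a^{2} + b^{2} - c^{2}}{2ab}
```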
This item is part of Absorb Mathematics.
- Absorb Mathematics
- 1,515 times
- File Size:
- 1.94 KB
- Related Items
- This Author
Unknown triangles 1:sine rule
A graphic showing three different triangles, each with one u...
Unknown triangles 4: cosine rule
A graphic showing four different triangles with various unla...
Unknown triangles 3: sine rule
A graphic showing three different triangles with various unl...
Proving the cosine rule
An interactive animation illustrating the proof of the cosin...
The Sierpinski triangle
An interactive animation showing the fractal nature of the S...
Circles produced by rescaling
A graphic showing circles with a variety of different radii ...
Lines in the x-y plane 3
An interactive animation showing two lines in the x-y plane....
An interactive animation showing three shapes rotating about...
A graphic showing three examples of isosceles triangles. Lin...
Varying the gradient
An interactive animation showing a straight line passing thr... | <urn:uuid:9fa6f028-4930-4122-b16b-ad832debaae1> | 3.859375 | 283 | Content Listing | Science & Tech. | 49.997955 |
As multiple researchers continue their efforts to make micro-robotic flying insects, Harvard's Robert Wood has made strides in self-assembling systems with the robobee above. Inspired by his child's pop-up books, Wood's device starts flat on a scaffold. More than 100 hinges enable the 3D structure to "pop up" into the robot seen here. This is only one of the Origami-like approaches that researchers at Harvard, MIT, the University of Illinois at Urbana-Champaign, and elsewhere are using to create small, complex objects at scale, from drug delivery systems to solar cells. Science News surveys the field. "Into the Fold"
"Stress Relief: Improving Structural Strength of 3-D Printable Objects," a paper presented at SIGGRAPH 2012 from Purdue University's Bedrich Benes demonstrated an automated system for predicting when 3D models would produce structural weaknesses if they were fed to 3D printers, and to automatically modify the models to make them more hardy.
Findings were detailed in a paper presented during the SIGGRAPH 2012 conference in August. Former Purdue doctoral student Ondrej Stava created the software application, which automatically strengthens objects either by increasing the thickness of key structural elements or by adding struts. The tool also uses a third option, reducing the stress on structural elements by hollowing out overweight elements.
"We not only make the objects structurally better, but we also make them much more inexpensive," Mech said. "We have demonstrated a weight and cost savings of 80 percent."
The new tool automatically identifies "grip positions" where a person is likely to grasp the object. A "lightweight structural analysis solver" analyzes the object using a mesh-based simulation. It requires less computing power than traditional finite-element modeling tools, which are used in high-precision work such as designing jet engine turbine blades.
New Tool Gives Structural Strength to 3-D Printed Works
This 2010 video demonstrates the wonderful and intriguing behavior exhibited by water when it is dripped on paper that is coated with "superhydrophobic" aerogel powder. The water forms tiny marbles and races around like it's on a griddle. This looks like it would be a lot of fun to try in person, possibly with some small people in attendance.
Water droplets on a superhydrophobic surface
A joint Disney Research and CMU team have produced a demo showing gesture controls on a variety of everyday, non-computer objects. The system, called Touché, uses capacitive coupling to infer things about what your hands are doing. It can determine which utensil you're eating your food with, or how you're grasping a doorknob, or even whether you're touching one finger to another or clasping your hands together. It's a pretty exciting demo, and the user interface possibilities are certainly provocative. Here's some commentary from Wired UK's Mark Brown:
Some of the proof-of-concept applications in the lab include a smart doorknob that knows whether it has been grasped, touched, or pinched; a chair that dims the lights when you recline into it; a table that knows if you're resting one hand, two hands, or your elbows on it; and a tablet that can be pinched from back to front to open an on-screen menu.
The technology can also be shoved in wristbands, so you can make sign-language-style gestures to control the phone in your pocket—two fingers on your palm to change a song, say, or a clap to stop the music. It can also go in liquids, to detect when fingers and hands are submerged in water.
"In our laboratory experiments, Touché demonstrated recognition rates approaching 100 percent," claims Ivan Poupyrev, senior research scientist at Disney Research in Pittsburgh. "That suggests it could immediately be used to create new and exciting ways for people to interact with objects and the world at large."
Disney researchers put gesture recognition in door knobs, chairs, fish tanks
In this short video, Richard from ABEbooks describes the distinctive smell of old books ("a combination of grassy notes with a tang of acids and a hint of vanilla, with an underlying mustiness") caused by hundreds of volatile compounds released during the slow oxidization of the paper, glues and inks.
Why Do Old Books Smell?
Aerogel.org is devoted to making open versions of aerogel, the super-strong, super-light new material. They provide recipes for several sorts of aerogel, testing protocols, and projects you can undertake with your homebrew miracle substances.
Propylene oxide is a known carcinogen (exposure can cause cancer), and epichlorohydrin probably is too. If you plan on doing this procedure, take the proper precautions to prevent your exposure to the vapors of these substances by using a fume hood in lab, if possible, or at the very least a fitted respirator (gas mask) with the right organics cartridges and a well-ventilated space, on top of the usual splash goggles, gloves, long pants, and closed-toe shoes.
Look under Explore > Information About Chemicals to see where you can find health and safety information about these and other chemicals.
If you can’t use these substances safely, don’t use them until you can!
Aerogel.org » Make
(Image: A silica aerogel puck Rayleigh scatters light from a laser pointer like smoke.)
A new material developed by scientists at UC Irvine is described as the "world's lightest material," so light it can perch atop a dandelion clock without disturbing the seeds. The material is documented in the November 18 issue of Science.
The new material redefines the limits of lightweight materials because of its unique “micro-lattice” cellular architecture. The researchers were able to make a material that consists of 99.99 percent air by designing the 0.01 percent solid at the nanometer, micron and millimeter scales. “The trick is to fabricate a lattice of interconnected hollow tubes with a wall thickness 1,000 times thinner than a human hair,” said lead author Dr. Tobias Schaedler of HRL.
The material’s architecture allows unprecedented mechanical behavior for a metal, including complete recovery from compression exceeding 50 percent strain and extraordinarily high energy absorption.
Multidisciplinary team of researchers develop world’s lightest material
(Thanks, Fipi Lele!)
(Image: Dan Little, HRL Laboratories LLC)
We've been in the market for a new surface for our kitchen's eating area (a wide shelf that's set into a wide space knocked through into the sitting room serviced by four tall stools) for a year now. We've looked at tiles, synthetic stone, real stone, polymers, concrete, and lots of other stuff, but we knew we'd discovered our material when we happened on the Çurface exhibition at a coffee fair in east London. Çurface is the brainchild of two British makers who've figured out how to make a durable, beautiful, malleable material out of melted plastic coffee cups and compressed coffee-grounds.
Our Çurface cost £141 including delivery and installation -- that was the minimum price for a 1m x 2m sheet (bigger than we needed it, but Adam from Çurface was happy to cut it to size and finish the edges). We've had it for two months now, and at this point, I'm prepared to pronounce it delightful. It looks great: the solid material minimizes the occasional small scratch or scuff, and it cleans very easily with normal spray-cleaners (when he installed it, Adam explained that we could treat it as a polymer and use Turtle Wax or similar for a high gloss, or treat it as a compressed fiber and seal it with Danish Oil). The manufacturer makes lots of different shapes to order -- the demo we saw included lots of fancy curved chairs and such, all cast from a single piece. The manufacturer also advertises it as suitable for flooring, though I think it might be a little slippery.
It smelled great when we installed it, a faint, earthy coffee smell that faded over the course of a week or so. Now it's just the kitchen table, and we love it. It was half the price of the synthetic rock we'd looked at, it's made of recycled coffee waste, and it looks great. What more could we ask for (apart from a less orthographically unwieldy name)? | <urn:uuid:a43fc153-9871-484e-8bba-2976d5dbc24b> | 3.703125 | 1,787 | Content Listing | Science & Tech. | 41.900181 |
Date: May 2004
Creator: Conder, Jason M.
Description: TNT (2,4,6-trinitrotoluene) is a persistent contaminant at many military installations and poses a threat to aquatic ecosystems. Data from environmental fate and toxicity studies with TNT revealed that sediment toxicity test procedures required modification to accurately assess sediment TNT toxicity. Key modifications included aging TNT-spiked sediments 8-14 d, basing lethal dose on measured sediment concentrations of the molar sum of TNT and its main nitroaromatic (NA) transformation products (SNA), basing sublethal dose on average sediment SNA concentrations obtained from integration of sediment SNA transformation models, avoiding overlying water exchanges, and minimizing toxicity test durations. Solid phase microextraction fibers (SPMEs) were investigated as a biomimetic chemical measure of toxicity and bioavailability. Both organism and SPME concentrations provided measures of lethal dose independent of exposure scenario (TNT-spiked sediment or TNT-spiked water) for Tubifex tubifex. Among all benthic organisms tested (Chironomus tentans, Ceriodaphnia dubia, T. tubifex) and matrixes, median lethal dose (LC50) estimates based on SPME and organism concentrations ranged from 12.6 to 55.3 mmol SNA/ml polyacrylate and 83.4 to 172.3 nmol SNA/g tissue, ww, respectively. For Tubifex, LC50s (95% CI) based on SNA concentrations in sediment and SPMEs were 223 (209-238) nmol SNA/g, dw and 27.8 (26.0-29.8) mmol SNA/ml, respectively. Reproductive effects ...
Contributing Partner: UNT Libraries | <urn:uuid:16b91c42-97e4-47ae-8c3a-c7c53eb7842e> | 2.75 | 376 | Academic Writing | Science & Tech. | 34.357695 |
Research projects devour millions or billions of euros, so is it conceivable and possible for amateurs to conduct serious scientific research in their garage or at their home desk?
Can you imagine individual amateur scientists, or small groups of people brought together by the fun of science, creating new findings in scientific disciplines where professional research teams already have hundreds of members?
And finally, how could such leisure scientists accurately follow the state of knowledge of a discipline and produce interesting new findings based on its latest results?
A few weeks ago Nature wrote about the potential of garage biologists, and in a specially written editorial the editors emphasized the importance of amateur researchers for professional scientists. Even in the field of modern biology, serious new research is achievable with straightforward, affordable private lab equipment. There are similar possibilities in astronomy and the earth sciences.
Maybe soon the vast amounts of data the LHC is producing will be available on the Internet. Then any interested person can plunge into the sea of data and search for the Higgs particle – or whatever he or she hopes to find in the collision data of millions of elementary events.
The work of these people, dedicated to and fascinated by science, could be of great benefit to professional scientists – and one cannot start early enough in building an effective network between professional science and freelance research.
In addition, with cloud technologies, in the very near future everyone will have access not only to unlimited storage capacity and information (data, literature, the latest research results), but also to the computing power of mainframes. The time when each of us may start his or her own climate simulation is not far off.
Will we then finally have a real democratization of the knowledge society? Do these possibilities bring domination-free discourse – or a new Babel, where everyone, armed not only with Wikipedia knowledge but also with their own faulty simulations and the large-scale experiments of friends, becomes a bogus expert?
It all depends on how the conversation between science and society develops, and whether suspicion or trust is cultivated on both sides. If each side accepts and considers the other's expertise, visions, goals, and specific options, then nothing more stands in the way of a knowledge society of responsible citizens.
Ledford H (2010). Garage biotech: Life hackers. Nature, 467 (7316), 650-2 PMID: 20930820
Nature Editorial (2010). Garage biology. Nature, 467 (7316) PMID: 20930797 | <urn:uuid:843190f4-4bd0-4f79-8a82-0302d5bcf05a> | 2.765625 | 512 | Personal Blog | Science & Tech. | 28.895301 |
Typical Math Courses of Study
See Below For Grade by Grade Goals
Although mathematics curricula will vary from state to state and country to country, you'll find that this list provides the basic concepts that are addressed and required for each grade. The concepts have been divided by topic and grade for easy navigation. Mastery of the concepts at the previous grade is assumed. Students preparing for each grade will find the listings extremely helpful. When you understand the topics and concepts that are required, you will find tutorials to help you prepare under the respective subjects on the home page. Calculators and computer applications are also required as early as kindergarten. Most curriculum documents also expect you to be able to use the corresponding technologies, such as software applications, regular calculators, and graphing calculators.
For more specific details regarding the math requirements for each grade, you may want to do a search for the curriculum in your state, province or country. Most boards of education will provide you with the details to access the documents.
|Pre-K||Kdg.||Gr. 1||Gr. 2||Gr. 3||Gr. 4||Gr. 5|
|Gr. 6||Gr. 7||Gr. 8||Gr. 9||Gr. 10||Gr.11||Gr. 12| | <urn:uuid:b84d1a64-46bc-48ee-a08e-430bfecd77f5> | 3.515625 | 267 | Content Listing | Science & Tech. | 64.681395 |
Chippendales overdose
The Lion has always been fascinated by the Chippendales and their way of living.
The Chippendales' natural habitat is the Norwegian Fiords; there they live in herds of approximately 15 individuals. As it is almost impossible to survive in the harsh conditions of the Fiords, the Lion decided to bring home a herd of Chippendales, to be able to study them more up close.
The Chippendales had no problems with their new habitat; they seemed to cope excellently with both the travel and the Sigtuna Steppe.
The Lion soon made gigantic leaps forward in our knowledge about Chippendales. One thing she was surprised by was the total lack of aggressive behavior.
The Lion found that the Chippendale DNA differs from humans': the helix has a "bow-tie" look. The Lion found that the herd was growing. How could that be? Until this moment she had found only male specimens. Could some of them be female?
Soon it dawned on the Lion. Chippendales have a special liking for holes in the ground; when they find one they get excited and soon....
they jump in; they can spend up to 20 hours in them.
To the Lion's (and the scientific community's) utmost surprise, the Chippendales propagate through asexual cloning. The copy is not a perfect clone; the strange DNA might make the Chippendale prone to mutations.
The utter lack of natural enemies (i.e. the Gigantic Norwegian Squirrel and the Fiord Amazon) and the abundance of holes in the Sigtuna Steppe have resulted in an explosion in the Chippendale population. TO BE CONTINUED | <urn:uuid:e7c12739-1ad5-494e-8e40-b2775545f686> | 2.84375 | 355 | Personal Blog | Science & Tech. | 54.152879 |
Conserving an Ozark Cave
The Tumbling Creek Cavesnail Working Group brought together landowners and scientists to determine what had happened. We concluded that sediment from surface erosion was the most likely factor affecting the cavesnail population.
Twenty to 30 years ago, many forested areas in Missouri were cleared to create permanent pasture. This increased soil erosion, especially on steeper slopes in the first year after clearing or following droughts. Although the cave has no upstream entrance, the sediments worked down through sinkholes and losing streams into the cave.
A Working Group
Our group has worked on many fronts to restore or protect the cave’s unique habitat and inhabitants. In 2005, scientists placed terra cotta tiles in a cavesnail refuge area. Cavesnails were recently found on those tiles, creating hope that they may use them for feeding on microbes and laying eggs. Tumbling Creek cavesnails may rescue themselves this way.
In 2006, we built a small cavesnail laboratory in the cave, where we have done preliminary tests. If necessary, cavesnails might be propagated in the lab and then stocked in Tumbling Creek.
We sampled the water with highly sensitive equipment that detects parts per quadrillion, but found only tiny amounts of a few chemicals that were of no concern. Working with the Missouri Department of Transportation to monitor a resurfacing project on Highway 160 in the recharge area, we determined that their “chip and seal” method using an asphalt-water emulsion did not introduce any detectable petroleum products into the road ditches or the groundwater.
We also got help from the Conservation Department, which worked with the Ozark Underground Laboratory and the local community to help a school replace a sewage lagoon that was leaking most of its contents into the groundwater system feeding Tumbling Creek Cave (See The School and the Cavesnail; September 2006). A modern peat-filtration system was installed with the help of grants and substantial local contributions.
Because surface and subsurface are connected, caves cannot be protected without protecting the land that contributes water to them.
The Aleys bought nearby properties to help protect the cave and its critters. They used cost-share funds from the Conservation Department and the U.S. Fish & Wildlife Service to plant 70,000 trees to help restore the land.
Although some Ozark Underground Laboratory lands are used for raising cattle or growing hay, the overall goal is to create a landscape dominated by native species, including black oak, northern red oak, white oak, black gum, black walnut, green ash, | <urn:uuid:fdab7e2b-cfb2-49fa-8fd6-d3e04a8d1285> | 4 | 530 | Knowledge Article | Science & Tech. | 37.672696 |
The laser at Magurele – a project funded with a record 293 million euros – will be 1,000 times more powerful than the most powerful laser that now exists. If the project succeeds, within decades a laser could do the job of a giant particle accelerator. The accelerator at Geneva has a circumference of 27 kilometers; the laser will fit in a building a few hundred feet square.
What could the laser do? "An interesting application would be to separate useful uranium from useless uranium," says Nicolae Zamfir, director of the "Horia Hulubei" Institute of Physics and Nuclear Engineering. Such a result would have immediate economic implications. While uranium-238 (an isotope of uranium) is completely useless, uranium-235 (another isotope) is used in reactors to produce electricity, for example.
Separating useless uranium from useful uranium
The separation could be done by laser. Uranium-238 is heavier, because it has three extra neutrons. Thus, one could take a sample from a uranium deposit and, with the laser, say exactly whether it is worth exploiting, depending on the abundance of the 235 isotope.
Moreover, the laser could offer a way to measure the amount of radioactive material left in fuel rods. Today these rods are changed periodically; at the time of their removal from production, some rods have long since used up their fuel, while others could still be used for a while.
Processing radioactive waste – a shorter neutralization time
The laser, however, would have an immediate application in the field of nuclear waste. Such waste takes a very long time to neutralize, even thousands of years, and is therefore kept in huge, carefully isolated depositories. With the laser, researchers could transmute radioactive nuclei into nuclei with a shorter half-life, meaning a faster decay.
In Romania, radioactive waste from the reactor at Magurele is stored at Baita, in Bihor county. Dangerous fuel has also been transported to Russia. Countries like Germany, however, which decided largely to abandon nuclear energy, have thousands of containers of nuclear waste to store.
A revolution in price – laser vs. accelerator
Another immediate application would be in medicine. Currently, hospitals with money buy proton-therapy accelerators. In short, these machines send a beam of accelerated protons into tissue in order to destroy diseased cells precisely, without affecting the surrounding healthy ones. Such accelerators have migrated from paper into practice, and companies such as Siemens build them. The University Hospital of Heidelberg, for example, has such a device, simply because that German medical facility can afford it. The unit cost is 100 million dollars.
"It is a price that hospitals in many European countries cannot afford, and it will not decrease significantly over time. Think about it: 100 million dollars for a single hospital. What can we say of Romania? With lasers, by contrast, equipment prices fall rapidly. A trivial laser pointer cost tens of thousands of dollars in the 1960s; now it costs two lei," said Zamfir. Also in medicine, lasers could facilitate the production of radioisotopes of great medical importance. Radioisotopes (radiopharmaceuticals) are used for the non-invasive diagnosis and treatment of serious and common diseases like cancer and cardiovascular disease. Biological molecules labeled with medical radioisotopes are known as "markers" because, even in very small quantities, they allow us to track certain biological processes.
Radioisotopes for medical use
The most common medical radioisotopes are produced in nuclear reactors. In 2008, the unexpected shutdown of three European radioisotope-producing reactors led to an EU-wide shortage of radioisotopes for medical use (molybdenum-99 / technetium-99m). The problem is global as well: the largest producer of molybdenum-99, Canada's National Research Universal (NRU) reactor, was closed for repairs in May 2009. Demand could therefore be extremely high, and the Magurele laser could become a provider of such radioisotopes.
Laser-driven particle acceleration, mentioned at the outset, is a long-term goal. Also in the long term, the ELI project offers hope of solving one of the 11 great unsolved mysteries of modern physics: how elements heavier than iron are produced in the universe.
What is ELI-NP
Extreme Light Infrastructure (ELI) is a project developed in three EU countries: Czech Republic, Hungary and Romania.
In the Czech Republic, the pillar called Beamlines will be implemented. "We are talking about applications of secondary beams," says Nicolae Zamfir – when the laser interacts with matter, it produces X-rays, for example. The Czech project will have applications in materials research and the life sciences.
In Hungary, the pillar is called the "Attosecond Facility". Hungarian researchers will develop lasers specifically to produce and study very short pulses – pulses whose duration is measured in attoseconds. An attosecond is a unit of time a billion billion times smaller than a second.
"Dynamics can be observed at the cellular level, with the chance to take successive pictures during a process," says Professor Zamfir. The immediate applications of the Hungarian project will target the study of materials and medicine.
The project in Romania is called ELI-NP, where NP stands for Nuclear Physics. The powerful laser will act on the electrons and ions of a material, and these particles will be accelerated to nearly the speed of light.
Unlike the Czech Republic and Hungary, where studies will be made solely with visible light, in Romania there will be both visible and invisible light. More specifically, gamma rays: high-frequency electromagnetic waves produced by interactions between subatomic particles, for example in radioactive decay or in the collision and annihilation of an electron–positron pair. Diagnostic methods based on these rays have been developed for various conditions, one of the most interesting helping to locate cancer in the body. The rays are so powerful, however, that with long exposure they can split – "break" – the DNA molecule.
How and why Romania was chosen
First, it should be noted that ELI is a project that provides a great opportunity for researchers from the EU Member States of Central and Eastern Europe. Beyond that, the institute at Magurele presented many advantages in the competition for funding and implementation.
The first laser operated at the IFA (Institute of Physics, the institute's old name) dates from 1962. Five years before that, Romania's first research reactor and first cyclotron were commissioned at Magurele. The production of radioisotopes, one of the goals of ELI-NP, is an activity the institute has carried out since 1974. In the same year, the institute was equipped with an accelerator and a radioactive waste processing center.
In 2000, together with two other European countries, a multipurpose irradiation center called IRASM was opened at Magurele.
How powerful is the laser
Seven or eight years ago, the European Union began discussions about developing a laser 1,000 times more powerful than anything that existed at the time. Today the world laser landscape has changed a great deal. Elsewhere, work is still at the terawatt scale (a terawatt is one billion kilowatts), while the laser to be built at Magurele is intended, in its final version, to reach a few hundred petawatts. A petawatt equals 1,000 terawatts.
"It was found that going straight to several hundred petawatts is too ambitious, so a two-step development was decided," said Zamfir. First, two lasers of tens of petawatts each will be built at Magurele, and the best way to synchronize them will be studied. Then many lasers will be synchronized, and the power will reach hundreds of petawatts – the ELI-NP target.
- The project White Paper was submitted in 2010
- The project was evaluated by Jaspers, an institution that supports applicants in obtaining EU funding
- Jaspers approved the project, qualifying it for funding by the EBRD and the EC, in the summer of 2011
- Technical project, completed in December 2011
- Start of construction, planned for autumn 2012
- Completion, planned for autumn 2014
- First stage of the research infrastructure in 2015
- Operational in December 2016
The project's financing is 293 million euros; 17% of this amount comes from the state budget, and the rest is European money.
| <urn:uuid:ae8fb143-3504-4666-af7a-49869f02232e> | 3.015625 | 1,782 | Knowledge Article | Science & Tech. | 31.509134 |
Questions on Anti-matter
I am very interested in anti-matter. How is it stored? What are its
properties, charges, etc.? Are there actual elements made completely of
anti-matter, or is it just another state of matter?
Anti-matter is just like ordinary matter (same masses of all the
particles, same kinds of nuclei and elements) but with all the
charges reversed, so the anti-electron (the positron) has a positive
charge and the anti-proton has a negative charge (the anti-neutron
is neutral like the ordinary neutron, but is itself composed of
quarks of the opposite kind to those in the regular neutron).
I think the only place it is made in quantity is at big particle
accelerators like Fermilab. They store it in big "storage rings"
that use magnets to keep the anti-protons or positrons in little
bunches inside of a big tube (which is continually pumped to
provide a vacuum so that ordinary matter does not get in and annihilate
the anti-matter). It is also possible to store anti-matter in other
kinds of electrical or magnetic "traps". I am not sure anybody's
actually made much of any kind of anti-matter other than
anti-hydrogen (to make heavier elements requires fusion, which
is hard enough to do even for regular matter).
Update: June 2012 | <urn:uuid:bea7c07c-810e-4dcb-b1ae-2edf05efa6ff> | 2.875 | 315 | Q&A Forum | Science & Tech. | 32.037073 |
Mentor: Dr. Neal Williams
In winter of 2000-2001, the College built a water catchment pond behind Rhoads Hall to retain runoff water from local-area drainage systems as well as from the College grounds. This pond has three inlets: one from the Shipley School and nearby area, one from a storm drain on Wyndon Avenue, and one from a convergence of pipes beneath the College grounds that enters below the pond's surface. Water exits the pond through a single pipe on the end of the pond opposite the inflow. In addition to providing a water control mechanism, the pond was intended to provide a habitat for native plants and animals. Native plants were introduced as part of the construction plan, but no proposal was made for restoration of other native species. Such a “field of dreams” strategy of restoring structural vegetation with the tacit assumption that other species will arrive on their own is a common approach in restoration ecology. Although other researchers at the College have made regular studies of chemistry and water flow, no surveys of biodiversity have been performed since initial construction. I will survey the plants, algae, invertebrates, mammals, herpetofauna and birds that inhabit the pond and the surrounding area and create a database that can be used by future researchers. Particular areas of focus will be the main inlet pipe, a wash coming from the city no. 2 inlet storm drain on Wyndon Avenue, and two paths through the waterline vegetation created and heavily traveled by resident Canada geese. Since these are the most obvious sources of incoming water and runoff containing substances like vegetation and earth, they have a great deal of influence over the pond content.
Various sampling methods will be used for each taxonomic group. Aquatic organisms will be collected in traps or using dipnets; birds will be counted once a week in three sampling periods per day (early morning, midday, and dusk); and insects will be captured through net sweeps or traps. Invertebrates, plants and algae will be preserved and identified under microscopes if necessary, while vertebrates will be identified visually (or audibly in the case of birds) in the field. The information will be listed in a database containing the species name, distribution, and possibly other information that has not yet been encountered. This database may assist other researchers in linking further studies of chemical makeup, water flow, or other projects with the presence or absence of different types of wildlife in their study area. | <urn:uuid:1d721f1d-3b15-4f41-b278-f732efb4df73> | 2.9375 | 506 | Academic Writing | Science & Tech. | 34.892198 |
Stars live for a very long time compared to human lifetimes. Your great-great-grandparents saw the same stars as you will see tonight (if it's clear). Our lifetimes are measured in years; star lifetimes are measured in millions, even billions, of years. Even though star timescales are enormous, it is possible to know how stars are born, live, and die. This chapter covers the stages a star will go through in its life and how they were figured out. The last part of the chapter will cover the remains of stars: white dwarfs, neutron stars, and the Hollywood favorite, black holes. The vocabulary terms are in boldface.
Go to next section
last updated: 24 May 2001 | <urn:uuid:db3c9613-e223-431f-98e9-93064c428a75> | 3.8125 | 143 | Truncated | Science & Tech. | 63.029569 |
6.2 HTML and coding
6.2a HTML and XHTML
The HyperText Markup Language (HTML) is designed for the presentation of information on the World Wide Web, using a Web browser. More...
6.2b Browser compatibility
The Web page must be compatible with the most recent and penultimate versions of common browsers. More...
6.2c New windows and pop-ups
New windows: The term "new window" refers to the action of clicking on a link within a Web page and a new browser window opening. More...
6.2e HTML metadata tag
HTML includes a meta element that goes inside the "head" tag at the top of Web pages. More... | <urn:uuid:41eac6a0-5278-4e02-91fb-670832f3d928> | 3.171875 | 146 | Content Listing | Software Dev. | 79.711621 |
Air Mass
A large body of air having similar horizontal temperature and moisture characteristics.
Alberta Clipper
A small, fast-moving low-pressure system that forms in western Canada and travels southeastward into the United States. These storms generally bring little precipitation and usually precede an Arctic air mass.
Anticyclonic
Describes the movement of air around a high pressure, with rotation about the local vertical opposite the earth's rotation. This is clockwise in the Northern Hemisphere.
Anvil
A flat, elongated cloud formation at the top of a thunderstorm.
Back Door Cold Front
A front that moves east to west in direction rather than the normal west to east movement.
Back Building Thunderstorm
A thunderstorm in which new development takes place on the upwind side (usually the west or southwest side), such that the storm seems to remain stationary or propagate in a backward direction.
Bow Echo
An accelerated portion of a squall line of thunderstorms, taking on a bow configuration, created by strong downburst winds.
Cap
A layer of relatively warm air aloft (usually several thousand feet above the ground) which suppresses or delays the development of thunderstorms. Air parcels rising into this layer become cooler than the surrounding air, which inhibits their ability to rise further. As such, the cap often prevents or delays thunderstorm development even in the presence of extreme instability.
Ceiling
The height of the lowest layer of clouds, when the sky is broken or overcast.
Cell
Convection in the form of a single updraft, downdraft, or updraft/downdraft couplet, typically seen as a vertical dome or tower as in a cumulus or towering cumulus cloud. A typical thunderstorm consists of several cells.
Cold Air Funnel
A funnel cloud or (rarely) a small, relatively weak tornado that can develop from a small shower or thunderstorm when the air aloft is unusually cold (hence the name). They are much less violent than other types of tornadoes.
Convection
The transfer of heat within the air by its movement. The term is used specifically to describe vertical transport of heat and moisture, especially by updrafts and downdrafts in an unstable atmosphere.
Dew Point
The temperature to which the air must be cooled for water vapor to condense.
Downburst
A severe localized downdraft from a thunderstorm.
Downdraft
A column of generally cool air that rapidly sinks to the ground, usually accompanied by precipitation as in a shower or thunderstorm.
Dryline
A line that separates very warm, moist air to the east from hot, dry air to the west.
El Nino
A major warming of the equatorial waters in the Pacific Ocean. El Nino events usually occur every 3 to 7 years, and are characterized by shifts in "normal" weather patterns.
Flanking Line
A line of cumulus connected to and extending outward from the most active portion of a parent cumulonimbus, usually found on the southwest side of the storm. The cloud line has roughly a stair-step appearance with the taller clouds adjacent to the parent cumulonimbus. It is most frequently associated with strong or severe thunderstorms.
Front
The transition zone between two distinct air masses. The basic frontal types are cold fronts, warm fronts, occluded fronts, and stationary fronts.
Gust
A brief sudden increase in wind speed. Generally the duration is less than 20 seconds and the fluctuation greater than 10 mph.
Gust Front
The leading edge of the downdraft from a thunderstorm.
Gustnado
Gust front tornado. A small tornado, usually weak and short-lived, that occurs along the gust front of a thunderstorm. Often it is visible only as a debris cloud or dust whirl near the ground.
Hook Echo
A radar pattern sometimes observed in the southwest quadrant of a tornadic thunderstorm. Appearing like a fishhook turned in toward the east, the hook echo is precipitation aloft around the periphery of a rotating column of air 2-10 miles in diameter.
Humidity
The amount of water vapor in the atmosphere. (See relative humidity.)
Indian Summer
An unseasonably warm period near the middle of autumn, usually following a substantial period of cool weather.
Inflow Notch
A radar signature characterized by an indentation in the reflectivity pattern on the inflow side of the storm. The indentation often is V-shaped, but this term should not be confused with V-notch. Supercell thunderstorms often exhibit inflow notches, usually in the right quadrant of a classic supercell, but sometimes in the eastern part of an HP storm or in the rear part of a storm (rear inflow notch).
Instability
A state of the atmosphere in which convection takes place spontaneously, leading to cloud formation and precipitation.
Jet Stream
Strong winds concentrated within a narrow band in the atmosphere. The jet stream often "steers" surface features such as fronts and low pressure systems.
Lake Effect
The effect of a lake (usually a large one) in modifying the weather near the shore and downwind. It often refers to the enhanced rain or snow that falls downwind from the lake. This effect can also result in enhanced snowfall along the east coast of New England in winter.
La Nina
A cooling of the equatorial waters in the Pacific Ocean.
Landspout
A tornado that does not arise from organized storm-scale rotation and therefore is not associated with a wall cloud (visually) or a mesocyclone (on radar). Landspouts typically are observed beneath Cbs or towering cumulus clouds (often as no more than a dust whirl), and essentially are the land-based equivalents of waterspouts.
Low-Level Jet
A region of relatively strong winds in the lower part of the atmosphere.
Macroburst
Large thunderstorm downbursts with a 2.5 mile diameter or greater outflow of damaging winds lasting 5 to 20 minutes.
Mesonet
A regional network of observing stations (usually surface stations) designed to diagnose mesoscale weather features and their associated processes.
Mesoscale Convective Complex
A large mesoscale convective system, generally round or oval-shaped, which normally reaches peak intensity at night. The formal definition includes specific minimum criteria for size, duration, and eccentricity (i.e., "roundness"), based on the cloud shield as seen on infrared satellite photographs.
Microburst
A strong localized downdraft less than 2.5 miles in diameter from a thunderstorm. Peak gusts last from 2 to 5 minutes.
Multicell Cluster Thunderstorm
A thunderstorm consisting of two or more cells, of which most or all are often visible at a given time as distinct domes or towers in various stages of development.
Nor'easter
A low-pressure disturbance forming along the South Atlantic coast and moving northeast along the Middle Atlantic and New England coasts to the Atlantic Provinces of Canada. It usually causes strong northeast winds with rain or snow. Also called a Northeaster or Coastal Storm.
Outflow
Air that flows outward from a thunderstorm.
Pendant Echo
Radar signature generally similar to a hook echo, except that the hook shape is not as well defined.
Prevailing Westerlies
Winds in the middle latitudes (approximately 30 degrees to 60 degrees) that generally blow from west to east.
Pulse Storm
A thunderstorm within which a brief period (pulse) of strong updraft occurs, during and immediately after which the storm produces a short episode of severe weather. These storms generally are not tornado producers, but often produce large hail and/or damaging winds. See overshooting top, cyclic storm.
Rain-Free Base
A horizontal, dark cumulonimbus base that has no visible precipitation beneath it. This structure usually marks the location of the thunderstorm updraft. Tornadoes most commonly develop (1) from wall clouds that are attached to the rain-free base, or (2) from the rain-free base itself. This is particularly true when the rain-free base is observed to the south or southwest of the precipitation shaft.
Red Flag Warning
A term used by fire-weather forecasters to call attention to limited weather conditions of particular importance that may result in extreme burning conditions. It is issued when such an event is ongoing or when the fire weather forecaster has a high degree of confidence that Red Flag criteria will occur within 24 hours of issuance. Red Flag criteria occur whenever a geographical area has been in a dry spell for a week or two, or for a shorter period if before spring green-up or after fall color, the National Fire Danger Rating System (NFDRS) is high to extreme, and the following forecast weather parameters are met:
1) a sustained wind average 15 mph or greater
2) relative humidity less than or equal to 25 percent and
3) a temperature of greater than 75 degrees F.
In some states, dry lightning and unstable air are criteria. A Fire Weather Watch may be issued prior to the Red Flag Warning.
Reflectivity
Radar term referring to the ability of a radar target to return energy; used to estimate precipitation intensity and rainfall rates.
Relative Humidity
The amount of water vapor in the air, compared to the amount the air could hold if it was totally saturated. (Expressed as a percentage.)
Return Flow
South winds on the back (west) side of an eastward-moving surface high pressure system. Return flow over the central and eastern United States typically results in a return of moist air from the Gulf of Mexico (or the Atlantic Ocean).
Roll Cloud
A relatively rare, low-level, horizontal, tube-shaped accessory cloud completely detached from the cumulonimbus base. When present, it is located along the gust front and most frequently observed on the leading edge of a line of thunderstorms. The roll cloud will appear to be slowly "rolling" about its horizontal axis. Roll clouds are not tornadoes and do not produce tornadoes.
Shelf Cloud
Long, wedge-shaped clouds associated with the gust front. Shelf clouds indicate the downdraft or outflow of a thunderstorm.
Shear (Wind Shear)
Variation in wind speed and/or direction over a short distance. Shear usually refers to vertical wind shear, i.e., the change in wind with height, but the term also is used in Doppler radar to describe changes in radial velocity over short horizontal distances.
Squall Line
A non-frontal band or line of thunderstorms.
Stationary Front
A transition zone between air masses, with neither advancing upon the other.
Straight-Line Winds
Thunderstorm winds most often found with the gust front. They originate from downdrafts and can cause damage which occurs in a "straight line", as opposed to tornadic wind damage which has circular characteristics.
Subtropical Jet
The branch of the jet stream that is found in the lower latitudes.
Supercell
A highly organized thunderstorm with a rotating updraft, known as a mesocyclone. It poses an inordinately high threat to life and property. Often produces large hail, strong winds, and tornadoes.
Sustained Wind
The wind speed obtained by averaging the observed values over a one-minute period.
Tilted Storm
A thunderstorm or cloud tower which is not purely vertical but instead exhibits a slanted or tilted character. It is a sign of vertical wind shear, a favorable condition for severe storm development.
Trade Winds
Persistent low-level tropical winds that blow from the subtropical high pressure centers towards the equatorial low.
Trough
An elongated area of low pressure at the surface or aloft.
Updraft
A small-scale current of rising air. This is often associated with cumulus and cumulonimbus clouds.
Upper Level System
A general term for any large-scale or mesoscale disturbance capable of producing upward motion (lift) in the middle or upper parts of the atmosphere.
Virga
Precipitation falling from the base of a cloud and evaporating before it reaches the ground.
Wall Cloud
An isolated lowering of a cloud that is attached to the rain-free base of a thunderstorm, generally to the rear of the visible precipitation area. Wall clouds indicate the updraft of or the inflow to a thunderstorm.
Wedge Tornado
A large tornado with a condensation funnel that is at least as wide (horizontally) at the ground as it is tall (vertically) from the ground to cloud base.
Winds Aloft
The wind speeds and wind directions at various levels in the atmosphere above the surface.
Zonal Flow
Large-scale atmospheric flow in which the east-west component (i.e., latitudinal) is dominant. | <urn:uuid:4c5f2c04-826c-4515-a624-544ffb51a6bf> | 3.640625 | 2,498 | Structured Data | Science & Tech. | 44.33158 |
Catching Heat Waves
Red tail hawks and ravens soar effortlessly in the warm afternoon air of California's Mojave Desert.
Seeking to take advantage of rising plumes of warm air called thermals, the same phenomenon that allows birds to lazily glide for long periods of time, engineers at NASA's Dryden Flight Research Center recently flew a radio-controlled sailplane in the shoestring-budgeted Autonomous Soaring project.
Image right: NASA Dryden aerospace engineer Michael Allen hand-launches a model motorized sailplane during a study validating the use of heat thermals to extend flight time. (NASA photo by Carla Thomas)
The effort was geared to discover if robotic unmanned air vehicles (UAVs) might be programmed to catch thermals and ride them, extending their flight range and saving their fuel.
For the experiment, the sailplane was modified to incorporate a small electric motor and an autopilot programmed to detect thermals or updrafts.
Following hand-launch by NASA Dryden aerospace engineer Michael Allen, pilot Tony Frackowiak flew the 14-foot-wingspan UAV to an altitude of about 1,000 feet before handing off control to the sailplane's autopilot. The software programmed into the autopilot flew the aircraft in a pre-determined pattern over Rogers Dry Lake at Edwards Air Force Base until it detected an updraft. As the aircraft rose, the engine automatically shut off and the aircraft circled to stay within the convective lift resulting from the thermal updraft.
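The control idea described here is simple enough to sketch in code. The following Java fragment is only a hypothetical illustration of that logic, not the actual Cloud Swift autopilot software; the class name, method name, and climb-rate threshold are all invented for the example.

// Hypothetical sketch of the soaring logic described above (not NASA's code).
public class SoaringLogic {
    static final double LIFT_THRESHOLD = 0.5; // m/s of net climb; invented value

    // Called once per control cycle with the measured net climb rate
    // (vertical speed beyond what powered flight alone would produce).
    // Returns true if the motor should run, false if the aircraft should
    // shut the motor off and circle to stay within the updraft.
    static boolean motorShouldRun(double netClimbRateMetersPerSecond) {
        if (netClimbRateMetersPerSecond > LIFT_THRESHOLD) {
            return false; // rising air detected: glide and circle in the thermal
        }
        return true;      // no usable lift: motor on, resume the search pattern
    }
}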
Allen said the small UAV added 60 minutes to its endurance by soaring autonomously, using thermals that formed over the dry lakebed. Nicknamed Cloud Swift after a bird known for feeding on insects found in rising air masses, the sailplane gained an average of 565 feet of altitude in 23 updrafts, and in one strong thermal it ascended 2,770 feet.
"The flights demonstrated that a small UAV can mimic birds and exploit the free energy that exists in the atmosphere," Allen said. "We have been able to gather useful and unique data on updrafts and the response of the aircraft in updrafts. This will further the technology and refine the algorithms that are used."
Image left: A NASA model motorized sailplane catches a thermal during one of 17 flights to demonstrate that updrafts can extend flight time and save energy for small UAVs. (NASA photo by Carla Thomas)
Allen noted that a small, portable UAV with long-endurance capabilities could fulfill a number of surveillance roles including forest fire monitoring, traffic control and search and rescue. The technology might also have an application to flight on Mars where dust devils have been observed.
NASA Dryden Flight Research Center | <urn:uuid:ea781084-83a6-4ed5-b03a-53e0e165a695> | 3.484375 | 563 | Knowledge Article | Science & Tech. | 30.849343 |
A HOT cup of tea and a standard piece of laboratory equipment could replace more exotic attempts to build a quantum computer.
Neil Gershenfeld at the Massachusetts Institute of Technology and Isaac Chuang of the University of California at Santa Barbara outline their scheme in this week's
In a hypothetical quantum computer, the 0s and 1s of the bits, known as qubits, are represented by quantum states of particles, for example the up and down spin of protons. Like the gates in an ordinary computer, quantum gates operate on these qubits to calculate results as an array of new states.
What makes quantum computing special is that the array of quantum states exists in ...
| <urn:uuid:8d6d5f76-99fc-4a9c-9520-40a811bd9f3e> | 3.71875 | 163 | Truncated | Science & Tech. | 41.813542 |
Birds survived the mass extinction that wiped out the dinosaurs because of their larger brains.
The Cretaceous–Tertiary mass extinction 65 million years ago may have wiped out the dinosaurs, but those that survived – the ancestors of today’s birds – may have done so because of their bird brains.
Analysis of computer tomography (CT) scans of fossilised bird skulls shows they had a more developed, larger brain than previously thought.
‘Birds today are the direct descendents of the Cretaceous extinction survivors, and they went on to become one of the most successful and diverse groups on the planet,’ says Natural History Museum palaeontologist (fossil expert), Dr Stig Walsh.
‘There were other flying animals around, such as pterosaurs and older groups of birds,’ says Dr Walsh, ‘but we’ve not really known why the ancestors of the birds we see today survived the extinction event and the others did not. It has been a great puzzle for us – until now.’
A larger and more complex brain may have given them a competitive advantage over the other more ancient birds and pterosaurs, helping them to better adapt when the environment changed after the mass extinction event.
Species of living birds that have larger brains are more likely to live in more socially complex groups and exhibit more complex and flexible behaviour than those with smaller brains.
For instance, members of the crow family have large brains, and some make and use tools, inventing cunning ways to find food.
Previous research has suggested birds with larger brains are more likely to survive if introduced to new environments than those with smaller brains.
These results suggest that this kind of behavioural flexibility was already a characteristic of the ancestors of modern birds before the Cretaceous–Tertiary extinction event.
‘In the aftermath of the extinction event, life must have been especially challenging,’ says Dr Walsh.
‘Birds that were not able to adapt to rapidly changing environments and food availability did not survive, whereas the flexible behaviour of the large-brained individuals would have allowed them to think their way around the problem.’
The team studied 2 fossil seabirds, found in the London Clay Formation on the Isle of Sheppey in England.
These deposits are famous for the huge range of preserved fossils, from molluscs, fishes, birds and even mammals.
These ancient creatures lived in the warm, semi-tropical conditions of southern England 55 million years ago.
CT scan of skull of ancient seabird - the brain is shown in blue and the inner ear in red
Natural History Museum palaeontologist and team leader, Dr Angela Milner explains how they worked out the brain size. ‘The shape of the brain is imprinted on the inside of the braincase of birds and represents a reasonably accurate approximation of the original shape and volume of the brain’.
‘Using CT analysis, we were able to create a virtual cast of this cavity so that the shape and detail of the brain and its nerves could be analysed.’ You can see a virtual cast of a crow at the top of the page.
The 2 specimens used in the study are from the Museum’s vast collection.
Fossil of ancient seabird Odontopteryx toliapica used in the bird brain study
Odontopteryx toliapica belonged to an extinct group of large, gliding seabirds called Odontopterygiformes, or bony-toothed birds. The largest had a wing span of up to 7m.
Prophaethon shrubsolei was an extinct relative of the modern tropic birds that live in tropical areas around the world.
‘We did not expect the brains of Odontopteryx toliapica and Prophaethon shrubsolei to be so much like those of living birds,’ says Dr Walsh.
‘The brain of the oldest-known flying bird, Archaeopteryx, is very bird-like, but not as large as in the fossil seabirds or living bird species.’
‘The parts of the brain that control sight, flight and high-level functions, including the ability to learn and remember information, turned out to be every bit as expanded in the 55-million-year-old fossils as they are in living species.’
‘This proves that the avian brain was already rather modern in appearance and organisation.’
This research is published in the Zoological Journal of the Linnean Society. | <urn:uuid:a9675fb7-81e8-4df2-b22c-8029f8bb6acd> | 4.0625 | 968 | Knowledge Article | Science & Tech. | 40.03875 |
When Ocypus olens is threatened it produces an unpleasant smelling chemical defence from a pair of exocrine glands found at the terminal segment (8th tergite) of the abdomen.
It can also excrete fecal fluid from its anus as well as foul smelling fluid from its mouth.
Ocypus olens reproduction takes place in the autumn.
14-21 days after mating, the female lays a single egg in a damp, dark place such as
This provides the emerging larvae with a habitat.
The larva emerges after 30 days and will live mainly underground. It is predacious like the adult, and has a similar diet and defence behaviour.
The larva has 3 successive growth stages, called instars. The final larval stage at approximately 150 days reaches 20-26mm in length.
At this stage, pupation begins, taking up to 35 days.
The fully grown adult then emerges from the pupa, remaining inactive for up to 2 hours whilst the wings dry out. These can then be folded under the protective wing cases (elytra).
Adults can either: | <urn:uuid:06856f7b-7d90-4b81-a580-87dc36d82dff> | 3.421875 | 231 | Knowledge Article | Science & Tech. | 55.903032 |
In this section, you will learn how to implement the close button of any frame or window in a Java application. By default, the close button of the frame is not in working state: if you click the close button of the frame or window, the close request is simply ignored. The close button of the frame has to be set up to perform the desired operation (i.e. closing the frame).
Here, you will see how to set up the close button functionality to close the frame or window. This is done using the addWindowListener() method of the Frame class, which is passed a new instance of the WindowAdapter class; that instance overrides the windowClosing() method to receive the WindowEvent and close the frame or window.
addWindowListener() is the method that adds a window listener to the frame so the frame can receive window events. It takes the window listener as its argument.
WindowAdapter belongs to the java.awt.event package. It is an abstract class used for receiving window events; its no-argument constructor is used here.
windowClosing() is the method of the WindowAdapter class that is invoked when an attempt is made to close the frame from the window's system menu or close button. This method receives the WindowEvent object.
Here is the code of the program:
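(Since the original listing is not preserved here, the following is a minimal reconstruction of such a program; the class name is illustrative.)

import java.awt.Frame;
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;

public class CloseButtonExample {
    public static void main(String[] args) {
        final Frame frame = new Frame("Close Button Example");
        // Register a window listener. WindowAdapter is an abstract adapter
        // class, so we only need to override the windowClosing() method,
        // which receives the WindowEvent fired by the close button.
        frame.addWindowListener(new WindowAdapter() {
            @Override
            public void windowClosing(WindowEvent e) {
                frame.dispose();  // release the window's native resources
                System.exit(0);   // end the application
            }
        });
        frame.setSize(300, 200);
        frame.setVisible(true);
    }
}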
| <urn:uuid:7f366e48-e090-47e0-bf33-2df4fd58e786> | 3.40625 | 295 | Documentation | Software Dev. | 53.031411 |
Ionizing radiation is a type of particle radiation in which an individual particle (for example, a photon, electron, or helium nucleus) carries enough energy to ionize an atom or molecule (that is, ... > more
Weapon of mass destruction (WMD) is a term used to describe a munition with the capacity to indiscriminately kill large numbers of living beings. The phrase broadly encompasses several areas of ... > more
Proteins are large organic compounds made of amino acids arranged in a linear chain and joined together by a bond between the carboxyl carbon of one amino acid and the amine nitrogen of another. This bond is ... > more
Chromium is a chemical element in the periodic table that has the symbol Cr and atomic number 24. Chromium is a steel-gray, lustrous, hard metal that takes a high polish, and has a high melting ... > more
A nanowire is a wire of dimensions of the order of a nanometer (10⁻⁹ meters). Alternatively, nanowires can be defined as structures that have a lateral size constrained to tens of nanometers or ... > more
Citric acid is a weak organic acid found in citrus fruits.
It is a natural preservative and is also used to add an acidic (sour) taste to foods and soft drinks. In biochemistry, it is important as ... > more
Biomass is organic non-fossil material, collectively. In other words, 'biomass' describes the mass of all biological organisms, dead or alive, excluding biological mass that has been transformed by ... > more | <urn:uuid:216afb6b-bfdf-496c-9fd9-f9c5bc8ed536> | 3.28125 | 326 | Content Listing | Science & Tech. | 46.2075 |
In the 1980s one of us (Ramachandran) and our colleague Stuart M. Anstis developed an apparent-motion display called the Bistable Quartet (d). In this illusion, two dots are flashed simultaneously (frame 1 in d) on two corners of an imaginary square and then switched off and replaced by two identical dots on the remaining two corners (frame 2 in d). When frames 1 and 2 are alternated rapidly, you can see apparent motion: the dots appear to move either left-right, left-right or up-down, up-down. The perceived direction of motion is ambiguous, or bistable. You can see one or the other, but you cannot see both simultaneously. It is similar to the experience with the face-vase illusion shown in b.
If this display is rotated 45 degrees so that the dots define an imaginary diamond instead of a square, you perceive the path of motion rotated 45 degrees as well. That is, the dots appear to move back and forth along parallel diagonals. Again, there are two equally possible, mutually exclusive perceptions of motion: either along the diagonal with a positive slope or along the diagonal with a negative slope. And again, you should be able to alternate between the two.
Consider what happens when we scatter multiple Bistable Quartets randomly on a computer display screen (f). Because each one has a 50 percent probability of being seen with movement along the positive versus the negative axis, you might expect a 50–50 split. Amazingly, they all get coupled together by the brain. They end up doing exactly the same type of oscillation throughout the visual field. You can cause some brief moments of uncoupling the quartets if you expend intense mental effort, but their natural state in your perception is to remain synchronized. This experiment shows that the perception of apparent motion is not a piecemeal affair happening separately in different parts of the visual field. There is a global imposition of coherence.
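If you have some programming skills, a display of this kind is straightforward to write yourself. The following Java Swing sketch is an illustration only (the dot size, spacing, and flash rate are invented, not taken from the original experiments); it alternates the two frames of dots for a whole grid of quartets at once:

import java.awt.Color;
import java.awt.Graphics;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.Timer;

public class BistableQuartets extends JPanel {
    private boolean frameOne = true;        // which pair of corners is lit
    private static final int SQUARE = 80;   // side of each imaginary square
    private static final int DOT = 14;      // dot diameter

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.setColor(Color.BLACK);
        g.fillRect(0, 0, getWidth(), getHeight());
        g.setColor(Color.WHITE);
        // Lay quartets out on a loose grid; all flash the same two frames.
        for (int x = 40; x + SQUARE < getWidth(); x += SQUARE + 60) {
            for (int y = 40; y + SQUARE < getHeight(); y += SQUARE + 60) {
                if (frameOne) {             // frame 1: top-left and bottom-right
                    g.fillOval(x, y, DOT, DOT);
                    g.fillOval(x + SQUARE, y + SQUARE, DOT, DOT);
                } else {                    // frame 2: the other two corners
                    g.fillOval(x + SQUARE, y, DOT, DOT);
                    g.fillOval(x, y + SQUARE, DOT, DOT);
                }
            }
        }
    }

    public static void main(String[] args) {
        JFrame window = new JFrame("Bistable Quartets");
        BistableQuartets panel = new BistableQuartets();
        window.add(panel);
        window.setSize(640, 480);
        window.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        window.setVisible(true);
        // Alternate the two frames a few times per second for apparent motion.
        new Timer(350, e -> { panel.frameOne = !panel.frameOne; panel.repaint(); }).start();
    }
}

Whether you see the dots shuttling horizontally or vertically is then up to your visual system, and with effort you can flip between the two readings.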
Now we introduce symmetry by rearranging the field of Bistable Quartets to form a “butterfly” pattern, which is bilaterally symmetric across the vertical axis. An extraordinary thing happens: people see the quartets within each half of the display synchronized, as expected, but across the axis of symmetry, in the mirror half of the display, all the quartets are synchronized to the opposite direction of motion (g). It is as though the overall global symmetry of the form of the butterfly imposes its symmetry on the perceived motion, which necessarily means opposite directions for the two halves of the display. (We are currently exploring this phenomenon with our student Elizabeth Seckel of the University of California, San Diego.)
Thus, the need for symmetry overrides the global tendency to see identical motion throughout the visual field. All of perception depends on a hierarchy of precedence rules that determines how different “laws” interact, rules that reflect the statistical properties of the world and the organism’s need for survival.
A different experiment on the interaction between motion and symmetry, one that you can perform yourself, involves the spinning ballet dancer illusion (h; you can Google that phrase to bring up the moving display). What is on the retina is a deforming shadow—a black silhouette—but your brain makes sense of it instantly to see a young woman in full 3-D spinning on her vertical axis. If asked, you could confidently answer which direction she is spinning, clockwise or counterclockwise (as seen from above). But keep looking because, again, the direction of motion is ambiguous. With effort (or by first covering all but a small part of the moving display), you should be able to flip the direction you see her spinning.
It is fun to see a group of these figures spinning; if you have some programming skills, you can try creating it. Otherwise, you may generate a reasonable display by opening multiple new pages, each with the same image, and scattering them across your screen. Or you could employ a multilens (insect eye) fresnel lens sheet (available in novelty or science museum stores) that will optically multiply the ballerina. As with the earlier, simpler bistable motion quartets, you will perceive all the ballerinas synchronized, spinning together either rightward or leftward. (We conducted this experiment with Shai Azoulai, then a U.C.S.D. graduate student.) Again we created a symmetrical butterflylike display with multiple ballerinas, and again, most subjects instantly saw the ballerinas within one half of the axis of symmetry synchronized—but the populations on the two halves spun in opposite directions from each other. In other words, the two fields appeared to spin either toward or away from each other. The need for symmetry overrides the need for seeing synchronized motion throughout the field. (Sometimes, with mental effort, they can all be made to do the same thing, but the spontaneous preference is toward seeing opposite directions.) You can verify this result by simply putting a mirror at right angles to the computer screen next to the ballerina. | <urn:uuid:30a43a12-8ac8-4559-8980-275e64f2b209> | 3.484375 | 1,043 | Academic Writing | Science & Tech. | 41.829267 |
The New Student's Reference Work/New Stars
The New Student's Reference Work (1914)
New Stars, sometimes called temporary stars, are bodies which suddenly make their appearance in the heavens, rise rapidly to their full brightness, and soon begin to diminish until they can be seen only with a telescope or, perhaps, not at all. The earliest one of which we have any account is that of 1572, generally known as the Star of Tycho Brahe. But it is only since the invention of the spectroscope that this class of stars has come to be of especial interest. The new star in the constellation of Corona Borealis, discovered by Birmingham on May 12, 1866, was examined spectroscopically by Huggins and Miller. They found that it possessed both a dark line spectrum and a bright line spectrum, differing in this respect from nearly all the other stars. The next new star was that in the constellation of the Swan, known as Nova Cygni, discovered on Nov. 24, 1876, a red star of the third magnitude. Two years later it was fainter than the 11th magnitude. Nova Andromedæ was discovered in August, 1885; and Nova Orionis in December of the same year. But the star which Anderson at Edinburgh discovered on Jan. 24, 1892, far exceeded all previous new stars in interest, because the power of the spectroscope had been increased in many ways since the previous stars were observed. For a full account of this star, called Nova Aurigæ, the reader is referred to Scheiner’s Astronomical Spectroscopy, where its interesting spectrum is described in detail. The next and only other important new star was also discovered by Anderson, this time in the constellation of Perseus, Feb. 22, 1901. Many theories have been advanced to explain this curious phenomenon; but the one which at present seems most probable is that advanced by Seeliger: The new star is produced by some dark body rushing into a meteor swarm or a nebula, the impact of small particles being sufficient to bring the dark body to incandescence. | <urn:uuid:17ed41c5-9483-4eb7-bdbd-2b5294a43963> | 3.71875 | 465 | Knowledge Article | Science & Tech. | 49.030854 |
The study of distant galaxies is empirically demanding - not surprisingly, as these galaxies are very faint.
Of course there are a variety of motivations to observe and perhaps understand distant "units of the Universe". We would like to detail the present-day "lumpiness" of the Cosmos and its evolution from a very smooth "sea" at decoupling. At the nominal redshift of the Cosmic Microwave Background the key fluctuations on reasonable scales are only of order 10⁻⁵. Of course at z ~ 0 we have a very inhomogeneous distribution of baryons we call galaxies and the Intergalactic Medium (IGM hereafter).
Noting the obvious, studying distant galaxies is synonymous with traveling far back in cosmic time towards the birth of massive sub-structures and large galaxies. Can we now see directly the development of single galaxies of Milky Way dimensions?
We now believe that most galaxies form and accumulate either (1) by the infall of gas (and dark matter) as "monolithic" entities, self-gravitating by the time we can observe them, or (2) by a series of major and/or minor mergers. This is the now-popular "bottom-up" scenario. Here it is presumably difficult to catch the small and immature systems in the act of merging, depending perhaps on the appropriate dynamical time scales. Thus for scenario (2) we would anticipate young galaxies to illustrate complex morphologies, quite different from those of the mature galactic systems we study readily here and now, at zero redshift. There is indeed some evidence for "recent" mergers from the fine images of distant galaxies observed with the Hubble Space Telescope (HST) - see Stern & Spinrad (1999) for some plausible early merger examples (Fig. 1). And we'd like to push these examples back in cosmic time to even "younger" galaxy growth - but the first problem is, quite naturally, the location of small and dismally faint candidates for galaxies in formation.
Figure 1. HST images of five spectroscopically confirmed galaxies located in the HDF(N). Note the distortions, small tails, and multiple central components - presumably due to mergers. Overall the galaxies are obviously quite small at this stage of their evolution. From Stern & Spinrad (1999)
Another important contemporary research area emerging is the study of intergalactic (gaseous) matter usually seen in silhouette against a bright background source like a QSO or an unusually bright and distant galaxy. And now, new observational techniques are beginning to tell us about the interaction history of galaxies and the IGM (cf. Adelberger et al., 2003).
One of this paper's topics, directly or indirectly stated, is just how early in cosmic epoch (parameterized by redshift) we can study individual galaxies or their "pre-galactic" fragments. There is only a short time interval between the early epochs beyond z = 3 (see Figure 2). How can the galaxies evolve so quickly?
Figure 2. A plot of look-back time (in Gyr) versus redshift for three cosmological models. Most might now prefer the short-dashed curve. Note that at high z (z ≳ 3) the time intervals become quite short. Figure by Curtis Manning.
The historical view of our empirical and theoretical march outward toward higher redshift has shown a fairly rapid expansion. By 1976 a few radio galaxies had been located and studied at z > 0.5. The z = 1.0 threshold (for galaxies) was crossed in 1981. Of course Quasars and QSOs had been actively observed and known earlier at large distances - redshifts in the 1960s and 1970s taking us to z = 2.01 (Schmidt 1965) and then 2.88, and then to z = 3.5 (OQ 172; Baldwin et al., 1974). Finally, z = 4 for QSOs was surpassed by the Palomar two-color-based searches (Schneider, Schmidt & Gunn 1991), and searches for Lyα emission on low-resolution grism spectra (Osmer 1999) were equally successful. Almost all the recent stages of the "QSO-z race" have emphasized red-IR photometry and unusual colors, since the z ~ 5 QSOs are heavily depressed by the Lyα forest of the IGM (see Fan et al. 2001). The largest published QSO redshift to date is z = 6.28 (Fan et al., 2002; Pentericci et al., 2002a).
Now we are witness to the era of a friendly race toward higher and record-breaking galaxy redshifts. The current limit for galaxies, which we shall detail later in this publication, is near z = 6.5. Is this redshift close to the end of the "dark ages", where re-ionization by massive stars and/or early QSOs play as vital sources of ionizing radiation? We return to this topic, with empirical evidence, toward the conclusion of this review. | <urn:uuid:4d4e96f2-d485-4040-8d74-d45ad76a8405> | 2.71875 | 1,036 | Academic Writing | Science & Tech. | 52.724 |
As scientists unveil artificial organs and prosthetics to improve the function of our hearts, kidneys, hands, and even eyes, it's easy to gloss over these devices' Achilles' heel: power.
Even devices that run on very low power, such as pacemakers, tend to require additional invasive surgeries just to replace their batteries. Meanwhile, artificial limbs can be huge energy hogs, with the power source needing to be swapped out as frequently as every few weeks. Impractical is an understatement.
Biofuel cells could very well solve this problem. Researchers around the world are investigating how to use a body's own energy to power various devices, and one team out of France last year successfully implanted in a rat a biofuel cell that uses glucose and oxygen to generate electricity.… | <urn:uuid:58221916-4ff4-4b0b-87f5-7b804fe2fcb9> | 3.046875 | 160 | Truncated | Science & Tech. | 31.968636 |
June 02, 2011
Tambopata rainforest in Peru as mapped by CAO.
Unveiled today at the Hiller Aviation Museum in San Carlos, California by Greg Asner of the Carnegie Institution's Department of Global Ecology at Stanford University, the newest version of the Carnegie Airborne Observatory (CAO) will offer powerful insights into the composition and biology of tropical forests.
"With CAO II we'll be able not only map the extent of a forest, but its quality and composition," Asner, director of CAO, told mongabay.com.
CAO combines optical, chemical, and laser sensors aboard aircraft to create high-resolution, three-dimensional maps of vegetation structure. These maps can be used to detect small changes in forest canopy structure from selective logging, measure biomass in dense tropical rainforests, and distinguish between plant species. It has the potential to inventory biodiversity across 40,000 acres of rainforest per day by detecting the chemical and spectral (light-reflecting) properties of individual plant species across a diverse landscape.
The Carnegie Airborne Observatory (top) and the Airborne Taxonomic Mapping System (bottom).
Asner plans to put the system to work immediately. Over the next three months he and his team will conduct aerial and on-the-ground surveys of the Western Amazon, which houses the most biodiverse rainforests on the planet. The immediate goal is to assess the impact of last year's catastrophic drought on forests of the Peruvian and Colombian Amazon. Initial work suggests these forests, which have long been thought to be among the most resilient to climate change, were particularly affected by the drought, which was the worst on record and came just five years after a "hundred-year drought" in 2005. Asner says the data collected during the mission will help researchers understand how the Amazon is changing.
The new system will support ongoing work to quantify carbon stocks, which is critical to the REDD+ (Reducing Emissions from Deforestation and Degradation) program. REDD+ will compensate tropical countries for protecting forests.
To support REDD efforts, Asner's team has developed an advanced satellite-based carbon mapping tool for use by tropical countries. Asner is now working with Google Earth to make the tool more widely available.
80% of rainforests could adversely impacted by logging, deforestation, climate change by 2100
(08/05/2010) The world's tropical forests may suffer large-scale degradation and deforestation by the end of the century if current logging and climate change trends persist, finds a new analysis published in Conservation Letters.
Google Earth boosts deforestation monitoring capabilities
(02/07/2010) Google has taken a step towards ramping up the deforestation monitoring capabilities the Google Earth Engine by contracting Massachusetts-based Clark Labs to develop an online version of its Land Change Modeler application.
Selective logging occurs in 28 percent of world’s rainforests
(01/13/2009) New satellite research presented for the first time at a symposium entitled “Will the rainforests survive?” showed that selective logging is impacting over a quarter of the world’s rainforests. Gregory Asner from the Carnegie Institution presented the “first true global estimate of selective logging” which showed that 5.5 million square kilometers of the rainforest has already seen selective logging or is slated to be logged in the near future.
Google Earth to monitor deforestation
(12/10/2009) It what could be a critical development in helping tropical countries monitor deforestation, Google has unveiled a partnership with scientists using advanced remote sensing technology to rapidly analyze and map forest cover in extremely high resolution. The effort could help countries detect deforestation shortly after it occurs making it easier to prevent further forest clearing.
How satellites are used in conservation
(04/13/2009) In October 2008 scientists with the Royal Botanical Garden at Kew discovered a host of previously unknown species in a remote highland forest in Mozambique. The find was no accident: three years earlier, conservationist Julian Bayliss identified the site—Mount Mabu—using Google Earth, a tool that’s rapidly becoming a critical part of conservation efforts around the world. As the discovery in Mozambique suggests, remote sensing is being used for a bewildering array of applications, from monitoring sea ice to detecting deforestation to tracking wildlife. The number of uses grows as the technology matures and becomes more widely available. Google Earth may represent a critical point, bringing the power of remote sensing to the masses and allowing anyone with an Internet connection to attach data to a geographic representation of Earth. | <urn:uuid:16c5ec60-b6e6-4af8-9375-1b86d35d7353> | 3.484375 | 950 | Content Listing | Science & Tech. | 26.61581 |
Arthropod acoustic communication is a primary focus at the Patek Lab. Here you can find our acoustically oriented research projects along with sounds and video of spiny lobsters (Palinuridae) and mantis shrimp (Stomatopoda).
New Article In JASA
Below is the abstract for our most recent article in the Journal of the Acoustical Society of America
Numerous animals produce sounds during interactions with potential predators, yet little is known about the acoustics of these sounds, especially in marine environments. California spiny lobsters (Panulirus interruptus) produce pulsatile rasps when interacting with potential predators. They generate sound using frictional structures located at the base of each antenna. This study probes three issues—the effect of body size on signal features, behavioral modification of sound features, and the influence of the ambient environment on the signal. Body size and file length were positively correlated, and larger animals produced lower pulse rate rasps. Ambient noise levels (149.3 dB re 1 µPa) acoustically obscured many rasps (150.4±2.0 dB re 1 µPa) at distances from 0.9–1.4 m. Significantly higher numbers of pulses, pulse rate, and rasp duration were produced in rasps generated with two antennae compared to rasps produced with only one antenna. Strong periodic resonances were measured in tank-recorded rasps, whereas field-recorded rasps had little frequency structure. Spiny lobster rasps exhibit flexibility in acoustic signal features, but their propagation is constrained, perhaps beneficially, by the noisy marine environment. Examining the connections between behavior, environment, and acoustics is critical for understanding this fundamental type of animal communication. | <urn:uuid:9c74c251-3fd5-4bb4-b988-f2bea16adc3d> | 2.84375 | 366 | Academic Writing | Science & Tech. | 22.824458 |
"Houston, Tranquility Base here. The Eagle has landed." On July 20, 1969, the Eagle landing module became the first manned spacecraft to land on the moon. The crew of Neil Armstrong and Edwin "Buzz" Aldrin soon became the first humans to explore the dusty, gray surface.
The success of NASA's Apollo 11 lunar mission depended on many unique technologies, one of which is the metallic silver and gold material covering the Eagle landing module. This light-weight, metallized plastic film was used to insulate the Eagle's crew and instrumentation from radiative heat transfer from the sun.1 Interestingly, insulation against other forms of heat transfer did not require NASA engineering; in fact, it was provided by the near-vacuum conditions in space where the absence of molecules limits conductive and convective heat transfer.
Long before the Apollo 11 lunar mission, chemist James Dewar combined the same principles of insulation, a reflective surface and a vacuum, to construct a specialized glass flask for his research on the liquefaction of hydrogen and helium.2 (Coincidentally, his work led to the liquid hydrogen rocket fuel technology that propelled the Apollo mission's Saturn V rocket into space.) In his 1898 publication in the Journal of the Chemical Society, Transactions, Dewar noted how super-cooled liquid hydrogen could be collected and stored "in a vacuum vessel doubly silvered and of special construction".3
The vacuum vessel, or "Dewar flask" as it was later named, was essentially a flask within a flask, with the air between the two layers evacuated to form a partial vacuum. Like the Eagle landing module, the flasks were coated with silver to prevent radiative heat transfer. Along with the vacuum, the reflective coating effectively blocked heat transfer in and out of the vessel, keeping cold contents cold and hot contents hot for extended periods of time. In 1906 and 1907, Reinhold Burger and the American Thermos Bottle Company filed U.S. patent applications4,5 for improvements to Dewar's vacuum flask:
- "My invention relates to a new and useful article of manufacture comprising inner and outer glass vessels inclosing rarefied space between them. Such vessels are used for hot and cold drinks, eatables, etc. and are made in various forms and sizes."5
While reflective surfaces and vacuums continue to be used to prevent heat transfer in modern Dewar flasks and NASA spacecraft, a more familiar application of these principles of insulation is in the thermos bottles designed to keep your morning coffee hot well into the afternoon.
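To see why the reflective coating matters so much, compare radiative exchange across the vacuum gap for high- and low-emissivity walls. The sketch below is my own illustration, not from the article; the emissivity and temperature values are assumed. It applies the Stefan-Boltzmann relation for two closely spaced surfaces:

    // Net radiative heat flow between two closely spaced walls, showing why a
    // silvered (low-emissivity) coating suppresses transfer across the vacuum gap.
    public class RadiativeLoss {
        static final double SIGMA = 5.670e-8; // Stefan-Boltzmann constant, W m^-2 K^-4

        // Net flux (W/m^2) between closely spaced surfaces of emissivities e1, e2
        // at temperatures t1, t2 (kelvin): sigma*(T1^4 - T2^4)/(1/e1 + 1/e2 - 1).
        static double flux(double e1, double e2, double t1, double t2) {
            return SIGMA * (Math.pow(t1, 4) - Math.pow(t2, 4))
                   / (1.0 / e1 + 1.0 / e2 - 1.0);
        }

        public static void main(String[] args) {
            double hot = 368.0, cold = 293.0;                 // ~95 C coffee, ~20 C room
            System.out.println(flux(0.90, 0.90, hot, cold));  // bare glass: ~509 W/m^2
            System.out.println(flux(0.02, 0.02, hot, cold));  // silvered:   ~6 W/m^2
        }
    }

With assumed emissivities of 0.9 for bare glass and 0.02 for polished silver, the silvering cuts the radiative loss by roughly a factor of eighty.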
Peter S. Carlton, Ph.D.
You can use SciFinder® or STN® to search the CAS databases for additional information about the research of the Apollo 11 lunar mission and the research of James Dewar. If your organization is enabled to use the web version of SciFinder, you can click the SciFinder links in this article to directly access details of the references.
- NASA Scientific and Technical Information (STI). http://www.sti.nasa.gov/ (accessed July 8, 2009).
- Sella, A. Classic Kit: Dewar's Flask. Chemistry World [Online] 2008, 5, http://www.rsc.org/chemistryworld/Issues/2008/August/DewarsFlask.asp (accessed July 8, 2009).
- Dewar, J. Note on the Liquefaction of Hydrogen and Helium. J. Chem. Soc., Trans. 1898, 73, 528.
- Burger, R. Double Walled Vessel with a Space for a Vacuum Between the Walls. U.S. Patent 872,795, Dec 3, 1907.
- Burger, R. Double Walled Vessel or the Like. U.S. Patent 888,783, May 26, 1908.
| <urn:uuid:beb3af6d-c842-42a7-be9a-aff34f8ba1f2> | 4.03125 | 801 | Knowledge Article | Science & Tech. | 56.58359 |
Gibbs Free Energy and Pressure, Chemical Potential, Fugacity
The Gibbs free energy depends on pressure as well as on temperature. The pressure dependence of the Gibbs free energy in a closed system is given by the combined first and second laws and the definition of Gibbs free energy as,

(1) dG = V dp - S dT.
If we hold temperature constant and vary only the pressure we can prepare Equation 1 for integration from pressure p1 to p2 as follows:

(2) dG = V dp (constant T),

(3) ∫_{G(p1)}^{G(p2)} dG = ∫_{p1}^{p2} V dp,

(4) G(p2) = G(p1) + ∫_{p1}^{p2} V dp.
Equation 4 is quite general and applies to all isotropic substances: solids, liquids, ideal gases, and real gases. We will apply it first to isotropic solids and liquids.
Solids and Liquids
First level of approximation
We know that solids and liquids are not very compressible so, to a first approximation, we can regard the volume in Equation 4 as constant (as long as the range of pressure is not too large). Then the V in Equation 4 comes out of the integral and we can integrate easily to get,

(5) G(p2) = G(p1) + V (p2 - p1).
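For scale (a worked example of my own, not from the original): one mole of liquid water has V ≈ 1.81 × 10^-5 m^3, so compressing it from 1 atm to 100 atm (Δp ≈ 1.00 × 10^7 Pa) changes G by only V Δp ≈ 181 J, whereas an ideal gas over the same pressure range would change by RT ln 100 ≈ 11.4 kJ at 298 K (see Equation 13 below). This is why the pressure dependence of G for condensed phases can so often be ignored.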
Second level of approximation.
We know, however, that solids and liquids are slightly compressible and that we can define the isothermal compressibility as

(6) κ = -(1/V)(∂V/∂p)_T.
Our second level of approximation is to regard κ as approximately constant. (In fact, κ is not constant, but the variation with pressure is so small it can be ignored unless enormous pressures - megabars - are involved.) With κ regarded as constant we can rearrange Equation 6 and integrate it to find an expression for V as a function of p (which can then be substituted into Equation 4 and integrated). Rearrangement of Equation 6 looks like,

(7) dV/V = -κ dp.
Integrate from p1 to p2 (and volume goes from V1 to V2) to get,

(8) ln(V2/V1) = -κ (p2 - p1).
Take the antilog of both sides,

(9) V2 = V1 e^(-κ(p2 - p1)).
In Equation 9 we can let p1 be a constant and let p2 range over the pressures of interest. There is no reason why we have to keep the subscript "2" on p2, so change p2 to just p. This gives us V as a function of p,

(10) V(p) = V1 e^(-κ(p - p1)) = [V1 e^(κ p1)] e^(-κ p).
On the far right of Equation 10 the constant parts have been separated from the variable part to make it easy to integrate. When this expression for V is plugged into Equation 4, only the factor e^(-κp) need stay inside the integral.
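Carrying the integration through (a step the text leaves to the reader) gives the second-level result

G(p2) = G(p1) + (V1/κ) [1 - e^(-κ(p2 - p1))],

which, on expanding the exponential for small κ(p2 - p1), reduces to Equation 5 as it should.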
Gases

Equation 4 is also valid for gases, only here we put in the value of V for an ideal gas,

(11) V = nRT/p.
With this substitution Equation 4 becomes,

(12) G(p2) = G(p1) + ∫_{p1}^{p2} (nRT/p) dp,
which is an integral we have done many times. After integration Equation 12 becomes,

(13) G(p2) = G(p1) + nRT ln(p2/p1).
It is customary (and useful) to make several changes in Equation 13. We let p2 range over the pressures of interest to us and call it just p, we let p1 be some standard state pressure and call it p°, and finally we divide through by the number of moles of gas, n. With these changes Equation 13 is written,

(14) G(p)/n = G(p°)/n + RT ln(p/p°).
One more change: the quantity G/n turns out to be so important that it is given a special symbol and its own name. Strictly speaking G/n is just the Gibbs free energy per mole of substance, but this simplicity belies its importance. This quantity is called the chemical potential and it is given the symbol μ. Our final version of what used to be Equation 13 is now,

(15) μ = μ° + RT ln(p/p°).
We have replaced G(p°)/n, the molar Gibbs free energy at the standard state pressure, with its chemical potential symbol, μ°. In most cases we will set the standard state pressure equal to one atmosphere. It is not unusual to see Equation 15 written,

(16) μ = μ° + RT ln p,
but when it is written like this we have to remember that there is an implied p° = 1 atm dividing the p in the ln p, otherwise the argument of the log function would not be unitless.
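As a quick worked example (mine, not part of the original notes): taking an ideal gas from p° = 1 atm to p = 10 atm at T = 298.15 K, Equation 15 gives μ - μ° = RT ln 10 = (8.314 J K^-1 mol^-1)(298.15 K)(2.303) ≈ 5.7 kJ/mol.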
Equation 15 was derived assuming the gas is ideal. It does not apply to real gases or approximations to a real gas, like the van der Waals equation of state. If we know the equation of state we can go back to Equation 4 and make the appropriate modifications in our notation. That is, we divide Equation 4 by the number of moles, n, let p1 equal the standard state pressure, p°, and note that V/n is the molar volume, Vm, to get,

(17) μ(p) = μ° + ∫_{p°}^{p} Vm(p') dp'.
(We have also let p2 range over the pressures of interest and called it just plain p, which means that we have to change the dummy variable of integration from p to p'.) Equation 17 would provide the correct answer in numerical calculations, but it would wreak havoc in some of the later developments of thermodynamics, namely the equilibrium constant expression, as we will see later. G. N. Lewis (the same Lewis of the Lewis dot structures and Lewis acid/base theory) proposed to preserve the form of Equation 15 by writing the chemical potential as,

(18) μ = μ° + RT ln( f(p)/p° ).
This equation defines a quantity f(p) called the fugacity. The fugacity has units of pressure and it is a function of pressure. It contains all the information on the nonideality of the gas. For an ideal gas the fugacity is the same as the pressure. Since all real gases become ideal in the limit as pressure goes to zero we must have

(19) f(p)/p → 1 as p → 0.
We would like to have a way of calculating the fugacity from the equation of state for a gas. To do this go all the way back to Equation 2 and divide it by the number of moles, n,

(20) dG/n = (V/n) dp,
or, in our new notation,

(21) dμ = Vm dp.
We can get another expression for dμ by taking the differential of Equation 18 (remember that R, T, p°, and μ° are constants),

(22) dμ = RT d ln f = RT (df/f).
The dμ in Equations 21 and 22 must be the same, so we can set them equal to each other,

(23) Vm dp = RT (df/f).
Rearrange this to get,

(24) df/f = (Vm/RT) dp.
We could integrate this equation directly, but that would sort of take us back to where we started from. Instead, we use a mathematical trick before we integrate it. Add and subtract dp/p on the right hand side of Equation 24,

(25) df/f = (Vm/RT) dp - dp/p + dp/p.
You can see that we didn't really change anything. Regroup the terms in Equation 25,

(26) df/f = dp/p + (Vm/RT - 1/p) dp.
Now integrate from p° to p. (We will have to call our dummy variable of integration p' so as not to conflict with the limits of integration.) We get,

(27) ln(f/f°) = ln(p/p°) + ∫_{p°}^{p} (Vm/RT - 1/p') dp',
where f° is the fugacity at p°. Move the ln f° and ln p° terms to the right hand side,

(28) ln f = ln p + ∫_{p°}^{p} (Vm/RT - 1/p') dp' + (ln f° - ln p°).
Now we can take the limit where p° goes to zero. We know that f° goes to p° as p° → 0, so the last two terms in parentheses on the right cancel each other in this limit. Equation 28 becomes,

(29) ln f = ln p + ∫_{0}^{p} (Vm/RT - 1/p') dp' = ln p + ∫_{0}^{p} ((z - 1)/p') dp'.
Equation 29 will suffice to calculate the fugacity, but it is customary to take the antilog of both sides to get,

(30) f = p exp[ ∫_{0}^{p} ((z - 1)/p') dp' ].
In the last segment of Equation 29, z = pVm/RT is the so-called compressibility factor. In either version of Equation 29 it is easy to see that if the gas is ideal then f = p. It requires an equation of state or experimental data to calculate a fugacity from either Equation 30 or Equation 29. From the right-hand side of Equation 30 we can see that the second form of the virial expansion,

z = pVm/RT = 1 + B'p + C'p² + ...,
would be the best choice for calculating fugacity.
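As a concrete illustration (this example is mine, not part of the original notes): with z = 1 + B'p + C'p², the integrand in Equation 30 is (z - 1)/p' = B' + C'p', which integrates in closed form to B'p + C'p²/2. The sketch below computes a fugacity from hypothetical virial coefficients:

    // Equation 30 for the pressure form of the virial expansion,
    // z = 1 + B'p + C'p^2; the coefficient values used here are hypothetical.
    public class Fugacity {

        // f = p * exp(B'p + C'p^2/2); bPrime in atm^-1, cPrime in atm^-2, p in atm.
        static double fugacity(double p, double bPrime, double cPrime) {
            return p * Math.exp(bPrime * p + 0.5 * cPrime * p * p);
        }

        public static void main(String[] args) {
            // A mildly nonideal gas at 10 atm: the fugacity is slightly below 10 atm.
            System.out.println(fugacity(10.0, -1.0e-3, 2.0e-6)); // ~9.90
        }
    }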
Copyright 2004, W. R. Salzman
| <urn:uuid:0ac1b2d7-209d-4e9d-83d0-28bc00d32b57> | 3.53125 | 1,581 | Tutorial | Science & Tech. | 58.49381 |
Currently I am learning the Threads topic from Kathy Sierra's book.
I am trying to write a typical producer-consumer program that makes use of the join() method of Thread.
This is what I want to accomplish:
1> There will be three threads: Thread_A, Thread_B, and Thread_C.
2> Thread_A's job is to continuously take input from the user at the console and store it in an ArrayList.
The input will be in the form of positive integers (e.g., 2, 3, 100, 50, ...).
3> Thread_B's job is to iterate through the ArrayList, reading these values one by one and storing each in a variable.
Thread_B will not read the next value until Thread_C has finished its job (Thread_B calls join() on Thread_C).
4> Thread_C's job is to print the '#' character to a file according to the value in the variable. So if the value is, say, 20, then Thread_C will print '#' 20 times in the file.
To summarize, if the user enters (20, 5, 10, 50), then Thread_C should print '#' 20 times, then 5 times, then 10 times, and so on.
That approach was wrong. The solution to the problem can be achieved using the wait() and notify() methods.
I had put an infinite loop in Thread_C to continually monitor whether Thread_B had written a value, but as a result Thread_C never exited its run() method and so never went "dead".
Since Thread_B had called join() on Thread_C, Thread_B could not proceed until Thread_C terminated, and that never happened.
To achieve the goal I have used the wait() and notify() methods instead.
Below is new code that makes clear how to achieve the goal using the wait() and notify() methods.
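(The poster's original listing was not preserved in this copy, so the following is a sketch along the lines described; class, variable, and method names are my own. Thread_A produces the values, Thread_B hands them one at a time to Thread_C, and wait()/notify() replaces the earlier join()-based coordination.)

    import java.util.ArrayList;
    import java.util.List;

    public class ProducerConsumerDemo {

        private static final List<Integer> values = new ArrayList<>();
        private static boolean done = false;

        // Slot shared between B and C: holds the count C must print.
        private static Integer slot = null;
        private static final Object slotLock = new Object();

        public static void main(String[] args) {
            new Thread(ProducerConsumerDemo::produce, "Thread_A").start();
            new Thread(ProducerConsumerDemo::read, "Thread_B").start();
            new Thread(ProducerConsumerDemo::write, "Thread_C").start();
        }

        // Thread_A: produce the values (no console input in this version).
        private static void produce() {
            synchronized (values) {
                for (int n : new int[] {20, 5, 10, 50}) {
                    values.add(n);
                    values.notify();          // wake B if it is waiting
                }
                done = true;
                values.notify();
            }
        }

        // Thread_B: take values one by one; wait until C finishes each one.
        private static void read() {
            try {
                while (true) {
                    int n;
                    synchronized (values) {
                        while (values.isEmpty() && !done) values.wait();
                        if (values.isEmpty()) {   // producer finished: send poison pill
                            synchronized (slotLock) { slot = -1; slotLock.notify(); }
                            return;
                        }
                        n = values.remove(0);
                    }
                    synchronized (slotLock) {
                        slot = n;
                        slotLock.notify();                    // wake C
                        while (slot != null) slotLock.wait(); // wait for C to finish
                    }
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }

        // Thread_C: print '#' n times for each value, then signal B.
        private static void write() {
            try {
                synchronized (slotLock) {
                    while (true) {
                        while (slot == null) slotLock.wait();
                        int n = slot;
                        if (n < 0) return;                    // poison pill: all done
                        StringBuilder sb = new StringBuilder();
                        for (int i = 0; i < n; i++) sb.append('#');
                        System.out.println(sb);  // stand-in for writing to a file
                        slot = null;
                        slotLock.notify();       // tell B it may read the next value
                    }
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }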
( Note : The code now does not take any input from the user, instead the producer produce the values) | <urn:uuid:09ffe94d-e8c6-44f3-bb1e-bf7aeabd8196> | 2.78125 | 425 | Q&A Forum | Software Dev. | 78.8095 |
[N] 2005 Hongshanornis longicresta
Zhonghe Zhou and Fucheng Zhang. (2005). Discovery of an ornithurine bird and its implication for Early Cretaceous avian radiation. Proc. Natl. Acad. Sci. USA. Published online before print December 12, 2005.
Abstract: "An ornithurine bird, _Hongshanornis longicresta_ gen. et sp. nov., represented by a nearly complete and articulated skeleton in full plumage, has been recovered from the lacustrine deposits of the Lower Cretaceous Jehol Group in Inner Mongolia, northeast China. The bird had completely reduced teeth and possessed a beak in both the upper and lower jaws, representing the earliest known beaked ornithurine. The preservation of a predentary bone confirms that this structure is not unique to ornithischian dinosaurs but was common in early ornithurine birds. This small bird had a strong flying capability with a low aspect ratio wing. It was probably a wader, feeding in shallow water or marshes. This find confirms that the aquatic environment had played a key role in the origin and early radiation of ornithurines, one branch of which eventually gave rise to extant birds near the Cretaceous/Tertiary boundary. This discovery provides important information not only for studying the origin and early evolution of ornithurines but also for understanding the differentiation in morphology, body size, and diet of the Early Cretaceous birds." | <urn:uuid:8f491adb-2a06-420c-8fbd-76b40fec16f4> | 2.984375 | 319 | Academic Writing | Science & Tech. | 34.994344 |
THE GPS TIME & SPATIAL REFERENCE SYSTEMS
To fully understand the operations, as well as the mathematical basis, of GPS it is necessary to know the definition and implementation of the various time and spatial reference systems which are utilised in one form or another:
Spatial or coordinate ("datum") reference systems: the World Geodetic System 1984 (WGS84) and the International Terrestrial Reference Frame (ITRF), both described below.
Time reference systems: dynamical time, atomic time (on which GPS Time is based), and sidereal time, together with the Julian Date conventions used for counting days.
World Geodetic System 1984 is defined and maintained by the U.S. National Imagery and Mapping Agency (NIMA), formerly known as the Defense Mapping Agency (DMA), as a global geodetic datum (D.M.A., 1991). It is the datum to which all GPS positioning information is referred, by virtue of being the reference system of the Broadcast Ephemeris. (Prior to January 1987, the system in use was WGS72.) The realisation of the WGS84 satellite datum is the catalogue of coordinates of over 1500 geodetic stations (most of them active or past tracking stations) around the world. They fulfil the same function as national geodetic stations: they provide the means by which a position can be related to a datum. WGS84 is an earth-fixed Cartesian coordinate system with:
- its origin at the earth's centre of mass,
- the Z-axis directed towards the conventional terrestrial pole,
- the X-axis passing through the intersection of the zero (Greenwich) meridian plane and the equator, and
- the Y-axis completing a right-handed system.
The four defining parameters of the WGS84 ellipsoid are:
- semi-major axis a = 6378137.0 m,
- flattening f = 1/298.257223563,
- earth's angular rotation rate ω = 7292115 × 10^-11 rad/s, and
- geocentric gravitational constant GM = 3986005 × 10^8 m^3/s^2.
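To illustrate how these ellipsoid parameters are used in practice (this example is mine, not part of the original text), the sketch below converts geodetic latitude, longitude and ellipsoidal height on WGS84 into the earth-fixed Cartesian XYZ coordinates just described:

    // Geodetic coordinates on the WGS84 ellipsoid to earth-fixed XYZ, in metres.
    public class Wgs84 {
        static final double A = 6378137.0;            // semi-major axis (m)
        static final double F = 1.0 / 298.257223563;  // flattening
        static final double E2 = F * (2.0 - F);       // first eccentricity squared

        // phi = latitude, lam = longitude (radians); h = height above ellipsoid (m).
        static double[] geodeticToEcef(double phi, double lam, double h) {
            double sinPhi = Math.sin(phi);
            double n = A / Math.sqrt(1.0 - E2 * sinPhi * sinPhi); // prime vertical radius
            double x = (n + h) * Math.cos(phi) * Math.cos(lam);
            double y = (n + h) * Math.cos(phi) * Math.sin(lam);
            double z = (n * (1.0 - E2) + h) * sinPhi;
            return new double[] { x, y, z };
        }

        public static void main(String[] args) {
            // A point on the equator at the Greenwich meridian, at zero height:
            double[] xyz = geodeticToEcef(0.0, 0.0, 0.0);
            System.out.println(xyz[0]); // 6378137.0, i.e. the semi-major axis
        }
    }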
The relationships between WGS84, as well as other global datums, and local geodetic datums have been determined empirically, and transformation parameters of varying quality have been derived (see SOLER & HOTHEM, 1989; STEED, 1990; and the Table in section 1.2.3). Reference systems are periodically redefined, for various reasons, such as when the primary tracking technology changes (for example when the TRANSIT system was superseded by GPS), or if the configuration of ground stations alters radically enough to justify a recomputation of the global datum coordinates. The result is generally a small refinement in the datum definition, and a change in the numerical values of the coordinates. For example, there is a small difference in the definition of WGS84 from TRANSIT and GPS (see the Table in section 1.2.3): origin offsets of approximately 10 cm in the z-direction and a scale difference of about 0.1 ppm.
However, with dramatically increasing tracking accuracies another phenomenon impacts on datum definition and its maintenance: the motion of the tectonic plates across the earth's surface, or "continental drift" as it is often known (it is assumed there is relatively little vertical motion). This motion is measured in centimetres per year, with the fastest rates being over 10 cm/year. Nowadays this motion can be monitored and measured to centimetre accuracy, on a global annual-average basis. In 1994 the GPS reference system underwent a subtle change to WGS84(G730) to bring it into the same system as used by the International GPS Service to produce precise GPS ephemerides (section 6.2.3 and section 12.2.1).
The WGS84 system is the most widely used global reference system because it is the system in which the GPS satellite coordinates are expressed in the Navigation Message (section 3.3.3). Other satellite reference systems have been defined but these have mostly been for "scientific" purposes. However, since the mid 1980's, geodesists have been using GPS to measure crustal motion, and to define more precise satellite datums. The latter were essentially by-products of the sophisticated data processing, which included the computation of the GPS satellite orbits. These GPS surveys required coordinated tracking by GPS receivers spread over a wide region during the period of GPS "campaigns". Little interest was shown in these alternative datums until:
In 1991, the International Association of Geodesy decided to establish the International GPS Service (IGS) to promote and support activities such as the maintenance of a permanent network of GPS tracking stations, and the continuous computation of the satellite orbits and ground station coordinates (DIXON, 1995; ZUMBERGE et al, 1995). Both of these were preconditions to the definition and maintenance of a new satellite datum independently of the DMA network (used to maintain the WGS datum) and the Control Segment monitor station network used to provide the data for the operational computation of the GPS broadcast ephemerides. After a test campaign in 1992, routine activities commenced at the beginning of 1994. The network now consists of about 40 core tracking stations located around the world, supplemented by more than 100 other stations (some continuously operating, others only intermittently). The precise orbits of the GPS satellites are available from the IGS with several days' delay. See section 12.2.1 for further details concerning the IGS.
The definition of the reference system in which the coordinates of the tracking stations are expressed and periodically redetermined is the responsibility of the International Earth Rotation Service (IERS). The reference system is now known as the International Terrestrial Reference Frame (ITRF), and its definition and maintenance is dependent on a suitable combination of Satellite Laser Ranging, Very Long Baseline Interferometry and GPS coordinate results. (Increasingly it is the GPS system that is providing most of the data.) Each year a new combination of precise tracking results is performed, and the resulting datum is referred to as ITRFxx, where "xx" denotes the year "epoch". A further characteristic that sets the ITRF series of datums apart from the WGS, is that the definition not only consists of the station coordinates, but also their velocities due to continental and regional tectonic motion. Hence, it is possible to determine station coordinates within the datum, say ITRF96, at some "epoch" such as the year 2000, by applying the velocity information and predicting the coordinates of the station at any time into the future (or the past). The WGS84(G730) reference system is identical to that of ITRF91 at epoch 1994.0.
Such ITRF datums, initially dedicated to geodynamical applications requiring the highest possible precision, have been used increasingly as the fundamental basis for the redefinition of many nations' geodetic datums. For example, the new Australian datum, known as the Geocentric Datum of Australia (MANNING & HARVEY, 1994), is defined as ITRF92 at epoch 1994.0 (section 12.1.5). Of course other countries are free to chose any of the ITRF datums (it is usually the latest), and define any epoch for their national datum (the year of GPS survey, or some reference date in the future, such as the year 2000). Only if both the ITRF datum and epoch are the same, can it be claimed that two countries have the same geodetic datum. Note, the recent redefinition of the WGS84 datum was made in order to bring it into line with this new international approach.
In order to appreciate the role of time in GPS data analysis it is necessary to review briefly the various time systems involved, and their associated time scales. Some of these definitions are standard and inherent to all space positioning technologies, while others are particular to the GPS system. In general there are three different time systems that are used in space geodesy (KING et al, 1987; LANGLEY, 1991d; SEEBER, 1993):
Dynamical Time is the uniform time scale which governs the motions of bodies in a gravitational field: that is, the independent argument in the Equations of Motion for a body according to some particular gravitational theory, such as Newtonian Mechanics or General Relativity. Atomic Time is time defined by atomic clocks, and is the basis of a uniform time scale on the earth. Sidereal Time is measured by the earth's rotation about its axis, and although sidereal time was once used as a measure of time it is much too irregular by today's standards. Rather, it is a measure of the angular position of a site on the earth with respect to a celestial body (though in keeping with traditional practice, its units are seconds of time rather than seconds of arc). (When the celestial body is the sun, the time scale can also be referred to as Solar Time.) Within each of these broad categories there are specific measures of time, or time scales, that are commonly used in space geodesy and astronomy.
Some time scales have a special importance because they provide the "benchmark" or reference scale within a particular time system. This often occurs by international convention. Often, however, the time scales to which we have access are merely realisations of the "true" or definitive reference time scale (or scales) associated with each time system.
The Julian Date (JD) defines the number of mean solar days (each of which is 86400 SI seconds in length) elapsed since the epoch of 1.5 January 4713 B.C. (midday on January 1). The Modified Julian Date (MJD) is obtained by subtracting 2400000.5 from the JD. (MJD therefore commences at midnight.) The standard epoch for GPS Time (0 hr, 6 January 1980) is therefore MJD 44244.0.
The date conversions described below are taken from HOFMANN-WELLENHOF et al. (1998), and are valid for the period March 1900 to February 2100. The JD can be computed from the year number Y (a full four digit integer), integer month number M , integer day number D , and the real-valued time in hours H:
JD = Int[ 365.25 y ] + Int[ 30.6001 (m + 1) ] + D + H/24 + 1720981.5
where Int denotes the integer part of the number, and:
y = Y - 1 and m = M + 12 if M ≤ 2
y = Y and m = M if M > 2
The reverse conversion is carried out stepwise by first defining the quantities:
b = Int[ JD + 0.5 ] + 1537
c = Int[ (b - 122.1) / 365.25 ]
d = Int[ 365.25 c ]
e = Int[ (b - d) / 30.6001 ]
The date parameters are then obtained:
D = b - d - Int[ 30.6001 e ] + Frac[ JD + 0.5 ]
M = e - 1 - 12 · Int[ e / 14 ]
Y = c - 4715 - Int[ (7 + M) / 10 ]
where Frac denotes the fractional part of a number.
A further useful relation is between JD and the GPS week number:
GPSWeek = Int[ (JD - 2444244.5) / 7 ]
The GPS week starts on Saturday midnight (Sunday morning), and runs for 604800 seconds. The "GPS Week Rollover" occurred on the weekend of 21-22 August 1999, when the GPS week number was reset to week 0 ("week zero"). Hence if the week number computed above is 1024 or greater, subtract 1024 to obtain the week number actually broadcast by the satellites.
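These conversions translate directly into code. A minimal sketch (class and method names are mine, not from the text; valid over the same March 1900 to February 2100 range as the formulas):

    public class GpsDates {

        // Julian Date from year Y, month M, day D, and real-valued hours H.
        static double julianDate(int Y, int M, int D, double H) {
            int y = (M <= 2) ? Y - 1 : Y;
            int m = (M <= 2) ? M + 12 : M;
            return Math.floor(365.25 * y) + Math.floor(30.6001 * (m + 1))
                   + D + H / 24.0 + 1720981.5;
        }

        // Reverse conversion, stepwise as above; the fractional part of the
        // day (Frac[JD + 0.5]) is omitted here for an integer result.
        static int[] calendarDate(double jd) {
            long b = (long) Math.floor(jd + 0.5) + 1537;
            long c = (long) Math.floor((b - 122.1) / 365.25);
            long d = (long) Math.floor(365.25 * c);
            long e = (long) Math.floor((b - d) / 30.6001);
            int day = (int) (b - d - (long) Math.floor(30.6001 * e));
            int month = (int) (e - 1 - 12 * (e / 14));
            int year = (int) (c - 4715 - (7 + month) / 10);
            return new int[] { year, month, day };
        }

        // GPS week number, with the August 1999 rollover applied.
        static int gpsWeek(double jd) {
            int week = (int) Math.floor((jd - 2444244.5) / 7.0);
            return (week >= 1024) ? week - 1024 : week; // broadcast week number
        }

        public static void main(String[] args) {
            double jd = julianDate(1980, 1, 6, 0.0);    // GPS standard epoch
            System.out.println(jd);                     // 2444244.5 (MJD 44244.0)
            int[] ymd = calendarDate(jd);
            System.out.println(ymd[0] + "-" + ymd[1] + "-" + ymd[2]); // 1980-1-6
            System.out.println(gpsWeek(julianDate(1999, 8, 22, 0.0))); // 0: first week after rollover
        }
    }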
© Chris Rizos, SNAP-UNSW, 1999 | <urn:uuid:c72e0329-b95f-4a9f-a081-f2b426d39d56> | 3.671875 | 2,366 | Academic Writing | Science & Tech. | 47.128807 |
When you create a local binding for a variable, that binding takes effect only within a limited portion of the program (see Local Variables). This section describes exactly what this means.
Each local binding has a certain scope and extent. Scope refers to where in the textual source code the binding can be accessed. Extent refers to when, as the program is executing, the binding exists.
By default, the local bindings that Emacs creates are dynamic bindings. Such a binding has indefinite scope, meaning that any part of the program can potentially access the variable binding. It also has dynamic extent, meaning that the binding lasts only while the binding construct (such as the body of a let form) is being executed.
Emacs can optionally create lexical bindings. A lexical binding has lexical scope, meaning that any reference to the variable must be located textually within the binding construct. It also has indefinite extent, meaning that under some circumstances the binding can live on even after the binding construct has finished executing, by means of special objects called closures.
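Since this section is about Emacs Lisp, the following Java sketch (mine, and only an analogy) may still help visualise indefinite extent: a lexically scoped binding captured by a closure survives after its binding construct, here a method body, has finished executing.

    import java.util.function.IntSupplier;

    public class ClosureDemo {
        static IntSupplier makeCounter() {
            int[] count = {0};          // local binding, lexically scoped
            return () -> ++count[0];    // the closure keeps the binding alive
        }

        public static void main(String[] args) {
            IntSupplier next = makeCounter();    // makeCounter has returned...
            System.out.println(next.getAsInt()); // 1 ...but the binding lives on
            System.out.println(next.getAsInt()); // 2
        }
    }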
The following subsections describe dynamic binding and lexical binding in greater detail, and how to enable lexical binding in Emacs Lisp programs. | <urn:uuid:e4193d02-2b98-444f-b26f-8baf2e8c2cec> | 3.109375 | 242 | Documentation | Software Dev. | 37.128004 |
Amidst all the concern about the Deepwater Horizon spill, global warming, and a myriad other issues affecting fossil fuel usage, one item is often overlooked, an item that many people don’t want to think about too much.
Oil is a finite resource. One day, it will eventually run out. The global economy is based on the ready availability of portable, convenient energy. Not to mention the chemicals and synthetic materials made from petroleum, everything from clothing to kids’ toys to road surfaces.
At some point in time, oil supplies will reach their maximum output, and after that point, world-wide production will begin to fall as the easily exploited reserves are exhausted. Demand for oil will outstrip the available supply. This phenomenon is known as Peak Oil, and many fear that we have already passed it. Peak Oil is often illustrated by the Hubbert Curve, a classic bell curve showing a peak and rapid production decline.
The consequences are pretty dire, and the seriousness of the situation depends on how many years worth of oil we have left, and what we can do to shift our dependence away from oil. At the very least, a change in lifestyle will be required, especially for those who live in the wealthy western nations. More than that, though, entire economies will have to shift. Iceland has gotten a start on this process, with its plans to be fossil-fuel-free by 2050. However, Iceland is small, both in geographical size and population, and even they will take decades to eliminate their dependence on oil.
Some warn that the global economy will not be able to respond in time, and that the world is on the brink of a next great depression, one that will last for many generations. Some of these doomsayers also sell courses on how to survive the coming resource crash, in a manner reminiscent of the Y2K “crisis”, so there is an element of self-interest here that cannot be discounted. Others, with an equal, or perhaps even greater level of self-interest, maintain that peak oil is not an issue. Much of the latter’s criticism of the methodology surrounding Peak Oil and the Hubbert curve appears valid.
One of the problems around the prognostication of Peak Oil is the timing. Experts have been predicting Peak Oil since the late 1970s, each time revising their predictions based on new data. Thus there is a great deal of uncertainty surrounding the timing and effects of Peak Oil. For some, the boy may have cried wolf a few too many times.
There is no doubt that we will eventually run out of oil, or at least cheap oil. Against that day, we need to start moving towards establishing alternative energy sources for transportation and infrastructure. We need to do things like moving to concrete paving rather than asphalt, electric cars, and renewable sources of base power. Most importantly, however, we need to start adjusting our lifestyles. Conservation needs to assume a greater importance in all energy strategies, and issues like fuel economy and emission standards need to be addressed and improved as rapidly as possible. One day the oil will run oout, and what happens to us then depends on what we do now. | <urn:uuid:e11f76ae-6b9c-48f9-93d4-3881e2020b5c> | 2.796875 | 653 | Nonfiction Writing | Science & Tech. | 47.838016 |
Earth's Climate Thermostat
by Larry Vardiman, Ph.D.
Global warming, the greenhouse effect, and climate change continue to receive attention in the media and scientific circles. A recent decision by President George W. Bush not to ratify the Kyoto Accords on Carbon Dioxide (CO2) emissions has spurred all kinds of political commentary. Yet, scientists remain divided on the legitimacy of attributing global warming to human releases of CO2. Approximately half of the atmospheric science community remains skeptical of global warming. Some of these atmospheric experts have gone so far as to sign statements refuting the results of theoretical climate models used to predict future warming.1
In June 1990 I first discussed some of the issues at play in the greenhouse controversy. I went so far as to suggest that cloudiness had not been treated adequately in global climate models and could function as a negative feedback mechanism in the atmosphere which served to limit global warming.2 I have also discussed global warming in light of what I believe are mistaken interpretations of the Ice Age.3, 4 Since 1990 the connection between carbon dioxide and global warming has become slightly clearer, but significant governmental regulation has been proposed. In this article I will attempt to restrict my comments to the scientific aspects of the controversy. Fortunately, cooler heads currently prevail in Washington, D.C., forestalling implementation of unnecessary policies advocated by our former Vice President.5
Atmospheric Concentrations of CO2
The concentration of CO2 at the observatory on Mauna Loa, Hawaii, has exhibited a continuing increase since 1990.6 Figure 1 shows this increase through 2000 with annual oscillations due to the summer growth of continental and marine vegetation in the northern hemisphere which extracts CO2 from the atmosphere in the summer and releases it in the winter. Solid horizontal lines show estimated levels which prevailed in 1900 and 1940.7 The magnitude of the atmospheric increase during the 1980s and 1990s was about three gigatons of carbon (Gt) per year. This compares to 5.5 Gt per year estimated for the human release of CO2, primarily from coal, oil, and natural gas, and the production of cement. However, these numbers are small compared to the reservoirs of carbon: 750 Gt in the atmosphere, 1,000 Gt in the surface ocean, 2,200 Gt in the vegetation, soils, and detritus, and 38,000 Gt in the intermediate and deep oceans.
Figure 1. Atmospheric CO2 concentration at Mauna Loa, Hawaii.6 Approximate global concentrations of CO2 in 1900 and 1940 are also shown as horizontal lines.7
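As a rough cross-check on these figures (my arithmetic, not the author's): an atmospheric increase of about 3 Gt per year against human emissions of 5.5 Gt per year implies that only a little over half (roughly 3/5.5 ≈ 55%) of the emitted carbon remains airborne, and that the 750 Gt atmospheric reservoir is growing by roughly 0.4% per year.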
It is apparent that CO2 is increasing in the atmosphere, whether for natural or human causes. However, does this rise in CO2 result in increased warming of the atmosphere? Figure 2 shows the annual mean surface temperature in the contiguous United States between 1895 and 1997 as compiled by the National Climate Data Center.8 The trend line for this 103-year period has a slope of 0.22°C per century. Note, however, that the slope from 1940 to 1997 is much lower at 0.08°C per century during this period when the greatest increase in CO2 was observed. The greatest rate of estimated warming does not seem to coincide with the period of greatest increase in atmospheric CO2.
Questions have been raised about the accuracy of the surface temperature measurements used in these calculations. For example, the so-called "heat island effect" may have caused the temperature observations to be biased to higher values toward the end of the century as weather-observing locations were encroached upon by growing cities. It is well known that temperatures in cities exhibit higher temperatures than surrounding countrysides. It is likely that 100-year-old weather stations have a built-in warming trend due to this effect.
Figure 2. Annual mean surface temperature in the U.S. between 1895 and 1997.8
In an attempt to remove this bias, temperature trends were studied in radiosonde measurements from 63 upper-air stations between 90°N and 90°S latitude (1958 to 1996) and in satellite microwave sounding unit measurements of global lower-tropospheric temperatures between 83°N and 83°S latitude (1979 to 1997). Both of these systems show a slight decline in temperature since 1979.
These temperature trends all fly in the face of projections by the United Nations Intergovernmental Panel on Climate Change (IPCC).9 The IPCC has routinely stated that increases in global atmospheric temperatures by as much as 5°C (10°F) would occur if the concentration of CO2 were to double. The IPCC has been the primary source of scientific expertise which has led to the Kyoto agreement that CO2 emissions be cut drastically in developed countries. Recommendations by the IPCC are based primarily on the results of global climate models which still do not include an adequate treatment of many physical mechanisms like cloud cover.
A Climate Thermostat
In March of 2001 a significant paper by Richard Lindzen of MIT was published in the Bulletin of the American Meteorological Society which addresses the cloud-cover feedback mechanism.10 Dr. Lindzen has been a long-time critic of the IPCC and a highly respected researcher in the atmospheric sciences. His paper reports that clouds in the tropics respond to warmer sea-surface temperatures (SST) by permitting long-wave radiation to space to increase, causing greater cooling of the atmosphere. This negative feedback mechanism would more than cancel all the positive feedbacks included in the more sensitive current climate models.
Dr. Lindzen calculated the average SST as a function of cloud cover and found a strong negative relationship. His explanation for the negative relationship is that warmer SSTs lead to higher humidities and greater convective activity. This greater convective activity is more efficient in creating rainfall, leaving less moisture to form cirrus anvils which prevent long-wave radiation from escaping to space. Consequently, warmer SSTs lead to more rapid cooling of the atmosphere and stabilization of the earth's temperature.
Richard Lindzen's mechanism works differently than the one I proposed ten years ago, but the result is similar. In either case, God has designed the atmosphere to maintain a uniform temperature, whether there is a cooling or a warming tendency. Only under catastrophic conditions will the atmosphere experience major changes. This type of catastrophic event occurred during the Genesis Flood, but this event involved unique conditions not applicable now. Under normal circumstances, God has designed the atmosphere with a built-in thermostat which maintains thermal equilibrium. This climate condition does not lead to a runaway greenhouse or a new ice age. The view of a climate with a built-in thermostat contrasts strongly with the conventional worldview that the atmosphere is unstable and a minor perturbation could lead to a natural catastrophe of either fire or ice.
Seitz, Frederick, 1998, Petition Project: Global Warming Review, see email@example.com, January.
Vardiman, Larry, 1990, "The Christian and the Greenhouse Effect," ICR Impact No. 204, June.
Vardiman, Larry, 1994, "Out of Whose Womb Came the Ice?" ICR Impact No. 254, August.
Vardiman, Larry, 1995, "A Faulty Climate Trigger," ICR Impact No. 261, March.
Gore, Albert, Jr., 1992, Earth in the Balance, Houghton Mifflin.
Keeling, C.D. and T.P. Whorf, 1997, Trends Online: A Compendium of Data on Global Change, Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory.
Idso, S.B., 1989, Carbon Dioxide and Global Change: Earth in Transition, IBR Press, 7.
Brown, W.O. and Heim, R.R., 1996, National Climate Data Center, Climate Variation Bulletin , 8, Historical Climatology Series 4-7, December.
Houghton, John T. et al., 1995, Report of the Intergovernmental Panel on Climate Change, Cambridge University Press.
Lindzen, Richard S., Ming-Dah Chou, and Arthur Y. Hou, 2001, "Does the Earth Have an Adaptive Infrared Iris?" Bulletin of the American Meteorological Society, 82, 417-432, March.
* Dr. Vardiman is Chairman of the Astrogeophysics Department at ICR. | <urn:uuid:70b5154c-42cd-4972-ab8a-817e2513771a> | 3.3125 | 1,714 | Academic Writing | Science & Tech. | 45.023161 |