Because of all the traffic on this post, I wanted to clarify that I am completely convinced that there is lots of plastic in the North Pacific Gyre, and that it is a serious environmental problem. My issue with the plastic:plankton ratio is that it doesn’t accurately measure the amount of plastic.
The Algalita Marine Research Foundation is great at raising awareness of the problem of trash in the North Pacific Gyre. They’ve tirelessly lobbied for political change, coined terms like “plastic soup,” worked in the schools, and are sailing the Junk raft to Hawaii as we speak. However, as part of their quest to make the enormity of the plastic problem understood, they’ve been claiming that there is six times more plastic than plankton in the North Pacific Gyre. The 6:1 ratio has appeared on PBS and in The Seattle Times, and has been repeated all over the internet.
Though I admire Algalita’s work, the 6:1 plastic:plankton ratio is deeply flawed. Worse, it is flawed in a direction that undermines Algalita’s credibility: it may vastly underestimate plankton and overestimate plastic. Here’s why, based on the methodology published in Moore et al.’s 2001 paper in Marine Pollution Bulletin.
1) The mesh in the net was too big, and half the samples were taken at the wrong time of day.
In the Moore et al. (2001) paper, the researchers used a 333 micron (a micron is a millionth of a meter) manta trawl. This means that the holes in the mesh are approximately 333 microns in diameter, though they may stretch somewhat depending on how the net was towed. This is a standard technique for sampling zooplankton.
Just calling all tiny marine life “plankton” and lumping it together makes as little sense as saying that a tree and a beetle are the same because they both live in a forest. So I am going to briefly digress into the difference between phytoplankton and zooplankton. Phytoplankton are essentially tiny floating plants usually with only one cell, while zooplankton are larger floating animals with tons of cells. In areas with lots of nutrients, phytoplankton are relatively big. Diatoms, for example, range from 10-150 microns. But the North Pacific Gyre has very few nutrients, and the most common phytoplankton are very small. A single type of cyanobacteria, Prochlorococcus, accounts for 50% of the total phytoplankton community, but it is so small (less than 1 micron) that it wasn’t even discovered until the 1980s (Karl 1999). The vast, vast majority of life in the North Pacific Gyre is smaller than 8 microns (Karl 1999).
A 333 micron net is way too big to sample phytoplankton; it is designed to sample tiny animals, or zooplankton. The most common types of zooplankton are tiny crustaceans, like copepods, that make a living by grazing phytoplankton. But the tiny plants have to be big enough for them to grasp and put in their mouths. That’s easy in productive waters where phytoplankton are big and the zooplankton can pop them like candy, but hard in nutrient-poor waters where the phytoplankton are very small. So there’s not very much zooplankton at all in the North Pacific Gyre – most of the life there is the very tiny phytoplankton.
To make matters more complicated, most zooplankton hang out hundreds of meters below the surface during the day, and only come to the surface at night. Otherwise, they’d be eaten in an instant by sight predators like birds or fish. (This is called vertical migration.) Sampling for zooplankton during the day is like looking for an open bar at 10 AM. If you look hard enough you’ll find one or two, but you really have to wait until full night for the party to start. The Moore et al. (2001) paper states that the samples were evenly split between daytime and nighttime hours, but that means that the daytime samples probably underestimated zooplankton abundance. Since there isn’t very much zooplankton in the Gyre anyway, sampling during the day is going to mean that you won’t get much of anything — except plastic.
2) The 6:1 ratio is based on dry weight, but plankton is 95% water.
Moore et al. (2001) calculated the ratio based on the dry weight of the stuff they scooped up in their manta trawl. That means they put everything in an oven until all of the water had evaporated. That’s not going to change the weight of plastic, but drying out a zooplankter is like drying out Jello: there’s not going to be very much left.
Therefore, comparing the dry weight of plastic to the dry weight of zooplankton is going to vastly overestimate the amount of plastic. To be fair, the ratio might accurately reflect how, for example, an albatross’s stomach might deal with the different masses; plastic just sits there, while zooplankton would be digested and the water removed. Nonetheless, the ratio is a poor reflection of how much plastic is out there. A more accurate way to measure it might have been displacement volume: How much space is taken up by plastic versus space taken up by plankton?
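A quick back-of-the-envelope calculation shows how large that distortion can be. This is a sketch with illustrative numbers only: it applies the 95% water figure from above to a hypothetical 6:1 dry-weight ratio, and the function name is mine, not anything from Moore et al.

```javascript
// Back-of-the-envelope sketch with illustrative numbers only.
// Assumption: plastic loses essentially no mass when dried, while
// plankton loses its water fraction (taken here as 95% of wet mass).
function wetWeightRatio(dryRatio, planktonWaterFraction) {
  // one unit of plankton dry mass corresponds to this much wet mass:
  const wetMassPerDryMass = 1 / (1 - planktonWaterFraction);
  return dryRatio / wetMassPerDryMass;
}

// A 6:1 dry-weight plastic:plankton ratio with 95%-water plankton
// shrinks to roughly 0.3:1 on a wet-weight basis.
const wetRatio = wetWeightRatio(6, 0.95);
```

On these assumptions, a 6:1 ratio by dry weight corresponds to roughly a 0.3:1 ratio by wet weight, i.e., about three times more plankton than plastic.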
3) Plankton populations fluctuate wildly, and maybe plastic does too.
The 6:1 plastic:plankton ratio is based on a single moment in time — four days in August 1999, to be exact. Plankton populations often bloom and bust, depending on the season and the oceanic conditions. For example, in the winter, storms stir up the water, which brings more nutrients to the surface, which causes phytoplankton to bloom. There’s no way to tell from a single point in time whether the plankton is blooming or busting, whether it’s a good year or a bad year, or whether that particular moment is representative of “normal conditions.” So even if there was a 6:1 plastic:plankton ratio on those days in August 1999, the ratio could have been completely different in October 1999, or could be completely different now. There is no constant plankton amount. (There’s probably no constant plastic amount, either, depending on storm mixing.)
I don’t mean to criticize Algalita’s mission to reduce plastics in the ocean. I deeply admire it. But I see the 6:1 ratio all over the media coverage of the North Pacific Gyre, and I fear in the end it will backfire on Algalita, and consequently on the whole issue of marine plastic debris. The constant hammering on the flawed 6:1 ratio makes it easy for oceanographers to dismiss the problem, the plastic lobby to discredit it, and regular people to ignore it, which would be the worst outcome of all.
Karl, D. M. 1999. Minireviews: A Sea of Change: Biogeochemical Variability in the North Pacific Subtropical Gyre. Ecosystems 2:181-214.
Moore, C. J., S. L. Moore, M. K. Leecaster, and S. B. Weisberg. 2001. A comparison of plastic and plankton in the North Pacific central gyre. Marine Pollution Bulletin 42:1297-1300.
It was all lined up even without the colorful aurora exploding overhead. If you follow the apex line of the recently deployed monuments of Arctic Henge in Raufarhöfn in northern Iceland from this vantage point, you will see that they point due north. A good way to tell is to follow their apex line to the line connecting the end stars of the Big Dipper, Merak and Dubhe, toward Polaris, the bright star near the north spin axis of the Earth projected onto the sky. By design, from this vantage point, this same apex line will also point directly at the Sun at its highest point in the sky during the summer solstice of Earth's northern hemisphere. In other words, the Sun will not set at Arctic Henge during the summer solstice in late June, and at its highest point in the sky it will appear just above the aligned vertices of this modern monument. The above image was taken in late March during a beautiful auroral storm.
Is there an association between the tropospheric ΔT and the intensity of intergalactic cosmic rays (IICR)? Superimposing the graph of oscillations in tropospheric temperature on the graph of the intensity of cosmic rays that collide with the solar wind in the termination shock zone of the Solar System seems to show a direct correlation between variations in tropospheric temperature and the IICR. In addition, it could be that the anomaly in the ICR is causing the anomalies observed in the radiative activity of the Sun. For a deeper review of this subject, read our article on the correlation of ICR with terrestrial global warming.
Climate is not the same as weather. Climate is a set of averaged quantities, complemented with higher-moment statistics that take into account variance, covariance, correlation, etc., which describe the structure and behavior of the atmosphere, the hydrosphere, the cryosphere and the biosphere over a period of time. On the other hand, weather is the set of meteorological conditions that prevail in a given period of time and a determined place; for example, temperature, relative humidity, dew point, atmospheric pressure, rainfall, snowfall, etc.
When we talk about climate change we are referring to changes that occur in the averaged values that characterize a given region over a period of time; for example: A.M. rain; temperature: 80°F; winds: 3 mph SE; humidity: 38%; dew point: 52°F; etc.
Climate always changes, and there are no fixed values for any region of the world. Sometimes we talk about standard values simply for convenience, and to recognize the factor or factors that could modify the structure of an ecosystem at a given moment; for example, when we want to know the factors that could modify the migratory pattern of Monarch butterflies.
Some people, very interested in obtaining political or economic gains, distort scientific concepts, making their audience think that the climate has always been fixed and that the changes we observe at this moment are anomalous. Nevertheless, we only need to read any good book on paleontology to discover that the climate has never been stable or balanced. For example, the Medieval Period included a period of global warming when atmospheric temperatures rose far higher than at the present time. Another case of significant climate change has been spotted on the western coast of Canada, where 7,000 years ago the temperature was so benign that a temperate forest prospered there. The climate in that region then changed so much that the upper layer of the ground froze and the intense cold devastated the whole forest.
Web Application Development
A web application is a programme that can only be accessed via an Internet browser, and the data it provides is primarily stored online. You may not realise it, but web applications are now widespread across the Internet, and you'll already be familiar with them if you're a regular web user.
One of the simplest examples of a web application is the set of forms you fill in to register with an online store. These forms gather your details and add the relevant information to the company's database, which automatically generates a profile for you. This profile is then updated with each purchase you make. Web applications like this streamline the processes involved and ensure that the site can monitor and adapt to the requirements and needs of its customers. In the example of an online store, when an order is placed the system knows whether the item is in stock and can interact with the company's purchasing system to ensure the order is fulfilled. The shopper may then track their purchase through the despatch process through to delivery. In some cases they can also request or access support for the purchase through the application. The e-commerce web application can potentially remain in contact with the customer, notifying them of new products that may be of interest in line with their profile or purchasing history.
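The flow just described (registration creates a profile, orders check stock, and the shopper can track despatch) can be sketched in a few lines. This is a hedged illustration, not a real implementation: every name here is hypothetical, and the in-memory objects stand in for a real server-side database.

```javascript
// Minimal sketch of the store workflow described above.
// The in-memory objects stand in for a real database; all names
// (registerCustomer, placeOrder, trackOrder, SKU-1) are hypothetical.
const db = {
  customers: {},            // one profile per customer, keyed by email
  stock: { "SKU-1": 2 },    // units of each item currently in stock
  orders: [],               // order history
};

function registerCustomer(email, details) {
  // The registration form feeds straight into the company's database,
  // automatically generating a profile for the customer.
  db.customers[email] = { ...details, purchases: [] };
}

function placeOrder(email, sku) {
  // The system knows whether the item is in stock before accepting the order.
  if ((db.stock[sku] || 0) < 1) return { status: "out of stock" };
  db.stock[sku] -= 1;
  const order = { email, sku, status: "despatched" };
  db.orders.push(order);
  db.customers[email].purchases.push(sku); // profile updated with each purchase
  return order;
}

function trackOrder(email) {
  // The shopper can follow their purchases through the despatch process.
  return db.orders.filter((o) => o.email === email).map((o) => o.status);
}
```

Calling placeOrder once the stock has run out returns an "out of stock" status instead of recording an order; a real web application would perform the same checks server-side against a persistent database.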
If you think for a second about how many online forms you use or have used at some point, you'll realise the range of web applications that are available and the possibilities they open up. They don't just save time and remove much of the human error and staff overhead from a process; they also speed up requests and make information far more accessible, so it can be formatted, verified and utilised more effectively across a business.
Energetic Ring Marks Spot That Leads to Discovery of Neutron Star
The Chandra image of the distant supernova remnant SNR G54.1+0.3 reveals a bright ring of high-energy particles with a central point-like source. This observation enabled scientists to use the giant Arecibo Radio Telescope to search for and locate the pulsar, or neutron star that powers the ring. The ring of particles and two jet-like structures appear to be due to the energetic flow of radiation and particles from the rapidly spinning neutron star rotating 7 times per second.
During the supernova event, the core of a massive star collapsed to form a neutron star that is highly magnetized and creates an enormous electric field as it rotates. The electric field accelerates particles near the neutron star, producing jets blasting away from the poles as well as a disk of matter and antimatter flowing away from the equator at high speeds. As the equatorial flow rams into the particles and magnetic fields in the nebula, a shock wave forms. The shock wave boosts the particles to extremely high energies, causing them to glow in X-rays and produce the bright ring (see inset).
The particles stream outward from the ring and the jets to supply the extended nebula, which spans approximately 6 light years.
The features observed in SNR G54.1+0.3 are very similar to other "pulsar wind nebulas" found by Chandra in the Crab Nebula, the Vela supernova remnant, and PSR B1509-58. By analyzing the similarities and differences between these objects, scientists hope to better understand the fascinating process of transforming the rotational energy of the neutron star into high-energy particles with very little frictional heat loss.
Species at Risk
At the same time that heroic efforts are being undertaken to restore wolves in the lower 48 states, wolves in Alaska are often the victims of controversial predator-control programs.
Millions of bison once thundered across the Great Plains. Today, wild bison are making a small comeback, but they need more room to roam.
Amphibians are facing many threats to their survival. Chytrid fungus, commercial trade of amphibians, habitat loss, pollution, pesticides, competition from invasive species and climate change are wreaking havoc on their numbers.
Although still endangered, black-footed ferrets are starting to make a comeback, and Defenders of Wildlife is helping to make this a remarkable wildlife success story.
This bird once dominated the skies over the western U.S. But through habitat loss and toxins, humans have put the condor in a steep decline.
As a result of historic trapping and continued habitat loss, there may be as few as 1,000 lynx remaining in the lower 48.
Climate change is now one of the leading threats to wildlife. Find out what Defenders is doing to help animals around the country survive in a warming planet.
Cook Inlet belugas are the most isolated and genetically distinct of Alaska’s five beluga populations, separated from the others by the geographic barrier of the Alaska Peninsula for over 10,000 years. Their previous range had been most or all of Cook Inlet, but today that range is much smaller.
The law that is most vital to protecting wildlife often needs protection of its own.
The Florida black bear, a unique subspecies of the American black bear, once numbered an estimated 12,000 animals that roamed throughout Florida and into the southern portions of adjacent states.
Two types of derecho may be distinguished based largely on the organization and behavior of the associated derecho-producing convective system. The type of derecho most often encountered during the spring and fall is a serial derecho. The second type is a progressive derecho, associated with a relatively short line of thunderstorms.
Derechos are widespread, long-lived windstorms associated with a band of rapidly moving showers or thunderstorms. The term "derecho," coined by Dr. Gustavus Hinrichs in 1888, is a Spanish word meaning "direct" or "straight ahead."
Although a derecho's strength can produce destruction similar to tornadoes, the damage pattern produced by these events will occur along relatively straight lines. Thus the term, straight-line wind damage.
Derechos are produced by families of downburst clusters. Downburst clusters have overall lengths of 50 to 60 miles (80 to 100 kilometers).
A downburst cluster itself is made up of several downbursts. A downburst is an area of strong, often damaging wind produced by a convective downdraft with the overall size of the downburst varying from 4 to 6 miles (8 to 10 kilometers).
Within the downbursts are microbursts: smaller pockets of more intense wind.
While not shown in the illustration at right, within the microbursts are even smaller pockets of extreme wind called burst swaths. Burst swaths can range from 50 to 150 yards (45 to 140 meters) long. The damage pattern from burst swaths can often resemble a path of a tornado.
Because of this nested structure, damage produced by these windstorms is highly variable along the path. Damage surveys following derecho events have shown that within large areas of overall damage are much smaller pockets of intense damage.
It is not uncommon for one house to be nearly destroyed while adjacent houses have relatively minor damage.
Derechos are produced by long-lived thunderstorm complexes that produce bow echoes.
- Also, see: About Derechos.
The script runs again when you release and press the scroll button again. This time the "if" argument can't be evaluated to a null, since we have a definite value (5000) stored in the up_last_click variable. Let's assume you pressed the button exactly one quarter (1/4) second after the first press, and let's step through this script again:
up_first_click = getTimer( );
Now the up_first_click variable is updated to a new value by the "getTimer" action. Since one quarter second translates to 250 ms, up_first_click's new value will be 5250 (5000 + 250 = 5250). With that done, we move to the "if" condition, and fill in our numeric values for the up_first_click and up_last_click variables: 5250 - 5000 = 250 ms. Using this inequality expression, Mick has set the double-click threshold to 350 ms. As long as the pause between button presses is equal to or less than 350 ms, as it is in our example, the "if" argument returns "true" and its embedded action is executed. If the pause between clicks is longer than 350 ms, the continuous feedback behavior is triggered instead.
The embedded action, "scroll_shape._y = scroll_shape.logic.top;" immediately sets the _y position of scroll_shape to its uppermost limit, top. This is what creates the jump effect. Since our example "if" argument returns "true," the continuous feedback action embedded within the "else" clause is skipped. (A "true" value for the "if" statement voids the "else" clause below it.) Either way though, the action loop in logic is always triggered on (press) because the "gotoAndPlay("scroll_logic")" action isn't embedded within either the "if" or the "else." Conversely, logic is always stopped on (release).
The script for the bottom button is again structured very similarly to the top's, however the double-click variables are named down_first_click and down_last_click. Also, the jump action embedded within the "if" statement sets scroll_shape._y to bottom, not top. The code in the source FLA is extensively commented, but feel free to remove the comments ("//blah blah blah") if it makes the code easier to understand for you :^)
equalp x y => generalized-boolean
Arguments and Values:
x---an object.
y---an object.
generalized-boolean---a generalized boolean.
Returns true if x and y are equal, or if they have components that are of the same type as each other and if those components are equalp; the specific cases in which equalp returns true are summarized in the next figure. equalp does not descend any objects other than the ones specified there. The figure also specifies the priority of the behavior of equalp, with upper entries taking priority over lower ones.
Type           Behavior
number         uses =
character      uses char-equal
cons           descends
bit vector     descends
string         descends
pathname       same as equal
structure      descends, as described above
Other array    descends
hash table     descends, as described above
Other object   uses eq
Figure 5-13. Summary and priorities of behavior of equalp
(equalp 'a 'b) => false
(equalp 'a 'a) => true
(equalp 3 3) => true
(equalp 3 3.0) => true
(equalp 3.0 3.0) => true
(equalp #c(3 -4) #c(3 -4)) => true
(equalp #c(3 -4.0) #c(3 -4)) => true
(equalp (cons 'a 'b) (cons 'a 'c)) => false
(equalp (cons 'a 'b) (cons 'a 'b)) => true
(equalp #\A #\A) => true
(equalp #\A #\a) => true
(equalp "Foo" "Foo") => true
(equalp "Foo" (copy-seq "Foo")) => true
(equalp "FOO" "foo") => true
(setq array1 (make-array 6 :element-type 'integer
                           :initial-contents '(1 1 1 3 5 7))) => #(1 1 1 3 5 7)
(setq array2 (make-array 8 :element-type 'integer
                           :initial-contents '(1 1 1 3 5 7 2 6)
                           :fill-pointer 6)) => #(1 1 1 3 5 7)
(equalp array1 array2) => true
(setq vector1 (vector 1 1 1 3 5 7)) => #(1 1 1 3 5 7)
(equalp array1 vector1) => true
Side Effects: None.
Affected By: None.
Exceptional Situations: None.
See Also:
eq, eql, equal, =, string=, string-equal, char=, char-equal
Notes:
Object equality is not a concept for which there is a uniquely determined correct algorithm. The appropriateness of an equality predicate can be judged only in the context of the needs of some particular program. Although these functions take any type of argument and their names sound very generic, equal and equalp are not appropriate for every application.
Theory of Spontaneous Generation
Name: Jessica B.
I am doing an experiment about the Theory of Spontaneous
Generation. I have done three tests with bread, then placing the pieces in
petri dishes. The first bread was dry, the second got 40 water drops, and
the third got 40 sucrose drops (5% sucrose solution). I wanted to know which
one should mold the most after one week; I know it isn't the dry one, but
between the other two I have no idea. How would I disagree with Spontaneous
Generation (what would be my reasons), and how would I explain to
non-believers that life does not evolve from non-life? Thank you in advance.
I am in deep need of help.
First, you should probably re-name your project. Spontaneous generation implies that life can
arise from non-living substances. Your mold growth experiment will simply test how quickly
already living mold spores can settle upon and grow in the media you've selected. The
experiment you are carrying out will not test whether life can arise from
something that is not, and never has been, alive.
Probably the best argument against spontaneous generation is the fact that no experiment
ever performed has produced any living material from dead atoms. Indeed, many experiments
have demonstrated that precursors (building blocks) of life can be produced in the lab.
However, no unguided, random combination of simple dead elements has ever produced any
material that meets the criteria of a living organism. Much as some scientists would like
to believe and prove otherwise, although it can be sustained by that which is not alive,
life has not yet been produced from non-living material.
The "original" experiment refuting "spontaneous generation" was done using
common house flies and maggots. When raw meat at room temperature was covered
with a cloth tent so that the flies could not land on the meat -- no maggots
appeared. The uncovered meat produced maggots because the flies were able to
lay their eggs in the meat, hence producing the maggots. The experiment you
are doing is more difficult because your "test life" is bread mold and mold
spores being so much smaller can be carried by air currents to the bread.
However, there is a way to do the experiment, I think.
Make a growth medium. It could be sugar and bread, or a commercial agar
growth medium that you should be able to buy through your school. Sterilize
the growth medium and the petri dishes by boiling them for about 10 min. to
kill off all bacteria and mold. You could also use canning jars. You need to
keep both samples wet by adding boiled sterile water to each every couple of
days so that "drying out" is not a factor. You will have to arrange some
sort of syphon to add sterile water without opening the sterile control.
Possibly a piece of rubber tubing that you sterilize with ethanol or
isopropanol prior to "watering your garden," so to speak. This is the tricky part.
Then keep one dish open for say an hour each day. Keep the other one
covered. In a few days the one you keep open each day should pick up some
spores of some sort of "invisible" life force -- but the other one shouldn't
if you are careful to maintain its isolation from drafts of air.
Nonetheless, given you are dealing with microscopic invaders, you will
have to be really careful.
It is tempting for me to give you the proper experimental procedure to
follow but this is a great opportunity for you to do some research and then
put your imagination to work with some sound reasoning. I suggest you look
up the work on spontaneous generation done by Pasteur, Redi, and
Spallanzani. Pasteur's work might be the easiest to work from... chicken broth.
There are two ways you can go here. You can operate on pure belief, and
simply have faith that what you believe is true; or you can use science
to try to find out what is true. You can't do both at once, because
belief before proof is the opposite of science. It is one thing to make
an initial guess, for the purpose of organizing a program of research in
which you try honestly to discover what is true. It is an entirely different
thing to begin with a belief, focus your attention on facts that seem to
support your belief, and try to explain away or minimize the importance of
facts that seem not to support it. If you're going to use science, the
first thing you have to be is honest.
Some things can be proven, and some things cannot. I don't know whether
or not spontaneous generation is actually possible, and I can live without
having an answer. But I can't think of any single step in the process
that has been shown to be impossible, or even seems likely to be shown so.
First of all, your experiment doesn't have anything to do with spontaneous generation. It only
shows that mold needs moist conditions to grow and will grow faster with a more concentrated
source of carbon (sugar). If you refer back to Pasteur and Needham and Spallanzani, they
were trying to demonstrate that life could not generate within a closed system. For example,
Pasteur devised a way of sterilizing a broth (making it free of living things) yet leaving it open
to the air. He made a "swan necked" flask that prevented bacteria and fungi from entering the
flask, yet let oxygen in. It didn't show signs of life for over a year, until he broke off the
neck which allowed dust and fungi to enter. What you need to do is somehow sterilize the bread
in a closed container, and have a control of the same set up with bread that is not sterile or
is not closed and see which grows mold. The closed container shouldn't grow anything, as long
as bacteria and mold are prevented from getting to it. You need to show that the life is not
coming from within the system, only from outside of it.
Lightning and Trees
Why, when lightning strikes a tall pine tree, does it create a clockwise spiral around the outside of the tree? I think the electrical circuit between the cloud and the ground interacts with the Earth's magnetic field, causing the charged ions to spiral, and this charge traveling up and down the bark causes the disruption I see on the bark.
The earth's magnetic field is much too weak to affect lightning. More likely, the pine tree has a natural twist in the grain. This could cause the lightning to follow the twist as it follows the lowest-resistance path of sap to ground.
Argonne National Laboratory
That is an intriguing theory about the spiral track of lightning. I'm not aware of any investigations into this area. Studies have shown that lightning follows the path of least electrical resistance through the air, and that magnetic field effects on ionized particles occur at much higher altitudes than the region where convective weather occurs.
Here are a couple of links that discuss lightning formation and its effects in more detail. The first one is a NASA link, and has some good graphics.
This next one, from the publication "American Scientist," is more technical,
but describes how the electrical charge in clouds originates.
Wendell Bechtold, Meteorologist
Forecaster, National Weather Service
Weather Forecast Office, St. Louis, MO
Genomes to Life: A DOE Systems Biology Program
Genomics and Its Impact on Science and Society: The Human Genome Project and Beyond
Genomes for Bioenergy
Cellulosic Biomass: An Abundant, Secure Energy Source to Reduce
The United States now produces 7 billion gallons of corn-grain ethanol per year, a fraction of the 142 billion gallons of transportation fuel used annually. Cellulosic ethanol has the potential to dramatically increase the availability of ethanol and help meet the national goal of displacing 30% of gasoline by 2030.

Cellulose is the most abundant biological material on earth. The crops used to make cellulosic ethanol (e.g., postharvest corn plants—not corn grain—and switchgrass) can be grown in most states and often on marginal lands. As with ethanol from corn grain, cellulose-based ethanol can be used as a fuel additive to improve gasoline combustion in today's vehicles. Modest engine modifications are required to use higher blends (85% ethanol). Additionally, the amount of carbon dioxide emitted to the atmosphere from producing and burning ethanol is far less than that released from gasoline.
To accelerate technological breakthroughs, the DOE Genomic Science Program will establish research centers to target specific DOE mission challenges. Three DOE Bioenergy Research Centers are focused on overcoming biological challenges to cellulosic ethanol production. In addition to ethanol, these centers are exploring ways to produce a new generation of petroleum-like biofuels and other advanced energy products from cellulosic biomass.
Download flyer at http://genomicsgtl.energy.gov/biofuels/placemat.shtml
The online presentation of this publication is a special feature of the Human Genome Project Information Web site.
Feb 21, 2013, 06:20 PM | #1
Cool examples of radiometric dating?
So, I know that C-14 dating is being used now to solve crimes/identify bodies. Any other examples of it being used for things besides dating really old rocks/fossils? I'm teaching an Earth Science class and I want to show them a few cool articles.
Feb 22, 2013, 09:04 AM | #2
I'm a bit surprised at that, because of the rather high error rate. Also, neither rocks nor petrified fossils can be dated by 14C dating. But it is widely used for any organic remains, especially in sediment cores to establish a chronology, and for archaeological artifacts.
Feb 24, 2013, 11:12 AM | #3
There are relative and absolute dating methods used, and they are used in conjunction with one another to give the age range of a site.
Relative dating methods say "this is older/younger than x." Examples of relative dating are:
Stratigraphy- The mapping of layers of sedimentation or artifact deposition. In most cases, the deeper the layer, the older it is, IF there is no disturbance (tunneling animals, digging of post holes for a building, etc.).
Zooarchaeological analysis- The study of faunal remains in archaeological context. By studying the remains of animals at a site and comparing them to known periods when they were alive, a site can be dated. For instance, finding remains from Pleistocene megafauna (mammoths are the obvious choice) will give you a relative date.
Palynology- Performing a pollen analysis on the material excavated at the site. Certain plants existed at certain times, in certain places in the past. It also gives climate and environmental information, because those plants live in very specific climatic circumstances.
Seriation- Analyzing the artifacts used at a site and placing them into categories according to times in the past they were traditionally used. Spearpoints, arrowheads, and pottery are the most likely candidates, as their technology, frequency, and style changes over time. When a new style is being developed, very few of the newer type will be found, but as the style gains widespread use, many will be found before they slowly disappear to make room for the ever-newer style.
Absolute dating methods give you an actual date, plus or minus a set number of years. Examples of absolute dating are:
Radiocarbon dating- (C14 dating) The most widely-known and used. All living things take in C14 as they live, and stop taking it in when they die. C14 decays at a known rate over time. By analyzing the amount of C14 left in dead material, you know that it died x number of years ago, within a possible date range. It's effective to about 50,000 years before present (YBP). As a rule of thumb, a minimum of three separate samples must be taken from different remains at the site for a meaningful date. This is often used in conjunction with...
Dendrochronology- Simply put, counting tree rings and analyzing that sample via radiocarbon dating. This gives a much closer approximation of the date of the material, called a "Calibrated Date." For instance, the Clovis, NM site's uncalibrated date is 11,200 YBP, but the calibrated date pushed it back to 13,500 YBP.
Potassium/Argon dating (K/Ar dating)- This measures the decay of an isotope of potassium (40K) into argon over time. It is usually used on volcanic deposits and can measure ages from about 100,000 YBP back to ~1.3 billion YBP. Yes, billion.
Fission Track Dating- This is relatively new. It measures the damage (or tracks) left by decaying uranium atoms in natural glasses (such as obsidian) and some minerals. It can date materials from 100,000 YBP back to 3 million YBP.
Obsidian Hydration Dating- Measures the amount of water absorbed by a piece of broken obsidian. Water works its way into a flintknapped or otherwise broken piece of obsidian at an observable rate. This can be measured simply using a microscope, where a small sample is taken from the artifact, or by using the much more technical and non-destructive (and therefore better, but much more expensive) Secondary Ion Mass Spectrometry method. This is used to date material as old as eight million YBP.
Thermoluminescence (TL)- This method can only be used on burned materials like fire-cracked rock, pottery, and sediments exposed to sunlight. It measures the amount of accumulated radiation in an artifact or other sample. When the material is heated, it emits a small amount of light based on the amount of radiation stored within. This amount is measured. It is used to date material up to 50,000 YBP.
Archaeomagnetic Dating- This depends upon the inclusion of magnetite within an artifact. It analyzes the magnetic properties of the material as it relates to the Earth’s magnetic field at a given time in the past. This is effective on material up to 10,000 YBP.
Electron Paramagnetic Resonance- This is an incredibly technical dating method. It is based on electron spin, measuring the electromagnetic field of unpaired electrons in bone or calcite formations, and is effective from 1,000 to 2 million YBP. (The related technique of nuclear magnetic resonance is also used- in MRIs.)
Uranium Series Dating (Uranium/Thorium Dating, Thorium-230 dating, Uranium-Series Disequilibrium Dating)- This measures the amount of Uranium-234 compared to the amount of Thorium-230 in a given sample. Uranium-234 has a measured radioactive decay into Thorium-230. The isotope Uranium-234 must also be measured against its parent isotope, Uranium-238, for an accurate measurement of radioactive decay. This is effective on any materials containing calcium carbonate- bones, mollusk shells, limestone, stalactites and stalagmites. It is used to date materials up to 500,000 YBP.
Cosmogenic Nuclide Dating- This is an extremely new dating process. In one study, it measured isotopes of beryllium produced by cosmic rays, high-energy particles that come to Earth from space and give the beryllium differing numbers of neutrons. A total of 21 isotopes, spread over a number of different elements, are created by cosmic rays. This dating method is most often used in geology, focusing on aluminum, chlorine, calcium, and iodine, which have half-lives of 720,000; 308,000; 103,000; and 15,700,000 years, respectively.
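All of the radiometric methods above rest on the same exponential decay law; only the isotope and its half-life change. A minimal sketch in Python (illustrative only; the half-lives are the commonly quoted values, and real labs apply calibration curves and error analysis):

```python
import math

def radiometric_age(fraction_remaining, half_life_years):
    """Years elapsed, given the fraction of the parent isotope remaining.

    Solves N/N0 = (1/2) ** (t / half_life) for t.
    """
    if not 0.0 < fraction_remaining <= 1.0:
        raise ValueError("fraction_remaining must be in (0, 1]")
    return -half_life_years * math.log(fraction_remaining) / math.log(2.0)

# Radiocarbon: a sample with half its C-14 left died one half-life ago.
age_c14 = radiometric_age(0.5, 5730)             # 5730 years
# K/Ar uses the same law with a vastly longer half-life (40K: ~1.25 billion yr);
# a quarter remaining means two half-lives have passed.
age_kar = radiometric_age(0.25, 1_250_000_000)   # 2.5 billion years
```

The ~50,000-year practical limit of radiocarbon corresponds to roughly nine half-lives, after which less than about 0.3% of the original C-14 remains, too little to measure reliably.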
Feb 24, 2013, 12:16 PM | #4
This article will also help you out, dealing with Forensic Archaeology, used by police to solve crimes.
Feb 25, 2013, 05:58 AM | #5
And I meant radiometric dating as a whole is used for fossils and old rocks, not C-14. The C-14 being used forensically was just an example of radiometric dating being used for something different.
Feb 25, 2013, 07:01 AM | #6
Scroll all the way down to the bottom and notice that 0 cal yr BP is CE 1950; note the error margin and the reversal of the scale. If something were dated to, say, 150 radiocarbon years ago, the table would return both ~20 cal BP (CE 1930) and ~145 cal BP (CE 1805).
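The "cal BP" convention above anchors 0 cal BP at CE 1950, so converting a calibrated age to a calendar year is simple subtraction (a sketch; the calibration itself against the radiocarbon curve is a table lookup, not arithmetic):

```python
def cal_bp_to_year(cal_bp):
    """Convert calibrated years before present (0 cal BP = CE 1950) to a calendar year.

    Positive results are CE years; zero or negative results fall in BCE.
    """
    return 1950 - cal_bp

# The two readings quoted above:
y1 = cal_bp_to_year(20)    # 1930 (CE)
y2 = cal_bp_to_year(145)   # 1805 (CE)
```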
A few years ago, there seemed to be a risk that the Large Hadron Collider (LHC) at CERN would destroy Earth. The risk was based on speculative, if serious, physics. The probability that Earth would be destroyed seemed low because the theories that allowed trouble were speculative. Nevertheless, even given a small probability, destruction of Earth would have an enormous negative expected value (probability times value, the appropriate metric for decision theory).
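The "probability times value" arithmetic is worth making concrete. A toy sketch in Python (the numbers are purely hypothetical illustrations, not estimates of the actual LHC risk):

```python
def expected_loss(probability, magnitude):
    """Expected value of a harm: probability of the event times its magnitude."""
    return probability * magnitude

# Hypothetical: even a one-in-a-billion chance of losing 8 billion lives
# carries an expected loss of 8 lives, which is not negligible.
ev = expected_loss(1e-9, 8e9)  # 8.0
```

This is why decision theory refuses to round tiny probabilities of enormous harms down to zero: the product, not the probability alone, is what matters.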
Collider advocates put forth several reasons not to worry, but others found lacunae in those reasons. CERN conducted two safety studies, the second necessitated by flaws in the first. The final study was fairly good, but was still subject to criticism. Despite all this, CERN declared that the probability that the Earth would be destroyed was zero and fired up the LHC in 2008. Apart from a minor industrial accident, the Earth was not destroyed.
Much of the risk that we can control is behind us. In most models, destruction of Earth takes a few years, and there is nothing we can do once the process has been initiated. Continued collider operation poses residual risk from events that happen infrequently and have not happened yet. There is also risk from CERN's planned upgrade, and from the next generation of colliders.
Against this risk, we have to balance the probability that LHC research will discover something that will save us from other risks, something that would not be discovered by other methods.
Build your own spectrophotometer
- Take a 100 W light bulb, a light-dependent resistor, a prism or grating in front of a slit, and a curtain - and voilà, a DIY spectrophotometer
For many students the spectrophotometer has become a 'black box' into which a sample is placed, and from which the analytical data appear. By designing and building their own visible-light spectrophotometers, students get to grips with the underlying principles of this widely used analytical tool.
Stewart J. Tavener and Jane E. Thomas-Oates
Spectroscopy is widely taught at A-level and at undergraduate level and, as scientific instruments become more affordable yet more sensitive and complex in their workings, it is increasingly important for students to understand their underlying principles. Most spectrophotometers in the teaching laboratory are driven by a PC, which controls the operations, stores files and manipulates the data, leaving students divorced from the physical processes that lead to the measurement. Indeed, when we asked a class of first-year undergraduates who had recently used a UV-VIS spectrophotometer to explain its internal workings, only one out of 20 showed a clear understanding.
To overcome this problem we have developed a project that allows students to design and build their own visible-light spectrophotometer, giving them hands-on experience of the intimate workings of this analytical instrument at a cost that compares favourably with conventional synthetic chemistry experiments. Not only do students learn about the key components of the equipment, but they also gain experience of calibrating the instrument and an understanding of the relationships between the absorption of light and concentration, and between resolution and sensitivity. While the experiments described in this article were designed for undergraduates, they can also be adapted for A-level and GCSE projects. (A summer camp of 13-16-year olds enthusiastically built photometers with a high degree of success.)
The full undergraduate practical is run over two days, the first task being to build a simple photometer. If time is limited, this can be used as a stand-alone exercise. In the second part of the practical the students design and construct a spectrophotometer and use it to measure the visible spectrum of a solution of potassium manganate(VII).
Building the photometer
In our classes, we supply a printed circuit board and teach the students to solder, an important skill for anyone who regularly deals with scientific equipment. Alternatively, a board may be etched and drilled (a layout is shown in Fig 1),1 or the circuit may be built on plug-board, which may be more suitable for schools since many science departments will already have these, and the components can be reused.
Fig 1 (a) Layout for circuit board for the photometer; and (b) trace for photo-etching the circuit board. (For further information on etching circuit boards see ref 1.)
The photometer consists of a light source (an LED), a light-dependent resistor (LDR) as a detector and a simple amplifier/buffer circuit to make the output suitable to drive a voltmeter. (If a high impedance multimeter is used, the amplifier could be omitted, but it does illustrate an important component of a 'real' photometer.) The LED and LDR face one another and the sample cuvette is placed between them. The resistance of the LDR decreases as the amount of light that falls on the LDR increases: more light lets more current flow. The circuit runs on two 9 V batteries. To avoid errors caused by stray light from the room, the photometer must be placed in a box. Figures 2 and 3 show a photometer circuit diagram and a photograph of the completed photometer respectively. Components P1, P2 and P3 are generally omitted but may be used to replace the fixed resistors with potentiometers, thus allowing control over the light intensity, and gain and offset of the amplifier circuit. (These could be exploited for an extended project.)
The relationships between absorbed and transmitted light, and between concentration and absorbance, may be explored with the photometer. However, the photometer must first be calibrated using standard concentrations of a suitable coloured chemical. We have used potassium manganate(VII), cobalt salts, and molybdenum blue (reduced sodium molybdate(VI)), choosing a compound whose absorption matches the colour of the LED to improve the sensitivity. All compounds absorb some of the light that falls upon them, the energy from the radiation being used to excite electrons to higher energy levels. The absorbance, A, of a solution of the compound at a particular wavelength is described by the Beer-Lambert law (i), which is widely used in quantitative analysis.
A = -log10(I / Io) = εcl (i)
The absorbance is directly related to the concentration, c, of the compound, the pathlength of the sample, l, and the molar absorption coefficient, ε, a wavelength-dependent constant characteristic of the compound. Io is the incident light intensity and I the transmitted light intensity.
Rather than measuring absorbance directly, the photometer gives information as a voltage. This is true of commercial instruments, though these contain an internal processor to do the necessary mathematics. However, the conversion is straightforward and can readily be done with a calculator or spreadsheet.
As well as absorption by the compound, other processes reduce the intensity of light that passes through the cuvette, so it is essential to take a 'background' reading for the solvent and the cell, which corresponds to Io. Do not assume that the circuit sends out 0 V when no light falls on the detector, and make the correction by subtracting the voltage at zero light (Vzero) from all readings. These two procedures 'zero' the photometer. Over the range of wavelengths and light intensities in which we are interested, the photometer exhibits a linear relationship between incident light and the voltage ratio described in equation (ii). (This assumption holds true for useful concentration ranges, and the calibration plot will let the user know if they have entered a region of non-linear behaviour.)
I / Io = Isample / Isolvent = (Vsample - Vzero)/(Vsolvent - Vzero) (ii)
Some older needle-type voltmeters may be set to zero manually, which simplifies the maths.
The absorbance is calculated by combining equations (i) and (ii) into (iii). After calibration, using a set of standard solutions to determine ε, measurement of A allows the concentration of unknown solutions to be determined.
A = -log10((Vsample - Vzero)/(Vsolvent - Vzero)) = εcl (iii)
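Equations (i)-(iii) translate directly into code. A short Python sketch (the variable names mirror the article; the voltages and the molar absorption coefficient below are made-up example values):

```python
import math

def absorbance(v_sample, v_solvent, v_zero):
    """Absorbance from photometer voltages, per equation (iii)."""
    transmittance = (v_sample - v_zero) / (v_solvent - v_zero)
    return -math.log10(transmittance)

def concentration(a, molar_absorptivity, pathlength_cm=1.0):
    """Concentration from the Beer-Lambert law, A = epsilon * c * l."""
    return a / (molar_absorptivity * pathlength_cm)

# Made-up readings: solvent 4.0 V, sample 0.4 V, zero-light 0.0 V.
# 10% of the light is transmitted, so A = 1.
a = absorbance(0.4, 4.0, 0.0)
# For an assumed epsilon of 2000 L/(mol cm) in a 1 cm cuvette:
c = concentration(a, 2000.0)
```

After building a calibration plot from standard solutions, the slope gives εl, and the same two functions recover unknown concentrations from raw voltmeter readings.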
Figure 4 shows a student calibration plot of concentration against the voltage ratio. The completed photometers have proved suitable for monitoring kinetics of reactions that involve a colour change, eg measuring the rate of bleaching crystal violet in the presence of sodium hydroxide.
In addition to a light source (100 W light bulb or other polychromatic sources) and detector, a spectrophotometer also requires a prism or grating to obtain different wavelengths from the light source, and a slit to select a narrow range of wavelengths. The latter determines the resolution of the instrument.
There is, however, an inherent compromise between sensitivity and resolution - a narrower slit gives better resolution, but fewer photons with which to make the measurement. The slit may be made carefully from card, and placed either between grating and sample, or between sample and detector.
Figure 5 shows a typical layout of a 'DIY' spectrophotometer, where the spectrum produced by the grating is projected onto graph paper to produce a scale of wavelength. The grating may be rotated, or the slit and sample moved, to select different wavelengths of light. Calibration of the wavelength is performed by eye, using the numbers in Table 1 as a guide.
Fig 5 The DIY spectrophotometer. The lens forms an image of the aperture at the plane of the slit
To construct a spectrum, the absorbance must be calculated for each wavelength, and therefore Vzero, Vwater and Vsample must be measured at each point. Ambient light interferes with the spectrophotometer and causes inaccuracies, and so either large cardboard boxes, or thick blackout curtains draped between two or three retort stands, are used to keep out light. The latter is preferable because the students can work under the material. There is no single correct way of assembling a spectrophotometer, and we find that students often have ideas that we have not anticipated. A simple instrument can be created by using coloured filters in place of the grating-lens-slit assembly, though the number of data points is limited to the number of available filter colours.
A valuable experience
This hands-on, discovery-based learning encourages ingenuity and creativity, and gives the students a real sense of achievement. If constructed with care and ambient light is excluded effectively, the photometer is sufficiently precise to make measurements that are comparable with the students' abilities to make up calibration solutions. It is certainly good enough to measure unknown concentrations and the rates of simple reactions to within a few per cent.
Fig 6 Visible region spectrum of KMnO4 from a commercial spectrophotometer and data points measured using the DIY spectrophotometer
Building the spectrophotometer is challenging and we deliberately avoid giving explicit instructions, though some students require more guidance than others depending on their ability, confidence and experience. The process of trial and error ensures that every component is explored and its purpose understood. The ability to resolve the spectrum is limited by how widely the light source is diffracted, and though the fine structure of the KMnO4 spectrum cannot be resolved, the spectra obtained broadly resemble those from commercial instruments at a fraction of the cost (Fig 6). We have found that the learning outcomes are worth the work - after running the experiment, all the students understood the Beer-Lambert law and how a spectrophotometer works.
Acknowledgements: we thank Ed T. Bergström and Laura Karran for their help in developing this experiment.
References
1. For example see website.
2. Data from various sources, including D.A. Skoog, D. M. West, F. J. Holler and S. R. Crouch, Fundamentals of analytical chemistry (8th edn). Belmont, US: Brooks/Cole-Thomson Learning, 2004.
- Printed circuit board (£1.90 each for a run of 104), plug-board, or photoresist board
- Voltmeter or digital multimeter
- Plastic or glass cuvettes
Total cost for photometer consumables is under £4.00
- White light source
- Diffraction grating, prisms or coloured filters. (Note: gratings are available at modest cost (For example, go to website) and work better than prisms.)
- Optical bench or stands and clamps; black cloth
- Resistors: 4.7 kΩ (£0.02); 2.2 kΩ (£0.02); 1.0 kΩ (£0.02)
- LED: orange, 5 mm (£0.20)
- LDR (£0.57)
- Op amp: 3140 (£0.65)
- Socket: eight-pin (£0.09)
- Battery clips (£0.26)
- Soldering iron, solder
- Blu-Tack, tape and card
Can Captured Carbon Save Coal?; June 2009; Scientific American Earth 3.0; by David Biello; 8 Page(s)
Like all big coal-fired power plants, the 1,600-megawatt-capacity Schwarze Pumpe plant in Spremberg, Germany, is undeniably dirty. Yet a small addition to the facility—a tiny boiler that pipes 30 MW worth of steam to local industrial customers—represents a hope for salvation from the global climate-changing consequences of burning fossil fuels.
To heat that boiler, the damp, crumbly brown coal known as lignite—which is even more polluting than the harder black anthracite variety—burns in the presence of pure oxygen, releasing as waste both water vapor and that more notorious greenhouse gas, carbon dioxide (CO2). By condensing the water in a simple pipe, Vattenfall, the Swedish utility that owns the power plant, captures and isolates nearly 95 percent of the CO2 in a 99.7 percent pure form.
EXPERIMENT 3: THE INEXTINGUISHABLE FLAME
Place a funnel in front of a candle flame and blow through its narrow end. The flame will not so much as stir, although the stream of air from the funnel would seem to be striking it directly. You might think the funnel is too far from the flame, so you bring the flame nearer and blow harder; you may be surprised to find that the flame is deflected not away from you but toward you, against the stream of air coming from the funnel. What can be done to extinguish the flame? Position the funnel so that the flame is not on the funnel's axis but in line with the wall of its cone. Blowing into the funnel will now easily extinguish the candle. The explanation is that the air stream leaving the narrow end of the funnel does not propagate along the axis but spreads along the walls of the cone, forming a sort of air vortex. As a result, the air along the funnel's axis is rarefied, an inward airflow sets in near the axis, and the flame leans toward the funnel.
Session 52
D.F. de Mello (GSFC/CUA), J.P. Gardner (GSFC)
We present the analysis of the morphology of star-forming galaxies using the Great Observatories Origins Deep Survey done with the Hubble Space Telescope. These objects are characterized by single, double or multiple clumps. Their morphology, sizes and their environment can be used to test the current models of galaxy formation. The evolutionary path that these faint galaxies take to become today's galaxies remains largely unknown. These clumps are bluer and fainter than Lyman break galaxies but of comparable sizes, have spectral types of starbursts and photometric redshifts < 3. We will discuss their properties and role in galaxy evolution. These objects can be (i) dwarf galaxies having strong bursts of star-formation, (ii) the building blocks or sub-units which will merge to form larger galaxies or (iii) clumpy star-forming regions in disks.
Bulletin of the American Astronomical Society, 37 #4
© 2005. The American Astronomical Society.
© 1994, Astronomical Society of the Pacific, 390 Ashton Avenue, San Francisco, CA 94112.
by George Musser, Astronomical Society of the Pacific
Convection in a pot. The flames under the pot heat the water on the bottom. As the water gets hotter, it expands and rises. The colder water above it sinks down to the bottom, where it too starts to get hot. The process continues until you shut the stove off. The convection cycle distributes the heat of the stove evenly throughout the water in the pot.
A primer on convection
Stars do it, icy moons do it, even pots of coffee do it. From the small to the large, things undergo convection. Convection -- the movement of heat by the movement of matter -- is a recurring theme in science, and a vivid reminder that astronomical objects aren't really so different from everyday objects. They may be bigger and hotter and farther away, but the same laws of physics determine how they behave.
Convection is a consequence of the most basic drive in nature: Hot things want to cool down. If the thing is a fluid -- air, water, anything that can flow -- it can cool down by moving around. Hot air rises, cold air sinks: It's common sense. It's convection. And from this basic idea can come some funky phenomena.
The next time you boil water, fill a jar with hot water and notice how much lighter it is than a jar of cold water. (If you don't trust yourself, put the jars on a balance.) This difference in weight keeps convection going. If you put water in a pot on the stove, the water on the bottom gets hot first. The colder fluid above it is heavier, so it sinks. This forces the hot fluid on the bottom to go up on top. The stove heats up the cold water on the bottom; the air cools down the hot water on the top. So they switch places again. And so on. It sets up a cycle, as shown in Figure 1. This cycle transports heat from the stove up to the air.
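The weight difference driving the cycle is real but small. A sketch using handbook densities for liquid water (the two density values are standard table figures; the one-litre jar volume is arbitrary):

```python
# Approximate densities of liquid water from standard tables, in kg/m^3.
WATER_DENSITY = {20: 998.2, 90: 965.3}  # at 20 C and 90 C

def jar_mass_grams(volume_litres, temp_c):
    """Mass of water in the jar. kg/m^3 times litres conveniently yields grams."""
    return WATER_DENSITY[temp_c] * volume_litres

cold = jar_mass_grams(1.0, 20)   # ~998 g
hot = jar_mass_grams(1.0, 90)    # ~965 g
percent_lighter = 100.0 * (cold - hot) / cold   # ~3.3% lighter when hot
```

A few percent is all buoyancy needs: the denser cold water sinks, displacing the lighter hot water upward, and the cycle runs as long as the stove keeps the bottom hotter than the top.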
How does heat move? There are three ways. First, heat can move by conduction: molecules bang into their neighbors and pass heat along in a domino effect. When you put your hand against a cold window on a winter's day, the window sucks the heat from your hand by conduction.
Second, heat can move by radiation. Usually, when people think radiation, they think nukes, mutants, Chernobyl. But to scientists, radiation just means any kind of light ray, either visible or invisible (such as infrared or ultraviolet). Hot things glow, giving off light that carries the heat away. That's why the Sun shines and warms the Earth. It's also why you feel hot when sitting in front of a fireplace or electric heater. The desert is so cold at night because you and the air around you are glowing in the infrared, losing heat to outer space. Survival kits contain "space blankets," basically big sheets of aluminum foil, and these keep you warm by reflecting the infrared radiation from your body back to your body.
Convection is the third way that heat can move. Instead of molecular domino-effects or streaming heat rays, convection relies on movement of fluid. When hot fluid moves, it carries the heat along with it. You can think of the three ways heat moves in terms of passing a love note to your sweetie across a classroom. You could give it to person sitting next to you and ask them to pass it along (like conduction), you could signal using a flashlight (like radiation), or you could get up, walk across the room, and give the note to the person directly (like convection).
Convection is a last-ditch way to get rid of heat. It takes effort to start convection, so heat usually prefers to escape by conduction or radiation. But conduction is slow, and radiation can't work where it's opaque: inside a planet and certain parts of a star. In those cases, only convection can do the job.
The mechanism of heat loss determines what a planet looks like. The inner planets of the solar system are composed of rock. Normally, we think of rock as a solid, but it can act as a liquid if you wait long enough; say, millions of years. Rock can transport heat either by conduction or by convection; rocks are opaque, so they block radiation. Small, cool planets, like the Moon and Mars, lose their heat by conduction. The Earth and Venus prefer convection (see Figure 2). Convection is much more exciting. It powers the plate tectonics and other fancy styles of geology that the Earth and Venus have.
Figure 2. The interior of the Earth or Venus. We build our buildings and eat our Eggos on top of a thin crust of solid rock, like scum on a pond. Underneath is a vast sea of fluid rock called the mantle, continually churned by convection. The mantle itself sits on a core composed of molten iron.
Inside a star, conduction doesn't work because the molecules are too far from one another, so heat moves either by radiation or by convection. Radiation operates where the gas of the star is transparent, as it is when it is especially hot. In medium-sized stars like the Sun, radiation transports heat deep within the star, where it's hottest (see Figure 3), and convection operates toward the outside, where it's cooler. In small, cool stars, convection is the main mechanism; in large, hot stars, radiation dominates.
Figure 3. The interior of the Sun. Only a small part of the Sun, its core, actually generates energy. The rest of the gas just gets in the way as the heat tries to get out. Most of the Sun's interior is filled with transparent gas, and the energy passes through it in the form of radiation. But toward the surface, the gas is cooler and opaque, so radiation can't get through. Convection takes over.
Convection gives the Sun a mottled appearance, as astronomers see when they look at the Sun with specially designed telescopes (see Figure 4). Each of the little granules in Figure 4 marks a place where convection is gurgling up from below. Convection dredges up atoms manufactured in the core of the star. Astronomers like this because it gives them some idea what's happening deep within the star, where they can't see.
Figure 4. Fire burn and cauldron bubble. This highly magnified picture of the surface of the Sun shows a sunspot (dark splotches in center) and granules (light-colored grains). Each granule, small as it looks, is about the size of Texas. The granules flicker rapidly, making the Sun look like a burbling witch's brew. Granules are the top of convection cycles that bring material from deep within the Sun to the surface. Photo courtesy of Sacramento Peak Observatory.
Looking down a microscope always reminds us how much we can’t see with the naked eye. The winners of the 2011 International Science and Engineering Visualization Challenge provide a tantalizing glimpse into the micro- and nanoscopic world.
For this image, a thin slice of a mouse's eye, above, was dyed so that different tissues show up as different colors. Muscles are pale yellow, for example, and the sclera is green.
No, this isn’t a cliff—it’s far too tiny. Each layer of titanium carbide—an exceptionally hard material used in energy storage devices, solar cells, and the like—in the stack pictured here is only 5 atoms thin.
These honeycombed stalks are carbon nanotubules illustrated by graphic artist Joel Brehm. Carbon nanotubules have unique thermal and electric properties that give them potential roles in everything from detecting cancer to powering hydrogen cars.
You might know these better as the little white hairs on young cucumbers. But under 800x magnification, these tiny outgrowths, called trichomes, look almost creature-like. In fact, these trichomes are there to protect the cucumber: the tips are sharp and bulbs are filled with a bitter toxin.
Read more about these images and see the rest of the winners over at Science.
Images courtesy of AAAS/Science
Numbers can be powerful things, but they don't necessarily help the average person grasp what's actually going on in science. Instead, personal stories tend to make a bigger impact. And that's understandable. Things you can see—or things that someone can show you—are going to stick in your head a bit more than a barrage of data.
This is especially a problem, I think, with climate change. Some of the largest impacts of climate change, so far, have happened in places far removed from the experiences of the people who create the most anthropogenic greenhouse gases. So it's often hard to take the idea "the Earth is getting warmer" and really grok what that actually means.
That's why people like Will Steger are important. Steger is an explorer and science communicator who has won the National Geographic Society's John Oliver La Gorce Medal—an award that's also been given to Amelia Earhart, Robert Peary, Roald Amundsen and Jacques Cousteau.
He does most of his work in the Arctic and Antarctic, places where he has clearly seen the results of climate change. In a video of a presentation at the University of Minnesota, Steger shows you his experiences—and what they mean. How has climate change altered the landscape of the poles? What does that mean for the future of the Earth? Steger does a good job of making the data feel like something real.
I wish I could figure out how to embed this, but you should go watch it, nonetheless. It's a long video, but worth the time.
Maggie Koerth-Baker is the science editor at BoingBoing.net. She writes a monthly column for The New York Times Magazine and is the author of Before the Lights Go Out, a book about electricity, infrastructure, and the future of energy. You can find Maggie on Twitter and Facebook.
Texas Bats Having Tough Time Amid Drought
The drought affecting Texas this summer is giving resident bats a hard time. The lack of moisture has depleted the insect population so that bats are having to depart early from their daytime dwellings because they have to travel farther to find enough to eat.
At Bracken Bat Cave, home to the world’s highest concentration of bats, the little guys are having to emerge as much as two hours before darkness.
Leaving their daytime roosts before nightfall makes them more susceptible to predators. Hawks, falcons, owls, raccoons and snakes are known to make meals of bats.
Experts have also noticed fewer bats emerging from caves and have seen evidence that more infant bats are showing up dead.
An infant bat will not survive if its mother cannot provide it with enough food.
“If adults aren’t able to get enough food, babies don’t,” - Diana Foss, Urban Biologist for the Texas Parks and Wildlife Department, to the Bryan College Station Eagle
Although it is impossible to determine the true severity until 2012, it looks as though the population will decline.
Bat colonies face adversity amid Texas drought [Bryan College Station Eagle]
Michael Faraday (1791 - 1867)
Born to a poor family in London, he was extremely curious, questioning everything. He felt an urgent need to know more. At age 13, he became an errand boy for a bookbinding shop in London. He read every book that he bound, and decided that one day he would write a book of his own. He became interested in the concept of energy, or more specifically, force. Because of his early reading and experiments with the idea of force, he was able to make important discoveries in electricity later in life.

He eventually became a chemist and physicist. He isolated benzene (a clear, colorless, flammable liquid derived from petroleum and used to manufacture motor fuels). He performed experiments demonstrating electromagnetic induction, a discovery that paved the way for converting mechanical energy into electrical energy.
Picture credit: B&W image from an oil on canvas portrait of Faraday by Thomas Phillips.
Portrait is on display at England's National Portrait Gallery.
(Submitted March 14, 1997)
Just curious about black holes: I wanted to know if the gravitational field of a black hole would pull an object in faster than the speed of light. If I understand correctly, objects cannot go faster than the speed of light or they would be going back in time. If the acceleration due to a black hole is constant, would an object that got sucked in see its velocity increase beyond the speed of light the closer it got to the black hole?
The answer to your question is that the motion of a particle near a black hole is not governed by Newton's laws of motion in the familiar sense. The correct equations for motion near a black hole predict that an object on a radial path into the hole will have a velocity which approaches the speed of light as the object approaches the event horizon. For more information, I can only refer you to a textbook on general relativity, such as the one by Steven Weinberg ("Gravitation and Cosmology...", 1972, Wiley: New York).

I hope this is of some help.
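As a concrete illustration (a standard textbook result, added here for context rather than taken from the original reply): for radial free fall from rest at infinity in the Schwarzschild geometry, the speed measured by a local static observer at radius $r$ is

```latex
v(r) = c\sqrt{\frac{r_s}{r}}, \qquad r_s = \frac{2GM}{c^2},
```

which approaches $c$ only in the limit $r \to r_s$ (the event horizon) and never exceeds it outside the hole.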
Let's imagine that you and a friend could converse on the planet Venus — without having to worry about the lack of oxygen, crushing pressure, and beyond boiling temperatures. Your friend would sound so different that you'd actually see her differently.
That's one of the more intriguing conclusions from a team led by the University of Southampton's Professor Tim Leighton, who have calculated all the different sounds we might hear if we could listen in on the other worlds of our solar system. That includes the whirlwinds of Mars, the lightning of Venus, and even the sounds of methane and ethane falling like water on Saturn's moon Titan.
But perhaps the most interesting is how we would sound if we could actually talk on these faraway worlds. Because of differences in atmospheres, pressures, and temperatures on these worlds, the human voice would sound very different, and since we're adapted to hear voices in Earth's atmosphere, these changes would actually play havoc on how we comprehend the voices of those around us. Leighton explains how this would work on Venus:
"We are confident of our calculations; we have been rigorous in our use of physics taking into account atmospheres, pressure and fluid dynamics. On Venus, the pitch of your voice would become much deeper. That is because the planet's dense atmosphere means that the vocal cords vibrate more slowly through this 'gassy soup'.
"However, the speed of sound in the atmosphere on Venus is much faster than it is on Earth, and this tricks the way our brain interprets the size of a speaker (presumably an evolutionary trait that allowed our ancestors to work out whether an animal call in the night was something that was small enough to eat or so big as to be dangerous). When we hear a voice from Venus, we think the speaker is small, but with a deep bass voice. On Venus, humans sound like bass Smurfs."
Of course, this is all strictly theoretical, but I suppose we could test this one day in the far future using humanoid robots that could withstand the conditions of Venus, Mars, or Titan, and perhaps even have advanced enough minds to simulate the perception that their speech partner is smaller than they really are. The next step for the team in their study of space acoustics? Professor Leighton says he's intrigued to find out what music would sound like on other worlds, something we could one day have Mars astronauts put to the test. If nothing else, it's slightly more cultured than taking golf clubs to the Moon...
Science strides through revolutions, but people often refuse to accept revolutionary concepts at first. So it was with the emergence of Planck's equation. In classical physics, the energy of electromagnetic (EM) radiation was thought to be absorbed or emitted continuously. It wasn't until late 1900, when the German scientist Max Planck (1858-1947) made a radical assumption to explain the black body radiation spectrum, that the idea of discrete energy arose.
In Planck's assumption, radiant energy is emitted in small bursts, known as "quanta". Each of the bursts, called a "quantum", has energy E that depends on the frequency f of the electromagnetic radiation by the equation:

E = hf

where h is a fundamental constant of nature, the "Planck constant".

Photo of Max Planck. Courtesy of AIP Emilio Segre Visual Archives, W.F. Meggers.
The equation is later found to be true for all EM radiant energy emitted or absorbed. Planck's equation implies that the higher the frequency of a radiation, the more energetic are its quanta. It explains, for example, why you can never get a tan from visible light but can from ultraviolet light: the quanta of visible light don't carry enough energy to start the chemical reaction in your skin!
Figure: Visible Spectrum. Courtesy of NASA.
The quantum energy is not to be confused with the power of the light! The power of light (luminosity) is the total energy per second; that is, the number of quanta per second times the quantum energy. Therefore, even if visible light carries a lot more energy per second than UV light, you won't get any browner from it.
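To put numbers on this (an illustrative calculation added here, not from the original article), the quantum energy E = hf = hc/λ can be compared for visible versus ultraviolet light:

```python
# Quantum (photon) energy E = h*f = h*c/lambda for visible vs. ultraviolet light.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def quantum_energy_ev(wavelength_m):
    """Energy of one quantum of light of the given wavelength, in electron-volts."""
    return H * C / wavelength_m / EV

green = quantum_energy_ev(550e-9)  # green visible light, ~550 nm
uv = quantum_energy_ev(300e-9)     # ultraviolet, ~300 nm
print(f"visible: {green:.2f} eV, UV: {uv:.2f} eV")  # visible ~2.25 eV, UV ~4.13 eV
```

Typical chemical bonds take a few electron-volts to break, which is why the higher-energy UV quanta can trigger reactions in skin that visible-light quanta cannot, no matter how bright the visible light is.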
The theoretical black body radiation spectra predicted by Planck's Radiation Law, with the assumption E = hf, agreed with the experimentally found spectra at all wavelengths and temperatures.
Photo of Max Planck with his handwriting. Courtesy of the Archives, California Institute
Many other scientists, including Wien, Rayleigh and Jeans, attempted to explain the blackbody radiation spectra using classical wave theory and failed. However, the idea of quantized energy was too revolutionary for most scientists at the time (even Planck was puzzled by his own conclusion). It was not generally accepted until 1905, when Einstein extended Planck's equation in deriving his formula for photoelectric emission. The idea of quantized energy led Einstein to postulate the particle-wave duality of light and other EM radiation. Planck's equation is essential to the formulation of quantum physics.
Helium is the second lightest gas, after hydrogen. Helium is very valuable in the scientific world and has many applications in the non-scientific world, too. It is extremely useful and practical. Helium is inert, which means it won't combust when combined with air and an ignition source, and it is far lighter than air.
Liquid helium is used to cool some of the most powerful electromagnets in the world. It is also the first known superfluid, or a fluid with many interesting qualities. Helium is actually pretty rare and a limited resource. In fact, we’re running out of it. Yet, it is the second most common element in the universe.
Here on Earth, though, there are only two places to find it. It can be found in the exosphere, the uppermost layer of the earth's atmosphere, but in such a small amount that it is much too expensive to extract from the atmosphere. The second place is in the ground, where pockets of helium are found. It is made through alpha decay. http://bit.ly/MtEm2N
Satyendra Nath Bose developed statistical methods, later utilized by Albert Einstein, to describe the behavior of massless photons and massive atoms, as well as other bosons. This "Bose-Einstein statistics" describes the behavior of a "Bose gas" composed of uniform particles of integer spin (i.e. bosons). When cooled to extremely low temperatures, Bose-Einstein statistics predicts that the particles in a Bose gas will collapse into their lowest accessible quantum state, creating a new form of matter called a Bose-Einstein condensate, which can behave as a superfluid. This is a specific form of condensation with special properties.
Bose-Einstein condensates were a purely theoretical conjecture until experimentally observed by Eric Cornell and Carl Wieman at the University of Colorado at Boulder in 1995, for which they received the 2001 Nobel Prize.
I was thinking about hydrogen balloons, like the large ones used as weather balloons, which sometimes go up to 100,000 ft (approx 30 km). Then I was wondering: how much potential energy has the balloon, together with the weight it carries, gained by rising to 100,000 ft? It seems the object would have a lot of potential energy at that height. If the object were rolled down a ramp from that height, it would release a lot of energy going down a 30 km ramp. And how much energy was used to get it to lift, i.e., to produce the hydrogen in the first place?
So my question is, could there be any situation where the potential energy the balloon and its cargo gained exceed the energy it took to make the hydrogen in the first place? Then, if so, how could a cycle be set up where the lifting energy of the hydrogen is used to liberate more hydrogen and produce energy.
Here is another idea: what if the balloon started at the bottom of the ocean, with an electrolysis device separating hydrogen and oxygen from the water down there? A balloon collects the hydrogen and oxygen and pulls upwards. The balloon is attached to a string which it pulls up, turning a pulley (wheel) at the bottom as it goes up. Could the rotation of the wheel gain more energy than the cost of extracting the hydrogen? I guess the weight of the string would be a factor to consider as well.
My thought is that all of this is very unlikely, as it seems like a perpetual motion device, as the hydrogen and oxygen could be re-combined and it would fall back downwards as water and the cycle would be repeated. The question would be, where does the energy come from? it has to come from somewhere, so this seems very unlikely. I cannot think of where the energy comes from.
But can anyone work out the numbers, even for a very basic calculation?
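As a rough back-of-the-envelope check (my own illustrative numbers, not from the original post): compare the potential energy gained by a small payload lifted to 30 km with the energy needed to electrolyze the hydrogen that lifts it.

```python
# Compare gravitational PE gained by a balloon payload with the energy
# needed to make the hydrogen that lifts it. Illustrative numbers only.
G = 9.81                 # m/s^2
PAYLOAD_KG = 2.0         # assumed payload mass
ALTITUDE_M = 30_000      # ~100,000 ft

# Sea-level lift: air ~1.22 kg/m^3, hydrogen ~0.09 kg/m^3 -> ~1.13 kg of lift per m^3
LIFT_PER_M3 = 1.22 - 0.09
h2_volume_m3 = PAYLOAD_KG / LIFT_PER_M3   # volume needed just for the payload
h2_mass_kg = 0.09 * h2_volume_m3          # mass of that hydrogen

# Electrolysis must supply at least the higher heating value of H2, ~142 MJ/kg.
ELECTROLYSIS_MJ_PER_KG = 142
energy_to_make_h2_mj = h2_mass_kg * ELECTROLYSIS_MJ_PER_KG

pe_gained_mj = PAYLOAD_KG * G * ALTITUDE_M / 1e6  # m*g*h, in MJ

print(f"PE gained: {pe_gained_mj:.2f} MJ")                     # ~0.59 MJ
print(f"H2 production energy: {energy_to_make_h2_mj:.1f} MJ")  # ~22.6 MJ
```

So even ignoring the balloon's own weight and the string, the energy needed to produce the hydrogen exceeds the potential energy gained by a factor of roughly forty, which is where the apparent perpetual motion scheme breaks down: the energy ultimately comes from whatever made the hydrogen.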
1). Learning About Snake Facts And Behaviors By : Stephanie Davies
This article teaches you about why snakes behave the way they do, and how to handle a snake encounter. Also includes information on identifying venomous and non-venomous snakes.
2). Need To Cool Down? Use A Dehumidifier! By : James Monahan
A dehumidifier is a device which removes excess moisture in the air. This device performs this process by condensing the moisture on a cool surface. A dehumidifier is simply an air conditioner.
3). How to Make a Thermometer By : James Hunt
A thermometer is an instrument that measures the temperature. Depending on what country you live in, temperature is measured either in a scale called Fahrenheit or Celsius...
4). Go Meteorite Hunting By : b hirst
5). No More Distractions with Noise Reduction Headphones By : dave4
The world is a very noisy place with loud, intermittent sounds and constant, droning noises – noise reduction headphones can help you get a little peace amongst the distractions of everyday life. Headphones can block out the myriad of sounds that occur in a variety of setting and are helpful to many different people.
Sleeping – If you ...
6). Plastic Forming - Vacuum Forming Guide By : John Morris
What is vacuum forming? What does it do? What are the methods used in forming vacuums? Vacuum forming is basically the procedure used in shaping any kind of plastic...
7). How the Meter Came To Be By : James Monahan
The meter follows a timeline dating back to the eighteenth century, when two approaches to the definition of the standard unit of length were broached.
8). Magnets Are a Very Important Part of Our Lives By : James Hunt
Do you remember as a child ever being fascinated by magnets? Such a simple thing yet complicated...
9). The Odd Seven Continents Theory By : Richard Monk
Viewed from space, the Earth appears to have four or five major landmass areas depending on your viewpoint. Despite this, we hold on to the illusion there are more continents.
As we all learned in grade school, there are seven continents. A quick look at a globe, however, reveals this basic assumption is just flat wrong. In particular,...
10). Making Biodiesel at Home By : Joseph Then
It is easy to make your own fuel at home. You need a few simple supplies, all of which are readily available at your hardware stores. Gather up 1 liter of vegetable oil, antifreeze, and lye.
11). Biotechnology Timeline: Important Events And Discoveries In Biotechnology By : George Royal
The Age of biotechnology arrives with “somatostatin” - a human growth hormone-releasing inhibitory factor, the first human protein manufactured in bacteria by Genentech, Inc. A synthetic, recombinant gene was used to clone a protein for the first time.
Genentech, Inc. and The City of Hope National ...
12). Capacitor: An Overview By : James Monahan
Anybody in the field of electronics would doubtless be familiar with a capacitor, but what exactly is it?
13). A History of Elasticity By : James Monahan
Man has, since the early times, found out how useful elastic materials are. And today’s man has improved on this idea and constantly finds ways to make more elastic materials to suit his everyday needs.
14). What Is The Element Molybdenum Used For? By : Gray Rollins
Molybdenum is from the Greek word molybdos meaning “lead like.” It is directly mined and is a byproduct of copper mining. It was used very infrequently up until the 19th century when Schneider and Co decided to use Molybdenum as an alloying agent in steel. Today there are many uses of molybdenum.
Molybdenum is still used as an alloy ag...
15). Is Switchgrass a Viable Energy Crop? By : Kael
Switchgrass has long been a staple crop of farmers. It is used as fodder for farm animals, fuel, and electrical needs, as a buffer strip and soil erosion control.
However, when President Bush introduced The Biofuels Initiative during his 2006 state of the nation address, he moved this native prairie grass’ use as an energy crop to the...
16). Many Uses of Metal Detectors By : James Hunt
Have you ever lost something at the beach or at a park and wondered for weeks what happened to it? Chances are that someone was walking with the ingenious invention...
17). The Tale of the Humble Popcorn By : Sam Vaknin
Corn pollen more than 80,000 years old was found in Mexico. Proper popcorn was known in China, Sumatra, and India for at least 5000 years. Popped popcorn and kernels 5600 years old were discovered in the "Bat Cave" in New Mexico in 1948-1950. Popcorn kernels - ready to pop - were unearthed in ancient Peruvian tombs. In a cave is southern Utah, fluf...
18). What is a Water Softener? By : James Hunt
It seems a little strange that water is soft or hard. However, these are two recognized types of water. A water softener is a machine that removes certain elements from hard water, thus softening it and making it a little better to use...
19). Asteroids and Earth Impacts By : Herbert Young
Science Fiction movies present stories about meteor impacts on the Earth. Is this possible? If so, when will it happen?
20). Biodiesel and You By : Joseph Then
The idea of using an all natural biodegradable fuel source may seem a bit too science fiction or Hollywood for the average person.
21). Making Biodiesel For Fun and Savings By : Joseph Then
All of us have a little chemist in us that likes to come out and play. Experimenting with different concoctions is part of what makes cooking so much fun, but can you imagine a chemistry experiment that could end up saving you thousands of dollars on your gasoline bills?
22). Beating the High Price of Gasoline with Biodiesel By : Joseph Then
With the price of traditional fuel rising faster everyday, people everywhere are looking for alternatives. Electric cars were once touted as the way to save the environment and beat the cost of gasoline, but they are so expensive that very few people can afford to save money by purchasing one.
23). Pump It Up! By : James Monahan
24). The Biology 30 Curriculum By : Barney Garcia
In Science students learn about the physical world, ecology and technology. Studying science also helps develop an understanding of the many applications of science in daily life.
25). Up, Up and Away! Look Forward to Space Travel by 2008 By : Sarah Deak
Those who hate to fly would not be thrilled to hear about one of the newest ways to travel: spaceship.
26). When do children really understand what "Adoption" means? By : Jeff Conrad
Today most Scientists & Adoption Agents are of the opinion that parents should inform their adopted children as soon as possible about their status. Only an early introduction to the subject will give parents and children a chance to develop an open and trusting relationship between each other.
27). The Invention Of The Atomic Clocks By : Steve Gink
Louis Essen was born in 1908 in a small city in England called Nottingham. His childhood was typical of the time and he pursued his education with enjoyment and dedication. At the age of 20 Louis graduated from the University of Nottingham, where he had been studying. It was at this time that his career started to take off, as he was invited to joi...
28). The Interesting Eagle Nebula By : David Craig
The Eagle Nebula, associated with open star cluster M16 of the Milky Way, was named for its dramatic similarity to the appearance of an eagle. Located 7000 light years from Earth, it is a component of the constellation Serpens (for Serpent). It was discovered in 1746 by P.L. de Cheseaux but it was not until twenty years later that the famous astron...
29). The Invisible Ether and Michelson Morley By : Mike Strauss
The concept of the invisible ether or 'aether' is an old concept dating to the time of the ancient Greeks. They considered the ether as that medium which permeated all of the universe and even believed the ether to be another element. Along with Earth, Wind, Fire and Water Aristotle proposed that the ether should be treated as the fifth element or ...
30). The Hitchhiker’s Guide to Elliptical Galaxies By : James Monahan
Elliptical galaxies are ellipsoidal agglomerations of stars, which usually do not contain much interstellar matter, and look smoothly like small wads when viewed through a telescope.
The Audubon Insectarium in New Orleans just bred these rare pink katydids and I find them captivating/delicious.
Pink Katydid Facts:
• The parental katydids, both pink, were brought to Audubon Insectarium during the summer of 2008 as donations by visitors.
• The pink katydids were sent off to Cokie Bauder, Manager of Animal Collections at the Insectarium's Insect Rearing Facility, for supervision and care.
• The pink katydids are oblong-winged katydids, Amblycorypha oblongifolia.
• This unusual katydid coloration was first written about in a scientific article in
• The first and only available scientific research paper on the genetics of this coloration and captive breeding was conducted by Dr. Joseph Hancock and published in February 1916.
• No scientific records appear to exist for the offspring of two pink parents. It appears that Hancock was only able to successfully produce viable offspring from crosses of one pink female to one green male.
[Twisted-Python] Deferred usage clarification
andrew-twisted at puzzling.org
Sun Jun 1 20:31:39 EDT 2003
On Sun, Jun 01, 2003 at 08:05:02PM -0400, Gary Poster wrote:
> Hi. A newbie question: I'd like a bit of clarification in Deferred
> usage. More specifically, it would be nice if one of the examples on
> the Twisted Deferred howto page used a real blocking call rather than a
> So, to see if I'm on the right track, I'd like to modify the third code
> example to use a sleep, to make it better parallel the first code
> snippet on the howto page.
> First, here it is from the page, without modification:
> Here's my modification. Is this the right usage?
No -- see below.
> class Getter:
> def getData(self, x):
> # this won't block
> d = defer.Deferred()
> reactor.callLater(0, self.doWork, x, d)
> return d
> def doWork(self, x, deferred):
> # churn, churn, churn
> deferred.callback(x * 3)
You're misunderstanding the purpose of Deferreds. They don't do any magic
to make blocking calls happen asynchronously.
A Deferred is simply a result that hasn't arrived yet. When the result of
some function can take some arbitrary amount of time to happen, it will
return a Deferred, rather than holding up the execution of the rest of the
program. A Deferred is a promise that a result (or failure) will arrive
eventually, and provides a convenient abstraction to attach callbacks to
that is far more usable than riddling your code with functions that take a
method to callback when they're done.
How the result arrives later is entirely up to the code that returns the
Deferred. It might simply send an HTTP request, and wait for the result to
arrive off the network; it might query a database; it might split the work
up into small chunks that it schedules using callLater; or it might just use
deferToThread. All that matters is that at some point "d.callback(result)"
or "d.errback(failure)" is called (or you could call d.setTimeout(...) if
you aren't sure you want to wait indefinitely).
Does this make sense?
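As an aside (my own stdlib-only sketch, not Twisted code), the core idea of a Deferred, a placeholder you attach callbacks to that fire whenever the result finally arrives, can be illustrated in a few lines:

```python
# A toy Deferred: callbacks may be added before or after the result arrives.
# Simplified illustration only; Twisted's real Deferred also handles errbacks,
# chaining of nested Deferreds, timeouts, etc.
class ToyDeferred:
    def __init__(self):
        self._callbacks = []
        self._fired = False
        self._result = None

    def addCallback(self, fn):
        if self._fired:
            self._result = fn(self._result)  # result already here: run now
        else:
            self._callbacks.append(fn)       # result still pending: run later
        return self

    def callback(self, result):
        self._fired = True
        self._result = result
        for fn in self._callbacks:
            self._result = fn(self._result)  # each callback feeds the next

d = ToyDeferred()
d.addCallback(lambda x: x * 3)      # attach before the result exists
d.callback(14)                      # later, the result "arrives"
d.addCallback(lambda x: print(x))   # already fired, so runs at once: prints 42
```

Note that nothing here makes the work asynchronous by itself; the Deferred only manages what happens once some other mechanism (the reactor, a thread, the network) delivers the result.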
While the vast majority of scientists today assert that humans contribute to the earth’s increasing temperatures, there is still a debate going on among politicians and journalists over the facts for global warming.
Facts For Global Warming
- The 10 hottest years recorded were in the past 12 years. Since record-keeping began in 1880, the years between 1997 and 2008 were the warmest the planet has seen in quite some time.
- Global average temperatures have increased by 1.4 degrees Fahrenheit since 1880.
- Today’s CO2 levels are the highest they’ve been in the last 650,000 years.
- CO2 levels have increased by 43% since the Industrial Revolution. Before this time, the average concentration of carbon dioxide was 260-280 parts per million (ppm). In 2008, levels hit 388 ppm, a record high.
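The 43% figure can be checked directly from the numbers quoted above (a quick illustrative calculation; using the midpoint of the 260-280 ppm range is my own choice):

```python
# Check the claimed ~43% rise in CO2 from the figures quoted above.
pre_industrial_ppm = 270   # midpoint of the 260-280 ppm pre-industrial range
level_2008_ppm = 388       # 2008 record high

increase_pct = 100 * (level_2008_ppm - pre_industrial_ppm) / pre_industrial_ppm
print(f"{increase_pct:.0f}%")  # ~44%, consistent with the ~43% claim
```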
Facts Against Global Warming
Truth be told, the only “facts” against global warming are myths perpetrated by individuals and groups with political or social agendas. These people are trying to keep you in the dark about what’s to come so they can continue to profit from the sale of foreign oil.
- Myth #1: It’s not your fault. This isn’t true; scientific evidence shows a direct correlation between the levels of carbon dioxide in our atmosphere (which have increased primarily from humans’ use of fossil fuels) and the increasing temperatures our planet is experiencing.
- Myth #2: It’s perfectly natural. While the earth does go through periods of natural temperature fluctuation, the major increases in temperature and CO2 levels we are facing go far beyond the normal realm. Many experts believe that the world’s average temperature will increase by 2.5 to 10.4 degrees Fahrenheit by 2100.
- Myth #3: Don’t worry; be happy. Global warming doesn’t mean the end of winter and a permanent tropical climate. On the contrary, the more correct term is “climate change” which means more frequent and intense storms, droughts, wars breaking out over water, and mass migrations on a biblical scale as the waters slowly drown out coastal areas.
The Truth Can’t Be Denied
Author: Brown, Nathan. A Cooler Climate, 2008.
The Uses of Set Theory
The Mathematical Intelligencer, Vol. 14, No. 1 (1992), 63-69.
In this article Roitman gives six examples from analysis, algebra, and algebraic topology whose results make use of modern set-theoretic techniques. These examples are
The ideal of compact operators
The purely analytic question "Is the ideal of compact operators on Hilbert space the sum of two properly smaller ideals?" is equivalent to purely set-theoretic combinatorics.
A characterization of free groups
The proof that "an Abelian group is free if and only if it has a discrete norm" exploits the use of model theory within set theory.
The fundamental group
The proof that "the fundamental group of a nice space is either finitely generated or has cardinality of the first uncountable cardinal" uses methods related to consistency results.
The Hawaiian Earring
Questions in strong homology theory are related to consistency results and the continuum hypothesis.
A Banach space with few operators
An example of a nonseparable Banach space where every linear operator is a scalar multiplication plus an operator with separable range is connected to set theory through infinite combinatorics on the first uncountable ordinal.
The free left-distributive algebra on one generator
Questions on free left-distributive algebras on one generator are connected to large cardinal theory.
In the conclusion Roitman writes:
I have presented a few theorems of mainstream mathematics that have been proved by set-theoretic techniques. In some cases we know that set theory is necessary; in other cases it has certainly proved convenient. The theorems presented are just a small percentage of such applications. One suspects that the existing applications are just a small fraction of the applications to be found in the near future. My thesis has been that set theory is an important tool of mathematics, whose use extends far outside the obvious.
Science Fair Project Encyclopedia
Four color theorem
The four color theorem states that every possible geographical map can be colored using no more than four colors in such a way that no two adjacent regions receive the same color. Two regions are called adjacent if they share a border segment, not just a point.
It is obvious that three colors are inadequate, and it is not difficult, relatively speaking, to prove that five colors are sufficient to color a map.
The four color theorem was the first major theorem to be proven using a computer, and the proof is not accepted by all mathematicians because it would be unfeasible for a human to verify by hand. Ultimately, one has to have faith in the correctness of the compiler and hardware executing the program used for the proof.
The lack of mathematical elegance was another factor, and to paraphrase comments of the time, "a good mathematical proof is like a poem—this is a telephone directory!"
The conjecture was first proposed in 1852 when Francis Guthrie, while trying to color the map of counties of England, noticed that only four different colors were needed. At the time, Guthrie was a student of Augustus De Morgan at University College. (Guthrie graduated in 1850, and later became a professor of mathematics in South Africa). According to de Morgan:
- A student of mine [Guthrie] asked me today to give him a reason for a fact which I did not know was a fact - and do not yet. He says that if a figure be anyhow divided and the compartments differently coloured so that figures with any portion of common boundary line are differently coloured - four colours may be wanted, but not more - the following is the case in which four colours are wanted. Query cannot a necessity for five or more be invented...
The first published reference is found in Arthur Cayley's On the colourings of maps, Proc. Royal Geography Society 1, 259-261, 1879.
There were several early failed attempts at proving the theorem. One proof was given by Alfred Kempe in 1879, which was widely acclaimed; another was given by Peter Tait in 1880. It wasn't until 1890 that Kempe's proof was shown incorrect by Percy Heawood, and 1891 that Tait's proof was shown incorrect by Julius Petersen; each false proof stood unchallenged for 11 years.
In 1890, in addition to exposing the flaw in Kempe's proof, Heawood proved that all planar graphs are five-colorable.
However, it was not until 1977 that the four-color conjecture was finally proven by Kenneth Appel and Wolfgang Haken at the University of Illinois. They were assisted in some algorithmic work by J. Koch.
The proof reduced the infinitude of possible maps to 1,936 configurations (later reduced to only 1,476) which had to be checked one by one by computer. The work was independently double-checked with different programs and computers. However, the proof also rested on over 500 pages of handwritten verification of counter-counter-examples, much of it carried out by Haken's teenage son checking graph colorings. The computer program ran for hundreds of hours. Essentially the proof stated that the theorem must be correct because all potential counterexamples were disproven, without giving a concrete logical argument for why. However, it provided a firm basis for later work.
In 1996, Neil Robertson, Daniel Sanders , Paul Seymour and Robin Thomas produced a similar proof which required checking 633 special cases. This new proof also contains parts which require the use of a computer and are impractical for humans to check alone.
In 2004, Benjamin Werner and Georges Gonthier formalized a proof of the theorem inside the Coq theorem prover. This removes the need to trust the various computer programs used to verify particular cases — it is only sufficient to trust the Coq prover.
Not for map-makers
The four colour theorem does not arise out of and has no origin in practical cartography. According to Kenneth May, a mathematical historian who studied a sample of atlases in the Library of Congress, there is no tendency to minimise the number of colors used. Maps utilizing only four colors are rare, and those that do usually require only three.
Textbooks on cartography and the history of cartography don't mention the four colour theorem, even though map colouring is a subject of discussion. Generally, mapmakers say they are more concerned about coloring maps in a balanced fashion, so that no single color dominates. Whether they use 4 or 5 colors is not their primary concern.
Formal statement in graph theory
To formally state the theorem, it is easiest to rephrase it in graph theory. It then states that the vertices of every planar graph can be colored with at most four colors so that no two adjacent vertices receive the same color. Or "every planar graph is four-colorable" for short. Here, every region of the map is replaced by a vertex of the graph, and two vertices are connected by an edge if and only if the two regions share a border segment.
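To make the graph formulation concrete, here is a small backtracking search (an illustrative sketch, not the method used in the actual proof) applied to the wheel graph W5, a planar graph that genuinely needs all four colors:

```python
def color_graph(adj, k):
    """Try to color vertices with colors 0..k-1 by backtracking.
    Returns a color assignment, or None if no proper k-coloring exists."""
    verts = sorted(adj)
    colors = {}

    def backtrack(i):
        if i == len(verts):
            return True
        v = verts[i]
        for c in range(k):
            # A color is legal if no already-colored neighbor uses it.
            if all(colors.get(u) != c for u in adj[v]):
                colors[v] = c
                if backtrack(i + 1):
                    return True
                del colors[v]
        return False

    return colors if backtrack(0) else None

# Wheel W5: hub 0 joined to the 5-cycle 1-2-3-4-5. Planar, and because the rim
# is an odd cycle the hub forces a fourth color.
wheel = {0: [1, 2, 3, 4, 5],
         1: [0, 2, 5], 2: [0, 1, 3], 3: [0, 2, 4],
         4: [0, 3, 5], 5: [0, 1, 4]}

assert color_graph(wheel, 3) is None      # three colors are not enough
assert color_graph(wheel, 4) is not None  # four suffice, as the theorem guarantees
```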
One can also consider the coloring problem on surfaces other than the plane. The problem on the sphere is equivalent to that on the plane. For closed (orientable or non-orientable) surfaces with positive genus, the maximum number p of colors needed depends on the surface's Euler characteristic χ according to the formula

p = ⌊(7 + √(49 - 24χ)) / 2⌋,

where the outermost brackets denote the floor function. The only exception to the formula is the Klein bottle, which has Euler characteristic 0 and requires 6 colors. This was initially known as the Heawood conjecture and proved as The Map Color Theorem by Gerhard Ringel and J. T. W. Youngs in 1968.
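The formula is easy to evaluate; a short sketch (the surface examples below are standard values, with the Klein bottle treated as the stated exception):

```python
from math import floor, sqrt

def heawood_colors(chi):
    """Maximum number of colors needed on a closed surface with Euler
    characteristic chi, by the Heawood formula. The Klein bottle (chi = 0)
    is the lone exception: the formula gives 7, but 6 colors suffice."""
    return floor((7 + sqrt(49 - 24 * chi)) / 2)

assert heawood_colors(2) == 4    # sphere: recovers the four color theorem
assert heawood_colors(0) == 7    # torus: seven colors are needed
assert heawood_colors(-2) == 8   # double torus
```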
Real world counterexamples
In the real world, not all countries are contiguous (e.g. Alaska as part of the United States). If the chosen coloring scheme requires that the territory of a particular country must be the same color, four colors may not be sufficient. Conceptually, such a constraint makes the corresponding graph non-planar, and thus the four color theorem no longer applies. For instance, consider a simplified map:
In this map, the two regions labeled A belong to the same country, and must be the same color. This map then requires five colors, since the two A regions together are contiguous with four other regions, each of which is contiguous with all the others. If A consisted of three regions, six or more colors might be required; one can construct maps that require an arbitrarily high number of colors.
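A minimal sketch of this counterexample: merging the two A regions into one vertex makes every region adjacent to every other, i.e., the complete graph K5, which a brute-force check confirms needs five colors. The region labels follow the simplified map described above:

```python
def colorable(adj, k):
    """Return True if the graph admits a proper k-coloring (brute-force backtracking)."""
    verts = sorted(adj)

    def backtrack(i, colors):
        if i == len(verts):
            return True
        v = verts[i]
        for c in range(k):
            if all(colors.get(u) != c for u in adj[v]):
                colors[v] = c
                if backtrack(i + 1, colors):
                    return True
                del colors[v]
        return False

    return backtrack(0, {})

# Merging the two A regions yields one vertex A adjacent to B, C, D, E,
# which are all mutually adjacent: the complete graph K5.
k5 = {v: [u for u in "ABCDE" if u != v] for v in "ABCDE"}

assert not colorable(k5, 4)   # four colors no longer suffice
assert colorable(k5, 5)       # five colors do
```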
Theorem proved: four colours suffice
The proof of the Four Color Theorem is not simple; it involved more than 1,000 hours of computing time on a 1977 computer checking more than 100,000 particular cases. To some mathematicians this is unacceptable, as the proof cannot be reviewed; but would a 1,000-page hand-written proof be any more checkable?
What the proof does not provide is any insight as to why the conjecture is true. The theorem is true, but unexplained. This is another, stronger reason that critics call the proof inelegant.
- Appel, Kenneth & Haken, Wolfgang & Koch, John, Every Planar map is Four Colorable, Illinois: Journal of Mathematics: vol.21: pp.439-567, December 1977.
- Appel, Kenneth & Haken, Wolfgang, Solution of the Four Color Map Problem, Scientific American, vol.237 no.4: pp.108-121, October 1977.
- Appel, Kenneth & Haken, Wolfgang, Every Planar Map is Four-Colorable. Providence, RI: American Mathematical Society, 1989.
- O'Connor and Robertson, History of the Four Color Theorem, MacTutor project, http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/The_four_colour_theorem.html
- Saaty and Kainen, The Four Color Problem: Assaults and Conquest (ISBN 0-486-65092-8)
- Robin Thomas, An Update on the Four-Color Theorem (PDF File), Notices of the American Mathematical Society, Volume 45, number 7 (August 1998)
- Robin Thomas, The Four Color Theorem, http://www.math.gatech.edu/~thomas/FC/fourcolor.html
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
The North America nebula on the sky can do what the North America continent on Earth cannot: form stars. Specifically, in analogy to the Earth-confined continent, the bright part that appears as Central America and Mexico is actually a hot bed of gas, dust, and newly formed stars known as the Cygnus Wall. The above image shows the star-forming wall lit and eroded by bright young stars, and partly hidden by the dark dust they have created. The part of the North America nebula (NGC 7000) shown spans about 15 light years and lies about 1,500 light years away toward the constellation of the Swan (Cygnus).
Credit & Copyright:
The pattern of seasonal maximum temperature odds across Australia is a result of the combined effects of above-average temperatures in the Indian Ocean immediately to the west of Australia and cooler-than-average waters in the central to western equatorial Pacific, in association with the decaying La Niña pattern.

Averaged over winter, the chances are between 60 and 70% for above-normal maximum temperatures in southern WA (see map). So for every ten years with ocean patterns like the current one, about six or seven winter periods are expected to be warmer than average over this part of the country, with about three or four being cooler.

Over the rest of the country, the chances of exceeding the three-month median maximum temperature are mainly between 40 and 60%. So the chances of being warmer than normal are about the same as the chances of being cooler.

Outlook confidence is related to how consistently the Pacific and Indian Oceans affect Australian temperatures. During winter, history shows this effect on maximum temperatures to be moderately consistent over most of the country (see background information).

Similar to the maximum temperature outlook, the chances of above-median minimum temperatures over the winter period are between 60 and 70% in southwestern Australia (see map). In contrast, a region in northwest Australia has an increased chance of cooler conditions, with only a 35 to 40% chance of exceeding the winter median minimum temperature.

Over the rest of the country the odds are in the 40 to 60% range, so the chances of being warmer than average are similar to the chances of being cooler.

History shows the oceans' effect on minimum temperatures in winter to be moderately consistent over large parts of the country, except over Victoria, Tasmania and parts of southeast SA, where the influence is only weakly or very weakly consistent.
GLUT stands for OpenGL Utility Toolkit. Mark J. Kilgard implemented it to enable the construction of OpenGL applications that are truly window system independent. Thanks to GLUT, we can write applications without having to learn about X windows or Microsoft’s own window system. Kilgard implemented the version for X windows, and later Nate Robins ported it to Microsoft Windows. Thanks to both, you did a great job.
With GLUT you can open a window for OpenGL rendering with 5 lines of code! Another 3 or 4 lines and you can add keyboard and mouse handling to your application. GLUT really makes things simple, hence it is very useful for learning and for building small applications. Although GLUT is not being maintained anymore it still serves its purpose.
The GLUT distribution comes with lots and lots of examples so after you read through the basics in here you’ll have plenty of material to go on. Check out GLUT's page.
In this tutorial I’ll introduce you to the basics of building an application using GLUT. This tutorial won’t introduce fancy visual effects in order to keep the code as simple as possible. I’ll use OpenGL 2.0 since it is much simpler and avoids complicating the core subject of the tutorial: GLUT.
There are open source versions of GLUT, such as freeGLUT and OpenGLUT. They all kept the API so 99.9% of what will be presented in this tutorial is still valid. Nonetheless these new versions do have some extensions that make it worth a try. Check out the extensions in freeGLUT here.
Please do comment if something is not completely clear. Your feedback is important.
- Moving the Camera I
- Advanced Keyboard
- Moving the Camera II
- The Code So Far
- Moving the Camera III
- The Code So Far II
- Avoiding the Idle Func
Yahaya, Muhammad and Yap, Chi Chin and Salleh, Muhamad Mat (2009) Energy Conversion: Nano Solar Cell. INTERNATIONAL WORKSHOP ON ADVANCED MATERIAL FOR NEW AND RENEWABLE ENERGY . ISSN 0094-243X
Full text is not hosted in this archive but may be available via the Official URL, or by requesting a copy from the corresponding author.
Official URL: http://apps.isiknowledge.com/full_record.do?produc...
Problems of fossil-fuel-induced climate change have sparked a demand for sustainable energy supply for all sectors of economy. Most laboratories continue to search for new materials and new techniques to generate clean energy at affordable cost. Nanotechnology can play a major role in solving the energy problem. The prospect for solar energy using Si-based technology is not encouraging. Si photovoltaics can produce electricity at 20-30 cent/kWhr with about 25% efficiency. Nanoparticles have a strong capacity to absorb light and generate more electrons for current, as discovered in the recent work on organic and dye-sensitized cells. Using cheap preparation techniques such as screen-printing and self-assembly growth, organic cells show a strong potential for commercialization. The Thin Films research group at National University Malaysia has been actively involved in these areas, and in this seminar, we will present a review of work on nanomaterials for solar cells and particularly on hybrid organic solar cells based on ZnO nanorod arrays. The organic layer consisting of poly[2-methoxy-5-(2-ethylhexyloxy)-1,4-phenylenevinylene] (MEHPPV) and [6,6]-phenyl C61-butyric acid 3-ethylthiophene ester (PCBE) was spin-coated on ZnO nanorod arrays. ZnO nanorod arrays were grown on FTO glass substrates which were pre-coated with ZnO nanoparticles using a low temperature chemical solution method. A gold electrode was used as the top contact. The device gave a short circuit current density of 2.49 x 10(-4) mA/cm(2) and an open circuit voltage of 0.45 V under illumination of a projector halogen light at 100 mW/cm(2).
Subjects: Technology > Nanotechnology and environmental applications
Deposited On: 11 Feb 2010 08:04
Last Modified: 11 Feb 2010 08:04
Rocky extrasolar planets thought to be half frozen and half scorched might instead rock back and forth, creating large swaths of twilight with temperatures suitable for life.
Because of gravitational tugs with the objects they orbit, rocky bodies often settle into trajectories in which they always show the same face to their hosts. Such 'tidally locked' exoplanets would thus seem like bad candidates for life, since the hemisphere facing their host stars would roast and the dark side would freeze.
But a new computer model by Anthony Dobrovolskis of NASA Ames Research Center in California, US, suggests this is not always so. He finds that such planets can rock to and fro if they travel on elongated, or eccentric orbits, creating a 'twilight zone' that could be hospitable to life.
The Moon experiences a similar rocking motion. It always shows the same face to Earth, taking the same amount of time to rotate around its axis as it does to circle our planet once. However, because the Moon's path around the Earth is not perfectly circular, its orbital speed is sometimes faster or slower than its rotational speed. The difference between the two motions causes the Moon to rock slightly.
"If you're standing on the Moon, you'll see the Earth rock back and forth a little bit," Dobrovolskis told New Scientist.
He says extrasolar planets on very elongated orbits will experience pronounced rocking motions, or librations. Rather than being worlds of fire and ice, these 'rock-a-bye' planets could have much more temperate climes than previously thought.
If the planet rocks by 90° or more, "there is no permanent day or permanent night side anymore," Dobrovolskis told New Scientist. "The whole thing becomes a twilight zone."
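The size of this rocking motion, the libration in longitude, can be sketched from the orbit's eccentricity: to first order (from the equation of center) its amplitude is roughly 2e radians. The eccentricity values below are illustrative assumptions, not figures from Dobrovolskis's model:

```python
from math import degrees

def libration_amplitude_deg(e):
    """First-order amplitude of libration in longitude for a tidally locked
    body on an orbit of eccentricity e (equation of center, ~2e sin M)."""
    return degrees(2 * e)

# The Moon (e ~ 0.055) rocks by only a few degrees...
moon = libration_amplitude_deg(0.055)
print(f"Moon: ~{moon:.1f} degrees")              # ~6.3 degrees

# ...but a hypothetical exoplanet on a highly elongated orbit rocks far more,
# sweeping a wide twilight swath across its surface.
planet = libration_amplitude_deg(0.5)
print(f"Eccentric exoplanet: ~{planet:.0f} degrees")   # ~57 degrees
```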
The effect could increase the likelihood of life on rocky worlds orbiting small, dim stars called red dwarfs, said NASA Ames astronomer Jack Lissauer. That's because the dwarfs' habitable zone - where liquid water, and potentially life, could exist - lies so close to the small stars that any planets there would almost certainly be tidally locked to their hosts.
The results also have implications for attempts to directly observe new worlds, Dobrovolskis says. He says astronomers should look for planets whose temperatures are relatively even all across their surfaces, not just for planets sporting one very hot and one very cold hemisphere.
Journal Reference: Icarus (vol 192, p 1)
Have your say
Twilight Zones On Scorched Planets
Tue Dec 11 10:11:59 GMT 2007 by David Mcculloch
Wouldn't this be a relatively minor effect compared to the massive hurricane on the sunward face if the planet has an atmosphere?
NS had an article on this some time ago, under the general heading of the goldilocks zones of red dwarves. Perhaps you've forgotten.
Thu Dec 13 12:12:00 GMT 2007 by John
What if we lived between the light and dark parts of the planet, where the hot and cold meet? The temperature for life should be ok there?
Thu Dec 13 15:49:00 GMT 2007 by David Mcculloch
I'm sure that's the best bet, John, even if it is a bit windy! :)
Twilight Zone On Tidally Locked Planets-heat Transfer Across Zone
Mon Dec 31 15:53:08 GMT 2007 by Ian Star
As many readers will recall, NS had an article on such a tidally locked planet years ago. Heat transfer was proposed not only by atmosphere but by sub-surface ice seas, back across from the colder dark side to the hotter lighted side. This would mean that the atmosphere could be sufficiently thin to allow light for photosynthesis etc., and yet be sufficiently thick to maintain temperatures for life. All this is from an article in NS years ago. Ian Star
This thread is in response to this site suggestion
The basic difference between a coding language and a scripting language is that the coded languages are compiled and the scripted languages are interpreted. Naturally other differences exist, but I'm not trying to be comprehensive or complete here.
C++ is probably the most commonly used coding language in Linux. QT is the C++ toolkit for KDE, and GTK (Gimp Tool Kit -- edit: uses C) is used for Gnome, XFCE and LXDE. These tool kits are provided as a sort of "leg-up" for coding apps for these specific DEs.
There are IDEs (Integrated Development Environments) available for QT and GTK. QT Designer and QT Creator are available in the repos for creating KDE apps, and Glade Interface Designer and Anjuta are available for creating GTK apps. For other IDEs and tools, look in Synaptic > Sections > Development.
Scripting languages come with the interpreters they need. In addition, there may be development tools available, as in IDLE and Eric for Python.
The text editor is a very commonly used tool. It can be used in both coding and scripting. There are several available in the repos.
There are several kinds of fuel cells, but in principle, every fuel cell operates something like a battery. It will produce energy in the form of electricity and heat as long as fuel is supplied.

A fuel cell consists of two electrodes sandwiched around an electrolyte. Oxygen passes over one electrode and hydrogen over the other, generating electricity, water and heat. Hydrogen fuel is fed into the "anode" of the fuel cell. Oxygen (or air) enters the fuel cell through the cathode. Encouraged by a catalyst, the hydrogen atom splits into a proton and an electron, which take different paths to the cathode. The proton passes through the electrolyte. The electrons create a separate current that can be utilized before they return to the cathode to be reunited with the hydrogen and oxygen in a molecule of water.

A fuel cell system which includes a "fuel reformer" can utilize the hydrogen from any hydrocarbon fuel, from natural gas to methanol or even gasoline. Since the fuel cell relies on chemistry and not combustion, emissions from this type of system would still be much smaller than emissions from the cleanest fuel combustion processes.

This information comes from Fuel Cells 2000, an online reference. See www.fuelcells.org.
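The chemistry described above also sets a theoretical ceiling on the cell's voltage. As a back-of-the-envelope sketch (not taken from the Fuel Cells 2000 material), the ideal voltage of a hydrogen fuel cell follows from the Gibbs free energy of the overall reaction 2H2 + O2 -> 2H2O:

```python
# Ideal (reversible) voltage of a hydrogen fuel cell: E = -dG / (n * F)
FARADAY = 96485        # coulombs per mole of electrons
DELTA_G = -237_100     # J/mol: Gibbs free energy for H2 + 1/2 O2 -> H2O (liquid, 25 C)
N_ELECTRONS = 2        # electrons transferred per H2 molecule

ideal_voltage = -DELTA_G / (N_ELECTRONS * FARADAY)
print(f"Ideal cell voltage: {ideal_voltage:.2f} V")   # prints 1.23 V
```

Real cells deliver less than this 1.23 V ceiling because of internal losses, which is part of why fuel cells also produce heat.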
The Environment of the Earth in the Past
Calculations show that the Earth had large oceans very early in its history. During this time the Earth should have been frozen because of the weak luminosity of the sun at that time; nevertheless, the fact that there was a large and vigorous ocean suggests that the Earth must have had a large and vigorous atmosphere in place to keep the surface warm. The warm early ocean of Earth was ideal for the development of life. The earliest fossils show that there was life on Earth at least 3.8 billion years ago (see the geologic record for the corresponding epochs of the Earth's history).
The atmosphere of the Earth came from, and continues to come from volcanoes, which produce a great deal of water vapor, carbon dioxide and other gases. Over the course of time the composition of the atmosphere has changed significantly. In particular, the earliest atmosphere was very rich in carbon dioxide (like present Mars and Venus). The present atmosphere is 80% Nitrogen and 20% Oxygen. It was life on Earth which was largely responsible for transforming the content of the Earth's atmosphere to its present composition.
The changes in the atmosphere, as well as the changes to the locations of the continents, have contributed to very significant changes in the climate of the Earth. The surface of the Earth has seen extremely high temperatures, as well as extremely low temperatures. Today's concern about global warming is part of a long history of Earth's variability with regard to climate.
The nearby or "local" universe, an area that extends about 380 million light-years away from Earth, contains many galaxy clusters, i.e., gravitationally bound groups of about 100 to more than 1000 galaxies. These clusters are connected with each other and make up a huge network of galaxies called the "large-scale structure" of the Universe. Such configurations raise fundamental questions: When and how did these structures form in the history of the Universe?
Astronomers think that the Universe started out as an almost homogeneous mass that spread uniformly. Small fluctuations in the initial mass distribution increased by gravity over the 13.7 billion years of the Universe's age and produced the recent array of clusters. Because clusters contain a larger number of old and massive galaxies than those found in isolated galaxies, astronomers speculate that developing clusters may significantly affect the evolution of their member galaxies. Therefore, understanding the details of cluster formation (Note 1) is an essential step in addressing key issues of structure formation and galaxy evolution. A necessary part of this process is an investigation of all stages of cluster formation from beginning to end, which is why the current team gave particular emphasis to studying the birth of clusters.
The team focused on this phase of cluster formation by searching for very distant galaxies that existed in the early Universe. Such observations present challenges for a couple of reasons. First, the light from more distant galaxies is faint and difficult to detect. Second, protoclusters in the early Universe are rare. The use of the Subaru Telescope allowed the team to overcome these difficulties. The telescope not only has an 8.2 m primary mirror with large light-gathering power but also offers the advantage of the Subaru Prime Focus Camera (Suprime-Cam) with a wide-field imaging capability. These features are particularly beneficial for discovering faint and rare objects in the distant Universe.
The team chose to observe the Subaru Deep Field, a 0.25 square-degree-wide field in the northern sky near the constellation Coma Berenices. The Subaru Deep Field is one of the most suitable regions for finding protoclusters in the early Universe; the area is not only deep and wide but has been intensively observed with the Subaru Telescope, which has detected very faint galaxies. When the team searched for distant galaxies in the Subaru Deep Field and investigated their distribution, they found a region with a surface number density five times greater than the average (Fig. 1).
The astronomers then used Subaru's Faint Object Camera and Spectrograph (FOCAS) to conduct a spectroscopic observation, which confirmed that most of the galaxies located in the highly dense region lay in a narrow area in the line-of-sight. This concentration of galaxies could not be explained by chance. On the basis of their observations with the Subaru Telescope, the team confirmed the existence of a protocluster 12.72 billion years ago (Fig. 2)--the most distant protocluster found with its distance established by spectroscopic observations (Note 2). The astronomers were able to directly observe this cluster of galaxies at an early stage in galaxy evolution, when structures were beginning to form in the early Universe. This discovery will be an important step on the way to understanding structure formation and galaxy evolution.
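The quoted 12.72 billion years is a light-travel (lookback) time. As a rough cross-check, the lookback time to a high-redshift source can be integrated numerically in a flat Lambda-CDM cosmology; the redshift and cosmological parameters below are illustrative assumptions, not values taken from the paper:

```python
from math import sqrt

def lookback_gyr(z, h0=70.0, omega_m=0.3, omega_l=0.7, steps=10_000):
    """Lookback time (Gyr) to redshift z in a flat Lambda-CDM universe:
    t = (1/H0) * integral_0^z dz' / ((1+z') E(z')), by the midpoint rule."""
    hubble_time = 9.78 / (h0 / 100.0)   # 1/H0 in Gyr
    dz = z / steps
    total = 0.0
    for i in range(steps):
        zp = (i + 0.5) * dz
        e_z = sqrt(omega_m * (1 + zp) ** 3 + omega_l)
        total += dz / ((1 + zp) * e_z)
    return hubble_time * total

t = lookback_gyr(6.0)   # a redshift near 6, assumed here for illustration
print(f"Lookback time to z = 6: {t:.1f} Gyr")   # roughly 12.6 Gyr with these parameters
```

With these assumed parameters the result lands in the same ballpark as the 12.72 billion years quoted above; the exact figure depends on the measured redshift and the cosmology adopted by the authors.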
The team will continue their research with the Subaru Telescope's forthcoming Hyper-Suprime Camera (HSC), which has an imaging capability with a field of view seven times wider than Suprime-Cam. The astronomers expect to use HSC to reveal how many protoclusters existed in the early Universe and to provide a better picture of protoclusters in general. Toshikawa summarized the team's intent: "By continually working to find such distant protoclusters, we can understand cluster formation more clearly."
Nobunari Kashikawa, National Astronomical Observatory of Japan
Kazuaki Ota, Kyoto University
Tomoki Morokuma, University of Tokyo
Takatoshi Shibuya, The Graduate University for Advanced Studies, Japan
Masao Hayashi, National Astronomical Observatory of Japan
Tohru Nagao, associate professor, Kyoto University
Linhua Jiang, University of Arizona
Matthew A. Malkan, University of California
Eiichi Egami, University of Arizona
Kazuhiro Shimasaku, University of Tokyo
Kentaro Motohara, University of Tokyo
Yoshifumi Ishizaki, The Graduate University for Advanced Studies, Japan
2. Prior to this research, Ouchi et al. used the Subaru Telescope and discovered the most distant protocluster ever found in 2005. In 2012 Trenti et al. found a protocluster candidate beyond 12.7 billion light-years ago, but spectroscopic observations have not confirmed their distances.
3. This research investigated luminosities and star formation rates.
4. Investigation of other properties such as mass, age and color is necessary to conclude whether the properties of protoclusters are different from those of field galaxies.
5. The interesting structure in Fig. 1 is elongated toward the upper left of the protocluster. A large-scale structure may have been beginning to form in this early Universe. Only the Subaru Telescope could find this large feature.
HW: p 623, #13.1. Modify the third homework from last week to use exception handling to catch and record non-integer inputs. Find the total of all the valid inputs and produce a list of the invalid inputs after reporting the sum. You will need a String array to keep track of the invalid inputs.
HW: p 624, #13.3. Instead of the GUI expected by the problem, use a JOptionPane window. Use exception handling to report both non-integer index requests and out-of-bounds requests. Allow the user to repeat this operation as often as desired; in other words, the main() program should have a loop.
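The two failure modes #13.3 asks about can be separated from the JOptionPane dialogs into a plain method, sketched below (names are mine, not the textbook's), so the exception handling itself is easy to check; the real solution would loop, read the index with JOptionPane.showInputDialog, and show the returned string with JOptionPane.showMessageDialog.

```java
public class SafeLookup {
    // Report either failure mode via exception handling:
    // a non-integer index or an out-of-bounds index.
    static String lookup(int[] data, String indexText) {
        try {
            return "Value: " + data[Integer.parseInt(indexText)];
        } catch (NumberFormatException e) {
            return "Error: \"" + indexText + "\" is not an integer";
        } catch (ArrayIndexOutOfBoundsException e) {
            return "Error: index " + indexText + " is out of bounds";
        }
    }

    public static void main(String[] args) {
        int[] data = {10, 20, 30};
        System.out.println(lookup(data, "1"));    // Value: 20
        System.out.println(lookup(data, "9"));    // Error: index 9 is out of bounds
        System.out.println(lookup(data, "two"));  // Error: "two" is not an integer
    }
}
```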
Week 10: Introduction to Files. (Liang, Chapter 17, through
Print Streams (lightly)) | <urn:uuid:0f9997ea-7756-40f8-8a88-c290322223f9> | 2.6875 | 176 | Content Listing | Software Dev. | 75.612154 |
Out to sea
From the Amazon to Antarctica, oceanographer Patricia Yager tracks climate change
Patricia Yager studies how the climate is changing the ocean, how the ocean is responding, and how that relates back to the climate.
That’s pretty big-picture stuff. Luckily, Yager, associate professor in the Department of Marine Sciences and “Tish” to her friends, is good at thinking on a big-picture level.
When it comes to climate change and the ocean, “You can’t just study one little piece,” she says. “You have to have a team. You need all the pieces.”
But Yager doesn’t have just one team. She leads one Moore Foundation-funded and three National Science Foundation-funded research projects that look at carbon cycling, microbial ecology and microbial community structure in three different coastal regions of the world: Antarctica, the Arctic and the Amazon River.
She’s principal investigator of an international team of oceanographers and river scientists that studies carbon and nutrients flowing from the Amazon River to the Atlantic Ocean; she’s also the principal investigator in the Arctic, where scientists are studying one-celled organisms in the food web, taken from ocean samples collected near Barrow, Alaska.
And, last winter, she was chief of 46 scientists in coastal Antarctica, marking the beginning of the Amundsen Sea Polynya International Research Expedition (ASPIRE), which is jointly funded by the NSF and the Swedish Research Council.
“It’s an amazing opportunity to have three projects going,” she says. “It’s difficult for me since I’m literally going to the ends of the earth.”
Regardless of the end she’s in, she’s seeing the same thing: the ocean, which has the responsibility of absorbing a third of all human CO2 output, is not working the way it used to. And if that intake stops, CO2 stays in the atmosphere.
“It looks like the process is slowing down,” Yager says. “The ability of the ocean to help us out is shrinking.”
Trees and soil also absorb CO2, and planting more trees would help, she says. But the reality is that deforestation and the ocean’s slowing down of absorption leaves humanity with a big job: adapt. Yager wants to see what’s coming, so we can get ready for it.
If you look at the ice ages, you’ll see that the changes were linked to the solar cycle—but the solar cycle alone wasn’t enough to make the ice ages. You have to have something to act like an amplifier to take a little change and make it a big change. The solar cycle, then, acted like a trigger.
Just like human increases in atmospheric CO2 might be a trigger now.
This is where the Amundsen Sea polynya (puh-lin-yuh) becomes important.
Polynya, Russian for “pool,” is an area of open ocean surrounded by ice, existing either because wind is blowing ice away from the coast or because warm air or an upwelling of warmer water melts the ice away.
“In a world of white, here’s this patch of open water,” Yager says. “It’s a biological oasis, a hotspot.”
Think of it this way: If it’s sunny outside, a white shirt will reflect light and feel cooler than a dark shirt. Polar environments are white and reflect a lot of sunlight. When a polynya opens up, there’s now this patch of blue that will absorb light, and therefore be warmer. Sea ice melts fast in summer; imagine that polynya getting bigger, making a larger patch of warm, light-absorbing blue in the middle of all that light-reflecting white. Things begin changing, getting warmer. Even a small patch of melting ice leads to warmer temperatures and more melting.
“It’s a runaway train,” Yager says.
This particular polynya, located near the extremely fast-melting Pine Island and Thwaites Glaciers, is unique. It’s filled with algae and krill, making it a feeding hotspot for wildlife.
“On a per area basis, (this one) is the most productive, greenest polynya,” Yager says. “We don’t really know why. There’s something interesting happening here.” In fact it was much greener, Yager says, than anywhere the team measured in the Amazon River Plume.
No one knew the polynya was so exciting until 2007, when the NSF invited Yager to join a group of American and Swedish scientists on a 42-day cruise between the southern tip of Chile and Antarctica’s McMurdo Base, which needed its shipping lanes opened by the Swedish Icebreaker Oden. The Amundsen Sea polynya was about halfway in between. The scientists didn’t know what they’d find, but it was a unique chance to investigate the rarely traveled South Pacific coast of Antarctica—that alone made it “a huge scientific opportunity,” she says. The polynya particularly piqued their interest, and they wanted to return and study it more.
So they did, in November 2008-January 2009, and again November 2010-January 2011. On their latest trip, Yager and the ASPIRE team—principal investigators, research professionals, collaborators, graduate students and other crew—boarded the U.S. Researcher Icebreaker Nathaniel B. Palmer in Punta Arenas, Chile, and headed toward the Amundsen Sea, where it was joined in late December by the Oden for a two-boat expedition. A blog (http://antarcticaspire.org) chronicled the scientists’ efforts in studying the climate-sensitive dynamics in the polynya and the sea ice ecosystem nearby, trying to understand how climate change will impact the area in general.
The blog captures the exciting moments, such as the realization that “preliminary results reveal that the phytoplankton bloom in the Amundsen Sea polynya exceeds all expectations,” Yager wrote in January. The blog also has sweet, personal moments—a Christmas Eve walkabout on an ice floe, which turned into playing football and Frisbee for hours on the frozen ice. Or the wacky team-building fun of a King Neptune party, which featured skits and songs, including an international version of Jingle Bells.
It was an experience Yager never dreamed possible when she was a child.
Growing up, Yager lived in the San Francisco Bay Area with her parents and two brothers, where the family spent a lot of time on the water, staying a week or two every summer at a beach house her grandparents rented. Yager was precocious and encouraged to explore science. By age 5, she knew she wanted to be a doctor. When she was 6, her grandfather gave her a copy of Gray’s Anatomy, which she thought was “really cool.”
When she was 13, her family moved to Princeton, N.J., which was her first introduction to an academic community. She felt at home and knew she’d attend an Ivy League school. As a high schooler in Seattle, when all of her friends planned on going to college in Washington, Yager chose Brown.
As a freshman pre-med student, she got her very first taste of that highly competitive world and realized she wasn’t all that interested in becoming a doctor. She found there was too much memorization of material required and not enough opportunity to ask “why” something was as it was.
“Christmas break involved some soul-searching,” she says.
She decided to take an oceanography class. On her first day, she had an epiphany.
Her professor, John Imbrie, was famous for discovering what triggered the timing of the ice ages. He told his students, “Every once in a while, one person in this class discovers they like it well enough to become an oceanographer.”
“I remember sitting there: ‘I didn’t know you could be an oceanographer,’” Yager recalls. “It was the first time I’d heard about the climate of the earth and how it affects oceans…talk about one college class changing your life!”
Yager got her B.S. from Brown’s Department of Geology in 1985, and received her M.S. and Ph.D. in 1988 and 1996 from the University of Washington’s School of Oceanography.
It took a while to bring her parents on board with her change in plans, mostly because they were unfamiliar with a career in research science.
But her parents, who live part-time in Seattle and part-time in Athens, do understand her drive, especially her father. In his 70s, he still works full time in real estate.
Yager describes her work ethic thus: “If I just put in one more hour!”
Of course, being a parent makes that work ethic a little complicated.
She met her husband, Steven Holland, a paleontologist and stratigrapher in UGA’s Department of Geology, when both were graduate students at the same marine lab in Washington state. Today, they have two sons, Zach, 9, and Alex, 7. Being away from her family while at sea is a constant struggle for Yager.
She spent the 1990s going to sea nearly every year for at least a month at a time. When her son Alex was 1, she went on one weeklong cruise, but otherwise, she took a seafaring break when her boys were very young.
The initial “cruise of opportunity” to the Amundsen Sea came when Alex was 3. It was a prospect both thrilling and unsettling. Going meant that she’d have to find her sea legs after being land bound for a time. It also meant that she’d leave on Thanksgiving Day, miss Christmas with her family, and return in January.
“It was one of the hardest things I’ve ever done,” she says. But the family has found a way to make it work. When Yager goes to sea, her husband runs a smooth ship at home and remains extremely supportive of his wife’s career, she says. Her sons’ classmates always know where she is, and their teachers at Athens Montessori School have welcomed Yager as a guest to talk about her work.
“I think there is an element of ‘your mom does really cool stuff,’” says Yager. “It doesn’t make up for the fact that Mom is gone. It’s agonizing sometimes. On the other hand, if I were a man, no one would even think about this. And the fact that Steve is an amazing father gets lost a bit.”
Riding one of the world’s most powerful ice-breakers is not a typical sea cruise.
The Oden is broad, shaped like a whale. Rather than cut through ice with a pointy nose, it surges up and whomps down on the ice to break it.
“It does shake, rattle and roll,” Yager says.
The U.S. Icebreaker N.B. Palmer is better designed for the open-water work the team performed in the polynya. “It holds steady like a rock, even in the strong winds we get in Antarctica.”
The crew have watches, with normal sleeping hours, but the scientists work around the clock. The sun never sets in Antarctica in December so there’s no natural rhythm to the days.
“You work, work, work like crazy,” Yager says. “Then you crash for a few hours. And you get up… it’s a lot like having babies. Some people can handle it. Some can’t. They get punch drunk.”
In the Amazon, at least there’s the night—pitch-black sky filled with stars, phosphorescent fish and squid swimming alongside the ship—to offer relief. Antarctica offers relentless daytime. But sailing to Antarctica offers glimpses of seabirds like albatross, fulmars, petrels and prions; crabeater, Weddell, and leopard seals; and emperor, chinstrap, and Adélie penguins. And then there’s all that ice, glowing white against a backdrop of bright blue.
“There are very precious moments of beauty that keep you going,” Yager says.
She also keeps going because her work directly affects the fate of humanity on a changing planet. In the scientific community, there is no debate on whether climate change is real.
“We’ve known the basic science behind climate change for 115 years; even politically disinclined people (acknowledged) it,” she says. “It was not a partisan issue until very recently.”
She has witnessed climate change.
“I’ve been in these places 20 years or more and they’re changing,” she says. “We know what the natural cycle is, and it’s not enough. It doesn’t explain it… If we don’t understand it, we’ll be victims of it. We have to understand what’s coming. We have to start planning for adaptation and mitigation.”
To that end—as if there’s not enough on her plate—she’s the director of the Georgia Initiative for Climate and Society, sponsored by the UGA Office of the Vice President for Research.
“Climate connects us all,” she says. “We have to walk through it every day.”
—Mary Jessica Hammes is a freelance writer living in Athens.
To learn more about Yager’s research, go to http://alpha.marsci.uga.edu/directory/pyager.htm. | <urn:uuid:a17bebd1-9ba4-4c27-aecd-77ba076ddae4> | 2.734375 | 3,033 | Nonfiction Writing | Science & Tech. | 60.237239 |
Perhaps one of the most useful yet taken-for-granted accomplishments of recent centuries is the development of electric circuits. The flow of charge through wires allows us to cook our food, light our homes, air-condition our work and living spaces, entertain us with movies and music, and even drive to work or school safely. In this unit of The Physics Classroom, we will explore the reasons why charge flows through the wires of electric circuits and the variables that affect the rate at which it flows. The means by which moving charge delivers electrical energy to appliances in order to operate them will be discussed in detail.
One of the fundamental principles that must be understood in order to grasp electric circuits is how an electric field can influence charge within a circuit as it moves from one location to another. The concept of the electric field was first introduced in the unit on Static Electricity. In that unit, electric force was described as a non-contact force. A charged balloon can have an attractive effect upon an oppositely charged balloon even when they are not in contact. The electric force acts over the distance separating the two objects. Electric force is an action-at-a-distance force.
Action-at-a-distance forces are sometimes referred to as field forces. The concept of a field force is utilized by scientists to explain this rather unusual force phenomenon that occurs in the absence of physical contact. The space surrounding a charged object is affected by the presence of the charge; an electric field is established in that space. A charged object creates an electric field - an alteration of the space or field in the region that surrounds it. Other charges in that field would feel the unusual alteration of the space. Whether a charged object enters that space or not, the electric field exists. Space is altered by the presence of a charged object; other objects in that space experience the strange and mysterious qualities of the space. As another charged object enters the space and moves deeper and deeper into the field, the effect of the field becomes more and more noticeable.
Electric field is a vector quantity whose direction is defined as the direction that a positive test charge would be pushed when placed in the field. Thus, the electric field direction about a positive source charge is always directed away from the positive source. And the electric field direction about a negative source charge is always directed toward the negative source. | <urn:uuid:63d86883-187a-48e1-a98f-7e4f4e0104e2> | 3.859375 | 481 | Truncated | Science & Tech. | 39.863333 |
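The passage gives only the field's direction; for a point source charge, the field's magnitude follows the standard inverse-square form (textbook physics, not stated in the passage above, added here for reference):

```latex
E = k\,\frac{|Q|}{r^2}, \qquad k \approx 8.99 \times 10^{9}\ \mathrm{N\,m^{2}/C^{2}}
```

so the effect of the field grows rapidly as a test charge "moves deeper and deeper into the field," consistent with the description above.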
A crater doublet formed by the simultaneous impact of the two fragments of a split projectile, or of two mutually orbiting impactors, is seen in this DAWN FC frame. The area outside the larger crater in which the crater doublet formed features a high density of smaller craters, locally in clusters or chains. The area shown in this image is located on the floor of Vesta’s large south-polar impact structure Rhea Silvia.
The image was taken from a spacecraft altitude of approximately 270 km in Vesta’s Low Altitude Mapping Orbit (LAMO) phase on Dec. 20, 2011. Image resolution is ~25 m/pixel. The image center is located at lat. ~75° S, long. 108° E.
The Dawn mission to Vesta and Ceres is managed by NASA’s Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, for NASA’s Science Mission Directorate, Washington D.C. UCLA is responsible for overall Dawn mission science. The Dawn framing cameras have been developed and built under the leadership of the Max Planck Institute for Solar System Research, Katlenburg-Lindau, Germany, with significant contributions by DLR German Aerospace Center, Institute of Planetary Research, Berlin, and in coordination with the Institute of Computer and Communication Network Engineering, Braunschweig. The Framing Camera project is funded by the Max Planck Society, DLR, and NASA/JPL.
More information about Dawn is online at http://dawn.jpl.nasa.gov.
Image credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA | <urn:uuid:8b556341-2898-4725-85ff-bda218afa770> | 3.359375 | 347 | Knowledge Article | Science & Tech. | 47.799882 |
The top three pictures show a microburst in action. Dust and dirt caught in the air make the path of the wind visible. The bottom picture shows tree damage from the 70-90 mph (112-145kph) straight line winds of a microburst. This microburst was part of a severe thunderstorm that went through Lawrence, KS on March 12, 2006.
Courtesy of NOAA Photo Library, NOAA Central Library, National Weather Service Forecast Office of Topeka, KS/ KHP
Type of Wind: Microburst
Microbursts are dangerous winds created by thunderstorms. A microburst is a downdraft that hits the ground and spreads horizontally in a burst of wind. The strong downdraft that causes a microburst is formed by cooling, which is driven by evaporation within a cloud. Once the strong downdraft has formed, it pushes the cool air down and out of the cloud, toward a balance with the warmer surrounding air.
A microburst produces straight-line winds. These winds can exceed 104 mph (167 kph) and reach as much as 168 mph (270 kph); the wind speeds can equal those of small tornadoes. The difference between a microburst and a tornado is that the wind from a microburst is pushed out of the storm, while the wind from a tornado flows into the storm. A microburst typically lasts about 5-15 minutes.
Areas affected by microbursts are 2.5 miles (4 km) or less. If the area affected is larger, then it is called a macroburst. The damage from a microburst can look similar to that of a tornado. Damage from a microburst includes blown down trees and heavy damage to poorly built structures. Ships can be damaged too if a microburst happens over water.
Microbursts are a major cause for airline accidents. An airplane affected by a microburst may have a loss in airspeed, loss of altitude, and major acceleration toward the ground. On August 2, 1985, a tragic plane accident occurred when a plane encountered a microburst at the Dallas-Ft. Worth Airport in Texas. This microburst had a wind speed of 80 mph (129 kph). Airports currently use Doppler radar and LLWSAS (Low Level Wind Shear Alert System) to spot microbursts and wind shears associated with microbursts. A wind shear is a sudden change in the wind speed and direction.
From Science Daily:
I first realized the magnitude of the Himalayas' melting glacier problem from Al Gore's "An Inconvenient Truth." In the documentary, Gore explained that Himalayan glaciers are the source of the the most important rivers in Asia, and thus billions of people.
ScienceDaily (Feb. 22, 2009) — Glaciers that serve as water sources to one of the most ecologically diverse alpine communities on earth are melting at an alarming rate, according to a recent report.
A three-year study, to be used by the China Geological Survey Institute, shows that glaciers in the Yangtze source area, central to the Qinghai-Tibet plateau in south-western China, have receded 196 square kilometres over the past 40 years.
Glaciers at the headwaters of the Yangtze, China's longest river, now cover 1,051 square kilometres compared to 1,247 square kilometres in 1971, a loss of nearly a billion cubic metres of water, while the tongue of the Yuzhu glacier, the highest in the Kunlun Mountains, fell by 1,500 metres over the same period.
Melting glacier water will replenish rivers in the short term, but as the resource diminishes drought will dominate the river reaches in the long term. Several major rivers including the Yangtze, Mekong and Indus begin their journeys to the sea from the Tibetan Plateau Steppe, one of the largest land-based wilderness areas left in the world.
The melting of the glaciers is going to feed the rivers with more water in the short term, but dry them out in the long term. Because the ice is melting quickly, the next twenty years or so should see raging rivers flowing down from these glaciers. But because the glaciers are losing so much water, they are going to be gone before too long.
As I've talked about before on my blog, North China is drying up. Deserts are spreading, water tables are receding, rivers are polluted and dying, and rain is sporadic. Add in this new threat that the remaining rivers will dry up in a few decades, and the outlook for North China's sustainability becomes very grave.
One can only hope that it is not too late to reverse things and that societies across the globe will begin seriously addressing the toll unrestrained CO2 emission is having on the environment.
The realist in me says that it probably is too late to reverse course on these melting glaciers, though, and that humanity isn't ready to put curbing CO2 emissions at the top of the list of problems we need to solve.
The "goto" statement comes straight out of ASM or any other assembler language.
Here's a link: http://be2.php.net/manual/en/control-structures.goto.php
I'm wondering: what can this do to make my code more well-organized? How can I implement this in larger projects, without screwing it up. Since the goto will allow you to jump back and forth, accidental assignments and infinite loops are waiting to happen if you use this the wrong way.
Can someone give me an example of a good use of this?
EDIT: alright, I've seen some of the replies, and apparently a wide consensus exists that using the goto statement is bad practice.
So I'm still wondering: why would PHP bother to add it to the language. If they didn't see something in it, they wouldn't do it... so why?
EDIT2: Seeing as this question induced a lot of bad things to be said about the goto statement, I went and asked my father. He's 52 years old and is an industrial engineer.
He told me he did a good amount of programming in his day, mostly in FORTRAN and COBOL. Nowadays he does IT services, server and network management and such.
Anyways, he said some stuff about "back in my day..."
After discussing that a bit, he came back to the goto statement, saying that even back in his days as a student, they already knew it wasn't a smart idea to use it, but they didn't have much better back then. Try/catch was still years away and error handling hardly existed.
So what did you do to check your program? You added a few lines at the end to print the output and everything you needed to check in your code, and then you placed a line like "goto printing;" to jump to the printing of your data.
And in this manner, you gradually debugged your code.
He agrees that goto is pretty useless in the modern programming world. The only use he finds justified is an "emergency break", to be used in extreme debugging and unexpected situations. Kinda like
goto fatal_error;, and have the "fatal_error" part of your code do some things to show you in-depth results.
But only during the creation of something. A finished product should not have goto-statements. | <urn:uuid:d4e5ebf1-c562-4da3-a232-5ca005a2accc> | 3.0625 | 526 | Q&A Forum | Software Dev. | 69.623749 |
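For reference, the one use most discussions grudgingly accept is in the spirit of that "emergency break": a forward jump out of nested loops. A minimal sketch (the function name is mine; note that PHP's `break 2;` expresses the same escape without a label and is usually preferred):

```php
<?php
// Search a 2-D grid for a value; on a hit, jump forward past both loops.
// Note: goto cannot jump *into* a loop or switch, but jumping out is allowed.
function find_target(array $grid, int $target): ?array
{
    $found = null;
    foreach ($grid as $i => $row) {
        foreach ($row as $j => $value) {
            if ($value === $target) {
                $found = [$i, $j];
                goto done;   // the "emergency exit"
            }
        }
    }
    done:
    return $found;  // [row, column] of the match, or null if absent
}
```

Here `find_target([[1, 2], [3, 4]], 3)` returns `[1, 0]`; with no match, execution falls through the label naturally and returns null.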
Life is biology is species: But how many species live on Earth? About six million arthropod species (insects, spiders and crustaceans) alone, says a new study.
We love accurate weather forecasts, but the weather satellites they rely on are nearing the boneyard. Some replacements have crashed into the ocean, others are in financial limbo. Be very worried about our fragile planet: these satellites also track climate, ice, fire, and the health of forests and ocean!
Compared to regular airplanes, radio-controlled craft are safer, cheaper, and easier to use for observing wildlife and environmental conditions. Where are these robots being used? What are they finding? And as prices continue to fall, what stands in the way of much broader use?
Neutrinos are odd: Extremely difficult to see, they travel through mass with scarcely a trace. A 1-billion ton detector in South Pole ice is now counting neutrinos, intent on understanding their origin and role in the universe, and even spotting echoes of the Big Bang.
A crash course in “sink or swim” teaches computerized robots to adapt to changing circumstances. When taught by “directed evolution,” robots that started without legs learned to walk sooner than robots that started with legs! Can you explain?
Can pigeons learn an abstract mathematical rule? Apparently, according to a new study, which asked pigeons to place sets of items, such as five blue dots and eight green squares, in ascending order. Now we know birds and primates can both do this, but where and why did this ability originate?
Seismic study shows crust thinning as continent divides, giving another view of our restless planet, showing tectonic movement in action, and highlighting a major real-estate investment opportunity.
A new report on the ancient universe shows that most galaxies – even all of them – had a black hole at the center, much like modern galaxies. We can understand why a black hole would need to be surrounded by millions of stars, but why should galaxies require black holes?
Fish contamination was rare after the giant oil spill in the Gulf of Mexico in 2010, with levels of dangerous hydrocarbons well below “levels of concern.” But nobody looked systematically at heavy metals, the Gulf still has a lot of oil, and the many different hydrocarbons may have unpredictable impacts.
A federal court has thrown the field of embryonic stem cell research into confusion. Last week, research that destroys embryos could not get federal bucks — even if those embryos were doomed or destroyed years ago. This week, it can. How is the legal yo-yo affecting researchers — and desperate patients?
It’s one of the biggest puzzles of paleontology: Why did North America’s large mammals go extinct shortly after the glaciers melted about 15,000 years ago? A new study suggests that hunters get the credit — or blame.
Underground nuclear tests have been the biggest roadblock to a comprehensive test ban. How are these explosions detected, and how reliably?
The feds put out a massive report on American birds, and the #1 source of data is – amateurs! What is the role of amateurs in ornithology? Hint: if you want to survey 800 species on 3.5 million square miles…
Pres. Obama has removed some limits on studies of cells that can become any body cell. What was lost in eight years of limits on embryonic stem cells? What’s ahead?
Biology operates on the nanometer scale, and now ultra-small technology is producing monster benefits for genetic analysis, cell biologists, and the treatment of blinding glaucoma. | <urn:uuid:c5201519-96fe-4ef8-976d-570c2cbcde27> | 3.109375 | 746 | Content Listing | Science & Tech. | 47.522895 |
The Development of Galois Theory
The first mathematician who published a commentary on Galois' two main papers was Enrico Betti (1823-1892). In his 1852 publication Sulla risoluzione delle equazioni algebriche, he presented Galois' ideas in a more accessible manner, making a few additions of his own. His main contribution was to fill in gaps in certain proofs by Galois, Abel and Ruffini but he also generalised some of Galois' group theoretical results.
Like his contemporaries, Betti certainly overemphasised the application of Galois Theory to the solvability of equations, and he seemed slightly confused over some group theoretical aspects. For example, he does not start his proofs by defining a group G, but instead normalises in some unspecified universe which leads to a confusion of the group with its quotient groups. However, he achieves some interesting results and advances the understanding of group theory, for example by extending conjugation from elements to subgroups. Furthermore, Betti's paper remained the only discussion of Galois Theory until the publication of Jordan's paper in 1870, and it would have been more widely used had it not been published in Italian. The main mathematical language of the time was French and so the next significant step in the development of Galois Theory was its publication in a French algebra textbook.
Joseph Serret's (1819-1885) Cours d'algèbre supérieure was the main algebra textbook for half a century. The third edition, published in 1866, contained the first exhibition of Galois Theory in a textbook. Serret made a few notational contributions which helped to clarify some ideas but his main significance to the development of Galois Theory was that his presentation reached a wider audience. His understanding of group theory may have been less advanced than Betti's but Serret's clarification and organisation of Galois' Mémoire was crucial and lasting.
Serret's Cours d'algèbre was quickly accepted internationally, and remained the main text on algebra for the next 40 years. It may be argued that this was not beneficial to the development of algebra, particularly in France: Serret was incapacitated by illness after 1871 but his textbook was so popular that new editions of it were published as late as 1928. The later editions did not contain any results which had been published after 1866, and the fact that Cours d'algèbre remained the main algebra reference in France nevertheless greatly slowed down French research progress in algebra, including Galois Theory.
Something more positive which can be said about the Cours d'algèbre is that it inspired Camille Jordan (1838-1922) to write his Traité des substitutions et des équations algébriques, which Kiernan describes as the most important French publication on algebra in the second half of the 19th century. Jordan had made several publications on group theory in the 1860s. In the Traité, published in 1870, he compiled everything that was known on group theory at the time. Throughout this text, a group was still connected with permutations and algebraic equations. Nonetheless, the Traité was the first paper whose central object of study was the group, which is why Jordan is often seen as the first modern algebraist. Like other commentators before him, he filled in gaps in some of Galois' proofs and added some results of his own to his presentation of Galois Theory. However, he went further than Betti and Serret by truly reformulating many of Galois' results to better suit his approach, which viewed the group as central. After presenting Jordan's formulation of Galois Theory, Kiernan comments:
This is no longer just a theory of equations; these are theorems about groups, whose results are applicable to equations and their solution.
For the remainder of the 19th century, most progress in Galois Theory was due to the development of Field Theory in Germany. While Lagrange had initiated the beginnings of group theory in France, Gauss had been the main influence on mathematics in Germany. His results in number theory inspired mathematicians such as Kronecker and Dedekind to develop what is known as Field Theory today. Although Field Theory was already quite advanced before Galois Theory was closely associated with it, both Kronecker and Dedekind contributed to the development of Galois Theory. For instance, Kronecker was first to describe the Galois group not in terms of permutations on the roots of an equation, but as a group of automorphisms of the coefficient field with adjoined quantities. Dedekind's significance was largely due to the essential progress he made in Field Theory: many of his results were fundamental to Artin's later formulation of Galois Theory. Other German writers of the time such as Felix Klein, Eugen Netto and Walther von Dyck made important advances in group theory. The last significant group theoretical gap in Galois Theory was closed in 1889 when Otto Hölder (1859-1937) proved what is known as the Jordan-Hölder Theorem today.
So the end of the 19th century was an active period in the development of Galois Theory: The group theorists of the time filled in the last remaining gaps in Galois' proofs which completed the development of classical Galois Theory. The field theorists of the time developed the foundations of the last major reformulation of Galois Theory to this day which was completed by Artin. There were some important expository papers around 1900 which will be discussed shortly, but Galois Theory itself remained largely unaltered from about 1900 until the 1930s.
The first German presentation of Galois Theory was Paul Bachmann's 1881 article Über Galois' Theorie der algebraischen Gleichungen. After a German translation of Galois' works was published in 1889, several reviews appeared in the 1890s. Most important among these were the discussions by Heinrich Weber (1842-1913), namely his article Die allgemeinen Grundlagen der Galois'schen Gleichungstheorie, published in 1893, and the section on Galois Theory in Weber's 1895 textbook Lehrbuch der Algebra. Weber presented Galois Theory in terms of group theory and field theory, making very few references to equations, so that the theory could also be applied to other areas than the solvability of equations. Weber's theorems are no longer restricted to the rationals, but apply to arbitrary fields.
Many Galois Theory concepts seem very complicated in Weber's notation because he is very formal and perhaps too careful in his distinctions. However, he finally combines field theory and group theory in Galois Theory in a consistent way. Furthermore, he is the first author to re-establish the distinction between Galois Theory and its applications which Galois had in mind. The chapter on Galois Theory in Weber's algebra textbook makes no reference to the solvability of groups or the solvability of equations by radicals. These topics are dealt with in the following chapter, Application of permutation groups to equations, although even here, Weber seems more interested in the properties of specific groups than in the application of these properties to the solvability of equations.
Weber's article and textbook are the first modern discussions of Galois Theory. They are presented as the study of field extensions and their automorphism groups. Even when Galois Theory is applied to the solvability of equations, the nature of the solution is the object of study. The aim is no longer to find a procedure for determining the solutions of a given equation. Hence Weber's treatment was well ahead of his time. The first mathematician after Weber to deal with Galois Theory at such an advanced level was Emil Artin 40 years later.
Weber's publications were not the only presentations of Galois Theory around 1900. For instance, the first English expositions of Galois Theory were Oskar Bolza's 1891 article On the Theory of Substitution-Groups and its Application to Algebraic Equations and the 1892 translation of Netto's Substitutionentheorie. In these papers, we can detect some interest in the structure of the Galois group but they do not go far beyond Betti's presentation of Galois Theory. An even less advanced, but very popular exposition of Galois Theory was James Pierpont's Galois' Theory of Algebraic Equations, published in 1899/1900. While his approach was existential, his main concern was the constructability of the group of an equation and the problems arising from this, and his methods were highly computational. To give another example of the variety of approaches to Galois Theory around 1900, Henri Vogt's approach was even more computational than Pierpont's. Where even Pierpont merely proves that construction of the Galois group is possible, Vogt actually constructs it. His 1895 paper Leçons sur la résolution algébrique des équations was essentially a manual for solving equations which had very little to do with Galois Theory as we understand it today.
Hölder's article Galois'sche Theorie mit Anwendungen in the 1898 Enzyklopädie der Mathematischen Wissenschaften was perhaps more representative of the general understanding of Galois Theory around 1900. Hölder does not present Weber's major innovations but he follows Weber in separating the theory from its applications. Such was the understanding of Galois Theory among advanced mathematicians around 1900. By the early 20th century, Galois Theory was considered a finished subject and active mathematical research moved on to other areas.
Yet the last major step towards today's understanding of Galois Theory was achieved in two sets of lecture notes by Emil Artin (1898-1962): Foundations of Galois Theory, published in 1938, and Galois Theory, published in 1942. According to Artin, Galois Theory studies how field extensions are related to their automorphism groups. He wanted to present the theory independently of its application to the solution of equations. While field theory had been used before to simplify the presentation of Galois Theory, the structure of fields had been largely ignored. Artin abandoned the approach of building a sequence of field extensions by adjoining successive resolvents to the coefficient field. Instead, his starting point was to consider the splitting field of the equation (which is the smallest field containing the roots and coefficients). The existence of the splitting field was guaranteed by a result of Kronecker; Artin did not investigate the question of how it could be constructed in practice. Thus the historic application of Galois Theory was finally reduced to the question of whether there existed an extension of the coefficient field containing the splitting field. Previous mathematicians, such as Bartel Leendert van der Waerden in Moderne Algebra (1930), had incorporated modern ideas from Linear Algebra and Field Theory into Galois Theory, but Artin was first to make use of the crucial idea of the splitting field in this way. His approach was heavily influenced by Dedekind, Kronecker and Weber, but Artin was able to comprehend how it all fitted together and gave a very succinct, precise and modern presentation of Galois Theory.
Artin gathers together all important conclusions in one Fundamental Theorem of Galois Theory:
If p(x) is a separable polynomial over a field F, and G the group of the equation p(x) = 0 where E is the splitting field of p(x), then:
- Each intermediate field B is the fixed field for a subgroup GB of G, and distinct subgroups have distinct fixed fields.
- The intermediate field B is a normal extension of F if and only if the subgroup GB is a normal subgroup of G. In this case the group of automorphisms of B which leave F fixed is isomorphic to the quotient group G/GB.
- For each intermediate field B, [B:F] is equal to the index of GB and [E:B] is equal to the order of GB.
This Fundamental Theorem makes no reference to substitutions of roots; it talks about field extensions and their automorphism groups. A polynomial is mentioned only to produce a splitting field in relation to the ground field -- Galois Theory is no longer about polynomials. Today's formulations of the Fundamental Theorem of Galois Theory are equivalent to Artin's; their aim is to reveal the parallel structure of the extension field and its automorphism group.
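As a concrete illustration of this correspondence (an added textbook-standard example, not part of the original discussion), consider the polynomial p(x) = x^3 - 2 over the rationals:

```latex
% Splitting field and Galois group of x^3 - 2 over Q (standard example)
E = \mathbb{Q}\!\left(\sqrt[3]{2},\, \omega\right), \qquad \omega = e^{2\pi i/3},
\qquad [E : \mathbb{Q}] = 6, \qquad G = \mathrm{Gal}(E/\mathbb{Q}) \cong S_3.
% The normal subgroup A_3 of index 2 corresponds, under the Fundamental
% Theorem, to the intermediate field B = Q(omega), a normal extension of Q:
A_3 \trianglelefteq S_3 \;\longleftrightarrow\; B = \mathbb{Q}(\omega), \qquad
[B : \mathbb{Q}] = [S_3 : A_3] = 2, \qquad
\mathrm{Gal}(B/\mathbb{Q}) \cong S_3 / A_3 \cong \mathbb{Z}/2\mathbb{Z}.
```

Each of Artin's three clauses can be read off directly: the subgroup A_3 fixes exactly Q(ω), the normality of A_3 in S_3 matches the normality of Q(ω) over Q, and the degrees match the index and order of the subgroup.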
Artin followed Weber in making a clear distinction between theory and application. Solvable groups and the idea of solvability by radicals appear only in an appendix to his text, and only for historic reasons. In the 1930s, mathematicians were certainly much more receptive to abstract algebra than Weber's audience had been, but Artin's crucial innovation was the way in which he combined results by Dedekind, Kronecker, Weber and others in order to present a new conception of Galois Theory. In short, Artin unified and completed all the earlier approaches in a formulation of Galois Theory which we still use today. | <urn:uuid:0f96a95c-0f5b-455c-944e-f4208f650150> | 2.921875 | 2,731 | Knowledge Article | Science & Tech. | 36.756867 |
In this cool science experiment, you’ll show how to make raisins "swim" up and down in a jar, using nothing more than a few things that you commonly find in a kitchen.
What makes phosphorescence last longer?
An ultraviolet light source (as opposed to other forms of light source) will make the glow of phosphorescent material last longer.
Effectiveness of garlic in fighting bacteria
This experiment was done to find out if garlic is effective in killing bacteria. This will help us understand the effectiveness of home remedies such as the use of natural herbs (including garlic) for medicinal purposes.
Exposure of Baby food and the degree of contamination
This experiment was done to find out if leaving baby food out of the refrigerator after it has been opened will result in bacterial contamination.
Effect of Bird's Eye chili (i.e. Capsicum frutescens) on Gryllus assimilis (the common black cricket)
This science project was conducted to determine if Bird's Eye chili can be used as a form of deterrent against Gryllus assimilis (the common black cricket). The experiment was done by spraying various concentrations of Bird's Eye chili extract on crickets.
This science fair project was done to investigate the effect of increasing DC voltage and the concentration of electrolyte salt on the rate of production of hydrogen gas during the process of electrolysis.
The importance of ammonia in the formation of salt crystals
This science fair project was done to understand the importance of ammonia in the formation of salt crystals. Tests were done to compare the resultant formation of salt crystals when ammonia was not used, compared to when different amounts of ammonia were used.
Levels of carbohydrates in different varieties of milk
This science fair project was done to find out the amounts of lactose carbohydrates contained in various types of milk. The tests were done using low fat milk, powdered milk and soy milk.
The effect of salt and sugar on the freezing point of water
This science fair project was done to find out how the freezing point of water is affected by the presence of sugar or salt. The experiment was conducted using salt, sugar and water.
Effect of carbonated drinks on meat
This science fair project was performed to find out if the acidity in Coca Cola can dissolve meat. The testing was done by placing pieces of steak, chicken breast and salmon into bowls filled with Coca Cola, and observing the meat for 5 days. | <urn:uuid:7e81ddea-37e6-447d-b1a7-10322d10aa3a> | 3.328125 | 502 | Content Listing | Science & Tech. | 41.988158 |
Why is this moon shaped like a smooth egg?
The robotic Cassini spacecraft completed the first flyby ever of Saturn's small moon Methone in May and discovered that the moon has no obvious craters. Craters, usually caused by impacts, have been seen on every moon, asteroid, and comet nucleus ever imaged in detail -- until now. Even the Earth and Titan have craters. The smoothness and egg-like shape of the 3-kilometer diameter moon might be caused by its surface being able to shift -- something that might occur were the moon coated by a deep pile of sub-visual rubble. If so, the most similar objects in our Solar System would include Saturn's moons Telesto, Pandora, and Calypso, as well as asteroid Itokawa, all of which show sections that are unusually smooth. Methone is not entirely featureless, though, as some surface sections appear darker than others. Although flybys of Methone are difficult, interest in the nature and history of this unusual moon is sure to continue.
Image credit: Cassini Imaging Team.
If a constant current has to be driven through a resistor it needs a certain voltage or potential difference across the outlets of the resistor.
If the voltage is changed, it is plausible to expect a corresponding change of the current. What is not evident is whether this relation is linear, i.e. whether the current changes in proportion to the voltage.
Under normal conditions a strictly proportional relation is rather rare. If the current changes, normally the temperature of the resistor, and therefore its resistance, changes as well. This implies a non-linear relation between voltage and current. If, however, the temperature of the resistor and all other properties (length, cross section) remain constant, it has been experimentally shown that for metallic and for most solid-state conductors there exists a strictly proportional relation between voltage and current.
Conclusion: If a constant current I is driven by a voltage V through a resistor with resistance R and if all external parameters remain constant, we have: V/I = constant. This relation was first discovered by the physicist Ohm and is called Ohm's law.
By convention this constant, which is characteristic of the specific resistor, is used as the definition of the resistance R.
R = V/I. The unit of resistance is the Ohm, abbreviated Ω, in honour of the German physicist Georg Simon Ohm (1789-1854).
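The definition R = V/I can be turned into a trivial calculation. The following Python sketch (an added illustration, not part of the original text) computes any one of the three quantities from the other two:

```python
def resistance(voltage, current):
    """Ohm's law: R = V / I, in ohms, given volts and amperes."""
    return voltage / current

def current(voltage, resistance):
    """Rearranged form: I = V / R."""
    return voltage / resistance

# A 12 V supply driving a constant 0.5 A corresponds to a 24-ohm resistor.
print(resistance(12.0, 0.5))  # 24.0
```

Remember that this only holds while the resistor's temperature and geometry stay constant, as stated above.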
The combined effects of climate change and fishing are complicated by complex trophic interactions (see e.g. Hjermann et al. 2007). However, based on the present situation, known trophic interactions and ecosystem effects of fishing and climate change, some possible scenarios can be outlined; these are discussed below. Although a continued warming is likely in the longer term, short term cooling might occur due to natural fluctuations, and medium and long-term prospects may therefore differ.
Irrespective of temperature development, we have some knowledge about the short-term (<5 years) development, based on the present stock size and age composition of the main fish stocks. The cod stock will stay at a stable, but high, level in the coming years; the recent growth in stock size is not likely to continue as the incoming year classes (2006-2008) are below average. The large capelin stock together with a reasonable amount of other prey should ensure enough food for the large cod stock in the coming years. The haddock stock is at a historic high level, but will probably decrease from 2010 onwards due to reduced recruitment. It is unknown whether haddock, which mainly feed on benthic organisms, will be food-limited at such high stock sizes. There are no strong year classes of herring in the Barents Sea at present, and we do not know when the next strong year class will occur. Several researchers support the view that high herring abundance in the Barents Sea seems to be a necessary but not sufficient condition for a capelin collapse, whereas others suggest that a multitude of factors are involved, including climatic fluctuations, predation from fish and marine mammals and fisheries. Based on the view that predation from herring is an important factor, and taking into account the lag between the occurrence of a strong herring year class and a capelin collapse, a capelin collapse is not likely to happen before 2012.
A large spawning stock, low harvesting and continued warming is favourable for NSS herring, and the stock can therefore be expected to increase further in the future. A large herring stock has a strong impact on the marine environment. NSS herring consume a considerable part of the copepod production in the Norwegian Sea (Dommasnes et al. 2004). An increasing stock is therefore likely to reduce the biomass of copepods, with possible consequences for the biomass of zooplankton that is transported into the Barents Sea. Herring is also an important predator on eggs and larvae of several fish species (Gjøsæter and Bogstad 1998, Godiksen et al. 2006, Segers et al. 2007, Huse et al. 2008). In the Barents Sea, a large herring stock has negative consequences for the recruitment of capelin (Gjøsæter and Bogstad 1998, Hjermann et al. 2004b). Although a high abundance of juvenile herring did not prevent the current capelin “outbreak”, a continued increase in herring might be expected to have long-term effects on capelin by affecting the frequency and amplitude of the capelin fluctuations, and possibly reduce its dominating role in the ecosystem.
If alternative prey is not present, a severely reduced capelin stock will have a strong negative impact on top predators in the Barents Sea, as observed in the late 1980s (Gjøsæter et al. 2009). A low capelin stock might for example have negative impacts on a range of seabird and sea mammal species in the area (Hamre 1994, Sakshaug et al. 1994). For some species, alternative prey such as juvenile herring, polar cod and crustaceans might provide foraging alternatives, but a low stock of capelin generally means that ice-edge feeding top-trophics must travel further to access food (see e.g. Barrett and Krasnov 1996, Barrett 2002). A low capelin stock is also associated with increased cannibalism in cod (Gjøsæter et al. 2009, Yaragina et al. 2009). The adverse effect of cannibalism might be counteracted to a degree by increased cod recruitment due to increased water temperature (e.g. Ottersen and Sundby 1995), and an increased abundance of alternative prey. As long as the harvesting of cod is kept below the long-term sustainable limit, and a large herring stock does not impair cod recruitment, the NEA cod stock might continue to be relatively strong, even with capelin at low levels. Intensive fishing has, however, reduced the cod’s ability to affect the large fluctuations in the stocks of capelin and juvenile herring.
A marked increase in primary production north of the polar front is an expected consequence of continued warming (Ellingsen et al. 2008). This new production will support an increased zooplankton community and enhance benthic production. How the benthic community will respond to the increased input of organic matter, will however, depend partially on how these communities have been impacted by trawling. Capelin is the major consumer of secondary production in the Arctic Barents Sea (Orlova et al. 2002, Dalpadado et al. 2003). A reduced capelin stock might initiate a trophic cascade resulting in an increase in the zooplankton standing stock (see Dalpadado et al. 2003; Orlova et al., 2001), and possibly a subsequent decrease in the biomass of phytoplankton. Reduced consumption by capelin could be compensated for by an expansion and increase in the stock of polar cod (Orlova et al. 2009), and an increase in the abundance of omnivorous and carnivorous crustaceans such as krill and amphipods (see Dalpadado et al. 2001, 2008; Drobysheva and Yaragina, 1990). The response will, however, depend on how these species will be impacted by warming and the continued thinning of sea ice. During the recent periods of low capelin abundance, krill/amphipods and polar cod were apparently unable to compensate for the reduced consumption of zooplankton (see Dalpadado et al. 2003). Moreover, with a reduced capelin stock, less arctic production will be transported to the Norwegian and Murman coasts during capelin spawning. This might have long-term consequences for the coastal ecosystems.
Predictions for the development of the Barents Sea ecosystem on a time scale of more than 5 years are associated with large uncertainties. Although our understanding of the system has increased considerably in recent years, a number of important questions are still unresolved. Some of these are:
• How will warming impact oceanographic drivers of ecosystem function responsible for determining quality, quantity, and timing of primary production?
• How will warming affect the match/mismatch between phytoplankton, zooplankton and the spawning of major fish stocks?
• How will a large NSS herring stock affect the zooplankton community and the recruitment of cod and capelin?
• Will the capelin stock continue to fluctuate?
• How will top-predators respond to changes in the abundance of pelagic fishes?
• Will changes in the abundance of pelagic fishes cause a trophic cascade?
• How will the benthic community respond to changes in organic input, combined with fish trawling, temperature increase and invasive predatory species? | <urn:uuid:d33bb48e-cfc0-49a4-bd77-bfe3b74c76cc> | 2.796875 | 1,547 | Academic Writing | Science & Tech. | 41.597271 |
matthew_avison at email.msn.com
Thu Aug 6 06:34:07 EST 1998
Some tissues that constantly proliferate (like skin for example) do so
because the tissues contain stem cells. These cells are constantly dividing
and producing daughters which then differentiate into the main cells of the
tissue (e.g. skin cells). Such tissues usually undergo a constant cell
death at their surface (e.g. skin cells die at the surface and are shed) so
the stem cells are essential to stop the tissue wearing away, providing a
constant supply of new cells. The brain as a tissue is very different. It
is not designed to cope with constant wearing away of differentiated cells
like the skin is. Therefore there are very few neuronal stem cells to
provide new cells to replace ones that do die. Hence eventually the brain
starts falling apart. For most people, however, they are dead before this
really happens. One of the holy grails of neuroscience is to try and get
neurones to de-differentiate into stem cells, thus allowing a source of
proliferation for people who have got problems with too much neuronal death.
Watch this space.
Interestingly, a reason why neuronal derived tumours are so rare and skin
cancers relatively common is because it is a stem cell going wrong that
usually results in a tumour and so the more stem cells there are, the more
chance of getting a tumour. The most rapidly increasing cancer nowadays
(particularly in men) is Colon cancer, the surface of which is very similar
to skin in terms of its ability to regenerate.
Hope some of this helps.
Matthew B. Avison, University of Bristol, UK.
JJ Miranda wrote in message ...
>FOrgive the ignorance of this question... I'm a chemist...
>I was wondering, does anyone know the cellular or molecular explanation
>as to why brain cells don't grow back when killed? Also, I know that some
>brain cells grow back but not others... What distinguishes between these
The right answer—accretion by gravity onto supermassive black holes—was proposed shortly after Schmidt’s discovery independently by Russian astronomers Yakov Zel’dovich and Igor Novikov and Austrian American astronomer Edwin Salpeter. The combination of high luminosities and small sizes was sufficiently unpalatable to some astronomers that alternative explanations were posited that...
The following article is an example of a .NET implementation of Kruskal’s algorithm for finding the minimum spanning tree of a connected, undirected graph.
The minimum spanning tree for a graph is the set of edges that connects all nodes and has the lowest cost.
In order to be able to run this solution, you will need .NET 4.0. The example was constructed using Visual Studio 10, and WPF for the graphical representation.
In this article, I will be using a large portion of the code that I used to exemplify Dijkstra’s algorithm for finding the minimum distance path between two nodes
in a connected undirected graph. You can find this example here.
Similar to Dijkstra, Kruskal's algorithm is also characterized as "greedy." Even though the two algorithms are similar, their approaches are quite different since
their goals are simply not the same. In the case of the minimum spanning tree, there are some additional structures that we will be using, making the algorithm
a little more difficult to understand.
In addition, the minimum spanning tree is a more abstract concept and a little more difficult to compare to a practical situation.
In the case of minimum distance algorithm, we thought of the graph as a map of cities where the nodes represent the cities and the edges – the roads that connect them.
We could imagine a similar scenario for a minimum spanning tree. For example, we could imagine that we are an internet provider that is currently building their network
in a given country and that we use optic fiber to connect all the cities in it. Since optic fiber costs us money, we would like to use as little as possible. The minimum spanning
tree algorithm can help us do that.
As already mentioned, Kruskal’s minimum spanning tree is similar to Dijkstra’s shortest path in the way that both are “greedy” algorithms.
When we were looking for the shortest path, we were trying to select the best possible edge from the node with the smallest total cost incurred.
The reason why this is an efficient strategy is because it reduces the number of nodes that we need to visit in order to get to the target.
In order to find the minimum spanning tree, we need a slightly different approach. To construct the tree in an efficient manner, we need to visit all nodes by visiting
as few edges as possible. Therefore, it makes sense to collect all the edges of the graph and have them ordered so that we consider the least costly ones first.
However, that is not enough since it does not guarantee us that we will select only the edges that we need to construct the minimum spanning tree. Let’s consider
the following simple case. We have a graph with three nodes that are interconnected. We start collecting the edges to construct the minimum spanning tree.
We start with the edge between nodes 1 and 2. The next logical one would be the edge between nodes 2 and 3. At this point, there is a direct connection between
nodes 1 and 2, 2 and 3, and there is an indirect connection between 1 and 3 by going through node 2. Therefore, it would be a mistake to add the last edge to the minimum spanning tree.
Therefore, the minimum spanning tree consists only of the edges between nodes 1 and 2 and 2 and 3. We need a way to keep track of the edges that we have already
selected for the minimum spanning tree and the nodes that we have visited. In order to do that, we will use an additional structure that we will call cluster.
A cluster is simply a collection of nodes that we have already visited. As we keep adding edges to the minimum spanning tree, we need to check whether the two nodes an edge
connects already belong to the same cluster. If they do, the edge should not be included in the minimum spanning tree, even if it is the shortest edge in the list.
Having defined the role of the cluster, it becomes clear that at the start of the algorithm, we will need to have a cluster for each node.
As we keep adding edges to the minimum spanning tree, we will keep merging the clusters until we are left with what encompasses all the nodes in the graph.
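The cluster bookkeeping described above is exactly a disjoint-set (union-find) structure. The article's own implementation is in C#/WPF; as a compact language-neutral sketch (Python, with illustrative names of my choosing), the whole algorithm looks like this:

```python
def kruskal(num_nodes, edges):
    """edges: list of (cost, u, v) tuples; nodes are numbered 0..num_nodes-1.

    Returns (total_cost, chosen_edges) for the minimum spanning tree.
    """
    # One cluster per node to start with; following parents from a node
    # leads to its cluster's representative.
    parent = list(range(num_nodes))

    def find(x):
        while parent[x] != x:              # walk up to the representative
            parent[x] = parent[parent[x]]  # path halving keeps chains short
            x = parent[x]
        return x

    tree, total = [], 0
    for cost, u, v in sorted(edges):       # consider cheapest edges first
        root_u, root_v = find(u), find(v)
        if root_u != root_v:               # different clusters: keep the edge
            parent[root_u] = root_v        # merge the two clusters
            tree.append((u, v, cost))
            total += cost
    return total, tree
```

On the earlier three-node example, the third edge is rejected because both of its endpoints already share a cluster by the time it is considered, which is precisely the check described above.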
The best way to exemplify the algorithm is to view an example that is a bit more complete than the example before.
Starting with the edge with the minimum cost, we will first add the edge between 2 and 5 to the minimum spanning tree and we will first merge the clusters of nodes 2 and 5.
We will apply the same logic for nodes 1 and 6 and the edge between them. The next edge we should consider is the one between nodes 3 and 4 or between 6 and 4
since they have the same cost of 15. Let’s imagine that we take the edge between nodes 3 and 4. We find ourselves in the same situation as before.
The nodes are part of different clusters; therefore, we proceed in the same fashion. The next edge in line is the one between nodes 4 and 6.
The nodes are from different clusters: node 6 is in the same cluster as node 1, and node 4 is in the same cluster as node 3. We add the edge to the minimum
spanning tree and merge the two clusters into a single that will contain nodes 1, 6, 4, and 3. At this point, there are only two clusters left, the one we just
constructed and the one containing nodes 2 and 5. Therefore, the next edge that we select must allow us to merge these two clusters and it will be the last edge
in the minimum spanning tree. The edge with lowest cost is the one between nodes 4 and 5. This yields a minimum spanning tree with a total cost of 63.
Using the Application
In order to use the application, first the user needs to create the nodes and the edges between them. To create a node, the user needs to click on the canvas.
In order to connect two nodes, the user needs to consecutively select two nodes created on the canvas. To find the minimum spanning tree, the user needs to click
on “Find Minimum Span Tree”. The “Clear” button allows the user to clear the canvas and build a new graph. “Restart” allows the user to go back to the initial condition
of the graph before the minimum spanning tree was computed.
A small town in Canada called Churchill, with a population of 800, has been under attack from polar bears. These large carnivores have even been caught breaking into homes to find food. Villagers have been terrified of these massive, 90-stone animals for years and have now decided to stand their ground and fight back!
Intraspecific predation in polar bears is not that unusual in times when food is scarce, but the frequency of reports of cannibalism is increasing and raises the question: why?
Photo courtesy of Ashley Coates
That’s right, David is back on our screens on the 26th of this month to bring us another stunning wildlife series; Frozen Planet. Little snippets of this series have been released to the media and general public over the past few days and it looks to be just as cool as the locations it's set in! A few species have already made an appearance in wildlife news recently such as crafty killer whales and thieving penguins. To get us all in the mood for the new series, here is a mini food chain detailing why some species make good predators and why some make tasty prey!
Polar bears are one of the apex predators within this food chain. Males are very large and can reach 350 – 680 kg and 7.9 – 9.8 ft. in length, with females measuring half that length. Their large size makes it possible for them to smash into the ice dens of seals and tear into prey easily. This is assisted by shorter claws on their feet and their extremely large paws, which can measure approximately 30cm across! Their keen sense of smell also helps them when hunting prey. Polar bears are able to detect unburied seals from nearly 1 mile away and buried seals under 3 ft. of snow!
Killer whales are another apex predator that drift in and out of the icy waters surrounding Antarctica and the Arctic. They have a varied diet depending on which subspecies they are and their geographical location. Killer whales make excellent predators due to their high intelligence and ability to work as a team. Just recently, new images of killer whales working together to knock a seal off an ice floe have been released. A team of killer whales will rush towards an ice floe, causing a wave powerful enough to knock an unsuspecting seal into the mouth of another member of their pod. They work together like this in many clever hunting situations, displaying teamwork that some think is reinforced by their own ‘culture.’
Weddell seals are the preferred prey of apex predators as they are not as aggressive as crabeater and leopard seals, so injury from them is less likely. Weddell seals measure between 8.2 - 11.5 ft. long and can weigh between 400 – 600kg. They are insulated with a thick layer of blubber which not only keeps them warm but also attracts predators. Their energy-rich blubber is vital for staying alive because food is so hard to come by. The Weddell seal does have a few tricks for avoiding gaping jaws, which are also used when hunting its own prey. They can dive to depths of approximately 2,300 ft. and can hold their breath for around 80 minutes! That’s a very long time to play hide and seek!
The Frozen Planet team filmed Adélie penguins stealing stones from their neighbours’ nests to put in their own. Unfortunately for them, penguins make a tasty snack for seals and killer whales (but without the wrapper and bad joke – if you exclude that one!). Penguins may make up the bulk of a predator’s diet perhaps due to their sheer numbers, making them easier to locate. In the Ross Sea region of Antarctica, there are currently around 5 million Adélie penguins! This may make them an attractive option for many predators in such a harsh environment.
With these species featured (and I’m sure a lot more) together with the great camerawork from the BBC, I know what I will be doing on Wednesday nights!
By Haley Dolton
February 28, 2007
GCRIO Program Overview
Our extensive collection of documents.
Archives of the
Global Climate Change Digest
A Guide to Information on Greenhouse Gases and Ozone Depletion
Published July 1988 through June 1999
FROM VOLUME 6, NUMBER 11, NOVEMBER 1993
PALEOCLIMATOLOGY: BOREHOLE TEMPERATURE ANALYSES
Special issue: Global
and Planetary Change, 6(2-4), Dec. 1992 (Elsevier), contains 18
papers on inferring climatic change from underground temperatures, many of which
were presented at an IASPEI meeting (Vienna, 1991). The first five concern
general application of the technique, while the rest apply to specific
Measurements in Borehole GC-1, Northwestern Utah: Towards Isolating a Climate
Change Signal in Borehole Temperature Profiles," D.S. Chapman (Dept. Geol.,
Univ. Utah, Salt Lake City UT 84112), R.N. Harris, Geophys. Res. Lett.,
20(18), 1891-1894, Sep. 15, 1993.
Repeating measurements (1978, 1990, 1992) in the same borehole is one method
of isolating climatic change from other factors that can influence local ground
temperatures.
Inferred from Borehole Temperatures," H.N. Pollack (Dept. Geol. Sci., Univ.
Michigan, Ann Arbor MI 48109), Global Plan. Change, 7(1-3),
173-179, May 1993.
Reviews the application of this technique, which has the potential to extend
direct observations of temperature well into the pre-industrial era,
particularly in North America. (Part of the special issue on Quaternary Earth
system changes listed below.)
Guide to Publishers
Index of Abbreviations
I really don’t understand the “CO2 causing stratospheric cooling” thing.
I mean I get the basic idea that CO2 mostly heats up the troposphere, and as a result there is less IR to go into the stratosphere and above, causing those layers to cool. But some of the details don’t make sense. From this website (the only source that RC gives to explain it):
...carbon dioxide emits heat radiation, which is lost from the stratosphere into space. In the stratosphere, this emission of heat becomes larger than the energy received from below by absorption and, as a result, there is a net energy loss from the stratosphere and a resulting cooling.
Can someone explain how the bolded part doesn't violate the laws of physics?
And this claim does agree with another, equally confusing claim I’ve heard a few times, that is (if I understood it right) something to the effect of: “CO2 in the upper atmosphere acts as a radiator for heat, accelerating the heat loss to space and thus causing cooling.”
Wouldn’t the fact that the direction of emission is random (360 degrees) mean that CO2 is just as effective at warming the stratosphere as it is the troposphere? (The fact that concentrations are lower and there is less IR aside...how does what is left not do the same thing as what happens in the troposphere?)
Can anyone tell me where I'm confusing myself on this one?
Edited by dawei - 4/2/2009 at 05:05 am
Every Java object inherits a set of base methods from
java.lang.Object that every client can use:
Each of these methods has a sensible default behavior that can be overridden in subclasses (except for
final methods, marked above with
F). This article discusses overriding the
equals() and hashCode() methods for data objects.
The purpose of the
equals() method is to determine whether the argument is equal to the current instance. This method is used in virtually all of the
java.util collections classes, and many other low-level libraries (such as RMI (Remote Method Invocation), JDBC (Java Database Connectivity),
etc.) implicitly depend on its correct behavior. The method should return
true if the two objects can be considered equal and
false otherwise. Of course, what data is considered equal is up to each individual class to define.
Since computing full object equality can be a time-consuming task, Java also provides a quick way of ruling out
equality, using
hashCode(). This returns a small number based on the object's internal data structure; if two objects have different hash codes, then
they cannot be equal to each other. (Think of it like searching for two words in a dictionary; if they both begin with "A"
then they may be equal; however, if one begins with "A" and the other begins with "B" then they cannot be equal.)
The purpose of computing a hash code is that the hash should be quicker to calculate and compare than computing full object
equality. Datastructures such as the
HashMap implicitly use the hash code to avoid computing equality of objects where possible. One of the reasons why a
HashMap looks up data faster than a
List is because the list has to search the entire datastructure for a match, whereas the
HashMap only searches those that have the same hash value.
Importantly, it is an error for a class to have an
equals() method without overriding the default
hashCode() method. In an inheritance hierarchy, only the top class needs to provide a
hashCode() method. This is discussed further below.
The equals() method's signature must be:
public boolean equals(Object)
Note: Regardless of which class contains the
equals() method, the signature must always declare an
Object argument type. Since Java's libraries look for one with an
Object argument, if the signature is not correct, then the
java.lang.Object method will be called instead, leading to incorrect behavior.
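As an illustration of the contract described above, here is a minimal sketch of a data class that overrides both methods. The Point class and its fields are invented for this example, not taken from the article.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

class Point {
    private final int x;
    private final int y;

    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {            // note the Object parameter type
        if (this == o) return true;              // same instance: trivially equal
        if (!(o instanceof Point)) return false; // also rejects null
        Point p = (Point) o;
        return x == p.x && y == p.y;             // compare the significant fields
    }

    @Override
    public int hashCode() {
        return Objects.hash(x, y);               // equal objects -> equal hash codes
    }

    public static void main(String[] args) {
        Map<Point, String> labels = new HashMap<>();
        labels.put(new Point(1, 2), "home");
        // The lookup works only because equals() and hashCode() agree.
        System.out.println(labels.get(new Point(1, 2)));  // prints "home"
    }
}
```

If only equals() were overridden, the HashMap lookup above would usually fail, because the two Point instances would land in different hash buckets.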
Need Help!! Describe how to make 2.5L of 0.1M HCL solution from 12M HCL concentrated solution and water. I got help earlier but still could not figure out the calculations. Please help, it has been a while since i took chemistry. Thank you soo much
Describe how to make 2.5L of 0.1M HCL solution from 12M HCL concentrated solution and water. Thank you
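This is a standard dilution problem, C1·V1 = C2·V2, solved for the volume of concentrated acid V1. A quick calculation (not part of the original thread):

```python
c1 = 12.0    # M, concentrated HCl
c2 = 0.1     # M, target concentration
v2 = 2.5     # L, target volume

v1_l = c2 * v2 / c1              # C1*V1 = C2*V2  ->  V1 = C2*V2 / C1
print(f"{v1_l * 1000:.1f} mL")   # ~20.8 mL of concentrated acid
```

So measure about 20.8 mL of the 12 M acid and dilute with water to a total volume of 2.5 L (always adding the acid to the water, not the reverse).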
how do you solve this equation with the substitution method. 1) 6x + 4y = 6/5
In Exam Figure 7, AA′ = 33 m and BC =7.5 m. The span is divided into six equal parts at E, G, C, I, and K. Find the length of A′B. Round your answer to two decimal places.
the structural formula,with atoms A,B,and C unspecified,is? [Q2] the formula is C3 H6 Br2 If the compounds is called 1,2-dibromopropane, A,B,and C must be...what?
in describing an equilibrium reaction involving hydrogen gas, iodine gas, and hydrogen iodide gas, all of the statements about equation Q=[Hl]2 divide [H2][l2]are true except for which one?
with the exception of the transition elements, Groups3-12 in the periodic Table, and the noble gases, Group 18, atomic radii___ within periods, while atomic radii___ within groups.
if a small piece of sodium is dropped into water,it begins to react, forming hydrogen gas and sodium hydroxide. In the equation, 2Na(s)+2H2O(l) 2NaOH(aq)+H2(g),the sodium hydroxide that is formed is a..
find the value of the expression 3*45 divide 9-2 (10-6) -4 divide 2
Summary: Physics 182: Sample exam questions for Exam 1
Multiple Choice. Choose the one alternative that BEST completes the statement or answers the question, and mark your
scan sheet. Only the scan sheet will be graded.
The correct answers are: C, C, A, C, B, D
1) The diagrams shows a PV diagram for 4.3 g of oxygen gas in a sealed container. The temperature of state 1 is
21°C. What are the temperatures T3 and T4?
A) 16°C and 47°C B) 11°C and 32°C C) -52°C and 390°C D) 220°C and 660°C
2) A gas is initially at (20 Pa, 8 m3) and expands to (27 Pa, 12 m3). The minimum amount of pressure the gas can
be under is 9 Pa, and the maximum pressure the gas can be under is 40 Pa. Find the minimum amount of work
that can be done by the gas in going from its initial state to its final state.
A) 180 J B) 320 J C) 36 J D) 160 J
3) A 406.0 kg copper bar is put into a smelter for melting. The initial temperature of the copper is 300.0 K. How
much heat must the smelter produce to completely melt the copper bar? (The specific heat for copper is
386 J/kg·K, the heat of fusion for copper is 205 kJ/kg, and its melting point is 1357 K.)
A) 2.49 × 105 kJ B) 2.96 × 105 kJ C) 1.66 × 108 kJ D) 1.66 × 1011 kJ
4) An ideal gas initially at 300.0 K and occupying a volume of 20.0 L is adiabatically compressed. If its final
temperature is 400.0 K and γ = 1.3, what is its final volume?
A) 22 L B) 14 L C) 7.7 L D) 52 L
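The arithmetic behind questions 3 and 4 can be verified directly, using only the constants given in the problems:

```python
# Question 3: heat the copper to its melting point, then melt it.
m, c, L_f = 406.0, 386.0, 205e3          # kg, J/(kg K), J/kg
q_total = m * c * (1357.0 - 300.0) + m * L_f
print(f"{q_total / 1e3:.3g} kJ")         # ~2.49e5 kJ, matching answer A

# Question 4: adiabatic compression, T * V**(gamma - 1) = constant.
t1, t2, v1, gamma = 300.0, 400.0, 20.0, 1.3
v2 = v1 * (t1 / t2) ** (1.0 / (gamma - 1.0))
print(f"{v2:.2g} L")                     # ~7.7 L, matching answer C
```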
5) 4.8 mol of gas #1 initially has 9000 J of thermal energy. It interacts with 3.4 mol of gas #2, which initially has
5000 J of thermal energy. If both gases are monatomic, what is the change in thermal energy of gas #1?
|Nov9-12, 07:24 AM||#1|
Thermodynamic - water/ice piston help !
An inventor proposes to make a heat engine using water/ice as the working substance inside a cylindrical piston and taking advantage of the fact that water expands as it freezes and can therefore lift a piston supporting some mass m. The engine process consists of four steps as shown in the schematic below.
(i) Load: The weight to be lifted is placed on top of a piston over a cylinder of water held at a temperature of 1°C. The piston sits at height hw.
(ii) Lift: The system is then placed in thermal contact with a low-temperature reservoir at −1°C until the water freezes into ice, lifting the weight to a height hi.
(iii) Unload: The weight is then removed at height hi while the ice remains frozen.
(iv) Reset: The ice is melted by putting it back in contact with the high-temperature reservoir at 1°C, returning the piston to hw. Another mass is added to the piston and the cycle is ready to be repeated.
The inventor is pleased with this device because it can seemingly perform an unlimited amount of work (by lifting an unlimited mass m) while absorbing only a finite amount of heat each cycle.
Assuming that the piston has a cross-sectional area of 10 cm2 and contains 50 cm3 of liquid H2O (i.e. hw = 5 cm), calculate:
(i) The work done by the piston in raising a mass of 10 g.
(ii) The mass required to stop the engine working (i.e., reduce the freezing point of the water to −1°C).
|Nov9-12, 08:00 AM||#2|
Interesting idea. For your questions:
2. You will have to look into the compressibility of water and ice.
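For part (i), a rough numerical estimate can be sketched as follows, assuming densities of 1000 kg/m³ for water and 917 kg/m³ for ice (these values are not given in the thread):

```python
G = 9.81                    # m/s^2
area = 10e-4                # piston area: 10 cm^2 in m^2
v_water = 50e-6             # 50 cm^3 of water in m^3
mass = 0.010                # 10 g lifted mass, in kg

v_ice = v_water * 1000.0 / 917.0   # same mass of H2O at lower density
dh = (v_ice - v_water) / area      # rise of the piston, in m
work = mass * G * dh               # work done lifting the mass
print(f"{work:.2e} J")             # ~4.4e-4 J
```

The expansion is only about 9%, so the piston rises roughly 0.45 cm and the work per cycle on a 10 g mass is well under a millijoule.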
Three new analyses on climate extremes together explain how extremes may change in the future, what’s driving them, their impacts on people and ecosystems, and how we can adapt. The most extensive report is from the Intergovernmental Panel on Climate Change (IPCC) and it details the current state of knowledge on climate extremes.
The Sonoran Desert may look very different under hotter and drier conditions in the future, reports a recent study in Global Change Biology.
A draft report by the Southwest Climate Alliance provides information into the state of climate change knowledge in the Southwest region—Arizona, California, Colorado, Nevada, New Mexico, and Utah—and discusses the links between climate in the region and other sectors such as energy and transportation.
New research published in Ecology Letters shows that a single climate parameter, the timing of spring snowmelt, has many different effects on the population growth of the Mormon Fritillary butterfly.
What had previously been thought—that mountain pine beetles are able to fit two reproductive cycles into a single season due to warming temperatures—has finally been documented by the authors of a new study set to be published in The American Naturalist in May.
Over 3 million people in the U.S. could face the threat of sea level rise within the next century, according to a new study in Environmental Research Letters.
Authors of a new study classified 128 of the 358 bird species evaluated in California as vulnerable to climate change. Wetland species were found to be the most vulnerable relative to species that live in other habitats.
Tornado season began early this year, and in one day, more tornadoes than are usually seen for the entire month of March ripped across the Midwest and southeastern U.S.
Winter precipitation in the Southwest is likely to decrease by about 7.5 percent in the future, according to a new study led by University of Arizona researchers.
So what are everyone's thoughts on the origin of Mars' moons Phobos and Deimos? They are a bit of a mystery.
Here are the different theories:
1. They formed along with Mars when it accreted out of the planetary nebula.
Pros: explains how both are in the same circular, equatorial orbit around Mars.
Cons: Seems a strange coincidence that we are around to witness Phobos in such a low orbit that it is about (in a couple million years) to crash out of orbit. Also this would be the only case in the solar system where such small "asteroid-like" moons formed around such a large body.
2. They were captured into orbit around Mars.
Pros: This would explain their similarity to asteroids out in the Belt.
Cons: The probability that they would both be captured into circular and equatorial orbits is virtually zero. Also, there is no known mechanism for asteroids to be captured by such a small body as Mars (after all, the moons didn’t do perigee burns to brake themselves into orbit).
3. They were once part of a larger moon that broke up into several pieces. Phobos and Deimos are the last remnants of it.
Pros: This would explain how both moons have circular and equatorial orbits (since they started from the same body). Theoretically, there would have been many more moons at one time, but they have crashed into Mars one by one, as Phobos is on course to do.
Cons: Phobos and Deimos do not appear to be very similar compositionally, which is strange if they came from the same moon. Of course, if it was large enough, the proto-moon may have been differentiated.
4. The moons were formed from a large impact early in Mars history, perhaps from the impact that created the Hellas basin or the northern lowlands. This impact formed a small debris field around Mars which accreted into the moons.
Pros: Explains the circular orbits of the moons, and moons created from early gigantic impacts seem to be a recurring theme in the rest of the solar system (i.e. Earth's Moon and likely Pluto's moons).
Cons: While it explains the circular orbits, it does not explain how they are equatorial.
I believe the favored theory this decade is number 3, where a large body was present, but was broken up.
What are everyone's thoughts?
TeX is a powerful text formatter written by Donald Knuth; it is also free, like GNU Emacs. LaTeX is a simplified input format for TeX, implemented by TeX macros; it comes with TeX. SliTeX is a special form of LaTeX.
Emacs has a special TeX mode for editing TeX input files. It provides facilities for checking the balance of delimiters and for invoking TeX on all or part of the file.
TeX mode has three variants, Plain TeX mode, LaTeX mode, and
SliTeX mode (these three distinct major modes differ only slightly).
They are designed for editing the three different formats. The command
M-x tex-mode looks at the contents of the buffer to determine
whether the contents appear to be either LaTeX input or SliTeX
input; if so, it selects the appropriate mode. If the file contents do
not appear to be LaTeX or SliTeX, it selects Plain TeX mode.
If the contents are insufficient to determine this, the variable
tex-default-mode controls which mode is used.
When M-x tex-mode does not guess right, you can use the commands M-x plain-tex-mode, M-x latex-mode, and M-x slitex-mode to select explicitly the particular variants of TeX mode.
According to Inuit myth, a urine-soaked cloth was once whipped from an old lady’s hand and carried out to sea, where it turned into a sea monster called “skalugsuak.” Of its legendary peculiarities, skalugsuak lives for 200 years, has thousands of teeth, weighs over a ton, eats caribou whole, has skin that can destroy human flesh, and possesses—in place of eyes—living, glowing creatures which lure its prey.
But skalugsuak isn’t a fable—it’s a real shark, whose flesh is so packed with urea that it smells and tastes like urine. Commonly known as the Greenland shark, the animal is the second largest carnivorous shark (after the Great White), and the apex predator of the eastern Arctic. When their carcasses have washed up, scientists have opened their stomachs to find eels, sharks, beluga whales, dog, horse, reindeer, and a lot of fish, and they’ve even been reported to hunt caribou in the manner of a crocodile ambush. But despite seeming like a pretty awesome research subject to tell your friends about, very little work is currently being conducted on the smelly monster, and virtually nothing is known about its behavior.
Now Canadian scientists of the University of Windsor have taken on the task of tagging Greenland sharks to track their living conditions and location. Despite living at depths of over a mile, the animal doesn’t play too hard to get when it surfaces—it can be dragged out of the water with one’s bare hands.
Image: Christine Williams
Last week we saw, in "Climate Change and the Pacific Decadal Oscillation" that the Pacific Decadal Oscillation had flipped into its cold phase.
The Atlantic Multidecadal Oscillation went into its warm phase in 1995.
If I'm reading the literature correctly (always ask that question!), the drought in the southwestern U.S. (+PDO; +AMO) should start to relax* and, as the PDO settles in, move to the nations breadbasket (-PDO; +AMO).
Over the next five to twenty years this is going to get really interesting.
Just a little heads-up.
...North American Drought
Drought over north America has been correlated to the Atlantic Multidecadal Oscillation and the Pacific Decadal Oscillation.
The relationship between drought in the continental US and the phases of the Pacific Decadal Oscillation (PDO) and the Atlantic Multidecadal Oscillation (AMO). The most severe droughts occur when the PDO is in a negative phase, and the AMO is in a positive phase.
From McCabe (2004).
More than half (52%) of the space and time variance in multidecadal drought frequency over the conterminous United States is attributable to the Pacific Decadal Oscillation (PDO) and the Atlantic Multidecadal Oscillation (AMO). An additional 22% of the variance in drought frequency is related to a complex spatial pattern of positive and negative trends in drought occurrence possibly related to increasing Northern Hemisphere temperatures or some other unidirectional climate trend. Recent droughts with broad impacts over the conterminous U.S. (1996, 1999-2002) were associated with North Atlantic warming (positive AMO) and northeastern and tropical Pacific cooling (negative PDO). Much of the long-term predictability of drought frequency may reside in the multidecadal behavior of the North Atlantic Ocean. Should the current positive AMO (warm North Atlantic) conditions persist into the upcoming decade, we suggest two possible drought scenarios that resemble the continental-scale patterns of the 1930s (positive PDO) and 1950s (negative PDO) drought.
More from Texas A&M.
More from NOAA.
And this could turn out to be old news:
The U.S. Southwest's current drought could be the start of the Dust Bowl-like future that some scientists have already predicted will come from human-caused warming.
Or, it could just be another in the long line of natural, cyclical droughts in the region dating back 1,000 years.
But one of the nation's leading climate scientists, the University of Arizona's Nobel Prize-sharing Jonathan Overpeck, says he's coming to believe there's "a real likelihood" the drought is caused by global warming.
New research at the UA backs that up.
Some other scientists disagree, even as leading climate experts generally agree that someday, global warming will make the Southwest drier. This drought is already known as the region's worst in more than a century and one of the worst in the past 500 years...
The upside in the unknown realms of the desalination equation is rapidly gaining more potential as technology jumps forward in unexpected leaps. Are we at a point now, or will we be there soon, of wondering what effect large (mega) scale desalination and irrigation would have on global warming?
Even in a lower-tech world, desalination has had a huge role. Currently, there are 13,800 desalination plants operating in the world, producing a total of about 12 billion gallons of water a day. That, according to the International Desalination Association.
Consider: About 15-20% of the Earth's non-polar land surface is considered desert. Many of these areas could be spectacularly transformed if only they had access to cheap, plentiful water.
The problem with desalination is that, up until now, it has been an extremely energy intensive and, therefore, expensive process.
Two recent announcements have provided more reason to believe that larger scale desalination is already on the horizon.
Saudi Arabia's King Abdulaziz City for Science and Technology (KACST) has already begun a solar-powered project that will supply 30,000 cubic meters of clean water per day to 100,000 people in the city of Al-Khafji. The KACST project is leveraging a new technology developed with IBM to allow more intense heat to be harvested from the sunlight and also foresees other innovations, such as proprietary new membranes developed for the reverse osmosis process.
Back on the other side of the planet, a Vancouver company, Saltworks, is continuing to dazzle specialists from the Middle East to Australia with its own unique version of solar-powered desalination.
Saltworks' breakthrough process uses far less energy than conventional systems. Saltworks' Thermo-Ionic™ desalination technology harnesses renewable energy sources such as dryness in the air and heat from the sun - to provide sustainable, low cost, desalinated water with minimal environmental impact.
Besides requiring 20% or less of the energy of conventional desalination, Saltworks' process has other advantages: it does not release a concentrated saltwater brine as a by-product and, in fact, it could even use the brine produced by other desalination plants.
All that remains is for Saltworks to prove that its technology can be scaled up. Their initial Vancouver test plant will produce (or is producing) 1,000 liters of clean water per day.
Given that both of these new desalination processes use considerably less energy than previous technologies, it now becomes more interesting to project the potential and consequences of very large scale desalination. Indeed, there are sure to be further technological improvements in the near future. Deserts may be transformed - but at what price or benefit? Would a green Sahara accelerate global warming - or discourage it? The Aswan Dam in Egypt has wreaked havoc with the desert environment in unexpected ways. Now is the time to consider the large picture of desalination.
NAME
crypt - password and data encryption

DESCRIPTION
crypt is the password encryption function. It is based on the Data Encryption Standard algorithm with variations intended (among other things) to discourage use of hardware implementations of a key search.
key is a user's typed password.
salt is a two-character string chosen from the set [a-zA-Z0-9./]. This string is used to perturb the algorithm in one of 4096 different ways.
By taking the lowest 7 bits of each of the first eight characters of the key, a 56-bit key is obtained. This 56-bit key is used to encrypt repeatedly a constant string (usually a string consisting of all zeros). The returned value points to the encrypted password, a series of 13 printable ASCII characters (the first two characters represent the salt itself). The return value points to static data whose content is overwritten by each call.
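The 4096 figure follows from the two-character salt drawn from a 64-character alphabet; a quick sanity check (illustrative, not part of the man page):

```python
import string

# The salt alphabet described above: a-z, A-Z, 0-9, '.' and '/'
SALT_CHARS = string.ascii_lowercase + string.ascii_uppercase + string.digits + "./"

print(len(SALT_CHARS))       # 64 characters in the alphabet
print(len(SALT_CHARS) ** 2)  # two salt characters -> 64 * 64 = 4096 variations
```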
Warning: The key space consists of 2**56 (about 7.2e16) possible values. Exhaustive searches of this key space are possible using massively parallel computers. Software, such as crack(1), is available which will search the portion of this key space that is generally used by humans for passwords. Hence, password selection should, at minimum, avoid common words and names. The use of a passwd(1) program that checks for crackable passwords during the selection process is recommended.
The DES algorithm itself has a few quirks which make the use of the crypt(3) interface a very poor choice for anything other than password authentication. If you are planning on using the crypt(3) interface for a cryptography project, don't do it: get a good book on encryption and one of the widely available DES libraries.
RETURN VALUE
A pointer to the encrypted password is returned. On error, NULL is returned.
GNU EXTENSION
The glibc2 version of this function has the following additional features. If salt is a character string starting with the three characters "$1$" followed by at most eight characters, and optionally terminated by "$", then instead of using the DES machine, the glibc crypt function uses an MD5-based algorithm, and outputs up to 34 bytes, namely "$1$<string>$", where "<string>" stands for the up to 8 characters following "$1$" in the salt, followed by 22 bytes chosen from the set [a-zA-Z0-9./]. The entire key is significant here (instead of only the first 8 bytes).
CONFORMING TO
SVID, X/OPEN, BSD 4.3, POSIX 1003.1-2001
SEE ALSO
login(1), passwd(1), encrypt(3), getpass(3), passwd(5)
Williams, Gwendolyn D. Paul Robeson High School
To show the ratio pi = C/d by revolving a circle through one complete revolution.
The students will learn why the ratio is true.
To allow the students to do the same independent of the teacher's guidance.
You need: Two random size pieces of plexiglass.
One screw (long enough to pass through all thickness of the
pieces you use)
One pair of nuts to hold the screw in place.
One aluminum channel no longer than four feet.
To make the circle, cut either piece first. Draw a circle on the
paper covering of the plexiglass or cut random tangents to a squared
piece until it is somewhat round. Drill a hole at the center of
the piece large enough to put your screw through. Finding the center
should not be a problem. Once the "round" piece is ready, sand out
the rough spots. Find a thick piece of wood that you can nail the
circle to. After nailing the circle to the wood, go to the sander
and rotate the plexiglass until the rough spots are gone. Draw an
arrow from the center to the edge of the circle (a radius). The
second piece will be used for the handle and the spacers. Cut three
pieces, two of the same length and long enough to extend from the
center of the circle to the outside and the other about one inch
shorter. The first two pieces should be wide enough to fit
comfortably in your hand as they will be used to guide the circle
through the channel. The third piece will be used for spacers. Drill
the same size hole near the end of the two pieces to be used for the
handle and two holes four inches apart in the piece to be used for
spacers. Do not cut before the holes are drilled. At this point get
help from the shop manager to assist in putting the final product
together. If it is not obvious, the spacers should be cut to fit one
under each handle and flush against the circle to reduce friction when
rotating. Use other scrap pieces to space the edge of the handle that
will be held in your hand. Sand all edges. Use the glue to hold the
spacers in place at the edge of the handle. For the classroom demo,
the aluminum channel will be placed on the desk or table. Stand the
circle on its side, arrow down and perpendicular to the channel. Mark
the origin with a pencil on the channel. Make a complete rotation of
the circle. At the end of this rotation, mark the channel again. To
find this length, do either of two things. Lay the circle on the
channel end over end from the origin to the end of the length at which
time the measure is obvious. Or lay the circle between perpendiculars
for as many times as it takes to cover this length. In either case,
the measure should be 3.14159... or a number very close to that. | <urn:uuid:617fc726-92f1-444a-8ff6-c7e8ca919642> | 4.03125 | 637 | Tutorial | Science & Tech. | 71.934749 |
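The measurement in the demo can be sketched numerically: rolling the circle one full turn traces out its circumference, so the marked length divided by the diameter is pi. The 10-inch diameter below is an arbitrary example value, not one from the activity.

```python
import math

# Numerical sketch of the classroom demo: rolling a circle one full
# revolution marks off its circumference on the channel, so
# (rolled length) / (diameter) = pi. The diameter is an example value.

diameter = 10.0
rolled_length = math.pi * diameter   # the distance between the two pencil marks

ratio = rolled_length / diameter
print(round(ratio, 5))               # 3.14159
```

Laying the circle end over end along the channel, as the demo suggests, is just measuring this same ratio with the diameter as the unit.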
The world's 8,240 species of reptiles inhabit every continent except Antarctica. Reptiles include turtles, snakes, crocodiles, and lizards. They can be as small as the dwarf gecko (less than an inch long) or as big as the saltwater crocodile, which can weigh more than a ton. All reptiles have scales, but some are too small to be seen. Reptiles are ectothermic (their body temperature is regulated by their environment). Most lay eggs, but a few give birth to live young.
Amphibians are in crisis. One-third of all known amphibian species are in danger of being wiped out by Batrachochytrium dendrobatidis (Bd)—also known as chytrid fungus. Experts and institutions have joined together to form the Amphibian Rescue and Conservation Project, which aims to rescue and possibly save numerous species.
The discovery of what may be three new frog species by researchers in Panama illustrates the hope and fear encountered daily by the Panama Amphibian Rescue and Conservation Project. The discoveries lead to hope that project researchers can save these animals from a deadly fungus killing frogs worldwide and the fear that many species will go extinct before scientists even know they exist.
There are more than 6,000 species of amphibians on Earth, including frogs, toads, salamanders, and newts. One-third of amphibians are threatened with extinction.
For most amphibians, life begins in the water—the young have gills and lack legs when they hatch from eggs laid in the water. They metamorphose, growing legs and changing in other ways to live on land. The word "amphibian" comes from Greek—both lives. Amphibians became the first vertebrates to live on land, and like their "cold-blooded" reptile relatives, depend on external energy sources (such as the sun) to maintain their body temperatures.
The Reptile Discovery Center is home to many distinctive animals, from the massive Aldabra tortoise to the unusual gharial, to better known creatures such as the American alligator, Komodo dragon, and boa constrictor.
As frogs around the world continue to disappear—many killed by a rapidly spreading disease called chytridiomycosis, which attacks the skin cells of amphibians—one critically endangered species has received an encouraging boost.
One of Japan’s “special natural treasures” is now among the National Zoo’s most valued scientific gems, after a voyage that has united two cultures in an international conservation effort. | <urn:uuid:abcc5a64-b3e0-4ab4-8967-b7eedee9fd07> | 3.671875 | 544 | Content Listing | Science & Tech. | 38.332822
In 1883, German physiologist Max Rubner proposed that an animal's metabolic rate is proportional to its mass raised to the 2/3 power. This idea was rooted in simple geometry. If one animal is, say, twice as big as another animal in each linear dimension, then its total volume, or mass, is 2^3 = 8 times as large, but its skin surface is only 2^2 = 4 times as large. Since an animal must dissipate metabolic heat through its skin, Rubner reasoned that its metabolic rate should be proportional to its skin surface, which works out to mass to the 2/3 power.
In 1932, however, animal scientist Max Kleiber of the University of California, Davis looked at a broad range of data and concluded that the correct exponent is 3/4, not 2/3. In subsequent decades, biologists have found that the 3/4-power law appears to hold sway from microbes to whales, creatures of sizes ranging over a mind-boggling 21 orders of magnitude. …
West argues that Rubner was on the right track in comparing surface area with volume, but that an animal's metabolic rate is determined not by how efficiently it dissipates heat through its skin but by how efficiently it delivers fuel to its cells.
Rubner should have considered an animal's "effective surface area," which consists of all the inner surfaces across which energy and nutrients pass from blood vessels to cells, says West. These surfaces fill the animal's entire body, like linens stuffed into a laundry machine.
The idea, West says, is that a space-filling surface scales as if it were a volume, not an area. If you double each of the dimensions of your laundry machine, he observes, then the amount of linens you can fit into it scales up by 2^3, not 2^2. Thus, an animal's effective surface area scales as if it were a three-dimensional, not a two-dimensional, structure.
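The doubling argument can be checked with a few lines of arithmetic. The snippet below is only an illustration of the exponents discussed in the article: it compares how area-like and volume-like quantities grow when every linear dimension is doubled, and what Rubner's 2/3 and Kleiber's 3/4 exponents then predict for metabolic rate.

```python
# Numerical check of the doubling argument above: scale every linear
# dimension by k = 2 and compare area-like vs volume-like growth.

k = 2
area_factor = k ** 2     # ordinary 2-D surface: 4x
volume_factor = k ** 3   # volume, or a space-filling "effective surface": 8x
print(area_factor, volume_factor)        # 4 8

# With mass tracking volume, the two proposed exponents predict
# different metabolic gains for an 8x heavier animal:
mass_factor = volume_factor
print(round(mass_factor ** (2 / 3), 2))  # Rubner's 2/3 law: 4.0
print(round(mass_factor ** (3 / 4), 2))  # Kleiber's 3/4 law: 4.76
```

The gap between 4.0 and 4.76 for a mere doubling is what makes the exponent distinguishable across the 21 orders of magnitude of body size mentioned above.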
This creates a challenge for the network of blood vessels that must supply all these surfaces. In general, a network has one more dimension than the surfaces it supplies, since the network's tubes add one linear dimension. But an animal's circulatory system isn't four dimensional, so its supply can't keep up with the effective surfaces' demands. Consequently, the animal has to compensate by scaling back its metabolism according to a 3/4 exponent.
Though the original 1997 model applied only to mammals and birds, researchers have refined it to encompass plants, crustaceans, fish, and other organisms. The key to analyzing many of these organisms was to add a new parameter: temperature.
Mammals and birds maintain body temperatures between about 36°C and 40°C, regardless of their environment. By contrast, creatures such as fish, which align their body temperatures with those of their environments, are often considerably colder. Temperature has a direct effect on metabolism—the hotter a cell, the faster its chemical reactions run.
In 2001, after James Gillooly, a specialist in body temperature, joined Brown at the University of New Mexico, the researchers and their collaborators presented their master equation, which incorporates the effects of size and temperature. An organism's metabolism, they proposed, is proportional to its mass to the 3/4 power times a function in which body temperature appears in the exponent. The team found that its equation accurately predicted the metabolic rates of more than 250 species of microbes, plants, and animals. These species inhabit many different habitats, including marine, freshwater, temperate, and tropical ecosystems. …
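The master equation described above combines a 3/4-power mass term with a Boltzmann-style temperature term, B ∝ M^(3/4) · exp(−E/kT). The sketch below is an illustration only: the activation energy E ≈ 0.63 eV is a commonly quoted average rather than a value from this article, the normalization is arbitrary, and the example masses and temperatures are made up, so only ratios between organisms are meaningful.

```python
import math

# Sketch of the size-and-temperature scaling described above:
#   B ∝ M^(3/4) * exp(-E / (k * T))
# E = 0.63 eV is an assumed average activation energy; the overall
# normalization is arbitrary, so only ratios are meaningful.

K_B = 8.617e-5   # Boltzmann constant, eV per kelvin
E_A = 0.63       # assumed activation energy, eV

def relative_metabolic_rate(mass_kg, temp_c):
    temp_k = temp_c + 273.15
    return mass_kg ** 0.75 * math.exp(-E_A / (K_B * temp_k))

# A warm-bodied 0.03 kg mammal versus a cold 0.03 kg fish of equal mass:
warm = relative_metabolic_rate(0.03, 37.0)
cold = relative_metabolic_rate(0.03, 10.0)
print(f"warm/cold ratio at equal mass: {warm / cold:.1f}")   # 9.5
```

A roughly tenfold metabolic gap from temperature alone shows why adding the temperature term was the key to extending the model from mammals and birds to fish and microbes.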
A single equation predicts so much, the researchers contend, because metabolism sets the pace for myriad biological processes. An animal with a high metabolic rate processes energy quickly, so it can pump its heart quickly, grow quickly, and reach maturity quickly.
Unfortunately, that animal also ages and dies quickly, since the biochemical reactions involved in metabolism produce harmful by-products called free radicals, which gradually degrade cells.
"Metabolic rate is, in our view, the fundamental biological rate," Gillooly says. There is a universal biological clock, he says, "but it ticks in units of energy, not units of time." …
The team's master equation may resolve a longstanding controversy in evolutionary biology: Why do the fossil record and genetic data often give different estimates of when certain species diverged? …
The problem is that there is no universal clock that determines the rate of genetic mutations in all organisms, Gillooly and his colleagues say. They propose in the Jan. 4 Proceedings of the National Academy of Sciences that, instead, the mutation clock—like so many other life processes—ticks in proportion to metabolic rate rather than to time.
The DNA of small, hot organisms should mutate faster than that of large, cold organisms, the researchers argue. An organism with a revved-up metabolism generates more mutation-causing free radicals, they observe, and it also produces offspring faster, so a mutation becomes lodged in the population more quickly.
When the researchers use their master equation to correct for the effects of size and temperature, the genetic estimates of divergence times—including those of rats and mice—line up well with the fossil record.
Friday, February 11, 2005
Animal lifespans and space-filling curves
Science News has a review article on the 3/4 law of animal lifespans and metabolism. | <urn:uuid:9eff0812-daa2-4143-b0c0-c8e23807d343> | 3.9375 | 1,087 | Personal Blog | Science & Tech. | 33.741555 |
Class: Insecta (Insect)
Species: about 14,000
Body length: 0.08 to 1.5 inches (2 to 40 millimeters)
Life span: highly variable; queens of some species can live up to 15 years
Incubation: varies by species
Number of eggs laid: hundreds to millions over the life of a queen, depending on species
Age at maturity: 1 or more weeks
Conservation status: stable where habitats remain secure
The ant family's scientific name, Formicidae, comes from the Latin name for ant, formica.
If you combine the weight of all the ants on Earth, the total would be about the same as the weight of all the humans on Earth!
An ant can lift 20 times its body weight using its jaws. If we humans were as strong as an ant, we could lift three cars up on our head!
Incredible engineers, ants support their tunnels with tiny sticks, pieces of grass, and leaves.
Some ants control the temperature in their nest chambers by stacking leaves near the entrances. If it gets too warm, they remove some of the leaves.
Range: all continents except Antarctica
All ants have three body parts: head, thorax, and abdomen.
You’ve seen them at picnics, wandering around your kitchen, and in the garden: ANTS! They seem to be everywhere—and we are lucky that they are! Ants are one of the most abundant animals on Earth, and their contributions to our ecosystems are important.
Ants are complex insects that live in large social groups called colonies. As insects, ants have a hard outer body called an exoskeleton and three body parts: head, thorax, and abdomen. Ants have two pairs of appendages on their heads: the mandibles, used for grabbing or fighting, and maxillae, used for breaking up food into small bits for swallowing. Typically, ants have 2 compound eyes containing 6 to 1,000 lenses, though in some species the eyes are reduced or even nonfunctional. These eyes can only see objects close up but are very good at detecting motion. Their head has antennae, which are used for touching, feeling, smelling, and tasting. Ants also use the antennae to communicate with one another and keep the colony running smoothly.
An ant's six legs are attached to its thorax. Each leg has nine segments and two claws for gripping whatever the ant is climbing. The ant's abdomen holds the digestive organs, including the crop, which can be used to store food for the colony.
A young honeypot ant queen and her small brood are attended by workers.
Hail to the queen!
Most ant colonies live in nests on or under the ground or in trees, but some ant species live in clusters, not building any nests at all. Most ant colonies have a queen, large numbers of female worker ants, and occasionally some males. The queen’s only job is to lay eggs, and this she does throughout her entire life.
But how does she begin her "reign"? A young winged queen leaves her birth colony on her first and only flight with a number of winged males. Males are only produced for mating purposes and do no work other than to fertilize virgin queens. Mating flights typically include neighboring ant colonies, and the signals for this coordinated nuptial swarm are still not fully understood. After the queen and males mate, the males die. The young queen now finds a good site to make her nest and start her colony. She rakes the wings off her body, as she no longer needs them for her new life.
Some ant colonies do not have queens at all, and several species use a different way to start a new colony in addition to having a founding queen. In many primitive ant species, certain workers become egg layers and mate with males inside the nest to continue the colony after the queen dies. And many species of ants have several queens, either at the nest-founding stage or for the length of the colony’s life.
Honeypot ant workers tend larvae and cocoons in a brood chamber. Winged males are also present.
Born to work
A queen ant lays thousands, sometimes millions, of eggs in her lifetime. Workers move the eggs to brood chambers in the nest, where they hatch into larvae and are fed until they turn into pupae. The process that determines what kind of ant a young larva becomes is still not well understood, but it is thought to involve the type of food and chemical signals that they receive from their sisters.
Talk about teamwork: ants really know how to work well together. Recent research has shown that ants in the nest change jobs regularly, and some spend a good deal of time doing nothing at all! The jobs within the colony are the same for all ant species. Workers must feed and care for the young and the all-important queen, provide food for the colony, defend the colony, and maintain the nest. Keeping the nest clean of waste and the bodies of dead members is important for the health of all.
A leafcutter ant holds the leaf piece in a groove on top of her head as she travels back to the nest.
Call out the cavalry
While all workers defend the nest as needed, many species have specialized workers called majors, also known as soldiers. These ants are larger than the other workers and have specialized mandibles for fighting, moving large objects, and crushing tough food items like seeds. However, it has been shown in some species that the kind of ant sent to defend the nest depends on the type of intruder. Invasions by other ant species of similar size bring the smaller workers running, but disturbances from larger animals call out the cavalry of soldiers, which are better equipped to stab tender flesh and encourage the predator to look elsewhere for food!
An ant colony may have up to eight million individuals at any one time, so a communication system is important for keeping everyone and everything organized. Ants release scents, called pheromones, from glands on their body. Each pheromone is a special scent message that is "read" or received through the antennae of the other ants in the colony. Many different kinds of information can be communicated this way. A scent trail can be left on the ground to lead other workers to a food source. Ants in the colony can smell each other's rank and can "sniff out" the presence of an intruder. Ants even have an alarm scent to alert the colony to danger. Dead ants have a scent that signals the cleanup workers to remove the body from the nest, keeping it clean and free of disease.
The ability of ants to make decisions based on the chemical composition of their nestmates is a fascinating topic of study. Researchers are finding that an ant can sweep her antennae over the body of her sister and determine things like her reproductive status and whether or not she should help with a particular task for the colony!
A honeypot ant replete filled with nectar hangs from the nest chamber wall. She provides this food from her "storage" system when her nestmates need it.
It takes all kinds!
Ants use an amazing variety of food items and have bizarre nesting behaviors. Some are considered farmers, some gather seeds and insects, and others are straight predators. Species that farm generally have a stable nest site and use areas of the nest to do their farming. For example, leafcutter ants bring leaves into the nest, and these leaves are then used to grow a fungus that the ants eat. Wood ants protect and "herd" nectar-sucking insects, such as aphids, then "milk" them. When the ant strokes the aphid’s body, a sweet liquid called honeydew comes out.
Honeypot ants collect water, nectar, and insect fluids when available in their desert ecosystems. The liquid is then fed to special worker ants, called repletes, which hang from the ceiling in a special nest chamber and store the nectar in their bodies. The replete’s body expands to hold the liquid, sometimes swelling to the size of a grape! This stored food is used by all members of the colony during lean times.
Army ants, the best known of the hunting species, may number over 700,000 in a colony. They travel to find insects, spiders, and even small mammals and reptiles to eat. The only time they stop marching and rest is while they are waiting for new eggs to hatch and pupae to emerge as adults. During this phase, the ants link their bodies together and form a living nest called a bivouac, which protects the queen and her brood.
In some parts of the world, ants are considered a delicacy.
Ant colonies treat ants from another colony or of another species as intruders: alarm pheromones signal the intruder. Fire ants and army ants respond as one and can overrun another ant colony, taking the individuals as food. Still other ant species may capture workers from another colony, taking them back to their nest and, by shifting their pheromone messages, make them do the work of the colony. However, when two colonies of nonnative Argentinian ants meet, instead of fighting they have a family reunion, often joining into one big colony! However, this only happens in areas where they are not native, like the United States. In Argentina, separate colonies regard one another as enemies.
As with every animal on the planet, ants are an important part of their habitat. They are essential in turning and aerating soil in all the ecosystems where they occur, sometimes even surpassing the work of earthworms. Ants help spread seeds for plants and are food for countless animals. Many are pollinators and even more are decomposers, breaking down organic waste and creating healthy habitats. Although their journeys into our homes to locate food or water may be a bit troublesome, consider their important place in the overall web of life. | <urn:uuid:527a606f-87e4-4ebf-9343-d0783f88794a> | 3.671875 | 2,057 | Knowledge Article | Science & Tech. | 54.753075 |
Q&A: General Astronomy and Space Science
As the planet Earth is accumulating meteoric dusts in tons
every day, will not the mass of the Earth increase?
If so, does it affect its spin? Does the gravity get affected?
You are right, as dust from the cosmos
falls on the Earth, the Earth does gain mass. Even the tiniest
particle of dust will cause the Earth's mass to increase very slightly.
And spin does depend on the mass of the object that is spinning.
However, the mass of the Earth is so great compared with the mass of
dust falling onto it that this change is negligible. It is hard to
imagine how large the numbers are that describe astronomical bodies,
including our planet, but they are so big that even several thousand
tons of dust per day has almost no effect on the object as a whole.
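The "almost no effect" claim is easy to quantify. The daily dust figure below is an assumed order-of-magnitude value consistent with the "several thousand tons" in the answer, not a precise measurement.

```python
# Rough numbers for the answer above: even ~5,000 tonnes of cosmic dust
# per day is a vanishing fraction of Earth's mass.
# The daily dust figure is an assumed order-of-magnitude value.

EARTH_MASS_KG = 5.97e24
dust_per_day_kg = 5_000 * 1_000          # 5,000 tonnes/day (assumption)

yearly_gain_kg = dust_per_day_kg * 365.25
fraction_per_year = yearly_gain_kg / EARTH_MASS_KG
print(f"fractional mass gain per year ≈ {fraction_per_year:.1e}")  # ≈ 3.1e-16
```

A relative change of a few parts in 10^16 per year is far too small to measurably affect Earth's spin or gravity.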
For the record, the mass of the Earth is approximately 5.97 × 10^24
kilograms - remember that one million is 1 × 10^6, so the mass of
the Earth is measured in millions of millions of millions of millions | <urn:uuid:646089bd-4e2b-4514-9491-a9a72e02775e> | 3.625 | 232 | Q&A Forum | Science & Tech. | 62.782778 |
Behind these devastating individual events there is a common physical cause, propose scientists of the Potsdam Institute for Climate Impact Research (PIK). The study will be published this week in the US Proceedings of the National Academy of Sciences and suggests that man-made climate change repeatedly disturbs the patterns of atmospheric flow around the globe's Northern hemisphere through a subtle resonance mechanism.
“An important part of the global air motion in the mid-latitudes of the Earth normally takes the form of waves wandering around the planet, oscillating between the tropical and the Arctic regions. So when they swing up, these waves suck warm air from the tropics to Europe, Russia, or the US, and when they swing down, they do the same thing with cold air from the Arctic,” explains lead author Vladimir Petoukhov.
“What we found is that during several recent extreme weather events these planetary waves almost freeze in their tracks for weeks. So instead of bringing in cool air after having brought warm air in before, the heat just stays. In fact, we observe a strong amplification of the usually weak, slowly moving component of these waves,” says Petoukhov. Time is critical here: two or three days of 30 degrees Celsius are no problem, but twenty or more days lead to extreme heat stress. Since many ecosystems and cities are not adapted to this, prolonged hot periods can result in a high death toll, forest fires, and dramatic harvest losses.
Climate change caused by greenhouse-gas emissions from fossil-fuel burning does not mean uniform global warming – in the Arctic, the relative increase of temperatures, amplified by the loss of snow and ice, is higher than on average. This in turn reduces the temperature difference between the Arctic and, for example, Europe, yet temperature differences are a main driver of air flow. Additionally, continents generally warm and cool more readily than the oceans. “These two factors are crucial for the mechanism we detected,” says Petoukhov. “They result in an unnatural pattern of the mid-latitude air flow, so that for extended periods the slow synoptic waves get trapped.”
The authors of the study developed equations that describe the wave motions in the extra-tropical atmosphere and show under what conditions those waves can grind to a halt and get amplified. They tested their assumptions using standard daily weather data from the US National Centers for Environmental Prediction (NCEP). During recent periods in which several major weather extremes occurred, the trapping and strong amplification of particular waves – like “wave seven” (which has seven troughs and crests spanning the globe) – was indeed observed. The data show an increase in the occurrence of these specific atmospheric patterns, which is statistically significant at the 90 percent confidence level.
“Our dynamical analysis helps to explain the increasing number of novel weather extremes. It complements previous research that already linked such phenomena to climate change, but did not yet identify a mechanism behind it,” says Hans Joachim Schellnhuber, director of PIK and co-author of the study. “This is quite a breakthrough, even though things are not at all simple – the suggested physical process increases the probability of weather extremes, but additional factors certainly play a role as well, including natural variability.” Also, the 32-year period studied in the project provides a good indication of the mechanism involved, yet is too short for definite conclusions.
Nevertheless, the study significantly advances the understanding of the relation between weather extremes and man-made climate change. Scientists were surprised by how far outside past experience some of the recent extremes have been. The new data show that the emergence of extraordinary weather is not just a linear response to the mean warming trend, and the proposed mechanism could explain that.
Article: Petoukhov, V., Rahmstorf, S., Petri, S., Schellnhuber, H. J. (2013): Quasi-resonant amplification of planetary waves and recent Northern Hemisphere weather extremes. Proceedings of the National Academy of Sciences (Early Edition) [doi:10.1073/pnas.1222000110]
Weblink to the article (once it is published): www.pnas.org/cgi/doi/10.1073/pnas.1222000110
For further information please contact:
PIK press office
Phone: +49 331 288 25 07
Mareike Schodder | Source: PIK Potsdam
Further information: www.pik-potsdam.de
| <urn:uuid:84932e44-1389-4ffa-af9c-ad13f201153e> | 3.8125 | 1,503 | Knowledge Article | Science & Tech. | 45.746378
Busy as Bees: Reproductive Chaos after Queen's Death
This nest of Asian dwarf red honeybees is built as a single comb from a twig, making it accessible to invading workers from other colonies once the queen dies.
CREDIT: © Nature
Female honeybees follow a simple code: only the queen lays eggs. If a female worker breaks the code, the other females quickly devour her rogue eggs. The queen even releases chemical signals that render other females' ovaries inactive.
But once the queen dies, the code goes out the window and chaos reigns.
Within a week of her death, her chemical signals wear off, the workers' ovaries become active, egg-policing stops and the workers rear one last batch of males before the whole colony dies.
New research shows that among Asian dwarf red honeybees, Apis florea, even females from other hives get in on the action.
To determine just how many interloping A. florea bees take advantage of a queenless colony, Benjamin Oldroyd of the University of Sydney in Australia collected four wild A. florea nests and transplanted them to a location with many bee colonies.
Oldroyd and his colleagues took samples of worker bees from each colony and used genetic techniques to determine the percentage of natives versus outsiders in the nest. Then they removed the queen from each nest and returned four weeks later to measure changes in the population.
Before the removal of the queen, 2 percent of the workers were unrelated, and none of these had activated ovaries.
Once the queen was out of the picture, unrelated workers increased to 4.5 percent. And among workers with activated ovaries, unrelated workers held a significant lead over the natives, 43 percent to 18 percent.
And while the non-natives accounted for a small percentage of the total number of workers, they had better reproductive success and were responsible for 36 percent of the eggs and 23 percent of the pupae.
This split, according to the authors, is evidence that invading workers are seeking out queenless colonies in order to lay eggs.
Despite all the extra eggs, the colony still dies. Females do all the pollen foraging and honey producing, and defend the hive. Since only the queen can produce females, the colony cannot survive without her. The worker females can lay eggs, but they can't mate with the male drones, and unfertilized eggs yield only males.
A. florea nests are suspended from twigs and built on a single honeycomb. According to researchers, this structure makes the nest easily accessible to invading workers.
The research is reported in the Oct. 6 issue of the journal Nature.
| <urn:uuid:778d14a5-cb64-4a1d-be9a-6b1b74c63c0f> | 3.0625 | 560 | Truncated | Science & Tech. | 46.881892
int mkstemp(char *template);
The mkstemp() function shall replace the contents of the string pointed to by template by a unique filename, and return a file descriptor for the file open for reading and writing. The function thus prevents any possible race condition between testing whether the file exists and opening it for use. The string in template should look like a filename with six trailing 'X's; mkstemp() replaces each 'X' with a character from the portable filename character set. The characters are chosen such that the resulting name does not duplicate the name of an existing file at the time of a call to mkstemp().
Upon successful completion, mkstemp() shall return an open file descriptor. Otherwise, -1 shall be returned if no suitable file could be created.
No errors are defined.
The following sections are informative.
The following example creates a file with a 10-character name beginning with the characters "file" and opens the file for reading and writing. The value returned as the value of fd is a file descriptor that identifies the file.
#include <stdlib.h>
...
char template[] = "/tmp/fileXXXXXX";
int fd;

fd = mkstemp(template);
It is possible to run out of letters.
The mkstemp() function need not check to determine whether the filename part of template exceeds the maximum allowable filename length.
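A fuller sketch with error checking; the helper name and the unlink-after-open cleanup idiom are our additions for illustration, not part of this page:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Create an anonymous scratch file in /tmp and return its open
 * descriptor, or -1 on failure.  (Helper name is ours, not standard.) */
int make_scratch_file(void)
{
    /* template must be a writable array: mkstemp() modifies it in place */
    char template[] = "/tmp/fileXXXXXX";

    int fd = mkstemp(template);
    if (fd == -1) {
        perror("mkstemp");
        return -1;
    }

    /* unlink immediately: the name disappears from the filesystem, and
     * the file itself is reclaimed once the descriptor is closed */
    unlink(template);
    return fd;
}
```

Because mkstemp() returns an already-open descriptor, there is no window in which another process can race to create the same name.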
getpid(), open(), tmpfile(), tmpnam(), the Base Definitions volume of IEEE Std 1003.1-2001, <stdlib.h>
Apparently both of these equations can't be right at the same time: the number of objects and the weight of the objects are not the same thing.
Mass of the nucleus equals mass of the neutrons plus mass of the protons MINUS binding energy.
Neutrons and protons bind very strongly to each other in the nucleus. When you take several separate neutrons and protons and fuse them into a nucleus, a lot of energy is emitted (that's where the energy in stars and hydrogen bombs comes from). You have probably heard of Einstein's equation E = mc2 - it means energy is equivalent to mass. When the energy is emitted, the mass of the remaining nucleus is smaller than the sum of the masses of the protons and neutrons; this missing mass is called the "mass deficit" or "binding energy".
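As a quick sketch of the arithmetic: the rest masses below are standard reference values quoted approximately from memory, and the helper function is ours.

```python
# Mass deficit (binding energy) of helium-4, working in MeV/c^2.
M_PROTON = 938.272            # MeV/c^2, approximate reference value
M_NEUTRON = 939.565           # MeV/c^2, approximate reference value
M_HELIUM4_NUCLEUS = 3727.379  # MeV/c^2, the alpha particle

def binding_energy(n_protons, n_neutrons, nucleus_mass):
    """E_binding = (sum of constituent masses) - (mass of bound nucleus)."""
    constituents = n_protons * M_PROTON + n_neutrons * M_NEUTRON
    return constituents - nucleus_mass

# Two protons plus two neutrons weigh about 28 MeV/c^2 more than the
# bound helium-4 nucleus -- that "missing" mass left as binding energy.
print(round(binding_energy(2, 2, M_HELIUM4_NUCLEUS), 1))  # -> 28.3
```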
NASA’s J-2X rocket engine is on the test stand and ready for its second round of tests, building on last year’s successful test-firings that by some metrics were the most successful rocket engine firings NASA has ever undertaken. The J-2X will provide upper-stage power propelling NASA’s next-gen Space Launch System (SLS) from the upper atmosphere out into deep space after the first stage is jettisoned.
To test future rocket designs, NASA is employing an age-old bar trick: Slowly and deliberately apply pressure to an aluminum can until it crumples. No foreheads will be involved, however.
In late March, engineers will use a million pounds of force to crush a 27.5-foot diameter, 20-foot-tall canister made of aluminum and lithium, hoping to learn more about shell buckling so they can design sturdier rocket skins.
> > >The strong typing of object-oriented languages encourages
> > >narrowly defined packages that are hard to reuse. Each package
> > >requires objects of a specific type; if two packages are to work
> > >together, conversion code must be written to translate between the
> > >types required by the packages.
> If you define the interface well in OOL's, you can handle a variety
> of typed inputs. It's much easier to extend that typing in OOL's than
> with something like C (and have deterministic output). I would guess
> serious programmers do develop their own libraries, or work with
> corporate standard ones (what does Adobe use? they can't be rewriting
> every graphics routine from scratch). Having classes inherit methods
> and types is much nicer than having everything default to a string
> and thinking that's sufficient. MFC is a help for OOP exactly by
> defining libraries that simplify programming. The dangers
> include that it might come at the expense of too much speed, that
> it might not provide the tools to do everything a lower-level
> approach would, or that in doing more complex tasks, the "simpler"
> language actually becomes more obfuscated than the lower-level one.
having started to use the C++ Standard Template Library (STL) recently, i
must say it totally disproves ousterhout's assertions. the interfaces
are extremely general and because they use templates instead of virtual
members, all the overhead is compile-time. there is no performance hit
for the genericity.
also, i think MFC is perhaps a bad example of how to use C++. M$ has
never impressed me with their API design ability.
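for what it's worth, the compile-time genericity point is easy to demonstrate; this small sketch (ours, not from the thread) sums any standard container without a single virtual call:

```cpp
#include <list>
#include <numeric>
#include <vector>

// One generic algorithm, many concrete types: the compiler stamps out a
// specialized instantiation per container type, so there is no runtime
// dispatch cost -- the "overhead is compile-time" claim above.
template <typename Container>
typename Container::value_type sum(const Container& c)
{
    return std::accumulate(c.begin(), c.end(),
                           typename Container::value_type{});
}
```

the same source line works for a vector of ints and a list of doubles, with no conversion code between the two.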
Cool Butterfly Photonic Crystals
The small structures in the scanning electron microscope image of a
butterfly wing scale (a) are natural photonic crystals that give the
wings of some butterflies their brilliant iridescent blue colors. The
structures in the second image (b) are responsible for a blue-violet
iridescence. In the third image (c), the small structures are almost
entirely absent, and the butterfly wing scales are a dull brown shade.
New research suggests that photonic crystals keep butterfly wings cooler,
as well as making them beautiful. In higher elevations where butterflies
are more reliant on sunlight to keep them warm, some of the insects
have evolved wing scales in which the photonic crystals have been disrupted
(as in image c), improving the chances that they survive long enough
to mate despite the frigid climate.
Relationships between ants and other organisms are numerous:
- Ant/Ant: Some species of ants are extreme in their
dependence upon other ant species. For example, the ant Teleutomyrmex schneideri
spends almost its entire life riding on the backs of host ant species. They
seem to contribute nothing to the hosts, but are tolerated and even fed. Slavemaker
ants (Formica subintegra, for example) steal brood from other colonies
and return the brood to develop and serve the Slavemaker colony. The slaves
are absolutely dependent in that if they don't work, they don't get fed. Other
ants work together as with the Crematogaster limata parabiotica and
Monacis debilis. These ants have their nests close together and share
the same foraging trails. Camponotus has also been seen giving food
to the Monacis workers.
- Ant/Other Insect: These relationships are many and
diverse, ranging from commensal to parasitic. Aphids and ants have many species
relationships where both the ants and aphids benefit (mutualism). Aphids secrete
honeydew and amino acids through their anus. The ants eat or store the honeydew.
The ants sometimes incorporate the aphid territory into their own territory,
which allows easier access to the aphids and affords the aphids protection
by a greater number of ants. The honeydew sometimes contains chemicals that
are purposely directed at attracting ants. The aphids sometimes release chemical
signals that warn other aphids of a predatorial attack and also alert the
ants so they can attack the invader.
- Ant/Plant: These relationships are also known to be
abundant. Some carnivorous plants allow ants to hunt herbivores on them. In
turn, the ant protects the plant from the herbivores eating their plant tissue.
Many plants have extrafloral nectaries on various parts of the plant. These
are nectar- producing structures not associated with flowers. The ants are
attracted to the plant where they can obtain small amounts of sugar and, in
turn, defend the plant from other insects. Ants provide this same service
of eliminating herbivores to many plants. Other ants confiscate plant parts
to grow fungus on in fungus gardens deep within colonies. These leaf cutter
ants process the leaves and use the fungus grown upon the leaf material for
food. Sometimes ants live in tree hollows and have no effect on the plant
at all. Harvester ants do a great service to plants by collecting and transporting
seeds. In one case, the ants eat a small part of the seed and leave the rest
of the still-viable seed to germinate.
Coldest ever space instrument set to fly
The coldest instrument ever to fly in space is set to launch aboard a Japanese X-ray observatory called Astro-E2 on Wednesday. The instrument, which has suffered several previous setbacks, will study some of the most energetic phenomena in the universe.
This sensitive task requires Astro-E2's main instrument, the X-Ray Spectrometer (XRS), to be cooled to just 0.06 degrees above absolute zero. This will allow it to detect extremely small changes in photon energies - representing a 10-fold improvement in sensitivity compared to existing detectors.
Cloning the Gaur
By Britt Bailey
"The day may come when the rest of the animal creation may acquire those
rights which never could have been withholden from them but by the hand
of tyranny." - Jeremy Bentham
We are about to meet "Noah", the first cloned member of an endangered
species. Noah is a gaur, a strikingly colored, white-footed member of the
ox family that normally lives in Northern India. His birthday has been set
by a coterie of white-coated lab scientists in Worcester, MA. If Noah is a
success, a company known as Advanced Cell Technology (ACT) will be the first
biotechnology company to clone an endangered species.
Noah promises to be the "poster child" for a whole new generation
of artificially created animals. While hardly controversial at first blush,
the company's efforts raise a host of new questions about the nature of a
human-controlled world and how we may decide to populate it with our human-chosen
creatures. Advanced Cell Technologies is not, of course, in the business (yet)
of cloning rare animals for profit. But, according to Philip Damiani, Ph.D.,
ACT's researcher and principal investigator, the first endangered animal to
be cloned is set to be "the spokesperson for the cloning of endangered
species." While the newswire stories hail cloning of scarce creatures
as a heroic effort in species resuscitation, ACT's ambitions appear more parochial.
Presently, it is concentrating on more self-evidently controversial aspects
of genetic engineering, such as producing human pharmaceuticals in milk, and
fabricating cloned transgenic animals as donors of cells for transplant therapies.
While the thought of saving species from possible extinction pulls at our
heart strings, deeper consideration of the project reveals a darker undercurrent
of subterfuge and caution.
First, the animal being cloned is the gaur, Bos frontalis, the large wild
ox native to the woodlands of rural India. The gaur family still has 30,000+
members, and while it qualifies for endangered status, focusing on other,
more threatened species may be crucial. While land developers increasingly encroach
on its tropical woodland habitat, the animal is poised to be born into a wholly
new environment--the laboratory. To achieve this reproductive feat, ACT scientists
mimicked their protocol for cloning cows. In this case they inserted 42 early
gaur embryos into 32 domestic cows. Eight of the cows became pregnant. Seven
of the pregnancies ended in either spontaneous abortions or had their products
of conception removed for scientific analysis. The eighth and final pregnant
cow is Bessie.
In selectively propagating a genetically unknown "exemplar of the species,"
ACT appears to be neglecting some of the facets of the basic biology of the
animal. After hiring a public relations firm, Noonan-Russo Communications,
ACT drafted articles for the AP Wire and Reuters announcing the gaur would be "born"
at the end of November. Yet, Damiani admitted "we thought because domestic
cows had a 9-month gestation period, Noah would be born in November, but we
just realized gaurs have a 10-month gestation period. So, Noah will be born
at the end of December." (Actually, the new birth date has been pushed
up to January 2, 2001 perhaps to avoid the public relations dead zone between
Christmas and the New Year.)
Little seems known about the social and biological effects of cloning endangered
animals, especially a herd animal like the Gaur. Can the survival of a single
member of a highly social herd animal be considered a bona fide rescue? While
other alternatives clearly exist for protecting the genetic lineage of the
Gaur, for example, obtaining a cell line and keeping it at hand in perpetuity,
the ultimate goal is not to maintain genetic diversity via frozen cell lines.
Rather, the company's goal is to produce a visible entity that embodies the
essential physical reality of the gaur. But what value will the gaur have
if the reasons for its endangered state are not simultaneously addressed?
At issue are the motives and values at stake in creating living organisms
from cloned adult cells or artificially maintained cell lines. Currently,
it is not too difficult for the genetic material of an egg to be supplanted
with foreign DNA. But what will we actually have accomplished when we create
a living, breathing animal? Stepping back from the science, we should be asking
ourselves what such a recreation of a host animal does to our views and beliefs
of nature. Is a cloned animal simply a copy of an original? And if the animals
created are simply copies, then what of the embryonic gaurs which never culminated
in a fully live animal? Why were so many "sacrificed" in the name
of science? Is not each embryo an individual organism in and of itself? Does
Noah's birth negate the other clone deaths? How can researchers justify killing
(or allowing to die) almost 700 endangered animals to obtain but one survivor?
Cloning to Rescue Species
The gaur, like most endangered species, is having difficulty thriving
because its habitat is being diminished. In particular, it is being threatened
by pressure exerted on its territory, through logging and the building of
roads, homes, ranches and factories. If a species were truly to be rescued
through cloning, some animals would need to be re-established in the wild.
But, to introduce an animal back into a stressed ecosystem is questionable
under the best of circumstances, and may even be an act of cruelty.
Consider a more radical thought experiment: the human species is becoming
extinct because endocrine disruption is so widespread that we cannot reproduce
effectively. To reverse our imminent extinction, a cell line is created and
clones are then produced to repopulate our dwindling species. Of course, the
resulting humans would have to be placed back into a habitat which is by definition
unfit for survival. Not only will the clone likely have a quality of life
which is severely compromised, but would not all that we are trying to achieve
become moot if the damaged surroundings hamper its continued existence?
More to the point, non-human animals evolve and acquire adaptations to highly
specific habitats. A diminished habitat would by nature stress such a narrowly
adapted creature. Hence, if gaurs were to be re-introduced into the wild,
the tropical woodland habitat they depend on would first need to be restored.
But regeneration of their natural environment is not nearly as sexy as the
successful propagation of an actual wild oxen. Perhaps we are putting the
ox before the cart.
Another issue is inherent in population decline: the loss of genetic variability.
When the number of animals becomes too small, the population loses a key factor
in genetic diversity, the "polymorphisms" that tend to keep populations
diverse and healthy. A good example of such polymorphisms in human populations
is the plethora of blood groups and hemoglobin types, which among other things,
gives us protection against some forms of malaria.
So, along with restoring the habitat of a particular species, the gaur would
need to be re-introduced as a herd. Think for a moment about a cloned herd
in an artificially reclaimed environment. Even if we were successful, are
we ready to accept a simulated species in a renovated habitat? And how would
we continue to ensure sufficient genetic diversity among its members if we
could not control the normal dominance patterns that usually give a single
male the greatest genetic contribution?
Cloning for Captive Breeding Programs
By the time Noah arrives in this world, the sweep of ethical discussion
may be moot. And, by the time Noah steps out of the flashbulbs, ACT acknowledges
he will never end up in the wild. In fact, in spite of the public relations
effort to twist and pull on our heartfelt hopes of rescuing a dying species,
ACT's spokesperson Damiani acknowledged most of the cloned endangered animals
will end up in zoos. Said Damiani, "We will not need to remove an endangered
animal from the wild for zoos, we will just obtain skin cells from animals
in the wild and clone them."
But should we be creating laboratory animals for public gawking? The idea
of satisfying the sometimes prurient curiosity of the public is more akin
to the ethics of P.T. Barnum than of Albert Schweitzer. Looking at copies
of animals will create a whole new take on the meaning of "a visit to
a zoo." Such a use of cloned "material" highlights the problems
inherent in the exclusive use of animals for our entertainment and possible
education. True, under the best of circumstances, learning about animals can
provide a renewed sense of reverence for nature as we begin to appreciate
their lives. Adult humans may nonetheless be hard pressed to explain to a
wondering child why scientists allowed the animals to be regenerated for zoo
life while allowing all their brethren to succumb in the wild.
We are at a juncture where we can either disregard the considerations of
animal rights through cloning and confining animals, or we can selflessly
begin to curb the very activities that are decimating animals in the wild.
Instead of the millions being spent to clone a single member of a complex
web of animals in danger of extinction, why not restore their habitat? To
do less is to invite the impression of capitalizing upon the extinction of
species to create novel commercial adventures.
Landman, M., Kerley, G. I. H. and Schoeman, D. S. (2008) Relevance of elephant herbivory as a threat to Important Plants in the Addo Elephant National Park, South Africa. Journal of Zoology, 274 (1), pp. 51-58. [Journal article]
Full text not available from this repository.
Although elephants are recognized as keystone species, the mechanisms of their impacts on biodiversity and community structure are rarely identified. In the Addo Elephant National Park (AENP), South Africa, elephant Loxodonta africana herbivory is apparently responsible for a significant reduction in plant richness, especially among the regionally rare and endemic small succulent shrubs and geophytes (Important Plants). We used faecal analysis to investigate the utilization of Important Plants in elephant diet in the AENP. Ninety plant species were identified in the diet. Only 14 of the 77 (c. 18%) Important Plants previously thought particularly vulnerable to elephant browsing occurred in the diet, while at least 6% of species for which there are data were avoided. This refutes the generally held belief that elephant herbivory is the major driver of decline among Important Plants, and emphasizes the likely contribution of other mechanisms (e.g. knock-on effects, trampling, zoochory, etc.) to this phenomenon. The accurate prediction of impacts caused by elephants in the AENP and elsewhere, therefore requires an understanding of these previously marginalized mechanisms. By demonstrating appropriate cause-and-effect relationships between elephants and ecosystem change, we will be able to move beyond assuming that all the observed changes are due to elephant herbivory.
Item Type: Journal article
Faculties and Schools: Faculty of Life and Health Sciences > School of Environmental Sciences
Research Institutes and Groups: Environmental Sciences Research Institute > Coastal Systems
Deposited By: Dr David Schoeman
Deposited On: 09 Mar 2010 14:23
Last Modified: 28 Mar 2012 16:20
As far as studying ball lightning as a scientific phenomenon, scientists have been limited by the inability to recreate these objects in the laboratory. The entirety of data on the subject consists of the thousands of eye-witness accounts of ball lightning which go back as far as the Middle Ages. The 'average' ball lightning is somewhere between the size of a golf ball and a beach ball, lasting about 15 seconds (ranging from 2 to 50 seconds) before it suddenly fades out or explodes. The ball may be spherical or dumb-bell shaped and may pulsate or shine steadily. It can come in various colors but is usually yellowish with brightness of a 100-watt light bulb. It will float around making little or no noise and can scorch wooden objects and has been known to injure or kill people, so it is a considerable source of energy. Its motion is not dictated by wind and usually floats close to the ground, although cases have been documented where it appears in closed rooms or even in airplanes. It will bounce when it hits the ground or comes close to electric fields.
the burning ball of silicon fluff model
Several models exist which describe many of the properties of lightning balls, but few are able to explain all, or even most of them. Generally, it has been suggested that the ball is made up of a cloud of plasma (ionized air) that forms due to the electrical discharge during a thunderstorm. The plasma may be sustained through the absorption of radio waves, but this is unsubstantiated. Other theories suggest that the source of the energy is anything from the flow of current from the ground to the cloud to even focused cosmic ray particles. However, these theories do not adequately explain how ball lightning appears in closed containers such as an airplane.
A theory recently published in the journal Nature suggests that ball lightning is caused by burning balls of silicon nanoparticles, liberated when lightning strikes the ground. This is similar to the process once used to produce semiconductor silicon from sand. For this to occur, the criteria include temperatures of 3000 K (room temperature is around 300 K) and carbon at the site of the lightning strike at least twice as prevalent as silicon dioxide. The authors checked soils from various sites, finding some did have the appropriate ratio of materials. It is also known that the point of a lightning strike can easily reach temperatures above 3000 K. The free silicon cools rapidly after impact into nanoparticles, which then assemble further, forming long chains or even spherical dendritic balls. Calculating the physical properties of a 30-centimeter burning ball of silicon nanostrings, the authors find that the duration, brightness and density all correspond well to the observed behavior of ball lightning in nature.
See: Abrahamson & Dinniss, Nature 403, 519-521 (2000)
The following are an eye-witness accounts of ball lightning:
I saw ball lightning during a thunderstorm in the summer of 1960. I was 16 years old. It was about 9 PM, very dark, and I was sitting with my girlfriend at a picnic table in a pavilion at a public park in upstate New York. The structure was open on three sides and we were sitting with our backs to the closed side. It was raining quite hard. A whitish-yellowish ball, about the size of a tennis ball, appeared on our left, 30 yards away, and its appearance was not directly associated with a lightning strike. The wind was light. The ball was eight feet off the ground and drifting slowly towards the pavilion. As it entered, it dropped abruptly to the wet wood plank floor, passing within three feet of our heads on the way down. It skittered along the floor with a jerky motion (stick-slip), passed out of the structure on the right, rose to a height of six feet, drifted ten yards further, dropped to the ground and extinguished non-explosively. As it passed my head, I felt no heat. Its acoustic emission I likened to that of a freshly struck match. As it skittered on the floor it displayed elastic properties (a physicist would call them resonant vibrating modes). Its luminosity was such that it was not blinding. I estimate that it was like staring at a less than 10-watt bulb. The whole encounter lasted for about 15 seconds. I remember it vividly even today, as all eyewitnesses do, because it was so extraordinary. Not until ten years later, at a seminar on ball lightning, did I realize what I had witnessed.
- Graham K. Huber Naval Research Lab, Washington D.C.
in Nature 403, 487 (2000)
About 16 years ago, when I was a child and lived with my family in Russia, a curious thing happened. My mother and I were sitting in our 13th floor flat, when we heard a sizzling noise and I saw a ball of fire about 5 centimeters in diameter right behind my mother's head. It stayed there a few seconds, then went "pop" like an exploding light bulb and vanished. It was not raining that day, but was cloudy and quite stuffy. The windows were closed at the time of the incident.
- Maria A Bennet, Aberdeen, England
in the NewScientist - last word
My wife and I bought an old house at the foothills of a central mountain range in AZ. It had been a mining shack that had been refurbished, and an elderly couple was living there. The husband related the following story, and there were burn marks on a tree and in the linoleum to back it up. The man was peeling potatoes in the small kitchen when a mountain thunderstorm arrived, as they do in the summer time. He stated that there was a huge clap of thunder that stunned him. Almost immediately there was a blue ball of light that started to ricochet around in the kitchen. He said that he had to lift up his feet so that the ball did not hit him. There was a small wood-burning stove against one wall with a piece of galvanized tin behind it to shield the wooden wall from the heat of a fire. The lightning blasted one of the nails holding the metal sheet out of the wall onto the linoleum and burnt the bent shape of an 8-penny nail into the flooring. The burn mark was still visible when we bought the house. Outside, the damage was greater. The lightning had struck a huge cottonwood tree, leaving a major scar in the wood and bark that is still visible twenty years later. There was a wire clothes line from the tree to the house that was vaporized. The clothes pins were neatly aligned on the ground where they had fallen from the clothes line. It was presumed that the lightning was carried to the wall of the house by the wire clothes line, and then blasted the nail from the wall. How or what the blue ball of energy was that was bouncing around the kitchen is anyone's guess. BUT it fits the description of ball lightning. The man who told me the story was not known to tell tall tales, and he was very convincing when he told the story to me. My father-in-law described "sheet lightning" that he saw in NW Missouri during a major thunderstorm, but that is a different thing.
- Gordon Bradshaw, Mayer, AZ
Conversion Operators (C# Programming Guide)
C# enables programmers to declare conversions on classes or structs so that classes or structs can be converted to and/or from other classes or structs, or basic types. Conversions are defined like operators and are named for the type to which they convert. Either the type of the argument to be converted, or the type of the result of the conversion, but not both, must be the containing type.
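As an illustration, consider a hypothetical Celsius struct (the type and its members are invented for this sketch, not taken from the language documentation). It declares one implicit and one explicit conversion; note that in each operator either the argument type or the result type is the containing type, as the rule above requires:

```csharp
// Hypothetical example type: a temperature convertible to/from double.
public readonly struct Celsius
{
    public double Degrees { get; }
    public Celsius(double degrees) => Degrees = degrees;

    // Implicit conversion: double -> Celsius (no cast needed at the call site).
    public static implicit operator Celsius(double degrees) =>
        new Celsius(degrees);

    // Explicit conversion: Celsius -> double (a cast is required).
    public static explicit operator double(Celsius c) => c.Degrees;
}

class Demo
{
    static void Main()
    {
        Celsius boiling = 100.0;      // uses the implicit conversion
        double d = (double)boiling;   // uses the explicit conversion
        System.Console.WriteLine(d);
    }
}
```

Marking a conversion implicit is conventionally reserved for conversions that cannot lose information; anything lossy or surprising should be explicit so the cast is visible in calling code.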
Luna moths are insects that I think we can all agree to admire. One of the largest moths in the eastern United States, it has a wingspan of nearly 4½ inches. Attracted by porch lights (possibly mistaking them for the moon) they are a welcome visitor to any nature-friendly home or garden.
The first batch of this year’s luna moths has already come and gone. However, if we are lucky, we’ll have a chance to see their kids when they grow up and make their cameo as moths in late July or early August. These second-generation luna moths will lay eggs that will get as far as the pupa stage. They spend the winter as pupae in cocoons, nestled in the leaf litter. If you don’t feel like raking all the leaves in your garden this fall, you now have a great excuse: “I’m saving the luna moths!” These overwintering adolescents will emerge as adults early next summer.
Adult luna moths only live for about a week and do not eat. All the adults aspire to accomplish is to find a mate and/or lay eggs and then expire. The caterpillars feed on a variety of trees, including white birch, sweet gum, hickory, sumac and walnut. The caterpillars look superficially like a tomato hornworm, minus the horn. They are well camouflaged and feed up in trees, so don’t feel bad if you have never seen one.
You can tell male moths from females, since the males have bushier antennae than the females. The males use their antennae to follow the pheromone trail of females. Since they fly in the dark, they need to rely on their sense of smell rather than on vision, as day-flying butterflies do.
Anecdotal evidence indicates that luna moths might have been more common in the past. To my knowledge, long-term records have not been kept for this area, so it’s hard to know. Please feel free to leave a comment if you have seen a change in the luna moth population of your favorite local habitat. Fortunately, luna moths are neither rare nor endangered within the ranges in which they have been historically observed.
Like many of our native species, luna moths are facing some strong headwinds. A small parasitic fly called Compsilura concinnata was introduced from Europe in an attempt to control invasive gypsy moths. The parasitic flies were intentionally released for an 80-year span, starting in 1906. So far, the flies seem to be doing a better job of killing native and charismatic friends, such as luna moths, giant silkmoths and royal walnut moths.
Researchers in a 2003 study set out luna moth larvae in a forest that was not yet invaded by gypsy moths. As many as 60 percent of the luna moth caterpillars were parasitized by the introduced parasitic flies. It is not yet known if the flies have had a significant impact on luna moths in the Lehigh Valley, but it’s worth investigating. These errant parasitic flies illustrate the reason why researchers are now very careful before releasing new and exotic predatory insects into the wild.
Anything we can do to preserve forested areas will help the luna moth. Recent research at the University of Delaware has shown that the luna moth’s very favorite food is American sweetgum followed by black walnut. The researchers also found that they were not very fond of non-native plants. They not only require a forested habitat, but it also matters what is growing in it.
Luna moths were apparently named for the crescent-shaped markings on their wings. I think they would be a good mascot for the new Luna Park at Coney Island. Interestingly, the term "Luna Park" has been equated with amusement parks since 1903. If they adopted my idea, Luna Park would probably run into trademark issues with the makers of Eszopiclone (aka Lunesta), a wildly popular sleeping medication. I think an organism that is most active between the hours of 11:00 p.m. and 2 a.m. is an odd choice for promoting a sleeping pill, but maybe I’m missing something.
This work by Marten Edwards is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.
W and Z bosons
W and Z bosons are some of the least understood elementary particles. Both are bosons, which means that they have integer spin; in fact, each has a spin of 1. Both were found in experiments by the year 1983; before that, they were only theoretical. Together, they are responsible for a force known as the "weak force." The weak force was so named because it was predicted to be much weaker than other forces like electromagnetism. There are two W bosons with different charges: the normal W+, and its antiparticle, the W–. Z bosons are their own antiparticle.
W bosons are named after the weak force that they are responsible for. The origin of the Z boson's name is still uncertain, although it is believed to have been the last particle on the list of particles that the lab which named it was searching for. The weak force is what physicists believe is responsible for the breaking down of some radioactive elements, in the form of Beta decay. In the 1960s and '70s, scientists managed to combine the weak force and electromagnetism, and called the result the electroweak force.
Creation of W and Z bosons
W and Z bosons can be created in high-energy particle collisions; in ordinary matter, W bosons also appear during Beta decay, which is a form of radioactive decay.
Beta Decay
Beta decay occurs when there are a lot of neutrons in an atom. An easy way to think of a neutron is that it is made of one proton and one electron. When there are too many neutrons in one atom nucleus, one neutron will split and form a proton and an electron. The proton will stay where it is, and the electron will be launched out of the atom at incredible speed. This is why Beta radiation is harmful to humans.
The above model is not entirely accurate, as both protons and neutrons are each made of three quarks. (A quark is a very small particle that is believed to be among the smallest particles in existence, and so it is not made of any smaller particles). A proton is made of two up quarks (which each have a charge of +2/3), and one down quark (which has a charge of -1/3). This is why a proton has a charge of +1. A neutron is made of one up quark and two down quarks, which is why its charge is 0. Up quarks and down quarks are two of the six types of quarks. These six types are known in the scientific world as flavours.
Weak force is believed to be able to change the flavour of a quark. For example, when it changes a down quark in a neutron into an up quark, the charge of the neutron becomes +1, since it would have the same arrangement of quarks as a proton. The three-quark neutron with a charge of +1 is no longer a neutron after this, as it fulfills all of the requirements to be a proton. Therefore, Beta decay will cause a neutron to become a proton (along with some other end-products).
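The quark arithmetic in the last two paragraphs is easy to check mechanically. The short Python sketch below (Python is used purely for illustration) sums the fractional quark charges and shows that swapping one down quark for an up quark turns a neutron's charge into a proton's:

```python
from fractions import Fraction

# Electric charges of the two lightest quark flavours,
# in units of the proton charge e.
CHARGE = {"up": Fraction(2, 3), "down": Fraction(-1, 3)}

def total_charge(quarks):
    """Sum the charges of a list of quark flavours."""
    return sum(CHARGE[q] for q in quarks)

proton = ["up", "up", "down"]
neutron = ["up", "down", "down"]
print(total_charge(proton))   # 1
print(total_charge(neutron))  # 0

# Beta decay: the weak force turns one down quark of the neutron
# into an up quark, leaving exactly the quark content of a proton.
after_decay = neutron[:-1] + ["up"]   # replace one down quark
print(total_charge(after_decay))      # 1
```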
W boson decay
When a quark changes flavour, as it does in Beta decay, it releases a W boson. W bosons only last for about 3×10⁻²⁵ seconds, which is why we had not discovered them until less than half a century ago. Surprisingly, W bosons have a mass of about 80 times that of a proton (one proton weighs one atomic mass unit). Keep in mind that the neutron the W boson came from also weighs only about one atomic mass unit. In the quantum world, it is not an extremely uncommon occurrence for a more massive particle to come from a less massive particle, provided it exists only for an extremely short time. This is allowed by the uncertainty principle: the "borrowed" energy, multiplied by the time it is borrowed for, can be no larger than roughly the reduced Planck constant. After the 3×10⁻²⁵ seconds have passed, a W– boson decays into one electron and one antineutrino. Since neutrinos rarely interact with matter, we can ignore them from now on. The electron is propelled out of the atom at a high speed. The proton that was produced by the Beta decay stays in the atom nucleus, and raises the atomic number by one.
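As a rough consistency check (this calculation is our illustration, not part of the original article), the energy-time uncertainty relation links a particle's lifetime to the spread, or "width," of its mass-energy. Plugging in the quoted lifetime gives a width of about 2 GeV, close to the W boson's measured value:

```python
# Back-of-the-envelope check: delta_E * delta_t ~ hbar relates a
# particle's lifetime to the spread ("width") of its mass-energy.
HBAR_GEV_S = 6.582e-25    # reduced Planck constant, in GeV * seconds

lifetime_s = 3e-25        # W boson lifetime quoted in the text
width_gev = HBAR_GEV_S / lifetime_s
print(f"energy width ~ {width_gev:.2f} GeV")   # ~2.19 GeV

# The measured W boson width is about 2.1 GeV, so the quoted
# lifetime is consistent with experiment.
```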
Z boson decay
Z bosons are also predicted in the Standard Model of physics, which successfully predicted the existence of W bosons. Z bosons decay into a fermion and its antiparticle. Fermions are particles, such as electrons and quarks, whose spin comes in odd multiples of half of the reduced Planck constant.
Particles in Physics
- Elementary:
  - Fermions:
    - Quarks: up – down – strange – charm – bottom – top
    - Leptons: electron – muon – tau – neutrinos
  - Bosons:
    - Gauge bosons: photon – W and Z bosons – gluons
- Composite:
  - Hadrons:
    - Baryons: proton – neutron – hyperon
    - Mesons: pion – kaon – J/ψ
  - Atomic nuclei – Atoms – Molecules
- Hypothetical: Higgs boson – Graviton – Tachyon
Destinations: Mountains are prevalent not only on Earth, but are also found on many planets and moons throughout our solar system. Most mountains, as we know them on Earth, were formed by geologic processes caused by the convergence of tectonic plates.
The highest mountains on Earth are the Himalayas, which can readily be seen from space. Some mountains throughout the solar system were formed not from plate tectonic processes but from volcanic activity, specifically lava flows causing upthrusts of rock that form mountain regions. One such example of this type is Tohil Mons, a mountain found on Jupiter's moon Io. Other mountains, such as those on our own Moon or on Mercury, were probably created near the time those bodies formed.
There are still many mysteries to uncover about the nature of mountain formation in our solar system. Recently, on the Cassini-Huygens mission to Saturn, scientists discovered a large equatorial ridge on Saturn's moon Iapetus and active geological processes on the moon Enceladus. As we continue to explore planets and moons, we will undoubtedly discover more ways in which mountains are formed.
Missions: The European spacecraft Corot is scheduled to launch this month. Once in orbit around Earth, Corot is designed to search for habitable planets around other stars.
The Mars Reconnaissance Orbiter is the latest spacecraft to orbit Mars. It is returning many high-resolution images of Mars' surface, including mountains.
Features: There are several features on our website to help you learn more about some of the many mountains and planetary/moon surfaces in our solar system - Big Mountain, Big Landslide on Jupiter's Moon, Io, Volcanism on Io, Titan's Surface Revealed, Europa's Salty Surface, and Uranus, Neptune, and the Mountains of the Moon.
Fast Lesson Finder: K-12 Activities: Search our Fast Lesson Finder to find classroom lessons related to our solar system and beyond. Some activities related to this month's theme include Building Blocks of Planets, Changes Inside Planets, Edible Rocks, Geologic Landforms Seen on Aerial Photos, Lunar Surface and Strange New Planet.
People: Meet Carol Raymond: She assists in leading the Dawn mission to the asteroids Ceres and Vesta. While traditional mountains may not exist on Ceres or Vesta, there might be large, high-altitude features that will pique our interest when we explore these surfaces.
Mission Type: Flyby
Launch Vehicle: Mu-3S-II (No. 1)
Launch Site: Kagoshima Space Center, Japan
Spacecraft Mass: 138.1 kg
Spacecraft Instruments: 1) solar wind ion detector; 2) plasma wave probe and 3) magnetometer
Spacecraft Dimensions: Outer drum: 70 cm high, 140 cm diameter
Spacecraft Power: 1750 solar cells with a 2 A-hr nickel-cadmium battery
Maximum Power: 100 W
S-Band Data Rate: 64 bps at closest approach
Maximum Data Rate: 64 bps
The MS-T5 spacecraft was the first deep-space vehicle launched by any country apart from the Soviet Union and the United States (the two German Helios probes had been launched by NASA). Japan's goal was to launch a single modest probe to fly past Comet Halley. The country's Institute of Space and Astronautical Sciences (ISAS) launched this test spacecraft, nearly identical to Suisei, the actual vehicle launched later, to prove out the technologies and operations of the actual mission. A new Japanese launch vehicle, the Mu-3S-II, propelled the spin-stabilized spacecraft into space. After launch, the spacecraft was renamed "Sakigake," which means "pioneer" in Japanese.
Following two course corrections on 10 January and 14 February 1985, Sakigake was sent on a long-range encounter (about 7.6 million kilometers) with Halley. The spacecraft served as a reference vehicle to permit scientists to eliminate Earth atmospheric and ionospheric contributions to the variations in Giotto's transmissions from within the coma. The spacecraft's closest approach to Halley was at 04:18 UT on 11 March 1986, when it was 6.99 million kilometers from the comet. Sakigake found that the solar wind appeared to be disturbed by the comet at that distance. Previously, it had been thought that the range of a comet's influence on the solar wind was only 1 million km. (Sakigake's near-twin, Suisei, found the range to be only 420,000 km.)
Nearly six years after the Halley encounter, Sakigake flew by Earth on 8 January 1992 at a range of 88,790 kilometers. After two more distant flybys through Earth's magnetic tail (in June 1993 and July 1994), Sakigake maintained weekly contact with the ground until telemetry was lost on 15 November 1995, although the ground continued to receive a beacon signal until all contact was terminated on 7 January 1999.
Future mission planning had included a 23.6 km/s, 10,000 km flyby of Comet P/Honda-Mrkos-Pajdusakova on 3 February 1996 (approaching the nucleus along the tail) some 0.17 AU from the Sun, and a 14 million km passage of Comet P/Giacobini-Zinner on 29 November 1998.
Last time I gave you a taste of what it is like to participate in a major expedition. This time I’m going to explain how we actually find and collect nudibranchs and other sea slugs.
First of all, sea slugs can be found just about anywhere in the world’s oceans from the shallow intertidal down to the deep sea, and from the cold polar regions to the warm tropics. Depending on what type of habitat we are trying to sample, we may use different techniques.
No matter what technique we use, one thing that is particularly important to our lab is to make sure that we leave a minimal impact on the habitats that we sample. This means that if we turn over a rock or a dead coral boulder, we make sure to turn it back to the way that it was. We do this because there are many kinds of animals that live on the bottom of or underneath rocks and rubble (like certain kinds of sponges). If you turn over a rock and don’t put it back the way it was, those animals lose their habitat and may not survive. Because of this, we do our best to leave habitats in the same condition as we found them.
For species that are found in the intertidal, we go out during a low tide and wade around and look for slugs. The lower the tide, the better. When I was at Kings Beach in Queensland, Australia, I surveyed the intertidal by wading and turning rocks. I found this technique very effective for this habitat and most of the slugs that I found at this site were hiding under rocks.
While we occasionally sample in the intertidal, most of our sampling happens subtidally (below the low tide mark). The main technique we use to survey for nudibranchs subtidally is SCUBA diving. In Madang, Papua New Guinea we would do about 2 to 3 dives nearly every day. While diving, we turn over rocks or dead coral boulders. It’s really amazing what you can find living under these! Most of the slugs that live on a coral reef are found hiding under coral rubble. For this reason, I get really excited when I find a dive site with a lot of coral rubble! I will also mention that some slugs are only found at night, so to find these, we SCUBA dive at night. Many of the slugs I study are found at night over a sandy bottom. To find these, you want to survey as much of the bottom as possible during the night dive.
Another technique we may use as an alternative to SCUBA is snorkeling or free-diving. At one of my sites in Australia, I looked for nudibranchs while snorkeling. This is a little bit more difficult than SCUBA for me because I’m not able to hold my breath for that long. This made it challenging to turn larger rocks and requires a lot more energy to be constantly free-diving down to the bottom. However, in some ways snorkeling is more convenient and less expensive than SCUBA diving.
Once we find a slug, we need to get it into a container. This can be a bit tricky because the slugs are very soft, often slippery and usually squishy. It becomes even more challenging if you are working with a very small slug (the size of a grain of rice, or a sesame seed!) and if you have currents in the water. There is nothing worse than a tiny slug floating off in the currents! Typically, I will pick up the slug with my fingers and do my best to place it in a plastic jar underwater. Other people often use plastic bags, but I find the hard containers easier to manage. After the slug is in the jar, I place it in my mesh collecting bag, and go off to find the next slug!
Finding slugs can be quite challenging, but extremely satisfying! Check back in a few weeks to hear about the preservation process!
Project Lab Coordinator and Graduate Student
Department of Invertebrate Zoology and Geology
Chandra has observed hundreds of targets in the last 5 years, ranging from
planets and supernovas up to gargantuan galaxy clusters, and has generated
over a hundred press releases and beautiful images. The most popular topics
have been active galaxies, black holes, and neutron stars and mass accreting binaries.
Because active galaxies are incredibly bright galaxies
powered by supermassive black holes, top billing should be given to black
holes as the most popular object observed by Chandra. The most spectacular
Chandra images have been those of supernova remnants, the debris left over
after massive stars have exploded. To complete the Chandra quest, you'll need to be familiar with 20 Chandra observations.
How to Play:
Figure out which Chandra images are featured in the puzzle shown
below (clicking on the puzzle image will bring up a full-sized
version for better visibility).
To show the numbers for each piece of the puzzle, click on the "Show Numbers" link beneath the puzzle. When the numbers are shown, the link will change to "Hide Numbers".
Correctly fill in the form below the image, assigning the
appropriate field (1, 2, 3, etc) to an object name listed in the
pull-down menu next to it. There are more than 20 objects listed
in the pull-down menus; not all will be used.
When you're ready to send your answers, press "Submit". (If you'd like to start over, press the "Reset" button.) Your
answers will be checked and you will be notified whether or not
you correctly named all 20 objects.
The official contest ended on September 24, 2004, but the activity can still be played for enjoyment.
All of the Chandra images included in this activity can be found within the pages of the Chandra Photo Album. Most of the images in the collage are the primary image featured in the photo album page, but some can be found on the "More Images" pages within the object's section. A few of these images are multi-wavelength composites featuring Chandra data as well as data from other telescopes.
The following Figure illustrates the principle of a Daniell cell in which copper and zinc metals are immersed in solutions of their respective sulfates.
Schematic of a Daniell cell.
The Daniell cell was the first truly practical and reliable electric battery that supported many nineteenth century electrical innovations such as the telegraph. In the process of the reaction, electrons can be transferred from the corroding zinc to the copper through an electrically conducting path as a useful electric current. Zinc more readily loses electrons than copper, so placing zinc and copper metal in solutions of their salts can cause electrons to flow through an external wire which leads from the zinc to the copper.
The difference in the susceptibility of two metals to corrode can often cause a situation that is called galvanic corrosion named after Luigi Galvani, the discoverer of the effect. The purpose of the separator shown in the previous Figure is to keep each metal in contact with its own soluble sulfates, a technical point that is critical in order to keep the voltage of a Daniell cell relatively constant. The same goal can be achieved by using a salt bridge between two different beakers as shown in the following Figure.
Schematic of a Daniell cell with a salt bridge
The salt bridge, in that case, provides the electrolytic path that is necessary to complete an electrochemical cell circuit. This situation is common in natural corrosion cells where the environment serves as the electrolyte that completes the corrosion cell. The conductivity of an aqueous environment such as soils, concrete, or naturals waters has often been related to its corrosivity.
The short-hand cell notation shown in the following equation is valid for both Daniell cell configurations. Such a description is often used to simplify textual reference to such cells.

(-) Zn / ZnSO4 (Conc1) // CuSO4 (Conc2) / Cu (+)

Conc1 and Conc2 in this equation describe, respectively, the concentration of zinc sulfate and copper sulfate, which may differ in the two half-cells, while the two slanted bars (//) indicate the presence of a separator. The same equation also identifies the zinc electrode as the anode, which is negative in the case of a spontaneous reaction, and the copper cathode as positive.
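To make the role of the two concentrations concrete, the Nernst equation (standard electrochemistry, though not derived on this page) gives the cell potential as a function of Conc1 and Conc2. The sketch below assumes the standard Daniell cell potential of 1.10 V and room temperature; all constant values are our own additions:

```python
import math

# Constants (values assumed here; the page itself does not list them).
R = 8.314          # gas constant, J/(mol*K)
F = 96485.0        # Faraday constant, C/mol
T = 298.15         # room temperature, K
E_STANDARD = 1.10  # standard Daniell cell potential, V

def daniell_potential(conc_zn, conc_cu, n=2):
    """Cell potential from the Nernst equation,
    E = E0 - (R*T)/(n*F) * ln([Zn2+]/[Cu2+]),
    for the overall reaction Zn + Cu2+ -> Zn2+ + Cu (n = 2 electrons)."""
    q = conc_zn / conc_cu
    return E_STANDARD - (R * T) / (n * F) * math.log(q)

# Equal concentrations: the cell sits at its standard potential.
print(round(daniell_potential(1.0, 1.0), 3))   # 1.1

# Ten times more zinc sulfate than copper sulfate lowers the voltage a
# little, which is why keeping each salt in its own half-cell matters
# for a relatively constant voltage.
print(round(daniell_potential(1.0, 0.1), 3))   # 1.07
```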
Why is a separator commonly used between the anodic and cathodic half cells of a Daniell cell?
John Puopolo is, with Sandy Squires, the author of the F# Survival Guide, on which this article is based. The F# Survival Guide covers all of the essential elements of functional programming and the F# language.
There are many ways to examine problems and model their solutions. In software, we tend to look at problems through the lens of our experiences and the tools that we have at our disposal. In other words, our experiences and the tools we know influence how we approach solving problems. For example, in the days of structured programming, we solved many problems using procedural decomposition and subroutines, focusing on internal cohesion and loose coupling. Building on these successful techniques, we transitioned to modeling solutions using objects and object-oriented techniques, further encapsulating imperative code and its data.
While object-oriented design and object-oriented programming will continue to play a significant role in years to come, I believe we are facing challenges that require new ways of thinking and new ways of implementing solutions. For example, massive computing power via multicore architectures and cloud computing will make developing concurrent systems and parallel algorithms a virtual necessity. Functional programming provides us with many of the tools and techniques we need to make this a reality. Additionally, functional programming gives us new ways of developing scalable systems that are easy to debug and test.
What is Functional Programming?
Functional programming is a specific way to look at problems and model their solutions. Pragmatically, functional programming is a coding style that exhibits the following characteristics:
- Power and flexibility. We can solve many general real-world problems using functional constructs
- Simplicity. Most functional programs exhibit a small set of keywords and concise syntax for expressing concepts
- Suitable for parallel processing. Via immutable values and operators, functional programs lend themselves to asynchronous and parallel processing
Functional programming treats computation, e.g., running a program or solving a numeric calculation, as the evaluation of functions. Not surprisingly, in dealing with functional programming, we will use the term function quite often and in many contexts. So, before we continue, let's make sure we understand what a function is:
A function is fundamentally a transformation. It transforms one or more inputs into exactly one output.
An important property of pure functions is that they yield no side effects. This means that the same inputs will always yield the same outputs, and that the inputs are not changed as a result of the function. This idea of "no side effects" is one (of several) that makes functional programming particularly attractive when writing multi-process and multithreaded systems. The key concepts that make functional programming attractive for solving certain types of problems are described in the F# Survival Guide.
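The distinction is easy to see in code. The sketch below uses Python rather than F#, purely so the side effect is explicit; the same contrast can be written in any language:

```python
# A pure function: the result depends only on the inputs, and nothing
# outside the function changes. Same inputs, same output, every time.
def add(a, b):
    return a + b

# An impure function: it reads and mutates state outside itself, so two
# calls with the same argument give different answers.
total = 0
def add_to_total(x):
    global total
    total += x
    return total

print(add(2, 3), add(2, 3))              # 5 5  (repeatable)
print(add_to_total(2), add_to_total(2))  # 2 4  (side effect changes the answer)
```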
Some programming languages are "purely functional languages." This means that they follow strict functional rules, e.g., no imperative looping constructs and no implicit way to introduce state changes. For example, Haskell, a pure functional language, disallows state changes and side effects altogether. Programs written in pure functional languages are almost solely made up of functions that accept arguments and return values. Unlike imperative and object-oriented languages, they allow no side effects and use recursion instead of loops for iteration. Languages like Haskell are often criticized as being too extreme in not allowing side effects, giving them limited penetration in mainstream development environments. Operations such as reading a file become awkward to code in a language where no side effects are allowed. In contrast, impure, multi-paradigm languages such as F#, a strongly typed, first-class .NET programming language designed by Don Syme and others at Microsoft Research, discourage side effects and imperative constructs, but make them available alongside their functional brethren.
From the abstract mathematical theory of functions comes a variety of interesting concepts that have broad application and use in functional computing. For example, functions can take other functions as arguments, and functions can return other functions as results. This ability results in flexible and powerful mechanisms that enable us to implement ideas simply and elegantly -- ideas that are difficult to express in non-functional languages.
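As a minimal illustration of functions taking and returning other functions (written in Python for brevity; F# expresses the same ideas natively):

```python
# compose and twice are higher-order functions: they accept functions
# as arguments and build new functions as their return values.
def compose(f, g):
    """Return a new function that applies g first, then f."""
    return lambda x: f(g(x))

def twice(f):
    """Return a function that applies f two times."""
    return compose(f, f)

increment = lambda x: x + 1
add_two = twice(increment)        # a function built from another function

print(add_two(5))                 # 7
print(compose(str, add_two)(5))   # 7, but as a string
```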