December 27, 2011
In Greek mythology, Echidna was half snake and half woman, and she was the mother of all monsters. The animal echidna, with its stocky body covered in defensive spines, doesn’t look much like a monster, but as a type of mammal called a monotreme, it does share features with both snakes and humans. Like reptiles, echidnas lay eggs–just one a year–but they keep that egg and the resulting baby, called a puggle, in a pouch, like many marsupials do. And like all mammals, that baby will lap up milk until it grows old enough to eat solid food.
Also known as “spiny anteaters,” echidnas come in two varieties. The short-beaked echidna (Tachyglossus aculeatus) lives throughout Australia and New Guinea and is well adapted to a wide range of habitats, including deserts and rain forests. Its long-beaked cousin (Zaglossus bruijni), however, is found only in the tropical rain forests of New Guinea. These rare animals are officially endangered, their numbers brought low because of land clearing and hunting made easier with dogs and guns–the people of New Guinea consider the echidna, roasted over the coals of a fire, a delicacy.
The first western person to encounter an echidna and write about it was William Bligh, infamous captain of the Bounty. In 1792, his ship stopped in Tasmania on its way to Tahiti. On February 7 he wrote:
An animal shot at Adventure Bay. It had a Beak like a Duck – a thick brown coat of Hair, through which the points of numerous Quills of an Inch long projected these very sharp – It was 14 inches long & walked about on 2 legs. Has very small Eyes & five claws on each foot – Its mouth has a small opening at the end of the Bill & had a very small tongue.
The ship’s officer, George Tobin, who shot the poor animal reported: “The animal was roasted and found of a delicate flavour.”
Echidnas are as weird as Bligh reported all those years ago. The animal uses its snout, or “beak,” to unearth termites, ants and worms that it laps up with its long tongue. An echidna has no teeth, though, so it has to use its tongue to grind its food against the roof of its mouth, turning it into a paste it can swallow.
An echidna isn’t good at running. Its short rear legs point backwards to help it dig, and an extra-long claw on one toe lets it clean between its spines. If an echidna encounters a predator, it won’t run away or fight. Instead, it will curl into a ball, sharp spines pointing out, sometimes wedging itself into a space beneath a rock or burrowing into the soil to escape enemies such as dogs and eagles.
The echidna isn’t the world’s only monotreme. Do you know the other?
Scientific name: Vanessa atalanta
Brown/black wings with red bands and white spots near the tips of forewings. Undersides dark and mottled.
A large, strong-flying butterfly that is common in gardens. This familiar and distinctive insect may be found anywhere in Britain and Ireland, in all habitat types.
Starting each spring and continuing through the summer there are northward migrations, which are variable in extent and timing, from North Africa and continental Europe. The immigrant females lay eggs and consequently there is an emergence of fresh butterflies, from about July onwards. They continue flying into October or November and are typically seen nectaring on garden buddleias or flowering Ivy and on rotting fruit.
There is an indication that numbers have increased in recent years and that overwintering has occurred in the far south of England.
Size and Family
- Family – Nymphalids
- Large Sized
- Wing Span Range (male to female) - 67-72mm
- UK BAP status: Not assessed
- Butterfly Conservation priority: Low
- European status: Not assessed
In Britain and Ireland the most important and widely available larval foodplant is Common Nettle (Urtica dioica). However Small Nettle (U. urens) and the related species, Pellitory-of-the-wall (Parietaria judaica) and Hop (Humulus lupulus) may also be used.
- Countries – England, Wales, Ireland and Scotland
- Common throughout Britain and Ireland
- Distribution Trend Since 1970s = Britain: +25%
Can be found in almost any habitat from gardens to sea-shores and from town centres to the top of mountains!
- Brownfields for butterflies
- Butterflies and farmland
- Farmland Butterflies ID chart
- Gardening for Butterflies and Moths
- Butterflies in towns and cities
Cross Posted from ScienceDaily (July 6, 2011)
Decreasing autumn and winter rainfall over southern Australia has been attributed to a 50-year decrease in the average intensity of storms in the region — a trend which is forecast to continue for another 50 years.
“The drop in winter and autumn rainfall observed across southern Australia is due to a large downturn in the intensity of storm formations over at least the last three decades compared with the previous three decades, and these effects have become more pronounced with time,” Dr Frederiksen said.
“Our recent work on climate model projections suggests a continuation of these trends over the next 50 years.”
Dr Frederiksen’s address was based on recent CSIRO and Bureau of Meteorology research that has just been published in the International Journal of Climate Change: Impacts and Responses.
The research, based on observations and climate modelling, centres on the changes in southern Australian winter rainfall linked to atmospheric circulation changes that are directly associated with storm formation, and particularly rain bearing lows and frontal systems crossing southern Australia.
The most important circulation feature associated with winter storm formation is the strength of the sub-tropical jet stream. For example, winter storms give south-west Western Australia much of its rain. Between the 20-year periods 1949 to 1968 and 1975 to 1994 south-west WA rainfall reduced by 20 per cent. In south-east Australia, there were reductions of 10 per cent.
“Our research has identified the historic relationship between the reduction in the intensity of storms, the southward shift in storm tracks, changing atmospheric temperatures and reductions in mid-latitude vertical wind shear affecting rainfall.” Vertical wind shear is the change in the westerly winds with height.
“We expect a continuation of these trends as atmospheric temperatures rise based on projections from climate models forced by increasing carbon dioxide concentrations.
“Trends during the 21st Century are likely to be similar to those observed during the second half of the 20th Century, when we saw substantial declines in seasonal rainfall across parts of southern Australia.
“Indeed, reductions in projected southern Australian rainfall during the 21st Century, particularly over south-west WA, may be as much as, or larger than, those seen in recent decades,” Dr Frederiksen said.
The research results from collaboration between the Bureau of Meteorology’s Dr Carsten Frederiksen and Janice Sisson, and CSIRO’s Dr Jorgen Frederiksen and Stacey Osbrough. It was conducted for the Australian Climate Change Science Program, funded through the Department of Climate Change and Energy Efficiency, and for the Western Australian Department of Environment and Conservation, under the Indian Ocean Climate Initiative.
The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by CSIRO Australia.
- Carsten Segerlund Frederiksen, Jorgen Segerlund Frederiksen, Janice Maria Sisson and Stacey Lee Osbrough. Changes and Projections in Australian Winter Rainfall and Circulation: Anthropogenic Forcing and Internal Variability. International Journal of Climate Change: Impacts and Responses, Volume 2, Issue 3, pp. 143-162 [link]
What are Messier's?
2003-Dec-20, 05:43 PM
I am very new to astronomy. My question is: what is the exact definition of a Messier object and a nebula? Are they the same thing?
Thanks in Advance.
2003-Dec-20, 06:24 PM
A Messier object is a deep-sky object, including nebulae, star clusters, and galaxies, so a nebula can be a Messier object, as you can read in the article above (site).
(I guess I was wrong again :) , correct me if there is a mistake :) )
2003-Dec-23, 03:48 AM
Charles Messier was an 18th century comet hunter who got fed up with deep sky "faint fuzzys" interfering what he felt was his real work of finding comets.
So he made a catalogue of these objects. He did so without any particular system or framework, just numbered them pretty much at random I think, so it is a mix of a whole bunch of objects such as galaxies, open clusters, nebulae and so on, and the M number alone tells you little about what sort of object it is, or for that matter even where to find it or how bright it is.
Ironically who now remembers Messier's comet hunting successes? Hardly anyone. But everyone who looks at the sky sooner or later comes across his catalogue of objects.
If you live in the Northern Hemisphere you can observe all the Messier objects - 110 of them - in one night; it's called a Messier Marathon. I'd imagine you would be pretty tired at the end of it!
Functions to determine the current geolocation of the device.
This file defines the geolocation service, which provides functions for reading the device's geolocation. To read the geolocation data, the application must have the read_geolocation capability. To grant an application the read_geolocation capability, the bar-descriptor.xml file in the application's project must contain the line "<action>read_geolocation</action>".
Some of these geolocation functions are designed to return boolean values that indicate whether their associated attributes are valid. For example, geolocation_event_is_altitude_valid() indicates whether the altitude from a GEOLOCATION_INFO event is valid.
In this context, a valid attribute means that the value of the attribute was included in the last update from the geolocation system. For example, if the device cannot obtain a GPS fix, but has Wi-Fi connectivity, the geolocation system will report latitude, longitude, and accuracy. The system will not provide values for any other attributes (such as altitude, heading, and so on), and these attributes are marked as not valid. This means that the validity functions for these attributes will return false.
Subsequently, if the device obtains a GPS fix, the geolocation system will report values for all attributes, and all attributes are marked as valid. This means that the validity functions for these attributes will return true. If the GPS fix is lost, the attributes other than latitude, longitude, and accuracy are marked as not valid again.
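The attribute/validity pairing described above can be sketched with a small model. To be clear, this is not the BlackBerry 10 C API (the class and method names below are invented for illustration); it only models the idea that an attribute is valid exactly when it appeared in the last update.

```python
class GeolocationEvent:
    """Toy model: attributes are valid only if the last update included them."""

    def __init__(self, **attrs):
        self._attrs = attrs          # only attributes reported in this update

    def is_valid(self, name):
        # Mirrors the role of checks like geolocation_event_is_altitude_valid()
        return name in self._attrs

    def get(self, name):
        if not self.is_valid(name):
            raise ValueError(f"{name} was not in the last update")
        return self._attrs[name]


# Wi-Fi-only fix: only latitude, longitude and accuracy are reported.
wifi_fix = GeolocationEvent(latitude=52.25, longitude=-7.11, accuracy=40.0)
print(wifi_fix.is_valid("accuracy"))   # True
print(wifi_fix.is_valid("altitude"))   # False -- no GPS fix, so not valid

# GPS fix: all attributes reported, so all are valid.
gps_fix = GeolocationEvent(latitude=52.25, longitude=-7.11, accuracy=5.0,
                           altitude=120.0, heading=270.0)
print(gps_fix.is_valid("altitude"))    # True
```

Reading an attribute only after its validity check, as the real API encourages, avoids acting on stale or absent data.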
GAMMA RAY ERUPTION IN THE CRAB NEBULA May 11, 2011Posted by jcconwell in Astronomy, Neutron Stars.
Tags: Crab Nebula, Gamma Ray Eruption, neutron star
add a comment
WASHINGTON (NASA) — The famous Crab Nebula supernova remnant has erupted in an enormous flare five times more powerful than any flare previously seen from the object. On April 12, NASA’s Fermi Gamma-ray Space Telescope first detected the outburst, which lasted six days.
The nebula is the wreckage of an exploded star that emitted light which reached Earth in the year 1054. It is located 6,500 light-years away in the constellation Taurus. At the heart of an expanding gas cloud lies what is left of the original star’s core, a superdense neutron star that spins 30 times a second. With each rotation, the star swings intense beams of radiation toward Earth, creating the pulsed emission characteristic of spinning neutron stars (also known as pulsars).
Apart from these pulses, astrophysicists believed the Crab Nebula was a virtually constant source of high-energy radiation. But in January, scientists associated with several orbiting observatories, including NASA’s Fermi, Swift and Rossi X-ray Timing Explorer, reported long-term brightness changes at X-ray energies.
Since 2009, Fermi and the Italian Space Agency’s AGILE satellite have detected several short-lived gamma-ray flares at energies greater than 100 million electron volts (eV) — hundreds of times higher than the nebula’s observed X-ray variations. For comparison, visible light has energies between 2 and 3 eV.
On April 12, Fermi’s Large Area Telescope (LAT), and later AGILE, detected a flare that grew about 30 times more energetic than the nebula’s normal gamma-ray output and about five times more powerful than previous outbursts. On April 16, an even brighter flare erupted, but within a couple of days, the unusual activity completely faded out.
“These superflares are the most intense outbursts we’ve seen to date, and they are all extremely puzzling events,” said Alice Harding at NASA’s Goddard Space Flight Center in Greenbelt, Md. “We think they are caused by sudden rearrangements of the magnetic field not far from the neutron star, but exactly where that’s happening remains a mystery.”
Extreme Universe: The Most Massive Neutron Star October 27, 2010Posted by jcconwell in Astronomy, Extreme Universe, General Relativity, Neutron Stars.
Tags: most massive neutron star, neutron star, PSR J1614-2230
add a comment
Using the National Science Foundation’s Green Bank Telescope , astronomers have discovered the most massive neutron star ever, this discovery will offer profound insight on the limits of neutron stars and the nature of matter under such extreme conditions.
“This neutron star is twice as massive as our Sun. This is surprising, and that much mass means that several theoretical models for the internal composition of neutron stars now are ruled out,” said Paul Demorest, of the National Radio Astronomy Observatory (NRAO). “This mass measurement also has implications for our understanding of all matter at extremely high densities and many details of nuclear physics,” he added.
The neutron star, called PSR J1614-2230, contains twice the mass of the Sun compressed into a pulsar less than 20 kilometers across. It is estimated that a cubic inch of material from the star could weigh more than 10 billion tons. I have two videos below with more details for you.
The first is about the Discovery
The second is about the Instruments
New EINSTEIN@HOME effort launched March 25, 2009Posted by jcconwell in Astronomy, Neutron Stars.
Tags: Astronomy, neutron star
add a comment
Einstein@Home, based at the University of Wisconsin–Milwaukee (UWM) and the Albert Einstein Institute (AEI) in Germany, is one of the world’s largest public volunteer distributed computing projects. More than 200,000 people have signed up for the project and donated time on their computers to search gravitational wave data for signals from unknown pulsars.
Today, Prof. Bruce Allen, Director of the Einstein@Home project, and Prof. Jim Cordes, of Cornell University and Chair of the Arecibo PALFA Consortium, announced that the Einstein@Home project is beginning to analyze data taken by the PALFA Consortium at the Arecibo Observatory in Puerto Rico. The Arecibo Observatory is the largest single-aperture radio telescope on the planet and is used for studies of pulsars, galaxies, and the Earth’s atmosphere. Using new methods developed at the AEI, Einstein@Home will search Arecibo radio data to find binary systems consisting of the most extreme objects in the universe: a spinning neutron star orbiting another neutron star or a black hole. Current searches of radio data lose sensitivity for orbital periods shorter than about 50 minutes. But the enormous computational capabilities of the Einstein@Home project (equivalent to tens of thousands of computers) make it possible to detect pulsars in binary systems with orbital periods as short as 11 minutes.
“Discovery of a pulsar orbiting a neutron star or black hole, with a sub-hour orbital period, would provide tremendous opportunities to test General Relativity and to estimate how often such binaries merge,” said Cordes. The mergers of such systems are among the rarest and most spectacular events in the universe. They emit bursts of gravitational waves that current detectors might be able to detect, and they are also thought to emit bursts of gamma rays just before the merged stars collapse to form a black hole. Cordes added: “The Einstein@Home computing resources are a perfect complement to the data management systems at the Cornell Center for Advanced Computing and the other PALFA institutions.”
Extreme Universe: Magnetic Fields and Magnetars March 12, 2009Posted by jcconwell in Astronomy, Extreme Universe, Gamma Ray Bursts, Neutron Stars.
Tags: Astronomy, Gamma Ray Burst, magnetar, neutron star
1 comment so far
Neutron stars are extreme to begin with, but magnetars add a whole new level of extreme to these exotic objects. Magnetars, as the name implies, are neutron stars with ultra-high magnetic fields; as a matter of fact, the most extreme magnetic fields ever found in the universe!
There are about 15 magnetars known; they are all examples of a class of objects called “soft gamma repeaters”. The most magnetic one, and the most magnetized object in the known universe, is SGR 1806-20. The magnetic field of this magnetar is estimated to be about 2 x 10^11 Teslas or 2 x 10^15 gauss, one Tesla being equal to 10,000 gauss.
Now, to give you some sense of how big this is, the Earth’s magnetic field is about 1/2 gauss or 0.00005 Tesla. The magnet in a hospital’s MRI is about 3.2 Tesla or 32,000 gauss, and the largest sustained magnetic field created in a lab is about 40 Tesla.
So we’re talking about magnetic fields 1000 trillion times bigger than the Earth’s field. Very weird things can happen with fields this large. One thing that’s interesting is how much energy is stored in such a field. So let’s break out an equation from physics and use an example I did in my electricity & magnetism class last week. If you look it up, you’ll find the energy per cubic meter, or energy density, of a magnetic field is given by:
u = B^2 / (2 μ0)
u is the energy density given in Joules per cubic meter. A Joule is the energy you use to lift a kilogram about 10 centimeters off the ground.
B is the strength of the magnetic field given in Teslas, and μ0 is a constant that has a value of 4π x 10^-7 (it has units, but we’ll ignore them). Using a field of B = 2 x 10^11 Teslas, the most powerful magnetar, we will get a huge number…
1.6 x 10^28 Joules/(cubic meter)
or every cubic meter contains this amount of energy. To put this in context, the largest hydrogen bombs have a yield of 20 Megatons of TNT, which is about 10^17 Joules of energy. So each cubic meter of magnetic field has the stored energy of 160,000,000,000 (160 billion) 20-Megaton bombs.
Since we’re having so much fun, let’s think about it this way. Einstein showed mass and energy are equivalent, so how much mass would one cubic meter of this HUGE magnetic field have? Well…
or m = E / c^2 = 1.6 x 10^28 Joules / (3 x 10^8 m/s)^2 = 1.78 x 10^11 kilograms
Each cubic centimeter of magnetic field would have a mass of 178 metric tons!!! If you multiply this by the number of cubic meters in the magnetar, about 40 trillion, assuming the whole neutron star is magnetized, you get a lot of magnetic energy stored in the magnetar.
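The arithmetic above is easy to reproduce. Here is a quick sketch using the constants quoted in the text (with the unrounded value of B^2/2μ0 it lands on about 177 metric tons per cubic centimeter, matching the rounded 178 above):

```python
import math

# Energy density of a magnetic field: u = B^2 / (2 * mu0)
mu0 = 4 * math.pi * 1e-7      # vacuum permeability, T*m/A
B = 2.0e11                    # SGR 1806-20 field estimate, in Tesla
c = 3.0e8                     # speed of light, m/s

u = B**2 / (2 * mu0)          # energy density, Joules per cubic meter
bombs_per_m3 = u / 1e17       # 20-Megaton H-bombs (~1e17 J each) per m^3
mass_per_m3 = u / c**2        # E = mc^2 mass equivalent, kg per m^3
tons_per_cm3 = mass_per_m3 / 1e6 / 1e3   # kg/m^3 -> metric tons per cm^3

print(f"energy density: {u:.2e} J/m^3")        # about 1.6e28
print(f"bombs per m^3:  {bombs_per_m3:.1e}")   # about 1.6e11 (160 billion)
print(f"tons per cm^3:  {tons_per_cm3:.0f}")   # about 177
```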
To give you an idea of what a small amount of this energy would do, consider the events of December 27, 2004. On that day the magnetar we’ve been using as an example, SGR 1806-20, underwent a “superflare” that irradiated Earth with more total energy than a powerful solar flare. Yet this object is an estimated 50,000 light-years away in Sagittarius. During that flicker of time it outshone the full Moon by a factor of two. The gamma rays struck the ionosphere and briefly expanded it with extra ionization. Assuming that the distance estimate is accurate, the magnetar must have let loose as much energy as the Sun generates in 250,000 years.
[Editor's note: Spoiler alert: "El Niño Modoki (Japanese for “similar but different”) triggers more landfalling storms in the Gulf of Mexico and the western Caribbean than normal, and more tropical storms and hurricanes in the Atlantic than El Niño does. Another difference: Modoki’s precipitation patterns are the reverse of El Niño’s—making the American West, for instance, drier rather than wetter."]
Republished from National Geographic Magazine.
It used to be simpler. Whenever the surface waters of the equatorial Pacific turned warmer than normal in summer, climatologists would expect an El Niño year, then forecast when and where droughts, floods, and hurricanes might occur. But that was before a study by Georgia Tech scientists, led by Hye-Mi Kim, deciphered the effects of another pattern in which high temperatures are confined to the central Pacific (Click this link to expand the graphic). Now the already difficult field of atmospheric forecasting has become even trickier.
See also the Dr. Math FAQ: 3D and higher
Browse Middle School Two-Dimensional Geometry
Stars indicate particularly interesting answers or good places to begin browsing.
Selected answers to common questions:
Pythagorean theorem proofs.
- Area and Perimeter [05/01/2001]
I do not understand area and perimeter - can you explain them?
- Area and Volume [04/10/2002]
I cannot figure out how to do volume.
- Explementary Angles [05/29/2003]
What do we call two angles that add up to 360 degrees?
- Find Equation for Two Points [12/12/2002]
I want to know how to find an equation for two given coordinate points.
- Finding Areas using Unit Conversions [3/19/1996]
How do you calculate area, expressed in English units? For example: how
do you calculate the area of a rectangle, where one side is, say, 2
miles, 3 furlongs, 4 yards, 2 feet and 5 inches...?
- How to Measure Angles [04/19/1998]
How do you measure angles? What are degrees and radians?
- Measuring Angles with a Protractor [02/28/2002]
I want to know how to measure acute, reflex, and obtuse angles with a protractor.
- Painting Cubes [01/06/2003]
How many unit cubes would be needed to create a cube with a side
length of 3 units? If you painted the larger cube and broke it up
into unit cubes, how many faces of each unit cube would be painted?
- Remembering Area Formulas [12/23/2001]
Is there a good way to help me memorize the formulas for areas of different shapes?
- Turning a Perimeter into a Scale Factor [02/17/2003]
Perimeter and area ratios of similar figures are given. Find each scale factor.
- Using a Protractor [06/28/1998]
How do you construct two parallel lines and intersect them, making a transversal?
- A 14-Sided Polygon [7/28/1996]
What is the proper name for a 14-sided figure?
- 2 Square Feet vs. 2 Feet Square [11/22/1998]
What is the difference between 2 square feet and 2 feet square?
- About Basic Geometry [10/14/1998]
Who developed basic geometry? What is it used for? Who uses it?
- Abraham Kaestner and Euclid's Fifth (Parallel) Postulate [12/02/1996]
Where can I find information on Abraham Kaestner?
- Adjacent Angles [09/19/2001]
What is the definition of adjacent angles?
- The Algebra of Complements and Supplements [01/25/1999]
What are complements and supplements? How do you translate these concepts into algebraic expressions?
- Alternate and Corresponding Angles [10/21/1996]
Please explain corresponding and alternate angles.
- Alternate Exterior Angles [06/03/2002]
What is an 'alternate exterior angle'?
- Angles: Acute, Right, Obtuse, Reflex [06/06/2001]
How can I remember what the names of angles mean?
- Angles as Turns [05/29/2003]
How can angles be negative?
- Angles Greater than 360 Degrees [01/01/1999]
We know the definitions of acute, obtuse, and reflex angles, but we were
debating what kind of angle a 425 degree angle is.
- Angle, Side Length of a Triangle [9/4/1996]
What is the relation between the angles and side lengths of a triangle?
- Angles in a Diagram [10/14/1998]
In a diagram with perpendicular, parallel, and transversal lines, how can
you find the measure of the angles given the measure of one angle?
- Angles of Stars [08/18/1997]
What are the interior and external angles of stars built on regular
pentagons and octagons?
- Area and Perimeter [10/4/1995]
I have a problem with finding area and perimeter - can you help me?
- Area and Perimeter: Mowing the Lawn [6/1/1996]
How many circuits are necessary to cut half the lawn?
- Area Larger Than Perimeter? [04/15/2002]
Can the area of a shape be larger than the perimeter?
- Are Angles Dimensionless? [08/31/2003]
If you look at the dimensions in the equation arc length = r*theta, it
appears that angles must be dimensionless. But this can't be right.
Or can it?
- Area of a Circle with Radius less than 1 [02/18/2002]
If the radius is less than 1 it just gets smaller and you get a smaller area.
- Area of an Irregular Polygon [03/29/2001]
How can I find the area of an irregular polygon? What about a polygon
made out of rectangles?
- Area of an L [07/10/2003]
How do you find the area of an 'L' shaped object?
- Area of a Rectangle Outside a Rectangle [6/11/1996]
Find the area of the concrete border of a rectangular swimming pool.
- Areas of Figures Broken into Rectangles [10/17/2001]
Calculate the area of each figure. First divide the figure into
rectangles and squares...
- Area, Surface Area, and Volume: How to Tell One Formula from Another [01/18/1999]
Unit dimensions -- and even an idea from calculus -- can tell you which
formula you're using.
- Area: Triangle vs. Rectangle [4/30/1996]
Why do you have to use a different formula to get the area of a triangle
than a rectangle or square?
- Area vs. Perimeter of Rectangles [03/19/2000]
Can you explain how two rectangles with exactly the same perimeter can
enclose different areas?
- Arrange 7 Points in a Plane... [10/05/1998]
Arrange 7 points in a plane so that if any three are chosen, at least 2
of them will be a unit distance apart.
- Building Two Column Proofs [09/12/1998]
We just started learning proofs, and I don't understand how to figure out
the ordering. Can you explain?
- Circle Area and Square Units [11/12/1997]
Is pi metres squared the same as 10,000 pi centimetres squared? Does a
square with sides of 10m have an area of 10m squared or 100 square metres
- or are these the same?
Solve the simultaneous congruences
2x ≡ 1 (mod 7)
x ≡ 3 (mod 5)
x ≡ 3 (mod 8)
(Those are meant to be congruence signs, not equal signs.)
I would do the first one, but I'm out of time, gotta get ready for work, and wouldn't want to steer you wrong.
For the second one, you can see that 3 = 5*0 + 3, so when x = 3, it is congruent to 3 (mod 5). So find the next in the progression by just adding 5, so x = 3 + 5n for any integer n (this creates the series ..., 3, 8, 13, ...)
For the third one, you can see that 3 = 8*0 + 3, so when x = 3 it is congruent to 3 (mod 8). So find the next progression by just adding 8, so x = 3 + 8n for any integer n (this creates the series ..., 3, 11, 19, ...)
First let's solve the first congruence for x:
So we need to solve
The modulos are pairwise coprime, so we may state a solution exists by the Chinese Remainder Theorem.
Use the extended Euclidean algorithm to find integers r1 and s1 such that
One such pair is and .
Now do the same for
<-- r2 = -17 and s2 = 3
<-- r3 = -13 and s3 = 3
Now, define , , and .
Then a solution to the system will be:
which you may verify is a correct solution.
Edit: Whoops! I just noticed. This answer will be correct modulo 5 x 7 x 8 = 280. So we have a smaller answer which is equivalent to . So we can use x = 163 as the solution.
(See, you can teach an old dog new tricks!)
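For anyone who wants to check their work, the system exactly as stated at the top of the thread (2x ≡ 1 (mod 7), x ≡ 3 (mod 5), x ≡ 3 (mod 8)) can be solved in a few lines of Python using the same Chinese Remainder Theorem construction; `pow(a, -1, m)` (Python 3.8+) gives the modular inverse:

```python
def crt(residues, moduli):
    """Combine x = r_i (mod m_i) for pairwise-coprime moduli."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the inverse of Mi modulo m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# First reduce 2x = 1 (mod 7) to x = a (mod 7) using the inverse of 2.
a = (1 * pow(2, -1, 7)) % 7
x = crt([a, 3, 3], [7, 5, 8])
print(x)   # the unique residue modulo 280

# Verify against the original congruences.
assert (2 * x) % 7 == 1
assert x % 5 == 3
assert x % 8 == 3
```

Whatever residue it prints is the unique solution modulo 280, and the final assertions confirm it satisfies all three congruences as written.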
Pi - or π - is a mathematical constant whose value is the ratio of any circle's circumference to its diameter in Euclidean space; it is also the ratio of a circle's area to the square of its radius. It is approximately equal to 3.14159 in the usual decimal notation. Pi is one of the most important mathematical and physical constants: many formulae from mathematics, science, and engineering involve π.
It's also a number whose decimal expansion never ends and never repeats. The Guinness-recognized record for remembered digits of π is held by Lu Chao, a 24-year-old graduate student from China, who recited pi to 67,890 digits!
When you get through all the technical terms, the bottom line is that Pi is a massively important number that impacts our world in ways that most of us never think of or imagine. It struck me as amazing that such a little word makes such a big difference in our lives.
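One way to see the circumference-to-diameter ratio settling on 3.14159... is Archimedes' classic trick: approximate the circle by inscribed regular polygons with more and more sides. A short sketch:

```python
import math

def perimeter_over_diameter(n, r=1.0):
    """Perimeter of an inscribed regular n-gon divided by the circle's diameter."""
    side = 2 * r * math.sin(math.pi / n)   # side length of the n-gon
    return n * side / (2 * r)

for n in (6, 96, 100_000):
    print(n, perimeter_over_diameter(n))
# A hexagon gives exactly 3.0, Archimedes' 96-gon already gives about 3.14103,
# and 100,000 sides match pi to roughly 9 decimal places.
```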
Sinkholes by Caitlyn, Meredith and Margi
Southeast Minnesota is the sinkhole capital of the U.S.A. These strange holes in the ground are all over the place. Some have been here for as long as we can remember and some seem to pop up overnight. But we wanted to know, how do these sinkholes form?
What did we do?
We looked at three different sinkholes and measured their size with a tape measure. Then we stood in the center and stacked some tubes together to measure their height. The last thing we did was sketch what each sinkhole looked like. We also looked around the sinkholes for other features, like standing water and any special kinds of rocks.
What did we find out?
In the bottom of the first sinkhole, we found a little bit of limestone and a shell fossil. The hole wasn't very deep or wide, but we thought the limestone might be a clue to how the hole formed. The second hole was wider and deeper than the first and there was a lot of wood debris at the bottom. It reminded us of a kitchen sink drain, except this drain led underground. The last sinkhole was huge! It was so deep that we couldn't safely climb into it to measure the depth. We saw a lot of limestone shelves and another drain at the bottom. We explored some more and found that the drain led to an underground cave. From all these sinkholes we learned that limestone and water play a big part in making sinkholes. We figured that limestone layers must dissolve over time and then collapse to form a sinkhole.
- You can get to the bottom of sinkholes too. Make your own model. You'll need a medium jar, sugar cubes, graham crackers, soil and water. From bottom to top, stack layers of sugar cubes, graham crackers and a little soil. The sugar cubes represent the Earth's limestone, the graham crackers the soil above it, and the dirt on top is the topsoil. Add drops of water to the topsoil, and then watch the limestone slowly crumble. The water will act like rain and will filter to the bottom. The sugar "limestone" layers will crumble and form a mini sinkhole.
- Do you have sinkholes where you live? Why do some sinkholes have trees growing out of them and some don't? Have you passed sinkholes before and never paid attention to them? How far apart are they from each other?
- In the old days, farmers filled sinkholes in their fields with garbage, old furniture and cars, but it began polluting their water supply. Sinkholes act like a drain: rain flows through the Earth's layers, carrying pollution from the sinkhole down into underground rivers that feed our drinking water. How else is our water supply affected by pollution? How fast does the supply get contaminated? How would you test this?
- Use this earth science investigation as a science fair project idea for your elementary or middle school science fair! Then tell us about it!
Basic Rules are simple attributes of a design. The rule must return a value of a specific data type, which is included in the declaration of the rule. The long form of a rule specification begins with the Rule keyword and terminates with End Rule.
Rule numberOfBearings As Integer
    Return 2
End Rule
Rule totalSprocketWidth As Number
    Dim L As Number = sprocketWidth * numberOfSprockets
    If isDrive? Then
        totalSprocketWidth = L + driveSprocketSpacerLength _
            - (sprocketHubRecess * 3)
    Else
        totalSprocketWidth = L - (sprocketHubRecess * 2)
    End If
End Rule
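For readers more familiar with mainstream languages, the If/Else logic of the long-form rule above can be mirrored in ordinary Python. The function and every input value below are purely illustrative: in Intent, the referenced rules (sprocketWidth, numberOfSprockets, and so on) are supplied by other parts of the design, not passed as arguments.

```python
# A rough Python translation of the totalSprocketWidth rule above.
# All numeric inputs are hypothetical; the Intent rule defines no numbers.

def total_sprocket_width(sprocket_width, number_of_sprockets,
                         is_drive, drive_sprocket_spacer_length,
                         sprocket_hub_recess):
    """Mirror the If/Else branches of the long-form rule."""
    length = sprocket_width * number_of_sprockets
    if is_drive:
        return length + drive_sprocket_spacer_length - (sprocket_hub_recess * 3)
    return length - (sprocket_hub_recess * 2)

# Made-up dimensions (millimetres):
print(total_sprocket_width(10.0, 4, True, 5.0, 1.0))   # 42.0
print(total_sprocket_width(10.0, 4, False, 5.0, 1.0))  # 38.0
```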
Rule names can contain any number of alphabetic characters, numeric characters, underscores, question marks (?), or percent signs. Intent ignores the case of a rule name, so names that differ only in capitalization (for example, numberOfBearings and NumberOfBearings) are considered the same when evaluated by Intent. A short-form rule assigns the rule's value directly in the declaration:
Rule numberOfBearings As Integer = 2
A special form of short-form rule uses the keyword Required in place of the rule expression. Required is a keyword that may only appear in this way; it is not a flag. Required rules may only be used in conjunction with the Canonical or Parameter flags. The Required keyword signifies to the compiler that this rule must be supplied (in the case of a Parameter), or that it must be assigned by a Group rule (in the case of a Canonical).
Rule numberOfBearings As Integer = Required
Intense warm climate intervals--warmer than scientists thought possible--have occurred in the Arctic over the past 2.8 million years.
That result comes from the first analyses of the longest sediment cores ever retrieved on land. They were obtained from beneath remote, ice-covered Lake El'gygytgyn (pronounced El'gee-git-gin) ("Lake E") in the northeastern Russian Arctic.
The journal Science published the findings this week.
They show that the extreme warm periods in the Arctic correspond closely with times when parts of Antarctica were also ice-free and warm, suggesting a strong connection between Northern and Southern Hemisphere climate.
The polar regions are much more vulnerable to climate change than researchers thought, say the National Science Foundation-(NSF) funded Lake E project's co-chief scientists: Martin Melles of the University of Cologne, Germany; Julie Brigham-Grette of the University of Massachusetts Amherst; and Pavel Minyuk of Russia's North-East Interdisciplinary Scientific Research Institute in Magadan.
The exceptional climate warming in the Arctic, and the inter-hemispheric interdependencies, weren't known before the Lake E studies, the scientists say.
Lake E was formed 3.6 million years ago when a huge meteorite hit Earth, leaving an 11-mile-wide crater. It's been collecting layers of sediment ever since.
The lake is of interest to scientists because it has never been covered by glaciers. That has allowed the uninterrupted build-up of sediment at the bottom of the lake, recording hitherto undiscovered information on climate change.
Cores from Lake E go far back in time, almost 30 times farther than Greenland ice cores covering the past 110,000 years.
The sediment cores from Lake El'gygytgyn reflect the climate and environmental history of the Arctic with great sensitivity, say Brigham-Grette and colleagues.
The physical, chemical and biological
Contact: Cheryl Dybas
National Science Foundation
extraction and processing
...the austenite crystals transform into a fine lamellar structure consisting of alternating platelets of ferrite and iron carbide. This microstructure is called pearlite, and the change is called the eutectoidic transformation. Pearlite has a diamond pyramid hardness (DPH) of approximately 200 kilograms-force per square millimetre (285,000 pounds per square inch), compared to a DPH of 70...
The marine food chain
Marine biologists know that the marine food chain is particularly complex and sensitive to the distribution and duration of the ice cap. Accordingly, Danish research scientist Professor Torkel Gissel Nielsen of The National Environmental Research Institute, NERI, working with other researchers, has developed a model to calculate how the food chain is affected by temperature rises.
If the water flea goes hungry
The model indicates that one winter without heavy frost, and thus without an ice cap, could damage fish stocks for the next few years.
For example, the water flea plays a key role in Greenland’s fjords as food for fish larvae, and thus as the base of a food chain with polar bears and seals at the top. The water flea rises to the surface from hibernation in April and lives mainly on algae. However, when the ice is no longer there to keep the light out, the algae bloom as early as March, and single-celled organisms immediately begin to consume them. The water fleas therefore have to make do with a less varied diet of single-celled organisms. This reduces the number of water fleas, meaning there is less food for the fish larvae.
The models can help people to adapt to climate change. “Once we know how the food chain is affected by temperature changes, fishing can be managed sustainably and can thus retain its significance for the people of Greenland,” Torkel Gissel Nielsen tells DMU-Nyt, no. 1. Torkel also believes the fauna will adapt to the changes.
However, it is difficult to say with any certainty what climate change means in terms of the food chain in the longer term. Some take a positive view and hope it may mean a change in the combinations of species; for example, after an absence of 30 years, cod may make its way back into Greenland’s waters.
What does it mean when you say that two variables are related?

In general mathematics, two variables are considered "related" if changing the value of one changes the value of the other. This is a very general concept, so do not make it over-complicated. For example, the variables X and Y are related by the equation Y = X + 2.
In statistics, the concept of "related-ness" is more subtle. In this case "related" is often used to mean "correlation". Here, if a collection of some observations (X's) changes in some way, and another collection of observations of some other variable (Y's) also changes in some way, the two variables are said to be "correlated" -- and there are formulas that measure how well the variables are correlated. The caution here is that "correlation" does not necessarily imply "causation". That is, two variables can be correlated without one "causing" the other. Both may be changing due to some other variables that have not even been identified, so you have to be very careful.
A fanciful example is the following: I have a magic bell. It protects me from tigers in my back yard. Every morning I ring it three times, and look to see how many tigers are in my back yard. I have done this every morning for 3 years, and not once have I ever seen a tiger. Therefore, my magic bell is 100% effective in protecting me from tigers in my back yard. The fallacy here is transparent, but often that transparency is not so self-evident.
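The correlation-without-causation point can be made concrete with a small simulation: two quantities that track each other closely only because both are driven by a hidden third variable. The variable names and coefficients below are invented for illustration.

```python
# Two series that correlate strongly only because both depend on a hidden
# third variable (temperature). Pure standard library.
import random

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
temperature = [random.uniform(15, 35) for _ in range(200)]   # hidden cause
ice_cream_sales = [2.0 * t + random.gauss(0, 3) for t in temperature]
swimming_accidents = [0.5 * t + random.gauss(0, 2) for t in temperature]

print(f"r = {pearson(ice_cream_sales, swimming_accidents):.2f}")
# r comes out high, yet neither variable causes the other.
```

Ice cream sales do not cause swimming accidents; hot weather drives both, which is exactly the "some other variable" warning above.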
Update: June 2012
Feb22-12, 01:51 PM | #1
Imagine you had a toroidal tank filled with a (super)conductive fluid flowing around the tank in a clockwise direction. Could the minute differences in speed between the liquid flowing on the inside of the torus and the liquid on the outside - slowed due to friction - create an electric charge differential in the fluid? Basically, what kind of liquid could produce this kind of electrokinetic charge? And does something like this already exist?
Feb22-12, 09:44 PM | #2
cynopolis, will you please help me understand your word-experiment by giving a more detailed description? I have these questions:
1. Is electrokinetics defined as an electrically driven fluid flow and particle motion in liquid electrolytes? What drives the clockwise motion?
2. The toroidal tank is filled with a fluid. Is that a conductive fluid or a super-conductive fluid? Are there colloidal particles in the fluid? Please describe the fluid’s characteristics more specifically.
3. Why do you not mention the friction between the rotating fluid and the top side, bottom side, and outside of the torus? I imagine that the greatest velocity of fluid would be in the center, away from all of the walls.
4. Is there any connection between your experiment and the Zeta potential? This is defined by Wiki as “a scientific term for electrokinetic potential in colloidal systems. In the colloidal chemistry literature, it is usually denoted using the Greek letter zeta, hence ζ-potential. From a theoretical viewpoint, zeta potential is electric potential in the interfacial double layer (DL) at the location of the slipping plane versus a point in the bulk fluid away from the interface. In other words, zeta potential is the potential difference between the dispersion medium and the stationary layer of fluid attached to the dispersed particle.”
5. Is there any connection between your experiment and the Aharonov-Bohm effect?
6. Is there any connection between your experiment and electro-osmotic and electrophoretic effects?
Thanks in advance, Bobbywhy
Inconstant Constants; November 1998; Scientific American Magazine; by Musser; 2 Page(s)
Of all the assumptions that undergird modern science, perhaps the most fundamental is the uniformity of nature. Although the universe is infinitely diverse, its basic workings appear to be the same everywhere. Otherwise, how could we ever hope to make sense of it? Historically, scientists presupposed uniformity on religious grounds. In this century, Albert Einstein encapsulated it in his principle of relativity. As geologists and astronomers peered far beyond the domain of common experience, they saw no sign that nature behaved any differently in the distant past or in deep space.
Until now. A team of astronomers led by John K. Webb of the University of New South Wales has found the first hint that the laws of physics were slightly different billions of years ago. "The evidence is a little flimsy," says Robert J. Scherrer of Ohio State University. "But if it's confirmed, it'll be the most startling discovery of the past 50 years."
This composite image of the Tycho supernova remnant combines X-ray and infrared observations obtained with NASA's Chandra X-ray Observatory and Spitzer Space Telescope, respectively, and the Calar Alto observatory, Spain. It shows the scene more than four centuries after the brilliant star explosion witnessed by Tycho Brahe and other astronomers of that era.
Crab Nebula: A supernova remnant and pulsar located 6000 light years from Earth in the constellation of Taurus. More at http://chandra.harvard.edu/photo/2008/crab/
With the press release for G1.9+0.3 we talked about when an event in a distant part of the Milky Way galaxy occurred. One delicate issue that immediately came to mind was what to do about the light travel time to this object. We decided to adopt the astronomer's convention and talk about events in Earth's time frame, that is when the light reached the Earth, as we noted in the press release and in a few other places on our web-site.
Dr. Carles Badenes is a Chandra postdoctoral fellow at Princeton, having spent the previous few years at Rutgers University. His main research focus is on supernova explosions and supernova remnants, particularly the class known as Type Ia.
Dr. Patrick Slane from the Chandra X-ray Center recently shared some information on the G292.0+1.8 supernova remnant with NASA's museum alliance. Part II of this conversation talks more on what we're seeing in the Chandra image....
Leigh is back in Maine and files this report on the field season:
I had to break this journal entry up into ‘The Science’ and ‘The Life’ because scientifically everything went smoothly – the difficulties came from everyday life activities.
Our objectives at the Allan Hills were similar to those at Mt. Moulton – to study the ice flow and gather information for a potential horizontal ice core (refer to our projects page). Our tasks are outlined below:
- Mapping: The first 3 days were spent mapping the area so that we could figure out the topography and, consequently, the direction of ice flow. With GPS antennas attached to the snowmobiles, we drove around in grids and then zigzags to collect as much data as we could.
- Ice Flow: After we mapped the area and got a general idea of the ice flow direction, we installed ~20 ice velocity poles - 3m conduit pipes drilled into the ice. We determine their exact position with GPS and then resurvey them in the future to measure their displacement. The poles that we installed expanded an already existing network of poles that Blue installed in 1997 and that he and I resurveyed in 1999.
- Ice Core: Accumulation rates are an important variable in glaciology, yet they are difficult to measure. The method that we use is called beta counting. We try to identify radioactive horizons within the ice core (using a beta counter back in Maine) – namely 1955 and 1964 bomb tests. If we know the age of the ice at a certain depth we can deduce what the average accumulation rate for the area is.
- Radar Mapping: While Andrei, John and I put in the ice flow poles and collected the ice core, Blue collected radar data. The shallow radar is a small unit and can be easily dragged behind a snowmobile. There is already a bed topography map of the Allan Hills, so we didn’t need to do any deep radar mapping.
- Tephra Mapping: There are a couple of pictures of me and Andrei mapping the tephra layers. While I mapped the layers, Andrei collected tephra samples. These tephra layers were significantly smaller than the huge ones we saw at Mt. Moulton.
- Electronator: There are many more tephra layers in the ice than the ones that Andrei and I mapped by sight. To try to locate these layers John used his ‘electronator’. Two electrodes are attached to a wood bar that we drag behind the snowmobile. As we’re driving, the electronator measures the conductivity of the ice. This is a new invention of John’s and we were excited to test it.
- Meteorites: We did collect a few meteorites while we were in the Allan Hills. These will be used to date the surrounding ice (through the radioactive decay of cosmogenic nuclides). Unfortunately we did not collect more because of time (and temperature) constraints.
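The beta-counting step above boils down to simple arithmetic once a dated radioactive horizon is located: the mean accumulation rate is just depth over elapsed time, corrected for snow density. The depth, density, and coring year below are made-up illustrative numbers, not the team's data.

```python
# Hypothetical beta-counting calculation: if the radioactive layer from the
# 1964 bomb tests is found at some depth, mean accumulation since then is
# depth divided by elapsed years. All numbers are illustrative only.

def mean_accumulation_rate(depth_m, horizon_year, coring_year,
                           snow_density=400.0):
    """Return accumulation in mm water equivalent per year.

    depth_m       depth of the dated layer (m of snow/firn)
    snow_density  assumed mean density of the snow column (kg/m^3)
    """
    years = coring_year - horizon_year
    water_equiv_mm = depth_m * 1000.0 * (snow_density / 1000.0)
    return water_equiv_mm / years

# e.g. the 1964 layer found 2.0 m down in a core drilled in 2004:
print(round(mean_accumulation_rate(2.0, 1964, 2004), 1))  # 20.0 mm w.e./yr
```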
Don’t be fooled by the sunny pictures you see on the website! We all agreed that the Allan Hills was probably the most brutal place we’d been to (this is coming from 4 people who have done extensive field work in Antarctica, Svalbard, Greenland, Finland and Kamchatka). It was beautiful, but the weather that late in the season was tough. The brisk temperatures (-24°C) and light breeze of the katabatic winds (30 knots) were relentless and made everyday living pretty difficult. Here are two examples:
- Task 1: get out of your sleeping bag
- Ignore the thin layer of ice that has coated your sleeping bag from condensation of your breath.
- Ignore the fact that you can hardly talk with your tent-mate because the wind is flapping the tent so loudly.
- Take contact lens case out of your hat. If contacts are defrosted put them in. If they are still frozen, stick them in your armpit.
- Pull insulated windpants over fleece pants over long underwear. Add two polypro shirts on top of your base layer. Put on down jacket. Put on Antarctic issue down jacket (yes, I did have two down jackets on!), two pairs of socks, boot liners and boots. Top it off with a turtle neck liner, wind-stopper balaclava, goggles and hat. Put on your gloves and mittens. Okay, now you’re ready to go outside.
Despite the tough weather, we all managed to maintain our spirits and stay focused on our science goals. It became quite amusing how persistent the bad weather was. Usually you get one or two days of weather like that in an entire season…we had it for 10 days straight.
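To get a feel for how brutal the conditions described above really were (-24°C with 30-knot katabatic winds), here is the standard NWS/Environment Canada wind chill formula applied to them. The formula is a published empirical fit (temperature in Celsius, wind speed in km/h); the unit conversions are exact.

```python
# Wind chill for the Allan Hills conditions, using the standard
# NWS/Environment Canada formula (T in deg C, wind in km/h).

def wind_chill_C(temp_C, wind_kmh):
    v = wind_kmh ** 0.16
    return 13.12 + 0.6215 * temp_C + (0.3965 * temp_C - 11.37) * v

knots = 30
print(round(wind_chill_C(-24.0, knots * 1.852), 1))  # about -41.5
```

A wind chill near -41°C explains why getting out of the sleeping bag was its own task.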
Le Brocque, Andrew F. and Zammit, Charlie (2010) Four years of sheep exclusion shows no changes in understorey composition in grazed woodlands of southern Queensland. In: Ecological Society of Australia 2010 Annual Conference: Sustaining Biodiversity - the next 50 Years (ESA 2010), 6-10 Dec 2010, Canberra, ACT, Australia. (Unpublished)
Full text not available from this archive.
Official URL: http://www.esa2010.org.au/Detailed%20program.pdf
Retaining trees in low-input, low-productivity grazing systems in southern Queensland can provide biodiversity benefits without adversely impacting production, although previous research, conducted during a period of extended drought, may have failed to determine the overall biodiversity potential in relation to management practices. We describe a grazing exclusion trial designed to monitor biodiversity changes following the removal of grazing in the Traprock wool producing region of southern Queensland. Eighteen sites across 10 properties were sampled across two vegetation types (grassy box woodland and ironbark/gum woodland), three overstorey tree densities (<6 trees/ha; 6-20 trees/ha; >20 trees/ha), and three exclosure types (full exclosure; partial exclosure; and open control). Exclosures were established in 2005 and sampled over a four-year period for understorey composition and above-ground biomass. No differences were apparent in composition between exclosure treatments (ANOSIM, p > 0.05), although patterns were observed across overstorey tree density treatments within vegetation types. There were no differences (p > 0.05) in biomass between exclosures, although significantly higher plant biomass was observed in low-density treatments. Exclusion of grazing has not significantly altered composition after 4 years. However, above-ground biomass has responded to the removal of grazing in open paddock areas. A longer period of exclusion may be necessary to detect changes (if any) in plant species composition.
Viking Data Suggests Life?, Universe Today via NASA's Astrobiology Magazine
"Researchers from universities in Los Angeles, California, Tempe, Arizona and Siena, Italy have published a paper in the International Journal of Aeronautical and Space Sciences (IJASS) citing the results of their work with data obtained by NASA's Viking mission."
Is it Snowing Microbes on Enceladus?, Science.nasa.gov
"There's a tiny moon orbiting beyond Saturn's rings that's full of promise, and maybe -- just maybe -- microbes. In a series of tantalizingly close flybys to the moon, named "Enceladus," NASA's Cassini spacecraft has revealed watery jets erupting from what may be a vast underground sea. These jets, which spew through cracks in the moon's icy shell, could lead back to a habitable zone that is uniquely accessible in all the solar system."
Keith's note: I am a biologist. Back in the day I ran many NASA peer review panels for exobiology research and helped plan NASA's initial astrobiology program. I run astrobiology.com and would absolutely love this story to be true, i.e. microbes raining on Enceladus, but ... it's not true - at least no one has proved it. Dr. Porco's guesses are imaginative and inspired and are not without some strong supporting data, but they are just guesses - and Cassini does not have any way to prove that there is anything alive in these plumes. So yes, "let's go back".
As for Gil Levin's Viking research, a quick check will show that this journal is mostly run by Korean scientists and seems to have little stated expertise when it comes to astrobiology or exobiology (at least none that I can determine). Levin regularly publishes re-written papers that all point back to a claim that he has been making for decades i.e. that his experiment on the Viking landers found life on Mars - or at least some solid evidence of its possible existence. Who knows, Mars is a much different world than we thought it was back in the 1970s. Again, I'd be ecstatic if his claims turn out to be true but no one really agrees with him.
Why am I being such a wet blanket? At a time when climate change deniers are criticizing NASA for the way it selects and interprets its science, one would think that the agency would at least exercise a little more caution in putting things on its official websites that either jump to conclusions, or prompt the reader to do so. Indeed, I cannot find a simple policy that the agency adheres to in this regard. Not to have a clear policy that is enforced simply makes it harder for NASA to refute some of the attacks that others throw against it.
After last year's fiasco with the 'life in meteorites' paper claims by NASA MSFC's Richard Hoover, you'd think SMD and the Astrobiology Institute would be paying a little closer attention to this matter. I have sent comments to NASA's astrobiology folks but no one bothers to respond.
A little-known meteor shower named after an extinct constellation, the Quadrantids will present an excellent chance for hardy souls to start the year off with some late-night meteor watching.
Peaking in the wee morning hours of January 3 (Thursday), the Quadrantids have a maximum rate of about 80 per hour, varying between 60-200. Unfortunately, light from a waning gibbous moon will wash out many Quadrantids, cutting down on the number of meteors seen by skywatchers.
Unlike the more famous Perseid and Geminid meteor showers, the Quadrantids only last a few hours, so it's Thursday morning or nothing. Given the location of the radiant -- northern tip of Bootes the Herdsman -- only observers at latitudes north of 51 degrees south will be able to see Quadrantids.
Like the Geminids, the Quadrantids originate from an asteroid, called 2003 EH1. Dynamical studies suggest that this body could very well be a piece of a comet which broke apart several centuries ago, and that the meteors you will see before dawn on Jan. 3 are the small debris from this fragmentation. After hundreds of years orbiting the sun, they will enter our atmosphere at 90,000 mph, burning up 50 miles above Earth's surface -- a fiery end to a long journey!
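The quoted entry speed gives a feel for why such small grains burn so brightly. Here is a back-of-envelope kinetic-energy estimate for a hypothetical 1-gram particle; the mass is an assumption (typical meteoroid masses are not given in the article), only the speed comes from the text.

```python
# Kinetic energy of a hypothetical 1 g meteoroid at the quoted 90,000 mph.

MPH_TO_MS = 0.44704  # exact conversion factor

def kinetic_energy_J(mass_kg, speed_mph):
    v = speed_mph * MPH_TO_MS
    return 0.5 * mass_kg * v * v

e = kinetic_energy_J(0.001, 90_000)  # a 1 g grain
print(f"{e/1000:.0f} kJ, ~{e/4184:.0f}x its own weight in TNT")
```

Roughly 800 kJ for a single gram, which is why even dust-sized debris produces visible streaks 50 miles up.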
In the movie Terminator 3: Rise of the Machines, Arnie's fuel cells were twice shown to be extremely devastating when their stored energy was released.
In real life, how many grams of, say, gunpowder or TNT are equivalent to the energy stored in
- An AA alkaline battery
- A laptop battery
- An electric car battery
Would the batteries be as devastating as the explosives if the energy could be released all at once (within milliseconds, say)? Presumably the power of chemical explosives comes from the volume of gas they release...
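A rough answer to the question can be computed directly, assuming typical (and widely varying) battery capacities: an AA alkaline at about 3.9 Wh, a laptop pack at about 50 Wh, and an EV pack at about 60 kWh, with the conventional definition of a gram of TNT as 4184 J.

```python
# Back-of-envelope TNT equivalents for common batteries.
# Capacities are typical round numbers and vary widely between products.

TNT_J_PER_GRAM = 4184.0   # conventional definition of "gram of TNT"
WH_TO_J = 3600.0

batteries_wh = {"AA alkaline": 3.9, "laptop": 50.0, "electric car": 60_000.0}

for name, wh in batteries_wh.items():
    grams_tnt = wh * WH_TO_J / TNT_J_PER_GRAM
    print(f"{name}: ~{grams_tnt:,.0f} g TNT equivalent")
# AA ~3 g, laptop ~43 g, EV pack ~52 kg of TNT equivalent.
```

So even a car pack stores tens of kilograms of TNT equivalent, but real cells cannot release it in milliseconds, and unlike explosives most of a battery's energy is not delivered as expanding gas.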
It is remarkable that in the climate science debate, the ideal gas law and its consequences in dynamic systems have been variously forgotten, misinterpreted, denied and ignored. In order to clear up the misconceptions, obfuscations, ignorance, error, and denial, it is time to do some practical science and lay the various misapprehensions and mis-statements to rest.
It’s very encouraging to see that ‘Lucy Skywalker’ is intending to replicate the experimental work of Roderich Graeff. This is a serious undertaking and a difficult task, due to the very accurate measurement of small differences required. I have decided the Talkshop is going to enter the fray with some empirical experimental work too. The aim is somewhat simpler. We are going to measure the effect of Pressure on a contained volume of air which has energy passing through it, as per Ned Nikolov and Karl Zeller’s outline of the situation in Earth’s atmosphere, which is a volume of air contained by gravity, with sunlight passing through Earth’s day side. This is so we can determine whether there is merit in their hypothesis that the atmospheric temperature profile is underpinned by the effect of gravity on atmospheric mass: warm near the surface where the air pressure is around 14 psi, and cold at high altitude, where the air pressure drops nearly to zero.
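For reference, the static relation at the heart of the hypothesis is the ideal gas law in specific form, P = ρRT. The sketch below only checks the arithmetic with standard-atmosphere surface values; it says nothing about which variable drives which, which is exactly what the experiment is meant to probe.

```python
# Ideal gas law for air in specific form: P = rho * R * T, so T = P/(rho*R).
# Standard-atmosphere sea-level values serve as a sanity check.

R_AIR = 287.05  # specific gas constant of dry air, J/(kg K)

def temperature_K(pressure_Pa, density_kg_m3):
    return pressure_Pa / (density_kg_m3 * R_AIR)

# Sea level: ~101325 Pa and ~1.225 kg/m^3 give the familiar ~288 K (15 C).
print(round(temperature_K(101325.0, 1.225), 1))
```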
Talkshop regulars will remember that a few months back, contributor Konrad Hartmann performed an experiment using pet bottles in direct sunlight. There was some constructive criticism of his experiment, and he made some design improvements which we are awaiting results from. Konrad tested the effect of increased pressure. Initially, we will go the other way, and see what happens when we reduce pressure towards vacuum. After that we will test positive pressures too.
I have been scouring ebay for the equipment we will need to accomplish the task. So far I have a pack of 20 5W ceramic resistors to make a controllable and accurately measurable heat source. And I have also scored a very nice old vacuum pump. This was a real bargain. It’s a Leybold Heraeus, a veritable piece of German precision engineering.
Although it won’t achieve a really hard vacuum, it will get near enough to find out what sort of relationship exists between pressure and temperature as we change the pressure from near vacuum to ordinary atmospheric levels. John, the ebay seller, very kindly reduced his price when I went to pick it up (it was too heavy for the postal service) and I told him it was for non-profit experimental work.
I’m also awaiting the arrival of another bargain – a Siemens pressure sensor which is insensitive to temperature changes.
It works on the principle of piezo resistivity, using a hybrid ceramic diaphragm to transmit the pressure from the test medium. Full spec here. These are expensive sensors. The new price is 391 Euros. I just won a brand new in-the-box 0-4bar P4 model for £5 on the ‘bay – my favourite shopping channel. :)
For the pressure vessel, I intend to use an old ‘camping gaz’ butane cylinder. This is small enough for vacuum pumping times to be reasonable, and large enough for the required separation between heat source and thermistor.
Which brings me to the measurement side of the experimental set-up.
I have found a neat little four input oscilloscope/datalogging module.
This comes with free software and connects to a USB port. Amplification will be needed for the thermistor signal, and a small power supply for this and the the pressure sensor. I intend to build all this into an old SCSI external hard drive enclosure to keep things neat. More on the construction of this unit once I have all the necessary components in stock.
All this effort is being undertaken in the spirit of Einstein’s famous injunction:
Experimentum Summus Judex – experiment is the final arbiter.
In the climate science arena, we have witnessed how computer model output resting on untested theory has been put forward as scientific truth. This is a fallacy because, ultimately, a hypothesis is mere conjecture until experiments are designed and carried out. If experimental results support the hypothesis, it may go forward to become a fully fledged theory.
We are going back to basics here and doing some real science. Before we conduct the experiments, we will make predictions from our hypothesis, and test them properly. All ideas and input from Talkshop contributors and the wider scientific community are welcome. We want to do this as well as we can within the budgetary constraints imposed on non-institutional research.
All about the world's ice
By Jack Williams, USATODAY.com
At the beginning of the 20th century, the world's attention turned to the polar regions as explorers raced to be first to reach the North Pole and then the South Pole.
Going into the 21st century, the Arctic and Antarctic, along with glaciers in other parts of the world, are the focus of scientific attention because ice holds answers to questions about the Earth's past, present and future climates.
You often hear the question: Is polar ice melting?
The answer to that is, "Yes, it is melting." But you could have said the same thing during the height of any of Earth's ice ages.
Polar ice is always melting, and also always growing as more snow falls that doesn't melt during the summer.
The real question: "Is polar ice melting faster than new ice is being added?"
This is the question that scientists hope to answer by studying the Earth's largest ice sheets, in Antarctica and Greenland. One of the largest such studies is the ongoing West Antarctic Ice Sheet Initiative.
If all of these two ice sheets melted sea levels would rise by about 215 feet all over the world. Fortunately, even the most extreme global warming scenarios don't see this happening for centuries. Greenland's ice is probably in the most danger of melting, and this would raise global sea levels by about 21 feet. Scientists who study the ice don't think this is likely during the life of anyone alive now.
The figures above come from the U.S. Geological Survey's estimate of how much ice is locked up in the world's ice sheets and glaciers and how much sea level would rise if they melted. (Related document: Estimated area and volume of global ice)
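The "about 21 feet" Greenland figure can be sanity-checked with simple arithmetic: spread the melted ice over the ocean surface. The ice volume used below is an assumed round number of about 2.6 million cubic km (close to published estimates), and the calculation ignores ocean-area growth, seawater density, and other feedbacks.

```python
# Rough check of the Greenland sea-level-rise figure.
# Assumed ice volume ~2.6 million km^3; standard densities and ocean area.

ICE_DENSITY = 917.0      # kg/m^3
WATER_DENSITY = 1000.0   # kg/m^3 (fresh water; seawater differs slightly)
OCEAN_AREA_M2 = 3.61e14  # ~361 million km^2

def sea_level_rise_ft(ice_volume_km3):
    volume_m3 = ice_volume_km3 * 1e9
    rise_m = volume_m3 * (ICE_DENSITY / WATER_DENSITY) / OCEAN_AREA_M2
    return rise_m * 3.281

print(round(sea_level_rise_ft(2.6e6), 1))  # ~21.7 ft for Greenland
```

The result lands within a foot of the article's figure, which is as close as such a back-of-envelope estimate can be expected to get.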
Not all ice is near the poles
While most of the world's ice is in the Arctic and Antarctic, quite a bit is scattered around the Earth in the form of glaciers. Not all of these are in cold places, such as Alaska.
Several glaciers are in the tropics, but they are on tops of high mountains.
Smaller glaciers around the world are melting, however, and this is causing a tiny rise in sea-level.
Of more concern than sea-level rise is the fact that many of these glaciers, especially in the tropics, supply needed water.
Ice tells us about the past
Scientists can confidently say things such as, "Earth was in an ice age 35,000 years ago," because they have learned how to read climate history using things such as fossil pollen or sediment that settled on the bottom of the ocean or a lake.
Ice in ice sheets and glaciers, however, turns out to be one of the best records of past climates.
The basic technique for reading the climate stories that ice has to tell is to drill a hole and pull out ice, which scientists have been doing since 1966. Such ice is known as an "ice core." Special drills pull up pieces of ice about 5 or 6 inches in diameter and maybe 20 feet long, one after another, until the drill reaches the rock under the ice sheet or glacier, which can be under two or three miles of ice.
One of the big stories that ice cores have had to tell is that the Earth's climate does not steadily warm up or cool down. Discoveries from ice cores have led to the current concern about "abrupt climate change."
What's likely in a warming world
While all scientists who study polar ice don't agree on how much is likely to melt if the world continues warming, the January 2001 report of Working Group 1 of the Intergovernmental Panel on Climate Change (IPCC) offers the best summary of the latest scientific thinking. This group consisted of experts from around the world and looked at the basic science of climate change.
In its Summary for Policy Makers, the working group says that during the 21st century:
• Northern Hemisphere snow cover and sea-ice extent are projected to decrease further.
• Glaciers and ice caps are projected to continue their widespread retreat during the 21st century.
• The Antarctic ice sheet is likely to gain mass because of greater precipitation, while the Greenland ice sheet is likely to lose mass because the increase in runoff will exceed the precipitation increase.
• Concerns have been expressed about the stability of the West Antarctic ice sheet because it is grounded below sea level. However, loss of grounded ice leading to substantial sea level rise from this source is now widely agreed to be very unlikely during the 21st century.
• Global mean sea level is projected to rise by 0.09 to 0.88 meters (0.29 to 2.88 feet) between 1990 and 2100.
The report notes that the 2001 projections of sea level rise are slightly lower than in the Working Group's 1995 report, even though the 2001 report projects higher temperatures by 2100 than the 1995 report did.
The reason is "primarily due to the use of improved models, which give a smaller contribution from glaciers and ice sheets" than the models used for the 1995 report. (Related document: IPCC Working Group 1 Summary)
(Based in part on The Complete Idiot's Guide to the Arctic and Antarctic by Jack Williams) | <urn:uuid:2c5cd1a4-c6e5-48cc-a673-2b4d5a78581a> | 3.78125 | 1,071 | Truncated | Science & Tech. | 57.1044 |
The Physics Classroom: Energy Transport and the Amplitude of a Wave Relations
Physics Front Related Resources
A related Physics Classroom tutorial that explores the relationship between wave frequency and period.
Other Related Resources
Visit The Physics Classroom's Flickr Galleries and enjoy a photo overview of the topic of waves.
Need ideas? Need help? Explore The Physics Front's treasure box of catalogued resources on Wave Properties.
Create a new relation | <urn:uuid:05f30118-a9e8-4bd9-a2f3-ff511f4e2a3f> | 2.859375 | 90 | Content Listing | Science & Tech. | 32.754394 |
Improper rotations are described by 3-by-3 orthogonal matrices with a determinant of -1. A proper rotation is simply an ordinary rotation, which has a determinant of 1. The product (composition) of two improper rotations is a proper rotation, and the product of an improper and a proper rotation is an improper rotation.
An improper rotation of an object thus produces a rotation of its mirror image.
When studying the symmetry of a physical system under an improper rotation (e.g. if a system has a mirror symmetry plane), it is important to distinguish between vectors and pseudovectors (as well as scalars and pseudoscalars and, in general, between tensors and pseudotensors), since the latter transform differently under proper and improper rotations (pseudovectors are invariant under inversion).
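These determinant rules can be verified numerically. A minimal sketch in plain Python (the matrices chosen here, a 90° rotation about the z-axis and a reflection through the xy-plane, are arbitrary examples):

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def matmul3(p, q):
    """Product (composition) of two 3x3 matrices."""
    return [[sum(p[r][k] * q[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

# Proper rotation: 90 degrees about the z-axis (determinant +1)
proper = [[0, -1, 0],
          [1,  0, 0],
          [0,  0, 1]]

# Improper rotation: reflection through the xy-plane (determinant -1)
improper = [[1, 0,  0],
            [0, 1,  0],
            [0, 0, -1]]

assert det3(proper) == 1 and det3(improper) == -1
# Product of two improper rotations is proper: (-1) * (-1) = +1
assert det3(matmul3(improper, improper)) == 1
# Product of a proper and an improper rotation is improper: (+1) * (-1) = -1
assert det3(matmul3(proper, improper)) == -1
```

The sign behavior follows directly from the multiplicativity of the determinant, det(AB) = det(A)det(B).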
See also: Isometry | <urn:uuid:7ce98747-21a2-4813-b7ca-e8868d603be0> | 3 | 181 | Knowledge Article | Science & Tech. | 27.895769 |
By summer 2005, researchers in the Fluids Research Laboratory at Virginia Tech will be able to look for evidence of water on Mars by examining submicroscopic bubbles in martian meteorites, determine whether fluids and silicate melts trapped in volcanic rock can help predict future eruptions, and locate buried mineral deposits using data from surface rocks. Robert Bodnar, University Distinguished Professor in the Department of Geosciences in the College of Science, has received equipment grants from the National Science Foundation (NSF) that will make the lab one of the best equipped for the study of fluid inclusions in the United States.
When minerals form on Mars or deep in a volcano on Earth, small droplets of fluid, vapor, or silicate may be trapped. These tiny, ancient samples contain the rock’s chemical history and represent time capsules from the moment they were sealed in a rocky envelope. Recovering that moment in time has been a long-term challenge for geoscientists.
"Scientists can learn a lot about the composition of such inclusions by observing their behavior during heating and cooling under the microscope," said Bodnar, "but to really learn what is going on, you have to do quantitative chemical analysis." Non-destructive techniques using lasers to approximate the compositions of inclusions have existed for many years. Now there is an instrument that goes a step further – actually digging or ablating into the inclusion and removing the fluid for direct chemical analysis.
Bodnar learned in late May that he has received a $400,000 Major Research Instrumentation grant from the NSF to purchase an "Excimer-laser Based Laser Ablation System coupled to an inductively coupled plasma mass spectrometer (LA-ICP-MS)." "It is the single most important analytical method for those studying the geochemistry of Earth fluids," said Stephen E. Kesler, professor of geological sciences at the University of Michigan.
Last year, Bodnar had received $450,000 from NSF and Virginia Tech to upgrade the lab with two Raman microprobes, one of them specifically designed for the analysis of petroleum inclusions. "It uses a UV laser and will help us understand whether a given basin or rock might host oil deposits based on analysis of fluid inclusions in surface rocks, which could save millions of dollars in fruitless drilling, or at least help identify the most promising sites," said Bodnar.
The latest acquisition, which will be in place by next summer, will be a national resource. There are only three other LA-ICP-MS systems specifically designed for analysis of fluid inclusions in the world – at the Swiss federal technical university (ETH) in Zurich, where the system was developed, at the University of Leeds, and at Australian National University, Canberra. Bodnar's newly equipped lab will become the National Laser Ablation ICP-MS Laboratory for Fluid Inclusion Analysis.
At a 2002 meeting in Denver, sponsored by the NSF and the Society of Economic Geologists, the establishment of an LA-ICP-MS laboratory in the United States was identified as a number-one instrument priority, and it was suggested that it be located in Bodnar's lab at Virginia Tech because of his long history of fluid inclusion research. Bodnar was invited to apply for an MRI grant, and 37 leading scientists provided letters of support.
"There is no better location than Virginia Tech for such a laboratory in North America," said Kesler. "Dr. Bodnar is a pioneer in work on both natural and experimental fluid inclusions and has a reputation for thoroughness that is important to the scientists that will use this facility."
Bodnar’s interest in fluid inclusions began when he was a master’s student at the University of Arizona 25 years ago and continued through his Ph.D. research at Penn State University to the present time. The Mars research is one of Bodnar’s recent interests, now shared by his students. Work to predict volcanic activity at the Vesuvius volcano that destroyed Pompeii in 79 AD is a joint project with researchers at the University of Naples. Bodnar’s early work on fluid inclusions involved studies of extinct volcanoes that host some of the world’s largest copper and gold deposits.
Bodnar is searching martian meteorites for fluid inclusions, which are rare in these extraterrestrial samples. He and his graduate student, Megan Elwood Madden, a native of Jacksonville, Ill., are creating geochemical computer models to predict what fluids would have been on Mars at the time the rocks now comprising the meteorites were formed. "Our findings would help answer questions regarding the presence of water on Mars, which is crucial for the development and survival of life," Bodnar said.
Madden, a Ph.D. student with funding from the NSF VTAdvance program, is examining fluid inclusions in other space material as well as in terrestrial meteorite impact sites, including Meteor Crater in Arizona. Previous studies of meteorites indicate that Earth is not so unique, as fluid inclusions indicate that water has been present on other bodies in the solar system at some time in their history (Zolensky, Bodnar, Gibson, Nyquist, Reese, Shih, Wiesmann, Science, Aug. 27, 1999).
Bodnar is looking at melt inclusions from the magma chambers associated with volcanoes. A melt inclusion is a droplet of silicate material, rather than a fluid, that was trapped in a mineral. "Our research collaboration with the University of Naples is looking at melt inclusions in the magma from Vesuvius," Bodnar said.
Naples, with more than a million residents, sits on the flank of this active but sleeping volcano.
Luca Fedele, a Ph.D. graduate of Virginia Tech who is now a faculty member at the University of Naples, and Claudia Cannetelli, a Ph.D. student from Naples, have come to Blacksburg for the summer to conduct research on melt inclusions from the Vesuvius volcano. "The LA-ICP-MS system will enable us to analyze the composition of the melt inclusions to determine the composition of the Vesuvius magma at the time of eruption," Bodnar said.
The researchers have determined that the composition of magma changes. "If the composition at a particular time can be related to when a volcano erupts, then knowing the composition might be an aid in predicting eruptions," Bodnar said.
"Predicting volcano activity is an active area of research," Bodnar said. Many researchers are studying Mount Rainier in Washington, another dormant volcano, which is near the 2.5 million people of the Seattle-Tacoma metropolitan area.
Bodnar is also studying the role that volcanoes play in forming valuable mineral deposits. Volcanic magma can contain rich deposits of gold or copper – or not. Bodnar's focus is porphyry copper deposits, which include the famous Bingham Canyon, Utah, and Butte, Mont., deposits, although he has studied gold deposits related to volcanoes as well (reported in The Economist, Oct. 21, 1995). "One to two kilometers below the top of a volcano, as the magma chamber cools, minerals precipitate. Later, the volcano is eroded to reveal these deposits. When I study these deposits, I am studying the fossil of a volcano," Bodnar said.
"There are thousands of fossil (or extinct) volcanoes worldwide, but only a few have concentrations of metals that can be mined. Why? Fluid inclusions offer the key to answering this question," Bodnar said.
As molten magma cools and crystallizes, water enters and is heated. What happens at this "magmatic-hydrothermal," or hot-water, transition determines whether or not an ore deposit forms, he said. "We want to analyze melt inclusions and fluid inclusions that formed at the same time to try to understand what happens to the chemistry within the magma chamber as the system evolves from the magmatic stage to the hot-water stage."
"The few LA-ICP-MS analyses of fluid inclusions that have been made provide information on the amount of metal that is dissolved in natural, ore-forming fluids, and analyses of melt and sulfide inclusions are providing important insights on the geochemistry of incompatible elements during magmatic crystallization," said Kesler. "Preliminary data are challenging well established concepts and are likely to lead to completely new theories about the processes that form mineral deposits and other geochemical anomalies in the upper crust."
Meanwhile, mining companies could save hundreds of millions of dollars in exploration costs if analysis of inclusions in surface rocks could indicate whether or not to drill.
Researchers from around the world will be able to use the new National Facility for Laser Ablation Analysis of Fluid Inclusions at Virginia Tech to explore rocks hundreds of millions of years old for knowledge ranging from how copper and gold deposits formed to the opportunities for life across the solar system.
08.05.2013 | Event News | <urn:uuid:4408b3c9-9cb0-4c25-95a3-07f55a502c39> | 3.625 | 2,422 | Content Listing | Science & Tech. | 40.658177 |
Details for this species: King George Is. (Antarctic)
Source: Chwedorzewska 2008
On King George Island Poa annua spreads and inhabits areas of the Arctowski oasis in the vicinity of the Polish Antarctic Station. P. annua has been observed in the vicinity of the Polish Antarctic Station since 1985. Its origin still remains unknown. The abundance of P. annua only in the vicinity of Arctowski Station suggests that Polish Antarctic expeditions may have been responsible for the introduction of the species over the course of 7 years. Thus, the most probable source of introduction is Central Europe.
At the Arctowski oasis P. annua colonises places strongly altered by human activities, where the soil structure is destroyed by earthworks or caterpillars, and prefers sites sheltered from the wind (Olech 1996, in Chwedorzewska 2008).
Ecosystem change: Poa annua was noted for the first time in natural communities in summer 2005 (M. Olech Unpub. Data). However, at South Georgia and the Arctowski oasis it seems to be persistent rather than invasive, and has a restricted distribution in the colonisation area (M. Olech Unpub. Data; McIntosh and Walton 2000, in Chwedorzewska 2008).
Last modified: 26/03/2009 3:13:48 p.m. | <urn:uuid:7bc6813f-a9eb-439f-a0c4-022886bb54c7> | 2.875 | 330 | Knowledge Article | Science & Tech. | 57.199006 |
Search our database of handpicked sites
Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest.
You searched for
We found 12 results on physics.org and 38 results in our database of sites
(38 are Websites,
0 are Videos,
and 0 are Experiments)
Search results on physics.org
Search results from our links database
Gravity supplies the necessary centripetal force to hold a satellite in orbit about the earth. The circular orbit is a special case since orbits are generally ellipses, or hyperbolas.
Orbit simulator software for school and college students and teachers. The orbit simulator allows laboratory exercises on gravitational physics, elliptical orbits, double stars, escape velocity, ...
Section of the excellent hyperphysics site dealing with orbits and the concepts connected to this.
Detailed analysis of geostationary orbits; includes diagrams and the original 1945 description from Arthur C. Clarke.
Student selects satellite velocity to try to keep satellite in earth orbit
Orbits in gravitational fields
A simulation to show how gravity affects orbits
This site takes you through the calculation of an orbit velocity.
Definition of a geostationary orbit with some accompanying calculations.
An elementary quiz based on the position and orbits of the planets.
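Several of the links above concern orbital velocity and geostationary orbits. As an illustrative sketch (the constants below are standard textbook values for Earth, not taken from any of the listed sites), gravity supplying the centripetal force gives v = √(GM/r), and fixing the period at one sidereal day yields the geostationary radius via Kepler's third law:

```python
import math

GM_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1     # seconds for one rotation relative to the stars

# Kepler's third law for a circular orbit: r^3 = GM * T^2 / (4 * pi^2)
r = (GM_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)

# Gravity as centripetal force: GM*m/r^2 = m*v^2/r  =>  v = sqrt(GM/r)
v = math.sqrt(GM_EARTH / r)

print(round(r / 1000), round(v))  # → 42164 3075
```

That radius of about 42,164 km from Earth's center corresponds to the familiar altitude of roughly 35,786 km above the surface.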
Showing 1 - 10 of 38 | <urn:uuid:8d9d5494-a5ef-4df1-ae4f-dde031167930> | 3.6875 | 288 | Content Listing | Science & Tech. | 37.013503 |
Sorry for the late reply, was a busy day.
Had a better look at the mounds and deduced that it is in fact a mound of Trinervitermes, which is in the same subfamily as Amitermes; just the size of the holes in the mound made me rethink slightly. They both exhibit blackened mounds, and soldiers are characterized by a snouted, long, pointy, conical head capsule which secretes a deterrent fluid upon harassment, which groups both in the subfamily Nasutitermitinae (if you care)…
Right, so the mounds are made from digested paste as I mentioned earlier. The Trinervitermes are unlike the Macrotermes [which I mentioned in another thread about who is responsible for the large termite heuweltjies/towers found in Kruger] as they do not need to cultivate fungus to break down their cellulose-rich plant-material diets like the Macrotermes do.
They contain enzymes within their hindgut and saliva which enable the breakdown of cellulose. This is a direct result of being in symbiosis with unicellular eukaryotic flagellates, often making up 1/3 of the termite's mass. These gut flagellates are often colonized by prokaryotic bacteria. Endomicrobia are a lineage of bacterial microbes present in, and often restricted to, the hindguts of certain termites and wood-eating cockroaches. These bacteria (often referred to as TG-1 [termite group 1] bacteria) are capable of cellulase production and therefore cellulose digestion.
This particular termite species happens to do all this with dark pigmented chemicals persisting in the digested 'waste' material, which gives its saliva, and therefore its mound (and old queens), their colour.
Now, queens are housed in elaborate chambers underneath such visible mounds to offer a certain degree of protection. These colonies grow very slowly and take about a decade to reach maturity. Like an iceberg, 95% of the mound is under the surface, so reaching the queen takes some serious digging.
Predators such as the aardvark are specialized to get past such protection, as they are incredibly capable earth-moving machines and have evolved for just such purposes. They dig into the mound and have a feast.
The holes are as ecologically important as the mounds themselves, as many species use them for shelter. As I mentioned above, warthogs can dig, but they are lazy creatures and would rather modify existing aardvark burrows. Few African species are as strictly diurnal (daytime-active) as the warthog. This is useful, as porcupines are nocturnal and use such holes during the day for shelter and sleep. When the warthog gets up in the morning to start the day and leaves the den, the porcupine is just coming home. When the warthog comes to sleep, the porcupine goes out foraging.
This is a great setup, except warthogs have tusks at the front. So for protection they back up down the hole, ready to fend off trespassers with their only weapons. Porcupines, on the other hand, have spines on their backs, so they move down head first, leaving the well-protected rear wall of spines facing outwards to fend off intruders.
I have more than once seen a warthog with quills sticking in its porcine posterior for coming home early, backing up down its hole and not seeing the porcupine still sleeping in it. Recipe for disaster: a prick in the bum and an uncomfortable few days for the warthog.
AWDs (African wild dogs) do not dig their own dens, which results in aardvark burrows playing a very active role in their conservation, as AWDs need dens in which to litter pups. Spotted hyenas often utilize such dens, as do ground squirrels and serval.
Sorry if I went into too much
detail I just have the pleasure of working with probably the top entomologist in South Africa. Who recently (2002) played a huge part in describing the new insect order mantophasmatodea, the first new order since ice crawlers in 1914, bringing the total insect orders to 30, and the first to note them as far south as namaqualand here in Sunny SA…my inspiration to mouth off big words hehehe | <urn:uuid:891ae86b-6a6b-4ff8-850f-3bde0cbed53e> | 3.203125 | 929 | Comment Section | Science & Tech. | 44.137216 |
Unlike other primates, the whites of human eyes contrast sharply with our colored irises and dark pupils. One theory suggests that our eyes evolved this way specifically to make it easier to figure out the direction of another person's gaze. If this theory is correct, you would expect humans to pay more attention to eye orientation than other primates do.
To find out, researchers at the Max Planck Institute for Evolutionary Anthropology in Leipzig compared the behavior of adult chimps, gorillas, bonobos and human children
This article was originally published with the title Keep Your Eyes on the Eyes. | <urn:uuid:e1e5ae7b-2a9c-46e1-8ff0-6e7117fdd25e> | 3.34375 | 122 | Truncated | Science & Tech. | 29.18 |
San Antonio, Texas is saving water and saving the Texas blind salamander. The Texas blind salamander (Eurycea rathbuni) is a rare, endemic, aquatic salamander only 5 inches long that lives in underground cave streams. They have no functioning eyes and little skin pigment making them appear white. These unique salamanders hunt prey in the pitch black caverns by detecting water pressure changes created by moving prey. This species is listed as Endangered because it is only found within the Edwards Aquifer system near San Antonio. Extensive draining of the aquifer for human usage was threatening the survival of the salamander. The Edwards Aquifer Authority was formed to help save the Texas blind salamander and six other endemic species associated with the aquifer. Since 1984, San Antonio has experienced a 42% drop in per capita water demand. The city and its residents have implemented water conservation measures such as installing low flush toilets, reducing lawn watering, and using precision sprinklers for farm lands. Hopefully, these and other water conservation methods will help the strange little Texas blind salamander continue to stalk prey through the midnight caves of the Edwards Aquifer. Read Article. | <urn:uuid:e28c2d50-98d6-4e7d-9044-b67a62fe2897> | 3.4375 | 241 | Truncated | Science & Tech. | 32.413003 |
The reason, in one word, is RUNAWAY.
Runaway is a descriptive term for what scientists call abrupt, irreversible, rapid global warming, which would be a global climate catastrophe. It involves tipping points.
A 2012 paper by Prof. C. Duarte says the Arctic could trigger a domino effect around the world.
The science says it can happen (IPCC 2007), but it is not included in the linear projecting climate models.
Abrupt climate change encompasses two extreme results of Arctic warming: abrupt cooling can happen (thermohaline circulation change), as can abrupt warming (positive Arctic feedbacks). This page covers the warming process. Abrupt climate warming could play out over 10 years or over more than 100 years.
The major risks to society and environment from climate change are posed primarily by abrupt and extreme climate phenomena. Potential forms of abrupt change include [...] widespread melting of permafrost leading to large-scale shifts in the carbon cycle. Abrupt and extreme phenomena can exceed the thresholds for ecological and societal adaptation through either the rapid rate or magnitude of the associated climate change [IPCC 2007].
The US is conducting research on abrupt warming scenarios under the Investigation of the Magnitudes and Probabilities of Abrupt Climate Transitions (IMPACTS) Project out of the Lawrence Berkeley National Laboratory. This includes the following topics:
Rapid destabilization of methane hydrates in Arctic Ocean sediments, and mega-droughts in North America, including the role of biosphere-atmosphere feedbacks.
The background information confirms these are all real risks with continued warming. What is not addressed is that these would be mutually reinforcing.
The Arctic responds to global warming by increasing the rate of warming through several feedback processes, which, if allowed to become established, will inevitably lead to uncontrollable accelerating global warming or what for many years has been called "runaway" climate change. This is not to be confused with the scientific term "runaway greenhouse effect" or Venus syndrome.
There are two general, very large feedback processes in the Arctic that definitely will increase as global warming continues. One is melting Arctic ice and the other is emitting Arctic methane. The loss of ice will definitely increase the emission of Arctic methane to the atmosphere, which makes the Arctic sea ice meltdown the big planetary emergency.
"Runaway" is an apt description, as runaway climate change is the result of the three following combined inevitabilities:
Increased radiative forcing and inertia from multiple feedbacks
We all know about the rapid meltdown of the Arctic summer sea ice. It has long been known that the vast expanse (2.5 million square miles before industrial atmospheric GHG pollution) of the year-round Arctic sea ice acts in the summer as a cooling influence on the Arctic region, the northern hemisphere and, to some extent, the whole global climate. Its loss in the summertime will lead to additional warming.
This emergency to our planet's biosphere comes from multiple positive Arctic climate feedback processes, each of which affects the whole biosphere and each of which will increase the rate of global warming / temperature increase. Atmospheric temperatures are rising faster in the Arctic than in other regions.
Already today, all the potentially huge Arctic positive climate feedbacks are operating.
The Arctic summer sea ice is in a rapid, extremely dangerous meltdown process. The Arctic summer ice albedo loss feedback (i.e., open sea absorbs more heat than ice, which reflects much of it) passed its tipping point in 2007 – many decades earlier than models projected, and scientists now agree the Arctic will be ice free during the summer by 2030. However, that is not to say it couldn't happen very much earlier.
Models of sea ice volume indicate a seasonally ice-free Arctic likely by 2015, and possibly as soon as the summer of 2013.
Such a collapse will inexorably lead to an accelerated rate of Arctic carbon feedback emissions of methane from warming wetland peat bogs and thawing permafrost.
The retreat of sea ice appears to be leading to the most catastrophic feedback process of all. This is the venting of methane to the atmosphere from frozen methane gas hydrates on the sea floor of the Arctic continental shelf.
At the Fall Meeting of the American Geophysical Union in San Francisco from 5-9 December 2011, there was a session on Arctic Gas Hydrate Methane Release and Climate Change at which Dr. Igor Semiletov of the Far Eastern Branch of the Russian Academy of Sciences reported that dramatic and unprecedented plumes of methane – a greenhouse gas that is over 70 times more potent than carbon dioxide for 20 years after emission – were seen bubbling to the surface of the Arctic Ocean by scientists undertaking an extensive survey of the region. This has been reported by the UK's Independent newspaper and copied by news agencies around the world and in a number of online blogs.
All of these Arctic feedbacks are described in detail in the 2009 World Wildlife Fund (WWF) report, Arctic Climate Feedbacks: Global Implications.
If methane release from Arctic sea floor hydrates happens on a large scale — and this year's reports suggest that it will — then this situation can start an uncontrollable sequence of events that would make world agriculture and civilization unsustainable. It is a responsible alarm, not alarmist, to say that it is a real threat to the survival of humanity and most life on Earth.
What to do
There are several ways to tackle the problem if action is not delayed: they may be grouped together as geo-engineering solutions. However, they do require rapid mobilisation on national and international scales: first, to verify the science, and second, to implement the necessary counter-measures. There is an almost impossible challenge to implement the counter-measures quickly enough to prevent the possible collapse of the Arctic sea ice in summer 2013, but this challenge has to be faced as an international emergency. | <urn:uuid:c923e130-b14d-4e7a-984b-b82d62fc9490> | 4 | 1,215 | Knowledge Article | Science & Tech. | 33.687701 |
|Manatees are long-lived marine mammals whose diet consists of sea grass (Photo: S. Lutz)|
Benjamin S. Halpern et al. have published an article in the journal Nature unveiling the "Ocean Health Index", a tool to measure the condition of the world's oceans and coastal ecosystems. The Ocean Health Index also includes a map of global Blue Carbon storage.
An index to assess the health and benefits of the global ocean
Abstract: The ocean plays a critical role in supporting human well-being, from providing food, livelihoods and recreational opportunities to regulating the global climate. Sustainable management aimed at maintaining the flow of a broad range of benefits from the ocean requires a comprehensive and quantitative method to measure and monitor the health of coupled human–ocean systems. We created an index comprising ten diverse public goals for a healthy coupled human–ocean system and calculated the index for every coastal country. Globally, the overall index score was 60 out of 100 (range 36–86), with developed countries generally performing better than developing countries, but with notable exceptions. Only 5% of countries scored higher than 70, whereas 32% scored lower than 50. The index provides a powerful tool to raise public awareness, direct resource management, improve policy and prioritize scientific research.
Reference: Halpern, B. S. et al. (2012): An index to assess the health and benefits of the global ocean. Nature advance online publication: http://dx.doi.org/10.1038/nature11397.
Web page: http://www.oceanhealthindex.org
- Posted by Sven Stadtmann, GRID-Arendal
Developing OpenCL Programs Using Xcode
This chapter describes a streamlined process in which, using tools provided by OS X v10.7, you can include OpenCL kernels as resources in Xcode projects, compile them along with the rest of your application, and use Grand Central Dispatch as the queuing API for executing OpenCL commands and kernels on the CPU and GPU.
If you need to create OpenCL programs at run-time, with source loaded as a string or from a file, or if you want API-level control over queueing, see The OpenCL Specification, available from the Khronos Group at http://www.khronos.org/registry/cl/.
In the OpenCL specification, computational processors are called devices. An OpenCL device has one or more compute units. A workgroup executes on a single compute unit. A compute unit is composed of one or more processing elements and local memory.
A Macintosh computer has a single CPU and one or more GPUs. The CPU on a Macintosh has multiple compute units, which is why it is called a multi-core CPU. The number of compute units in a CPU limits the number of workgroups that can execute concurrently.
CPUs commonly contain two to eight compute units, with the maximum increasing year-to-year. A graphics processing unit (GPU) typically contains many compute units—the GPUs in current Macintosh systems feature tens of compute units, and future GPUs may contain hundreds. As used by OpenCL, a CPU with eight compute units is considered a single device, as is a GPU with 100 compute units.
The OS X v10.7 implementation of the OpenCL API facilitates designing and coding data parallel programs to run on both CPU and GPU devices. In a data parallel program, the same program (or kernel) runs concurrently on different pieces of data and each invocation is called a work item and given a work item ID. The work item IDs are organized in up to three dimensions (called an N-D range).
A kernel is essentially a function, written in the OpenCL C language, that can be compiled for execution on any device that supports OpenCL. Although kernels are enqueued for execution by host applications written in C, C++, or Objective-C, a kernel must be compiled separately so that it can be customized for the device on which it is going to run. You can write your OpenCL kernel source code in a separate file or include it inline in your host application source code.
OpenCL kernels can be:
Compiled at compile time, then run when queued by the host application
Compiled and then run at runtime when queued by the host application
Run from a previously-built binary
A work item is a parallel execution of a kernel on some data. It is analogous to a thread. Each kernel is executed upon hundreds of thousands of work items.
A workgroup is a set of work items. Each workgroup is executed on a single compute unit.
Workgroup dimensions determine how the input is operated upon in parallel. The application usually specifies the dimensions based on the size of the input. There are constraints: for example, there may be a maximum number of work items that can be launched for a certain kernel on a certain device.
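The arithmetic relating the global N-D range, work items, and workgroups can be sketched outside OpenCL itself. The following is a language-neutral illustration in Python; the 1024×768 range and 16×16 workgroup size are made-up values, and real code would respect the device constraints described above:

```python
import math

def workgroup_counts(global_size, local_size):
    """Workgroups per dimension when a global N-D range is
    partitioned into workgroups of a given local size."""
    return tuple(math.ceil(g / l) for g, l in zip(global_size, local_size))

# Hypothetical 2-D range: 1024 x 768 work items in 16 x 16 workgroups.
global_size = (1024, 768)
local_size = (16, 16)

counts = workgroup_counts(global_size, local_size)
print(counts)                 # (64, 48)
print(counts[0] * counts[1])  # 3072 workgroups in total

# Each work item has a unique global ID; for workgroup (wx, wy) and
# local ID (lx, ly), the global ID is wg * local_size + local.
def global_id(wg, local, local_size):
    return tuple(w * s + l for w, s, l in zip(wg, local_size, local))

print(global_id((2, 3), (5, 7), local_size))  # (37, 55)
```

Only the workgroups that fit on the device's compute units run truly concurrently; the rest are scheduled as units free up.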
The program that calls OpenCL functions to set up the context in which kernels run and enqueue the kernels for execution is known as the host application. The application is run by OS X on the CPU. The device on which the host application executes is known as the host device. Before kernels can be run, the host application typically completes the following steps:
Determine what compute devices are available, if necessary.
Select compute devices appropriate for the application.
Create dispatch queues for selected compute devices.
Allocate the memory objects needed by the kernels for execution. (This step may occur earlier in the process, as convenient.)
Note that the host device (the CPU) can itself be an OpenCL device and can be used to execute kernels.
The host application can enqueue commands to read from and write to memory objects. See “Creating and Managing Memory Objects in OS X OpenCL.” Memory objects are used to manipulate device memory. There are two types of memory objects used in OpenCL: buffer objects and image objects. Buffer objects can contain any type of data; image objects contain data organized into pixels in a given format.
Essential Development Tasks
In OS X v10.7, the OpenCL development process includes these major steps:
Identify the tasks to be parallelized.
Determining how to parallelize your program effectively is often the hardest part of developing an OpenCL program. See “Identifying Parallelizable Routines.”
In Xcode, write your kernel functions. See “Basic Kernel Code Sample.”
In Xcode, write the host code that will be calling the kernel(s). See “Basic Host Code Sample.”
Compile using Xcode. See “Creating An Application That Uses OpenCL In Xcode.”
Debug (if necessary). See “Debugging.”
Improve performance (if necessary). See “Improving Performance.”
© 2012 Apple Inc. All Rights Reserved. (Last updated: 2012-07-23)
Vortices of water, called "eddies," form off the northwestern coast of North America in the winter, and are particularly large during El Niño winters, when warm waters along the coast flow northward at greater speed than normal. These eddies carry local nutrient-rich waters far offshore into regions with low ambient nutrient levels. As the eddies transport fresh water and nutrients out into the middle of the Gulf of Alaska, they provide nourishment for phytoplankton, the microscopic plants that form the foundation of the marine food chain. (To learn more about these tiny plants, read: What are Phytoplankton?) How important are these eddies for the Gulf's ecosystem? It is hard to answer that question now but, in one example, scientists say it is possible that variations in the size and frequency of the eddies are one of the factors governing the success of salmon in the region.
Recently, Canadian and American scientists teamed up to collect and analyze data from satellite and ship-borne sensors taken over the region. With these data, they set out to determine the properties and behavior of the eddies and measure their impact on the Gulf of Alaska's ecosystem. The researchers found that the eddies, particularly those created during El Niño years, can last several years. They found that the eddies migrate slowly through the Gulf, moved about by shifting currents, and replenish nutrient-starved regions with iron and nitrate.
"Our concern over the depletion of fish in this region makes
satellite altimeter measurements such as TOPEX/Poseidon data
particularly important in understanding the formation and movement of
these nutrient-rich eddies, and how they influence salmon growth and
other fisheries," says William Crawford of Fisheries and Oceans
Canada at the Institute of Ocean Sciences.
|Eddies are rotating masses of water in the ocean that typically form along the boundaries of ocean currents. In the Gulf of Alaska, eddies of warm water, filled with nutrients from shallow coastal water, mix with the cold water off the continental shelf. The mixing fertilizes the nutrient-poor water of the gulf, resulting in blooms of phytoplankton (microscopic ocean plants.) This true color image from the Sea-viewing Wide Field-of-view Sensor shows the green spiral of an eddy in bright blue water. Also notice the sediments suspended in the water along the south coast of Alaska. (Image provided by the SeaWiFS Project, NASA/Goddard Space Flight Center, and ORBIMAGE)|
In 1998, he and colleague Frank Whitney began using TOPEX/Poseidon images produced by the University of Colorado to track the large-scale eddies. The satellite data, along with in situ data collected aboard a Canadian Coast Guard Ship gave Crawford and Whitney unique insight into these eddies as a natural mechanism for nourishing the sea.
Satellites monitor movement and evolution of eddies continuously. Using radar that sees through clouds, the TOPEX/Poseidon mission and the European Remote-Sensing Satellite-2 (ERS-2) produce maps of sea surface height. Since eddies that are warmer than the surrounding water are higher than the usual sea surface height, they appear on these maps. This image shows the difference from normal sea surface height for the northeastern Pacific. Warm core eddies appear as red circles. (Image courtesy Colorado Center for Astrodynamics Research)
|Eddies in the Gulf of Alaska|
It was in mid-September 1998 that Bill Crawford and Frank Whitney met over coffee to discuss Whitney's cruise on the Coast Guard Vessel John P. Tully into the Gulf of Alaska. While senior scientist on board ship the previous three weeks, Whitney had found a huge, warm, relatively fresh water mass 200 km wide and more than 1000 m deep, about 600 km west of Vancouver Island.
"Do you see this feature in the TOPEX data?" Whitney asked. TOPEX/Poseidon can measure sea surface height accurate to within 2 centimeters. Crawford and colleague Josef Cherniawsky of the Institute of Ocean Sciences had been processing TOPEX/Poseidon data for several months to look for sea level rise in coastal waters, as part of the El Niño event the previous winter. The idea is that as its temperature increases, sea water expands, and TOPEX/Poseidon can measure the corresponding change in sea surface elevation. Although most water in the eddy is warmer than the surrounding ocean, the waters near the surface are either similar or even slightly cooler than surrounding seas. For this reason, satellites that sense ocean surface temperature seldom find these eddies.
An American-French program launched TOPEX/Poseidon in 1992, and released the first data from it in October that year. The satellite senses sea surface height along a 20-km-wide swath, on an orbital track that repeats about every ten days. Each ten-day sample is denoted a "cycle." "TOPEX" refers to the American dual-frequency radar sensor that is turned on for nine of the ten cycles. "Poseidon" is the French radar unit that samples on every tenth cycle. The satellite can measure sea surface topography accurately to within several centimeters.
Cherniawsky told Crawford that he had found a new Web site that posts near-real-time TOPEX images. Crawford signed on, entered the latitude and longitude range of the Gulf of Alaska and the cruise date, and there it was: a red bull's-eye of water whose core rose 30 cm above the surrounding ocean, at the same place and diameter as Frank's warm, relatively fresh water mass (see letter "A" in the sea surface height image below). He had just found a Web site that posts, for free, the most up-to-date, accurate information on these eddies!
|Temperature (°C) in the Gulf of Alaska as measured in Aug-Sept 1998 from the Canadian Coast Guard Ship John P. Tully. The depressed isotherms near 600 km from the coast are at point A in the sea surface height image below. Canadian scientists have sampled ocean waters from Vancouver Island to Station P at 50°N, 145°W for more than 40 years. (Image courtesy Fisheries and Oceans Canada, Institute of Ocean Sciences)|
Robert Leben of the University of Colorado had posted the web site only three weeks before Cherniawsky looked. Leben wanted to enable the public to find their own eddies. He combined TOPEX/Poseidon altimetry data with similar observations by the ERS-2 satellite, launched by the European Space Agency. Leben then applied spatial filters to enhance the display of ocean eddies and suppress large-scale seasonal signals. He developed this tool for his own studies in the Gulf of Mexico, but by putting all the data on his web site, he provided a new "digital eye on the world." To see for yourself, visit his altimeter data viewer site.
Crawford used this web site to plot images of the eddy over the previous seven months, and continues to track the eddy. Its status in June 2000 was ambiguous, but a trace of it might be found at 45.5°N, 142°W. The satellite images revealed that the eddy formed in winter 1997/1998 along the West Coast of the Queen Charlotte Islands. He labeled it Haida-1998, after the First Nations of the region and its year of formation. Crawford and Cherniawsky have completed their own analysis of TOPEX/POSEIDON and ERS-2 data, beginning with processed data provided by Richard Ray and Brian Beckley of NASA Goddard Space Flight Center. Crawford and Cherniawsky have applied their own tidal constants near shore, and have found that the eddy first began to form off the west coast of the Queen Charlotte Islands in November 1997. (These tidal constants are provided by a detailed numerical model of tides in the Gulf of Alaska computed by the team of scientists at the Institute of Ocean Sciences led by Michael Foreman.) They also determined that some of these eddies might be the source of meanders and eddies in the Alaskan Stream (Crawford et al., 2000).
This team is presently using the same models to determine the average seasonal height of the sea surface along the Canadian margin of the Gulf (Cherniawsky et al., submitted; Foreman et al., submitted). Once combined with all satellite altimetry data, they will be able to determine absolute sea surface heights, and use them to compute northward flow of surface currents along our coast.
The Colorado web site showed Haida-1998 to be one of an annual supply of eddies that transport fresh water and nutrients into the Gulf from the Alaskan Panhandle and the Canadian West Coast. The unusually high elevation of the eddy core marks it as one of the largest eddies observed in this region. Haida eddies belong to a class of anticyclonic, coastal-generated eddies first noticed in water property data near Sitka, Alaska at 57 °N (Tabata, 1982), and later in satellite infrared measurements by Thomson and Gower (1998). Crawford and Whitney (1999) identified another region where eddies are typically generated between 51°N and 54°N, off the West Coast of the Queen Charlotte Islands. Over the years 1994 to 1999, they found that three to five large eddies formed along the Alaskan Panhandle and Canadian West Coast in any one winter.
These false-color images show contours of sea surface height from ERS-2 and TOPEX altimeters, as displayed on the Colorado Center for Astrodynamics Research Global Near Real-Time Altimeter Data Viewer web site.
|Inside an Eddy|
Whitney's salinity and temperature measurements in August 1998 showed the waters in Haida-1998 to be fresher and warmer than surrounding waters below 100-m depth (see previous page). Above 100 m in depth, both salinity and temperature in the eddy were slightly lower than in surrounding waters. Dynamic height calculations, which use seawater density profiles to determine how high the eddy surface "sits" above the surrounding ocean surface, reveal that the sea surface in the core of the eddy was 30 cm higher than outside the eddy. This calculation matches the altimetry measurements from TOPEX/Poseidon. Nutrient levels in its thermocline were substantially higher than in surrounding waters. (Here, "thermocline" refers to the temperature gradient across the width and depth of the eddy.) The ocean water type of this eddy matches that found near the Queen Charlotte Islands in winter (53°N, 133°W).
In February and June 1999, Crawford sent the web-generated images to Whitney at sea on the John P. Tully to direct him to the eddy's location for sampling. His measurements taken in September 98, February 99, and June 99 show the steady erosion of the nutrient excess in the eddy waters, and a three-fold enhancement of phytoplankton in the September 1998 samples around the perimeter of the eddy. Whitney's measurements demonstrate that the eddy provided nutrients to a nitrate-starved region of the Gulf of Alaska (Whitney, Wong, and Boyd 1998).
According to Crawford and Whitney, the eddies usually drift westward and disappear within two years in deep waters in the Gulf of Alaska. These rotating masses of water average up to two hundred kilometers in diameter, and a large eddy can contain up to 5,000 cubic kilometers of water, which is about the volume of Lake Michigan.
The Canadian Coast Guard Offshore Research & Survey Ship, John P. Tully. (Image courtesy Canadian Coast Guard)
Crawford notes that in 1999, colleagues of theirs published a paper showing that Sitka and Haida eddies are frequently created in their computer simulations of wind-driven currents along this coast (Melsom 1999). The researchers believe it is baroclinic instability of the coastal flow that triggers the set up of eddies. ("Baroclinic instability" may occur in a flow in which there are density gradients along surfaces where the pressure is constant. Such instabilities are typically produced in rotating systems where there is ample potential energy being converted into kinetic energy.)
Based on calculations of dynamic heights of the 100-m surface relative to the 1000-m surface, using archived water property data, two of the highest-elevation eddies were Haida-1998, and Haida-1983, both generated in severe El Niño winters. This finding supports the calculations by Melsom et al. (1999), based on their numerical model.
So what's up with these eddies now? By mid-June 2000, new Haida and Sitka eddies had drifted away from shore, and the final remnants of Haida-1998 were merging into the surrounding seas, as shown on the previous page. The eddies of 1999 were weak and by June 2000 had either disappeared or were barely visible. Whitney is senior scientist on another cruise of the John P. Tully to sample Haida-2000 in June 2000. Its position places it over Bowie Seamount, a potential Canadian Marine Protected Area. "We now have a combined eddy and seamount study, with too little time to sample both," says Whitney. The cruise was set up to examine nitrate and iron concentrations in the eddy, and to map their depletion in time and impacts on surrounding biota. He hopes to examine eddy water "upstream" of the seamount, and then run a quick survey over the seamount on the way home.
The first satellite images solved a previous mystery. Canadian scientists had wondered why Bowie Seamount biota could be so similar to coastal species, when no prevailing currents flowed from shore to the seamount. However, the track of Haida-1998, passing directly over Bowie Seamount, provided the missing link. The eddies had carried coastal species away from shore right to the seamount.
Cherniawsky, J.Y., M.G.G. Foreman and W.R. Crawford, Ocean Tides from TOPEX/POSEIDON sea level data, submitted to Journal of Atmospheric and Oceanic Technology.
Crawford, W.R., J.Y. Cherniawsky and M.G.G. Foreman, 2000: Multi-year meanders and eddies in Alaskan Stream as observed by TOPEX/Poseidon altimeter, Geophysical Research Letters, 27(7), 1025-1028.
Crawford, W.R. and F. Whitney, 1999: Mesoscale eddies aswirl with data in Gulf of Alaska Ocean, EOS, Transactions of the American Geophysical Union, 80(33), 365, 370.
Foreman, M.G.G., W.R. Crawford, J.F.R. Gower, L. Cuypers and V.A. Ballantyne, 1998: Tidal correction of TOPEX/POSEIDON altimetry for seasonal sea surface elevation and current determination off the Pacific Coast of Canada. J. Geophys. Res. 103:(C12) 27,979-27,998.
Foreman, M.G.G., W.R. Crawford, J.Y. Cherniawsky, R.F. Henry, and M. Tarbotton: A high-resolution assimilating tidal model for the Northeast Pacific Ocean, submitted to J. Geophys. Res.
Gower, J. F. R., and S. Tabata, 1993: Measurement of eddy motion in the northeast Pacific using the Geosat altimeter, in Satellite Remote Sensing of the Oceanic Environment, edited by I. S. F. Jones, Y. Sugimori and R. W. Stewart, pp 375-382, Seibutsu Kenkyusha, Tokyo.
Melsom, A., S. D. Meyers, H. E. Hurlburt, E. J. Metzger, J. J. O'Brien, 1999: ENSO effects on Gulf of Alaska eddies, Earth Inter., 3, pap. 001, (Available at http://EarthInteractions.org.)
Tabata, S., 1982: The anticyclonic, baroclinic eddy off Sitka, Alaska, in the Northeast Pacific Ocean, J. Phys. Oceanogr., 12, 1260-1282.
Thomson, R. E., and J. F. R. Gower, 1998: A basin-scale oceanic instability event in the Gulf of Alaska, J. Geophys. Res., 103, 3033-3040.
Whitney, F. A., C. S. Wong, and P. W. Boyd, 1998: Interannual variability in nitrate supply to surface waters of the Northeast Pacific Ocean, Mar. Ecol. Prog. Ser., 170, 15-23.
|This schematic shows an idealized eddy in the Gulf of Alaska. "Isotherms" are lines connecting points of equal temperature, as on a weather map. Warm, nutrient-rich coastal water spirals clockwise, forming the core of the eddy. Phytoplankton grow in the edges of the eddy near the ocean surface, nourished by the nutrient-rich eddy water. (Image by Robert Simmon)|
Diamagnetism is the property of an object or material that causes it to create a magnetic field in opposition to an externally applied magnetic field. It is a quantum mechanical effect that occurs in all materials; where it is the only contribution to the magnetism the material is called a diamagnet. Unlike a ferromagnet, a diamagnet is not a permanent magnet. Its magnetic permeability is less than μ0 (the permeability of free space). In most materials diamagnetism is a weak effect, but a superconductor repels the magnetic field entirely, apart from a thin layer at the surface.
Diamagnets were first discovered when Sebald Justinus Brugmans observed in 1778 that bismuth and antimony were repelled by magnetic fields. The term diamagnetism was coined by Michael Faraday in September 1845, when he realized that every material responded (in either a diamagnetic or paramagnetic way) to an applied magnetic field.
Diamagnetic materials
Diamagnetism, to a greater or lesser degree, is a property of all materials and always makes a weak contribution to the material's response to a magnetic field. However, for materials that show some other form of magnetism (such as ferromagnetism or paramagnetism), the diamagnetic contribution becomes negligible. Substances that mostly display diamagnetic behaviour are termed diamagnetic materials, or diamagnets. Materials called diamagnetic are those that non-physicists generally think of as non-magnetic, and include water, wood, most organic compounds such as petroleum and some plastics, and many metals including copper, particularly the heavy ones with many core electrons, such as mercury, gold and bismuth. The magnetic susceptibilities of various molecular fragments are called Pascal's constants.
Diamagnetic materials have a relative magnetic permeability that is less than or equal to 1, and therefore a magnetic susceptibility less than 0, since susceptibility is defined as χv = μv − 1. This means that diamagnetic materials are repelled by magnetic fields. However, since diamagnetism is such a weak property, its effects are not observable in everyday life. For example, the magnetic susceptibility of diamagnets such as water is χv = −9.05×10−6. The most strongly diamagnetic material is bismuth, χv = −1.66×10−4, although pyrolytic carbon may have a susceptibility of χv = −4.00×10−4 in one plane. Nevertheless, these values are orders of magnitude smaller than the magnetism exhibited by paramagnets and ferromagnets. Note that because χv is derived from the ratio of the internal magnetic field to the applied field, it is a dimensionless value.
All conductors exhibit an effective diamagnetism when they experience a changing magnetic field. The Lorentz force on electrons causes them to circulate around forming eddy currents. The eddy currents then produce an induced magnetic field opposite the applied field, resisting the conductor's motion.
Superconductors may be considered perfect diamagnets (χv = −1), since they expel all fields (except in a thin surface layer) due to the Meissner effect. However this effect is not due to eddy currents, as in ordinary diamagnetic materials (see the article on superconductivity).
Demonstrations of diamagnetism
Curving water surfaces
If a powerful magnet (such as a supermagnet) is covered with a layer of water (that is thin compared to the diameter of the magnet) then the field of the magnet significantly repels the water. This causes a slight dimple in the water's surface that may be seen by its reflection.
Diamagnetic levitation
Diamagnets may be levitated in stable equilibrium in a magnetic field, with no power consumption. Earnshaw's theorem seems to preclude the possibility of static magnetic levitation. However, Earnshaw's theorem only applies to objects with positive susceptibilities, such as ferromagnets (which have a permanent positive moment) and paramagnets (which induce a positive moment). These are attracted to field maxima, which do not exist in free space. Diamagnets (which induce a negative moment) are attracted to field minima, and there can be a field minimum in free space.
A thin slice of pyrolytic graphite, which is an unusually strong diamagnetic material, can be stably floated in a magnetic field, such as that from rare earth permanent magnets. This can be done with all components at room temperature, making a visually effective demonstration of diamagnetism.
The Radboud University Nijmegen, the Netherlands, has conducted experiments where water and other substances were successfully levitated. Most spectacularly, a live frog (see figure) was levitated.
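The field required for such levitation can be estimated from the force balance (|χv|/μ0)·B·dB/dz = ρg. The sketch below uses standard property values for water (they are textbook figures, not numbers from the Nijmegen experiments):

```python
import math

mu0 = 4 * math.pi * 1e-7   # T*m/A, permeability of free space
chi_v = -9.05e-6           # volume susceptibility of water (dimensionless)
rho = 1000.0               # kg/m^3, density of water
g = 9.81                   # m/s^2, gravitational acceleration

# Levitation requires the magnetic force per unit volume,
# (|chi_v| / mu0) * B * dB/dz, to balance the weight rho * g,
# so the product of field and field gradient must reach:
B_dBdz = mu0 * rho * g / abs(chi_v)
print(f"required B*dB/dz = {B_dBdz:.0f} T^2/m")  # roughly 1400 T^2/m
```

A product of order 1400 T²/m explains why these demonstrations need the very strong gradients found only near high-field research magnets, yet consume no power once the field is established.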
In September 2009, NASA's Jet Propulsion Laboratory in Pasadena, California announced they had successfully levitated mice using a superconducting magnet, an important step forward since mice are closer biologically to humans than frogs. They hope to perform experiments regarding the effects of microgravity on bone and muscle mass.
Recent experiments studying the growth of protein crystals has led to a technique using powerful magnets to allow growth in ways that counteract Earth's gravity.
A simple homemade device for demonstration can be constructed out of bismuth plates and a few permanent magnets that levitate a permanent magnet.
Theory of diamagnetism
The electrons in a material generally circulate in orbitals with effectively zero resistance and act like current loops. Thus it might be imagined that diamagnetic effects would be very common in general, since any applied magnetic field would generate currents in these loops that would oppose the change, in a similar way to superconductors, which are essentially perfect diamagnets. However, because the electrons are rigidly held in orbitals by the charge of the protons and are further constrained by quantum mechanics, most materials, although they exhibit diamagnetism, respond only very weakly to the applied field.
The Bohr–van Leeuwen theorem proves that there cannot be any diamagnetism or paramagnetism in a purely classical system. Yet the classical theory for Langevin diamagnetism gives the same prediction as the quantum theory. The classical theory is given below.
Langevin diamagnetism
The Langevin theory of diamagnetism applies to materials containing atoms with closed shells (see dielectrics). A field with intensity B, applied to an electron with charge e and mass m, gives rise to Larmor precession with frequency ω = eB / 2m. The number of revolutions per unit time is ω / 2π, so the current for an atom with Z electrons is (in SI units) I = −Ze²B / 4πm.
The magnetic moment of a current loop is equal to the current times the area of the loop. Suppose the field is aligned with the z axis. The average loop area can be given as π⟨ρ²⟩, where ⟨ρ²⟩ is the mean square distance of the electrons perpendicular to the z axis. The magnetic moment is therefore μ = Iπ⟨ρ²⟩ = −Ze²B⟨ρ²⟩ / 4m.
If the distribution of charge is spherically symmetric, we can suppose that the distributions of the x, y, z coordinates are independent and identically distributed. Then ⟨x²⟩ = ⟨y²⟩ = ⟨z²⟩ = ⟨r²⟩/3, where ⟨r²⟩ is the mean square distance of the electrons from the nucleus. Therefore ⟨ρ²⟩ = ⟨x²⟩ + ⟨y²⟩ = 2⟨r²⟩/3. If n is the number of atoms per unit volume, the diamagnetic susceptibility in SI units is χ = −μ0 n Z e²⟨r²⟩ / 6m.
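The Langevin formula χ = −μ0 n Z e²⟨r²⟩/6m is easy to evaluate numerically. The atomic parameters below (Z, ⟨r²⟩, and the number density n) are rough, illustrative values, not data for any particular element:

```python
import math

mu0 = 4 * math.pi * 1e-7   # T*m/A, permeability of free space
e = 1.602e-19              # C, elementary charge
m = 9.109e-31              # kg, electron mass

# Illustrative atomic parameters (assumptions):
Z = 10                     # electrons per atom
r2 = (1.0e-10) ** 2        # m^2, mean square electron-nucleus distance, ~(1 angstrom)^2
n = 5.0e28                 # atoms per m^3

chi = -mu0 * n * Z * e**2 * r2 / (6 * m)
print(f"chi_v = {chi:.1e}")
```

The result comes out negative and of order 10⁻⁵, matching the magnitudes quoted for real diamagnets such as water and bismuth.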
Diamagnetism in metals
The Langevin theory does not apply to metals because they have non-localized electrons. The theory for the diamagnetism of a free electron gas is called Landau diamagnetism, and instead considers the weak counter-acting field that forms when their trajectories are curved due to the Lorentz force. Landau diamagnetism, however, should be contrasted with Pauli paramagnetism, an effect associated with the polarization of delocalized electrons' spins.
- Nave, Carl L. "Magnetic Properties of Solids". Hyper Physics. Retrieved 2008-11-09.
- Beatty, Bill (2005). "Neodymium supermagnets: Some demonstrations—Diamagnetic water". Science Hobbyist. Retrieved September 2011.
- Quit007 (2011). "Diamagnetism Gallery". DeviantART. Retrieved September 2011.
- "The Frog That Learned to Fly". High Field Laboratory. Radboud University Nijmegen. 2011. Retrieved September 2011.
- "The Real Levitation". High Field Laboratory. Radboud University Nijmegen. 2011. Retrieved September 2011.
- Liu, Yuanming; Zhu, Da-Ming; Strayer, Donald M.; Israelsson, Ulf E. (2010). "Magnetic levitation of large water droplets and mice". Advances in Space Research 45 (1): 208–213. Bibcode:2010AdSpR..45..208L. doi:10.1016/j.asr.2009.08.033.
- Choi, Charles Q. (09-09-2009). "Mice levitated in lab". Live Science. Retrieved September 2011.
- Kleiner, Kurt (08-10-2007). "Magnetic gravity trick grows perfect crystals". New Scientist. Retrieved September 2011.
- "Fun with diamagnetic levitation". ForceField. 02-12-2008. Retrieved September 2011.
- Kittel, Charles (1986). Introduction to Solid State Physics (6th ed.). John Wiley & Sons. pp. 299–302. ISBN 0-471-87474-4.
- Chang, M. C. "Diamagnetism and paramagnetism". NTNU lecture notes. Retrieved 2011-02-24.
- Drakos, Nikos; Moore, Ross; Young, Peter (2002). "Landau diamagnetism". Electrons in a magnetic field. Retrieved 27 November 2012.
- Video of a museum-style magnetic elevation train model that uses diamagnetism
- Videos of frogs and other diamagnets levitated in a strong magnetic field
- Video of levitating pyrolytic graphite
- Video of a piece of neodymium magnet levitating between blocks of bismuth.
A positive pion is an up and an anti-down. A negative pion is a down and an anti-up. What's a pion with an electrical charge of 0?
Up and anti-up.
Or down and anti-down.
Funny thing is, both of those have the exact same quantum numbers - parity, spin, baryon number and the rest. So a neutral pion can be a mixture of (u + anti-u) and (d + anti-d). Two distinct neutral "pion-like" states actually result, and they decay differently. One is heavier, and we call it the eta meson.
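A quick sanity check on the charge bookkeeping (the quark charges, in units of the elementary charge e, are the standard values, assumed here rather than stated above):

```python
from fractions import Fraction

# Quark charges in units of e.
q = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3)}

def anti(quark):
    # An antiquark carries the opposite charge of its quark.
    return -q[quark]

print(q["u"] + anti("d"))  # pi+ : up + anti-down  -> +1
print(q["d"] + anti("u"))  # pi- : down + anti-up  -> -1
print(q["u"] + anti("u"))  # up + anti-up          -> 0
print(q["d"] + anti("d"))  # down + anti-down      -> 0 (both mix into the neutrals)
```

Any quark paired with its own antiquark gives charge zero, which is why u/anti-u, d/anti-d (and even s/anti-s) can all mix into the neutral states.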
Oops, I didn't yet mention the strange and anti-strange quark combination, which also gets tangled into the mixes... but it's not important to the neutral pion.
They're made of a combination of up and anti-up, down and anti-down, and strange and anti-strange. And that's just to begin with. To be perfectly accurate you'd have to add all six quarks (okay, maybe not the top).
The short reason is that all these quark combinations have the right quantum numbers and so are included. Feynman diagrams of all sorts contribute, for example:
In the above, the wavy lines are virtual vector bosons. I've put two in because, as Anna V notes, the pion is a pseudoscalar and so has zero spin. Of course the heavy quarks don't appear very much in a light meson since they have to be made from borrowed energy.
People aren't much interested in the heavy quark contribution to the pion. An analogous subject is the heavy quark content of the proton and a reference which will illustrate how this comes about is:
Note that the masses of the nucleons are about 1 GeV while the mass of the t-quark is around 171 GeV. | <urn:uuid:7bffc83d-962b-4557-97a7-e0cdfdec28e0> | 3.0625 | 405 | Q&A Forum | Science & Tech. | 69.053245 |
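For readers who want to check the bookkeeping, the quark-content mixtures described above can be written down and tested numerically. This is a sketch using the standard textbook conventions for the pi0, the octet eta, and the singlet eta' (eta–eta' mixing ignored); the labels are illustrative, not from the thread above:

```python
import numpy as np

# Quark-content coefficients in the (u ubar, d dbar, s sbar) basis,
# standard quark-model conventions:
pi0  = np.array([1, -1,  0]) / np.sqrt(2)   # pi0 = (u ubar - d dbar)/sqrt(2)
eta8 = np.array([1,  1, -2]) / np.sqrt(6)   # octet eta
eta1 = np.array([1,  1,  1]) / np.sqrt(3)   # singlet eta'

# Each state is normalized, and the three are mutually orthogonal --
# genuinely distinct particles, not the same mixture:
for s in (pi0, eta8, eta1):
    assert abs(np.dot(s, s) - 1) < 1e-12
for a, b in ((pi0, eta8), (pi0, eta1), (eta8, eta1)):
    assert abs(np.dot(a, b)) < 1e-12

print("pi0 s-sbar component:", pi0[2])   # 0.0 -- no strange content in the pi0
```

The zero s-sbar coefficient is the numerical version of "the strange combination is not important to the neutral pion."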
Raman Spectroscopy and Four-Wave Mixing in Sodium
In the 1920s, an Indian physicist named C.V. Raman noticed that
light incident on a variety of surfaces is sometimes scattered with different wavelengths
(Milonni 682). With the invention of the laser this phenomenon known as the Raman effect
could be studied extensively. Such a non-linear process occurs when a photon of energy E1
excites an atom to a "virtual state" and then quickly relaxes to an eigenstate E3
, releasing a photon of energy E2 = E1 - E3.
Unlike a fluorescence process, Raman scattering involves no transfer of electron
population to the intermediate state. It is an effect of the superposition of waves.
The electron simply begins in its initial state and ends in state E3.
If the electron ends up in a higher state than its initial state, this is called
"Stokes scattering;" if it ends up in a lower state, the effect is called "anti-Stokes scattering."
A very useful and popular spectroscopic technique known as Coherent
Anti-Stokes Raman Spectroscopy (C.A.R.S.) employs these phenomena. Raman scattering
may also take place in molecules: transitions to a vibrational virtual state and then to
various vibrational and rotational levels yield Stokes and Anti-Stokes lines.
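The energy bookkeeping behind Stokes and Anti-Stokes lines is easy to sketch. The numbers below are purely illustrative (a 532 nm pump and a 992 cm^-1 vibrational shift, a common textbook value) and are not the wavelengths used in the experiments described here:

```python
def raman_lines(pump_nm, shift_cm1):
    """Stokes/anti-Stokes wavelengths for a pump and a Raman shift (cm^-1)."""
    pump_cm1 = 1e7 / pump_nm                      # wavelength (nm) -> wavenumber (cm^-1)
    stokes      = 1e7 / (pump_cm1 - shift_cm1)    # photon loses energy: longer wavelength
    anti_stokes = 1e7 / (pump_cm1 + shift_cm1)    # photon gains energy: shorter wavelength
    return stokes, anti_stokes

s, a = raman_lines(532.0, 992.0)
print(f"Stokes: {s:.1f} nm, anti-Stokes: {a:.1f} nm")   # ~561.6 nm and ~505.3 nm
```

Working in wavenumbers makes the shift additive, which is why Raman shifts are conventionally quoted in cm^-1.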
We believe we have observed this effect in our data for two-photon absorption of
Cesium: one photon at 6548 angstroms excited the atoms to a virtual state which quickly
decayed to the 5d state, absorbed another photon to reach the 12p state, and then ionized.
Four Wave Mixing
Four-wave mixing refers to the general phenomenon by which four photons of light, some
of which are incident on an atom, constructively interfere leaving the atom in its
original state. This may be in the form of third-harmonic generation in which an atom
absorbs three photons simultaneously and transitions to an eigenstate before relaxing back
to the original state with the release of a photon with energy equal to the sum of the
three incident photon energies. In the particular case of Sodium which we studied, an atom
absorbs two photons of light simultaneously and transitions to a virtual state just below
the 3d3/2 and 3d5/2 eigenstates. In this kind of
"parametric four-wave mixing," there is no transfer of population to this
virtual state, only an induced polarization of the atom and a superposition of waves
(Moore 3, 20). The two photons are Raman scattered, generating two more photons and
bringing the atom to another virtual state (very close to the 3p1/2 and 3p3/2
lines) before returning to the initial state (1s2 2s2 2p6 3s).
In the end, we have put in two identical photons but find two different photons scattered;
the polarization of the atom induced by an incident electric field is non-linear.
As in all photon excitation processes, four-wave mixing must conserve energy and
momentum. To obtain the constructive interference required for four-wave mixing, there
must also be a phase correspondence between incident and scattered photons. The
electric fields of the photons are plane waves of the form

E_i(r, t) = E_0i cos(k_i · r - ω_i t)

where k_i are the phase vectors (wave vectors) of the fields.
The phase vectors ki of the incident photons must match
head to tail with the vectors of the two emitted photons. When the two incident photons
bring the atom to an eigenstate, the index of refraction is such that this phase matching
condition cannot be met. The parametric four-wave emission will be suppressed. When we
excite the atom off resonance, the phase matching requirement may be met; however, because
the indices of refraction are different for each wavelength, this requirement will cause
the emitted photons to propagate at an angle from the path of the incident light. This
results in a cone of light emitted from the sodium vapor. No light of the scattered wavelengths may be emitted along the axis of the laser beam.
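The phase-matching geometry can be sketched numerically. Treating 2k1 = k3 + k4 as a planar triangle and solving for the angle of k4 with the law of cosines gives the cone half-angle; the wavelengths and refractive indices below are hypothetical, chosen only so that the triangle closes, and are not the actual sodium values:

```python
import math

def cone_angle_deg(lam1, lam3, lam4, n1, n3, n4):
    """Angle of emitted photon k4 off the pump axis, from the planar
    phase-matching triangle 2*k1 = k3 + k4 (law of cosines)."""
    k1 = 2 * math.pi * n1 / lam1    # wave-vector magnitudes, 1/nm
    k3 = 2 * math.pi * n3 / lam3
    k4 = 2 * math.pi * n4 / lam4
    base = 2 * k1
    cos_t4 = (base**2 + k4**2 - k3**2) / (2 * base * k4)
    if abs(cos_t4) > 1:
        raise ValueError("phase matching cannot be satisfied with these indices")
    return math.degrees(math.acos(cos_t4))

# Hypothetical numbers (wavelengths in nm, near-unity indices):
lam1, lam3 = 578.0, 589.0
lam4 = 1 / (2 / lam1 - 1 / lam3)      # energy conservation: 2*w1 = w3 + w4
theta = cone_angle_deg(lam1, lam3, lam4, 1.0000, 1.0005, 1.0001)
print(f"lam4 = {lam4:.1f} nm, emitted ~{theta:.1f} deg off axis")
```

Because the indices differ slightly at each wavelength, the closing angle is small but nonzero - the origin of the emission cone described above.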
An exponential function is a mathematical function of the following form:
f(x) = a^x

where x is a variable, and a is a constant called the base of the function. The most commonly encountered exponential-function base is the transcendental number e, which is equal to approximately 2.71828. Thus, the above expression becomes:

f(x) = e^x

When the exponent in this function increases by 1, the value of the function increases by a factor of e. When the exponent decreases by 1, the value of the function decreases by this same factor (it is divided by e).
In electronics and experimental science, base-10 exponential functions are encountered. The general form is:
f(x) = 10^x
When the exponent increases by 1, the value of the base-10 function increases by a factor of 10; when the exponent decreases by 1, the value of the function becomes 1/10 as great. A change of this extent is called one order of magnitude.
For a given, constant base such as e or 10, the exponential function "undoes" the logarithm function, and the logarithm undoes the exponential. Thus, these functions are inverses of each other. For example, if the base is 10 and x = 3:
log(10^x) = log(10^3) = log 1000 = 3
If the base is 10 and x = 1000:
10^(log x) = 10^(log 1000) = 10^3 = 1000
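These relationships are easy to verify with a few lines of Python using the standard math module:

```python
import math

# e^x grows by a factor of e for each unit step in x:
assert math.isclose(math.exp(4) / math.exp(3), math.e)

# 10^x grows by one order of magnitude for each unit step in x:
assert math.isclose(10**4 / 10**3, 10)

# The logarithm undoes the exponential, and vice versa:
assert math.isclose(math.log10(10**3), 3)        # log(10^3) = 3
assert math.isclose(10**math.log10(1000), 1000)  # 10^(log 1000) = 1000

print("exp and log are inverses")
```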
C3-plant and C4-plant
philcorbett at ntlworld.com
Mon Jul 30 07:12:09 EST 2001
We also need to know the growing media. Moist warm organic compost would be
a source of CO2 and possible pathogens.
"Cereoid*" <cereoid at prodigy.net> wrote in message
news:yi_87.4908$2v1.1004094687 at newssvr17.news.prodigy.com...
> This is not some hypothetical "thought experiment".
> This is either some real experiment using actual plant species or pure speculation.
> Do you see the difference?
> David Kirschtel <kirschte at msu.edu> wrote in message
> news:3B6462E0.AD841BBD at msu.edu...
> Cereoid* wrote:
> > Pure speculation.
> > Its worthless without one being able to refer back to the actual experiment
> > itself, if it really existed, or being able to duplicate it.
> > We still do not know where was it published or which species were used.
> Sorry Cereoid, we need none of the above. Einstein's greatest
> contribution to science was the _Gedanken_Experiment_ ("Thought
> Experiment") and that's sufficient for our purposes here. So, let's
> The problem is, paradoxically, O2 concentration not CO2 and is the
> result of photorespiration. RUBISCO has an affinity for O2 as well as
> CO2. If O2 is bound to RuBP then it is oxidized, with CO2 being lost
> from the Calvin-Benson cycle. This can result in a 20-50% loss of fixed
> C from the plant in "normal" conditions.
> This is considered to be the selective force for the evolution C4
> photosynthesis. C4 plants have spatially separated the O2-producing
> Light Reactions from the C-fixing Dark Reactions of photosynthesis and
> thus minimized/reduced the C-fixing inefficiency of photorespiration.
> So, in a sealed container with (presumptively) high O2 concentrations
> the C3 plant would starve to death for lack of carbon. Meanwhile, the C4
> plant would be "cannibalizing" the CO2 being lost from the C3 plant as a
> result of photorespiration.
> Original Question:
> > I´ve recently heard of following experiment:
> > If you cultivat a C3-plant and a C4-plant together in an airtight
> > glassy container, the C3-plant will die after a short period of time.
> > Could anybody explain to me for what reason / reasons the C3-plant
> > will die.
> David Kirschtel, Ph.D. * kirschte at pilot.msu.edu * 517.432.0898
> 112 N Kedzie Lab * Mich State Univ * E Lansing, MI * 48824
> First Year Online/Biology http://lecture.lite.msu.edu/~bio/fyol/
> [after 17 Sept: Biology Program, Hitchcock Hall,
> Univ. of Washington, Seattle WA, 98150 206.543.9120]
great spotted cuckoo
...their hosts, compared with up to 72 percent of mismatched eggs. Few cuckoos have been studied intensively in terms of egg mimicry, but the phenomenon is known to occur in at least some species. The great spotted cuckoo has an egg pattern mimicking that of the magpie (Pica pica), its usual host in southern Europe. In Africa, where it is apparently a recent colonist, this cuckoo exhibits...
Flooding of the Mississippi River near the St. Louis Gateway Arch in August, 1993.
Credit RMCO and NRDC (Figure 2, p.6, from the report “Doubled Trouble: More Midwestern Extreme Storms”)
Average frequency of days with 3 inches or more of precipitation per weather station per year. Dots indicate annual average frequency per station, the blue line the 1961-2011 linear trend line, and the dashed line the average frequency 1961-1990.
Credit RMCO and NRDC (Figure 9, p.14, from the report “Doubled Trouble: More Midwestern Extreme Storms”)
By-decade changes in the frequency of storms of at least 3 inches in Missouri, compared to a 1961-1990 baseline.
Credit RMCO and NRDC (Figure 4, p.9, from the report "Doubled Trouble: More Midwestern Extreme Storms")
By-decade changes in the frequency of storms of at least 3 inches in Illinois, compared to a 1961-1990 baseline.
Missouri Botanical Garden ethnobotanist Jan Salick crosses the highest pass (5,400 m) in the Himalayas. The pass lies to the north of the Annapurna Mountain range in western Nepal, where one of her climate change research sites is located.
Credit (Burgund Bassuner)
The red triangles on this map represent Salick’s climate change research plots, which are located along a 2,000 km transect across the Himalayas in Nepal, Bhutan and Tibetan China.
Credit (Ken Bauer)
Missouri Botanical Garden researcher Katie Konchar examines plants in a Himalayan research plot.
Credit (Katie Konchar)
Himalayan climate change research often requires days of hiking, as well as camping in remote research sites like this one near sacred Mount Jomolhari (7,320 m) in Bhutan. | <urn:uuid:d6f10b5d-6610-4d95-9ee2-87744596db1a> | 2.96875 | 404 | Content Listing | Science & Tech. | 51.586667 |
Water On The Moon
Sat Sep 26 02:06:37 BST 2009 by Ignacio
Doesn't the Quran say that there is water on the moon?
Wasn't Cassini The First To Discover Water?
Sun Sep 27 05:37:19 BST 2009 by Paul Whitmore
Correct me if I'm wrong, but wasn't it Cassini who first discovered water on the moon's surface? If I remember correctly, water was detected on the moon by Cassini during its flyby on the way to Saturn. However, they dismissed it as an instrumentation error because 10 years ago water on the moon was borderline pseudoscience.
Fri Oct 23 16:30:02 BST 2009 by Mike Licht
But why is NASA really seeking water on the Moon?
If you have started learning Objective-C, this small tutorial on loops in Objective-C should be helpful. This series of examples covers the basic loop types and how to use them.
The loop concept and its usage are almost the same as in the C and C++ programming languages, so in this tutorial we explain loops in Objective-C with the help of a few examples.
But if you are new to object-oriented programming, you should first read the definition of loops given below.
What is a loop in programming?
In programming, a loop is a block of statements that executes repeatedly as long as a given condition holds true. There are several kinds of loops, for example the while loop, the do-while loop, and the for loop.
You will learn about each kind of loop in the Objective-C examples that follow.
Under classical physics, as Preskill explained on Caltech’s Quantum Frontiers blog, Alice and Bob can both have copies of the same newspaper, which gives them access to the same information. Sharing this bond of sorts makes them “strongly correlated.” A third person, “Carrie,” can also buy a copy of that newspaper, which gives her equal access to the information it contains, thereby forging a correlation with Bob without weakening his correlation with Alice. In fact, any number of people can buy a copy of that same newspaper and become strongly correlated with one another.
But with quantum correlations, that is not the case. For Bob and Alice to be maximally entangled, their respective newspapers must have the same orientation, whether right side up, upside down or sideways. So long as the orientation is the same, Alice and Bob will have access to the same information. “Because there is just one way to read a classical newspaper and lots of ways to read a quantum newspaper, the quantum correlations are stronger than the classical ones,” Preskill said. That makes it impossible for Bob to become as strongly entangled with Carrie as he is with Alice without sacrificing some of his entanglement with Alice.
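The "monogamy" described in this analogy can be made quantitative. The following sketch (plain NumPy; the state and labels are illustrative, not from the article) puts Alice and Bob in a maximally entangled Bell pair while Carrie holds an uncorrelated qubit, and checks that Bob's mutual information with Carrie is exactly zero:

```python
import numpy as np

def reduced_density(psi, keep, dims):
    """Trace out every subsystem not listed in `keep` from the pure state psi."""
    psi = psi.reshape(dims)
    traced = [i for i in range(len(dims)) if i not in keep]
    rho = np.tensordot(psi, psi.conj(), axes=(traced, traced))
    d = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d, d)

def entropy_bits(rho):
    """von Neumann entropy in bits; 1 bit = maximal entanglement for a qubit."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

# Three qubits: Alice, Bob, Carrie. Alice and Bob share the Bell pair
# (|00> + |11>)/sqrt(2); Carrie's qubit sits in |0>, uncorrelated.
psi = np.zeros(8)
psi[0b000] = psi[0b110] = 1 / np.sqrt(2)
dims = (2, 2, 2)

S_B  = entropy_bits(reduced_density(psi, [1], dims))     # 1 bit: Bob maximally entangled
S_C  = entropy_bits(reduced_density(psi, [2], dims))     # 0 bits: Carrie is pure
S_BC = entropy_bits(reduced_density(psi, [1, 2], dims))

mutual_BC = S_B + S_C - S_BC   # Bob's total correlation with Carrie
print(f"S(B) = {S_B:.3f} bits, I(B:C) = {mutual_BC:.3f} bits")
```

Bob's full bit of entanglement is "used up" on Alice, leaving nothing - not even classical correlation - to share with Carrie.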
This is problematic because there is more than one kind of entanglement associated with a black hole, and under the AMPS hypothesis, the two come into conflict. There is an entanglement between Alice, the in-falling observer, and Bob, the outside observer, which is needed to preserve No Drama. But there is also a second entanglement that emerged from another famous paradox in physics, one related to the question of whether information is lost in a black hole. In the 1970s, Stephen Hawking realized that black holes aren’t completely black. While nothing might seem amiss to Alice as she crosses the event horizon, from Bob’s perspective, the horizon would appear to be glowing like a lump of coal — a phenomenon now known as Hawking radiation.
The entanglement of particles in the No Drama scenario: Bob, outside the event horizon (dotted lines), is entangled with Alice just inside the event horizon, at point (b). Over time Alice (b') drifts toward the singularity (squiggly line) while Bob (b") remains outside the black hole.
Image: Courtesy of Joseph Polchinski
This radiation results from virtual particle pairs popping out of the quantum vacuum near a black hole. Normally they would collide and annihilate into energy, but sometimes one of the pair is sucked into the black hole while the other escapes to the outside world. The mass of the black hole, which must decrease slightly to counter this effect and ensure that energy is still conserved, gradually winks out of existence. How fast it evaporates depends on the black hole’s size: The bigger it is, the more slowly it evaporates.
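The standard back-of-the-envelope estimate for that evaporation time (photon emission only; this is a textbook approximation, not a formula from the article) scales as the cube of the mass:

```python
import math

# t_evap ≈ 5120 * pi * G^2 * M^3 / (hbar * c^4) -- order-of-magnitude estimate
G    = 6.674e-11   # m^3 kg^-1 s^-2
c    = 2.998e8     # m/s
hbar = 1.055e-34   # J s
YEAR = 3.156e7     # seconds per year

def evap_time_years(mass_kg):
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4) / YEAR

M_SUN = 1.989e30
print(f"solar-mass hole: {evap_time_years(M_SUN):.1e} years")  # ~1e67 years
print(f"1e9 kg hole:     {evap_time_years(1e9):.1e} years")    # a few thousand years
```

The cubic scaling is why a stellar-mass black hole outlives the current age of the universe by dozens of orders of magnitude, while a small primordial hole would already be gone.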
Hawking assumed that once the radiation evaporated altogether, any information about the black hole’s contents contained in that radiation would be lost. “Not only does God play dice, but he sometimes confuses us by throwing them where they can’t be seen,” he famously declared. He and the Caltech physicist Kip Thorne even made a bet with a dubious Preskill in the 1990s about whether or not information is lost in a black hole. Preskill insisted that information must be conserved; Hawking and Thorne believed that information would be lost. Physicists eventually realized that it is possible to preserve the information at a cost: As the black hole evaporates, the Hawking radiation must become increasingly entangled with the area outside the event horizon. So when Bob observes that radiation, he can extract the information.
But what happens if Bob were to compare his information with Alice’s after she has passed beyond the event horizon? “That would be disastrous,” Bousso explained, “because Bob, the outside observer, is seeing the same information in the Hawking radiation, and if they could talk about it, that would be quantum Xeroxing, which is strictly forbidden in quantum mechanics.”
Physicists, led by Susskind, declared that the discrepancy between these two viewpoints of the black hole is fine so long as it is impossible for Alice and Bob to share their respective information. This concept, called complementarity, simply holds that there is no direct contradiction because no single observer can ever be both inside and outside the event horizon. If Alice crosses the event horizon, sees a star inside that radius and wants to tell Bob about it, general relativity has ways of preventing her from doing so. | <urn:uuid:7ddfa94f-b543-4074-befc-642e1055e2dd> | 3.203125 | 954 | Knowledge Article | Science & Tech. | 33.420534 |
|OBJECTIVE||Discover that the water cycle is more complex than just movement from the ground to the atmosphere.|
|OVERVIEW||Students will act as water molecules and travel through parts of the water cycle.|
|TOTAL TIME||30 minutes|
|SUPPLIES||A die for each student or each pair of students (or some device that can generate a random number from 1 through 6).|
|PRINTED/AV MATERIAL||Station cards (8 MB) for each station in the water cycle (print two-sided); large labels (13 MB) for each station; water cycle worksheet (one for each student).|
|TEACHER PREPARATION||Before the exercise, print the front and back sides of each station card on its own sheet. Cut out each of the six cards for each station.|
|SAFETY FOCUS||Flash Flood Safety|
At its most basic, water moves from the ground to the atmosphere and then returns to the ground. However, the actual path water may take in its cycle is far more complicated. There are many sub-cycles within the main overall circulation.
- Around the classroom, select locations to represent different stations in the water cycle. Place the numbered cards (1-6) face-up at each station.
- Distribute a die to each student or pair of students. Distribute a worksheet for each student.
- Distribute the students to different portions of the water cycle by:
- Placing one-half of students at the 'Oceans' station.
- Evenly spreading the remaining students across the other stations except for the 'plants' station.
- Have each student circle their starting location on their worksheet.
- Each student is to roll their die.
- Based upon the number rolled, the student turns over that card to determine their progress in the water cycle.
- If told to move, have the students move to their new location. On their worksheet, draw an arrow from their starting location to their current position. Label that arrowhead with a number one (1).
- If told to stay at their current position, have the students place a number one (1) inside their drawn circle.
- Repeat steps 5 and 6.
- If told to move, have the students move to their new location. On their worksheet, draw an arrow from their previous location to their current position. Label that arrowhead with a number two (2).
- If told to stay at their current position, have the students place a comma and a number two (2) beside their number one (1).
- Repeat the procedure up to a total of ten (10) times.
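For teachers comfortable with a little programming, the exercise can also be simulated. The transition tables below are hypothetical stand-ins for the printed station cards; only the mechanics (roll a die, move or stay, record ten steps) follow the procedure above:

```python
import random

# Hypothetical transition tables standing in for the station cards:
# each station maps die rolls 1-6 to a destination (same name = "stay").
MOVES = {
    "ocean":      {1: "atmosphere", 2: "ocean", 3: "ocean", 4: "ocean", 5: "ocean", 6: "ocean"},
    "atmosphere": {1: "ocean", 2: "glacier", 3: "lake", 4: "soil", 5: "atmosphere", 6: "ocean"},
    "glacier":    {1: "river", 2: "glacier", 3: "glacier", 4: "atmosphere", 5: "glacier", 6: "glacier"},
    "lake":       {1: "river", 2: "atmosphere", 3: "lake", 4: "soil", 5: "lake", 6: "plant"},
    "river":      {1: "ocean", 2: "ocean", 3: "lake", 4: "atmosphere", 5: "river", 6: "soil"},
    "soil":       {1: "plant", 2: "river", 3: "groundwater", 4: "atmosphere", 5: "soil", 6: "soil"},
    "plant":      {1: "atmosphere", 2: "atmosphere", 3: "plant", 4: "plant", 5: "atmosphere", 6: "plant"},
    "groundwater": {1: "river", 2: "groundwater", 3: "groundwater", 4: "lake", 5: "groundwater", 6: "groundwater"},
}

def journey(start="ocean", rolls=10, rng=random):
    """One student's path: roll the die `rolls` times, recording each station."""
    path = [start]
    for _ in range(rolls):
        path.append(MOVES[path[-1]][rng.randint(1, 6)])
    return path

random.seed(1)
print(" -> ".join(journey()))
```

Running many simulated journeys quickly shows the point of the lesson: most "molecules" that start in the ocean stay there for a long time, and paths that do leave rarely follow the simple ground-to-sky-to-ground loop.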
|Water Source||Percent of Earth's Water||Number of People (of 100,000)|
|Glaciers & Snow||2.14%||2,140|
|Rivers & Lakes||0.017%||17|
While this exercise is meant to be somewhat realistic, in actuality it is far more complicated to leave the ocean via evaporation, because nearly all of the earth's water is confined to the oceans. To truly represent the water cycle we would need approximately 100,000 people, distributed among the stations as shown in the table.
Not only would there be over 97,000 people who represented the ocean, it would take close to 3,600 rolls of the die before just one person would move to the atmosphere station via evaporation.
This exercise also does not take into consideration human and animal interactions with the water cycle. The water we and animals consume is stored and then eventually eliminated or it evaporates (via perspiration).
Flash floods are the deadliest natural disaster in the world. They are usually caused by thunderstorms that stay over one area for a long time and produce heavy rain over a small area. Hilly and mountainous areas are especially vulnerable to flash floods, because steep terrain and narrow canyons funnel heavy rain into small creeks and dry ravines, turning them into raging walls of water. Even on the prairie, normally-dry low spots can fill with rushing water during heavy rain.
Take time to develop a flood safety plan for home, work, or school, and wherever you spend time during the summer. For more information and safety tips, see the National Weather Service flood safety website and download "Floods and Flash Floods...The Awesome Power". | <urn:uuid:8e93205d-5bca-461d-946e-718f4411193c> | 3.921875 | 909 | Tutorial | Science & Tech. | 57.155328 |
Modeling and simulating the aftermath of hurricanes before they hit landfall is no easy task.
But Clint Dawson, director of the Computational Hydraulics Group at the Institute for Computational Engineering and Sciences (ICES), has taken on the challenge.
With accurate data and powerful computing resources such as the Texas Advanced Computing Center, Dawson, who is also a professor in the Department of Aerospace Engineering and Engineering Mechanics, has been able to create storm simulations and predict the effectiveness of new infrastructure, such as hurricane walls and levees, for future storms.
Read more about the proposed Galveston Bay dike in the Further Findings research blog.
Read a transcript of the interview (PDF). | <urn:uuid:45e8e93c-ae1d-4201-9bb0-396e76770749> | 3.015625 | 140 | Truncated | Science & Tech. | 21.199297 |
In a year marked by a relentless assault of extreme weather, several events stand out. Some, like the tornado that leveled Joplin, Missouri on May 22. were extraordinarily devastating and deadly. Others - such as the “Snowtober” storm that buried the Northeast under a crushing load of heavy, wet snow - were downright freakish. In a typical weather year, one might expect a few extreme events like these.
But this was no ordinary year. At times it seemed as if Mother Nature was on steroids, slamming Americans with one deadly event after another (a good case can be made that Mother Nature is, in fact, on steroids, thanks to global warming). Consider this: according to NOAA, there were at least 12 events that cost a billion dollars or more, an all-time record (there were 14 such events by other measures). More than 1,000 people died from weather-related causes this year, most of them from tornadoes, and more than 8,000 people were injured, according to the National Weather Service.
Related link: Guest blog post on U.S. extreme weather in 2011 by CWG’s Jason Samenow for the BBC
As we’ve covered on this blog, scientific research shows that global warming is likely increasing the odds and severity of certain extreme events, such as heat waves and heavy precipitation events. These aren’t exactly comforting findings, given what transpired this year.
Here are the top 5 extreme weather events of 2011.
1. The April 25-28 Tornado Outbreak.
This was the year of the twister, and multiple tornado outbreaks exacted a heavy price in terms of lives lost. The death toll from this year’s tornadoes was so high (tied for second highest on record) that the National Weather Service has embarked on a broad-scale effort to reexamine how it educates the public about tornado risks, and how tornado warnings are worded and disseminated.
The deadliest of this year’s outbreaks occurred in late April. During a four-day period from April 25 to 28, more than 200 tornadoes touched down in five southeastern states. The deadliest day was April 27th, when 316 people died - mainly in Alabama and Mississippi - from 122 tornadoes.
On that day, major tornadoes tore through the cities of Tuscaloosa and Birmingham, and all-but wiped out many rural communities. The Tuscaloosa-Birmingham EF-4 tornado was on the ground for more than 80 miles, and seven other tornadoes stayed on the ground for at least 50 miles.
April 2011 now ranks as the most active tornado month on record with 753 tornadoes, beating the previous record of 542 tornadoes set in May 2003. NOAA has put the total number of April tornado-related fatalities at 364.
The most intense tornadoes flattened some communities. Here’s what a Weather Service storm damage assessment team found in the small town of Phil Campbell, Alabama, after a tornado tore through there on April 27th.
Along Bonner Street, multiple block homes were leveled to the ground with the block foundations destroyed. A twenty-five foot section of pavement was sucked up and scattered. Chunks of the pavement were found in a home over 1/3 of a mile down the road. The damage in this area was consistent with EF-5 damage.
2. Joplin, Missouri EF-5 Tornado
On May 22, the small city of Joplin, Missouri joined the list of cities whose names are synonymous with tornado disasters and recovery efforts, when an EF-5 tornado swept through the heart of the city, destroying nearly everything in its path. The Joplin tornado killed 160 people and injured about 1,000 more, becoming the seventh-deadliest tornado in U.S. history, and the deadliest since modern recordkeeping began in 1950. The Joplin tornado had maximum winds estimated at 210 mph, reached a width of at least a mile wide, and remained on the ground for six miles - just long enough to devastate the city.
The tornado was part of a larger, multi-day outbreak during which 180 tornadoes touched down in the central and southern states, resulting in more than $9.1 billion in total losses, according to NOAA.
3. The Triple Threat of Drought, Heat and Wildfires
One of the costliest natural disasters of the year evolved over a longer timespan and across broader geographic region, as drought conditions parched the Texas landscape and portions of Oklahoma, New Mexico, Arizona, Kansas and Louisiana. In Texas, the drought was the most intense one-year drought on record, with direct losses to crops, livestock and timber of close to $10 billion.
The drought was aggravated by record heat, with many locations across the Southern Plains setting records for the most 100-degree days, including San Angelo, Texas, which reached the century mark on 98 days this year. Oklahoma had the hottest summer of any state in American history, just edging out Texas, which came in second. Oklahoma’s average July temperature was 88.9 degrees, making it the warmest month in any state on record.
The dry conditions forced ranchers to send their cattle to the slaughterhouse early. According to recent reports, 2011 saw the largest-ever one-year decrease in the number of Texas cattle, a loss of about 600,000 cattle in just one year. This may translate to higher beef prices in 2012, due to a below average supply.
The drought and heat also set the stage for the worst wildfires in Texas state history, including the Bastrop fire, which was the state’s most destructive wildfire on record. Wildfires also charred vast stretches of Arizona, New Mexico and Oklahoma. In fact, Arizona and New Mexico both saw their largest wildfires on record. At one point, the Las Conchas Fire in New Mexico, which burned over 150,000 acres, threatened the Los Alamos National Laboratory, the birthplace of the atomic bomb.
4. Hurricane Irene
Hurricane Irene, which made landfall in North Carolina as a Category One storm on August 27, will be remembered mainly for two things: one being the devastating inland flooding the storm caused from New Jersey to Vermont. The second concerns what it (fortunately) did not do - cause significant coastal flooding in the New York City area.
Nevertheless, Irene managed to grind life to a halt in the city that never sleeps - with the first ever shutdown of mass transit, including all three airports, and mandatory evacuations of people from vulnerable parts of the city. Because the storm was weaker than expected when it struck the city - as a tropical storm rather than a hurricane - the damage was far less than feared. However, as Jeff Masters has detailed at Weather Underground, the lack of damage in New York City should not be taken as a sign that the city is safe from such storms.
Irene reserved her worst for a sneak attack on inland areas. After an extremely wet summer, Irene’s rains caused record flooding in New Jersey, New York and Vermont. Many of Vermont’s iconic covered bridges were washed away, and entire communities became cut off from transportation routes. The storm also caused massive power outages, with upwards of seven million homes and businesses without power at the height of the storm, according to NOAA.
5. The October "Snowtober" Snowstorm

Just as the Northeast was beginning to bounce back from Irene, the region was walloped by an early season winter storm that rewrote the history books. Heavy snow fell at a time when most trees still had most of their leaves, causing widespread power outages that in at least one state - Connecticut - eclipsed the outages caused by Irene. As I wrote on this blog:
To put the storm into its proper meteorological context, consider these snowy facts. The storm brought thundersnow to New York City shortly past lunchtime on Saturday, October 29, before the city had even recorded its first freeze. Central Park received 2.9 inches of snow, with up to six inches falling in the Bronx. This was the only time in recorded history that an inch or more of snow has fallen in Central Park during the month of October.
Jaffrey, New Hampshire, recorded 31.4 inches of snow, and 32 inches fell in Peru, Mass. In Concord, N.H., 22.5 inches fell in just 16 hours. October snowfall records were smashed in Hartford, Connecticut, which received 12.3 inches; Worcester, Mass., where 14.6 inches fell; and Newark, NJ, where 5.2 inches piled up.
The timing of this storm made it a high impact event, although it would have been noteworthy for its intensity and heavy snowfall even if it had occurred in February. | <urn:uuid:92d8b565-6d8c-4117-83cb-d3dc7bd9c4ca> | 3.21875 | 1,804 | Listicle | Science & Tech. | 53.980458 |
The following methods can be defined to customize the meaning of
attribute access (use of, assignment to, or deletion of x.name)
for class instances.

__getattr__(self, name)

Called when an attribute lookup has not found the attribute in the usual places (i.e. it is not an instance attribute nor is it found in the class tree for self). name is the attribute name. This method should return the (computed) attribute value or raise an AttributeError exception.
Note that if the attribute is found through the normal mechanism, __getattr__() is not called. (This is an intentional asymmetry between __getattr__() and __setattr__().) This is done both for efficiency reasons and because otherwise __setattr__() would have no way to access other attributes of the instance. Note that at least for instance variables, you can fake total control by not inserting any values in the instance attribute dictionary (but instead inserting them in another object). See the __getattribute__() method below for a way to actually get total control in new-style classes.
__setattr__(self, name, value)

Called when an attribute assignment is attempted. name is the attribute name, value is the value to be assigned to it. If __setattr__() wants to assign to an instance attribute, it should not simply execute "self.name = value" -- this would cause a recursive call to itself. Instead, it should insert the value in the dictionary of instance attributes, e.g., "self.__dict__[name] = value". For new-style classes, rather than accessing the instance dictionary, it should call the base class method with the same name, for example, "object.__setattr__(self, name, value)".
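As a minimal sketch of both points (letting __getattr__() raise AttributeError for unknown names, and delegating assignment to the base class; the class and attribute names here are illustrative, not part of the original documentation):

```python
class Tracked(object):
    """A new-style class customizing attribute access."""

    def __setattr__(self, name, value):
        # Not "self.name = value": that would recurse into __setattr__().
        # Either use self.__dict__[name] = value or, for new-style
        # classes, delegate to the base class:
        object.__setattr__(self, name, value)

    def __getattr__(self, name):
        # Called only after the normal lookup mechanism fails.
        raise AttributeError("no attribute %r" % name)
```

Assigning `t.x = 1` routes through `__setattr__()` and lands in `t.__dict__`, after which reading `t.x` succeeds through the normal mechanism without ever calling `__getattr__()`.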
A key part of Rhodes' analysis:
(...) However, for algae to grow, vital nutrients are also required, as a simple elemental analysis of dried algae will confirm. Phosphorus, though present at under 1% of that total mass, is one such vital ingredient, without which algal growth is negligible. I have used two different methods of calculation to estimate how much phosphate would be needed to grow enough algae, first to fuel the UK and then to fuel the world:
(1) I have taken as illustrative the analysis of dried Chlorella, which contains 895 mg of elemental phosphorus per 100 g of algae.
UK Case: Making 40 million tonnes of diesel would require 80 million tonnes of algae (assuming that 50% of it is oil and this can be converted 100% to diesel).
The amount of "phosphate" in the algae is 0.895 x (95/31) = 2.74 %. (MW PO4(3-) is 95, that of P = 31).
Hence that much algae would contain: 80 million x 0.0274 = 2.19 million tonnes of phosphate. Taking the chemical composition of the mineral as fluorapatite, Ca5(PO4)3F, MW 504, we can say that this amount of "phosphate" is contained in 3.87 million tonnes of rock phosphate.
From the internet, for the cyanobacterium Microcystis aeruginosa:
The P quota of these cells was high (mean concentration 132 mmol per kg dry weight)
One converts 132 mmol P x (31 mg/mmol) = 4092 mg = 4.092 g P. This gives 4092 mg P per 1000 g of dry cyanobacteria, or 409.2 mg per 100 g of dry cyanobacteria. The number 409 is less than half of Rhodes' 895, but who wants to quibble?
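The arithmetic in both estimates is simple enough to check mechanically; here is a sketch in Python reproducing the figures quoted above (all constants are taken from the text):

```python
MW_PO4 = 95.0   # molecular weight of the PO4(3-) ion
MW_P = 31.0     # atomic weight of phosphorus

# (1) Dried Chlorella: 895 mg elemental P per 100 g of algae.
p_pct = 0.895                                # % P by mass
phosphate_pct = p_pct * MW_PO4 / MW_P        # ~2.74 % phosphate

# UK case: 40 Mt of diesel -> 80 Mt of algae (50% oil, 100% conversion).
algae_t = 80e6
phosphate_t = algae_t * phosphate_pct / 100  # ~2.19 million tonnes

# Fluorapatite Ca5(PO4)3F, MW 504, holds 3 PO4 groups (3 x 95 = 285).
rock_t = phosphate_t * 504.0 / (3 * MW_PO4)  # ~3.87-3.88 million tonnes

# (2) Microcystis aeruginosa: 132 mmol P per kg dry weight.
mg_p_per_100g = 132 * MW_P / 10.0            # 409.2 mg per 100 g
```

The small spread in the rock-phosphate figure (3.87 vs. 3.88) comes only from whether the intermediate 2.19 is rounded before the last step.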
The "amount of phosphorous" matter is not the biggest problem with the analysis of Chris Rhodes. Rhodes assumes that the phosphorous is lost. In many algal/cyanobacterial biofuel production schemes (e.g., US 6,306,639; US 6,699,696; US published application 20090155871 ), carbon dioxide and water lead to secretion of biofuel in the absence of consumption of any biomass. The phosphorous argument presented by Chris Rhodes is inappropriate against those schemes.
As Ramachandra said, “We do not harvest milk from cows by grinding them up and extracting the milk. Instead, we let them secrete the milk at their own pace, and selectively breed cattle and alter their environment to maximize the rate of milk secretion” [ Ramachandra TV, Mahapatra DM, Karthick B (2009) Milking diatoms for sustainable energy: Biochemical engineering versus gasoline-secreting diatom solar panels. Ind Eng Chem Res 48:8769–8788. ]
Browse High School Sequences, Series
Stars indicate particularly interesting answers or
good places to begin browsing.
Selected answers to common questions:
Strategies for finding sequences.
- Infinite sums [6/26/1996]
Given the function f(x) = x^2/(1+x^2), find the sum f(1/n) + f(2/n) + ... +
f((n-1)/n) + f(n/n) for any positive integer n.
- Integer Sequence [08/15/1997]
Show that if 19 distinct integers are chosen from the sequence
1,4,7,10,13,16,19...,97,100, there must be two whose sum is 104.
- Interesting Number Sequence Pattern [11/01/2004]
The sequence of digits 1,2,3,4,0,9,6,9,4,8,7,... is constructed in the
following way: every digit starting from the fifth is the last digit
of the sum of the previous four digits. (a) Do the digits 2,0,0,4
appear in the sequence in that order? (b) Do the initial digits
1,2,3,4 appear again in the sequence in that order?
- Interpolation and Extrapolation [08/10/1999]
Can you give me a step-by-step procedure for finding missing values based
on interpolation and extrapolation?
- Investigating Sequence Patterns [05/14/1998]
How can we find the pattern in the following sequence? Take a circle with
three dots on the circumference and connect with lines...
- Irrational Decimals [08/04/1998]
I know the proof stating that 0.9 (repeating) is actually equal to one,
but from a representation standpoint are they actually considered to be
equal?
- Isomorphisms [08/16/1999]
Can you give me an explanation and a nice example of isomorphism?
- Laurent Expansion [02/25/1999]
An explanation of the Laurent expansion, with some examples.
- Laying a Brick Walkway [04/22/2002]
How many different ways can I build a walkway 2' by 20' of bricks 1'
by 2'? The bricks can lie vertically and horizontally, but in no other
way.
Finding a relation for a sequence that relates to the number of ways
paving stones can be laid to make a 3-foot-wide path using 3-foot by 1-
foot stones.
- Let f(x) = 1 + 1/2 + 1/3 + ... + 1/[(2^n)-1] [05/15/1999]
Which of the following inequalities are correct?
- Level of Medicine in the Human Body [11/1/1996]
A patient receives a 10mg dose of medicine every four hours... prove that
there will always be less than 40mg in the patient's body.
- The Limit of (1+1/x)^x As x Approaches Infinity [02/17/1998]
How Euler calculated e, and what it has to do with the equation
- Limit of (-1)^n? [03/14/1998]
As n approaches infinity, does (-1)^n have a limit?
- Limit of Area [03/01/1998]
Limit approached by area of a square when its sides are repeatedly
divided into three congruent parts and squares are constructed outwardly
on the middle parts.
- Limit of a Series [03/02/2003]
What is the limit of the following series: a(n) = ((1^n) + (2^n) + ... +
- Limits - Indeterminate Forms [10/12/1997]
I cannot do a problem where I need to convert into the form 0/0 and then
use L'Hopital's Rule...
- Limits of Sequences [02/25/2001]
Is the limit of [(1 + 1/sqrt(n))^(1.5n)], as n goes to infinity, e? What
is the limit as n goes to infinity of [(1 + a/n)^n], where a is not equal
- Limits of Sequences [08/19/1997]
Please explain the limit superior of a sequence.
- Linear Recurrence Relations [08/10/2004]
Is there a general approach to taking a pattern that is defined
recursively and finding an explicit definition for it?
- Looking for Patterns [10/30/2001]
What would be the answer to: (x-a)(x-b)(x-c)(x-d)(x-e)...(x-y)(x-z)?
- Maclaurin Series for Tangent [09/17/2001]
What is the Maclaurin series for tangent (not inverse tangent)?
- Making a Series Sum to Zero [05/24/2002]
How can I place + and - signs between 1^2, 2^2, 3^2, ..., 2005^2 to
make the sum equal zero?
- Mangoes at the Gates [04/06/2001]
To pick some mangoes from a tree inside seven walls with seven guards,
you tell each guard that you'll give him half of the mangos you have, but
he must give you back one mango. What's the minimum number of mangos you
must pick to have at least one mango left?
- Mathematical Series [04/19/2005]
Given the series mtbf = 1/a + 1/2a + 1/3a + .... + 1/na with mtbf =
23998 and a = 0.0008, how do I solve for n?
- Math Virus Formula [10/23/2001]
The virus spreads to all the squares directly touching each other (not
including diagonally) and I have found the formula for the number of
newly infected cells (although this does not include the first minute)...
- Mean of a Sequence [06/09/2002]
What is the arithmetic mean of the first one thousand positive odd
numbers?
How can I find the generating equation for the series -3, 2, 13, 30, 53?
- The Method of Finite Differences, Extended -- and Shortened [12/22/2010]
Given the first four terms of two series, a student struggles to generate polynomials that describe them with the method of finite differences. Doctor Greenie shows her how to create a row with a common difference -- and save effort with a shortcut.
- Millionth Digit of the Counting Numbers [02/26/2001]
A number is formed by writing the counting numbers in order:
123456789101112131415... What is the one millionth digit in this number?
- Millionth Term [04/16/2001]
What is the millionth term in the sequence 1, 2, 2, 3, 3, 3, ... ?
- Millionth Term of a Sequence [06/18/2002]
What would be the millionth term in the sequence 1, 2, 2, 3, 3, 3, 4,
4, 4, 4,...?
- A Monster of a Continued Fraction [11/09/1996]
How do you find the value of a continued fraction?
- Multiplying Mice [07/23/1997]
Baby mice can breed when they are 6 weeks old and the babies are born
after 3 weeks. If each mother mouse has only one litter and all the
litters have 8 babies, half males and half females, how many mice will
you have 18 weeks from today?
- Naming Geometric and Arithmetic Progressions [04/04/2003]
Why is an exponential progression called 'geometric'? Why is a linear
progression called 'arithmetic'?
- Natural Numbers [01/08/1998]
What are two ways of finding the sum of n natural numbers?
- Nested Radical [02/25/2002]
Prove the following nested radical:
sqrt(1+2sqrt(1+3sqrt(1+4sqrt(1+...)))) = 3
- Nested Square Roots [07/17/1998]
Solve for n where n = sqrt(6 + sqrt(6 + sqrt(6 + ...))).
- Newton's Method and Continued Fractions [10/06/1999]
Can you clarify some points on Newton's method of finding square roots
without a calculator, and on the continued fraction algorithm (CFA)?
- Nonhomogeneous Linear Recurrence Relations [05/18/2004]
Given a recursive formula: a(n+1) = a(n) + (a(n) - b)*t, where b is a
known constant and a(1) is also known, I am trying to find the
explicit formula like y = ????? * t^n.
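The answers themselves are not part of this index, but two of the "millionth" puzzles above are easy to verify with a few lines of code (a sketch, not the archive's own solutions):

```python
def millionth_digit(pos=10**6):
    """Digit at position pos in the string 123456789101112..."""
    width, count, start = 1, 9, 1
    while pos > width * count:       # skip whole blocks of equal-width numbers
        pos -= width * count
        width += 1
        count *= 10
        start *= 10
    number = start + (pos - 1) // width   # which number the digit falls in
    return str(number)[(pos - 1) % width]

def millionth_term(m=10**6):
    """m-th term of 1, 2, 2, 3, 3, 3, ...: the value k appears k times,
    so the answer is the smallest k with k(k+1)/2 >= m."""
    k = 1
    while k * (k + 1) // 2 < m:
        k += 1
    return k
```

Running these gives '1' for the millionth digit and 1414 for the millionth term of the repeated-value sequence.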
Sci. STKE, 24 July 2007
Genetics: What's the Buzz?
Laura M. Zahn
Science, AAAS, Washington, DC 20005, USA
The residents of bee hives are well known to be closely related, but hives can often exhibit more genetic diversity than might be anticipated from theories on the benefits of cooperation among closely related individuals. Mattila and Seeley show that one reason for this is that more genetically diverse hives (those originating from a female mating with multiple males) perform better in the rate of comb building, foraging rates, and honey production than those originating from a single female and male. To advertise her presence in the colony and to exert influence over its members, a honey bee queen produces a complex blend of substances known as queen mandibular pheromone. Vergoz et al. (see the Perspective by Galizia) found that exposure to queen pheromone leads to a reduction in aversive learning but not to a reduction in appetitive learning in young honey bees. The queen substance modulates the dopaminergic system of bees, which reduces the capacity of young workers to form aversive memories.
Citation: L. M. Zahn, What's the Buzz? Sci. STKE 2007, tw267 (2007).
The editors suggest the following Related Resources on Science sites:
In Science Signaling
Science Signaling. ISSN 1937-9145 (online), 1945-0877 (print). Pre-2008: Science's STKE. ISSN 1525-8882
New satellite measurements of Arctic Ice shocked scientists. Chris Rapley suggests that the warming Arctic is involved in jet stream instability associated with increasing volatility in weather in lower latitudes, such as we are now experiencing. In other words, that gigantic heat dome and the US Great Drought of 2012 are connected to the warming which seemed so far away it was easy to ignore in Texas and Oklahoma. Do I hear the echo of a cruel laugh? Climate Change is just a conspiracy, they said.
This rate of loss is 50% higher than most scenarios outlined by polar scientists and suggests that global warming, triggered by rising greenhouse gas emissions, is beginning to have a major impact on the region. In a few years the Arctic ocean could be free of ice in summer, triggering a rush to exploit its fish stocks, oil, minerals and sea routes.
The consequences of losing the Arctic's ice coverage, even for only part of the year, could be profound. Without the cap's white brilliance to reflect sunlight back into space, the region will heat up even more than at present. As a result, ocean temperatures will rise and methane deposits on the ocean floor could melt, evaporate and bubble into the atmosphere. Scientists have recently reported evidence that methane plumes are now appearing in many areas.
Professor Chris Rapley of UCL said: "With the temperature gradient between the Arctic and equator dropping, as is happening now, it is also possible that the jet stream in the upper atmosphere could become more unstable. That could mean increasing volatility in weather in lower latitudes, similar to that experienced this year."
... the shrinking of sea-ice coverage we have observed... is telling us that something highly significant is happening to Earth.
Discouraging news. Thanks for the article link Ruth!
... over the last three decades, satellites have observed a 13% decline per decade in the summertime minimum.
The thickness of the sea ice is also declining, so overall the ice volume has fallen far - although estimates vary about the actual figure.
Professor Peter Wadhams, from Cambridge University, told BBC News: "A number of scientists who have actually been working with sea ice measurement had predicted some years ago that the retreat would accelerate and that the summer Arctic would become ice-free by 2015 or 2016."
"I was one of those scientists - and of course bore my share of ridicule for daring to make such an alarmist prediction."
"Measurements from submarines have shown that it has lost at least 40% of its thickness since the 1980s, and if you consider the shrinkage as well it means that the summer ice volume is now only 30% of what it was in the 1980s," he added. [emphasis mine]
Here's the latest update on the 2012 Arctic Sea Ice melt. From Why the Arctic Sea Ice Death Spiral Matters
But what happens in the Arctic, doesn’t stay in the Arctic. The rapid disappearance of sea ice cover can have consequences that are felt all over the Northern Hemisphere, due to the effects it has on atmospheric patterns. As the ice pack becomes smaller ever earlier into the melting season, more and more sunlight gets soaked up by dark ocean waters, effectively warming up the ocean. The heat and moisture that are then released to the atmosphere in fall and winter could be leading to disturbances of the jet stream, the high-altitude wind that separates warm air to its south from cold air to the north. A destabilized jet stream becomes more ‘wavy’, allowing frigid air to plunge farther south, a possible factor in the extreme winters that were experienced all around the Northern Hemisphere in recent years. Another side-effect is that as the jet stream waves become larger, they slow down or even stall at times, leading to a significant increase in so-called blocking events. These cause extreme weather simply because they lead to unusually prolonged conditions of one type or another. The recent prolonged heatwave, drought and wildfires in the USA are one example of what can happen; another is the cool, dull and extremely wet first half of summer 2012 in the UK and other parts of Eurasia. [emphasis mine]
We also need to discuss how the melting Arctic glaciers will effect sea level rise.
Greenland, especially, was melting at unprecedented rates this spring and summer. Greenland, which is reality a mile-thick glacier of approximately 100,000 year old ice covering 3 islands, could have globally devastating consequences as it melts. It would raise global sea levels by 7 metres (23 feet). That would be the end of coastal cities and other coastal regions. Most of the world's population lives in the zones which would be permanently under the water.
Note too, that Greenland would not be the only ice sheet that melts significantly. There are other, smaller ice sheets in the Arctic, as well as the ice sheet which covers most of Antarctica. If those melt too, much of the land upon which humans live or grow crops would be under the ocean.
Here's an update on the 2012 Arctic Sea ice on Sept 12th.
The 2012 record summer Arctic sea ice melt has staggering implications. Scientists now predict summer Arctic sea ice will disappear altogether in as little as 20 years. According to Nick Toberg, when that happens it will have the same warming effect as 20 years of CO2 emissions at the current rate.
The Cambridge University sea ice researcher Nick Toberg... said: "This is staggering. It's disturbing, scary that we have physically changed the face of the planet. We have about 4m sq km of sea ice. If that goes in the summer months that's about the same as adding 20 years of CO2 at current [human-caused] rates into the atmosphere. That's how vital the arctic sea ice is.
"In the 1970s we had 8m sq km of sea ice. That has been halved.We need it in the summer. It has never decreased like this before".
..."We are on the extreme edge of the models, suggesting that ice loss is happening much faster than the models suggested ," says Stroeve. [emphasis mine]
See the tipping point, folks. Oops, looks like we've lost our footing.
Here's a nice graphic which shows Arctic Sea Ice volume, instead of surface extent. You can compare the year round volume declines, instead of only comparing Summer lows.
A solid, non-uniformly charged cylinder of radius R and charge
density ρ(r) = ρ₀ exp(-r/R)/r is situated inside a conducting
cylindrical shell of inner radius a and outer radius b.
(a) Find the total charge contained in a length h of the charged
cylinder.
(b) Find the electric field for all values of r (that is, r<R,
R<r<a, a<r<b, b<r).
(c) Graph your results.
(d) What is the total charge on the inner and outer surfaces of the
conducting cylindrical shell?
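A sketch of part (a): integrating the given density over a length h of the cylinder using cylindrical shells of volume 2πr h dr, the awkward 1/r in ρ(r) cancels against the shell factor r:

```latex
Q \;=\; \int_0^R \rho(r)\, 2\pi r h \, dr
  \;=\; 2\pi \rho_0 h \int_0^R e^{-r/R} \, dr
  \;=\; 2\pi \rho_0 h \Bigl[ -R\, e^{-r/R} \Bigr]_0^R
  \;=\; 2\pi \rho_0 h R \bigl( 1 - e^{-1} \bigr)
```

For part (d), Gauss's law then requires an induced charge of -Q on the inner surface of the conducting shell (so the field vanishes inside the conductor) and +Q on its outer surface, per length h.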
A walker in a city is at the green point in the upper-right corner of the plot. Her goal is the red point at the origin. Thus, at each intersection of streets, she goes either to the west or to the south. She has to go m intersections to the west and n intersections to the south. Suppose that at each intersection, there is always a green light in one of the two possible directions and a red light in the other direction; the walker always chooses the direction where she sees the green light. Also suppose that the walker faces the lights randomly, that is, there is a probability of 1/2 that the green light is to the west and also a probability of 1/2 that the green light is to the south.
As long as she does not come to the axes, she does not need to wait at any intersection. As soon as she comes to an axis, she will stay on it and may well come to an intersection at which there is a red light in the direction she has to go; this happens with probability 1/2. We are interested in the number of stops during her walk, that is, in the number of intersections where she has to wait because of a red light.
The Demonstration shows sample paths of the walker for up to 1000 intersections of streets, the number of encountered red lights, and the expected number of red lights.
Snapshots 1, 2, 3: these plots show the results when the starting point is , , or
When the starting point is (n, n) for n = 10, 20, 50, 100, 200, 500, and 1000, the expected number of red lights is 1.76, 2.51, 3.98, 5.63, 7.97, 12.6, and 17.84, respectively. Thus, the expected number of red lights grows slowly, approximately as sqrt(n/π).
The expected number of red lights is easy to calculate with Mathematica by using recurrence relations. Let e(m, n) be the expected number of red lights when starting from (m, n). The recurrence relations are:

e(m, n) = (e(m - 1, n) + e(m, n - 1))/2 for m ≥ 1 and n ≥ 1,

with boundary conditions e(m, 0) = m/2, e(0, n) = n/2, and e(0, 0) = 0.
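The recurrence approach the text mentions can also be evaluated outside Mathematica. A short dynamic-programming sketch in Python (the boundary condition, a red light with probability 1/2 at each remaining intersection once the walker is on an axis, is my reading of the setup described above):

```python
def expected_red_lights(m, n):
    """Expected number of red-light stops for a walker starting m blocks
    east and n blocks north of the goal, always following the green light
    while off the axes."""
    e = [[0.0] * (n + 1) for _ in range(m + 1)]
    # On an axis with k intersections left to cross, each shows a red
    # light in the required direction with probability 1/2.
    for i in range(1, m + 1):
        e[i][0] = i / 2.0
    for j in range(1, n + 1):
        e[0][j] = j / 2.0
    # Off the axes she never waits; she moves each way with probability 1/2.
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            e[i][j] = (e[i - 1][j] + e[i][j - 1]) / 2.0
    return e[m][n]
```

For starting points (10, 10) and (20, 20) this gives about 1.76 and 2.51, matching the expected values quoted in this description.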
The Demonstration is based on problem 18 in [1]. See also [2].

[1] P. J. Nahin, Digital Dice: Computational Solutions to Practical Probability Problems, Princeton: Princeton University Press, 2008.

[2] H. Sagan, "On Pedestrians, City Blocks, and Traffic Lights," Journal of Recreational Mathematics, 21(2), 1989 pp. 116–119.
Name: Micheal N. B.
Date: November 2003
A chicken egg with the shell removed with vinegar is used
as semipermeable membrane to demonstrate to students the movement of
water by osmosis. In addition to a concentrated sugar and distilled
water setup - I would like to include an isotonic setup. If sucrose is
used as the solute,
(1) are the sugar molecules large enough not to pass
into the egg, and
(2) what percent solution would make for an average
I'm not sure about sugar, but a 5% salt solution is about isotonic to the inside of a chicken
egg. I use 20% salt as a hypertonic solution and distilled water as a hypotonic solution.
Click here to return to the Molecular Biology Archives
Update: June 2012
Squirt Gun Pressure
I need information on how air pressure works in a water
gun that is pumped up like the super soakers. Why? Well, I am a 10 year
old doing a science project on figuring out the ideal number of pumps to
use for the best distance and water soakage. I have gathered all of my
data and made my charts and eliminated all the variables I possibly could
(strapping down my waterguns, shooting 5 times at each pump with two guns
that were the same and brand new, windless as possible day, and the same
person measuring and timing throughout the experiment). Now I need back
ground information on air pressure or compressed air that would relate to
this project and I am having a real tough time finding it. Could you
please help me?
Well, it sounds like you have done a pretty good job on the experiment. The
principles of fluid flow are not really complicated, but you can have some
complicated equations to predict your fluid flow. Understand that when I
say a fluid, I mean water, air, ketchup, etc. Almost anything can be
considered a fluid if it has to have a container to have a defined volume.
In simple terms, you will have fluid flow when there is a difference in
pressure between two points. For instance, the pumping action of your water
guns creates a pressure inside the water/pressure chamber. I don't know how
much pressure because that depends on how many pumps you make, how well the
vessel holds pressure, the efficiency of your pump, just to name a few
variables. Now, where you live, the air pressure around you is around 14.7
pounds per square inch (psi). So now you have a pressure difference, say 20
psi in the vessel and 14.7 psi at the discharge location (end) of the squirt
gun. Now, this pressure difference causes a flow of water and thus a
velocity of the water. The greater the pressure difference, the greater the
velocity. Velocity is the measure of distance per time (like miles per hour
in your car). If you assume that the water falls the same distance (you
said that you strapped the guns down, so I assume the height of the guns is
the same), then the time of flight of the water is the same for each gun.
So therefore, the higher the velocity, the farther the water travels over a
set time. Theoretically, the more pumps you make, the farther it should go.
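The reasoning in the last two paragraphs can be sketched with a couple of idealized formulas (a frictionless Bernoulli estimate with assumed example numbers, not measurements from the experiment):

```python
import math

RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2
PSI_TO_PA = 6894.76

def exit_velocity(gauge_pressure_pa):
    """Ideal exit speed from Bernoulli's equation: v = sqrt(2 * dP / rho)."""
    return math.sqrt(2.0 * gauge_pressure_pa / RHO_WATER)

def horizontal_range(gauge_pressure_pa, height_m):
    """Distance a horizontal jet travels before hitting the ground.
    The fall time depends only on the launch height, so a higher exit
    velocity (more pumps) means a longer range."""
    fall_time = math.sqrt(2.0 * height_m / G)
    return exit_velocity(gauge_pressure_pa) * fall_time

# Example pressures from the text: 20 psi inside versus 14.7 psi outside,
# i.e. a gauge pressure of 5.3 psi, fired from an assumed 1 m height.
dp = 5.3 * PSI_TO_PA  # ~36,500 Pa
# exit_velocity(dp) is roughly 8.5 m/s; horizontal_range(dp, 1.0) ~ 3.9 m
```

Doubling the gauge pressure increases the range only by a factor of sqrt(2), one reason extra pumps give diminishing returns even before pump and vessel limits kick in.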
HOWEVER, there is a matter of how good your pump is and how well your vessel
can keep the pressure. Typically the kind of pump built onto the squirt
guns and the vessel are going to have an upper limits on how much pressure
you can pump in, not to mention, a stronger person may be able to get more
pressure in because he/she is stronger. It sounds like you have done a good
job in trying to eliminate as many of these variables as possible, so
chances are you are proving how good (efficient) the pump is per squirt gun.
Now, there are many books out there on this principle, called fluid
mechanics. Most of the books in your public library in the adult section
may be over your head when trying to explain this stuff. Heck, it is even
over my head a lot of the time! Maybe you should see your local librarian
to inquire if they have some juvenile books on how things like pumps and
fluid flow systems work. Explain that you have done this experiment and
realize that you need to know more about fluid flow, hydraulics, and pumps.
See if she can help. If not, maybe give us here at Newton another e-mail.
Good luck with the experiment and keep up the good work. You sound very
intelligent and inquisitive. You'd make a good engineer when you get older.
Dr C. Murphy
Click here to return to the Physics Archives
Update: June 2012
An uncommonly thin snowpack that year had been chased by a windy spring that came weeks early, hot and dry. Forest fires and heat waves soon followed. Farm crops withered, suburban lawns crinkled and foreheads wrinkled at this long-ignored limit to growth. Water was front-page news.
Dillon Reservoir was the emblem in Colorado for that touchy, grouchy summer. Blue segued to brown, water to mud and then dry sands. Hiking down from timberline that hot afternoon I looked up to see a dust devil spinning through an area once marked by sailboats. It was like hearing a funeral dirge at a wedding.
Climate scientists caution against making too much out of any one year when talking about global warming. Still, in looking ahead at a planet redefined by warmth, the future they describe in Colorado and the Southwest looks and feels very much like 2002.
This is a land where aridity rules. Any map reveals as much. Large expanses of land from Denver to Los Angeles are public lands. Rugged topography and short-growing seasons explain why some of these lands were not homesteaded, but the larger reason is the lack of native moisture.
Oh, there is water, but mostly – about 85 percent in the West – it comes from snow in the high mountains. This snowpack is like a giant reservoir for the farms and cities of Denver, Las Vegas and San Diego. And the focal point for this civilization is the Colorado River and its tributaries.
But, as this drought that has afflicted the region since 1999 has made clear, the Colorado River is a resource badly strained. No extra water made it to the Sea of Cortez even during wet years. Now, water managers are getting a better idea of how much drier this region was for much of the last 1,000 years. This past century was actually unusually wet.
Whether this current drought is a result of a normal climatic fluctuation or is an early signal of global warming really doesn't matter. Either way, a fundamentally new way of looking at the river is taking hold.
The new reality is crystallized in photos of Lake Powell, now sitting only 40 percent full, or the other big storage vessel, Lake Mead, at 54 percent of capacity.
"The Colorado River is the canary in the coal mine for global warming," says Eric Kuhn, general manager of the Glenwood Springs-based Colorado River Water Conservation District. "You have a system where the demand and supply are so close that a small change of 10 percent in the annual flow at Lee's Ferry (in Arizona, the divide between upper and lower basin states) could cause a major disruption. A 10 percent change wouldn't make much difference on the Fraser River in British Columbia or on the Missouri."
Already, water managers are trying to imagine what this disruption would look like. Allocation of the water among the states is governed by the Colorado River Water Compact of 1922. That compact was signed after a period of what was, in retrospect, extraordinary wetness. The river has carried the same volume only occasionally since then.
Even so, there were no major problems except – from the perspective of politicians in Colorado – that California was "stealing" Colorado's unused share of water. The presumption in Colorado was that this water would eventually be used to feed the growing cities along the Front Range and otherwise enable economic development.
That fear is now being turned upside down. Given how little water the river is now carrying each year, arguably the compact governing the river will call upon Colorado and other upper-basin states to allow more water to flow down to California, Nevada and Arizona.
This possibility has the full attention of Glenn Porzak, a Boulder-based water attorney who represents most major water groups and companies in Summit County and the Eagle Valley.
"Regardless of whether it's your normal climatic cycle or the result of global warming, the affect is the same," says Porzak. "One need only look at what is happening at Lake Powell to see that if things continue, we are going to enter an era of Colorado River Compact calls, which heretofore had not occurred, and that will dramatically change the landscape. I really think people need to pay attention."
The link between the current drought and global warming is still unclear, says Brad Udall, managing director of the University of Colorado-National Oceanic and Atmospheric Administration Western Water Assessment. Udall notes new research that shows droughts of the 13th century – about the time the Ancestral Puebloans abandoned Mesa Verde and Chaco Canyon – were decades in length.
"There's no way to attribute this drought to climate change," he says.
Still, water managers are increasingly thinking about the specter of global warming. "I think it's in the back of everybody's mind," acknowledges Porzak.
DANGER IN DEGREES
Any way you cut it, global warming will redefine the landscape of Colorado and the Southwest. Hotter is the most fundamental difference. It makes winters shorter and summers longer. During those longer summers, even if precipitation remains the same, warmer temperatures leave forests drier and hence more susceptible to fires.
Precipitation is less clear. But the most important thing to remember is that if temperatures rise, precipitation must also rise – just for water supplies to stay equal. That's because evapotranspiration – the return of water to the atmosphere through evaporation and through respiration by plants – will reduce the water that reaches streams and reservoirs by 8 to 20 percent.
Shorter, warmer winters spell changes. More rain, which drains more rapidly than snow, can be expected. But with shorter winters, spring comes earlier, with runoffs cresting in the rivers not in June, as has been the case, but in May or even April.
Already, anecdotal evidence of such changes is found. Records in Aspen show the frost-free season has expanded about two weeks into spring as compared with a half-century ago. Dillon Reservoir during the last decade has lost its winter ice more rapidly. And peak runoff in the Colorado River below Glenwood Springs is occurring a few days earlier.
On the West Coast, changes have been even more profound in response to the increase of 1.44 degrees Fahrenheit during the last half-century. The peak of the annual runoff in the Sierra Nevadas now comes as much as three weeks earlier than it did in 1948.
"The mountain ranges are essentially draining and drying earlier," said Dan Cayan, a climate researcher with the Scripps Institution of Oceanography in La Jolla, Calif.
Recent studies project that the heat will cause smaller snowpacks across the West. Cayan and U.S. Geological Survey researcher Noah Knowles concluded that a temperature increase of about four degrees Fahrenheit would reduce the Sierra Nevada snowpack by a third by 2060, primarily at lower elevations, and halve it by 2090.
Dams can hold back some of the runoff. They already do. However, in the Cascade Mountains of Oregon and Washington, artificial reservoirs store only about 10 percent of the annual flow. California has more storage, but not enough given this drier, hotter future.
Colorado has a different story. It has more dams to hold back the spring snowmelt, and it also has more rain in summer. Still, if global warming causes more rain and less snow, that will make the existing water infrastructure less functional, points out the River District's Kuhn.
The reduced snowpack forecast for the Rocky Mountains is less than on the West Coast, but one study foresees 30 percent less snow. And while the more hurried pace of runoff in Colorado lags that of the West Coast, runoff can be expected to be four weeks earlier within 50 to 90 years, says Kevin Trenberth, who heads the climate change analysis unit at the National Center for Atmospheric Research in Boulder.
All of this has water managers thinking more dams for Colorado. "My attitude at this juncture is there is no such thing as too much storage," says Porzak, whose clients include Vail Resorts.
At the River District in Glenwood, Kuhn agrees, but also notes that not all dam sites are equal. At Hoover Dam, Lake Mead loses about one million acre-feet of water each year to evaporation, and Powell loses about half that much. That's more than a quarter of the water in the river in a drought year.
"The question is do we have the storage buckets in the right places," says Kuhn. "My view is that you will see additional storage, but not necessarily large main-stem storage that evaporates."
HOTTER MEANS DRIER
For Gerald Meehl, a senior scientist at Boulder's NCAR, this climate of the future is both professional and personal. He was among the 120 scientists who wrote the 2001 report of the Intergovernmental Panel on Climate Change, which found a strong consensus among scientists that the fingerprints of man had become the dominant influence on climate change.
Meehl was reared on a dryland farm near Hudson, about 30 miles northeast of Denver. There, the winter wheat crop depends entirely upon natural precipitation, not irrigation. Even now, wheat farming is a crap shoot, and in the future the odds will worsen.
"The global models for quite a number of years have always projected that there would be a tendency toward drier conditions in the summer in the mid-continental regions, which would include Colorado," he explains. "This is due to warmer temperatures."
In other words, even if thundershowers are just as frequent 30 years from now, the soil will be drier because of warmer temperatures. That bodes ill not only for wheat but also for other crops in Colorado.
Of course, that's just the probability. Meehl long ago left the farm, but he still has relatives who till the soil. Like most of us, they are interested less in long-term climate shift than in next year's weather.
"My farmer uncles always make fun of me, because they ask what will happen next winter, and I always will give them a certain range of uncertainty," reports Meehl. "They say – you guys never give a straight answer."
ATTENTION OF FARMERS
But the odds are high enough that climate change has the attention not only of academics and environmentalists, but also more mainstream groups such as the Rocky Mountain Farmers Union.
John Stencel, president of the organization, which includes 23,000 families in Colorado, Wyoming and New Mexico, says many older members who can remember droughts as far back as the 1930s believe something new and different is now occurring. "They say that weather fluctuations are greater, more severe," he reports.
Are man-caused greenhouse gases to blame? Rank-and-file members are not necessarily persuaded of the connection, but Stencel is. "There has to be something to what a lot of scientists are saying," he says. | <urn:uuid:ada07d89-043a-4950-81d8-829da82215bf> | 3.015625 | 2,265 | Truncated | Science & Tech. | 48.873976 |
This illustration shows the route traveled by oil leaving the subseafloor reservoir as it travels through the water column to the surface, and ultimately falls back to the seafloor. The oil remaining after weathering falls in a plume shape onto the seafloor, where it remains in the sediment.
(Jack Cook, Woods Hole Oceanographic Institution) | <urn:uuid:74f25ac2-629d-4ca7-9985-ad0de5a1d626> | 2.703125 | 77 | Knowledge Article | Science & Tech. | 29.03 |
Droughts are nothing new to Wisconsin and the United States. In fact, nearly every year, at least one portion of the country experiences drought, whether it is moderate, severe, or extreme. Droughts are caused by a long period of little to no rainfall. They are quiet – no damaged buildings, uprooted trees, or twisted scraps of metal. So it may be surprising to know widespread droughts can cause as much damage as a powerful hurricane making landfall near a major coastal city.
As of early October 2012, no official dollar amount has yet been assigned to the 2012 drought. But a few experts around the country have come up with a ballpark figure after analyzing the impacts. Chris Hurt, an economist at Purdue University, was recently quoted in The Madison Courier suggesting the 2012 drought will cost $77 billion. If that is true, when adjusting for inflation, the drought of 2012 will cost nearly the same amount as the drought of 1988. Some economists argue it will cost more. Regardless, it is unbelievable to think how a long-term weather pattern like this can put us in such a bad position.
Obviously, it is too late for farmers who hoped for a nice crop. But there is help available through the USDA’s Drought and Drought Assistance program. If you, or someone you know needs more information, please don’t hesitate to share this post.
A bit of good news: some decent rain is expected this weekend! (October 13 & 14) The forecast models have been consistent in bringing an area of low pressure through the middle U.S. and into the Great Lakes region. Rain totals could range from 1/4″ to 1″+, depending on how much moisture the low draws north from the Gulf of Mexico. This will certainly help our drought, but it will not eliminate it. We need frequent periods of rainfall and a nice snow cover this winter. Speaking of winter, a month ago, indications were El Niño would develop in the eastern Pacific, forcing our weather pattern into a drier and milder state during the upcoming season. El Niño has not yet formed, meaning we could see more of a "normal" winter in the area. Time will certainly provide all the answers. Much of our weather also depends on ocean oscillations.
Thanks for reading and keep visiting for more from Beyond the Forecast…
This post was written by Nick Grunseth on October 10, 2012 | <urn:uuid:509a7c50-7582-40c3-8094-03118d6d541a> | 3.28125 | 497 | Personal Blog | Science & Tech. | 59.773208 |
|photo credit: woodleywonderworks|
Well, it’s that time of year again… we have started to get Attacus atlas, aka Atlas moths, YEAH!!!! This is always an exciting time for me because I get to tell everyone who keeps asking me that they are finally here! Last week, I received 60 atlas moth cocoons from Malaysia and the Philippines. Unlike the butterflies we receive on a regular basis that all emerge within a few weeks, the atlas moths should be emerging over a few months, so we should have them for a while.
The Atlas moth belongs to the family of giant silk moths, Saturniidae. They are considered to be the largest moths in the world in terms of wing surface area. These impressive moths can only be found naturally in Southeast Asia, where they are very common. Their name comes from either the Titan of Greek mythology or from the striking pattern on their wings, which resembles a map. If you look at the tips of the forewings, they resemble a snake's head, which makes for great predator protection.
The females are larger and have bigger abdomens. The males are smaller and have longer antennae.
Each moth starts its life as a beautiful, emerald-colored caterpillar. The larvae feed on a wide variety of food plants, and may even wander from one to another. As it gets bigger it develops a more waxy, whitish-green coloration. It then spins a silken cocoon to protect itself and pupates inside (this is different from butterflies, which develop inside a chrysalis, not a cocoon). The adults, as in other Saturniids, have no mouthparts whatsoever, so they cannot feed. They survive off the fat reserves they build up as caterpillars.
photo credit: Andreanna
Moths fly at night, so you may see these large moths resting on trees in the Butterfly Center during the day, paying no attention to the butterflies fluttering all around them. I try really hard each time a moth emerges to place it in a very obvious place so people can see them. Many people think they are fake because they sit so still, but now you know they are not!
Some other moths that belong in the Saturniidae family that you can find around here include the cecropia moth (Hyalophora cecropia), polyphemus moth (Antheraea polyphemus), and the luna moth (Actias luna). These moths aren’t as big as the atlas moth, but they are big when compared to other moths and butterflies in Texas.
photo credit: Aunt Owwee
I hope you get a chance to stop off and see our wonderful giants and keep a look out for the native moths, they are a wonder to see too! | <urn:uuid:7af91dfe-f211-4a9c-8e00-be65f3210926> | 2.78125 | 599 | Personal Blog | Science & Tech. | 53.519609 |
The only information one has about a particle (or a system, for that matter) in quantum physics is its wavefunction. The wavefunction is related directly to the probability distribution of the particle. In other words, the wavefunction allows one to find the probability that the particle will be found in a given place if you were to measure its position. The shape of the wavefunction itself is harder to figure out; the Schrodinger equation can be nasty to solve.
When one actually performs the measurement (in this case, of position), one gets a specific location as a result. If you redo the measurement immediately afterwards, you should measure the same position again. This means that the wavefunction of that particle has collapsed. In other words, instead of a complicated function with lots of peaks and valleys, the wavefunction is now a sharp spike at the location where you measured the particle. The probability of finding the particle there is now 100%, and all information about what the wavefunction was is now gone.
Let the system sit after the first measurement, though, and due to the uncertainty principle the wavefunction begins to creep outward again, slowly... until you again have a probability distribution, with no sure knowledge of where you'll find the particle. Back to square one!
Collapsed wavefunctions don't stay that way | <urn:uuid:068c1311-94b6-4f0a-8605-75fb6bcaa225> | 2.984375 | 281 | Knowledge Article | Science & Tech. | 39.785055 |
by Katie Bowell, Curator of Cultural Interpretation
The International Institute for Species Exploration and an international team of taxonomists (the people who classify organisms) just released their list of the top ten new species discovered in 2009 and boy oh boy, it’s a doozy!
The winners this year include the Green bomber worm (Swima bombiviridis) who can shed its bioluminescent green gills, the Aiteng sea slug (Aiteng ater), who is now the head of a whole new family of insect-eating sea slugs (sea slugs and insects in one animal description — I love it), and Omar’s banded knifefish (Gymnotus omarorum), a fish that scientists already knew about, but didn’t know they had misidentified as another type of banded knifefish.
I think my favorite for this year has to be the killer sponge, Chondrocladia tubiformes. That’s right, I said “killer.” Well, technically it’s just carnivorous, but that’s still pretty terrifying. What’s so cool is that 20 years ago scientists didn’t know there were carnivorous sponges, and now they’re popping up everywhere. And, fossil evidence indicates that organisms similar to these carnivorous sponges may have been around as far back as the Jurassic period!
To date, scientists estimate that almost 2 million species have been identified, but there are probably anywhere from 10-15 million species of animals and plants in the world. In 2008, 18,225 new species of animals, plants, algae, fungi and microbes were found, so there’s still a lot of work to be done.
Interested in the top species discovered in 2008? Check out last year’s list here. My favorite is definitely a tie between the World’s Longest Insect and the bacteria that lives in hairspray. I guess we should just be happy that the world’s longest insect doesn’t live in hairspray – that could make up-dos a little more awkward… | <urn:uuid:e78b7194-5fb8-4eb7-9d64-5b8bf37c20a7> | 3.0625 | 454 | Personal Blog | Science & Tech. | 45.249213 |
Hello Everyone, please help me with this dynamics question!
A boy pulls a sled across ice by exerting a force of F kg wt. The mass of the sled is m kg, the friction between sled and ice is negligible but the sled is subject to air resistance in kg wt of magnitude k times the speed. If the sled starts from rest find expressions for:
i) Velocity as a function of time
ii) Displacement as a function of velocity | <urn:uuid:cc6de0ff-2637-444e-a02f-13d5216ede38> | 2.953125 | 98 | Q&A Forum | Science & Tech. | 61.328333 |
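(A sketch of how an answer to the question above can be cross-checked, with made-up values for F, k and m: taking g ≈ 9.8 m/s² to convert the kg-wt forces to newtons, Newton's second law gives m dv/dt = g(F − kv), so for part (i) v(t) = (F/k)(1 − e^(−kgt/m)); integrating m·v·dv/dx = g(F − kv) then gives, for part (ii), x(v) = (m/(gk))[(F/k)·ln(F/(F − kv)) − v].)

```python
import math

def v_analytic(t, F, k, m, g=9.8):
    """v(t) = (F/k)*(1 - exp(-k*g*t/m)), starting from rest.

    F and the resistance k*v are given in kg wt, so each is multiplied
    by g to convert to newtons: m dv/dt = g*(F - k*v).
    """
    return (F / k) * (1.0 - math.exp(-k * g * t / m))

def v_numeric(t, F, k, m, g=9.8, steps=100_000):
    """Euler integration of m dv/dt = g*(F - k*v), as a sanity check."""
    dt, v = t / steps, 0.0
    for _ in range(steps):
        v += dt * g * (F - k * v) / m
    return v

# Illustrative (made-up) values: F = 5 kg wt, k = 0.5, m = 20 kg
va = v_analytic(2.0, 5.0, 0.5, 20.0)
vn = v_numeric(2.0, 5.0, 0.5, 20.0)
# The two agree closely, and both approach the terminal speed F/k.
```

Note the terminal speed F/k is independent of m and g: the sled stops accelerating once air resistance balances the pull.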
My question is regarding the interference of sound waves of slightly different frequencies, which indeed produces beats. If three tuning forks of frequencies, say, 401 Hz, 402 Hz and 403 Hz are vibrated simultaneously, then what will be the beat frequency heard?
Since the first two forks differ by one Hz, and the second two do also, each pair (401 and 402, plus 402 and 403) will generate a 1 Hz beat. The difference in frequency between the first and third (401 and 403) is 2 Hz; they will generate a 2 Hz beat. But since the 1 beat per second fits neatly into the 2 beats per second, you will just hear 2 beats per second.
Beat frequencies in general are the sum and difference of the two frequencies being mixed by a nonlinear process. For 401 Hz and 402 Hz, the heterodyne frequencies are 1 Hz and 803 Hz. The nonlinearity in your hearing is the mind's estimation of the sound intensity envelope. That's a non-linear function, like rectifying a sine wave or using a square-law detector to get the RMS (root-mean-square) amplitude. But it is slow (<30 Hz), so the 803 Hz will probably be lost / not created / not perceived. The 1 Hz will be perceived as an amplitude modulation of an approximately 400 Hz tone, getting louder and softer with 1/second periodicity and a simple sine-wave envelope shape.
Jumbling three frequencies together definitely makes it less simple. Then you have:
401 Hz & 402 Hz -> 1 Hz,
402 Hz & 403 Hz -> 1 Hz,
401 Hz & 403 Hz -> 2 Hz.
The 2 Hz will be quite noticeable as a jauntiness, or non-sinusoidal character, within the 1 Hz louder-softer cycle. It is even possible, depending on the relative intensities and phases of the 401/402/403 Hz signals, to have only a 2 Hz beat frequency, that also being a simple sinusoidal envelope. That would happen when the 402 Hz was very weak or absent, or maybe if the 1 Hz beats were of the proper phase to cancel each other.
If the two 1 Hz frequency separations are slightly different (i.e., 1.0 Hz and 0.9 Hz), as would often happen with a string instrument rather than digital electronics, then the sound would slowly drift through a range of perceptually differing combinations of the 1 Hz and 2 Hz beat frequencies. At one moment it would seem to be lopsided 1 Hz modulation; then about 5 seconds later it might sound more like 2 Hz ripples on top of a larger constant tone; then back to varyingly lopsided 1 Hz beats. The transitions between all those would be gradual.
If a second stage of nonlinear signal processing is somehow added, such as strong electronic distortion (i.e., an electric guitar's fuzz box), then it is quite possible for the 1 Hz and 2 Hz to beat together, making some 3 Hz, and even higher harmonics of 1 Hz running to above 10 Hz.
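(As a quick numerical sketch, you can add the three sinusoids and track the peak amplitude over short windows; the windowed peak is a crude stand-in for the ear's slow intensity detector. Equal amplitudes and zero phases are assumed here.)

```python
import math

def summed_signal(t, freqs, amps=None, phases=None):
    """Sum of sinusoids: the 'three tuning forks' signal at time t."""
    amps = amps or [1.0] * len(freqs)
    phases = phases or [0.0] * len(freqs)
    return sum(a * math.sin(2 * math.pi * f * t + p)
               for f, a, p in zip(freqs, amps, phases))

# Sample one second of the 401 + 402 + 403 Hz mixture, then take the
# peak amplitude in successive 10 ms windows as a crude envelope.
rate = 20_000
samples = [summed_signal(n / rate, [401.0, 402.0, 403.0]) for n in range(rate)]
window = rate // 100
envelope = [max(abs(s) for s in samples[i:i + window])
            for i in range(0, rate, window)]
# The envelope repeats once per second, but it is not a simple sinusoid:
# between the big maxima sits a smaller secondary hump -- the 2 Hz beat
# between 401 and 403 Hz riding on the 1 Hz cycle.
```

With zero phases the three sines collapse to sin(2π·402t)·(1 + 2cos(2πt)), so the envelope swings between 3 and 0 — matching the "lopsided 1 Hz modulation" described above.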
Click here to return to the Physics Archives
Update: June 2012 | <urn:uuid:3ad7e785-1dc9-4fc1-bba9-9e944d19bbb3> | 2.921875 | 691 | Q&A Forum | Science & Tech. | 66.323911 |
The sample experiments shown here illustrate the behavior of fluids, flames, and mechanical systems in microgravity. After each description, a picture of the experiment, followed by its appearance in normal gravity (1g) and then microgravity (ug), is given.
The weight (W) of a body of mass (m) in a gravitational field of strength (g) is given by W = mg. In microgravity, g is virtually eliminated and therefore the weight of the body is also eliminated. To demonstrate this principle, a mass on a scale is dropped. The two counterbalancing forces in this experiment are (1) the gravitational force acting on the mass and (2) the force of the spring in the scale. During the drop, g tends toward zero and the restoring spring force pushes the indicator from the original weight of the body toward zero. Shown in figure 3.
Fig. 3. - Weightlessness (normal gravity shown in the image on the left and microgravity shown in the video on the right).
The fluid interface experiment highlights the role of surface tension in the absence of gravity. In 1g, the effect of surface tension is evident only near the container walls and most of the surface looks flat. In reduced gravity, surface tension leads the liquid to assume a very different shape. Specifically, the liquid creeps up the walls of the container by capillary forces; this is most evident in the corners. Given enough ug time, the liquid would wet the walls of the vessel, leaving a bubble of air in the center. Shown in figure 4.
Fig. 4. - Fluid Interface (normal gravity shown in the image on the left and microgravity shown in the video on the right).
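(A rough, illustrative calculation of why capillary forces take over: for a liquid in a narrow tube, Jurin's law gives an equilibrium rise height h = 2γ·cos(θ)/(ρ·g·r), which grows without bound as g falls. The values below are typical handbook numbers for water, used here only as assumptions.)

```python
import math

def capillary_rise(gamma, theta_deg, rho, g, r):
    """Jurin's law: equilibrium capillary rise h = 2*gamma*cos(theta)/(rho*g*r)."""
    return 2.0 * gamma * math.cos(math.radians(theta_deg)) / (rho * g * r)

# Water in a 1 mm radius tube: surface tension ~0.072 N/m, contact angle ~0 deg
h_1g = capillary_rise(0.072, 0.0, 1000.0, 9.81, 1e-3)     # about 1.5 cm in 1g
h_ug = capillary_rise(0.072, 0.0, 1000.0, 9.81e-4, 1e-3)  # g reduced 10,000-fold
# As g -> 0 the equilibrium height diverges: surface tension wins, and the
# liquid climbs the container walls just as in the drop experiment.
```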
The candle flame experiment demonstrates the effect of buoyant convection, and its absence, on combustion phenomena. In normal gravity (1g), the combustion gases are much hotter, and thus lighter, than the surrounding air. Buoyancy causes the hotter, less dense combustion gases to rise, giving the candle flame its vertically elongated, conical shape. However, during the drop experiment, the hot gas expands in all directions. As a result, the flame becomes shorter and wider. In longer periods of reduced gravity, the flame becomes spherical and entirely blue. This was observed in a candle flame experiment performed on the Space Shuttle (USML-1/STS-50, June-July, 1992). Shown in figure 5.
Fig. 5. - Candle Flame (normal gravity is shown in the image on the left and microgravity is shown in the video on the right).
Two magnets are oriented with like polarities opposing one another (i.e., N-N or S-S). The lower magnet is fixed to the experiment platform while the upper magnet is freely supported on a lever arm. In 1g, the upper magnet is levitated by the magnetic repulsive force, which is balanced by the gravitational force pulling the upper magnet downward. During the drop, the magnetic repulsion becomes dominant and the upper magnet moves rapidly away from the lower magnet. Shown in figure 6. | <urn:uuid:1e82e1f1-79a7-4127-98c3-26228aa13a8e> | 3.984375 | 677 | Documentation | Science & Tech. | 51.004242 |
Biodiversity is the degree of variation of life forms within a given species, ecosystem, biome, or an entire planet. Biodiversity is a measure of the health of ecosystems. Biodiversity is in part a function of climate. In terrestrial habitats, tropical regions are typically rich whereas polar regions support fewer species. Researchers have now shown that part of Australia’s rich plant diversity was wiped out by the ice ages, proving that extinction, instead of evolution, can influence biodiversity. The research led by the University of Melbourne and University of Tasmania has shown that plant diversity in South East Australia was as rich as some of the most diverse places in the world, and that most of these species went extinct during the ice ages, probably about one million years ago.
Biodiversity is the variety of all living things; the different plants, animals and micro organisms, the genetic information they contain and the ecosystems they form. Biodiversity is usually explored at three levels - genetic diversity, species diversity and ecosystem diversity. These three levels work together to create the complexity of life on Earth.
Since life began on Earth, five major mass extinctions and several minor events have led to large and sudden drops in biodiversity. The Phanerozoic eon (the last 540 million years) marked a rapid growth in biodiversity via the Cambrian explosion—a period during which the majority of multicellular phyla first appeared. The next 400 million years included repeated, massive biodiversity losses classified as mass extinction events.
Dr Sniderman of the University of Melbourne’s School of Earth Sciences said the new report findings show extinction is just as important to diversity of organisms as evolution.
"Traditionally scientists believed some places have more species than others because species evolved more rapidly in these places. We have overthrown this theory, which emphasizes evolution, by showing that extinction may be more important," he said.
The study compared two regions of Southern Australia and South Africa.
"South-western Australia has a huge diversity of tough-leaved shrubs and trees such as eucalypts, Banksia, Grevilleas and Acacias, making it one of the most biodiverse places on earth," Dr Sniderman said.
"The southern tip of South Africa is even richer, with astonishing numbers of similar kinds of plants like proteas and ericas."
Scientists have long maintained that this diversity is somehow related to the poor soils and dry summers of these places.
For the study researchers analysed plant fossils that accumulated in an ancient lake in South Eastern Australia. They found the region had at least as many tough-leaved plants 1.5 million years ago as Western Australia and South Africa do today.
"As Australia dried out over the past several million years, rainforest plants largely disappeared from most of the continent," said Dr Sniderman.
"It has been thought that this drying trend allowed Australia's characteristic tough-leaved plants to expand and become more diverse. We have shown that the climate variability of the ice ages not only drove rainforest plants to extinction but also a remarkable number of tough-leaved, shrubby plants," he said.
Dr Greg Jordan of the School of Plant Sciences at the University of Tasmanian said not only has the study overturned current thought on the role of extinction in plant diversity, it has implications for understanding how Australian plant diversity will deal with current and future climate change.
"The species that went extinct in SE Australia during the ice ages were likely to be the ones most sensitive to rapid climate change, meaning that the species that now grow in eastern Australia may be more capable of tolerating rapid changes than predicted by current science," he said.
"However, the species in hotspots of diversity like Western Australia may be much more sensitive to future climate change, because they have been protected from past climate changes."
The existence of a global carrying capacity, limiting the amount of life that can live at once, has been debated, as has the question of whether such a limit would also cap the number of species. On the other hand, there are theories of extinctions due to overpopulation, sudden environmental changes, and similar effects.
For further information see Biodiversity Extinction.
Biodiverse Field image via Wikipedia. | <urn:uuid:d576f567-94b6-4429-adaf-3d832767aa8b> | 4.125 | 872 | Knowledge Article | Science & Tech. | 30.097097 |
New 'walking' fishes discovered in Gulf oil-spill zone
Two new fish species — with pancake-flat bodies, wiggling lures on their faces, and elbowed fins for "walking" on the seafloor — have been discovered in the path of spewing Gulf of Mexico oil.
One of these pancake batfishes lives in the northern Gulf where oil is already spreading from the Deepwater Horizon blowout, says ichthyologist Prosanta Chakrabarty of Louisiana State University's Museum of Natural Sciences in Baton Rouge, a codiscoverer of the species.
Chakrabarty calls this narrowly distributed species the Louisiana pancake batfish. Its full scientific name, in the genus Halieutichthys, hasn't even been published yet. The oil's impact on the soon-to-be new species isn't clear. "All we can say is that its habitat is threatened," Chakrabarty says.
The other newly identified pancake batfish has a somewhat broader range. Yet all pancake batfishes, now three species in total, live in water that could be fouled if Gulf oil heavily taints the Loop Current off Florida's west coast.
Louisiana pancake batfish grow only about "that big," Chakrabarty says, making a circle of his thumb and forefinger. They're as thick as an exceptionally fluffy pancake. Fins that work almost like stubby arms prop them up or let them waddle along the bottom.
Unlikely as it may sound, these little squashed-looking fishes are anglerfish, a group most people know from nature documentaries depicting these chunky, fanged creatures of the deep ocean. Anglerfishes get their name from projections that dangle somewhere in the vicinity of their mouths and invite overly curious passersby in for lunch. Pancake batfishes have a lure too, a stubby projection that twitches where a nose would be on a mammal face. | <urn:uuid:ad37b0b5-c837-47c9-8957-3af7f5c18b3f> | 3.15625 | 406 | Truncated | Science & Tech. | 40.101185 |
Which "the" models would those be?
The models the IPCC used
in their report. This graphic shows the simulated forcings of TSI, Volcanism, Well mixed GHGs, ozone, aerosol and the sum of all of the forcings combined. Note that the Greenhouse Signature as PREDICTED by the models looks TOTALLY different than if it were forced by the sun.
Figure 9.1. Zonal mean atmospheric temperature change from 1890 to 1999 (°C per century) as simulated by the PCM model from (a) solar forcing, (b) volcanoes, (c) well-mixed greenhouse gases, (d) tropospheric and stratospheric ozone changes, (e) direct sulphate aerosol forcing and (f) the sum of all forcings. Plot is from 1,000 hPa to 10 hPa (shown on left scale) and from 0 km to 30 km (shown on right). See Appendix 9.C for additional information. Based on Santer et al. (2003a).
So the Greenhouse signature is UNIQUE according to this IPCC graphic, and we SHOULD see more warming occur in the mid to upper troposphere than at the surface.
Other models have the SAME exact signature for GHG warming, with an area in the mid to upper troposphere in the Tropics warming faster than everywhere else.
Unfortunately for the GHG theory, temperatures in this area in the mid to upper troposphere in the Tropics have remained relatively constant as surface temperatures increased.
This means GHGs are not the cause of the recent warming, since the Greenhouse signature portrayed in the IPCC models is not present in reality.
So you are telling us that the predicted mean forcing over a century level is refuted because you do not see a continuing increase in the final 20 year period temperature? | <urn:uuid:be600b42-c3ae-4d4d-8766-45f8867a9464> | 3.265625 | 383 | Comment Section | Science & Tech. | 52.642353 |
Tell the audience only that you're immersing a ping-pong ball in liquid nitrogen. Ask them to predict what will happen (a good guess would be that it will collapse in on itself). The unexpected result will give them a good opportunity to apply some bits of knowledge they possess (depending on their age: that liquid nitrogen is very cold, and at its boiling point; that a gas has a much greater volume than a liquid; how a jet works/conservation of momentum; that water droplets are visible but water vapor is not) and to analyze the problem (they should be able to figure out that the ball has a hole in it, and that the hole has particular characteristics). You can guide them to the solution by being as vague or explicit as you need to be based on how much time you have, but I recommend allowing them to closely inspect the ball only once they have deduced the existence of the hole. Temporize by repeating the demonstration.
Disclaimer: I have no idea what rate the ball is spinning at, and 10,000 rpm is certainly gross exaggeration, but we are definitely talking some serious rotational velocity. It speeds up as the liquid nitrogen inside is consumed.
Step 1: Equipment
ping-pong ball ~ pin ~ marker pen ~ tongs ~ liquid nitrogen
Liquid nitrogen is at its boiling point of -196ºC. It's dangerous, but only on prolonged contact with skin (causes frostbite) or if confined (it will explode its way out of the vessel). Handle it with respect and in the right containers (stainless steel dewars), and wear appropriate clothing. Getting splashed with liquid nitrogen is not a problem because you are protected by the Leidenfrost effect. Getting more than splashed can cause serious burns. Companies like Praxair and Airgas sell it (you'll need an appropriate vessel), and universities always have a lot on hand in science departments. | <urn:uuid:d3029c13-d289-42a7-a523-4cf0e3bba578> | 3.09375 | 396 | Tutorial | Science & Tech. | 45.16851 |
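(The "gas has a much greater volume than a liquid" point above can be put in rough numbers using approximate handbook densities; the exact ratio depends on temperature and pressure, so treat this as a back-of-the-envelope sketch.)

```python
def expansion_ratio(rho_liquid, rho_gas):
    """Volume ratio when a liquid boils off: V_gas / V_liquid = rho_liquid / rho_gas."""
    return rho_liquid / rho_gas

# Liquid nitrogen ~807 kg/m^3; N2 gas near room temperature ~1.16 kg/m^3
ratio = expansion_ratio(807.0, 1.16)
# Roughly a 700-fold expansion -- which is why confined liquid nitrogen
# bursts its container, and why a pinhole in the ball sustains a strong jet.
```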
Country: United States
Date: Summer 2009
Are mutations [like the increase in number of limbs on a frog] an example of an increase in genetic information to a genome? If not, what is it?
A lot of the terms you used seem to be used incorrectly, so I want to start with some definitions.
First, a mutation refers to a change in genetic information (the actual base pairs of DNA). Traits like "number of legs" or "color of eyes" result from genes, which are made up of DNA. A change in the DNA sequence (also called the genotype) can result in a different trait (although not always). However, the trait itself would not be called a mutation -- the trait is what results from the mutation.
Also, there is not a direct link between "amount of DNA" and "number of [whatever]" trait. When a frog is born with double limbs, it does not mean that the frog's genome doubled, or that the "leg gene" doubled. More than likely there was some problem in the frog's development (which could be genetic or could be some environmental factor). There is not a one-to-one relationship between a trait and a gene; for example, humans have two eyes, but that doesn't mean we have two "eye genes". Likewise, we don't have 100,000 copies of a "hair" gene for every hair on our body, etc.
Last, I am stumbling a little bit over one of the terms you used -- "an increase in
genetic information" is problematic to define.
If you simply mean the number of base pairs, than increases in genetic information
happen all the time -- mutations can cause duplications of genetic information,
resulting in more base pairs. These mutations can also result in evolutionary
advantages (for example, read about color vision --
I could imagine trying to define "an increase in genetic information" related to
phenotypes (expressed traits) or some qualitative or quantitative assessment of the
'advantage' of the traits. All of these have problems though too. More mathematically,
one can describe the complexity of a string of digits, although there is no direct
relationship between mathematical complexity of genetic code and biological traits.
As a hypothetical example, let's suppose you combine a mutation that duplicates a
gene, and some random mutation. Let's assume this process keeps occurring without
natural selection. Over time you would build a great deal of (mathematical) complexity.
The pressure of natural selection would serve to reduce the frequency of this, but it
does happen (see vision, above).
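To make that hypothetical concrete, here is a small sketch (my own toy construction, not something from the original exchange) that repeatedly duplicates a stretch of a random "genome" and sprinkles in point mutations, using compressed length as a crude, computable stand-in for mathematical complexity:

```python
import random
import zlib

random.seed(1)
bases = "ACGT"
genome = "".join(random.choice(bases) for _ in range(200))

def complexity(seq):
    # Compressed length: a rough, computable proxy for the
    # mathematical (Kolmogorov-style) complexity of a string.
    return len(zlib.compress(seq.encode()))

start = complexity(genome)

for _ in range(20):
    # Duplication mutation: copy a 50-base stretch to the end...
    i = random.randrange(len(genome) - 50)
    genome += genome[i:i + 50]
    # ...followed by a few random point mutations anywhere.
    for _ in range(10):
        j = random.randrange(len(genome))
        genome = genome[:j] + random.choice(bases) + genome[j + 1:]

end = complexity(genome)
print(len(genome), start < end)  # → 1200 True
```

Without any selection acting, the duplicate-and-mutate loop steadily grows both the sequence and its compressed size, which is the point made above.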
I do recall having heard a line of reasoning based on a misapplication of the second
law of thermodynamics (it dealt with entropy and information theory) to argue that
biological diversity cannot emerge without God. After reading your question, I did
a little internet searching and re-discovered some of these claims. The best short
answer I can give you is that this line of reasoning is fundamentally flawed.
"Entropy" is *not* a synonym with "randomness" or "disorder". I'm not sure if this
is what you were asking about, so feel free to respond if you want more information
on this topic.
Hope this helps,
Update: June 2012
The Scanning Tunneling Microscope
The scanning tunneling microscope (STM) is a type of electron microscope that shows three-dimensional images of a sample. In the STM, the structure of a surface is studied using a stylus that scans the surface at a fixed distance. An extremely fine conducting probe is held close to the sample. Electrons tunnel between the surface and the stylus, producing an electrical signal. The stylus is extremely sharp, the tip being formed by a single atom. It slowly scans across the surface at a distance of only an atom's diameter. The stylus is raised and lowered in order to keep the signal constant and maintain the distance. This enables it to follow even the smallest details of the surface it is scanning. Recording the vertical movement of the stylus makes it possible to study the structure of the surface atom by atom. A profile of the surface is created, and from that a computer-generated contour map of the surface is produced.
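The raise-and-lower feedback idea can be caricatured in a few lines. This is a toy sketch with made-up numbers (an exponential current-gap law and a simple proportional controller), not a model of a real instrument; the point is that the recorded tip height traces the surface profile:

```python
import math

# Toy constant-current feedback: tunneling current ~ exp(-kappa * gap).
surface = [0.0, 0.1, 0.3, 0.2, 0.5, 0.4, 0.1, 0.0]  # hypothetical heights (nm)
kappa = 10.0                      # current decay constant (1/nm)
d0 = 0.5                          # desired tip-surface gap (nm)
setpoint = math.exp(-kappa * d0)  # current corresponding to that gap
gain = 0.05                       # proportional gain on log(current)

tip = surface[0] + d0
trace = []
for z in surface:
    for _ in range(100):                            # let the servo settle
        current = math.exp(-kappa * (tip - z))
        tip += gain * math.log(current / setpoint)  # too close -> lift; too far -> lower
    trace.append(tip)

# Subtracting the constant gap recovers the surface profile.
profile = [round(t - d0, 3) for t in trace]
print(profile)  # → [0.0, 0.1, 0.3, 0.2, 0.5, 0.4, 0.1, 0.0]
```

Because the feedback holds the current (and hence the gap) constant, the tip's vertical motion is the contour map described above.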
The study of surfaces is an important part of physics, with particular applications in semiconductor physics and microelectronics. In chemistry, surface reactions also play an important part, for example in catalysis. The STM works best with conducting materials, but it is also possible to fix organic molecules on a surface and study their structures. For example, this technique has been used in the
study of DNA molecules.
Mechanics: Vectors and Projectiles
Vectors and Projectiles: Audio Guided Solution
A weather report shows that a tornado was sighted 12 km south and 23 km west of your town. The storm is reported to be moving directly towards your town at a speed of 82 km/hr.
a. What distance from your town was the tornado sighted?
b. Approximately how much time (in minutes and hours) will elapse before the violent storm arrives at your town?
Audio Guided Solution
a. 25.9 km
b. 0.32 hr or 19 min
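For readers who want to check the arithmetic, a short script reproduces both answers from the values given in the problem statement:

```python
import math

# Values from the problem statement
south, west = 12.0, 23.0   # displacement components (km)
speed = 82.0               # storm speed (km/hr)

distance = math.hypot(south, west)   # a. resultant distance, sqrt(12^2 + 23^2)
t_hr = distance / speed              # b. time to reach the town

print(round(distance, 1), round(t_hr, 2), round(t_hr * 60))  # → 25.9 0.32 19
```

The distance follows from the Pythagorean theorem applied to the two perpendicular displacement components, and the time is simply distance over speed.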
Habits of an Effective Problem Solver
- Read the problem carefully and develop a mental picture of the physical situation. If necessary, sketch a simple diagram of the physical situation to help you visualize it.
- Identify the known and unknown quantities in an organized manner. Equate given values to the symbols used to represent the corresponding quantity - e.g., vox = 12.4 m/s, voy = 0.0 m/s, dx = 32.7 m, dy = ???.
- Use physics formulas and conceptual reasoning to plot a strategy for solving for the unknown quantity.
- Identify the appropriate formula(s) to use.
- Perform substitutions and algebraic manipulations in order to solve for the unknown quantity.
Read About It!
Get more information on the topic of Vectors and Projectiles at The Physics Classroom Tutorial.
Return to Problem Set
Return to Overview
Malloc and free provide a simple memory allocation package. Malloc
returns a pointer to a new block of at least size bytes. The block
is suitably aligned for storage of any type of object. No two
active pointers from malloc will have the same value. The call
malloc(0) returns a valid pointer rather than null.
The argument to free is a pointer to a block previously allocated
by malloc; this space is made available for further allocation.
It is legal to free a null pointer; the effect is a no–op. The
contents of the space returned by malloc are undefined. Mallocz
behaves as malloc, except that if clr is non–zero, the memory
returned will be zeroed.
Mallocalign allocates a block of at least n bytes of memory respecting
alignment constraints. If align is non–zero, the returned pointer
is aligned to be equal to offset modulo align. If span is non–zero,
the n byte block allocated will not span a span–byte boundary.
Realloc changes the size of the block pointed to by ptr to size
bytes and returns a pointer to the (possibly moved) block. The
contents will be unchanged up to the lesser of the new and old
sizes. Realloc takes on special meanings when one or both arguments are zero:

realloc(0, size)
    means malloc(size); returns a pointer to the newly–allocated memory.
realloc(ptr, 0)
    means free(ptr); returns null.
Calloc allocates space for an array of nelem elements of size
elsize. The space is initialized to zeros. Free frees such a block.
When a block is allocated, sometimes there is some extra unused
space at the end. Msize grows the block to encompass this unused
space and returns the new number of bytes that may be used.
The memory allocator maintains two word–sized fields associated
with each block, the ``malloc tag'' and the ``realloc tag''. By
convention, the malloc tag is the PC that allocated the block,
and the realloc tag the PC that last reallocated the block. These
may be set or examined with setmalloctag, getmalloctag,
setrealloctag, and getrealloctag. When allocating blocks directly
with malloc and realloc, these tags will be set properly. If a
custom allocator wrapper is used, the allocator wrapper can set
the tags itself (usually by passing the result of getcallerpc(2)
to setmalloctag) to provide more useful information about the origin of the block.
Malloctopoolblock takes the address of a block returned by malloc
and returns the address of the corresponding block allocated by
the pool(2) routines.
About Table Views in OS X Applications
A table view displays data for a set of related records, with rows representing individual records and columns representing the attributes of those records. For example, in a table of employee records, each row represents one employee, and the columns represent such attributes as the first and last name, address, salary, and so on. Table views are versatile user-interface elements that are found in many Mac apps.
A table view can have a single column or multiple columns and allows scrolling both vertically and horizontally. It consists of rows, each of which has a corresponding cell that represents a field in a data collection.
In OS X version 10.6 and earlier, each individual cell within a table view was required to be a subclass of
NSCell. This approach caused limitations when designing complex custom cells, typically requiring the developer to write their own
NSCell subclass. Additionally, providing animation, such as progress views, was extremely difficult. Throughout this book these types of table views are referred to as cell-based table views.
In OS X version 10.7, table views have been redesigned and now support using views as individual cells. These are referred to as view-based table views. View-based table views allow you to design custom cells in the Interface Builder portion of Xcode 4.0. This allows easy design-time layout as well as making it easy to animate changes and customize drawing. As with cell-based table views, view-based table views support selection, column dragging, and other user-expected table view behavior. The only difference is that the developer is given much more flexibility in design and implementation.
Creating view-based and cell-based table views and adding columns use the same techniques within Interface Builder. The differences occur in your application code when providing the individual cells, populating the content of the table view, and customizing the table view appearance. In addition, the Cocoa bindings techniques are entirely different between the two implementations.
Throughout the book, view-based and cell-based table view specific functions are called out in the chapter titles. Some chapters, for example row selection, apply equally when using both techniques. Chapters that are applicable to both technologies will be called out as such.
At a Glance
Understanding the structure of a table view, and knowing how to build one, lets you add a compelling user interface to Mac apps that work with collections of information.
Creating Table Views Using Interface Builder
Interface Builder is the most common method of creating a table view. You create a table view in Interface Builder by dragging the table view prototype from the Library palette and positioning it within a window or superview. You then add table columns to the table view and arrange them as desired, possibly providing visible headers with names. Many aspects of table columns can be set directly in Interface Builder allowing you to avoid the need to write additional code.
Once the table is created, if you are using view-based table views, you drag instances of
NSTableCellView into the table columns and modify them to represent the interface required for the column.
Providing the Data to the Table View
You must provide data to the table view. This can be done using one of two methods: programmatically by implementing a datasource class or by establishing a relationship between the table view and a controller using Cocoa bindings.
When providing data programmatically you create a class that conforms to the
NSTableViewDataSource protocol and implement the method that provides the row and column data as requested. Content can simply be displayed and, when appropriate, can also be edited.
You can also provide content to a table view using Cocoa bindings. Bindings allow you to establish a relationship between the table view and a controller class instance, which manages the interaction between the data objects and the table view. The bindings approach allows you to bypass the datasource class for providing the data, as well as supporting editing of the data.
Because the techniques for creating and populating view-based cells are quite different from those for populating and editing cell-based table views, these chapters have been separated by topic.
Responding to Selections
When the user selects a row by clicking on it, the delegate of the table view is notified that the selection is changing. The delegate then has the opportunity to allow or deny the selection of the row.
The selection functionality is identical with both view-based and cell-based table views.
How to Use This Document
This document describes the capabilities of the table view. The view-based table views are dissected to highlight their individual view components, and additional chapters describe how to create table views in general, view-based table views, and how to populate both styles programmatically and using Cocoa bindings.
Additional table components are also discussed including sorting and row selection. Cell-based table views are also discussed in this document.
To develop successfully with the
NSTableView class, you need a strong grasp of the model-view-controller design pattern. Refer to “The Model-View-Controller Design Pattern” in Cocoa Fundamentals Guide.
NSTableView instances can be used with Cocoa bindings, both in view-based and cell-based modes. However, it is strongly suggested that you first learn and understand the programmatic interface of the table view before beginning to use the more advanced Cocoa bindings. To understand bindings, start with the Core Competency article on Cocoa bindings. For a complete understanding, refer to Cocoa Bindings Programming Topics and “Synchronize Your OS X Views and Data with Cocoa Bindings” in Xcode User Guide.
It is recommended that you read the OS X Human Interface Guidelines that provides information on the recommended uses of table views.
You will find the following sample-code projects to be instructive when designing your own table view implementations:
With and Without Bindings project suite
Cocoa Tips and Tricks project. Amongst other topics, it includes projects that demonstrate how to use hyperlinks within a table view and how to implement variable row heights within table view rows.
TableViewPlayground project. This project uses view-based table views to demonstrate a wide range of capabilities including: populating cells programmatically and with Cocoa bindings, creating a simple
NSTableCellView class based table, and using custom
NSTableRowView instances supporting the pasteboard, which allows single-cell dragging and multi-image dragging. It also demonstrates the new view-based capabilities that have been added to the
© 2011 Apple Inc. All Rights Reserved. (Last updated: 2011-07-06)
A Sydney marine ecologist researching the immense potential of seagrasses to store carbon for thousands of years, and therefore highlighting its conservation value.
Peter Macreadie's love of fishing and snorkelling in seagrass environments makes his work as a marine ecologist a pleasure. But the research he is doing into the carbon-storing potential of seagrasses may soon boost everybody's appreciation of the sometimes neglected marine plantlife.
Seagrass can store carbon for thousands of years with little leakage, according to Peter's research. This compares to the just decades-long capture and storage provided by the current carbon-sink favourites, land-based forests.
The news has the seagrass world abuzz and Peter hopes the rest of the population will feel the same before too long.
"The research that we've got from seagrass systems indicates that they're really good at capturing and storing carbon. Unlike terrestrial systems, which often only capture and store carbon for a decade or so, the seagrass systems can store the carbon for thousands of years. And Australia has more seagrass than anywhere else in the world."
Currently these grasses - together with saltmarshes and mangroves - are thought to be responsible for up to 70 per cent of all submarine carbon capture, despite only covering 1 per cent of the sea floor. Australia has an exceptionally large seagrass habitat, being 25 per cent larger than the size of Tasmania.
This resource not only captures carbon - it is also vital for the survival of fisheries in Australia and around the world with half the world's fisheries relying on sea grass meadows. Unfortunately it is rapidly being depleted by coastal development and warming oceans and nearly a third has been destroyed already.
By extracting 4-metre deep cores from the seabed, Peter and his team are looking thousands of years back in time to see how things have changed. They use radio-dating and carbon labelling to assess the age of the sediments and then look at what sort of carbon signatures are there. Is the carbon residue derived from seagrasses, microalgae, or terrestrial plants? What does the answer suggest about changes in that landscape over time?
"Our research shows that since the time of European settlement in Botany Bay there's been a sudden change in the landscape. We see that we used to have this really important carbon sink for 6,000 years up until European settlement and then suddenly you don't see the signature of this seagrass carbon source and it shifts to a microalgal carbon source."
Peter's research is now directed towards making what he calls the "coalmine canaries of the coastal ecosystems", more resilient. Preserving genetic diversity is already high on this list.
"Australia needs to recognise these are very, very powerful carbon sinks and they are being destroyed."
Peter is a Chancellor's Postdoctoral Research Fellow at the University of Technology, Sydney and entered his research into the Eureka Prize for Outstanding Young Researcher.
How much control do we have over the amount of seagrass? Can we, for example, plant more to increase the level of sequestration, as we can do with trees? And does it sequester atmospheric carbon?
Yes, seagrass can be planted, although it's considered to be much more challenging than planting trees. Research is underway around the world to find ways to improve the success and feasibility of planting seagrass. The other way to control the amount of seagrass is by managing existing meadows to increase the likelihood that they'll expand. Atmospheric carbon readily dissolves in seawater, and then is taken up by seagrasses during photosynthesis.
I am doing a science assignment biography on you- When and where were you born? What did you study? What is your marital status?
I was born in Melbourne in 1981. I spent most of my life growing up in Melbourne's eastern suburbs. After high school (Donvale Christian College), I did a Bachelor of Science (inc. Honours) and PhD at the University of Melbourne. I then took up a postdoctoral fellowship at the University of Technology, Sydney (UTS), which is the position I currently hold. I married my high school sweetheart.
What role do dugongs play in the propagation of sea grass beds? Shouldn't dugongs be protected to maintain healthy sea grass beds and thus enhance carbon capture and storage?
Great question. I'm unaware of any evidence that dugongs contribute towards propagation of seagrass beds, but it is possible that their feeding could facilitate the spread of seeds and whole plants.
Some argue that dugongs are detrimental to seagrasses because they remove and fragment large amounts of seagrass (they can eat up to 50 kg a day!), but I think it's fair to say that our love of dugongs indirectly benefits the seagrasses because you have to protect the seagrasses if you want to protect dugongs.
In Australia, dugongs are protected under the Environmental Protection and Biodiversity Act 1999, but we still have a long way to go towards minimising human threats (e.g. collision with boats, entanglement in fishing nets, degradation of seagrass meadows).
Overall, I think the message should be: ‘Look after the seagrasses (and dugongs!), and they'll look after you'
How much sea grass (roughly) is there in the world? Is it more than the Amazon (in helping w/ carbon dioxide)?
Another great question. Seagrasses occur on every continental margin of our planet (except Antarctica), and recent estimates put the total global seagrass area between 177,000 and 600,000 km².
Australia has more seagrass area than anywhere else in the world: around 95,000 km². To put this in perspective, it's an area the size of the state of Victoria!
Although the seagrasses represent a much smaller area than terrestrial forests, their total contribution to long-term global carbon capture and storage is comparable. This is because they are so effective at burying carbon. Seagrasses bury carbon at a rate 30 times greater than tropical rainforests!
If you want the hard core numbers, here they are: global carbon burial by seagrasses has recently been estimated at 48–112 Tg C yr⁻¹, whereas the world's tropical rainforests (which include the Amazon Rainforest) are estimated to bury 78.5 Tg C yr⁻¹.
You can use Open SQL to store character strings and binary
data as strings in database columns. There are two kinds of strings in the database, short strings
and long strings, which differ in the form of how the data is stored in the database. Whether a string column is a short or a long string, is specified in the ABAP Dictionary.
Short strings are only available for character strings (DDIC type SSTRING).
They are normally implemented as VARCHAR fields in the database and stored in the data record. Short
strings must always have a length restriction in the ABAP Dictionary, which cannot exceed 255 characters. Trailing spaces are ignored by the database.
In Open SQL statements, you can use short strings wherever you can use C fields.
Long strings (also: "LOB columns") are available for character strings (DDIC type
STRING) and binary data (DDIC type RAWSTRING).
They are normally implemented as LOB in the database. The system only stores an LOB locator in the data
record while the actual string data is stored outside the data record. You can define a length restriction
for long strings in the ABAP Dictionary. For columns of the type STRING, trailing spaces are retained.
Long strings are subject to the following restrictions:
It is possible for long strings and mandatory for short strings to define a length restriction for them in the ABAP Dictionary. If this restriction is violated when data is written to the database, the system triggers an exception of the class CX_SY_OPEN_SQL_DB. Any truncation of the string when data is read from the database into a target field is ignored. You can get the value of the length restriction using the function DBMAXLEN( ).
Since the data of long strings is stored outside the data record, access to long strings is slower than to other data types. This applies particularly to set operations. This note is not applicable if you use short strings.
The very basic reason LEP stopped going to higher energies (it reached over 200 GeV center of mass at its last stage, LEP II) and the tunnel was reused for the LHC is synchrotron radiation.
Note that, at fixed beam energy and bending radius, the radiated power is proportional to 1/m^4.
It is not possible to feed a circular beam of electrons the energy needed to raise it to higher energies at the radius of LEP; it is a losing game, as the energy would go into feeding synchrotron radiation. The reason the same radius can be used for much higher energy protons is the ratio of the masses of the electron and the proton.
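To get a feel for what that mass ratio buys, a one-liner suffices (the mass ratio below is the rounded CODATA value):

```python
# At the same beam energy and bending radius, synchrotron power scales
# as 1/m^4, so the electron-to-proton radiated-power ratio is (m_p/m_e)^4.
mass_ratio = 1836.15  # proton-to-electron mass ratio
print(f"{mass_ratio ** 4:.2e}")  # → 1.14e+13
```

Protons in the same tunnel radiate roughly thirteen orders of magnitude less power than electrons at the same energy, which is why the LEP tunnel could host the LHC.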
Synchrotron radiation is not present in linear colliders, and that is why the next electron-positron accelerator will be a linear collider, the ILC.
What would happen if the scientists would use leptons instead of hadrons?
It has happened at LEP and LEP II with electrons on positrons. If the scattering is not elastic, a lot of hadrons appear, as well as leptons and Z bosons. The data from LEP confirmed the calculations of the standard model for elementary particles to great accuracy.
Especially: What would happen if they would collide electrons?
From the previous paragraph, the standard model predicts what would happen if electrons were scattered on electrons: all the variants of the Feynman diagram possibilities would also appear.
Jena is one of
the most widely used Java APIs for RDF and OWL, providing services
for model representation, parsing, database persistence, querying
and some visualization tools. Protégé-OWL has always had a close relationship
with Jena. The Jena ARP parser is still used in the Protégé-OWL
parser, and various other services such as species validation and datatype
handling have been reused from Jena. It was furthermore possible to
convert a Protégé OWLModel into a Jena OntModel, to get a static snapshot
of the model at run time. This model, however had to be rebuilt after
each change in the model.
As of August 2005, Protégé-OWL is now much more closely integrated with
Jena. This integration allows programmers to use certain Jena functions
at run-time, without having to go through the slow rebuild process each time.
The architecture of this integration is illustrated below.
The key to this integration is the fact that both systems operate
on a low-level "triple" representation of the model. Protégé has its
native frame store mechanism, which has been wrapped in Protégé-OWL
with the TripleStore classes. In the Jena world, the corresponding
interfaces are called Graph and Model. The Protégé TripleStore has
been wrapped into a Jena Graph, so that any read access from the Jena
API in fact operates on the Protégé triples. In order to modify these
triples, the conventional Protégé-OWL API must be used. However, this
mechanism allows the use of Jena methods for querying, while the ontology
is edited inside Protégé.
The OWLModel API has a new method
to access a Jena
view of the Protégé model at run-time. This can be used by Protégé
plug-in developers. Many other Jena services can be wrapped into Protégé
plug-ins this way, by providing them a pointer to the Model created by Protégé.
Every January, I journey out to the American Astronomical Society for its annual Winter Meeting. And, every time, I’m amazed at a new bit of information about the universe. Today’s revelation (and it’s only Day 1 of the meeting) is that the Milky Way Galaxy is populated with many planets — in fact, one team of scientists estimates that at least one out of every six stars in the galaxy has an Earth-sized planet.
That, my friends, is pretty profound.
If you postulate that the Milky Way has about a hundred billion stars, that means there are at least 17 BILLION Earth-sized planets in our galaxy. Again, that’s pretty profound. Now, the next question everybody will ask is, “How many of those are capable of supporting life?” And to answer that question requires a lot more observation. First, to support life, those planets need to be orbiting close enough to their stars that liquid water will be available to sustain life on them. Then, scientists need to look at the other conditions on the planets, and look for “bio signatures” in the planets’ atmospheres that indicate life could be there. So, even though there could be the potential for 17 billion Earth-sized worlds out there, that doesn’t say they are Earth-like… or that they have life. But, there are 17 BILLION Earth-sized worlds out there. Up until the last decade of the 20th century, we didn’t know of any.
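The arithmetic behind that headline number is simple enough to check (inputs are the rounded figures quoted above, not precise survey values):

```python
# Back-of-the-envelope version of the estimate in the post.
stars = 100e9        # ~100 billion stars in the Milky Way (rounded)
fraction = 1 / 6     # at least 1 in 6 hosts an Earth-sized planet

earth_sized = stars * fraction
print(f"at least {earth_sized / 1e9:.0f} billion Earth-sized planets")
```

One hundred billion divided by six rounds to the 17 billion quoted above.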
Thanks to the Kepler Mission, which has been cranking out planetary candidate discoveries for some time now, the hunt for planets is now an understood and successful ongoing project.
Want to read more details about how the scientists came up with their numbers? Check it out here. And, stay tuned for more AAS news!
To fill the Schrödinger equation, \(\hat{H}\psi = E\psi\), with a bit of life, we need to add the specifics for the system of interest, here the hydrogen-like atom. A hydrogen-like atom is an atom consisting of a nucleus and just one electron; the nucleus can be bigger than just a single proton, though. H atoms, He+ ions, Li2+ ions etc. are hydrogen-like atoms in this context. We'll see later how we can use the exact solution for the hydrogen-like atom as an approximation for multi-electron atoms.
The potential V between the two charges is best described by a Coulomb term, \(V(r) = -\frac{Ze^2}{4\pi\varepsilon_0 r}\), where Ze is the charge of the nucleus (Z=1 being the hydrogen case, Z=2 helium, etc.), the other e is the charge of the single electron, and ε0 is the permittivity of vacuum (no relative permittivity is needed as the space inside the atom is "empty").
With the system consisting of two masses, we can define the reduced mass, i.e. the equivalent mass a point located at the centre of gravity of the system would have: \(\mu = \frac{mM}{m+M}\), where M is the mass of the nucleus and m the mass of the electron.
Thus, the hydrogen atom's Hamiltonian is \(\hat{H} = -\frac{\hbar^2}{2\mu}\nabla^2 - \frac{Ze^2}{4\pi\varepsilon_0 r}\).
The Schrödinger equation of the hydrogen atom in polar coordinates is:
\[-\frac{\hbar^2}{2\mu}\left[\frac{1}{r^2}\frac{\partial}{\partial r}\!\left(r^2\frac{\partial\psi}{\partial r}\right)+\frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\!\left(\sin\theta\frac{\partial\psi}{\partial\theta}\right)+\frac{1}{r^2\sin^2\theta}\frac{\partial^2\psi}{\partial\varphi^2}\right]-\frac{Ze^2}{4\pi\varepsilon_0 r}\psi = E\psi\]
Both LHS and RHS contain a term linear in ψ, so combine:
\[-\frac{\hbar^2}{2\mu}\left[\frac{1}{r^2}\frac{\partial}{\partial r}\!\left(r^2\frac{\partial\psi}{\partial r}\right)+\frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\!\left(\sin\theta\frac{\partial\psi}{\partial\theta}\right)+\frac{1}{r^2\sin^2\theta}\frac{\partial^2\psi}{\partial\varphi^2}\right]-\left(E+\frac{Ze^2}{4\pi\varepsilon_0 r}\right)\psi = 0\]
Using the Separation of Variables idea, we assume a product solution of a radial and an angular function: \(\psi(r,\theta,\varphi) = R(r)\,Y(\theta,\varphi)\).
Since Y does not depend on r, we can move it in front of the radial derivative,
and, similarly, R does not depend on the angular variables. Thus replace ψ and the differentials:
\[-\frac{\hbar^2}{2\mu}\left[\frac{Y}{r^2}\frac{d}{dr}\!\left(r^2\frac{dR}{dr}\right)+\frac{R}{r^2\sin\theta}\frac{\partial}{\partial\theta}\!\left(\sin\theta\frac{\partial Y}{\partial\theta}\right)+\frac{R}{r^2\sin^2\theta}\frac{\partial^2 Y}{\partial\varphi^2}\right]-\left(E+\frac{Ze^2}{4\pi\varepsilon_0 r}\right)RY = 0\]
Multiply by \(-\frac{2\mu r^2}{\hbar^2}\) and divide by RY to separate the radial and angular terms:
\[\frac{1}{R}\frac{d}{dr}\!\left(r^2\frac{dR}{dr}\right)+\frac{1}{Y\sin\theta}\frac{\partial}{\partial\theta}\!\left(\sin\theta\frac{\partial Y}{\partial\theta}\right)+\frac{1}{Y\sin^2\theta}\frac{\partial^2 Y}{\partial\varphi^2}+\frac{2\mu r^2}{\hbar^2}\left(E+\frac{Ze^2}{4\pi\varepsilon_0 r}\right) = 0\]
The first and fourth terms depend on r only, the middle terms depend on the angles only. They can only balance each other for all points in space if the radial and angular terms are the same constant but with opposite sign.
Therefore, we can separate into a radial equation:
\[\frac{d}{dr}\!\left(r^2\frac{dR}{dr}\right)+\frac{2\mu r^2}{\hbar^2}\left(E+\frac{Ze^2}{4\pi\varepsilon_0 r}\right)R = A\,R\]
...and an angular equation:
\[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\!\left(\sin\theta\frac{\partial Y}{\partial\theta}\right)+\frac{1}{\sin^2\theta}\frac{\partial^2 Y}{\partial\varphi^2} = -A\,Y,\]
where A is the separation constant.
The angular part still contains terms depending on both θ and φ. Another separation of variables is needed.
We'll replace Y by a product of single-variable functions: \(Y(\theta,\varphi) = \Theta(\theta)\,\Phi(\varphi)\).
Replacing Y and the differentials, we have
\[\frac{\Phi}{\sin\theta}\frac{d}{d\theta}\!\left(\sin\theta\frac{d\Theta}{d\theta}\right)+\frac{\Theta}{\sin^2\theta}\frac{d^2\Phi}{d\varphi^2} = -A\,\Theta\Phi.\]
Isolate variables in separate terms:
\[\frac{\sin\theta}{\Theta}\frac{d}{d\theta}\!\left(\sin\theta\frac{d\Theta}{d\theta}\right)+A\sin^2\theta = -\frac{1}{\Phi}\frac{d^2\Phi}{d\varphi^2}.\]
With B as the separation constant, we have a polar part:
\[\frac{\sin\theta}{\Theta}\frac{d}{d\theta}\!\left(\sin\theta\frac{d\Theta}{d\theta}\right)+A\sin^2\theta = B\]
...and an azimuth part:
\[-\frac{1}{\Phi}\frac{d^2\Phi}{d\varphi^2} = B.\]
The azimuth equation, \(\frac{d^2\Phi}{d\varphi^2} = -B\,\Phi\),
is a 2nd order ODE with constant coefficients solved by \(\Phi(\varphi) = c_1 e^{im\varphi}+c_2 e^{-im\varphi}\), where \(m^2 = B\),
The angle φ is the azimuth, i.e. if you think of the atom as a globe, then φ is the longitude of the position of the electron. As long as there is no external reason to do otherwise, we can choose the "Greenwich meridian" of the atom in a mathematically convenient way by setting c2=0.
Note that m must be an integer number - otherwise the value of the azimuth wave function would be different for φ=0o and φ=360o. In quantum terminology, m is called a quantum number as it restricts the possible values of the wave function (and hence of observables) to integer multiples (quanta) of a base unit.
Thus, the azimuth part of the wave function is \(\Phi_m(\varphi) = c_1\,e^{im\varphi}\).
With \(B = m^2\), the polar equation is:
\[\frac{1}{\sin\theta}\frac{d}{d\theta}\!\left(\sin\theta\frac{d\Theta}{d\theta}\right)+\left(A-\frac{m^2}{\sin^2\theta}\right)\Theta = 0.\]
We can express the wave function as depending on cos(θ) rather than on θ itself.
To figure this out, think of the function y(x)=x4. If you plot the function logarithmically, you are effectively plotting a new function z(log x) = 4 (log x) on a linear scale, where (log x) is the independent variable. Try it yourself!
With the substitution \(x = \cos\theta\), we have \(\frac{dx}{d\theta} = -\sin\theta\) and hence the differential \(\frac{d}{d\theta} = -\sin\theta\,\frac{d}{dx}\),
which leaves us with
\[\frac{d}{dx}\!\left[(1-x^2)\frac{d\Theta}{dx}\right]+\left(A-\frac{m^2}{1-x^2}\right)\Theta = 0.\]
Writing \(P(x)\) for \(\Theta(\theta(x))\), we can further substitute:
\[\frac{d}{dx}\!\left[(1-x^2)\frac{dP}{dx}\right]+\left(A-\frac{m^2}{1-x^2}\right)P = 0.\]
Apply the product rule to the first term:
\[(1-x^2)\frac{d^2P}{dx^2}-2x\frac{dP}{dx}+\left(A-\frac{m^2}{1-x^2}\right)P = 0.\]
Unfortunately, the coefficients in this ODE are not constant but depend on x, so the recipe for ODE with constant coefficients doesn't really help here. However, it is a known type of differential equation (called associated Legendre-type DE), for which a solution is known in the maths literature. The solutions are known as associated Legendre polynomials, and they contain a power series with recursive coefficients.
in the standard form \(P(x) = (1-x^2)^{|m|/2}\sum_n a_n x^n\), with the coefficients
\[a_{n+2} = \frac{(n+|m|)(n+|m|+1)-A}{(n+1)(n+2)}\,a_n.\]
This means that there are two power series (for the even and odd terms, respectively) and that the coefficients of the higher terms can be calculated recursively if the first coefficient of each series, a0 and a1, is known. In the recursion formula, A and m are the constants we have from the previous parts of the solution strategy, while n is the index variable of the two power series.
A series solution is only helpful if the series converges so that it can be truncated as soon as the solution is sufficiently accurate. For the recursion formula above, the series converges if A=l(l+1), where l is an integer number. The root coefficients of the two series, a0 and a1, are chosen depending on the particular value of l to ensure only the convergent series survives. The first few Legendre functions are:
\[P_0(x)=1,\quad P_1(x)=x,\quad P_2(x)=\tfrac{1}{2}(3x^2-1),\quad P_3(x)=\tfrac{1}{2}(5x^3-3x).\]
The value of l limits the choices of m; m must have a value between -l and l. As far as the polar part is concerned, the ±m solutions are equivalent, but the sign of m makes a difference to the azimuth part as seen above.
Remember to resubstitute P and x when using these polar wave functions.
In the radial equation,
$$\frac{\mathrm{d}}{\mathrm{d}r}\!\left(r^2\,\frac{\mathrm{d}R}{\mathrm{d}r}\right)+\left[\frac{2m_e}{\hbar^2}\left(E+\frac{Ze^2}{4\pi\varepsilon_0 r}\right)r^2-A\right]R=0,$$
apply the product rule to the first term:
$$r^2\,\frac{\mathrm{d}^2R}{\mathrm{d}r^2}+2r\,\frac{\mathrm{d}R}{\mathrm{d}r}+\left[\frac{2m_e}{\hbar^2}\left(E+\frac{Ze^2}{4\pi\varepsilon_0 r}\right)r^2-A\right]R=0,$$
and divide by r²:
$$\frac{\mathrm{d}^2R}{\mathrm{d}r^2}+\frac{2}{r}\,\frac{\mathrm{d}R}{\mathrm{d}r}+\left[\frac{2m_e}{\hbar^2}\left(E+\frac{Ze^2}{4\pi\varepsilon_0 r}\right)-\frac{A}{r^2}\right]R=0.$$
We can't solve this straight away, but for very large r the terms containing 1/r or 1/r² (the first-derivative term, the Coulomb term and the A/r² term) are forced to zero.
That leaves us with an asymptotic equation:
$$\frac{\mathrm{d}^2R}{\mathrm{d}r^2}+\frac{2m_eE}{\hbar^2}\,R=0,$$
which is another ODE with constant coefficients. Solution:
$$R(r)=c_3\,\mathrm{e}^{+\mathrm{i}\sqrt{2m_eE}\,r/\hbar}+c_4\,\mathrm{e}^{-\mathrm{i}\sqrt{2m_eE}\,r/\hbar}.$$
It makes sense to use as the zero point of potential energy the energy of a free electron, i.e. in this asymptotic case, E → 0 for an electron far away from the nucleus, as it is practically free. Since the presence of the positive charge in the nucleus stabilises the atom, we must look for solutions where E becomes negative as the electron comes closer to the nucleus. These two conditions are met if we choose c4=0 and use the fact that E<0 to get rid of the imaginary unit.
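That choice can be sanity-checked numerically: for E < 0 the surviving real exponential satisfies the asymptotic equation R'' + (2m_e E/ħ²)R = 0. A sketch in reduced units (the values ħ = m_e = 1 and E = −1/2 are arbitrary test assumptions, not from the text):

```python
import math

hbar = m_e = 1.0      # reduced units, assumed for the test only
E = -0.5              # any negative energy will do
kappa = math.sqrt(-2 * m_e * E) / hbar

def R(r):
    # surviving asymptotic solution, with c3 = 1
    return math.exp(-kappa * r)

# Finite-difference second derivative vs. the ODE R'' = -(2 m_e E / hbar^2) R
r, h = 3.0, 1e-4
second = (R(r + h) - 2 * R(r) + R(r - h)) / h ** 2
residual = second + (2 * m_e * E / hbar ** 2) * R(r)
print(abs(residual) < 1e-6)  # True: the ODE is satisfied
```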
The asymptotic solution is then
$$R_\infty(r)=c_3\,\mathrm{e}^{-\sqrt{-2m_eE}\,r/\hbar}.$$
The detail nearer the nucleus is expanded in a power series:
$$R(r)=R_\infty(r)\sum_{q}b_q\,r^{q}.$$
This results in a series of powers of r whose coefficients must all be zero to match the RHS of the differential equation. From that, a recursion formula is derived for the bq, and the requirement for the series to converge produces another quantum number, n.
This results in the radial solution
where the coefficient b0 contains the l-dependence.
At the same time, the solution of the radial part also fixes the possible energy levels by linking them to the quantum number n. The energy levels of the hydrogen-like atom are given by
$$E_n=-\frac{m_e\,Z^2e^4}{8\varepsilon_0^2h^2n^2}.$$
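Plugging CODATA constants into this expression reproduces the familiar −13.6 eV ground state of hydrogen (Z = 1); a quick check:

```python
m_e  = 9.1093837015e-31   # electron mass, kg
e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
h    = 6.62607015e-34     # Planck constant, J s

def E_n(n, Z=1):
    """Energy of level n of a hydrogen-like atom, in joules."""
    return -(m_e * Z**2 * e**4) / (8 * eps0**2 * h**2 * n**2)

eV = 1.602176634e-19
for n in (1, 2, 3):
    print(n, round(E_n(n) / eV, 3))   # -13.606, -3.401, -1.512
```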
The full solution of the Schrödinger equation of the hydrogen-like atom is, according to the separation approach taken:
$$\psi_{n,l,m}(r,\theta,\varphi)=N\,R_{n,l}(r)\,P_l^{|m|}(\cos\theta)\,\mathrm{e}^{\mathrm{i}m\varphi},$$
where N is obtained by normalisation and includes the coefficients of each partial solution.
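As an illustration of how N is fixed, the ground-state radial factor R ∝ e^(−r/a₀) can be normalised numerically; with a₀ = 1 the condition ∫₀^∞ |N R|² r² dr = 1 gives N = 2/a₀^(3/2). The numbers below are a worked example, not taken from the text above:

```python
import math

a0 = 1.0                      # Bohr radius in its own units (assumed)
N  = 2.0 / a0 ** 1.5          # claimed normalisation constant

def integrand(r):
    # |N exp(-r/a0)|^2 * r^2  -- the radial probability density
    return (N * math.exp(-r / a0)) ** 2 * r ** 2

# Simple midpoint integration over [0, 50 a0]; the tail beyond is negligible.
h = 1e-3
total = sum(integrand((i + 0.5) * h) for i in range(int(50 / h))) * h
print(round(total, 6))  # 1.0 -> the radial function is normalised
```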
In solving the Schrödinger equation of the hydrogen atom, we have encountered three quantum numbers. Two of them, m and l, arise from the separation constants of the R/Y and θ/φ separations. The possible values of the separation constants are restricted to integer values by boundary conditions (the need for the azimuth wave function to return to its value after a full 360° turn of φ, and the need for the power series in the Legendre polynomial to converge to produce a physically sensible solution). The third quantum number, n, arises, again, from the need to have a convergent series representing the non-asymptotic part of the radial function.
The quantum numbers are not independent; the choice of n limits the choice of l, which in turn limits the choice of m. A fourth quantum number, s, does not follow directly from solving the Schrödinger equation but is to do with spin. The possible combinations of quantum numbers are given in the table.
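The table's combinations can be generated mechanically from the constraints just stated (l = 0 … n−1, m = −l … l, s = ±½); the count per shell comes out as 2n²:

```python
def states(n):
    """All (n, l, m, s) combinations allowed for principal quantum number n."""
    return [(n, l, m, s)
            for l in range(n)             # the choice of n limits l
            for m in range(-l, l + 1)     # the choice of l limits m
            for s in (-0.5, +0.5)]        # spin, not from the Schrödinger eq.

for n in (1, 2, 3):
    print(n, len(states(n)))   # 2, 8, 18 states, i.e. 2*n**2 per shell
```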
Note that the energy of a state (i.e. of a wave function) depends only on n, not on the other quantum numbers. This degeneracy holds strictly only for the hydrogen-like atom; in approximate solutions for many-electron atoms, the energy eigenvalue of a state depends on the other quantum numbers as well.
...astronauts, who expect to fly on several space missions during their time at NASA, there is a third category of individuals who have gone into space on the shuttle. These individuals are designated payload specialists. The specialists are required to carry out experiments or payload activities with which they are particularly familiar. Although they are known to the general public as...
A third category of individuals who have gone into space are called variously payload specialists or guest cosmonauts. These individuals include scientists and engineers who accompany their experiments into orbit; individuals selected to go into space for political reasons, such as members of the U.S. Congress or persons from countries allied with the Soviet Union or the United States; and a...
[quote]Setting aside the issue of mass extinctions caused by external sources: volcanism, asteroids/comets etc., most species are kept near optimal population levels by disease and predation. When a species is dropped into a small ecological niche like the reindeer herd on St. Matthews Island, it may be temporarily freed from those natural checks and balances until it has consumed virtually all available food sources. And that's the bad side of being freed from the bonds of nature; as when the food runs out, and animals (and people in some examples) are weakened by starvation and disease epidemics, so that even after a population crash, the survivors may be too weak, and have too little genetic variety to rebuild the population....some of the idiot survivalists who fantasize about surviving the end of the world out in the woods ought to take note!
From my pov I do not believe technological progress is either unlimited or even linear. The modern global civilization we consider natural today has only been around for less than two centuries, and has already consumed more than half of the oil in the ground which took millions of years for nature to sequester in place. We are running out of many other natural resources also, but oil - specifically cheap, easy-to-extract oil has been the lifeblood of modernity, and the real source of the food supply that is straining today to try to feed 7 billion people. So, you could say that we will once again be in harmony with the natural order of things....but that's providing that we, or our children and grandchildren actually survive the process back to real sustainable living.[/quote]
Great post. I agree that human knowledge and technology reach plateaus, or dead ends. The human population is on the same general curve as other mammal population crash curves, i.e. the half bell curve. Afterward, it would softly undulate around long term sustainable, but in humans' case, another thing comes up: http://www.independent.co.uk/environmen ... 06484.html
It is not so much "may" as will if HGHGs are not cut 90% by 2020. It is unfortunate that the population crashes around or just before mid-century, country by country, region by region, with migrations, wars, diseases, and cannibalism along with killing everything in sight to eat, plant and animal. It would be bad enough if humans just caused their own extinction, but taking down most other species, too, is unforgivable. Anthropocene Epoch Thermal Maximum and Extinction Level 'Event' (on a geologic scale), are words and processes not understood by most. Can an underground fortress with the sum total of human knowledge and genomes/seeds, etc., a nuclear breeder reactor over a large aquifer, with a human population kept at a level well above genetic erosion, last 200,000 years???? Then could they still be human enough to come to the surface, this time living sustainably as they replant the world?(and fighting cockroaches and ants which have taken over). | <urn:uuid:2bde11a4-69eb-41dc-91b7-9de3d714b8e2> | 2.90625 | 639 | Comment Section | Science & Tech. | 53.473153 |
An astronaut or cosmonaut is a person trained by a human spaceflight program to command, pilot, or serve as a crew member of a spacecraft. While generally reserved for professional space travelers, the terms are sometimes applied to anyone who travels into space, including scientists, politicians, journalists, and tourists.
Until 2002, astronauts were sponsored and trained exclusively by governments, either by the military, or by civilian space agencies. With the sub-orbital flight of the privately funded SpaceShipOne in 2004, a new category of astronaut was created: the commercial astronaut.
The criteria for what constitutes human spaceflight vary. The Fédération Aéronautique Internationale (FAI) Sporting Code for astronautics recognizes only flights that exceed an altitude of 100 kilometers (62 mi). In the United States, professional, military, and commercial astronauts who travel above an altitude of 50 miles (80 km) are awarded astronaut wings.
As of June 20, 2011, a total of 654 people from 38 countries have reached 100 km (62 mi) or more in altitude, of which 520 reached low Earth orbit or beyond. Of these, 24 people have traveled beyond Low Earth orbit, to either lunar or trans-lunar orbit or to the surface of the moon; three of the 24 did so twice: Jim Lovell, John Young and Eugene Cernan. The three astronauts who have not reached low Earth orbit are spaceplane pilots Joe Walker, Mike Melvill, and Brian Binnie.
Under the U.S. definition, as of June 20, 2011, 529 people qualify as having reached space, above 50 miles (80 km) altitude. Of eight X-15 pilots who exceeded 50 miles (80 km) in altitude, only one exceeded 100 kilometers (about 62 miles). Space travelers have spent over 30,400 man-days (83 man-years) in space, including over 100 astronaut-days of spacewalks. As of 2008, the man with the longest cumulative time in space is Sergei K. Krikalev, who has spent 803 days, 9 hours and 39 minutes, or 2.2 years, in space. Peggy A. Whitson holds the record for the most time in space by a woman, 377 days.
In English-speaking nations, a professional space traveler is called an astronaut. The term derives from the Greek words ástron (ἄστρον), meaning "star", and nautes (ναύτης), meaning "sailor". The first known use of the term "astronaut" in the modern sense was by Neil R. Jones in his short story "The Death's Head Meteor" in 1930. The word itself had been known earlier. For example, in Percy Greg's 1880 book Across the Zodiac, "astronaut" referred to a spacecraft. In Les Navigateurs de l'Infini (1925) of J.-H. Rosny aîné, the word astronautique (astronautic) was used. The word may have been inspired by "aeronaut", an older term for an air traveler first applied (in 1784) to balloonists. An early use in a non-fiction publication is Eric Frank Russell's poem "The Astronaut" in the November 1934 Bulletin of the British Interplanetary Society.
The first known formal use of the term astronautics in the scientific community was the establishment of the annual International Astronautical Congress in 1950 and the subsequent founding of the International Astronautical Federation the following year.
NASA applies the term astronaut to any crew member aboard NASA spacecraft bound for Earth orbit or beyond. NASA also uses the term as a title for those selected to join its Astronaut Corps. The European Space Agency similarly uses the term astronaut for members of its Astronaut Corps.
By convention, an astronaut employed by the Russian Federal Space Agency (or its Soviet predecessor) is called a cosmonaut in English texts. The word is an anglicisation of the Russian word kosmonavt (Russian: космонавт Russian pronunciation: [kəsmɐˈnaft]), one who works in space outside the Earth's atmosphere, a space traveler, which derives from the Greek words kosmos (κόσμος), meaning "universe", and nautes (ναύτης), meaning "sailor".
The Soviet Air Force pilot Yuri Gagarin was the first cosmonaut—indeed the first person—in space. Valentina Tereshkova, a Russian factory worker, was the first woman in space, as well as arguably the first civilian to make it there (see below for a further discussion of civilians in space). On March 14, 1995, Norman Thagard became the first American to ride to space on board a Russian launch vehicle, and thus became the first "American cosmonaut".
Official English-language texts issued by the government of the People's Republic of China use astronaut while texts in Russian use космонавт (cosmonaut). In official Chinese-language texts, "yǔ háng yuán" (宇航员, "space navigating personnel") are used for astronaut and cosmonaut, and "háng tiān yuán" (航天员, "space navigating personnel") is specially used for Chinese astronaut. The phrase "tài kōng rén" (太空人, "spaceman") is often used in Taiwan and Hong Kong.
The term taikonaut is used by some English-language news media organizations for professional space travelers from China. The word has featured in the Longman and Oxford English dictionaries, the latter of which describes it as "a hybrid of the Chinese term taikong (space) and the Greek naut (sailor)"; the term became more common in 2003 when China sent its first astronaut Yang Liwei into space aboard the Shenzhou 5 spacecraft. This is the term used by Xinhua News Agency in the English version of the Chinese People's Daily since the advent of the Chinese space program. The origin of the term is unclear; as early as May 1998, Chiew Lee Yih (趙裡昱) from Malaysia, used it in newsgroups.
With the rise of space tourism, NASA and the Russian Federal Space Agency agreed to use the term "spaceflight participant" to distinguish those space travelers from professional astronauts on missions coordinated by those two agencies.
While no nation other than the Russian Federation (and previously the former Soviet Union), the United States, and China have launched a manned spacecraft, several other nations have sent people into space in cooperation with one of these countries. Inspired partly by these missions, other synonyms for astronaut have entered occasional English usage. For example, the term spationaut (French spelling: spationaute) is sometimes used to describe French space travelers, from the Latin word spatium for "space", and the Malay term angkasawan was used to describe participants in the Angkasawan program.
The first human in space was Soviet Yuri Gagarin, who was launched on April 12, 1961 aboard Vostok 1 and orbited around the Earth for 108 minutes. The first woman in space was Soviet Valentina Tereshkova, who launched on June 16, 1963 aboard Vostok 6 and orbited Earth for almost three days.
Alan Shepard became the first American and second person in space on May 5, 1961 on a 15-minute sub-orbital flight. The first American woman in space was Sally Ride, during Space Shuttle Challenger's mission STS-7, on June 18, 1983. In 1992 Mae Jemison became the first African American woman to travel in space aboard STS-47.
Cosmonaut Alexei Leonov was the first person to conduct an extra-vehicular activity (EVA), (commonly called a "spacewalk"), on March 18, 1965, on the Soviet Union's Voskhod 2 mission. This was followed two and a half months later by astronaut Ed White who made the first American EVA on NASA's Gemini 4 mission.
The Soviet Union, through its Intercosmos program, allowed people from other "socialist" (i.e. Warsaw Pact and other Soviet-allied) countries to fly on its missions. An example is Czechoslovak Vladimír Remek, the first cosmonaut from a country other than the Soviet Union or the United States, who flew to space in 1978 on a Soyuz-U rocket. On July 23, 1980, Pham Tuan of Vietnam became the first Asian in space when he flew aboard Soyuz 37.
Also in 1980, Cuban Arnaldo Tamayo Méndez became the first person of Hispanic and black African descent to fly in space. In 1983, Guion Bluford became the first African American to fly into space. The first person born in Africa to fly in space was Patrick Baudry, in 1985. In 1985, Saudi Arabian Prince Sultan Bin Salman Bin AbdulAziz Al-Saud became the first Arab Muslim astronaut in space. In 1988, Abdul Ahad Mohmand became the first Afghan to reach space, spending nine days aboard the Mir space station.
With the larger number of seats available on the Space Shuttle, the U.S. began taking international astronauts. In 1983, Ulf Merbold of West Germany became the first non-US citizen to fly in a US spacecraft. In 1984, Marc Garneau became the first of 8 Canadian astronauts to fly in space (through 2010). In 1985, Rodolfo Neri Vela became the first Mexican-born person in space. In 1991, Helen Sharman became the first Briton to fly in space. In 2002, Mark Shuttleworth became the first citizen of an African country to fly in space, as a paying spaceflight participant. In 2003, Ilan Ramon became the first Israeli to fly in space, although he died during a re-entry accident.
The youngest person to fly in space is Gherman Titov, who was 25 years old when he flew Vostok 2. (Titov was also the first person to suffer space sickness). The oldest person who has flown in space is John Glenn, who was 77 when he flew on STS-95.
The longest stay in space thus far has been 438 days, by Russian Valeri Polyakov. As of 2006, the most spaceflights by an individual astronaut is seven, a record held by both Jerry L. Ross and Franklin Chang-Diaz. The farthest distance from Earth an astronaut has traveled was 401,056 km (249,205 mi), when Jim Lovell, Jack Swigert, and Fred Haise went around the Moon during the Apollo 13 emergency.
Depending on the exact definition of 'civilian', the first civilian in space was either Valentina Tereshkova aboard Vostok 6 (she also became the first woman in space on that mission) or Joseph Albert Walker on X-15 Flight 90 a month later. Tereshkova was only honorarily inducted into the USSR's Air Force, which had no female pilots whatsoever at that time. Joe Walker had joined the US Army Air Force but was not a member during his flight. The first people in space who had never been a member of any country's armed forces were both Konstantin Feoktistov and Boris Yegorov aboard Voskhod 1.
The first non-governmental space traveler was Byron K. Lichtenberg, a researcher from the Massachusetts Institute of Technology who flew on STS-9 in 1983. In December 1990, Toyohiro Akiyama became the first paying space traveler as a reporter for Tokyo Broadcasting System, a visit to Mir as part of an estimated $12 million (USD) deal with a Japanese TV station, although at the time, the term used to refer to Akiyama was "Research Cosmonaut". Akiyama suffered severe space sickness during his mission, which affected his productivity.
The first person to fly on an entirely privately funded mission was Mike Melvill, piloting SpaceShipOne flight 15P on a suborbital journey, although he was a test pilot employed by Scaled Composites and not an actual paying space tourist. Seven others have paid to fly into space:
The first NASA astronauts were selected for training in 1959. Early in the space program, military jet test piloting and engineering training were often cited as prerequisites for selection as an astronaut at NASA, although neither John Glenn nor Scott Carpenter (of the Mercury Seven) had any university degree, in engineering or any other discipline at the time of their selection. Selection was initially limited to military pilots. The earliest astronauts for both America and the USSR tended to be jet fighter pilots, and were often test pilots.
Once selected, NASA astronauts go through twenty months of training in a variety of areas, including training for extra-vehicular activity in a facility such as NASA's Neutral Buoyancy Laboratory. Astronauts-in-training may also experience short periods of weightlessness in aircraft called the "vomit comet", the nickname given to a pair of modified KC-135s (retired in 2000 and 2004 respectively, and replaced in 2005 with a C-9) which perform parabolic flights. Astronauts are also required to accumulate a number of flight hours in high-performance jet aircraft. This is mostly done in T-38 jet aircraft out of Ellington Field, due to its proximity to the Johnson Space Center. Ellington Field is also where the Shuttle Training Aircraft is maintained and developed, although most flights of the aircraft are done out of Edwards Air Force Base.
Mission Specialist Educators, or "Educator Astronauts", were first selected in 2004, and as of 2007, there are three NASA Educator astronauts: Joseph M. Acaba, Richard R. Arnold, and Dorothy Metcalf-Lindenburger. Barbara Morgan, selected as back-up teacher to Christa McAuliffe in 1985, is considered to be the first Educator astronaut by the media, but she trained as a mission specialist. The Educator Astronaut program is a successor to the Teacher in Space program from the 1980s.
Astronauts are susceptible to a variety of health risks including decompression sickness, barotrauma, immunodeficiencies, loss of bone and muscle, loss of eyesight, orthostatic intolerance due to volume loss, sleep disturbances, and radiation injury. A variety of large scale medical studies are being conducted in space via the National Space and Biomedical Research Institute (NSBRI) to address these issues. Prominent among these is the Advanced Diagnostic Ultrasound in Microgravity Study in which astronauts (including former ISS commanders Leroy Chiao and Gennady Padalka) perform ultrasound scans under the guidance of remote experts to diagnose and potentially treat hundreds of medical conditions in space. This study's techniques are now being applied to cover professional and Olympic sports injuries as well as ultrasound performed by non-expert operators in medical and high school students. It is anticipated that remote guided ultrasound will have application on Earth in emergency and rural care situations, where access to a trained physician is often rare.
In Russia, cosmonauts are awarded Pilot-Cosmonaut of the Russian Federation upon completion of their missions, often accompanied with the award of Hero of the Russian Federation. This follows the practice established in the Soviet Union.
At NASA, those who complete astronaut candidate training receive a silver lapel pin. Once they have flown in space, they receive a gold pin. U.S. astronauts who also have active-duty military status receive a special qualification badge, known as the Astronaut Badge, after participation on a spaceflight. The United States Air Force also presents an Astronaut Badge to its pilots who exceed 50 miles (80 km) in altitude.
Eighteen astronauts (fourteen men and four women) have lost their lives during four space flights. By nationality, thirteen were American (including one of Indian origin), four were Russian (Soviet Union), and one was Israeli.
Eleven people (all men) have lost their lives training for spaceflight: eight Americans and three Russians. Six of these were in crashes of training jet aircraft, one drowned during water recovery training, and four were due to fires in pure oxygen environments.
The Space Mirror Memorial, which stands on the grounds of the John F. Kennedy Space Center Visitor Complex, commemorates the lives of the men and women who have died during spaceflight and during training in the space programs of the United States. In addition to twenty NASA career astronauts, the memorial includes the names of a U.S. Air Force X-15 test pilot, a U.S. Air Force officer who died while training for a then-classified military space program, and a civilian spaceflight participant.
ONE of astronomy's great orbiting observatories has breathed its last. On 14 January, the liquid helium cooling one of the Planck space telescope's two photon sensors ran dry, ending a mission to map the big bang's echo.
Planck's data will help tease apart the large-scale structure of the universe and determine how it formed. It will also provide the most detailed views of nearer phenomena, such as galactic dust and magnetic fields, which are superimposed on the spacecraft's view of the CMB. This data will be released in early 2013, once it has been processed.
Planck is the last in a line of observatories studying the CMB, which date back to 1989. As yet, there is no successor. "We want to know what the spacecraft will reveal before planning the next mission," says Jan Tauber of the European Space Research and Technology Centre in Noordwick, the Netherlands.
Tue Dec 02 22:46:12 GMT 2008 by Michael Paine
Over the lifetime of the Earth it is estimated that 4 star systems have passed within 1000au of our Sun. Exchange of comets between Ooort Clouds seems likely during these encounters.
Thu Dec 04 10:50:23 GMT 2008 by Lindsay
This comment rings true. The outer regions of Sol's Oort cloud must have had many more than 4 close encounters with stars over the last 4 billion years. 1000 AU corresponds to roughly 5.8 light days. A star approaching as close as 1 light year would be near enough to pass through the outer regions of the Oort cloud and severely disrupt it. I would suggest that there have been many more than four stellar encounters at that distance over the last 4 billion years or so. If other stars have similar Oort-like clouds then it is almost certain that there has been cross-contamination of Oort bodies during these stellar encounters. There may well be many such 'aliens' among us.
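The distance conversion in the comment is easy to verify; 1000 AU works out to just under six light days, close to the figure quoted:

```python
AU_M      = 1.495978707e11   # one astronomical unit, in metres
C         = 2.99792458e8     # speed of light, m/s
LIGHT_DAY = C * 86400.0      # metres light travels in one day

d_light_days = 1000 * AU_M / LIGHT_DAY
print(round(d_light_days, 2))   # 5.78
```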
Summary: Science Goals for an Antarctic Large Infrared Telescope
M.G. Burton, J.W.V. Storey and M.C.B. Ashley
Joint Australian Centre for Astrophysical Research in Antarctica,
School of Physics, University of New South Wales, Sydney NSW 2052, Australia
Over the past few years, site-testing at the South Pole has revealed conditions that are uniquely favorable for infrared astronomy. In particular, the exceptionally low sky brightness throughout the near- and mid-infrared leads to the possibility of a modest-sized telescope achieving comparable sensitivity to that of existing 8--10 metre class telescopes. An 8 m Antarctic telescope, if constructed, would yield performance that would be unrivaled until the advent of the NGST. In this paper we review the scientific potential of infrared telescopes in Antarctica, and discuss their complementarity with existing 8--10 m class telescopes.
Keywords: Antarctica, Site Testing, Astronomy, Infrared, Star Formation, Interstellar Medium, Disks
The Antarctic plateau provides unique conditions on the Earth for the conduct of observational astronomy. The air is thin, dry and cold and the weather stable; attributes all offering significant sensitivity gains over temperate latitude sites. These conditions are quite different to those experienced at Antarctic coastal locations, which are frequently subject to violent storms.

The plateau is over 3,000 m in elevation, rising up to 4,300 m at Dome A. An average year-round temperature of −50°C, falling to −90°C at times, vastly reduces the thermal background in the near-IR. A reduced particulate content of the atmosphere lowers the emissivity of the atmosphere in the mid-IR, reducing backgrounds still further.
Solar Energy Uses
Solar energy is the energy acquired from sunlight. Light is a form of energy, carried as electromagnetic radiation. Solar energy is harnessed using solar receptors. Its conversion into electrical energy is the most important aspect of solar power, because it is the production of electricity that makes solar energy practically useful.
Solar energy can be put to use wherever electrical energy is used, which covers electrical gadgets of every kind. In addition, solar energy can power solar cookers, which aid in the cooking of food. Once solar panels are installed, the captured energy can be routed through inverters to produce usable electricity. This electricity can serve as the main source for a home's entire electrical needs or, if it is not sufficient, as a supplemental source. Homes that depend completely on solar energy need more than one inverter, and most of the time require two or even three.
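As a rough illustration of how a home array might be sized, the figures below are assumptions for the example only (not from the article): a household using 20 kWh per day, with 300 W panels and five peak-sun hours, needs about 14 panels.

```python
import math

daily_load_kwh = 20.0    # assumed household consumption per day
panel_watts    = 300.0   # assumed rated output of one panel
peak_sun_hours = 5.0     # assumed equivalent full-sun hours per day

# Each panel yields roughly rated power * peak-sun hours per day.
kwh_per_panel = panel_watts * peak_sun_hours / 1000.0   # 1.5 kWh/day each
panels_needed = math.ceil(daily_load_kwh / kwh_per_panel)
print(panels_needed)   # 14
```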
When a home depends entirely on solar energy for power and does not draw any electricity from the local electrical grid, it is described as off-grid. Despite being off-grid, such homes usually keep a backup connection to the local grid so that if the solar panels stop functioning because of a breakdown, repair work or any other reason, the home's electrical needs are still taken care of.
Commercial establishments can also make good use of solar energy. Their electrical needs are far greater than those of homes, and they can certainly use solar energy to power their equipment. Laptops, computers, mobile-phone chargers, ovens and water heaters can all run on solar-generated electricity.
Departments in charge of local highways also use solar energy to economize on their electricity needs. Road signs, stop lights at traffic signals and flashing billboards are often powered with electricity derived from solar panels.
Subsequent research found that the pupils of more intelligent people (as defined by their Scholastic Aptitude Test scores) dilated less in response to cognitive tasks compared with those of lower-scoring participants, indicating more efficient use of brainpower.
Scientists have since used pupillometry to assess everything from sleepiness, introversion and sexual interest to race bias, schizophrenia, moral judgment, autism and depression. And whereas they haven't been reading people's thoughts per se, they've come pretty close.
"Pupil dilation can betray an individual's decision before it is openly revealed," concluded a 2010 study led by Wolfgang Einhäuser-Treyer, a neurophysicist at Philipps University Marburg in Germany. Participants were told to press a button at any point during a 10-second interval, and their pupil sizes correlated with the timing of their decisions. Dilation began about one second before they pressed the button and peaked one to two seconds after.
But are pupils informative outside the lab? Can pupil size be used to "read" a person's intentions and feelings? According to Men's Health magazine a man can tell when it is "time to make your move" by watching his date's pupils, but some skepticism is warranted. "It is unclear to me to what extent this can be exploited in completely unrestrained settings," Einhäuser-Treyer wrote in an e-mail, pointing out that light conditions could easily interfere with amateur attempts at interpersonal pupillometry.
Other efforts to exploit pupil dilations for purposes beyond scientific research have failed. During the Cold War, Canadian government officials tried to develop a device they called the "fruit machine" to detect homosexuality among civil service employees by measuring how the pupils in their eyes responded to racy images of women and men. The machine, which never worked, was to aid the government's purge of gay men and lesbians from the civil service and thereby purportedly reduce vulnerability to Soviet blackmail.
A pupil test for sexual orientation remains as unlikely as it was in the 1960s. Researchers at Cornell University recently showed that sexual orientation correlated with pupil dilation to erotic videos of their preferred gender, but only on average and only for male subjects. Although pupillometry shows promise as a noninvasive measure of sexual response, they concluded, "not every participant’s sexual orientation was correctly classified" and "an observable amount of variability in pupil dilation was unrelated to the participant's sexual orientation."
Pupillometry also became popular in the advertising industry during the 1970s as a way to test consumers' responses to television commercials, says Jagdish Sheth, a marketing professor at Emory University. But the practice was eventually abandoned. "There was no scientific way to establish whether it measured interest or anxiety," Sheth says.
The atmosphere is a cloud of gas and suspended solids extending from the Earth's surface out many thousands of miles, becoming increasingly thinner with distance but always held by the Earth's gravitational pull. The atmosphere is made up of layers surrounding the Earth that hold the air we breathe, protect us from outer space, and hold moisture (clouds), gases, and tiny particles. In short, the atmosphere is the protective bubble we live in. To learn more about our weather go to JetStream - An Online School for Weather.
SOUTH Africa, with its small science and research budget, tends to focus on the things it’s good at, or fields in which it has a geographic advantage. It is difficult for a country that spends 0.96% of its $408.24bn gross domestic product on research to compete with the likes of the US and China without some kind of advantage. As a bioscientist once told me, “A lab is a lab is a lab.”
While South African scientists still participate in these infrastructure-, equipment- and money-heavy scientific areas, we do try to make the best of what we have.
The most obvious is astronomy, with South Africa recently winning the bid to host about 70% of the Square Kilometre Array telescope. With South Africa being home to a new species of hominid, Australopithecus sediba, it is a given that we focus on palaeosciences and human evolution.
Another area in which South Africa is very active is the Southern Ocean.
I’ve been writing quite a bit about it, and today a colleague came to my desk and asked me what the Southern Ocean is — he’d never heard of it.
For some quick crib notes, this is the Southern Ocean.
Historically, South Africa has been involved in the Southern Ocean, but this research has generally focused on biodiversity, marine resources and the like.
This is partly because the Southern Ocean is, for the most part, understudied, making it a marine scientist and oceanographer's dream, but also because South Africa's territory extends to two islands in the sub-Antarctic Prince Edward archipelago: Marion Island (290km²) and Prince Edward Island (45km²). We also have a base in the Antarctic, and recently bought a shiny new polar research vessel called the SA Agulhas II.
So, why is the Southern Ocean important? It is the only ocean that is surrounded by other oceans, instead of land. Someone once described it to me as the “lungs” of the ocean system.
With global climate change inching up all governments’ agendas, the Southern Ocean is also falling under the microscope.
One of the things I find so fascinating about science, and an important factor in being a science journalist, is just how much we don’t know. People often think that science is axiomatic, and that it is taking all the mystery out of the world, but the truth is that the more we find out, the more we realise we don’t know.
Also, people spend their lives giving in to curiosity about strange, arcane topics.
Take these researchers from Stellenbosch University, Bjorn von der Heyden and Prof Alakendra Roychoudery, who, with American-based researchers, have been mapping the different kinds of iron in the Southern Ocean.
Why should iron matter? Plants (like humans) need iron to survive, both in water and out of it. These plants also consume carbon dioxide, which they convert into oxygen.
Carbon dioxide and greenhouse gases have become public enemy number one in the battle against climate change. We need to remove carbon dioxide (and monoxide) from the atmosphere, and the ocean is one of the world’s main carbon sinks. And the Southern Ocean is one of the main players in removing carbon dioxide from the atmosphere.
“Marine algae such as phytoplankton are more important than the Amazon rainforest for taking greenhouse-causing carbon dioxide out of the atmosphere,” Von der Heyden explains. “In parts of the Southern Oceans, the amount of algae or phytoplankton present depends on how much iron is available in the water,” he says.
“Ultimately, this means that the more iron there is available, the more phytoplankton can grow, and the more carbon dioxide can be taken from the atmosphere.”
Part of their research is to find out what kind of iron is present where and in what quantities. You have Fe(III), which is insoluble, and Fe(II), which is soluble. Physorg.com has an interesting article about it.
In their research, the scientists found that areas with high concentrations of Fe(II) had more phytoplankton.
This matters because — in an extreme scenario — people might one day use this information to geo-engineer the planet to stop global warming.
(I’m still trying to make my mind up about geo-engineering. You can read an article I wrote in 2009, which follows below this post.)
Now, before you stop reading, thinking I’m a conspiracy theorist or doom-and-gloom prophet, people are already doing it, as reported in The New York Times in October.
In short, the more we know the better. The more data we have, the better able we are to make informed decisions about the future of our planet.
From The Weekender, October 3 2009
A radical solution to climate change
If we fail to act, geo-engineering could be our last option — but we should beware the law of unintended consequences, writes Sarah Wild
A REPORT released by the UK’s Royal Society last month presents climate change and the issues surrounding it in direct and unequivocal terms: climate change is a reality, with global temperatures expected to rise between 2°C and 4°C this century.
If the world’s governments do not make a concerted and meaningful effort to reduce carbon emissions, the planet’s only hope may lie in the untested, and possibly dangerous, science of geo-engineering.
The phrase “climate change” is bandied about so frequently these days that it has lost its meaning and impetus.
Some people believe it is touted by flower-wearing hippies who want to impede the progression of industrialised, and in some cases developing, nations; others that they are buzz words wielded by politicians to instil the fear of God into their populations and win votes. Some people view the whole idea with scepticism.
Geo-engineering is a meeting place of various disciplines — engineering, science and risk management — with the aim of modifying the earth’s atmosphere and makeup to mitigate the effects of climate change caused by greenhouse gases.
While the thought of engineering the planet might get people’s hackles up, it is a broad science with measures ranging from the innocuous, such as painting roofs white to reflect heat rather than absorb it, to the extreme, such as shooting reflectors into the atmosphere.
The large quantities of carbon dioxide in the atmosphere allow solar radiation to filter unfettered through the stratosphere, but trap the resulting heat lower down, where it bounces back onto the earth's terrestrial and ocean surfaces, causing global temperatures to rise.
The side-effects of this phenomenon are vast and severe, from the melting of the permafrost in the Arctic and rising sea levels, to the acidification and warming of the oceans, to the disturbance of global climate patterns, causing drought and flooding and threatening water and food security.
The Royal Society has broken the various geo-engineering techniques into two categories: carbon dioxide removal and solar radiation management.
While carbon dioxide removal addresses the root of the problem — the removal of excessive quantities of greenhouse gases in the atmosphere — its techniques are expensive in relation to the hoped-for benefits, and its effects will be felt only in the long term.
This approach is considered to have the fewest side-effects and the most likely to succeed in recalibrating the earth’s climate.
The most promising carbon dioxide removal technique examined by the Royal Society was proposed by Klaus Lackner of Columbia University in New York.
His plan is to pepper the earth with artificial trees that suck carbon dioxide out of the atmosphere.
However, the costs of such a project are substantial and, even on the scale proposed, would not necessarily solve the greenhouse gas problem.
Another option is to create areas of oceanic algae to absorb the carbon dioxide that has dissolved in the oceans.
In order to cultivate these algae farms, large quantities of iron would have to be dumped into the ocean to promote this growth.
This not only raises questions about possible adverse effects on ocean ecosystems, but also on the people living on nearby land masses and the resultant legal, political and ethical issues.
Although removal of carbon dioxide may have side-effects, these are minor compared with the possible consequences of solar radiation management, which addresses rising temperatures.
Comparatively cheaper, these options have possibly wider-ranging results, which may be even more detrimental than climate change. One of the problems with these techniques is that they involve the manipulation of climate patterns, which are mysterious at the best of times. Even the most apparently innocent of these measures could have unforeseen consequences.
For instance, painting roofs white to reflect heat could alter the flow of warm air, disturbing air circulation and thus altering weather patterns.
A more drastic solution to the problem of global warming is the introduction of sulphates into the stratosphere to act as radiation reflectors.
This has precedent in nature. In 1991 Mount Pinatubo, a volcano in the Philippines, erupted, showering sulphate particles into the atmosphere. These particles reflected solar radiation back into space, and consequently global temperatures dropped by 0.5°C.
Once again, not enough is known about how Earth’s climate and atmosphere behave to state unequivocally that scientists know all of the consequences of something as radical as shooting tons of sulphates into the atmosphere.
All the case studies so far have been carried out in controlled environments on a small scale. With the millions of variables at play in our delicately balanced atmosphere, there is no way of knowing which one of them may be adversely affected, thus creating a chain of unforeseen events. Ozone depletion, which may result from the proliferation of sulphates, is just one of many possible results that could make the climate change problems worse.
As the Royal Society’s report points out, all of the proposed geo-engineering techniques are dogged by uncertainty regarding their effects as well as efficacy.
Prof John Shepherd, who chaired the study, says: “Our research found that some geo-engineering techniques could have detrimental effects on many people and ecosystems — yet we are still failing to take the only action that will prevent us from having to rely on them.
“Geo-engineering and its consequences are the price we may have to pay for failure to act on climate change. It is essential that we strive to cut emissions now, but we must also face the very real possibility that we will fail.
“If Plan B is to be an option in the future, considerable research and development of the different models must be undertaken now.”
Geo-engineering’s detractors accuse governments of turning to these possibly dangerous measures rather than addressing the root of the problem — greenhouse gas emissions caused by widespread use of fossil fuels.
Doug Parr, the chief scientist at Greenpeace UK, says: “Geo-engineering is creeping onto the agenda because governments seem incapable of standing up to the vested interests of the fossil fuel lobby, who will use the idea to undermine the emission reductions we can do safely.”
And geo-engineering really is “creeping onto the agenda”, with these issues being taken up by the UK parliament, Nasa and the Institute of Mechanical Engineers, among others.
To date, governments across the world have failed to take significant action on climate change and make concerted efforts to reduce greenhouse-gas emissions.
The world waits with bated breath for the United Nations summit on climate change taking place in Copenhagen in December but, considering the failure of the Kyoto Protocol, cynics are doubtful. Consequently, the use of geo-engineering techniques to mitigate the effects of climate change is becoming a real option, which requires extensive research.
In its recommendations, the Royal Society encourages governments to increase their research funding for geo-engineering. This is because, in its present form, the deployment of these options could be more catastrophic than climate change itself.
Ken Caldeira of the Carnegie Institute in California says: “The worst situation is to not test the options and then face a climate emergency and then be faced with deploying an untested option, a parachute that you’ve never tested out as the plane’s crashing.”
Jeunesse Park, founder of Food and Trees for Africa, has a more pragmatic view: “Even if we introduced serious reductions and offset programmes internationally, we will still have to deal with the 150 years of abuse we have caused.
“Technology is developing at such a rate that we’re not going to be able to stop it, so we should rather promote research and understanding, which can help save our species.”
Even if the geo-engineering techniques are found to be fool-proof, a quantum leap in itself, that would not solve the problems of using these techniques.
According to the Royal Society, technical and scientific issues may not be the ones to impede the promulgation of geo-engineering technology, but rather social, legal, ethical and political issues.
If world leaders have been unable to reach consensus on how to reduce carbon emissions, how will they agree on an international legislative framework to ensure the safe usage of this dangerous technology?
What’s negative 2 to the fourth power? 16? -16? If you put "-2^4" into the TI-83, you get -16. But we know that (-2)(-2) = 4, so (-2)^4 = (4)(4) = 16. So why does the calculator give us -16?
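The TI-83 isn't alone here: most programming languages apply exponentiation before the unary minus, so an unparenthesized base gives the same "wrong" answer. A quick check in Python:

```python
# Unary minus binds more loosely than exponentiation, so -2**4
# is parsed as -(2**4), exactly as the TI-83 parses -2^4.
print(-2**4)    # -16
print((-2)**4)  # 16: the base is parenthesized, so all four factors are -2
```

Parenthesizing the base is the only way to tell the machine the negative sign belongs to it.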
This post is no doubt for the high schooler and not for someone addicted to the )( buttons on the calculator like I am. I parenthesize. It comes from a fear that something will go negative that should be positive. I have reminded my students more times than I can count to parenthesize, so many times, in fact, that I am more than sure that most tune me out as soon as they hear the first syllable. But still the negative raised to an even number sneaks past the best of ‘em.
The evil negative base reared its ugly head again today when I graded papers on the geometric sequence an = a1 • r^(n-1) where:
an = the value of the nth term
a1 = first term’s value
r = ratio of change (ie “doubling” would be 2)
n = the term's placement (ie: 5th term would be n = 5)
“Find a7 if a1 = 5 and r = -2.” The answer I of course got more often than the correct one was “-320”. What should the answer be? “320”. The problem should be written out first as 5(-2)^(7-1) to make the process clear.
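The same formula, written carefully, can be checked in a couple of lines of Python (the function name `nth_term` is mine, not standard notation):

```python
def nth_term(a1, r, n):
    """Value of the nth term of a geometric sequence: a_n = a1 * r**(n - 1)."""
    # ** binds tighter than *, so r is raised to (n - 1) before multiplying by a1;
    # because r is a variable holding -2, its sign is carried into the power.
    return a1 * r ** (n - 1)

print(nth_term(5, -2, 7))  # 5 * (-2)**6 = 5 * 64 = 320
```

Note that a variable holding -2 behaves like the parenthesized (-2): the sign travels with the value, which is exactly what students lose when they type -2^6 by hand.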
At least no one gave -1,000,000 as an answer. There’s still hope!
The effect of pH on enzyme activity
The pH of a solution can have several effects on the structure and activity of enzymes.
For example, pH can affect the state of ionization of acidic or basic amino acids. Acidic amino acids have carboxyl functional groups in their side chains. Basic amino acids have amine functional groups in their side chains. If the state of ionization of amino acids in a protein is altered, then the ionic bonds that help to determine the 3-D shape of the protein can be altered. This can lead to altered protein recognition, or an enzyme might become inactive.
Changes in pH may not only affect the shape of an enzyme but may also change the shape or charge properties of the substrate, so that either the substrate cannot bind to the active site or it cannot undergo catalysis.
In general, enzymes have a pH optimum. However, the optimum is not the same for each enzyme.
For example, the figure below represents a situation in which two different enzymes have very different pH optima. The curve depicted in green might represent the pH optimum for the enzyme pepsin, which degrades proteins (a protease) in the very acidic lumen of the stomach. The second curve (in red) might represent the enzyme carbonic anhydrase, which works in the neutral pH of your cytosol.
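The two bell-shaped curves can be imitated with a toy model. The Gaussian fall-off and the width value below are illustrative assumptions of mine, not measured enzyme kinetics; only the two pH optima (about 2 for pepsin, about 7 for carbonic anhydrase) come from the discussion above:

```python
import math

def relative_activity(ph, ph_optimum, width=1.5):
    # Toy model (an assumption): activity falls off as a Gaussian
    # centered on the enzyme's pH optimum.
    return math.exp(-((ph - ph_optimum) / width) ** 2)

# Pepsin (optimum ~ pH 2) is fully active in the stomach but
# essentially inactive at the neutral pH of the cytosol.
print(round(relative_activity(2.0, ph_optimum=2.0), 2))  # 1.0
print(round(relative_activity(7.0, ph_optimum=2.0), 2))  # 0.0
```

The point of the sketch is just the shape: each enzyme's curve peaks at its own optimum, so an enzyme moved far from that optimum loses nearly all activity.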
Back to the outline
Back to effect of environment on enzyme activity
Our climate has changed many times during the planet’s history, with events ranging from ice ages to long periods of warmth. Historically, natural factors such as volcanic eruptions, changes in the Earth’s orbit, and the amount of energy released from the Sun have affected the planet’s climate. Beginning late in the 18th century, human activities associated with the Industrial Revolution have also changed the composition of the atmosphere and therefore very likely are influencing the Earth’s climate.
Greenhouse gases are necessary to life as we know it, because they keep the planet’s surface warmer than it otherwise would be. But, as the concentrations of these gases continue to increase in the atmosphere, the Earth’s temperature is climbing above past levels. The eight warmest years on record (since 1850) have all occurred since 1998, with the warmest year being 2005. Most of the warming in recent decades is very likely the result of human activities.
Scientists have observed that some changes are already occurring. Observed effects include sea level rise, shrinking glaciers, changes in the range and distribution of plants and animals, trees blooming earlier, lengthening of growing seasons, ice on rivers and lakes freezing later and breaking up earlier, and thawing of permafrost. Another key issue being studied is how societies and the Earth’s environment will adapt to or cope with climate change.
In the United States, scientists believe that most areas will continue to warm, although some will likely warm more than others. It remains very difficult to predict which parts of the country will become wetter or drier, but scientists generally expect increased precipitation and evaporation, and drier soil in the middle parts of the country. Northern regions such as Alaska are expected to experience the most warming. In fact, Alaska has been experiencing significant changes in climate in recent years that may be at least partly related to human caused global climate change.
Human health can be affected directly and indirectly by climate change in part through extreme periods of heat and cold, storms, and climate-sensitive diseases such as malaria, and smog episodes.
Climate Change is one of the most serious environmental problems facing the Planet Earth. Many are seeking ways to reduce the greenhouse gas emissions contributing to this problem. Some people are looking for simple, everyday steps. Others are ready to try new things and new ideas. Together, we will find the best way to fight Global Warming and make our Planet a better place to live!
There are quite a few things that can be done to reduce greenhouse gas emissions. That will help improve the current ecological situation and save some money too.
1. Change the bulbs. Use bulbs that have earned the ENERGY STAR label and you will help the environment and save money on electric bills.
2. Look for ENERGY STAR products: these products are safer to use, use less energy, and are made from environmentally safe materials.
3. Recycle your newspapers, beverage containers, paper and other goods.
4. Reduce water usage. It is simple: be smart when irrigating your lawn or landscape; only water when needed and do it during the coolest part of the day, early morning is best. Turn the water off while shaving or brushing teeth. Do not use your toilet as a waste basket, since water is wasted with each flush.
5. Convince your family and friends to do the same.
6. When buying a vehicle, find out about its emissions and fuel economy performance. The burning of fuel releases carbon dioxide (CO2) into the atmosphere and contributes to climate change, but these emissions can be reduced by improving your car’s fuel efficiency. To do so and reduce greenhouse gas emissions, go easy on the brakes and gas pedal, avoid hard accelerations, reduce time spent idling, and unload unnecessary items from your trunk to reduce weight.
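To see how much fuel efficiency matters, here is a back-of-the-envelope sketch. The figure of roughly 8.9 kg of CO2 per gallon of gasoline burned is a commonly cited EPA estimate and an assumption of mine, not a number from the tips above:

```python
# Approximate CO2 released per gallon of gasoline burned (EPA estimate,
# treated here as an assumption for a rough calculation).
KG_CO2_PER_GALLON = 8.9

def annual_co2_kg(miles_per_year, miles_per_gallon):
    """Rough annual driving CO2: gallons burned times CO2 per gallon."""
    gallons = miles_per_year / miles_per_gallon
    return gallons * KG_CO2_PER_GALLON

# Driving 12,000 miles a year: moving from a 20 mpg car to a 30 mpg car
# saves about 200 gallons of fuel, or roughly 1,780 kg of CO2 per year.
saved = annual_co2_kg(12000, 20) - annual_co2_kg(12000, 30)
print(round(saved))  # 1780
```

Even without changing how much you drive, the arithmetic shows why fuel economy is one of the largest levers an individual household controls.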
This is a large group of diverse unicellular and multicellular aquatic plants; they grow in both fresh water and seawater and are used commercially as a source of thickeners (agar, agarose, algin, carrageenan) and pigments such as beta carotene.
Some countries harvest algae as food. Spirulina is produced by Israel and Mexico; the Scots, Irish, and Canadians produce dulse; and the Japanese (and, increasingly, Americans) consume some species of Chlorella as well as seaweed like nori.
Some species of algae are important commercial pests because they clog pipes and foul pools, reservoirs and waterways (rivers are particularly susceptible to algal overgrowth that can kill fish when fertilizer is dumped in the water).
Specific algal divisions include:
Most of the information in this writeup was found in Introductory Plant Biology by Kingsley R. Stern. The rest is based on work I did for the science dictionary at http://biotech.icmb.utexas.edu/
Natural Catastrophes
Earthquake. Earthquakes and floods will through 2300 still occasionally kill tens of thousands of humans in developing societies. Of even greater historical consequence would be a possible massive earthquake in Tokyo, San Francisco, or Los Angeles. Such a quake could cause on the order of a trillion dollars in damage and could trigger a worldwide depression. In the worst case this would set back human progress by perhaps a decade.
Pandemic. How much of humanity could be killed in the future by a naturally-arising pathogen? In the 1500s and 1600s, European epidemics killed perhaps 90% of the aboriginal Americans. In the 1400s, the plague killed one third of the humans in Europe. The worldwide influenza of 1918 killed 30 million, and AIDS had killed at least half that by 2000. It seems unlikely that a natural pathogen could kill more than a small fraction of humanity, especially given modern sanitation. Evolutionary pressures tend to make pathogens less virulent over time, and newly-arising pathogens rarely seem to extinct their host species even in their initial outbreak. Genetically-engineered pathogens may be different.
Alien Aggression. The arrival of extraterrestrial intelligence on Earth might seem to pose a threat to human civilization. The arrival of Homo sapiens sapiens in Europe heralded the end of Homo sapiens neanderthalensis. The arrival of Homo sapiens in Australia and the Americas quickly led to the extinction of most of the native megafauna. Contact with farming civilization has almost invariably led to the decline or assimilation of hunter-gatherer cultures. Contact with industrial civilization has almost invariably caused severe disruption in pre-industrial civilizations.
Fortunately, ETI would be unlikely to colonize Earth. Biochemical differences would surely render Earth life inedible to any ETI that had not yet become machine-based. As modern economic experience shows, raw human labor is too easy to automate to make enslavement worthwhile. Earth has deuterium-rich oceans of water, but even more water is available on Europa, which is also not as deep in the Sun's gravity well. Except for its reactive oxygen atmosphere, Earth's climate is relatively benign and might be an attractive place to establish an ETI population. However, space faring ETI would probably value Earth more for studying than for exploiting. Space faring ETI could just as easily satisfy its resource needs using the uninhabited parts of the solar system. Note that an alien Von Neumann probe could pose a variant of the robot aggression or nanoplague catastrophes.
Interplanetary Impact. The impact on Earth of an asteroid or comet only a few miles across would have devastating blast, tidal wave, incendiary, and smoke effects. In particular, the global pall of smoke raised by such an impact could block enough sunlight to effectively cancel one or two agricultural seasons and starve billions of humans to death. Such a catastrophe would set back human progress by one or two centuries. With five or ten year's warning, humanity could mount a mission to prevent such an impact by adjusting the impacter's orbit. The probability of such an impact is extremely low, only happening every few hundred thousand years. Less probable by far is impact with or orbital disruption by a small black hole that might wander through the Solar system. Impact with a black hole would effectively destroy the surface of the earth and most or all life on it. Disruption of the Earth's orbit could cause a biosphere-destroying runaway greenhouse effect like on Venus. Even a slight increase in the eccentricity of Earth's orbit would cause ecological disruptions that would probably starve billions of humans. Ejection of Earth from the Solar system would in a matter of months freeze to death all terrestrial life (except perhaps ecosystems around volcanic vents at the bottoms of frozen oceans). Humanity will not be safe from such an event until its first self-sustaining extraplanetary colonies are created around 3000.
Supernova. A supernova would have to be within a few tens of light years of Earth for its radiation to endanger creatures living at the bottom of Earth's atmosphere. No stars that close to Earth will go supernova in the next few million years.
Ice Age. When Earth's next ice age arrives in 10,000 years or so, it will grant slight but welcome relief from the problem of heat pollution.
Magnetic Field Reversal. Earth's magnetic field reverses polarity every few hundred thousand years, and is almost non-existent for perhaps a century during the transition. The last reversal was 780 Kya, and the magnetic field's strength decreased 5% during the 20th century. During the next reversal the ozone layer will be unprotected from charged solar particles that could weaken its ability to protect humans from ultraviolet radiation. However, past reversals are not associated with any changes or extinctions in the fossil record, and the next reversal will not likely affect humanity in a catastrophic way.
Man-made Catastrophes
Nuclear Catastrophe. Nuclear power could result in three kinds of catastrophe: radioactive pollution, limited nuclear bombing, and general nuclear war. Accidental or deliberate radioactive pollution could kill tens or hundreds of thousands, but is quite unlikely to happen. Regional nuclear conflict in the Middle East or the Indian subcontinent could kill several million. Nuclear terrorism against Washington D.C. or New York City could kill more than a million and set back human progress by up to a decade. General nuclear war would kill hundreds of millions and could trigger a nuclear winter that might starve hundreds of millions more. While such a worst case would set back human progress by one or two centuries, existing nuclear arsenals could neither extinct humanity nor end human civilization.
Cultural Decline. Some humans fear that vice, crime, and corruption indicate ongoing social decline or impending collapse. Other humans fear that problems of class division, pollution, education, and infrastructure indicate economic decline or impending collapse. These fears are perennial and unfounded. Past examples of the drastic decline or collapse of a culture or civilization have almost always been due to environmental change, or infection or invasion by outside humans. But after the advent of continental steam locomotion in the mid-1800s, no society remains unexposed to the infections of the others. Similarly, all societies have been made part of a single global human civilization which is not subject to invasion by outside humans. Environmental change indeed poses a set of challenges, but they seem to represent constraints on growth rather than seeds of collapse.
Cultural stagnation is another possible (but milder) kind of potential catastrophe. As in Ming China, Middle Ages Europe, or the Soviet Bloc, stagnation can result if a static ideology takes hold and suppresses dissent. Such a development seems unlikely, given the intellectual freedom and communication technology of the modern world. Ideologies with totalitarian potential include fideist religions, communism, and ecological primitivism.
Bioterrorism. Could a pathogen be genetically designed to be virulent enough to extinct humanity? A pathogen would have to be designed to spread easily from person to person, persist in the environment, resist antibiotics and immune responses, and cause almost 100% mortality. Designing for long latency (e.g. months) might be necessary to ensure wide spread, but no length may be enough to infect every last human.
Robot Aggression. Some humans fear that the combination of robotics and artificial intelligence will in effect create a new dominant species that will not tolerate human control or even resource competition. These fears are misplaced. Artificial intelligence will be developed gradually by about 2200, and will not evolve runaway super-intelligence. Even when AI is integrated with artifactual life by the early 2200s, the time and energy constraints on artifactual persons will render them no more capable of global domination than any particular variety of humans (i.e. natural persons). Similarly, humanity's first Von Neumann probes will be incapable of overwhelming Earth's defenses even if they tried. To be truly dangerous, VN probes would have to be of a species with both true intelligence and a significant military advantage over humanity. Such a species would be unlikely to engage in alien aggression.
Nanoplague. Self-replicating nanotechnology could in theory become a cancer to the Earth's biosphere, replacing all ribonucleic life with nanotech life. The primary limit on the expansion of such nanotech life would, as for all life, be the availability of usable energy and material. Since any organic material would presumably be usable, the primary limit on how nanocancer could consume organic life would be the availability of usable energy. Fossil fuels are not sufficiently omnipresent, and fusion is not sufficiently portable, so nanocancer would, like ribonucleic microorganisms, have to feed on sunlight or organic tissues. Ribonucleic photosynthesis captures a maximum of about 10% of incident solar energy, while nanocancer should be able to capture at least 50%. The only way to stop nanocancer would be to cut off its access to energy and material or interfere with its mechanisms for using them.
Installs your own handler function to be called by unexpected.
unexpFunction: Pointer to a function that you write to replace the unexpected function.
The set_unexpected function installs unexpFunction as the function called by unexpected. unexpected is not used in the current C++ exception-handling implementation. The unexpected_function type, defined in EH.H, is a pointer to a user-defined unexpected function, unexpFunction, that takes no arguments and returns void. Your custom unexpFunction function should not return to its caller.
typedef void ( *unexpected_function )( );
By default, unexpected calls terminate. You can change this default behavior by writing your own replacement function and calling set_unexpected with the name of your function as its argument. unexpected calls the last function given as an argument to set_unexpected.
Unlike a custom termination function installed by a call to set_terminate, unexpFunction may throw an exception.
In a multithreaded environment, unexpected functions are maintained separately for each thread. Each new thread needs to install its own unexpected function. Thus, each thread is in charge of its own unexpected handling.
In the current Microsoft implementation of C++ exception handling, unexpected calls terminate by default and is never called by the exception-handling run-time library. There is no particular advantage to calling unexpected rather than terminate.
There is a single set_unexpected handler for all dynamically linked DLLs or EXEs; even if you call set_unexpected, your handler may be replaced by another, or you may be replacing a handler set by a different DLL or EXE.
ANSI, Windows 95, Windows 98, Windows 98 Second Edition, Windows Millennium Edition, Windows NT 4.0, Windows 2000, Windows XP Home Edition, Windows XP Professional, Windows Server 2003
For additional compatibility information, see Compatibility in the Introduction.
Not applicable. To call the standard C function, use PInvoke. For more information, see Platform Invoke Examples.