text | id | score | tokens | format | topic | fr_ease
|---|---|---|---|---|---|---|
It was the brightest cosmic explosion ever observed, and astronomers are still hotly debating its origin and implications. But already the giant flare of December 27, 2004, produced by a bizarre star in our own Milky Way galaxy, is providing a partial solution to a 10-year-old astrophysical mystery. Such "magnetar" flares in distant galaxies may account for at least some of a particular class of gamma-ray burst that has defied explanation.
Despite its distance of 50,000 light-years, the December flare was brighter than the full moon. Yet no one actually saw it, because it belched out almost all its stupendous power in the form of energetic gamma rays, completely saturating the sensitive Burst Alert Telescope on NASA's Swift satellite, which had been launched into orbit just five weeks earlier. "It was an astonishing event," recalls gamma-ray-burst researcher Ralph Wijers of the University of Amsterdam in the Netherlands.
After learning of the giant flare, Swift scientist David Palmer of Los Alamos National Laboratory immediately had a hunch. If a similar magnetar flare occurred in a distant galaxy, he reasoned, it would be indistinguishable from a so-called short gamma-ray burst, with a duration of less than two seconds or so. These short bursts are quite different from their longer cousins, which last from a few seconds to many minutes. Astronomers believe that long gamma-ray bursts, all detected in remote galaxies so far, signal the catastrophic and terminal detonation of very massive, rapidly spinning stars. This proposed mechanism probably does not apply to short gamma-ray bursts, however.
Palmer developed his idea and found that the magnetar flares offer at least a partial explanation. In an analysis to be published in Nature, he and his colleagues conclude that at least a few percent of all short bursts are quite likely to be explained in this way. Based on the observed luminosity and expected frequency of giant magnetar flares, a few dozen of these events per year would occur in other, relatively nearby galaxies. This amount is not enough to explain all short gamma-ray bursts, but, Palmer says, "5 percent is a good approximation." He quips that this number "is probably not off by more than a factor of 20, which is actually pretty good in this business."
As for the cause of the other short gamma-ray bursts, Chryssa Kouveliotou of the NASA Marshall Space Flight Center says that the leading explanation is the violent merger of two neutron stars orbiting each other. But Palmer notes: "With the December 27 event, we now know that neutron-star mergers are not responsible for all short gamma-ray bursts. Whether they are responsible for any of them is still an open question." Wijers agrees that it remains unclear whether a neutron-star merger can produce this type of gamma-ray burst.
The answer may come soon, though. Astronomers expect that the Swift satellite, which became fully operational in early April, will accurately pinpoint sky positions and distances for a number of short bursts, enabling scientists to finally get a grip on these enigmatic phenomena. Palmer, for one, is optimistic: "The next gamma-ray burst we see could bring enlightenment."
This article was originally published with the title Rare Flare. | <urn:uuid:af6a0353-e4ad-4e53-b7ce-9d917e4ae44e> | 3.59375 | 665 | Truncated | Science & Tech. | 38.888627 |
Two mechanisms for generating rotation in a volcanic plume have been identified. As the plume shoots up at an astounding 200 to 600 meters a second, winds from the environment surrounding the volcano can be drawn in as a horizontal vortex tube that is tilted and stretched as it travels upward. This mechanism is similar to what is seen in thunderstorms. Additionally, eddies and vortices from the volcanic environment itself can form a horizontal vortex ring, which is what causes the lumpy-looking profile of the plume.
Image courtesy of Zina Deretsky/NSF | <urn:uuid:1133c0c7-f135-4933-aec0-cefd4da932fb> | 4.0625 | 119 | Knowledge Article | Science & Tech. | 41.94075 |
I am currently taking a biology class. I do not understand this concept. I understand that the electrostatic repulsion of the negative charges, resonance stabilization and hydration stabilization all ...
Warm blooded animals like us keep their temperature constant irrespective of their surroundings. But how do they do that? Energy should be supplied from the inside. I assume that reactions like making ...
Thermodynamic efficiency can be expressed as the ratio of work done (W) to energy invested (Q): thermodynamic efficiency = W/Q. How can one measure work done by a ...
I was wondering what exactly a coupled reaction is and why cells couple them. I read the wikipedia article as well as several others, such as life.illinois.edu but I still don't get it. Could ...
A property of water is that it is slow to heat and cool. According to my biology book, some energy from an increase in temperature would be spent breaking hydrogen bonds, so that temperature does not ... | <urn:uuid:ab829c47-c717-4f69-83bd-18918e9c26d3> | 3.421875 | 201 | Q&A Forum | Science & Tech. | 52.368441 |
What would it take to go all renewable?
What would it take to use exclusively renewable energy resources? What would you have to add to or take away from your home? How would your life change? For most of my energy entries, I’ve talked about conservation at the individual level. That’s because I know we can make changes in what we do and how we view the world. However, it is always heartening to see large groups take up the challenge. And while a nation should have a plan, unless its citizens are behind it, it will never work.
That’s why I’m glad to report on some cities and regions that have made a plan to go to 100-percent renewable energy or beyond.
The District of Rhein-Hunsrück in Germany has a population of about 100,000. It uses a combination of wind, solar, and biomass to produce 100-percent renewable energy for its area.
For most, that would be a good place to stop. But it has plans to increase renewable energy production to 828 percent of its needs by 2050 so it can export the energy to its neighbors. (Well done!)
In the 1990s, it decided that it would take the money it used to import energy and invest it locally to become energy exporters. Its first step was energy conservation. Just by doing some energy conservation in its buildings, it was able to cut heating needs by 25 percent (something that is very energy-intensive in places that have weather other than “hot”).
The city of Dardesheim, also in Germany, uses solar panels, wind turbines, and biomass to produce 40 times as much energy as it uses. How did it do this? Back in the 1990s (it takes time) the community decided on a shared vision to create jobs and eliminate the importation of energy. While it only has a population of 1,000 (100 times smaller than Rhein-Hunsrück), it created a vision and made a plan.
And it isn’t only cities in Germany that are coming up with a renewable and sustainable path for their energy future.
For example, it’s expensive to import oil to the Island of El Hierro, off the northwest coast of Africa. To replace the oil it uses to generate electricity, it will move to a combination of wind, hydro, and solar power. With any excess wind energy, it’ll be able to pump water uphill into an inactive volcano crater. This gives it a little energy storage. This will let the 10,000 people who live on the island save 40,000 barrels of oil a year.
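For the curious, here’s a rough back-of-the-envelope estimate of what that kind of storage is worth, written as a little Python sketch. The reservoir volume, height, and efficiency below are illustrative guesses, not El Hierro’s actual engineering figures.

```python
RHO_WATER = 1000.0   # density of water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2
VOLUME = 500_000     # assumed upper-reservoir volume, m^3 (illustrative)
HEAD = 650.0         # assumed height of the crater above the turbines, m (illustrative)
EFFICIENCY = 0.75    # typical round-trip efficiency for pumped hydro

# Gravitational potential energy of the stored water, derated for losses
energy_joules = RHO_WATER * VOLUME * G * HEAD * EFFICIENCY
print(f"Usable storage: {energy_joules / 3.6e9:.0f} MWh")  # roughly 660 MWh
```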
But what about a little closer to home?
In 2007 San José, Calif., pledged to become a renewable-powered city by 2022. It was the first large city in the United States (around 1 million in population) to make such a pledge. Its plan had 10 points (not 12). It also has a website where you can view its progress. While it has had the most progress in diverting trash from landfills to waste-to-energy plants, it has made the least progress in planting new trees. Fortunately, that’s fairly easy to do.
But what about Houston? What is Houston doing?
Houston is becoming greener in leaps and bounds. Houston has been granted a number of awards and distinctions for its green programming, such as being named one of the top 25 solar cities by the Department of Energy, the Green Power Leadership award from the Environmental Protection Agency, and the Best Workplace for Commuters award from the Houston-Galveston Area Council, together with the EPA and the Department of Transportation.
Sure, it’s good to toot our own horns, but we should not rest on our laurels. There is an initiative (and funding) to help income-qualified Houstonians weatherize their homes. We have free, regular electronic-recycling and paper-shredding programs to reduce waste. While Houston is making strides, we should remember not to be too self-satisfied with what we’ve done. Rather, we should dream bigger and dare more boldly.
What should Houston do next? | <urn:uuid:11366805-5c15-446f-903b-864bc2f3c3af> | 2.6875 | 871 | Personal Blog | Science & Tech. | 60.014874 |
The “solar weather forecast” for the next few years is for increasingly poor conditions – as the solar cycle picks up, more matter will fly out from the Sun and eventually collide with our planet’s magnetic field, where the trapped high-energy particles will then lose energy by radiation, potentially disrupting many of our communications systems.
The solar particles, though, are not responsible for everything trapped by the Earth’s magnetic field – cosmic rays provide much higher energy particles than anything that comes out of the Sun and, it seems, also fuel a belt of trapped anti-protons around the Earth (as reported in this week’s New Scientist).
Anti-matter – which mutually annihilates on contact with matter – is a fascinating subject: working out why there is so little of it (when physics suggests there should be equal quantities of it and matter) is at the heart of cosmologists’ attempts to explain the creation of the universe.
So far the only way to access it consistently has been to produce it in high-energy collisions in places like CERN. The fact that the Earth has several billion anti-protons spiralling around its magnetic field at any given time may therefore point to new ways for researchers to get at it without having to build ever bigger colliders – though obviously space satellites aren’t ten-pence-a-dozen either.
- Discovered : A Belt of Antimatter Surrounding Earth (techie-buzz.com)
- Weighing Antimatter (quantumdiaries.org)
- Antiproton Ring Found Around Earth [Science!] (geeksaresexy.net)
- Blog – Antiproton Radiation Belt Discovered Around Earth (technologyreview.com)
- Research Team Observes Spin Quantum-Jumps with Single Trapped Proton and Anti-Proton (azonano.com)
- Antiproton ring found around Earth (newscientist.com) | <urn:uuid:48d4b635-754f-426f-9a2c-2a4bffec6661> | 3.171875 | 413 | Personal Blog | Science & Tech. | 37.314171 |
"Flight theory has legs"
I'm not quite convinced. I wonder if Chris examined the hand-claws of
theropods. After all, sinornithosaurs, microraptors and archaeopterygians
had *four* limbs to climb trees with.
"Flight theory has legs"
By Greg Roberts
July 11 2003
The question of how birds learned to fly has long puzzled the experts.
Anatomists and palaeontologists have generally favoured the "top down"
theory - that some time during the Jurassic period before about 150 million
years ago, dinosaurs clambered up trees and eventually, after developing
feathers and bristles and learning to glide to the ground, mastered the art
of flying.
Now, it seems, the "bottom up" theory has more feathers to fly with.
A study on the claws of birds suggests their forebears were much more
terrestrial, or ground-dwelling, than had been thought, and that they had
their feet very much on the ground before taking off.
In the first major study of its kind, Chris Glen, for his doctoral thesis at
the University of Queensland, has studied the claws of 1500 modern-day birds
from 500 species and compared them with the fossils of long-extinct species.
In a paper delivered to a national conference of palaeontologists at the
Queensland Museum in Brisbane this week, Mr Glen explained that the
curvature of the modern birds' claws varies radically and relates directly
to the extent to which birds spend time on the ground.
At one end of the spectrum, the claws of woodpeckers, which vigorously climb
tree trunks, curve 170 degrees. At the other end, the claw of the
flat-footed jacana, a lily-trotting waterbird, curves barely at all.
"In between, and including all the perching birds we're so familiar with,
you have the full range," Mr Glen said outside the conference.
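(As a toy illustration only: if one wanted to turn that claw-arc spectrum into
a crude classifier, it might look like the Python sketch below. The cutoff
angles are invented placeholders, not values from Glen's thesis.)

```python
def lifestyle_from_claw_arc(arc_degrees: float) -> str:
    """Map a claw-curvature arc to a rough lifestyle category.

    The thresholds below are illustrative guesses only.
    """
    if arc_degrees < 60:
        return "predominantly ground-dwelling"
    if arc_degrees < 120:
        return "mixed / perching"
    return "trunk-climbing"

# Jacana-like, generic percher, woodpecker-like
for arc in (5, 90, 170):
    print(arc, "->", lifestyle_from_claw_arc(arc))
```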
When he checked the fossil records of the ancestors of birds that lived
between 120 million and 150 million years ago, he was surprised by the
extent to which they matched the claws of the flat-footed birds.
For instance, the sinornithosaurus was a two-legged beast - there has long
been debate about whether many dinosaurs were reptiles, birds or something
in between - about 1.5 metres tall that hunted in packs. (They are the
mean-looking critters continually trying to make a meal out of Sam Neill in
the Jurassic Park films.)
The sinornithosaurus had been suspected of having tree-climbing capabilities
but Mr Glen said its claws indicate this is not so.
Other dino-birds known to be capable of flight, such as the starling-sized
confuciusornis of China and Europe's magpie-sized archaeopteryx, were
thought to be arboreal.
But Mr Glen said the claws of most suggest they were more likely to be
terrestrial, probably behaving similarly to modern-day chickens or ground
pigeons, which fly reluctantly.
Only one of the flight-capable dino-birds he examined, the sinornis of
China, appears to have been primarily arboreal.
"The evidence is quite clear that most of these bird ancestors were
ground-dwellers," Mr Glen said. He said the tree theory - that birds "used
the height of the tree and so on to advantage to gradually develop a flying
capability" - had made a lot of sense. "However, it appears more likely now
that it was a case of from the ground up."
| <urn:uuid:8734e78d-9c38-4f38-8ede-31021182a979> | 3.296875 | 808 | Comment Section | Science & Tech. | 47.268638 |
Alternate name: Lightning Bug
Family: Lampyridae, Fireflies view all from this family
Description Somewhat flattened beetle with threadlike antennae; large, widely separated eyes; and head clearly visible from above. Primarily black, with two bright-red eyespots on its thorax and yellow edging on thorax and wing case. The terminal segments of its abdomen are white-yellow and glow every 2-3 seconds while flying. Both sexes with flashing green light. Larva spindle-shaped with light organ below abdomen at rear.
Dimensions 3/8-5/8" (9-15 mm)
Food Carnivorous, feeding mostly on insects, but also on invertebrates such as land snails on occasion.
Life Cycle Eggs laid singly among rotting wood or humid debris on the ground. Larvae (also known as glowworms) hatch in spring, grow through summer, and then overwinter in pupal chambers just below the soil surface. They pupate the following spring, with adults emerging from early summer to late August.
Habitat Open woods and meadows.
Range East Coast to Texas, north to Manitoba.
Discussion Also known as Pennsylvania firefly, lightning bug, Pennsylvania lightning bug, and (in its larval state) glowworm. Eggs, larvae, and pupae are luminous, as well as adults. | <urn:uuid:7f1a5b8d-7cdf-41bc-b3c6-5ad2e8fb2e0d> | 3.171875 | 277 | Knowledge Article | Science & Tech. | 48.392747 |
Assuming a solid rectangular plate, hinged along one edge. How does one calculate the mass of the plate if the force necessary to lift the opposite edge is known?
This is blatantly a homework question, so we're only allowed to discuss methods, and not give you the answer. With any problem like this the very first thing to do is draw a diagram. From the limited information in your question I think the situation looks like this:
You know the force $F$ that you're using to lift the end of the plate, and you want to know the force $mg$, where $m$ is the mass of the plate and $g$ is the acceleration due to gravity.
You find $mg$ by taking moments about the hinge. The distance from the hinge to the end of the plate where you're applying the force is $l$, and if the plate is a rectangle the force $mg$ acts from the centre of mass, which is $l/2$ away from the hinge.
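In equation form, taking moments about the hinge means balancing the two turning effects: the applied force contributes $F \times l$ and the weight contributes $mg \times \frac{l}{2}$. Setting those two moments equal is the step that lets you solve for $m$.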
The rest is up to you! | <urn:uuid:fef64026-b316-4202-84a4-f0427385dc19> | 2.84375 | 214 | Q&A Forum | Science & Tech. | 65.776591 |
Thursday, November 20, 2008
Now for a serious post about outer space...
Constellation is the name of NASA's program to return to the moon, and possibly continue on to Mars, or maybe even other destinations in the solar system. It consists of two major components, the Ares booster system and the Orion crew vehicle.
The Ares system consists of two major components, the Ares I for launching the Orion crew vehicle into orbit, and the Ares V heavy-lift booster for launching major Constellation components into low earth orbit for on-orbit assembly and rendezvous with the Orion for subsequent deep space exploration.
The first thing that strikes almost anyone about the Constellation program is its uncanny resemblance to Apollo. The Ares I and Ares V boosters seem very similar in performance, size, and function to the Saturn IB and Saturn V. (Perhaps they will eventually rename it the Ares IB.) It seems almost impossibly retro.
NASA clearly recognizes the obvious similarity, and refers to Apollo five times on the Orion web page. They point out that Orion is much more advanced technology, despite the outward similarity, and has 2 1/2 times the internal volume of the Apollo command module, and referred to Constellation as "Apollo on Steroids" at the introductory press conference.
The Orion "crew exploration vehicle" (CEV) will carry 6 astronauts to low earth orbit, or four to the moon, will land ashore instead of at sea, and be reusable. It is NASA's final format to replace the space shuttle, which is a very disappointing decision.
The reason that the US space program ended up being based on missiles (specifically on the Nazi V2) is because the people at NASA who developed the program were Nazi rocket scientists. At the same time, the Air Force (with some NASA cooperation) was developing aerodynamic spacecraft - vehicles capable of using lift, instead of pure thrust, to escape the earth's atmosphere.
Although there were some additional technical obstacles with true "flying" spacecraft (mainly associated with the heat shielding), the potential advantages were substantial. There's a reason that United and Southwest fly airplanes and not rockets - they are a lot more efficient. You can lift a lot more payload with a lot less fuel than you can with a rocket.
The Air Force was already, by the early 1960s, flying to the edge of space in an aircraft, the rocket-powered X-15. There was a huge and impassioned debate about the best strategy to launch payloads into space, via aircraft or rockets, with NASA and the Army on the side of rockets and the Air Force and much of the aerospace industry favoring aircraft. Ultimately NASA won out because of the enormous influence and persuasive abilities of Wernher von Braun. Ironically, von Braun had proposed and advocated aerodynamic space vehicles at various times in his career, but had been employed by the Army's Ballistic Missile Agency (which became NASA's Marshall Space Flight Center) since WWII and was parochially aligned with the Army and missiles.
The Air Force had a "mini space shuttle" design by the late 1950s - the X-20 Dyna-Soar - but that program was scuttled by the political efforts of NASA and Wernher von Braun, who claimed it would compete with and reduce resources available to the Apollo lunar program.
This was all very unfortunate. Von Braun originally proposed the moon mission be staged as an Earth Orbit Rendezvous (EOR), using re-usable boosters and spacecraft to deliver large quantities of material and components for a very substantial moon expedition. That model - which has been revived for Constellation - would have been very well served by simple, rugged, reusable aerodynamic orbital vehicles like the Dyna-Soar.
Officially, Dyna-Soar was killed by Secretary of Defense Robert McNamara in 1963 because he said "there was no significant advantage in controlled re-entry", the publicly-stated main difference between Dyna-Soar and the Gemini Program. But a tremendous amount of the real history of the Dyna-Soar is shrouded in secrecy.
Dyna-Soar was originally intended to serve as a global exo-atmospheric, hypersonic weapons platform, similar to a Fractional Orbital Bombardment System (FOBS). It was a direct descendant of Eugen Sänger's proposed Silbervogel, also known as Hitler's Amerika Bomber. Eugen Sänger, of course, was a colleague and competitor of Wernher von Braun in Nazi Germany. The Amerika Bomber was the Nazis' planned unstoppable delivery platform for their nuclear bomb. The Silbervogel, which was designed in 1934 but never flew, looked a whole lot like an X-15. I think in my old blog I did a post comparing the designs of Sänger, the Horten brothers, and others in Germany in the early 1930s with the most advanced aircraft in the United States at the time.
United States, 1934:
Clearly there were very different things going on there.
What was going on in the early 1960s, however, was very heavily influenced by the cold war and the desire to contain the Soviet Union. Anything that could potentially prove to be an asymmetrical advantage, such as a fractional orbital bombardment system or orbital reconnaissance systems, was very heavily classified.
The apparent promise of aerodynamic space vehicles, however - their ability to maneuver much more radically in orbit than NASA's capsules, their re-usability, and their ability to fly to any point on the earth and land in a very short time - made their seeming abandonment in the early 1960s, after the spectacular success of the X-15, very puzzling indeed.
While the publicly known history indicates it was NASA and von Braun's political persuasion that shifted the emphasis to rockets and away from aircraft, there have long been rumors that the space-planes simply "went undercover".
Those long-circulating rumors were seemingly confirmed in 2006 by Aviation Week and Space Technology magazine, with a cover story claiming the existence of a highly covert space plane program, derived from the Dyna-Soar and known as "Blackstar". According to "Aviation Leak", as it is often known, the "Blackstar" program was a two-stage-to-orbit binary system based on not only the Dyna-Soar but also the cancelled XB-70 "Valkyrie" supersonic bomber. The Valkyrie served as the first-stage "mother ship" which lifted the orbital vehicle to around 100,000 feet for launch into space. Such a system makes a lot of sense as a much more efficient way to achieve orbit, in comparison to a multi-stage rocket.
The theory goes that the Air Force has had a space-plane program for many years, possibly since the mid-1980s, and many of the publicly-announced space plane development efforts since then, such as the X-30 National Aerospace Plane, were derived from, or used as cover for, the secret spaceships.
But if any of this is true, it's a really good secret. While there is plenty of evidence the Department of Defense spent billions in the 80s and 90s on very highly classified projects, there's no good evidence they bought any manned spacecraft with all that money. There is also solid evidence that the Air Force has flown something that goes very high and very fast since the retirement of the SR-71, but again there's nothing to say with any reliability it's a space plane. An unmanned hypersonic demonstrator prototype may be a more likely scenario.
It should be remembered that there were several aerodynamic spacecraft proposed to replace the Space Shuttle. Lockheed's most recent proposal in 2006 looked pretty much exactly like the X-20X Dyna-Soar III.
But once again, after many years of signaling that the Space Shuttle would be replaced with something truly versatile and innovative, NASA decided to go back to the 1950s with Wernher von Braun's rocket-and-capsule format.
What the heck? After all the work done on aerodynamic space vehicles, hypersonic pulse-detonation-wave and aerospike engines, advanced composite heat-dispersing materials, and digital flight controls, we're going to chuck it all and build "Apollo on steroids".
Is there something I'm missing here? The Apollo redux will use von Braun's original Earth Orbit Rendezvous, which is tailor-made for a rapid-turnaround, highly efficient aerodynamic lift platform. What you need to do is make a lot of trips to orbit to assemble the parts for your deep-space exploration vehicle, which could be boosted in pieces on existing launchers. You create a new "space shipyard" to exist as the permanent launching point for further exploration, which you can get to easily by launching an aerodynamic vehicle from a two-stage-to-orbit system like the rumored "Blackstar", or cancelled Dyna-Soar. Why do you want a new Apollo system for that requirement? It just doesn't seem to make a lot of sense.
One of the angles that might explain it is the desire for safety. The loss of Columbia and Challenger revealed just how dangerous the aerodynamic Space Shuttle is, while the Russians' early-60s-vintage Soyuz capsules are tried and true. In many ways, the Soyuz system is much superior to either Apollo or the Space Shuttle, and certainly it's proved its worth. The first Soyuz flew in 1966, and the United States will rely on Soyuz for access to the International Space Station in between the retirement of the Space Shuttle in 2010 and the first flight of Orion, which is hoped for 2014, but given the US Government's record in recent years with this sort of thing, may be fantastically optimistic.
Regardless of the format or direction NASA wishes to take in the manned spaceflight program, it may be that almost anything associated with space exploration could be very hard to pay for in the coming years. It's looking like the government isn't going to have the money to pay for much of anything "optional", because of steadily increasing non-discretionary costs and steadily decreasing revenues.
President-elect Obama initially said he would defer Constellation to pay for improvements in education, but has since changed his position, saying he would ask for additional funding for NASA to accelerate development of Constellation.
It is very interesting that Obama did a near "about face" on NASA, and it is somewhat hard to determine what motivated the change. The most obvious and likely cause was the desire to win votes in Florida, which hosts much of NASA's infrastructure. Another scenario is that Obama became aware of the difficult and potentially embarrassing situation the country will face when the shuttles are retired and NASA must contract with Russia for access to the space station. A more improbable scenario is that Obama was briefed on how NASA's development of Constellation interrelates with some possible classified military space program.
But the apparent ultimate reality is that NASA wouldn't be planning to build a new version of Apollo if it had any better ideas. Given the record of the government in recent decades (since the 1960s) for building anything new and innovative, perhaps NASA is making a very smart decision by keeping the technological risk to a minimum.
But wouldn't it be great if we could recapture the spirit and the energy of NASA's "golden age", and strive to do something really new and exciting? | <urn:uuid:d694384c-e527-4f28-b52e-44c9a20c6f13> | 3.328125 | 2,335 | Personal Blog | Science & Tech. | 35.83971 |
SCons also allows you to identify the output file and input source files using Python keyword arguments. The output file is known as the target, and the source file(s) are known (logically enough) as the source. The Python syntax for this is:
src_files = Split('main.c file1.c file2.c')
Program(target = 'program', source = src_files)
Because the keywords explicitly identify what each argument is, you can actually reverse the order if you prefer:
src_files = Split('main.c file1.c file2.c')
Program(source = src_files, target = 'program')
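And if I'm reading the SCons builder interface correctly, you can skip the keywords entirely and pass the arguments positionally, target first and then source:

src_files = Split('main.c file1.c file2.c')
Program('program', src_files)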
Whether or not you choose to use keyword arguments to identify the target and source files, and the order in which you specify them when using keywords, are purely personal choices; SCons functions the same regardless. | <urn:uuid:15b5733b-4d10-45e1-853f-0ca4ccae1563> | 2.8125 | 180 | Documentation | Software Dev. | 52.651138 |
Researchers from the US, UK and Hong Kong have produced a unique ‘movie’ of climate reaching back 5 million years by bringing together data drilled from ocean beds. It reveals three important temperature patterns during the warm early part of the Pliocene period that they couldn’t recreate together in climate models using existing explanations. That’s important because scientists hope the Pliocene could help us know what the future of a warmer Earth might be like. And having uncovered another layer to the Pliocene puzzle, team member Kira Lawrence from Lafayette College in Easton, Pennsylvania, underlined the value of finding its solution.
“Our community of scientists think of the Pliocene as though it was about 3°C warmer than modern temperatures with CO2 concentration about where we are right now,” Kira told me. “But we haven’t recognised before that the pattern of temperature was a lot different. If that’s where we’re headed in the not too distant future, if the temperature and precipitation patterns change in that way, we should have some significant things to think about.”
The Pliocene period started 5.3 million years ago, during which primates made important evolutionary steps towards humanity. Since 2000, there has been a climate data explosion reaching back through this era. Around the world, international drilling expeditions have pierced ocean beds kilometres below sea level, reaching hundreds of metres into sediment to bring back ‘core’ samples. Tiny fossils within that rock and mud can tell scientists temperatures through history, which can give climate scientists real data to test their models against. | <urn:uuid:e4d577a7-a41d-4dc9-a469-69632057e843> | 4.03125 | 329 | Personal Blog | Science & Tech. | 35.331164 |
Orbits 'R' Us!
When we talk about how Earth and the other planets travel around the Sun, we say they orbit the Sun. Likewise, the moon orbits Earth. Many artificial satellites also orbit Earth.
When it comes to satellites, space engineers have different types of orbits to choose from.
Satellites can orbit Earth's equator or go over Earth's North and South Poles . . . or anything in between. They orbit at a low altitude of just a few hundred miles above Earth's surface or thousands of miles out in space.
The choice of orbit all depends on the satellite's job.
The two GOES* weather satellites, for example, have the job of keeping an eye on the weather over North America. They need to "never take their eyes off" any developing situation, such as tropical storms brewing in the Atlantic Ocean, or storm fronts moving across the Pacific Ocean toward the west coast of the U.S. Therefore, they are "parked" in what is called a geostationary (gee-oh-STAY-shun-air-ee) orbit. They orbit exactly over Earth's equator and make one orbit per day. Thus, since Earth rotates once on its axis per day, the GOES satellite seems to hover over the same spot on Earth all the time. (*GOES stands for Geostationary Operational Environmental Satellite.)
On the other hand, satellites whose job is to make maps or study all different parts of Earth's surface need an orbit that comes as close to passing over the North and South Poles as possible. This way, Earth turns under the satellite's orbit and Earth does most of the work of traveling! Also, the satellite should be close to Earth's surface (a few hundred miles up) to get a good view with its imaging and measuring instruments.
The lower the satellite's orbit, the less time it takes to make one trip around Earth, and the faster it must go. That's why a geostationary orbit must be so high. It has to go out far enough so that it can travel slowly enough to go around Earth only once per day.
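If you'd like to check that for yourself, here is a small Python sketch that works out the geostationary distance from Kepler's third law, using standard textbook values for the constants.

```python
import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24    # mass of Earth, kg
R_EARTH = 6.371e6     # mean radius of Earth, m
T_SIDEREAL = 86164.0  # one Earth rotation relative to the stars, s

# Kepler's third law for a circular orbit: T^2 = 4 pi^2 r^3 / (G M)
r = (G * M_EARTH * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"Geostationary altitude: {(r - R_EARTH) / 1000:.0f} km")  # about 35,786 km
```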
*POES stands for Polar-orbiting Operational Environmental Satellites.
Suppose two satellites are to be launched to the same altitude. However, one is to go into a polar orbit and one is to orbit the equator. Can you guess which satellite will take the most fuel to reach its orbit?
If you guessed the polar orbiting satellite, you are right. | <urn:uuid:be10391b-711c-4d12-bb9c-078fc593a1b0> | 3.859375 | 518 | Knowledge Article | Science & Tech. | 58.173133 |
We have learned two important things from the Berkeley Earth Surface Temperature Study (BEST):
- Denier claims that prior scientific analysis of the key land surface temperature data OVER-estimated the warming trend were not merely wrong, but the reverse was true. Warming has been high and accelerating.
- The Deniers and Confusionists and their media allies can never be convinced by the facts and will twist themselves into pretzels to keep spreading disinformation.
We also learned that BEST’s Judith Curry still would rather be a confusionist than a scientist — but that ain’t news (see “Judith Curry abandons science“).
The decadal land-surface average temperature using a 10-year moving average of surface temperatures over land. Anomalies are relative to the Jan 1950 – December 1979 mean. The grey band indicates 95% statistical and spatial uncertainty interval.
Recall the foundation of the phony Climategate charge. Somehow the climate scientists at the Climatic Research Unit (CRU) at the University of East Anglia, led by Phil Jones, were manipulating the data and the peer review process as part of a grand conspiracy to convince the public the earth has been warming faster than it really is. A key point is that “the CRU compiles the land component of the record and the Hadley Centre provides the marine component.”
The BEST team vindicated climate science — see Koch-Funded Berkeley Temperature Study Does “Confirm the Reality of Global Warming.” Equally important, if you read the key paper, they found:
we find that the global land mean temperature has increased by 0.911 ± 0.042 C since the 1950s…. our analysis suggests a degree of global land-surface warming during the anthropogenic era that is consistent with prior work (e.g. NOAA) but on the high end of the existing range of reconstruction.
D’oh! The BEST data shows considerably higher warming in recent years than HadCRU (the red line above).
Of course, this isn’t news to anybody who actually follows this issue. Two years ago, the Met Office released an analysis concluding that “The global temperature rise calculated by the Met Office’s HadCRUT record is at the lower end of likely warming.”
As an aside, Muller, in a March 2010 talk (near the end) clearly states that if warming is on the high range, then humanity should be more concerned because we have “less time to react.”
What’s even more worrisome is that the study clearly shows that the warming trend is accelerating. First, “Our analysis technique suggests that temperatures during the 19th century were approximately constant (trend 0.20 ± 0.25 C/century).” No big surprise there.
But then as human emissions kick into overdrive, things heat up:
The trend line for the 20th century is calculated to be 0.733 ± 0.096 C/century, well below the 2.76 ± 0.16 C/century rate of global land-surface warming that we observe during the interval Jan 1970 to Aug 2011.
That is, in the past 40 years, the land has warmed nearly four times faster than it did over the 20th century as a whole. This really kills the denier meme that the observed data suggest we will see only a small amount of warming this century.
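(For readers who want to see how a "C/century" trend is computed, the sketch below fits an ordinary-least-squares line to an anomaly series. The data here are synthetic placeholders, not the actual Berkeley Earth series.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual land anomalies with a built-in 2.76 C/century trend
years = np.arange(1970, 2012)
anomalies = 0.0276 * (years - 1970) + rng.normal(0, 0.15, years.size)

slope_per_year, _ = np.polyfit(years, anomalies, 1)
print(f"Fitted trend: {slope_per_year * 100:.2f} C/century")  # close to 2.76
```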
In fact, even the high and accelerating warming of the past 4 decades was reduced by human and volcanic aerosol emissions and the general lags between emissions and warming. Thus, it is now patently obvious that if we stay on our current emissions path, the acceleration of warming will continue as greenhouse gas concentrations continue rising. That’s without even considering the amplifying carbon-cycle feedbacks.
Another mini-bombshell in the paper, which has led co-author Curry to (try to) frag team leader Muller, is this conclusion:
Though it is sometimes argued that global warming has abated since the 1998 El Nino event (e.g. Easterling and Wehner 2009, Meehl et al. 2011), we find no evidence of this in the GHCN land data. Applying our analysis over the interval 1998 to 2010, we find the land temperature trend to be 2.84 ± 0.73 C / century, consistent with prior decades.
Still warming, after all these years.
Now even though Curry signed her name to this submitted journal article, she apparently doesn’t believe it’s true.
The pseudo-journalist David Rose of the UK's Daily Mail got a bunch of quotes from her in a piece headlined, "Scientist who said climate change sceptics had been proved wrong [aka Muller] accused of hiding truth by colleague" [aka Curry].
It is exceedingly difficult to know what Curry is saying because
- It is always difficult to know what Curry is saying (see Hockey Stick fight at the RC Corral).
- Rose generally isn’t reliable (see “David Rose destroys his credibility and the Daily Mail’s with error-riddled climate science reporting” and links therein).
- Curry has already walked back some of her comments (see her post here, but put a head vise on first, please).
But, she does say on her blog, “In David Rose’s article, the direct quotes attributed to me are correct.”
Still, neither she nor Rose appear to know what they are talking about. Nor does Curry appear to have read the paper she put her name on.
Tamino has sorted out the statistics in his post, “Judith Curry Opens Mouth, Inserts Foot.” He notes at the end:
Judith Curry protests that she was misrepresented by the article in the Daily Mail, and several readers have mentioned that David Rose, the author of the article, is just the man to do such a thing. It’s easy to believe that she was indeed the victim of his malfeasance.
But even after reading this post, she still hasn’t disavowed the statement “There is no scientific basis for saying that warming hasn’t stopped.” In fact she commented on her own blog saying, “There has been a lag/slowdown/whatever you want to call it in the rate of temperature increase since 1998.” Question for Curry: What’s your scientific basis for this claim?
In his post, Tamino shows there is no scientific basis for the Curry’s claim at all:
Judith Curry’s statement is exactly the kind of ill-thought-out or not-at-all-thought-out rambling which is an embarrassment to her, and an embarrassment to science itself. To spew this kind of absolute nonsense is shameful. Judith Curry, you should be ashamed of yourself.
So has global warming stopped? The deniers and confusionists would have you believe so. In fact, Tamino shows that the warming trend is real in the Berkeley data even if you start the trendline fairly recently. You’ll have to read his post for details, since it’s hard to summarize his analysis.
Bottom Line: Curry tried to frag Muller, but dropped the grenade on herself.
- WashPost: “The Scientific Finding that Settles the Climate-Change Debate” and “Confirms” the Hockey Stick Graph
- The deniers were half right (12/10): The Met Office Hadley Centre had flawed data — but it led them to UNDERestimate the rate of recent global warming. | <urn:uuid:eaa45501-c25e-4cb3-a698-4d7a1a672961> | 2.703125 | 1,587 | Personal Blog | Science & Tech. | 55.391253 |
Elegance coral (Catalaphyllia jardinei)
Elegance coral fact file
Elegance coral description
With its distinctive green tentacles, tipped with bright pink, elegance coral (Catalaphyllia jardinei) is one of the most beautiful of all corals. Many individual coral polyps come together to form a colony, which has wide v-shaped valleys. Each polyp has a striped oral disc, or mouth, surrounded by the colourful, tubular tentacles, which the polyp uses to capture food (3).
Also known as: elegant coral, wonder coral.
- Veron, J.E.N. (1986) Corals of Australia and the Indo-Pacific. Angus & Robertson Publishers, UK.
- Algae: Simple plants that lack roots, stems and leaves but contain the green pigment chlorophyll. Most occur in marine and freshwater habitats.
- Colonial: Relating to or belonging to a colony (a group of organisms living together in a group).
- Colonial (of corals): A coral composed of numerous genetically identical individuals (also referred to as zooids or polyps), which are produced by budding and remain physiologically connected.
- Free-living (of corals): Corals that are not attached to the substrate.
- Photosynthesis: Metabolic process characteristic of plants in which carbon dioxide is broken down, using energy from sunlight absorbed by the green pigment chlorophyll. Organic compounds are produced and oxygen is given off as a by-product.
- Polyp: Typically sedentary soft-bodied component of Cnidaria (corals, sea pens etc), which comprises a trunk that is fixed at the base; the mouth is placed at the opposite end of the trunk and is surrounded by tentacles.
- Symbiotic: Describing a close relationship between two organisms. This term usually refers to a relationship that benefits both organisms.
IUCN Red List (March, 2011)
CITES (June, 2007)
- Veron, J.E.N. (2000) Corals of the World. Vol. 2. Australian Institute of Marine Science, Townsville, Australia.
- Raymakers, C. (2001) Review of trade in live corals from Indonesia. TRAFFIC Europe, Brussels, Belgium.
- Borneman, E.H. (2001) Aquarium coral; Selection, Husbandry and Natural History. T.F.H. Publications, New Jersey, USA.
Reefs at Risk: A Programme of Action (July, 2007)
- Wilkinson, C. (2002) Status of Coral Reefs of the World. Australian Institute of Marine Science, Townsville, Queensland.
- Green, E. and Shirley, F. (1999) The Global Trade in Corals. World Conservation Press, Cambridge, UK.
Elegance coral biology
Many aspects of the biology and life history of Catalaphyllia are unknown (4). This species can be both free-living and colonial. Like many corals, elegance coral has a special symbiotic relationship with algae called zooxanthellae. The zooxanthellae live inside the tissues of the coral and provide the coral with nutrients, which they produce through photosynthesis, and therefore require sunlight. In return, the coral provides the algae with protection and access to sunlight. The coral polyps also obtain nutrients by capturing prey with their tentacles (5).
Elegance coral range
Elegance coral occurs in the Indian and Pacific Oceans, from the Seychelles to Vanuatu, and from northern Australia to southern Japan (3).
Elegance coral habitat
Elegance coral occurs in tropical and temperate waters, in sheltered and preferably turbid water (3).
Elegance coral status
Classified as Vulnerable (VU) on the IUCN Red List.
Elegance coral threats
Elegance coral faces many threats that are affecting coral reefs globally. These include increasing pressure on coastal resources, resulting from human population growth, and technological development, such as mechanical dredges, and dynamiting and poisoning on reefs to collect fish, which destroys reefs. The impacts of these major factors are compounded by the effects of excessive domestic and agricultural waste in the oceans, poor land-use practices that result in an increase in sediment running onto the reefs, and over-fishing, which can have ‘knock-on’ effects on the reef (6).
The devastating effect of human activities is exemplified by the destruction of a large community of elegance coral in Kushimoto, western Japan, caused by the construction of a marine port; over 5,000 square kilometres of coral reef were also destroyed around Sesoko Island, Japan, during development of the shoreline (7). Elegance coral may also be threatened by harvesting for the live coral trade. Its beautiful tentacles mean that it is a popular aquarium exhibit, and it is one of the species that dominates the live coral trade, a trade that increased tenfold from 1985 to 1997 (8).
Elegance coral conservation
Elegance corals are listed on Appendix II of the Convention on International Trade in Endangered Species (CITES), which means that trade in this species should be carefully regulated, and a permit is required to bring the coral, or objects made from it, into the countries that have signed the CITES convention (2). Elegance corals will also form part of the marine community in many marine protected areas, or in areas where management plans are in place to protect the coral community.
Find out more
For further information on elegance coral: EDGE of Existence.
For further information on the conservation of coral reefs: The Coral Reef Alliance.
| <urn:uuid:d46bdb37-261f-4f27-a7c3-d3707ffb68f3> | 3.09375 | 1,801 | Knowledge Article | Science & Tech. | 31.877323 |
A phylum - also known as a division when referring to plants - is a scientific way of grouping together related organisms. All the members of a phylum have a common ancestor and anatomical similarities. For instance, all the arthropods have external skeletons. Phyla are large groups and are further subdivided into classes, orders, families and so on.
In biological classification, rank is the level in a taxonomic hierarchy. Examples of taxonomic ranks are species, genus, family, and class. Each rank subsumes under it a number of less general categories. The rank of species, and specification of the genus to which the species belongs is basic, which means that it may not be necessary to specify ranks other than these.
| <urn:uuid:790e61d5-5550-4f50-87aa-640bf546fd25> | 3.671875 | 223 | Knowledge Article | Science & Tech. | 46.405909 |
Chemical rockets operate on essentially the same technology that we've had since the 1930s, and it's dangerous, expensive, and very inefficient. It's high time for a better way of getting to space, and lasers might be the way to do it.
The problem with chemically powered rockets is that they effectively waste a crazy amount of their thrust just lifting all the fuel they need to create that thrust in the first place. Take the famous Saturn V moon rocket, for example. It weighed like 6.7 million pounds, and of that, only 250,000 pounds actually made it into low-Earth orbit. That's like 5% of the total mass of the vehicle, which in an absolute sense, is not very good.
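You can sanity-check that number with the Tsiolkovsky rocket equation. The delta-v and exhaust-velocity figures below are round ballpark assumptions, not Saturn V specifics:

```python
import math

DELTA_V = 9400.0    # approx. delta-v to low-Earth orbit incl. drag/gravity losses, m/s
V_EXHAUST = 3000.0  # rough average effective exhaust velocity, m/s (assumed)

# Tsiolkovsky: m_final / m_initial = exp(-delta_v / v_exhaust)
fraction = math.exp(-DELTA_V / V_EXHAUST)
print(f"Mass reaching orbit: {fraction:.1%} of liftoff mass")  # around 4-5%
```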
One way around this might be to send energy to a rocket in-flight using an array of powerful microwave lasers that stay on the ground. While the lasers wouldn't power the rocket directly, they'd heat it up, and the rocket would have a propellant that would effectively convert that heat into thrust by boiling something. Specifically, a heat exchanger on the outside of the rocket would take all the heat provided by incoming laser beams and transfer it back to the engines, which would use it to turn cold liquid hydrogen into hot gaseous hydrogen and fire it out the tailpipe, and boom, you've got a rocket.
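Here's why hot hydrogen is such a good propellant: ideal exhaust velocity scales as the square root of chamber temperature divided by molar mass, and hydrogen's molar mass is the smallest there is. This sketch assumes ideal expansion and a constant heat-capacity ratio, which a real nozzle won't quite achieve:

```python
import math

GAMMA = 1.4        # heat-capacity ratio for H2, assumed constant
R_GAS = 8.314      # universal gas constant, J mol^-1 K^-1
M_H2 = 2.016e-3    # molar mass of hydrogen, kg/mol

def ideal_exhaust_velocity(chamber_temp_k: float) -> float:
    """Ideal velocity for complete expansion of a perfect gas, m/s."""
    return math.sqrt(2 * GAMMA / (GAMMA - 1) * (R_GAS / M_H2) * chamber_temp_k)

for temp in (1500, 2500, 3500):  # plausible heat-exchanger temperatures, K
    print(f"{temp} K -> {ideal_exhaust_velocity(temp):,.0f} m/s")
# Chemical hydrogen/oxygen engines manage roughly 4,500 m/s for comparison.
```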
Since much of the actual reaction energy in a system like this would be coming directly from the ground and wouldn't have to be carried along, the amount of payload that a laser-powered rocket could lift would be double that of a conventional chemical rocket, or more. Also, since the propellant isn't actually exploding, the rocket would be much safer, and by shutting down the lasers on the ground, you could just turn it off if you had to.
The company trying to make this happen is called LaserMotive, and they've already proven that it's possible to use lasers to beam usable amounts of power significant distances. They envision giant arrays of ground-based lasers that could put cheap spacecraft into orbit in under five minutes, and if they can find themselves a trillion-watt laser (which might be realistic in 50 years), they'll be able to start launching laser-powered interstellar probes. | <urn:uuid:2fd7b97f-ae61-4101-9975-c1c12eb5abf8> | 3.53125 | 458 | Personal Blog | Science & Tech. | 49.515909 |
The physics of espionage
As the 23rd Bond movie, Skyfall, hits cinemas, we take a look at the physics behind spying.
Premiering on 26 October, Skyfall, the 23rd film in the official James Bond franchise, sees the reintroduction of MI6’s gadgetmaster, known only as Q. Innovative gizmos have long been a staple of the series, with rocket belts, lasers hidden in wristwatches, and stun-guns concealed in mobile phones helping get 007 out of some tight spots when on Her Majesty’s secret service.
Many of them are a little far-fetched, and have been criticised by physicist and science-communicator Neil deGrasse Tyson for taking rather too much creative licence (to kill) with the laws of nature. But while Hollywood occasionally stretches credibility way past its elastic limit, there’s plenty of real physics involved in spying.
Russia Greece With Love
During the second world war, British forces had more than ten thousand personnel operating behind enemy lines – the fabled Special Operations Executive. But communicating with their handlers back home put them in danger, as their signals gave away their locations to Axis spycatchers.
Radio antennas that are particularly direction-sensitive will see a peak in the strength of a received signal when pointed directly towards the source of a transmission. Using two different antennas placed in widely separated positions, the exact position of a broadcasting secret agent can be worked out by triangulation – a simple application of trigonometry.
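For the mathematically inclined, here is a minimal Python sketch of that triangulation on a flat plane; the station positions and bearings are made-up example values.

```python
import math

def triangulate(x1, y1, brg1_deg, x2, y2, brg2_deg):
    """Intersect two bearing lines from known receiver positions.

    Flat-earth approximation; bearings are degrees clockwise from north.
    """
    b1, b2 = math.radians(brg1_deg), math.radians(brg2_deg)
    d1 = (math.sin(b1), math.cos(b1))  # bearing 0 = +y (north), 90 = +x (east)
    d2 = (math.sin(b2), math.cos(b2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        raise ValueError("bearings are parallel: no unique fix")
    t = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return x1 + t * d1[0], y1 + t * d1[1]

# Two listening stations 50 km apart on an east-west baseline
print(triangulate(0, 0, 45, 50, 0, 315))  # -> (25.0, 25.0)
```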
Modern technology can work around this problem by compressing a long message and sending it as a "burst transmission" lasting a second or less. Even if such a signal is detected, it is typically too short for an accurate source direction to be worked out.
Direction-finding is now more commonly used to locate pirate radio broadcasts than to uncover espionage activity.
Live and Let Die
Material produced as a byproduct in nuclear reactors used for peaceful purposes can be turned to far more nefarious ends.
In 2006, the radioactive isotope polonium-210 – produced in nuclear powerplants by bombarding bismuth with neutrons – was implicated in the death of the former KGB employee Alexander Litvinenko.
Because polonium emits alpha radiation, which isn’t very penetrative, it’s only especially dangerous if ingested or inhaled, allowing for easy transport, without harming those carrying it, until it’s ready for use – it can even make it through airport security. Unlike in the case of most chemical poisons, the onset of the symptoms of radiation sickness is delayed, allowing time for the assailants to escape. Polonium was also thought to be a hard substance to detect when ingested, although this ultimately proved not to be the case.
Unusually high levels of polonium have also been detected on the possessions of the former president of the Palestinian National Authority, Yasser Arafat, who died in 2004.
(Golden) Eyes in the sky
The same space-based technology that is used to peer out into the cosmos and investigate the origins of the universe can be used to look down instead of up.
Spy satellites are typically used for high-resolution photography of areas of particular interest – such as the location of runways on hostile airfields that are going to be taken out of action – and to monitor for compliance with treaties on nuclear test bans.
The transfer of technology works both ways. In the US, the National Reconnaissance Office recently gifted two telescopes that would have been used in spy satellites to NASA. Though they currently have no instrumentation, they’re expected to be about as powerful as the Hubble Space Telescope.
For Your Eyes Only
Technology presented in film as the latest must-have for a secret agent might instead have real-life civilian uses – not saving the world, but saving an expensive paint-job on your car.
Die Another Day, the 2002 flop that was the last to feature Pierce Brosnan as Bond, featured an Aston Martin equipped with a kind of cloaking device that rendered it invisible. The premise was that the car was embedded with cameras that took images from one side and projected them on the other.
Although criticised for being too unbelievable even for a Bond film, the technology is on the cusp of possibility, as has been shown by Mercedes Benz. But rather than being used to develop an invisible car – for which, given the chances of someone crashing into it, the cost of insurance would scare the living daylights out of you – it’s been adapted to improve road safety instead.
Toyota have developed a version of their Prius that takes an image of what’s behind the car and projects it onto the rear seats, effectively making the back of the vehicle transparent and providing for much easier parking in tight spaces, and eliminating blind spots when reversing. Drivers will get a better look at what’s behind their car, rather than a view to a kill. | <urn:uuid:93430aa0-1200-4037-b399-97bbbac7d02c> | 2.90625 | 1,023 | Knowledge Article | Science & Tech. | 32.463424 |
Today in History – November 28, 1964 – Mariner 4 was launched and became the first successful mission to Mars. Reaching Mars on a flyby on July 14 and 15, 1965, it was the first spacecraft to return close-up images of the surface (center image above), and it lasted three years in solar orbit. Mariner 4's 21 pictures showed a planet that was barren and riddled with craters, contrary to the fanciful science fiction portrayals of the Red Planet as habitable and full of life. Mariner 4 also carried instruments to study cosmic dust, solar plasma, radiation belts, and magnetic fields. Lessons learned from Mariner 3 (which failed) and Mariner 4 were vital for future unmanned space missions under NASA's Mariner program to explore the inner solar system. Mariner 10 (being assembled in the right-most image above) was launched in November 1973 and during its two-year mission transmitted over 12,000 images of Mercury and Venus until March 1975.
It's interesting to view recent space exploration alongside that of the Portuguese mariner and explorer Ferdinand Magellan, who reached the Pacific Ocean on this date centuries earlier, in 1520. Magellan and the crews of his three ships were the first documented Europeans to travel across the Atlantic Ocean to the Pacific. It does make one wonder whether exploration is a fundamental part of human destiny. | <urn:uuid:ef0d8dea-0936-4571-8a66-9e3dd2a9a5c3> | 4.09375 | 286 | Personal Blog | Science & Tech. | 39.184349 |
With the recent eruption of Eyjafjallajokull added onto the earthquakes in Haiti and Peru, it's making some scientists wonder if we are experiencing "Tectonic Implosion".
A new study done by the newly created International Panel on Tectonic Implosion (IPTI) reports that there could be a correlation between oil pumped out of the earth and the increased amount of seismic activity we are currently seeing around the world.
"Tectonic implosion isn't just about earthquakes, it can also cause volcanoes" said Norman P. Schpielabeep an IPTI scientist. "When you suck the oil out of the ground in one area it makes things shift, and its that shifting that wreaks havoc with the earth's crust" said Norman "because nature hates a void".
As you can see by the graph above, there is an obvious correlation between the increased amount of oil being pumped and seismic activity. In fact, the latest large earthquake that took place in Peru caused so much "tectonic implosion" that it sped up the earth's rotation, making our days 1.26 microseconds shorter.
Another negative effect is the loss of oil's natural lubrication of shallow earthquake faults. When the oil was in the crust, it helped fault lines slide along slowly; now tectonic pressure builds until a fault snaps. This results in less frequent but larger quakes. This is a real danger in places like California, where seismic activity is normal but the recent rise in oil prices has resulted in increased oil extraction, and also in places where they do a lot of offshore drilling, like Peru.
Of course there are a growing number of Tectonic Implosion skeptics referred to as "Teptics" who say this is all just a bunch of bunk and another way to shut down the oil industry. | <urn:uuid:3794162e-96fe-43fa-860a-aac3ea11a79e> | 2.734375 | 368 | Personal Blog | Science & Tech. | 44.427392 |
Biodiversity is the variety of all life on Earth. It includes all the Earth’s habitats and the species that live in them, from the smallest micro-organisms to huge mammals like the blue whale.
Humans depend on biodiversity for food, fuel and other vital services, yet human activity is causing biodiversity to decline. As habitats are destroyed, as alien species are introduced into new areas, and as the planet warms, more and more species are coming under threat. Many of these species are studied by scientists at the Museum and are featured in the categories below.
Find out about some of the species that are at risk of extinction as the climate warms up, habitats are lost and land uses change.
Global warming is causing many habitats to change and seasons to arrive earlier or later than they used to. Find out about some of the species that are struggling to adapt to these changes and others that are taking advantage of them.
Many species are affected by changes in land use, deforestation, urbanisation and modern farming practices, all of which can cause habitat loss. Here are some of the species at risk.
Humans depend on the natural world for food, fuel and other resources so any changes in our environment have an economic cost. Find out about agricultural pests and other species that damage local economies.
Introducing alien, or non-native, species to a new environment can cause conservation problems as they do not have any natural predators there. Find out more about some of the world's major 'pests'. | <urn:uuid:b1232666-82ea-4e11-90ce-99ea5bde258a> | 3.9375 | 309 | Knowledge Article | Science & Tech. | 40.597252 |
Data reported by the weather station: 265440
Latitude: 55.86 | Longitude: 26.61 | Altitude: 122
To calculate annual averages, we analyzed data from 364 days (99.73% of the year).
If an average or annual total is missing data for 10 or more days, it is not displayed.
A total rainfall value of 0 (zero) may indicate that no such measurement was taken and/or that the weather station does not broadcast it.
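The stated coverage is simply the ratio of analyzed days to days in the year; a one-off sketch of the arithmetic (1975 is not a leap year, so it has 365 days):

days_analyzed = 364
days_in_year = 365  # 1975 is not a leap year
print("%.2f%%" % (100.0 * days_analyzed / days_in_year))  # 99.73%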
|Measurement|Value|Days with data|
|Annual average temperature:|7.5°C|364|
|Annual average maximum temperature:|12.0°C|364|
|Annual average minimum temperature:|2.9°C|363|
|Annual average humidity:|-|-|
|Annual total precipitation:|597.92 mm|363|
|Annual average visibility:|6.8 km|364|
|Annual average wind speed:|10.5 km/h|364|
Number of days with extraordinary phenomena.
|Total days with rain:|127|
|Total days with snow:|45|
|Total days with thunderstorm:|16|
|Total days with fog:|34|
|Total days with tornado or funnel cloud:|0|
|Total days with hail:|0|
Days of extreme historical values in 1975
The highest temperature recorded was 31°C on August 7.
The lowest temperature recorded was -18°C on February 16.
The maximum wind speed recorded was 57.6 km/h on January 6. | <urn:uuid:24c19f8d-52b4-40b8-b454-e81ab8250ba0> | 2.703125 | 348 | Structured Data | Science & Tech. | 68.873217 |
This is an artist's rendering of solar wind coming towards the Earth and its magnetosphere.
Click on image for full size
The Sun is flinging 1 million tons of material out into space every second! We call this material solar wind.
If you add all this material up over the course of a day, it's like the mass of Utah's Great Salt Lake. And this happens every day, day after day, year after year!
We can't see this material coming from the Sun, but we know that it causes effects we can observe, such as the aurora.
The solar wind travels very, very fast, and it travels very, very far! The solar wind goes all the way out past Pluto to the heliopause. There are a few spacecraft, like Ulysses and Voyager I & II, which are helping scientists study solar wind from the Sun all the way out to Pluto.
You might also be interested in:
A magnetosphere has many parts, such as the bow shock, magnetosheath, magnetotail, plasmasheet, lobes, plasmasphere, radiation belts and many electric currents. Particles in the magnetosphere cause aurora...more
Unexpected discoveries made by the two Voyager spacecrafts during their visits to the four largest planets in our solar system have changed the field of space science. Voyager 2 was launched on Aug. 2...more
Pluto is a frigid ball of ice and rock that orbits far from the Sun on the frozen fringes of our Solar System. Considered a planet, though a rather odd one, from its discovery in 1930 until 2006, it was...more
Make way for NASA's next satellite! The Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) will spend two years studying Earth's magnetosphere. The launch will be March 25th. IMAGE will take...more
The Earth has a magnetic field with north and south poles. The magnetic field of the Earth is surrounded by the magnetosphere. The magnetosphere keeps most of the particles from the sun, carried in solar...more
People used to think that moons such as the Earth's moon had no atmosphere whatsoever. Now, however, measurements have shown that most of these moons are surrounded by a *very* thin region of molecules...more
Unlike the Earth, which has a protective shield around it called the magnetosphere, the surface of the moon is not protected from the solar wind. This picture shows the magnetosphere surrounding the Earth,...more | <urn:uuid:9a09f6e0-c932-43f9-9b65-8c79d97de165> | 3.515625 | 568 | Content Listing | Science & Tech. | 61.869461 |
Compiled bytecode is cached in .pyc and .pyo files so that executing the same file is faster the second time (recompilation from source to byte code can be avoided). This ``intermediate language'' is said to run on a ``virtual machine'' that calls the subroutines corresponding to each bytecode.
int(3.15) converts the floating point number to the integer 3, but in 3+4.5, each argument is of a different type (one int, one float), and both must be converted to the same type before they can be added or it will raise a TypeError. Coercion between two operands can be performed with the coerce builtin function; thus, 3+4.5 is equivalent to calling operator.add(*coerce(3, 4.5)) and results in operator.add(3.0, 4.5). Without coercion, all arguments of even compatible types would have to be normalized to the same value by the programmer, e.g., float(3)+4.5 rather than just 3+4.5.
The square root of -1 (the imaginary unit), often written i in mathematics or j in engineering. Python has builtin support for complex numbers, which are written with this latter notation; the imaginary part is written with a j suffix, e.g., 3+1j. To get access to complex equivalents of the math module, use cmath. Use of complex numbers is a fairly advanced mathematical feature. If you're not aware of a need for them, it's almost certain you can safely ignore them.
11/4 currently evaluates to 2. If the module in which it is executed had enabled true division by executing:

from __future__ import division

then 11/4 would evaluate to 2.75. By importing the __future__ module and evaluating its variables, you can see when a new feature was first added to the language and when it will become the default:

>>> import __future__
>>> __future__.division
_Feature((2, 2, 0, 'alpha', 2), (3, 0, 0, 'alpha', 0), 8192)
>>> sum(i*i for i in range(10))  # sum of squares 0, 1, 4, ... 81
285
11/4 currently evaluates to 2, in contrast to the 2.75 returned by float division. Also called floor division. When dividing two integers the outcome will always be another integer (having the floor function applied to it). However, if one of the operands is another numeric type (such as a float), the result will be coerced (see coercion) to a common type. For example, an integer divided by a float will result in a float value, possibly with a decimal fraction. Integer division can be forced by using the // operator instead of the / operator. See also __future__.
python with no arguments (possibly by selecting it from your computer's main menu). It is a very powerful way to test out new ideas or inspect modules and packages (remember help(x))
for statement does that automatically for you, creating a temporary unnamed variable to hold the iterator for the duration of the loop. See also iterator, sequence, and generator.
result = ["0x%02x" % x for x in range(256) if x % 2 == 0]generates a list of strings containing hex numbers (0x..) that are even and in the range from 0 to 255. The if clause is optional. If omitted, all elements in
``import this'' at the interactive prompt.
| <urn:uuid:8f0fc7e1-faaa-4477-8199-627d15bf45c6> | 3.046875 | 737 | Documentation | Software Dev. | 60.577372 |
Your worries arise from asymmetry between how you view ordinary mathematics and how you view logic and model theory.
If it is the business of logic and model theory to provide foundations for the rest of mathematics then, of course, logicians and model theorists will not be allowed to use mathematical methods until they have secured them. But how might they accomplish this? The more we think about it, the more it becomes obvious that "securing the foundations of mathematics", whatever that means, is a task for philosophers at best and a form of mysticism at worst.
It is far more fruitful to think of logic and model theory as just another branch of mathematics, namely the one that studies mathematical methods and mathematical activity with mathematical tools. They follow the usual pattern of "mathematizing" their object of interest:
- observe what happens in the real world (look at what mathematicians do)
- simplify and idealize the observed situation until it becomes manageable by mathematical tools (simplify natural language to formal logic, pretend that mathematicians only formulate and prove theorems and do nothing else, pretend that all proofs are always written out in full detail, etc.)
- apply standard mathematical techniques
As we all know well, the 20th century logicians were very successful. They gave us important knowledge about the nature of mathematical activity and its limitations. One of results was the realization that almost all mathematics can be done with first-order logic and set theory. The set-theoretic language was adopted as a universal means of communication among mathematicians.
The success of set theory has led many to believe that it provides an unshakeable foundation for mathematics. It does not, at least not the mystical kind that some would like to have. It provides a unifying language and framework for mathematicians, which in itself is a small miracle. Always remember that practically all classical mathematics was invented before modern logic and set theory. How could it exist without a foundation for so long? Was the mathematics of Euclid, Newton and Fourier really vacuous until set theory came along and "gave it a foundation"?
I hope this explains what model theorists do. They apply standard mathematical methodology to study mathematical theories and their meaning. They have discovered, for example, that however one axiomatizes a given body of mathematics in first-order logic (for example, the natural numbers), the resulting theory will have unintended and surprising interpretations (non-standard models of Peano arithmetic), and I am skimming over a few technical details here. There is absolutely nothing strange about applying model theory to the axioms known as ZFC.
Or to put it another way: if you ask "why are model theorists justified in using sets?" then I ask back "why are number theorists justified in using numbers?" | <urn:uuid:886edcd9-c46e-4b39-bdef-9f44ce8ce939> | 2.84375 | 567 | Q&A Forum | Science & Tech. | 29.247082 |
Genus: Colonies of 4 (or 2, 8, 16) cells attached side by side, arranged linearly or zigzag;
cell body elliptical or spindle or crescent in shape; terminal cells with spiny projections in many species;
cell wall usually smooth, but in some species granulated or dented or ridged
(Illustrations of The Japanese Fresh-water Algae, 1977).
Species: Cell body oblong with both ends rounded, 15-35 μm long, 4.5-5.8 (-9) μm wide; cells arranged in a linear series, in contact along nearly their entire sides; outer cells with a long curved spine at each end, sometimes inner cells with a long spine at one pole; cell wall smooth or covered with linear microgranules (Photomicrographs of the Freshwater Algae, vol. 17, 1996). | <urn:uuid:32985398-4ac9-4ec1-8e51-498de105750e> | 3.0625 | 185 | Knowledge Article | Science & Tech. | 51.701787 |
According to a new study reported on by National Geographic, all of the flipping, flapping, undulating, kicking, tail whipping, swishing and swooshing that sea creatures use to propel themselves in the ocean may account for a large portion of "ocean mixing", and this in turn may make climate change modeling an even more complicated task. Ocean mixing is the mixing of sea water layers (including their temperatures, salinity, etc.). Previously, it was thought that wind, weather, seismic activity and tides were the main forces behind ocean mixing. But according to this study, published recently in Nature, all of the creatures' combined forces may in fact account for up to one third of ocean mixing across the board!
This whole idea throws a huge wrench into the science of climate change measurement, as it is a factor that simply has not been taken into account by the modelers and may be sliiiiiightly difficult to measure accurately.
I’m having some trouble getting an accurate reading…
Kakani Katija, of the California Institute of Technology, went to Jellyfish Lake in Palau, an area that is relatively devoid of other ocean-mixing variables like wind and tide. There she squirted colored dye around jellyfish to witness their effect on the water around them. She found that the dyed water stayed with the jellyfish as long as they moved, suggesting that their influence on the surrounding water was pretty significant.
Take into account giant schools of millions of fish or the force behind a giant squid’s thruster mechanisms, coupled with the sheer number of organisms in the ocean and it’s not a huge leap to imagine that all that movement probably has an effect on the mixing of the ocean.
Still, some scientists are not convinced: "You have to be stirring the fluid with a big-enough spoon to actually mix together waters of really different temperatures," William Dewar, of Florida State University, told National Geographic. But what if you have billions of small spoons? I ask you, Mr. Dewar. What if you have billions of them!?
What do our readers think? Animal ocean mixing…Ach ja? Or nish, nish? | <urn:uuid:871ae520-e400-497f-aad6-65013ef071a3> | 3.609375 | 456 | Personal Blog | Science & Tech. | 46.395179 |
Image 1 in this series (right) was taken from Davis in Feb 1998. The noctilucent cloud is wavy and blueish-white. The picture was taken when the sun was approximately 10 degrees below the horizon, but still shining on the noctilucent cloud.
The summer polar mesopause is the coldest region of the Earth's atmosphere, reaching temperatures as low as -140°C. It is sufficiently cold for noctilucent ('night shining') clouds to form in summer, at altitudes around 83 km.
Noctilucent clouds can only be seen when the sun is shining on them (at ~83 km) and not on the lower atmosphere, i.e. when the sun is between 6 and 16 degrees below the horizon.
They are a summer polar phenomenon, but because of the restrictive viewing conditions they are most commonly observed at latitudes between 55 and 65 degrees.
Noctilucent clouds were first reported in 1885 when they were independently observed in Germany and Russia. This was two years after the volcanic explosion of Krakatoa in the Straits of Java.
One hypothesis is that the initial observation of noctilucent clouds was related to an increase in the number of observers of the twilight skies, attracted by the spectacular displays resulting from the globally distributed volcanic debris of Krakatoa. Alternatively, water vapour injected into the upper atmosphere by the volcano ultimately reached the cold, dry upper mesosphere.
Subsequent observations have proved that noctilucent clouds are not solely related to volcanic activity, and their volcanic association is now scientifically contentious. It has been alternatively claimed that the appearance of noctilucent clouds is the earliest evidence of anthropogenic climate change.
Noctilucent cloud observations from north-west Europe over the last 30 years show an increasing trend in the number of nights on which the clouds are observed each summer season, superimposed on a decadal variability that appears to be solar-cycle related.
Competing anthropogenic explanations for this increasing occurrence of noctilucent clouds focus on either excessive greenhouse cooling of the middle atmosphere or increased water vapour linked to increased methane release associated principally with intensive farming activities.
They have been observed thousands of times in the northern hemisphere, but less than 100 observations have been reported from the southern hemisphere. It has not been resolved if this is due to inter-hemispheric differences (temperature &/or water vapour) in the atmosphere at these altitudes, or the lack of observers and poorer observing conditions in southern latitudes. This is a subject of Australian Antarctic Division study. | <urn:uuid:d7b1df99-2df4-4c52-94e3-3cd2f31d7ac5> | 3.90625 | 544 | Knowledge Article | Science & Tech. | 27.531909 |
To help study this complex issue, astronomers took a deep look at Cygnus-X, the largest known star forming region in the entire Milky Way Galaxy. The above recently-released image was taken in 2009 by the orbiting Spitzer Space Telescope and digitally translated into colors humans can see, with the hottest regions colored the most blue. Visible are large bubbles of hot gas inflated by the winds of massive stars soon after they form. Current models posit that these expanding bubbles sweep up gas and sometimes even collide, frequently creating regions dense enough to gravitationally collapse into yet more stars.

The star factory Cygnus-X spans over 600 light years, contains over a million times the mass of our Sun, and shines prominently on wide angle infrared panoramas of the night sky. Cygnus-X lies 4,500 light years away towards the constellation of the Swan (Cygnus). In a few million years, calm will likely be restored and a large open cluster of stars will remain --
which itself will disperse over the next 100 million years. | <urn:uuid:9291e2ba-7e37-47f6-bfe5-b2b3bdbc9a87> | 3.6875 | 228 | Knowledge Article | Science & Tech. | 41.714767 |
How many inputs are needed to excite/saturate a neuron
tal at copley.bu.edu
Thu Feb 25 22:47:25 EST 1993
On the one hand, anatomical evidence suggests that a neuron has, on
average, on the order of THOUSANDS of neurons impinging on it. On the
other hand, physiological evidence shows that only a few epsps are
needed to generate an action potential - which would imply that ONE
cell is sufficient for exciting the postsynaptic neuron.
Can anyone explain this disparity between anatomy and physiology?
1. most of the thousands of synapses on a neuron are inactive
2. a neuron that has thousands of inputs has an extremely large and
If we knew what the range of a neuron is, on average, then we'd be able to distinguish between the two hypotheses above.
More information about the Neur-sci mailing list | <urn:uuid:3eb9ac11-8742-4719-a507-700568d4bd45> | 2.8125 | 189 | Comment Section | Science & Tech. | 48.215933 |
If we consider the trigonometric form of fourier series, the
fourier series is the summation of a series of of sin and cos
functions that give rise to the given expression.
e.g. y(t) = Σ (a_n cos nt + b_n sin nt),
with the summation running from n=1 to n=infinity;
a_n means a with subscript n,
b_n means b with subscript n.
So, the principle of superposition refers to the superposition (addition) of many sin and cos functions to give rise to the actual expression.
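To make the "summation of sines and cosines" concrete, here is a minimal Python sketch of partial sums for the classic square-wave series (the coefficients 4/(nπ) for odd n are a standard textbook example chosen for illustration, not taken from the question above):

import math

def square_wave_partial_sum(t, n_terms):
    # Partial Fourier sum for a unit square wave:
    # y(t) ~ sum over odd n of (4 / (n*pi)) * sin(n*t)
    total = 0.0
    for k in range(n_terms):
        n = 2 * k + 1  # odd harmonics only
        total += (4 / (n * math.pi)) * math.sin(n * t)
    return total

# More terms give a better approximation to the target value y = 1:
for terms in (1, 5, 50):
    print(terms, round(square_wave_partial_sum(math.pi / 2, terms), 4))
# roughly: 1 -> 1.2732, 5 -> 1.0631, 50 -> 0.9936

Adding more harmonics drives the partial sums toward the target waveform, which is exactly the superposition idea described here.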
Besides using trigo form, there's also the complex exponential form
and the amplitude-phase form. But essentially, the principle of superposition refers to the approximation of the given equation using the summation of a series of other expressions. | <urn:uuid:aafd9a4f-3231-4301-a00a-059ccf641a9c> | 3.375 | 172 | Q&A Forum | Science & Tech. | 50.830632 |
Complete spatial randomness (CSR) describes a point process whereby point events occur within a given study area in a completely random fashion. Such a process is often modeled using only one parameter, i.e. the density of points, within the defined area. This is also called a spatial Poisson process.
Data in the form of a set of points, irregularly distributed within a region of space, arise in many different contexts; examples include locations of trees in a forest, of nests of birds, of nuclei in tissue, of ill people in a population at risk. We call any such data-set a spatial point pattern and refer to the locations as events, to distinguish these from arbitrary points of the region in question.
The hypothesis of complete spatial randomness for a spatial point pattern asserts that: the number of events in any region follows a Poisson distribution with given mean count per uniform subdivision. The intensity of events does not vary over the plane. This implies that there are no interactions amongst the events. For example, the independence assumption would be violated if the existence of one event either encouraged or inhibited the occurrence of other events in the neighborhood. The study of CSR is essential for the comparison of measured point data from experimental sources. As a statistical testing method, the test for CSR has many applications in the social sciences and in astronomical examinations.
The probability of finding exactly k points within an area A with event density ρ is therefore given by the Poisson distribution:

P(k) = ((ρA)^k / k!) e^(-ρA)

The first moment, the average number of points in the area, is simply ρA. This value is intuitive, as ρA is the Poisson rate parameter.

The probability density of locating the nearest neighbor of any given point at some radial distance r is, in two dimensions:

P(r) = 2πρr e^(-πρr^2)

The expected value of r can be derived via the use of the gamma function using statistical moments. The first moment is the mean distance between randomly distributed particles in the given number of dimensions.
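The definition translates directly into a simulation recipe; the following is a minimal sketch (the density of 50 points per unit square is chosen arbitrarily) of sampling one realization of a homogeneous spatial Poisson process and checking the mean count:

import math
import random

def sample_csr(density, width=1.0, height=1.0):
    # One CSR realization: the point count is Poisson(density * area),
    # and given the count, points are independent and uniform (no interaction).
    area = width * height
    # Poisson sampling via Knuth's multiplication method.
    threshold = math.exp(-density * area)
    count, product = 0, random.random()
    while product > threshold:
        count += 1
        product *= random.random()
    return [(random.uniform(0, width), random.uniform(0, height))
            for _ in range(count)]

counts = [len(sample_csr(50)) for _ in range(2000)]
print(sum(counts) / len(counts))  # should land near the rate parameter, 50

A test for CSR then compares statistics of measured point data (for example, nearest-neighbor distances) against many such simulated realizations.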
| <urn:uuid:1198896b-2557-483b-af87-18f325d875dd> | 3.46875 | 456 | Knowledge Article | Science & Tech. | 38.27493 |
Cellular Zip Codes: Where's the Postmaster?
In 1970, Nobel laureate Jacques Monod called DNA the "secret of life" and said that the discovery of its structure and function -- especially "the understanding of the random physical basis of mutation" -- means that "the mechanism of Darwinism is at last securely founded" and that humans are "a mere accident."[ 1]
According to neo-Darwinism, all living things are descended from a common ancestor, modified by natural selection acting on random variations that are generated by DNA mutations. But only if an embryo's development were programmed by its DNA could mutations in DNA provide the raw materials for large-scale evolution. So neo-Darwinism assumes that embryo development is controlled by a genetic program.
But there is a serious problem with this assumption.
The many different kinds of cells in an animal or plant develop from a single fertilized egg cell. Humans, for example, consist of cells that form bone, skin, muscle, digestive organs, nerves and many other tissues. Such cells are so different from each other in form and function that an untrained observer might conclude that they represent different species.
Yet all of these cells contain the same DNA, a fact long known to embryologists as "genomic equivalence." As the fertilized egg divides, it bequeaths a complete set of DNA (its "genome") to all of its descendants -- with a few minor exceptions, such as red blood cells, which have no DNA at all. But if bone, skin, muscle, digestive and nerve cells all have the same DNA, why are they so different? Why don't nerve cells secrete juices that digest the brain? Part of the answer is that although brain cells have the genes for digestive juices, those genes are turned off in nerves. As an embryo develops, its cells go through a phase called "differentiation" that turns some genes on and leaves others turned off.
But this does not solve the problem, since it begs the question of why two cells with the same DNA would differentiate in two distinct ways.
Another part of the answer is that cells somehow know where they are in the body and differentiate appropriately. In July 2006, a scientific article reported that certain cells have "zip codes" in their DNA that correspond to their locations. According to the article:
A major question in developmental biology is, How do cells know where they are in the body? For example, skin cells on the scalp know to produce hair, and the skin cells on the palms of the hand know not to make hair... In this study, the authors present a model that explains how cells know where they are in the body. By comparing cells from 43 unique positions that finely map the entire human body, the authors discovered that cells utilize a ZIP-code system to identify the cell's position in the human body. The ZIP code for Stanford is 94305, and each digit hones in on the location of a place in the United States; similarly, cells know their location by using a code of genes. For example, a cell on the hand expresses a set of genes that locate the cell on the top half of the body (anterior) and another set of genes that locates the cell as being far away from the body or distal and a third set of genes that identifies the cell on the outside of the body (not internal). Thus, each set of genes narrows in on the cell's location, just like a ZIP code.

Yet the existence of "cellular zip codes" still doesn't solve the problem either (and the authors of the article don't claim that it does). If the human body were the United States and cells were postal envelopes, each would start out bearing every zip code in the country on its face. Only after the postmaster had stuck each envelope into one of many slots on the wall to direct it to its final destination would a particular zip code be highlighted. Obviously, the postmaster and the array of slots play a major role in determining where each letter goes.
If the DNA corresponds to zip codes that are originally the same on every envelope, where in the embryo are the postmaster and the slots? What is it that highlights one zip code but not others? Where is the all-important developmental information that directs cells to different parts of the body and tells them where they are and how to differentiate?
By focusing attention on DNA as the supposed source of raw materials for evolution, neo-Darwinism has systematically downplayed the nature and location of developmental information elsewhere in the embryo. Obviously, there is more to embryo development than is dreamt of in neo-Darwinian philosophy.
Quoted in Horace Freeland Judson, The Eighth Day of Creation: The Makers of the Revolution in Biology (New York: Simon and Schuster, 1979), pp. 216-217.
Rinn JL, Bondre C, Gladstone HB, Brown PO, Chang HY (2006) Anatomic Demarcation by Positional Variation in Fibroblast Gene Expression Programs. PLoS Genet 2(7): e119 DOI: 10.1371/journal.pgen.0020119. | <urn:uuid:ec7109c2-8dad-4b42-b1e0-1574d593bc03> | 3.359375 | 1,059 | Nonfiction Writing | Science & Tech. | 49.563727 |
1. <cell biology> Extracellular material serving a structural role.
2. <plant biology> In plants the primary wall is pectin rich, the secondary wall mostly composed of cellulose.
3. <microbiology> In bacteria, cell wall structure is complex: the walls of gram-positive and gram-negative bacteria are distinctly different. Removal of the wall leaves a protoplast or spheroplast.
(07 Apr 1998)
| <urn:uuid:205f5200-2d71-4c05-8372-3c99d13807b5> | 2.828125 | 154 | Structured Data | Science & Tech. | 29.882069 |
Search our database of handpicked sites
Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest.
We found 3 results on physics.org and 15 results in our database of sites (15 are Websites, 0 are Videos, and 0 are Experiments).
Search results on physics.org
Search results from our links database
An article looking at how astronomers are detecting asteroids that come near Earth and what we can do to them if they look like they are going to hit us.
A description of how asteroids could be mined to obtain building supplies for space colonisation. From HowStuffWorks.com.
A range of factors go into deciding what risk an asteroid or comet poses to us.
A new way to deflect potential dangerous asteroids uses paint to harness the force of light reflecting from the surface.
Four ways to deflect an asteroid if it was heading towards us.
Site devoted to the near earth asteroid rendezvous (NEAR) of 2000. Includes many activities and images.
An interactive guide to how asteroid mining might work
What would happen if an asteroid hit the Earth?
A vivid multimedia adventure unfolding the splendor of the Sun, planets, moons, comets, asteroids, and more.
NASA information on Toutatis, an asteroid which passed very close to our planet in 2000.
Showing 1 - 10 of 15 | <urn:uuid:e5c4dde5-43ba-4706-8447-e7e84e396a9a> | 2.796875 | 303 | Content Listing | Science & Tech. | 59.051691 |
February 1, 2008 Marine biologists, worried that regular harvesting of wild seahorses may threaten the creature with extinction, have begun breeding them in home aquariums. Caring for seahorses requires a three-step water filtration process, three feedings a day, and careful temperature and water chemistry monitoring.
They're mesmerizing to watch, but seahorses may go the way of dinosaurs. One researcher concerned about their depletion is studying ways to help them survive.
They are unique and mysterious. But did you know these creatures mate for life? They must eat constantly to stay alive … did you know wild seahorses are disappearing?
Aspiring marine biologist Katherine Bernabeo is concerned about their survival. They're being traded on the black market and made into Asian medicine, they're kept in aquariums, pollution is killing them off, and their coastal habitat is disappearing.
"Twenty million seahorses are being traded globally each year and they're depleting the natural stock and this is a huge strain and I just want to make sure we are not having a huge gap in the ecosystem," Bernabeo told Ivanhoe.
That's why Bernabeo's goal is to breed a sustainable supply of the seahorse native to Long Island Sound as an alternative to depleting seahorses in the wild. With the high school students she mentors, Bernabeo carefully monitors the seahorses.
"I have to make sure they are always full, they're happy and let nature take its course," Bernabeo said.
Seahorses are unique because the male gives birth.
Bernabeo wants to breed more seahorses, hoping to save this animal from extinction.
What is extinction? Animals are all classified by biologists into separate species (as well as into bigger groups of classifications, such as a genus or family). When no more individuals of a species can be found anywhere on earth, the species is considered extinct.
Many animals have been placed on the endangered species list because their populations are close to becoming extinct. If one animal relies on another for its food or protection, it can become part of the extinction chain.
Possibly the most famous extinction happened at the end of the Cretaceous period, about 65 million years ago, when most of the species on Earth were wiped out by a large asteroid's impact with the Earth. That was when all the non-bird-like dinosaurs went extinct. | <urn:uuid:28717e0a-8d38-4b6c-b4d8-c24662622aa9> | 3.53125 | 495 | Truncated | Science & Tech. | 41.001065 |
August 27, 2012
Like an adventurer of old, NASA's Curiosity rover is using its spyglass to scope out some as-yet unexplored environs.
The image above comes from Curiosity's 100-millimeter telephoto camera, which, according to NASA, has about three times the resolving power of any previous landscape camera deployed on the Red Planet.
The literally otherworldly landscape has been colorized both for visual appeal and to highlight geologic differences in the soil types. "It's probably a little bit more pastel and pinker than it would be to your eye," geologist Mike Malin said during an August 27 press briefing. His company, Malin Space Science Systems, built four of the cameras for the rover mission, including the Mars Descent Imager that documented Curiosity's landing in high-res color.
For scale, Malin noted that the black dot in the center of the white square (blown up in detail, below right) is a boulder with roughly the same dimensions as the car-size Curiosity itself. The boulder is about 10 kilometers away at the base of Mount Sharp, the eventual destination for the rover and the planned focus of its exploration.
| <urn:uuid:ebf89e63-e92c-4206-952b-b8419bbdd681> | 3.1875 | 332 | Truncated | Science & Tech. | 34.477169 |
Chapter 8 - Measuring Sustainability
Part 2 - Environmental, Economic, and Social Carrying Capacity
On Earth, without change, we face a future of certainty. That certainty will be that eventually, in the human time scale, we will deplete or irreversibly damage, many of the resources we have come to use for our very survival. Our knowledge of this eventuality should inspire the human spirit. As this challenge comes before us, it should inspire us to change, as only change can bring us back into the balance we so desperately need. We need to measure and modify, to build a more sustainable future. The risk of inaction is great.
Shortly before his death in 1965, Adlai Stevenson, the US ambassador to the United Nations, said in his last speech: "We travel together, passengers on a little spaceship, dependent on its vulnerable reserves of air and soil; all committed for our safety to its security and peace; preserved from annihilation only by the care, the work, and, I will say, the love we give our fragile craft. We cannot maintain it half-fortunate, half-miserable, half-confident, half-despairing, half-slave to the ancient enemies of man, half-free in a liberation of resources undreamed of until this day. No craft, no crew, can travel safely with such vast contradictions. On their resolution depends the survival of us all."
- carrying capacity
- ecological footprint
- carbon footprint
- water footprint
- breached threshold
- Sustaining Human Carrying Capacity: A tool for regional sustainability assessment. Ecological Economics, Volume 69, Issue 3, January 2010, Pages 459-468. M.L.M. Graymore, Neil G. Sipe, Roy E. Rickson
- Carrying Capacity Reconsidered: from Malthus' population theory to cultural carrying capacity. Ecological Economics, Volume 31, Issue 3, December 1999, Pages 395-408. Irmi Seidl, Clem A. Tisdell
(Photo credit: NASA, 1968) | <urn:uuid:3a9a31d9-0958-42a6-a8f9-d9f4737bc48c> | 3 | 426 | Academic Writing | Science & Tech. | 34.745578 |
2012, Week 18
Audio: Part 1 (MP3, 29Mb), Part 2 (MP3, 28Mb)
Dark Mysteries: What is the Universe Made of?
Presenter: Dr. George Djorgovski
MICA Director, Professor of Astronomy and Co-Director of the Center for Advanced Computing Research at Caltech
One of the key goals of the science of cosmology is to determine the matter and energy contents of the universe; they, in turn, determine its ultimate fate – whether it will expand forever, or recollapse into a reverse of the big bang, a “big crunch.” Great progress has been made in this field over the past decade. We now know that about 70% of the total matter/energy content of the universe is a mysterious “dark energy,” which drives an accelerated expansion, and whose physical nature remains unknown. Another 25% or so is dark matter, whose nature is also unknown, but whose gravitational effects can be measured very well. Only about 5% of the total content is the matter we know, composed of atoms and known particles. We describe how cosmologists know these things, and what they are doing to help resolve these outstanding mysteries of science.
First presented in Second Life in September, 2008.
Questions and comments can be posted on the discussion thread on Starship Asterisk; Dr. Djorgovski will answer a selection of questions posted by May 13, 2012.
PDF of slides (21Mb)
Files also available here: http://www.mica-vw.org/wiki/index.php/Dark_Mysteries | <urn:uuid:6df8630d-f93c-4411-befe-897c9c30725d> | 2.859375 | 340 | Audio Transcript | Science & Tech. | 53.0175 |
A Drama of Star Formation and Evolution
The Chandra image of the Tarantula Nebula gives scientists a close-up view of the drama of star formation and evolution. The Tarantula, also known as 30 Doradus, is in one of the most active star-forming regions in our Local Group of galaxies. Massive stars are producing intense radiation and searing winds of multimillion-degree gas that carve out gigantic super-bubbles in the surrounding gas. Other massive stars have raced through their evolution and exploded catastrophically as supernovas, leaving behind pulsars and expanding remnants that trigger the collapse of giant clouds of dust and gas to form new generations of stars.
30 Doradus is located about 160,000 light years from Earth in the Large Magellanic Cloud, a satellite galaxy of our Milky Way Galaxy. It allows astronomers to study the details of starbursts - episodes of extremely prolific star formation that play an important role in the evolution of galaxies.
At least 11 extremely massive stars with ages of about 2 million years are detected in the bright star cluster in the center of the primary image (left panel). This crowded region contains many more stars whose X-ray emission is unresolved. The brightest source in this region is known as Melnick 34, a 130-solar-mass star located slightly to the lower left of center. On the lower right of this panel is the supernova remnant N157B, with its central pulsar.
Two off-axis ACIS-S chips (right panel) were used to expand the field of view. They show SNR N157C, possibly a large shell-like supernova remnant or a wind-blown bubble created by OB stars. Supernova 1987A is also visible just above and to the right of the Honeycomb Nebula at the bottom center.
In the image, lower energy X-rays appear red, medium energy green and high-energy are blue. | <urn:uuid:5729ca37-4765-41d7-a12f-daa6d93e2727> | 3.765625 | 388 | Knowledge Article | Science & Tech. | 44.028462 |
Case Study: Buffering Blood
Cell metabolism is based on the same general principle as the combustion of any fuel, whether it be in the automobile, power plant, or a home furnace. The general combustion reaction is:
CH2O (fuel) + O2 ===> CO2 + HOH
The same reaction occurs in the cells. The "fuel" comes from food in the form of carbohydrates, fats, and proteins. The important principle to remember is that oxygen is needed by the cell and that carbon dioxide is produced as a waste product of the cell. Carbon dioxide must be expelled from the cells and the body. The lungs serve to exchange the two gases in the blood. Oxygen enters the blood from the lungs and carbon dioxide is expelled out of the blood into the lungs. The blood serves to transport both gases. Oxygen is carried to the cells. Carbon dioxide is carried away from the cells.
Partial pressures are used to designate the concentrations of gases. Dalton's Law of Partial Pressures states that the total pressure of all gases is equal to the sum of the partial pressures of each gas. For example, the total atmospheric pressure of air is 760 mm Hg. In equation form:
P(total air) = P(O2) + P(N2) + P(CO2) + P(H2O)
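As a quick numeric illustration of Dalton's law, a sketch (the component pressures below are invented for illustration, chosen to total 760 mm Hg):

p_O2, p_N2, p_CO2, p_H2O = 159.0, 593.5, 0.3, 7.2  # mm Hg, illustrative values only
print(round(p_O2 + p_N2 + p_CO2 + p_H2O, 1))  # 760.0 mm Hg total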
The partial pressures for oxygen and carbon dioxide in various locations are given in Figure 1. The movement or exchange of gases between the lungs, blood, and tissue cells is controlled by a diffusion process. The gas diffusion principle is: A gas diffuses from an area of higher partial pressure to an area of lower partial pressure.
In the lungs, oxygen diffuses from alveolar air into the blood because the venous blood has a lower partial pressure. The oxygen dissolves in the blood. Only a small amount is carried as a physical solution (0.31 ml per 100 ml). The remainder of the oxygen is carried in chemical combination with the hemoglobin in red blood cells (erythrocytes). Hemoglobin (molecular weight of 68,000) is made from 4 hemes (porphyrin rings containing iron) and globin (4 protein chains). Oxygen is bound to the iron for the transport process. Hemoglobin (HHgb) behaves as a weak acid (K = 1.4 x 10^-8; pKa = 7.85). Oxyhemoglobin (HHgbO2) also behaves as a weak acid (K = 2.5 x 10^-7; pKa = 6.6).
Because both forms of hemoglobin are weak acids, and a relationship of the numerical values of the equilibrium constants, the net reaction for the interaction of oxygen with hemoglobin results in the following equilibrium:
HHgb + O2 <===> HgbO2 + H+
If O2 is increased in the blood at the lungs, the equilibrium shifts to the right and H+ ions increase. Oxyhemoglobin can be caused to release oxygen by the addition of H+ ions at the cells. The difference in pH between arterial blood (pH = 7.44) and venous blood (pH = 7.35) is sufficient to cause release of oxygen from hemoglobin at the tissue cells.
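The quoted pKa values follow from the equilibrium constants via pKa = -log10(K); a quick sketch checking the arithmetic, using only the constants given above:

import math

K_HHgb = 1.4e-8     # hemoglobin
K_HHgbO2 = 2.5e-7   # oxyhemoglobin

for name, K in (("HHgb", K_HHgb), ("HHgbO2", K_HHgbO2)):
    print(name, "pKa =", round(-math.log10(K), 2))
# HHgb pKa = 7.85, HHgbO2 pKa = 6.6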
| <urn:uuid:4573d66b-a465-4091-a8c1-1c288586a48d> | 3.828125 | 676 | Academic Writing | Science & Tech. | 62.809438 |
by Eli West
The word “nuclear” means a lot to us today. When we hear it we think of many things: bombs, reactors, uranium, “nuculur,” and radioactive; all of these are connotations of the word nuclear. Let’s explain what each of them means.
We'll begin with bombs. The common link between nuclear and bombs is, obviously, nuclear bombs, otherwise known as atom bombs. In essence, you have a collection of uranium atoms, specifically uranium-235, which is very fissile. In a bomb, a lone neutron is shot at a uranium-235 atom to create uranium-236. Because uranium-236 is highly unstable, the isotope breaks apart very violently, shooting neutrons everywhere, and these reactionary neutrons in turn smash into other uranium-235 atoms, and those atoms break apart and smash OTHER atoms. That chain reaction is what makes atomic bombs so explosive.
Another thing we link to nuclear is uranium. Uranium is a very heavy atom. With a standard atomic weight of 238.03 g/mole, it's on the heavy side. However, you're probably used to hearing terms like uranium-238 or uranium-235. What do the numbers mean? Why are they different? What does it change? The number with uranium indicates the isotope, which simply means that there are more or fewer neutrons with the same number of protons. The 238 gives you the atomic weight of the atom. To find out how many neutrons there are, you take the atomic number (which is 92, the number of protons in all uranium atoms, regardless of isotope), then subtract the atomic number from the atomic weight. In this case it is 238-92=146. So we know that there are 146 neutrons in each atom of uranium-238. Compared to hydrogen, that's heavy.
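That neutron-counting rule is a single subtraction; a tiny sketch for the two isotopes mentioned above:

def neutron_count(mass_number, atomic_number):
    # neutrons = mass number (the isotope label) - protons
    return mass_number - atomic_number

URANIUM_Z = 92
print(neutron_count(238, URANIUM_Z))  # 146 neutrons in uranium-238
print(neutron_count(235, URANIUM_Z))  # 143 neutrons in uranium-235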
Nuculur. I’m not even going to go into that, except to say that the correct pronunciation, by the way, is “new-clear.”
Radioactivity: it’s a word with a history. It’s a word that’s gotten a pretty bad rep over the years, through romanticizing, myths, and fiction. Everyone has heard the stories of people getting hit with gamma radiation and gaining super powers! Or of radiation being like the Black Death, destroying any who get near. The truth is, EVERYTHING is radioactive. Now don’t get scared! That term isn’t quite as bad as believed! Let’s get a few things straight, what exactly does, radiation mean? Well, everything radiates. EVERYTHING. Radiation is just the constant output of energy. We radiate heat, and light, just like the sun; food radiates heat! Some things just radiate such high-energy waves that they become dangerous. THAT is radioactivity.
“Radioactivity refers to the particles which are emitted from nuclei as a result of nuclear instability.”
Now, where I am going with all this is thorium. What is thorium? It’s an incredibly heavy atom, much like uranium. It has large isotopes, much like uranium. Both of them have a huge half-life, and are highly radioactive; the differences between them are: (1) uranium, when used in nuclear reactors, produces a new isotope of uranium, which can be weaponized in the form of depleted uranium. It can be formed into what are essentially large bullets crafted out of the depleted uranium isotope. The bullet is incredibly dense, and when shot at high enough velocities, can pierce tank armor. It doesn’t explode in a nuclear bomb, but it does spray radioactive uranium all over the inside of the target tank. Thorium, on the other hand, when used in a nuclear reaction will not produce a weaponizable material. Thorium and uranium are both naturally occurring materials.
Thorium is abundant compared to uranium. So as a fuel source it would be cheaper, MUCH cheaper. Thorium is not fissile itself, which means it cannot sustain a low energy chain nuclear reaction, which means that it is not actually usable in nuclear reactors by itself. However, it is fertile, which means slow neutrons can be added to it to change it into U-233 (or uranium-233), which is fissile. That’s why we can’t just start mining thorium and tossing it in nuclear reactors all over the world. First we need to create reactors that can change it into U-233, which would then be fissile.
The word thorium has a very simple background. The man who discovered thorium simply decided that Thor was a pretty cool guy, and that maybe he should call this thing thorium!
As of right now there are a few companies around the world that are developing thorium reactors. Their projections for finishing the project are around 2015. That’s five years. Not to mention the actual two or three years it would take to build each reactor. So, the technology is coming, but is a ways off. Some believe that once they get the reactors running, that we could wean the world off oil in as little as five years, or by 2020. However, that’s probably a bit optimistic, and there still is a lot of work before we reach that point. | <urn:uuid:c13029e7-afa0-44db-a1a9-1e7cb436d632> | 3.546875 | 1,141 | Personal Blog | Science & Tech. | 62.075438 |
3.4 Moment Generating Function
Recall: the moments of a random variable are useful to know, but not so
easy to find.
In cases where we know a formula for the p.d.f., we can often find all moments at once in a convenient way!
Def: Let X be a discrete random variable. Then the moment generating function of X is the function of the variable t defined by

m_X(t) = E(e^(tX))
- the moment generating function is the expected value of the function e^(tX)
- the variable t is just a parameter (auxiliary variable), whose use will become apparent shortly
- the moments of X are hidden inside the function m_X(t)!
AP example; recall that the p.d.f. for the score Z of a randomly selected student was given by a table. What's the moment generating function of the random variable Z?

m_Z(t) = E(e^(tZ)) = e^t (.15) + e^(2t) (.20) + e^(3t) (.40) + e^(4t) (.15) + e^(5t) (.10)

Notice that this is a function of the variable t.
Flip a coin until you get a tail; let N = the number of flips. Then we found previously that the p.d.f. was f(n) = (1/2)^n. Find the moment generating function.

m_N(t) = E(e^(tN)) = e^(1t) (1/2)^1 + e^(2t) (1/2)^2 + e^(3t) (1/2)^3 + ...
= (e^t * 1/2)^1 + (e^t * 1/2)^2 + (e^t * 1/2)^3 + ...

But this is another geometric sum, with first term a = e^t/2 and ratio r = e^t/2, whose value is thus

m_N(t) = a/(1 - r) = (e^t/2)/(1 - e^t/2) = e^t/(2 - e^t)

The above are functions of t; where are the moments of the random variables??
If m_X(t) is the m.g.f. of X, then the moments of X can be recovered from it:

E(X^k) = m_X^(k)(0)

i.e., to find the kth moment, take the kth derivative of the moment generating function and evaluate at t = 0. (Why do we care so much about finding moments? The first two give us the mean and the variance.)

Example above; by the theorem,

E(N) = m_N'(0) = [2e^t/(2 - e^t)^2] at t = 0 = 2 (by the quotient rule)

This is the mean of N. Differentiating once more gives E(N^2) = m_N''(0) = 6. From this we can find the variance of N:

var(N) = E(N^2) - E(N)^2 = 6 - 2^2 = 2.
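These derivative computations can be checked symbolically; a sketch using the sympy library (an assumption of convenience; any computer algebra system would do):

import sympy as sp

t = sp.symbols('t')
m = sp.exp(t) / (2 - sp.exp(t))  # m.g.f. of N from the example

first = sp.diff(m, t).subs(t, 0)      # E(N)
second = sp.diff(m, t, 2).subs(t, 0)  # E(N^2)
print(first, second, second - first**2)  # 2, 6, 2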
Properties of moment generating functions
- gives an easy way to compute mean & variance
- can identify the type of a random variable by looking at its moment generating function, as stated by the following:
Theorem If two random variables have the same moment generating
function, then they have the same p.d.f .
This will be useful to us in the future, when we're trying to determine
what type of random variable we get when we look at specific combinations
of other random variables!
Let X be a random variable, m_X(t) its moment generating function, and let c be a constant. Then the moment generating functions of certain modifications of X are related to the moment generating function of X as follows:

- the moment generating function of the random variable cX is m_cX(t) = m_X(ct)
- the moment generating function of the random variable X + c is m_(X+c)(t) = e^(ct) m_X(t)

These results will be of use to us later. The proofs of these follow from the properties of expectation discussed earlier.
Let X1 and X2 be independent random variables with moment generating functions m_X1(t) and m_X2(t). Then the moment generating function of the random variable X1 + X2 is

m_(X1+X2)(t) = m_X1(t) * m_X2(t)
The Geometric Distribution
Def: A random variable X is a geometric random variable if
it arises as the result of the following type of process:
(In short, a random variable is geometric if it "counts the number of trials
until the first success.")
have an infinite series of trials; on each trial, the result is either a success (s) or a failure (f). (Such a trial is called a Bernoulli experiment.)
the trials are independent, and the probability of success is the same on each trial. (The probability of success in each trial will be denoted p, and the probability of failure will be denoted q; thus q = 1 - p.)
X represents the number of trials until the first success.
Consider the "flip a coin until you get a tail" experiment above; then
N (= # of flips until a tail occurs) is a geometric random variable, with
p = 1/2: N counts the number of trials until the first success.
The sample space of such a process can be written as below; the value of
the random variable X associated with each possible outcome is shown beneath;
and the probability of each outcome is given beneath that. (The probabilities
just come from the multiplication rule for independent events.)
Thus the probability density function of a geometric random variable X with probability of success p is:

f(x) = p q^(x-1) = p (1 - p)^(x-1), x = 1, 2, 3, ...

Its moment generating function can be shown to be

m_X(t) = p e^t / (1 - q e^t)

(using a technique identical to that used in the "flip a coin until a tail appears" example above).
We can thus use the moment generating function to find the moments, and hence the mean and variance and standard deviation:

E(X) = m_X'(0) = 1/p (from the quotient rule, using the fact that p = 1 - q)
E(X^2) = m_X''(0) = (1 + q)/p^2

So the mean and variance are

μ = 1/p
var(X) = E(X^2) - E(X)^2 = (1 + q)/p^2 - (1/p)^2 = q/p^2

(so the standard deviation is σ = √q / p)
Dice game; pick a number from 1 to 6, then keep rolling until you get that value. Let X = total number of rolls needed to achieve this. Then X is a geometric random variable: it counts the number of trials until the first success.
from the above, the expected number of rolls until the desired number appears
is E(X) = 1/p = 1/(1/6) = 6.
(This is pretty much what we would have anticipated!) The variance is
var(X) = q/p^2 = (5/6)/(1/6)^2 = 30, so the standard deviation is σ = 5.48, which indicates that the value of X will usually fall in the range 6 ± 5.48; thus we should not be surprised if the number of rolls needed to get our number is as few as 1 or as many as 12.
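A quick simulation bears these numbers out (a sketch; with p = 1/6 the exact values are a mean of 6 and σ ≈ 5.48, so the estimates below should land nearby):

import random
import statistics

def rolls_until(target, sides=6):
    # Count trials until the first success: one geometric draw.
    count = 1
    while random.randint(1, sides) != target:
        count += 1
    return count

samples = [rolls_until(4) for _ in range(100000)]
print(round(statistics.mean(samples), 2))   # close to 6
print(round(statistics.stdev(samples), 2))  # close to 5.48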
Consider the Pennsylvania daily number lottery, discussed before, in
which the probability of winning on any given day is 1/1000. Now let N
be the number of times you play before winning the first time. Then N is
a geometric random variable, since it's counting the number of trials until
the first success, with p = 1/1000. Thus the expected number
of plays is E(N) = 1/p = 1/(1/1000)
= 1000; thus you should expect to have to play 1000 times before
winning! Since you would then be down $1000 (since it costs $1 to play),
and would only recoup $500 for winning, this isn't such a great situation.
Notice that this agrees with our previous results, in which we determined
that, on average, you should expect to lose $.50 each time you play.
Cumulative probability function
The cumulative probability function F(x) of a geometric
random variable X with probability of success p is
F(x) = 1 - q^x = 1 - (1 - p)^x,   x = 1, 2, 3, ...
(This follows by summing the values of the p.d.f., and using the
formula for the value of a finite geometric sum.)
For the lottery example above, what's the probability that you'll win
in a year (312 days, not counting Sundays) or less?
Want P(N <= 312) = F(312) = 1 - (1 - .001)^312
= 1 - .999^312 = 1 - .732 = .268;
thus there's only about a 1 in 4 chance that you'll win in a year if
you play every day!
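The same arithmetic in a couple of lines of Python, using the numbers above (just a sanity check of the hand calculation):

```python
# P(win within a year of daily plays) = F(312) = 1 - (1 - p)**312
p, days = 1/1000, 312
print(1 - (1 - p)**days)  # about 0.268 -- roughly a 1 in 4 chance
```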
| <urn:uuid:09d8ceb0-a099-478e-a60c-802aec1dfc04> | 3.765625 | 1,862 | Academic Writing | Science & Tech. | 79.064093 |
Silly questions about True and False
drs at remove-to-send-mail-ecpsoftware.com
Mon Dec 20 19:12:27 CET 2004
I just upgraded my Python install, and for the first time have True and
False rather than 1 and 0. I was playing around at the command line to test
how they work. (For instance, "if 9:" and "if True:" both lead to the
conditional being executed, but True == 9 -> False; that this would be the case
was not obvious to me -- "True is True" is True, while "9 is True" is False,
even though 9 evaluates to True.) Anyhow, in doing my tests, I accidentally typed
>>> False = 0
>>> False == 0
True
and I lost the False statement.
To get it back, I found that I could do
>>> False = (1 == 2)
which seems to put False back to False, but this seems weird.
>>> 1 = 0
throws an error (can't assign to literal), so why doesn't False = 0 throw the
same error? Also, after False = 0, why doesn't
>>> 1 == 2
return 0 instead of False?
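For readers coming to this thread later: the behaviour described above is specific to Python 2.x, where True and False were ordinary built-in names rather than keywords. The annotated session below is a reconstruction of what is happening, not output from the original poster's interpreter:

```python
# Python 2.x session (True/False are built-in *names*, not keywords):
#
# >>> False = 0        # legal: rebinds the name False in this namespace,
#                      # shadowing the builtin -- no different from x = 0
# >>> False == 0
# True
# >>> 1 == 2           # comparisons still return the real bool object;
# False                # its repr is "False" regardless of what the *name*
#                      # False is currently bound to
# >>> del False        # deleting the local binding re-exposes the builtin
# >>> False
# False
#
# `1 = 0` fails because 1 is a literal, while False was merely a name.
# Python 3 closes the trap: True and False are keywords, so `False = 0`
# raises a SyntaxError just as `1 = 0` does.
```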
| <urn:uuid:9ac7d6dc-3142-40ab-a150-d15c1f317009> | 2.78125 | 259 | Comment Section | Software Dev. | 75.957095 |
In the aftermath of the devastating magnitude-9.0 earthquake and tsunami that struck the Tohoku region of Japan on March 11, attention quickly turned away from a much smaller, but also highly destructive earthquake that struck the city of Christchurch, New Zealand, just a few weeks earlier, on Feb. 22. Both events are stark reminders of human vulnerability to natural disasters and provide a harsh reality check: Even technologically advanced countries with modern building codes are not immune from earthquake disasters. The Christchurch earthquake carried an additional message: Urban devastation can be triggered even by moderate-sized earthquakes.
As seismologists and engineers sort through the aftermath of Christchurch’s earthquake, they are already revealing some crucial lessons that might help us learn how to better prepare for such urban earthquakes. The magnitude-6.1 Christchurch earthquake was dwarfed by the Japan earthquake, which carried some 22,000 times more energy. It was even overshadowed by the much larger magnitude-7.0 earthquake that struck just 45 kilometers away from Christchurch on Sept. 3, 2010 (the Christchurch quake is widely considered to be an aftershock of the September quake). Nonetheless, the Christchurch event inflicted considerable damage: It killed nearly 200 people and caused more than $12 billion in losses (the September quake didn’t kill anyone and caused only a quarter of the damage). The damage was far greater than would have initially been expected for an event of this size.
Why? The damage was considerable because the earthquake originated only six kilometers from Christchurch’s population center and parts of Christchurch’s urban area were as close as one kilometer from the fault rupture. Its relatively shallow depth (only 5 kilometers beneath Earth’s surface) produced extraordinarily strong shaking at the surface. The earthquake occurred at about 1 p.m. on a busy workday, so many more people were in downtown Christchurch and thus exposed to the hazards of collapsing tall buildings; half the deaths from this earthquake occurred when one building collapsed. Finally, the effects of the seismic vibrations were probably amplified by the thick sedimentary layers on which the city is built.
Early reports from Christchurch found patterns of damage that are familiar to seismic engineers: the destruction of older, unreinforced masonry structures (like the Christchurch Cathedral) and the collapse of a handful of high-rise buildings that were built prior to modern seismic engineering standards. But it also appears that there were ground accelerations recorded in the Christchurch area that exceeded the design specifications of more recent buildings. Furthermore, there are clear indications of earthquake-induced liquefaction, or ground failure, and flooding that exacerbated impacts and hampered disaster response.
Earthquakes the size of Christchurch’s temblor occur globally about 100 times per year, or a couple of times each week; statistical analysis of the earthquake record indicates that for every magnitude-9.0 Tohoku-sized earthquake, there are about 1,000 Christchurch-sized earthquakes of magnitude 6.0. Fortunately, most of the moderate earthquakes don’t wind up in newspaper headlines. They occur in remote, unpopulated areas, are under water, or are deep enough that they don’t produce significant societal impacts. As the Christchurch earthquake demonstrated, however, when one of these events takes place close to a populated area, its effects can be devastating.
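The two comparisons in this article follow from standard seismological rules of thumb: radiated energy growing roughly as 10^(1.5*M), and Gutenberg-Richter frequency scaling with a b-value of about 1. A back-of-envelope check (mine, not the authors'):

```python
# Energy ratio between the Tohoku (M9.0) and Christchurch (M6.1) earthquakes:
# log10(E) = 1.5*M + const, so the ratio depends only on the magnitude gap.
energy_ratio = 10 ** (1.5 * (9.0 - 6.1))
print(f"{energy_ratio:,.0f}")     # ~22,400 -- the article's "some 22,000 times"

# Gutenberg-Richter with b ~ 1: each unit drop in magnitude means ~10x more
# earthquakes, so M6.0 events are ~10^3 = 1,000 times as frequent as M9.0.
frequency_ratio = 10 ** (1.0 * (9.0 - 6.0))
print(f"{frequency_ratio:,.0f}")  # 1,000
```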
Many urban planners in the U.S. are appropriately concerned about the potential occurrence of a Christchurch-type earthquake — a relatively moderate magnitude-6.0 to -6.5 earthquake — directly beneath an American city.
Many American cities are vulnerable to damage in much the same way as Christchurch. A number of U.S. cities — notably Charleston, Memphis, Los Angeles, Portland, Salt Lake City, San Francisco and Seattle — are built dangerously close to known faults that have produced large earthquakes in the past, and their growing urban population centers are encroaching on hazardous areas. Most of our cities situated in seismic zones — particularly those in the central and eastern U.S. — have a large inventory of old, unreinforced masonry structures that are subject to damage or collapse, even from moderate earthquakes. And even more recent buildings constructed in the 1960s and early 1970s were built without seismic-resistant design.
Furthermore, the ground beneath many of our cities is underlain by unconsolidated sediments and thus is subject to seismic wave amplification and liquefaction. Add in the vulnerability of critical facilities — such as dams, nuclear power plants and chemical storage facilities — to the cascading secondary effects of earthquakes, including strong ground shaking, liquefaction, landslides, tsunamis and flooding, and you have the makings of a disaster.
Earth scientists understand that earthquakes are an inevitable consequence of geological processes, but we also know that earthquake disasters are not. Renowned 20th century author Will Durant wrote in “The Story of Civilization”: “Civilization exists by geological consent, subject to change without notice.” If we are to address his prophetic challenge, we must seek innovative ways to help prepare for and mitigate the urban disaster that is likely to happen during our lifetimes.
That disaster is most likely to manifest itself, not in the form of a massive, Tohoku-style earthquake, but as a moderate-sized Christchurch-style event, in which its proximity to an urban center results in damage far out of proportion to its size. While we cannot forecast which city will be the next victim of such an earthquake, we can mitigate potential impacts. For the most part, we already know what needs to be done. Now the challenge is transforming this geoscience knowledge into action. Let’s not be caught unaware — or unprepared.
Michael W. Hamburger and Walter D. MooneyHamburger is a seismologist and professor of geological sciences at Indiana University. Mooney is a research seismologist at the U.S. Geological Survey in Menlo Park, Calif. The views expressed are their own. | <urn:uuid:8a884387-b587-41ea-bee4-644cf9a4e123> | 4.40625 | 1,228 | Nonfiction Writing | Science & Tech. | 33.509974 |
Place four pebbles on the sand in the form of a square. Keep adding
as few pebbles as necessary to double the area. How many extra
pebbles are added each time?
Make a set of numbers that use all the digits from 1 to 9, once and
once only. Add them up. The result is divisible by 9. Add each of
the digits in the new number. What is their sum? Now try some other
possibilities for yourself!
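A small script makes it easy to test many splits at once (an illustrative sketch, not part of the original puzzle; the punchline is that the digits 1 to 9 sum to 45, which is itself divisible by 9):

```python
import random

# Split the digits 1-9 into three numbers at random cut points,
# add them up, and check divisibility by 9.
digits = list("123456789")
random.shuffle(digits)
cuts = sorted(random.sample(range(1, 9), 2))
parts = ["".join(digits[i:j]) for i, j in zip([0] + cuts, cuts + [9])]
total = sum(int(p) for p in parts)
print(parts, total, total % 9)  # the remainder is always 0
```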
For this challenge, you'll need to play Got It! Can you explain the
strategy for winning this game with any target?
The total number they were using must be divisible by $3$.
Ben's counters must initially have been divisible by $3$, Jack's
by $4$ and Emma's by $5$.
It might help to work out the maximum each could have started
with - e.g. Emma could not have started with $25$ counters. Can you
work out why?
How many counters could each of them have started with?
Try some possible numbers. | <urn:uuid:08656bc2-b898-49b0-be74-ce8fd9ec7d71> | 2.6875 | 230 | Tutorial | Science & Tech. | 81.567509 |
Almost anything will become more stiff when you make it cold
enough, no matter how hard or soft it is to start off with. Some things stiffen
quite spectacularly -- rose petals and racquetballs, for example -- and
the effect is striking because we do not expect these normally flexible
things to become brittle and to shatter when struck while they are cold.
The words "hard", "strong", "stiff", and "brittle" all mean rather
different things. Diamond is the "hardest" material known because a
sharp diamond point will make a scratch in a smooth surface of any
other material. This hardness does not mean that diamond is a strong
material for all applications, because it is brittle -- diamond will
crack along well-defined crystal planes when struck appropriately, as
with a jeweler's tools. When forces are applied to a diamond that are
not exactly set up to make it crack, then diamond is very very strong
-- it is used in very high-pressure apparatus called diamond-anvil
cells.
Diamond probably remains very hard but also brittle along those
well-defined axes at lower temperatures too. Steels are strong
materials that become brittle at liquid nitrogen temperatures -- they
crack more easily.
Some solid materials have properties that do not change enormously
with temperature, due to their structure on various distance scales.
For example, paper does not get much more brittle at liquid nitrogen
temperatures than it is at room temperature -- paper is made of lots of
wood fibers loosely mechanically arranged together. Even if the wood
fibers become stiff and brittle, the paper can still flex because the
attachments between the fibers can flex. A chain may behave in the same
way -- the steel will get brittle, but the links will still slip
through each other and the chain itself will flex (although if you lay
it on a table and hit it with a hammer, the link you hit may chip into
lots of pieces -- please don't do this because flying metal chips can
cause serious injury!).
What may be a more interesting question is what materials will not get brittle at low temperatures, and why.
Helium is a very, very interesting substance with very interesting
low-temperature behavior. Even at the absolute zero of temperature,
helium will not settle down into a solid at atmospheric pressure. This
is due to the fact that the bonds between helium atoms are so weak and
because the helium atom is very light. Essentially Heisenberg's
uncertainty principle gets in the way -- to slow down an atom so it
stays in a crystal lattice site and to keep it located at that crystal
lattice site at atmospheric pressure violates the uncertainty principle,
which says that the uncertainty in the position of an object times the
uncertainty in its momentum has to be at least hbar/2, where hbar is
Planck's constant divided by 2*pi.
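A back-of-envelope version of that argument (an order-of-magnitude sketch only; the one-angstrom confinement length is an assumed lattice scale, not a measured value):

```python
# Zero-point kinetic energy of a helium atom confined to a lattice site.
hbar = 1.055e-34   # J*s
k_B  = 1.381e-23   # J/K
m_He = 6.646e-27   # kg (helium-4)
dx   = 1.0e-10     # m, assumed confinement length

dp = hbar / dx             # minimum momentum spread, from the uncertainty principle
E  = dp**2 / (2 * m_He)    # corresponding kinetic energy

print(E / k_B)  # ~6 K -- comparable to the ~10 K depth of the weak He-He
                # attraction, so a lattice cannot hold the atoms in place
```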
If you increase the pressure, helium will solidify at low enough temperatures.
-Tamara and Tom
(published on 10/22/2007) | <urn:uuid:22c95b41-f938-44e2-814a-2fa0611b6729> | 3.609375 | 634 | Knowledge Article | Science & Tech. | 38.779852 |
One of the true joys of learning about science – as opposed to, say, economics – is that eventually you can usually get to a scientific summary that clears up many of the distortions that popular reports create. In the midst of wading through yet another cherry-picked-evidence blog post (this one on methane) by Andrew Revkin of the NY Times, it suddenly occurred to me that I should check out Justin Gillis of the Times, whose posts have been praised iirc by Joe Romm of climateprogress fame. Gillis’ reporting still seemed a little superficial to me, but he had a link to a 2006 scientific summary of the research about methane and climate change, an oldie but goodie where I found the answers to many of my questions. My recent blog post on methane laid out the doomsday scenario that I fear; Chapter 6 of this summary, as Rachel Maddow would say, talked me down – but only partially.
Because the broad scenario that I laid out is not drastically affected by the information in the summary, it is easier to lay out the summary’s picture of methane and then, at the end, note how this may affect my scenario. I will focus on methane clathrates, since the changes to everything else are less substantial. And, of course, I am sure that more misconceptions remain – because a summary article of ongoing research can’t be expected to answer everything. Anyway, let’s begin.
Methane Clathrates, Water Methane
Last time, I presented a very summarized picture of natural-source methane as coming from three sources: methane clathrates under the sea, permafrost on land at high latitudes, and peat bogs next to the permafrost or in the tropics. It turns out that the picture is a bit more complicated, and the complications matter.
To start with, methane clathrates form and remain stable in sea-floor sediment only in particular combinations of sea temperature and pressure from the sea above, which limit them to sea floors somewhere between 200 meters and 1000 meters below sea level. In other words, the water has to be near 0 degrees C, and the clathrate has to be deeper than 200 meters below sea level but shallower than 1000 meters below sea level. Between those two limits, the deeper the sea floor, the wider the zone in the sediment where it can exist. Guesstimates for a typical clathrate “stability zone depth” might be 250-300 meters. Btw, a confusing part of the scientific lingo apparently refers to Arctic clathrates as “subsea permafrost.”
What happens to melt the clathrates? The water next to the sea floor warms up, or warmer temps further up the sea slope cause the equivalent of a mudslide on the sea floor that basically slices through the clathrate, stirs up everything above the slice as a cloud of sediment, and melts all the clathrate above the new sea floor. That is what they think happened at Storegga, a place near Norway where there is a “crater” 30 km across that may have released a gigaton of carbon, as methane (CH4), all at once.
Now here’s an odd part. We are used to thinking of gas coming up to the surface in bubbles and releasing itself into the atmosphere when the bubble pops. Not so with clathrate methane – most bubbles pop long before they rise the 200 meters or more to the surface, according to the models. Instead, one of several things happens: the methane rises to the surface but not as bubbles (it is “buoyant”) and then releases into the atmosphere, or it is eaten by methane-eating bacteria, or it converts (typically to carbon dioxide) en route. Initial indications are that a small percentage of melted clathrate should rise to the surface combined with water and is released into the atmosphere as methane, which happens effectively immediately; a large percentage should be eaten by bacteria, who convert it into carbon dioxide on the surface of the sea, and the carbon dioxide is released into the atmosphere in order to equalize atmospheric and oceanic CO2; and a medium-sized percentage should convert to carbon dioxide without going through the bacteria, to be released into the atmosphere as carbon dioxide in the same way.
The methane clathrates in the Arctic seas contain perhaps 50%-80% of all clathrates. They are also by far the most likely to be affected by global warming, since water temperature variation due to increased sunlight on the water and increased temps of sun-warmed currents from the south are widest there.
Other Methane Sources
The picture of land-based methane sources also needs amendment. It appears that much of the methane stored in permafrost is stored in peat within the permafrost – which can extend as far down as 200 meters or so. Meanwhile, wetlands at whatever latitude are generators of methane, the Amazon as much as Ireland. When the permafrost melts, the water plus peat turns into a bog that (under global warming) is maintained by increased precipitation: that’s what often drives increased methane production.
Here, the translation to the atmosphere is more clear. Melting of permafrost releases any methane locked in the ice (but not in clathrates), and also creates new constantly-emitting sources of methane. Likewise, wetlands inject methane directly into the atmosphere.
Now we come to the tricky part. We are accustomed to thinking of methane in the atmosphere as separate from carbon dioxide. Not so. What often happens to methane in the atmosphere is that it "oxidizes”, which typically means that one of the hydrogen atoms is broken off to help form H2O (water), while the rest forms a methyl group (CH3) which eventually breaks down to carbon dioxide. In other words, much of the methane tossed into the atmosphere actually winds up as the major greenhouse gas, and stays up there for 150-250 years.
What’s the Effect? Um …
OK, so now the scientist wants to figure out what the global-warming effect of unlocking all that methane is going to be. The problem is that we have two sources of comparison, and neither of them is great.
The first is to use what happens over 10-20,000 years immediately after a Milankovitch-cycle minimum (a “glaciation”) as a model. Using that model, scientists have pretty well determined that in such times of rising global temperatures, the amount of methane in the atmosphere probably doesn’t vary by a heck of a lot, and the effects on global temps compared to atmospheric carbon are pretty minimal. Methane melt in general might have a role in things like sea-ice melting near Greenland, which has been shown to have surprisingly wide effects on global climate, but most of the good candidates for that type of melt (subsea, permafrost, wetlands) just don’t make a strong case for themselves.
The problem with this type of analysis is that it looks only at periods when most of the ice remains – because that’s what happens at the peak temps of a Milankovitch cycle. We have almost certainly moved above those peak temps in the last couple of decades, and so we are in much less charted waters. For a period much more comparable, you have to go back to the PETM – 55 million years ago.
OK, in the PETM, temps were 5-10 degrees C warmer than now. Increases in carbon in the atmosphere just don’t seem to be enough to justify those warmer temps. So for a while, there were theories floating around that methane was the complete reason for that kind of warming – no carbon needed. That would have been nice, since figuring out why carbon suddenly spiked in the first place, not to mention why the time period of this rapid warming was around 20,000 years as the latest research suggests, has been a headache. Bad news: there simply doesn’t seem to be a natural source of methane that comes near to explaining the whole temperature rise, not to mention keeping going for 20,000 years. So it looks like we have a choice between carbon emissions plus “unknown”, and carbon plus methane. Tentatively, the scientists are voting for carbon plus methane.
But the PETM isn’t great as a model, either. The problem there is that things happened slowly compared to today. If we say that the carbon atmospheric-concentration rise then happened over the course of 20,000 years, well, our carbon rise appears to be happening over 350 years – and it may very well double the rise of the PETM over the course of those 350 years. In other words, this is happening at least a hundred times faster. And, as we’ve seen in the case of carbon, that can mean that the positive follow-on effects happen well before the negative “stabilizer” effects. So, for example, don’t necessarily expect the magical munching methane sea bacteria to appear in the Arctic and save the day.
OK, so the models we have aren’t great. Can we at least use them for some guesstimates?
Well, the scientists have done the guessing for me. The key sentences I find in Chapter 6 say, more or less (with the usual caveats about my understanding), that the amount of atmospheric methane from natural sources pre-Industrial Revolution equals the amount of methane added from human sources since then, which equals the likely amount of methane to be added at some point due to all natural sources except subsea methane, which equals the potential amount of methane from subsea methane. In other words, in a worst-case scenario with 2006 models, at some point in the next 300 years, we might expect atmospheric methane four times what it was in 1850.
How much added heating would that translate to? Again, reading between the lines, perhaps 1 degree C from the methane alone. However, if we take the PETM as a model, it might be more like 2 degrees C. And that’s the maximum, so we can all semi-relax, right?
Well, no. You see, there are two problems. First of all, there's the fact that much of that methane is going to convert to carbon dioxide when it's up there. Second, there's the fact that the more methane gets into the atmosphere from now on, the longer it sits there. The 2006 estimate was that methane hangs around in the atmosphere for an average of 9 years. But at twice the concentration, I think we can count on it sitting up there for 12-18 years on average. So those two things should add another ½-1 degrees C to the “additive effect” of methane in the atmosphere.
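A toy way to see how those two effects combine (my own sketch with made-up units, not a calculation from the 2006 summary): treat the atmosphere as one well-mixed box, dC/dt = S - C/tau, whose steady state is simply C = S*tau, so source strength and lifetime multiply:

```python
# One-box model: steady-state methane burden = source rate * lifetime.
def steady_state(source, lifetime_years):
    return source * lifetime_years

C_now  = steady_state(source=1.0, lifetime_years=9)   # arbitrary source units
C_then = steady_state(source=2.0, lifetime_years=15)  # doubled source, longer lifetime

print(C_then / C_now)  # ~3.3x -- the burden grows faster than the source alone
```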
And then, of course, there’s the question of methane that converts to carbon dioxide before it gets into the atmosphere. Here, the summary didn’t really have much to add in the long term. Even by their time-frame estimates, all that methane-to-carbon-dioxide, even if it doesn’t get there in the next 100 years, will almost certainly show up in the next 1000 years. So it’s a more extreme version of my “pay me now or pay me later” scenario – except that we can at least hope that by the time the methane-turned-carbon-dioxide shows up, we will have managed to cut down on our human-caused carbon emissions and the amount in the atmosphere will have begun to go downhill significantly.
All in all, not great, but not as bad as my full doomsday scenario. Instead of 6 degrees C from methane-turned-CO2, perhaps 2-3, although that increase will stick around for maybe twice as long; instead of 7-9 degrees C from methane-stayed-methane over the next 160 years, perhaps 1-2 degrees over the next 300-500 years. And it will happen more gradually, so it won’t be really noticeable, probably, for the next 30-40 years. Except …
I’m Not All the Way Down
Read carefully the interview with the head of the survey of methane releases in the Siberian Sea. He states, effectively, that the diameter of the “craters” I referred to earlier had increased by up to 100 times this year, and this methane was “bubbling to the surface.” If you look at the 2006 summary, neither is supposed to happen. Very little methane should rise to the surface in a bubble, as noted above, and the methane hydrates should not suddenly do a big jump in melting: a 5-degrees-C increase in water temps (a jump of 2.1 degrees C has been observed since 1984) should cause perhaps 1 meter's worth or less of methane hydrates to melt over the next 40-80 years -- and it can't be explained as mudslides, since it has happened in quite a few places.
So why would scientists’ models be wrong? Well, in the first place, they assume that relative sea-water warming will only occur in a short space in the summer, when the ice is melted and the sun warms its top. However, the depth of the surface ice in winter is also less than before, and the water carried by currents from the south is warmer. Clearly, it’s very possible that scientists are underestimating the amount of melting going on the rest of the year. Add this to the known problems with the original model developed in 1995, and you have some, but maybe not all, of the increase in clathrate atmospheric methane release explained.
The second flaw may be the modeled prediction that very little methane melt will rise to the surface as bubbles. Why might this model be wrong? I don’t have a clear answer from the summary – it could be that the turbulence of the water keeps the bubble from popping, although that seems unlikely. One thing seems clear: the magical munching methane bacteria are nowhere to be seen.
And the third flaw, which also affects the land methane emissions rate, is a major underestimate in the models of the rate of global warming. The models implicitly assume that the Arctic sea ice won’t melt entirely in summer before somewhere between 2030 and 2100, and year-round perhaps never – that one seems clearly wrong. Therefore, they underestimate the speed of the follow-on effects, including much faster warming of water within 100 meters of the surface, which would inevitably mean much faster warming at the 200-500 meter level – sorry, that’s not “deep ocean.”
In other words, what the latest information is telling us is that the semi-comforting story I just gave you is almost certainly an underestimate. The “true” effect of methane is somewhere between my doomsday estimate and the one above – except that the roles of methane-stayed-methane and methane-turned-carbon-dioxide have switched, because we now know that much of that atmospheric methane is going to change to carbon dioxide while it’s up there.
I find the logic of the summary convincing as well as semi-comforting; so if I had to guess, I would say that the net effect is somewhere between 3-5 degrees C, mainly in carbon dioxide, and spiking over the next 40-150 years before leveling off. But that’s a complete guess. Until I understand just how the models went wrong, I’m only partially talked down from my panic. So here’s to the New Year: It will be a season of hope, it will be a season of despair, it will be a season of enormous impatience until the first scientific explanations come out. | <urn:uuid:5b6499f6-fb3c-4fa8-9cc4-57189a8fa8b9> | 3.109375 | 3,309 | Personal Blog | Science & Tech. | 48.385199 |
The high altitude winds which circulate around the South Pole during December and January carry the payload in a broad circle around the vicinity of 77 degrees south latitude. The journey takes about two weeks, and the winds return the payload in the vicinity of McMurdo Station.
The assembled gondola being carried out of the high bay on the Delta launch vehicle.
The Boomerang Helium 3 fridge and focal plane insert.
A close up view of the focal plane, looking at the photometers.
The Boomerang focal plane, showing the row of four 150 GHz Polarization Sensitive Bolometers.
A photograph of one of Boomerang's polarization sensitive bolometers. The frame is about 8mm on a side, and the mesh is coated with an absorbing film in such a way as to preferentially absorb one sense of polarization.
The 145 GHz (9.5') and 245 GHz (6') beams as measured in the field, prior to flight.
The region of sky surveyed by Boomerang 2002. The colored swath through the image is the Galactic plane, and the field mapped by Boomerang is outlined in black. The empty areas we observe provide our cleanest view of the CMB, while our coverage of the Galactic plane will provide us information about the workings of our home galaxy.
- The Basics
- Boomerang is a balloon-borne instrument which images the Microwave Background Radiation in three bands, or colors, at wavelengths of about one millimeter. This is a much longer wavelength of light than our eyes can see, but a much shorter wavelength than the light used by microwave ovens, or by your radio. The signal from the CMB anisotropies peaks at 1.4 millimeters (213 GHz), while the emission of Galactic sources, dust, and our atmosphere is relatively small in the vicinity of two millimeters. Both the atmosphere and Galactic emission get brighter at shorter wavelengths, while the signal from the CMB starts to decline rather sharply. In order to maximize our sensitivity to the CMB while being able to distinguish the foreground emission we have chosen our three bands to be centered at 2.1, 1.2 and 1 millimeter (145, 245 and 345 GHz). (A quick frequency check of these band centres appears just after this list.)
- A 1.2 meter mirror focuses the light from the sky onto the eight horns located in the focal plane of the telescope. Four of those horns couple the radiation to Polarization Sensitive Bolometers, which operate at 2.1 millimeters and detect both senses of linear polarization, while the other four horns feed photometers which split a single polarization into two bands centered at wavelengths of 1.2 and 1 millimeter. All of the sensors are cryogenic bolometers. The term cryogenic means that we cool our detectors to very low temperatures - in fact, we cool our entire focal plane down to 0.27 Kelvin, which is nearly negative 459 degrees Fahrenheit. Our cryogenic system keeps our focal plane cold for about two weeks as we make our observations.
- Bolometers are a very sensitive type of thermal detector. As we slowly scan the telescope across the sky, the temperature of the detector fluctuates in response to the variation in the intensity of the light. This temperature variation is measured by means of an extremely sensitive thermometer.
- The payload is launched from a site not far from McMurdo Station, Antarctica, and is carried to an altitude of 42 kilometers (26 miles, or 120,000 feet) by a helium filled balloon the size of a football stadium. At this altitude there is very little emission or absorption from the atmosphere, which can obscure the view of millimeter-wave telescopes and cause relatively large background loading on the detectors. The only reason we come so far to fly our telescope is because of the unique high altitude wind patterns which circulate around the South Pole. Once these patterns set up, a balloon launched from the edge of the continent will return close to where it began after a flight of ten to fourteen days. The more time it stays aloft, the more data we get, so the longer the better.
- We are unable to transmit all of the data back to the ground during the flight, so we need to physically retrieve the data storage vessel which is on the gondola. Therefore, it is very important that the instrument does not land in the ocean or in terrain which is too harsh to recover. By flying from Antarctica, we are able to have a long flight, with a fair chance at recovery.
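As a quick check of the band centres quoted in "The Basics" above, one can convert the stated wavelengths with f = c/lambda (the GHz values are the instrument's nominal band centres, so the match is only rough, especially for the shortest-wavelength band):

```python
# Convert the quoted band wavelengths to frequencies.
c = 2.998e8  # m/s
for wavelength_mm in (2.1, 1.2, 1.0):
    freq_GHz = c / (wavelength_mm * 1e-3) / 1e9
    print(f"{wavelength_mm} mm -> {freq_GHz:.0f} GHz")
# 2.1 mm -> 143 GHz, 1.2 mm -> 250 GHz, 1.0 mm -> 300 GHz,
# versus the nominal 145, 245 and 345 GHz band centres.
```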
- Technical Details
- The Boomerang telescope is an off axis Gregorian system consisting of an ambient temperature 1.3 meter primary and cooled (2K) secondary and tertiary mirrors. The tertiary is illuminated by an eight element array consisting of profiled corrugated feed horns. The 145 GHz feeds are single moded, while the photometer feeds are single moded only at 245 GHz. The 345 GHz band is operated ~20% above the single moded regime in order to increase the throughput. For all frequencies, the main lobe is well described by a 2D Gaussian.
- The pixels are arranged in two rows of four along the scan direction, and are separated in azimuth by 30 arcminutes. A polarizing grid is fixed to the aperture of the photometer feeds (245 GHz and 345 GHz) to provide sensitivity to a single linear polarization. Each of the 145 GHz Polarization Sensitive Bolometers (astro-ph/0209132) are sensitive to the sum and difference of two orthogonal linear polarizations. The axis of sensitivity of each of the pixels is rotated by 22.5 degrees to allow the separation of the I, Q and U Stokes parameters.
Boomerang 2002 Receiver Info
|                      | 145 GHz | 245 GHz | 345 GHz |
| Beam FWHM [arcmin]   | 9.5     | 6.5     | 7       |
| NEP [1e-17 W/rt(Hz)] | 2.3     | 3.9     | 7.4     |
(per polarization, flight calibration)
- Boomerang's scan strategy will encompass overlapping "shallow" and "deep" fields, which will allow the measurement to probe a broad range of angular scales with high signal to noise. In addition, Boomerang will survey a broad swath of Galactic emission allowing the study of the polarized emission at high, intermediate, and low Galactic latitudes. Several compact sources will be observed as well, for the purpose of calibrating the beams of the instrument.
- Pointing reconstruction is achieved through the combination of a pointed star camera, a pointed Sun sensor, a fixed Sun sensor, 3 orthogonal axis rate gyros, and a differential GPS array. | <urn:uuid:48f356b0-fdf2-4039-b365-31b113bf83e3> | 2.984375 | 1,366 | Knowledge Article | Science & Tech. | 49.816971 |
As the days shorten and the summer sun is slowly setting under the horizon, the frost is returning to the Arctic and American scientists make up the balance of what has turned out to be an unprecedented melting season. …
Do you recall the big Arctic melting records of 2005 and 2007? Probably you do. Scientists had noticed the Arctic ice was on a declining trend and predicted this would continue under expected climate change. But no one expected the …
Ocean level rise is known as one of the most disquieting effects of global warming, with more than three billion people living on the coast or less than 200 kilometres inland and one tenth of the world population living …
The projected disappearance of small glaciers* worldwide threatens to eliminate the water supply for numerous towns in valleys, such as the Ecuadorian capital Quito, fed by the rivers that flow down from the surrounding mountains. But retreating ice is also a threat to freshwater fauna. According to a study published in Nature Climate Change, the local and regional diversity of mountain aquatic fauna will be reduced considerably if predictions are realised. Until now, the impact of global thawing on biodiversity in watercourses had never been calculated in detail.
It has been known for some time that large quantities of methane lie hidden in reservoirs under the permafrost layers on the tundra and in clathrates on the continental shelves. Nor is it a secret that those large quantities of …
Several hundreds of millions of people in Southeast Asia depend, to varying degrees, on the freshwater reservoirs of the Himalayan glaciers. Consequently, it is important to detect the potential impact of climate changes on the Himalayan glaciers at an early stage. Together with international researchers, glaciologists from the University of Zurich now reveal that the glaciers in the Himalayas are declining less rapidly than was previously thought. However, the scientists see major hazard potential from outbursts of glacial lakes. | <urn:uuid:9b2ce762-4c78-4c4c-879b-99a31244bca7> | 2.96875 | 401 | Content Listing | Science & Tech. | 29.914455 |
Learn About Electricity
When we turn on a light switch or an appliance, we often don't think about what is happening to bring that electricity to us. Since the early discoveries in the 1800's it has been taken for granted that when you get up in the morning, you will have electricity to run the pump to provide water, listen to the radio, watch TV and of course check your email.
The word electricity came from the Greek word elektron, meaning amber. Several centuries ago it was noticed that when you rubbed an amber stone things "stuck" to it. This was the beginning of the discovery of electricity in its simplest form - static electricity.
In 1800 Alessandro Volta made the first electric cell - an electric cell converts chemical energy into electrical energy. About 30 years later Michael Faraday made the first electric generator.
Electricity is electrons in motion. Every atom has three basic parts - electrons, protons and neutrons. An electron carries a tiny negative charge. Electricity occurs in nature in the form of lightning, electric eels and the small shock you sometimes feel when you touch a doorknob, particularly in the winter.
To get electrons moving so we can turn on lights and run factories, we build power plants where magnets are spun inside coils of wire. The spinning magnets put electrons in motion inside the wires, creating electricity. This is called a generator. No matter what method is used to turn the magnets, the electricity produced by the generator is the same.
There are two major categories of energy resources:
- Renewable, which means the resource can be used over and over again. Examples of renewable energy resources are wind, solar, and water.
- Non-renewable; this type of resource can be used only once. Examples of non-renewable resources are oil and coal.
NB Power uses both renewable and non-renewable energy resources to supply the entire province of New Brunswick with electricity. | <urn:uuid:8b0da8f6-8bff-4fa4-82d8-658b5a57994d> | 3.53125 | 399 | Knowledge Article | Science & Tech. | 41.184942 |
Ant-ferns are intriguing plants that have developed a mutually beneficial relationship with ants, whereby the plants provide the ants with a ready-made home - hollow rhizomes in which to nest - and the ants in turn supply the plants with nutrients from the debris and waste they leave behind.
Lecanopteris spinosa was discovered by Clive Jermy - Head of the Fern Section at the Natural History Museum for many years - and his colleague Trevor Walker from the University of Newcastle-upon-Tyne during an expedition to Sulawesi in 1979. It is still known only from this locality.
The discovery of this species helped to resolve the differences of opinion as to whether to recognise 1 or 2 genera of Old World ant-ferns.
Two genera of ferns have a close association with ants - they are myrmecophytic. Both are epiphytes in the Polypodiaceae family:
In Lecanopteris spinosa the hollow ant-house rhizome grows to form a large spiny ball around the tree branch.
This species has characteristic rhizomes that become hollow as they age, providing an ideal home for ants. Find out more about the appearance of this hospitable plant.
Lecanopteris spinosa has been found in only 1 locality in Sulawesi in Indonesia. Discover where this plant and its relatives like to grow.
Find out more about the reproductive strategy of this plant.
Lecanopteris plants and ants can help each other, but are not completely dependant. Discover how they benefit one another.
Get reference material for Lecanopteris spinosa and ant – plant associations.
Freshly collected Lecanopteris spinosa plant.© A C Jermy
Fresh rhizome of Lecanopteris spinosa showing spines.© A C Jermy
Freshly collected plant of Lecanopteris spinosa.© A C Jermy
Lecanopteris spinosa in Sulawesi.© A C Jermy
Lecanopteris spinosa - rhizome of the holotype specimen in the Museum collection.© P Lund, Natural History Museum, London
Section through a fresh Lecanopteris spinosa rhizome showing hollow chambers with ants removing white pupae.© A C Jermy
Section through a fresh Lecanopteris spinosa rhizome showing hollow chambers.© A C Jermy
Holotype of Lecanopteris spinosa mounted on a herbarium sheet in the Museum collection.© Natural History Museum, London.
Curator of Pteridophytes, Department of Botany.
"Lecanopteris is a fascinating genus due to its association with ants that live in its rhizomes. This species was discovered in Sulawesi in November 1979 by Clive Jermy - Head of the Fern Section at the Museum for many years - and his colleague Trevor Walker. It is still known only from this 1 locality."
Absorption: The process by which water and nutrients are absorbed and conveyed to the plant tissues and organs.
Domatium: Part of a plant that has been modified to provide protection for insects, mites or fungi.
Epiphytes: Plants that grow on another plant for support but are not parasitic.
Facultative: Not obligatory - can complete its life cycle independently.
Glaucous: With a waxy blue-green sheen.
Indusium: A structure that covers the sorus.
Monophyletic: Descended from a single common ancestor.
Rachis: A fern's midrib.
Rhizome: A fern's stem.
Sorus: Group of sporangia.
Research Tools: Simulation
NSSL researchers have created a computer model that can simulate a thunderstorm to study how changes in the environment can affect its behavior. They also contribute to the development of the Weather Research and Forecast (WRF) model used in both research and NWS operations.
The Weather Research and Forecast (WRF) model is the product of a unique collaboration between the meteorological research and forecasting communities. Its level of sophistication is appropriate for cutting edge research, yet it operates efficiently enough to produce high resolution guidance for front-line forecasters in a timely manner. Working at the interface between research and operations, NSSL scientists have been major contributors to WRF development efforts and continue to provide leadership in the operational implementation and testing of WRF. The NSSL WRF generates daily, real-time 1–36 hour experimental forecasts at a 4km resolution of precipitation, lightning threat, and more.
The NSSL COllaborative Model for Multiscale Atmospheric Simulation (COMMAS) is a 3D cloud model used to recreate thunderstorms for closer study. COMMAS is able to ingest radar data and lightning data from past events. Researchers use COMMAS to explore the microphysical structure and evolution of the storm and the relationship between microphysics and storm electricity. They also use COMMAS to simulate different phases of significant events, such as the early tornadic phase of the Greensburg, Kansas supercell that destroyed much of the town in 2007.
The Flooded Locations And Simulated Hydrographs Project (FLASH) was launched in early 2012 largely in response to the demonstration and real-time availability of high-resolution, accurate rainfall observations from the NMQ/Q2 project. FLASH introduces a new paradigm in flash flood prediction that uses the NMQ forcing and produces flash flood forecasts at 1-km/5-min resolution through direct, forward simulation. The primary goal of the FLASH project is to improve the accuracy, timing, and specificity of flash flood warnings in the US, thus saving lives and protecting infrastructure. The FLASH team is comprised of researchers and students who use an interdisciplinary and collaborative approach to achieve the goal. | <urn:uuid:36a97ecf-0ed3-43b4-8269-71a10676040f> | 3.1875 | 439 | Knowledge Article | Science & Tech. | 21.695543 |
What are sponges?
Calcareous sponge (Leucetta chagosensis) is one of the most common species in tropical Australasia in shaded coral reef habitats. Sponge Clathria craspedia, a unique species found only on the biogeographic transition zone between northern tropical and southern temperate faunas on the east coast of Australia. Soft bodied sponge, Chelonaplysilla, lacking a mineral skeleton.
Sponges (or Phylum Porifera) are the most primitive of the many-celled animals. They have a most ancient geological history, with the major class of sponges (Demospongiae) present in the Ediacaran-age in the Precambrian (about 750 million years ago).
Sponges are mostly marine, found from the intertidal zones to the deepest oceanic trenches, but a small number of species live in freshwater habitats.
Today sponges are still a major life form on the seabed, including coral reef ecosystems. But they are often overlooked as they are frequently hidden amongst the more prominent corals, or live in deeper waters and soft sediments or less frequently visited surrounding reefs. Nevertheless, there are more species of sponges than corals.
In some habitats sponges are the major providers of ecological services, like producing the nutrients and energy from photosynthesis that drive coral reef ecosystems (coral reef primary productivity), filtering waste products and toxins from other animals and plants on the reef, and recycling calcium carbonate back into the reef system through a process called bioerosion, thus making the calcium available again to other marine species.
Worldwide there are approximately 8,500 species described in the scientific literature, but about twice this number of species is estimated to be living in the world’s oceans, lakes and rivers.
In Australia only about 1,500 species have been described so far. But over the past couple of decades an escalated collection effort spurred on by the search for new pharmaceutical compounds from nature (biodiscovery) has found a sponge fauna estimated at least 5,000 species.
In Queensland only about 400 species have been described so far for all waters, including the coast, the Great Barrier Reef and Coral Sea island territories, but recent extensive surveys have revealed more than 2,500 sponge species actually live here. It is thought that many of these other species are new to science.
A checklist of named Australian sponge species can be found at the Australian Biological Resources Study, Australian Faunal Directory website.
Spongia, a fine quality commercial bath sponge also found on the Great Barrier Reef.
New genus and species of sponge, Pipestela candelabra, described recently from the Great Barrier Reef with Rastafarian hair-like growth form.
Sponges collected by the Queensland Museum from a dive on the Great Barrier Reef.
Sponges and other marine invertebrates collected from the seabed in between the reefs during the Great Barrier Reef Seabed Biodiversity Project. The project mapped seabed habitats along the whole marine national park.
| <urn:uuid:ee961f8c-0e45-414f-8339-96f1e31ad418> | 4.09375 | 660 | Knowledge Article | Science & Tech. | 31.217421 |
The vorticity advection term is also called the upper level divergence term. The upper levels generally extend from 550 millibars to the tropopause. Upper level divergence occurs when a mass of air is pulled away from a region faster than that mass can be replaced. This most commonly occurs when the upper level wind field is strong and meridional (high amplitude upper level waves).
An operational forecaster locates regions of upper level divergence by locating the divergence sector of a vorticity gradient near the LND (Level of Non-divergence). The LND is closest to the mandatory level of 500 mb, thus upper level vorticity is usually plotted on this prog. The LND is the general level that separates the low levels of the troposphere from the upper levels. Vorticity advection is best assessed near the LND.
A process that creates upper level divergence is DPVA (Differential Positive Vorticity Advection). DPVA has been previously defined in the Haby Hint given below:
Since the wind speed is generally higher in the upper levels of the troposphere as compared to the low levels of the troposphere within regions of PVA, a forecaster can usually take for granted that the PVA is differential (increasing with height). Thus the D in DPVA is often left off.
An operational forecaster recognizes PVA by locating:
a. The downstream region of relatively higher vorticity values along with the gradient of vorticity values,
b. the wind speed of air through the gradient of vorticity, and
c. the angle at which the airflow crosses the vorticity gradient
PVA generally occurs in the downstream region (region where airflow is moving away from highest values of vorticity) of a vort max or vort lobe. The following Haby Hint defines a vort max and vort lobe:
PVA is maximized by the combination of:
a. high values of vorticity, with more importantly a large rate of change (gradient) of vorticity,
b. a strong airflow, downwind of the vort max, through the high gradient of vorticity,
c. an airflow perpendicular to the vorticity gradient
Below is an example that should help clarify this process an operational forecaster goes through in locating PVA. The prog below shows 500-millibar vorticity.
The vort max and the region of highest vorticity values are located in the red shading mostly on the New Mexico/Texas Panhandle border. The gradient in vorticity across the vort max is fairly sharp. The value of vorticity changes over 10 units over a distance of 100 miles. The following Haby Hint explains how the units of vorticity are calculated:
That region of the vort max/vort lobe that experiences uplift is the downstream portion. An operational forecaster will draw a line through the vort max and perpendicular to the airflow. It is the region to the east of this line, in this particular example, that is experiencing PVA (Texas Panhandle and extreme southeast Colorado). Since this shortwave is moving toward the east, the entire Texas panhandle and western Kansas will experience PVA over the next several hours. The airflow through the vort max is strong and close to perpendicular, especially on the southern side of the region of high vorticity where the horizontal wind vectors are longer. This helps maximize the amount of vorticity advection at this location.
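The quantity the forecaster is eyeballing can be written as advection = -V·grad(zeta): positive values mark PVA, negative values NVA. A minimal gridded sketch (synthetic fields of my own construction; a real computation would use 500 mb wind and vorticity analyses):

```python
import numpy as np

# Synthetic vort max in 25 m/s westerly flow on a 100 km grid.
nx = ny = 50
dx = 100e3                                   # grid spacing in meters
x, y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dx)

zeta = 12e-5 * np.exp(-((x - 2.5e6)**2 + (y - 2.5e6)**2) / (5e5)**2)
u = np.full_like(zeta, 25.0)                 # westerly wind (m/s)
v = np.zeros_like(zeta)

dzdy, dzdx = np.gradient(zeta, dx)           # d(zeta)/dy, d(zeta)/dx
advection = -(u * dzdx + v * dzdy)           # units of s^-2

# Positive (PVA) east of the vort max, negative (NVA) west of it:
print(advection.max(), advection.min())
```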
DPVA leads to rising air on the synoptic scale. Thus, a forecaster locates these regions of enhanced uplift in order to determine areas that are most likely to receive precipitation. Precipitation as a result of DPVA alone is often termed dynamic precipitation. This precipitation tends to be high based also since the lifting is most intense in the upper troposphere (assuming DPVA is primary lifting mechanism). Severe weather is enhanced where strong DPVA overrides low level moisture, instability and low level WAA.
A process that creates upper level convergence and thus sinking air is DNVA (Differential Negative Vorticity Advection). NVA generally occurs in the upstream region of a vort max or vort lobe. Usually when strong PVA is found a region of strong NVA will be coupled upstream from the PVA. The same mechanisms that maximize PVA also maximize NVA. The only difference is that NVA occurs in the region the airflow is approaching higher values of vorticity (upstream region) and PVA occurs in the region the airflow is moving away from the higher values of vorticity (downstream region). From the example examined previously, NVA is occurring in central/eastern New Mexico and central Colorado. NVA enhances sinking air, thus precipitation and severe weather is less common in the NVA region of a vort max. | <urn:uuid:f4396934-ced1-4b42-8d3b-3aee1a4f1bb2> | 3.875 | 1,008 | Knowledge Article | Science & Tech. | 38.00507 |
This web site is an outgrowth of an agreement between the USGS and the New England Aquarium, designed to summarize and make available results of scientific research. It will also present educational material of interest to wide audiences.
Home page for Coastal and Marine Geology with links to topics of interest (sea level change, erosion, corals, pollution, sonar mapping, and others), Sound Waves monthly newsletter, field centers, regions of interest, and subject search system.
Declines in fish and wildlife populations, water-quality issues, and changes in coastal habitats have prompted this USGS study of the region's nearshore life and environment. Includes links to data from published reports.
Airborne scanning laser surveys (LIDAR) are used to obtaining data to investigate the magnitude and causes of coastal changes that occur during severe storms. Links to examples of coastal mapping during specific hurricanes.
Report on the potential of coastal change due to future sea level rise using the coastal vulnerability index (C.V.I.) with two regional examples in San Francisco and Monterey Bay and Tillamook Head, Oregon, to Ocean Shores, WA.
Brief report on map showing the relative vulnerability of the Atlantic coast to changes due to future rise in sea level. Includes links to similar maps in Open-file report 2000-178 on the Pacific Coast and 2000-179 on the Gulf of Mexico Coast. | <urn:uuid:db2b4d3c-9e5b-4973-8bcc-f5905680e6b5> | 3.015625 | 283 | Content Listing | Science & Tech. | 36.235692 |
Light extinction of particles
Not all particles are the same: they differ in shape, size, and composition. Some of them reflect or scatter light, and others absorb it. Two instruments in the image, a photometer and a nephelometer, measure the amount of light absorbed and scattered by particles.
The photometer measures the amount of light absorbed by particles; the darker a particle is, the more light it absorbs.
Other particles don't absorb light, but instead reflect and scatter it in different directions. The nephelometer measures the amount of light scattered by particles. Light extinction is the sum of the two effects: extinction = scattering + absorption.
With the scattering and the absorption, the researchers can estimate the particle's light extinction, the optical property of particles. It is important to know the light extinction of particles, because it is related to visibility, chemical reactions and heat transfer in the atmosphere. | <urn:uuid:ccd97ae4-c3c5-4859-9d5d-0af7e38fdd38> | 4.125 | 172 | Knowledge Article | Science & Tech. | 36.348968 |
This article was taken from the August 2012 issue of Wired magazine.
Need to perform a lab test in microgravity? Jeffrey Manber, managing director of NanoRacks, a US company that allows anyone to buy a slot on the International Space Station, explains how to get your test tubes off the ground.
Exercise some patience
Space science takes time. "We are very proud of the fact that we are averaging less than a year from contract signing to launch," Manber says. Also, be prepared to wait a year or so for your results to come back. It is space, after all.
Ask your peers for advice
Organisations that have already run space experiments could offer tips. For example, the Fisher Institute in Israel did stem cell and cancer work, and the Valley Christian High School of San Jose is working on processes in zero gravity.
Keep your project compact
NanoRacks gives you a 10cm cube to work with, so space is tight. "Inside can be a circuit board or a video camera," says Manber. "Then the experiment itself: plant or crystal growth, materials -- anything you want to test in zero gravity."
Some objects you just can't send. "Nothing radioactive," says Manber. "Nothing that might harm the astronauts. Fluids must be triple contained. Batteries must be approved by Nasa, and sometimes there are issues with magnets."
If you're an educational institution, getting an experiment into orbit will cost $30,000 (£18,500). For a commercial programme, it's double that. That's not cheap, but then neither is shooting
rockets into space. | <urn:uuid:e644ef8c-8b57-404a-a88f-f289f8431c69> | 3.1875 | 387 | Tutorial | Science & Tech. | 58.147799 |
Off the Mediterranean coast of Spain, a green sea turtle (Chelonia mydas) gracefully glides over a seagrass bed, chomping a large green bite before the current swiftly takes him to the next patch of grass. Off the coast of Australia, schools of fish weave between the blades, hiding from predators and feeding on algae.
And then, off the Mid-Atlantic coast of the U.S., there’s me: an ungraceful human dragged by the salty current of a murky low tide. Unlike the sea turtle or school of fish, who are at home in the sea, I bob desperately; trying to decide which types of seagrass I will pull up and stuff in the mesh bag around my neck.
Last week, I traveled with three of my CI colleagues to volunteer with a seagrass restoration effort led by The Nature Conservancy (TNC), the Virginia Institute of Marine Science, and others — the largest project of its kind in the world. We snorkeled in Virginia’s South Bay for the afternoon, pulling up seagrass shoots and spathes from the mud and hauling them back to shore, where TNC and its partners are raising seagrasses to plant in the fall. This is a massive project that has restored several thousand acres of eelgrass in four bays over 15 years.
That afternoon, I learned the importance of seagrasses to the local marine life: these eelgrass beds are significant to bay scallops, fish, crabs, clams and other marine organisms. I also knew from my work with CI’s Marine division that seagrasses help filter ocean water for pollutants and protect coasts against floods and storms.
Listening to TNC project leaders talk about all the wonderful ecosystem services of seagrasses, I felt compelled to mention another important service: carbon storage.
Last week, a new study was published confirming that seagrass meadows have the ability to sequester carbon from the oceans and store it in their soils, helping to mitigate global climate change. Beneath the water, carbon accumulated over thousands of years lies locked in the mud.
This new study, “Seagrass Ecosystems as a Globally Significant Carbon Stock,” is the first global analysis of carbon stored in seagrasses and estimates that, although seagrass meadows occupy less than 0.2 percent of the world’s oceans, they are responsible for more than 10 percent of all carbon buried annually in the ocean. Not only that — but they bury carbon in their soils for thousands of years! I got to thinking of all the carbon storage potential of the eelgrass beds I was helping to create.
Many authors and contributors to this new research are expert scientists with the Blue Carbon Initiative — a collaborative effort led by CI, the International Union for Conservation of Nature, and the Intergovernmental Oceanic Commission of UNESCO. This initiative is the first program focused on mitigating climate change through the conservation and restoration of coastal “blue carbon” ecosystems, such as seagrasses, mangroves and salt marshes.
It seems that many initiatives around the world — including those close to home — are working hard to protect these ecosystems for their myriad benefits. No matter what the reason, it’s great to see that so many people are taking care of our coastal marine ecosystems, as they are so vital to our global climate health.
Sarah Hoyt is the executive coordinator for CI’s Global Marine division. Learn more about the Blue Carbon Initiative. | <urn:uuid:50df8557-6567-4cc3-a49b-e57e9b2c181c> | 3.390625 | 744 | Personal Blog | Science & Tech. | 41.6488 |
- class Balloon()
A Balloon widget pops up over a widget to provide help. When the user moves the cursor inside a widget to which a Balloon widget has been bound, a small pop-up window with a descriptive message will be shown on the screen.
- class ButtonBox()
The ButtonBox widget creates a box of buttons, such as is commonly used for OK and Cancel.
- class ComboBox()
The ComboBox widget is similar to the combo box control in MS Windows. The user can select a choice by either typing in the entry subwidget or selecting from the listbox subwidget.
- class Control()
The Control widget, also known as the SpinBox widget, lets the user adjust the value by pressing the two arrow buttons or by entering the value directly into the entry. The new value will be checked against the user-defined upper and lower limits.
- class LabelEntry()
The LabelEntry widget packages an entry widget and a label into one mega widget. It can be used to simplify the creation of ``entry-form'' types of interface.
- class LabelFrame()
The LabelFrame widget packages a frame widget and a label into one mega widget. To create widgets inside a LabelFrame widget, one creates the new widgets relative to the frame subwidget and manages them inside the frame subwidget.
- class Meter()
The Meter widget can be used to show the progress of a background job which may take a long time to execute.
- class OptionMenu()
The OptionMenu widget creates a menu button of options.
- class PopupMenu()
The PopupMenu widget can be used as a replacement for the tk_popup command. The advantage of the Tix PopupMenu widget is that it requires less application code to manipulate.
- class Select()
The Select widget is a container of button subwidgets. It can be used to provide radio-box or check-box style selection options for the user.
- class StdButtonBox()
The StdButtonBox widget is a group of standard buttons for Motif-like dialog boxes.
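As a quick orientation to how these classes fit together, here is a minimal sketch using three of the widgets above. It is my illustration, not taken from the reference text, and it assumes a Tk build that includes the Tix extension (in Python 3 the module is tkinter.tix rather than Tix):

    # Illustrative sketch only; requires Tk with the Tix extension.
    import Tix  # Python 3: from tkinter import tix as Tix

    def on_ok():
        print('OK pressed')

    root = Tix.Tk()

    # ButtonBox: a horizontal box of buttons, as in a dialog.
    box = Tix.ButtonBox(root, orientation=Tix.HORIZONTAL)
    box.add('ok', text='OK', width=6, command=on_ok)
    box.add('close', text='Close', width=6, command=root.destroy)
    box.pack(side=Tix.BOTTOM, fill=Tix.X)

    # ComboBox: type into the entry subwidget or pick from the listbox.
    combo = Tix.ComboBox(root, label='Fruit:', editable=True)
    for fruit in ('apple', 'banana', 'cherry'):
        combo.insert(Tix.END, fruit)
    combo.pack(fill=Tix.X)

    # Balloon: pop-up help shown when the cursor enters the bound widget.
    balloon = Tix.Balloon(root)
    balloon.bind_widget(combo, balloonmsg='Pick or type a fruit')

    root.mainloop()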
See About this document... for information on suggesting changes. | <urn:uuid:ac9f20cc-8e17-4de9-8f6e-77ca31df57c7> | 2.765625 | 416 | Documentation | Software Dev. | 49.356661 |
McClintock, Peter V. E. (1987) Science of helium in technology. Nature, 326 (6111), p. 340. Full text not available from this repository.
Liquid helium is something of an oddity. Its existence as a liquid at all is rather marginal, as shown by the ease with which it can be vaporized by tiny influxes of heat - just one watt is enough to evaporate about a litre of liquid in an hour. For temperatures below 2.17K, it behaves as though it were an interpenetrating mixture of two completely miscible fluids: a (relatively ordinary) normal fluid component, and a superfluid component which carries no entropy and whose viscosity is identically zero. It is the latter component that gives rise to liquid helium's celebrated frictionless-flow properties, enabling it, for example, to climb out of any open vessel in which it is placed.
Journal or Publication Title: Nature
Additional Information: Review of "Helium Cryogenics" by Steven W. Van Sciver, Plenum, 1986. Pp. 429.
Subjects: Q Science > QC Physics
Departments: Faculty of Science and Technology > Physics
Deposited By: Professor P. V. E. McClintock
Deposited On: 30 Apr 2010 13:18
Last Modified: 26 Jul 2012 17:20
A type of curve with the equation:
x^2 y + ab y - a^2 x = 0, where ab > 0
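Since the equation is linear in y, one line of algebra (my addition, not part of the original article) puts the curve in explicit form and shows where the S-shape comes from:

    \[
      x^2 y + ab\,y - a^2 x = 0
      \quad\Longrightarrow\quad
      y = \frac{a^2 x}{x^2 + ab}.
    \]

Because ab > 0, the denominator never vanishes: y is defined for all x, passes through the origin, tends to 0 as x goes to plus or minus infinity, and reaches its extrema at x = ±sqrt(ab).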
This curve was named and studied by Newton in 1701. It is contained in his classification of cubic curves which appears in "Curves by Sir Isaac Newton in Lexicon Technicum" by John Harris published in 1710. Harris's introduction to the article charmingly states:
"The incomparable Sir Isaac Newton gives this following Ennumeration of Geometrical Lines of the Third or Cubick Order; in which you have an admirable account of many Species of Curves which exceed the Conick-Sections, for they go no higher than the Quadratick or Second Order."
Newton showed that the curve f(x, y) = 0, where f(x, y) is a cubic, can be reduced to one of four normal forms. The first of these comprises equations of the form
xy^2 + ey = ax^3 + bx^2 + cx + d.
This is the hardest case in the classification and the serpentine is one of the subcases of this first normal form.
The serpentine had also been studied earlier by L'Hopital and Huygens in 1692. | <urn:uuid:c0a7f67e-bdec-4795-8279-1eba42018699> | 2.90625 | 355 | Knowledge Article | Science & Tech. | 82.507837 |
Some astronomers feel that the outflows from these lobes resemble
the exhausts of two jet engines facing each
other nose to nose.
In the adjacent picture, planetary
nebula M2-9 is shown on the left. At right
is a daytime picture of a Delta rocket launched
from Cape Canaveral in Florida with the Stardust
Mission to Comet Wild 2 in February 1999. The
similarities between one lobe of the nebula
and the rocket plume (shown in fiery yellow)
are stunningly obvious.
Credit: Bruce Balick (Univ. of Wash) | <urn:uuid:82858ced-d564-4fad-91cb-6509d607100c> | 2.921875 | 121 | Knowledge Article | Science & Tech. | 46.768714 |
Spike (M.I.) Walker
English photomicrographer Spike (M.I.) Walker has been a consistent winner of the Nikon Small World competition for many years and has published many articles and a book about microscopy. Featured below is a photomicrograph of a freshwater hydra taken with Rheinberg illumination.
Hydras belong to the phylum Coelenterata (also called Cnidaria), which includes corals, sea anemones, and jellyfish. Coelenterates are primarily marine animals, but hydras are found in freshwater ponds, lakes, and streams. Hydras are also atypical because they do not have a medusa (jellyfish) stage as part of their life cycle as do most other coelenterates. They live and reproduce sexually and asexually, but only in the tube-shaped polyp form. However, they do have nematocysts, or cnidae, the microscopic intracellular stinging capsules characteristic of this phylum and for which it is named.
Simple as these organisms are, their nematocysts are one of the most complex structures in the animal world. Hydras have four types of nematocysts on their tentacles, which are used for a variety of purposes. The largest nematocyst has barbs that anchor the prey to the tentacle from which it was fired. With a firm hold on its prey, the hydra then envelopes the organism, like a sock being pulled over a foot, and consumes it. The second type is smaller and has a shorter, thicker corkscrew thread that wraps around and holds onto the prey animal. A third type has a sticky bean-shaped object at its end that is used in locomotion, securing the hydra as it glides or somersaults from one place to another. The fourth kind of nematocyst has spines running along the thread and is probably used to defend the hydra against potential predators.
The template class is an iterator adaptor that describes a reverse iterator object that behaves like a random-access or bidirectional iterator, only in reverse. It enables the backward traversal of a range.
For a list of all members of this type, see reverse_iterator Members.
Existing Standard Template Library containers also define reverse_iterator and const_reverse_iterator types and have member functions rbegin and rend that return reverse iterators. These iterators have overwrite semantics. The reverse_iterator adaptor supplements this functionality as it offers insert semantics and can also be used with streams.
The reverse_iterators that require a bidirectional iterator must not call any of the member functions operator+=, operator+, operator-=, operator-, or operator[], which may only be used with random-access iterators.
If the range of an iterator is [_First, _Last), the square bracket on the left indicates the inclusion of _First and the parenthesis on the right indicates that elements up to _Last are included, but _Last itself is excluded. The same elements are included in the reversed sequence [rev_First, rev_Last), so that if _Last is the one-past-the-end element in a sequence, then the first element rev_First in the reversed sequence points to *(_Last - 1). The identity that relates all reverse iterators to their underlying iterators is:
&*(reverse_iterator(i)) == &*(i - 1).
In practice, this means that in the reversed sequence the reverse_iterator will refer to the element one position beyond (to the right of) the element that the iterator had referred to in the original sequence. So if an iterator addressed the element 6 in the sequence (2, 4, 6, 8), then the reverse_iterator will address the element 4 in the reversed sequence (8, 6, 4, 2). | <urn:uuid:e553365c-2be0-467a-bbb8-f95a9292d2fe> | 3.65625 | 394 | Documentation | Software Dev. | 20.791638 |
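That off-by-one mapping can be sanity-checked with plain index arithmetic. The following Python sketch is my illustration of the concept only (Python has no iterator-adaptor equivalent of reverse_iterator):

    # Conceptual illustration of the identity, not a real iterator adaptor.
    seq = [2, 4, 6, 8]
    n = len(seq)

    # &*(reverse_iterator(i)) == &*(i - 1): a reverse iterator built from
    # a forward iterator at position i dereferences the element at i - 1.
    for i in range(1, n + 1):            # positions 1 .. one past the end
        assert seq[i - 1] == list(reversed(seq))[n - i]

    # The concrete case from the text: the iterator at element 6
    # (position 2) yields a reverse iterator that dereferences 4.
    i = 2
    assert seq[i - 1] == 4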
How was Uranus discovered?
Why does Uranus spin 'on its side'?
What is responsible for the color of Uranus?
How is the interior of Uranus thought to differ from those of Jupiter and Saturn?
How does the magnetic field of Uranus compare with that of Earth?
Describe a day on Titania.
What is special about Miranda? Explain.
The rings of Uranus are dark, narrow, and widely spaced. Which of these properties makes them different from the rings of Saturn?
Why are the rings of Uranus so narrow and sharply defined?
Why was the discovery of Uranus in 1781 so surprising? Might there be similar surprises in store for today's astronomers?
Binary Tree 1.0
Binary Tree can be used to manage a hierarchy of objects within a binary tree.
Each object of Binary Tree is a node that may have references to two descendant objects: the right and the left nodes. Each tree node object may contain a data variable of an arbitrary type.
Binary Tree provides functions for adding nodes given the left and right descendant nodes (if any), as well as to traverse the tree, print the data value of the nodes, and count the number of nodes in the tree. (PHP5)
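The class itself is PHP5 and not reproduced here; the shape of the API it describes can be sketched in a few lines of Python (all names are my illustration, not the package's):

    # Sketch of the described API in Python; names are illustrative.
    class Node:
        """A binary-tree node holding an arbitrary data value and
        references to optional left and right descendant nodes."""
        def __init__(self, data, left=None, right=None):
            self.data = data
            self.left = left
            self.right = right

    def count(node):
        """Number of nodes in the (sub)tree rooted at node."""
        if node is None:
            return 0
        return 1 + count(node.left) + count(node.right)

    def traverse(node):
        """Print each node's data value, left subtree first (in-order)."""
        if node is not None:
            traverse(node.left)
            print(node.data)
            traverse(node.right)

    # Build a small tree by supplying the descendants at construction time.
    root = Node('root', left=Node('L'), right=Node('R', left=Node('RL')))
    traverse(root)            # prints: L, root, RL, R
    assert count(root) == 4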
- 7.9 KB
- 09/26/2008 02:52:36 | <urn:uuid:2cb34ad4-2327-4246-9d9a-0f81bda84be5> | 3.0625 | 138 | Documentation | Software Dev. | 66.792013 |
I wrote a few lines that numerically solve Maxwell's equations.
The result is a moving wave that looks like a single pulse.
This looks strange to me because I expect waves to move in oscillator fashion, perhaps like cos(x).
Why doesn't the wave appear to oscillate?
How does this kind of Gaussian pulse work? Can it be decomposed into sinusoidal solutions?
Do similar solutions exist in 2D? Are they still Gaussian?
%% Super basic FDTD
clear all;
gsz = 100;                 % grid size
ez  = zeros(gsz, 1);       % electric field (1-D; zeros(gsz) would make a 100x100 matrix)
hy  = zeros(gsz, 1);       % magnetic field
imp = 377.0;               % impedance of free space
steps = 100;
for i = 1:steps
    for j = 1:(gsz-1)      % update H from the curl of E
        hy(j) = hy(j) + (ez(j+1) - ez(j)) / imp;
    end
    for j = 2:gsz          % update E from the curl of H
        ez(j) = ez(j) + (hy(j) - hy(j-1)) * imp;
    end
    ez(gsz) = ez(gsz) + exp(-((i-30)^2) / 100);   % Gaussian-in-time source at the right edge
    subplot(2,1,1); plot(ez);
    subplot(2,1,2); plot(hy);
    M(i) = getframe;
end
Earth Day is just around the corner (April 22nd), and I can’t think of a better way to spend it than learning about the “how” behind the natural world. Here are a few good choices for appreciating evolution this Earth Day:
- For All Ages - Ubiquitous: Celebrating Nature’s Survivors by Joyce Sidman – Yes, this is a picture book published for kids, but it is well worth perusing for just about anyone. It is really quite beautiful.
- For Kids – Life on Earth: The Story of Evolution by Steve Jenkins – Eye-catching and informative look at Earth’s history from its very beginning to the present.
- For Pre-teens - Billions of Years, Amazing Changes: The Story of Evolution by Laurence Pringle – A lively and straight-forward introduction to evolution illustrated by Steve Jenkins. Here is a great blog post praising Pringle’s organization of the book, noting that he does not get side-tracked by unsupported doubts of evolution.
- For Teens and Adults: Evolution: The Story of Life on Earth by Jay Hosler – This graphic novel (illustrated by Twin Cities natives from Big Time Attic) looks at life on earth through blob-like aliens learning about human genetics. It isn't as silly as it sounds. Hosler (a professor of biology who has published a few science-related graphic novels) keeps it fun, but informative. (Photo: Evolution by Jay Hosler on display at the Twin Cities Book Festival.)
If you don’t have plans for your weekend yet, you might want to join an Earth Day clean up crew. Check out your local parks department for details. Here’s the Minneapolis Earth Day Clean Up page for you locals.
Disclosure: Amazon.com links are affiliate links. A portion of purchases made via these links earns a commission for this blog. Thanks for your support!
For more about religion & science, see my Secular Thursday page. | <urn:uuid:2ff6e256-cb20-4141-8ab7-50352682bf83> | 2.703125 | 416 | Personal Blog | Science & Tech. | 53.613737 |
By Nick Batson and Tom Johnson
Anyone who’s been keeping up on the news over the past few weeks has undoubtedly heard of the recent discovery by the European Organization for Nuclear Research (CERN) of a subatomic particle that behaves in a manner consistent with how the theorized Higgs boson is said to behave.
The monumental discovery, if correct, will be key in future research regarding how our universe works, as the Higgs boson is the particle that would explain why and how other elementary particles acquire mass. Such a discovery would open doors to whole new worlds of particle physics research.
There were over 1,700 researchers from U.S. institutions working on the project at CERN's Large Hadron Collider near Geneva, several of whom are Stony Brook University's own. These researchers include Professors of Physics John Hobbs, Robert L. McCarthy and Michael Rijssenbeek, as well as Dmitri Tsybychev, Assistant Professor of Physics.
Until recently, the existence of the Higgs particle was only theorized, but earlier in July scientists believe they witnessed it come to life. And a short life it was for the Higgs particle, as it only exists for one zeptosecond, or one sextillionth of a second.
The Higgs particle is believed to be a fundamental clue in the mystery of how all elementary particles interact with one another, and it is speculated that without the Higgs boson all other particles would move at the speed of light, making it impossible for all matter and life to exist. | <urn:uuid:5593215b-de3f-40eb-8370-126781cc87ac> | 3.109375 | 326 | Truncated | Science & Tech. | 38.844971 |
Using NASA's WISE infrared satellite, astronomers estimate there are about 5,000 asteroids that can impact the Earth with sizes of about 100 feet or larger -- that is, larger than the Chelyabinsk meteor. Smaller ones are fainter and thus harder to find.
It makes sense that smaller asteroids pass Earth more frequently and, on average, closer. That's because in nature, small things are more common than big things. So asteroids like YU55 are more rare than DA14, which in turn is more rare than the Chelyabinsk meteor. Because there are more DA14s filling interplanetary space than YU55s, a 50-foot asteroid can be found in a smaller volume of space, on average, and thus closer to Earth, than a 150-foot one. Now let's talk about coincidence. Mathematicians frame this issue in terms of probability -- that is, the likelihood that something will happen. A rare thing is unlikely, so we say it has a low probability of occurring.
Two rare events happening at approximately the same time is much more unlikely. Here is how to think of it mathematically: If the events are not associated, the probability of this coincidence comes from multiplying the individual probabilities.
For example, the probability that your birthday is on a given date -- say, January 1 -- is 1/365. That is, of every 365 readers of this article, roughly one will have a birthday on January 1.
Now, the probability that the next reader's birthday is also on January 1 is 1/365 times 1/365, or about 1 in 130,000. If that many people read the article, such a coincidence could happen. Of course, it's much more likely that two non-consecutive readers will have a birthday on January 1. And it's very likely that lots of readers have the same birthday as other readers. (In fact, in any group of 23 or more people, it is more than 50% likely that two will share a birthday, but calculating that probability is a bit more complicated.)
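That more complicated calculation is still only a few lines: the probability that n people all have distinct birthdays is 365/365 x 364/365 x ..., and its complement crosses 50% at n = 23. A quick check in Python (my illustration, not the columnist's):

    # My illustration of the 23-person birthday claim.
    def p_shared_birthday(n, days=365):
        """Probability that at least two of n people share a birthday."""
        p_distinct = 1.0
        for k in range(n):
            p_distinct *= (days - k) / days
        return 1.0 - p_distinct

    print(round(p_shared_birthday(23), 3))   # 0.507 -- just over 50%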
Back to the meteor and the asteroid. Both events happening within one day makes us think they could be connected. That instinct comes from doing the math -- if it is improbable, then we think it cannot be a coincidence.
But the facts don't support this conclusion. First of all, in the time between the two events, the Earth moved roughly 300,000 miles, meaning the asteroid and the meteor were in completely different places. Moreover, they traveled in completely different directions, so they couldn't have been associated.
So there is no way the meteor and the asteroid are connected. It has to be a coincidence that the two events happened on the same day. Yet this would seem to be at odds with our instinct that two very rare things would not happen at the same time.
How can we reconcile these two opposite thoughts: the impossibility of an association based on the physics of trajectories, and the improbability of coincidence (lack of association) that the math suggests?
The answer is that we need to rethink the probability calculation. If asteroids as big as DA14 pass close to Earth once every decade or two, and meteors as large as the Chelyabinsk one impact once every 100 years (a similar meteor having caused the Tunguska event in 1908), the chance of both events happening on any one day is indeed very small: 1 in 3,650 days times 1 in 36,500 days, or about 1 in 100 million -- not odds you would want to bet on.
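With the article's round numbers, the multiplication itself is trivial to verify (a sketch using only the figures quoted above):

    # Sketch using the article's round numbers.
    p_asteroid = 1 / 3650    # DA14-scale close pass: ~once a decade
    p_meteor   = 1 / 36500   # Chelyabinsk-scale impact: ~once a century

    p_both_today = p_asteroid * p_meteor
    print("1 in {:,}".format(round(1 / p_both_today)))   # 1 in 133,225,000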
February 23rd, 2002 12:15 PM
Arrays in C++
In an array, when you allocate the dimensions,
(i.e.) int x[8][8], y[8][8], anarray[8][8];
does that make arrays equalling 8 squared, as in 64 elements????
I am not certain how this works so I would appreciate any info people can provide
BTW........ I have found an excellent site for learning programming including c++, cobol and others, at www.cprogramming.com
February 23rd, 2002 12:41 PM
Thanx to smirc and V3ERIZON who both helped to get info on C++ .
You both been a great help
Genius of the mind is not necessarily from the mind of a genius
February 23rd, 2002 04:14 PM
Yes, defining an array as int x[8][8] (or similar) would be an 8x8 array, which is 64 elements. It would run all the way from x[0][0], x[0][1], ... to x[7][7].
A well-organized graphical application has three components: a model, a view, and a controller.
A model is a ``raw'' program module with a programming interface consisting a collection of publicly visible methods or procedures. In Java, the application is typically a single object (containing references to many other objects) and the programming interface is the collection of methods supported by that object.
When a program with a graphical interface starts, the controller creates the model and the view, and attaches commands to the view's graphical input controls.
The commands attached to the graphical input controls are operations on the model implemented using the model's programming interface. Recall the command pattern from Section 1.9. In Java, each of the graphical input controls in the graphics (AWT/Swing) library has an associated command interface that the installed commands implement. In the Java graphics library, these commands are called ``listeners'' because they are dormant until a graphical input event occurs (e.g., a button is ``pressed''). In the programming literature, these commands are often called ``callbacks'' because they call methods ``back'' in the model, which is logically disjoint from the code running in the view.
To explain how to write programs using the model-view-controller pattern, we will explore a simple example, namely a click-counter application that maintains and displays a simple integer counter ranging from 0 to 999. The graphical display will show the current value of the counter and include three buttons: an increment button, a decrement button, and a reset button.
We will start with the problem of writing the view components of the application. | <urn:uuid:c586f775-b56d-4a79-9a53-70ccffbc548c> | 3.78125 | 303 | Academic Writing | Software Dev. | 34.105466 |
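The text builds this application in Java. Purely as orientation, and not as the text's implementation, the same three-part split can be sketched in Python/Tkinter, with every name invented for the sketch:

    # Illustrative Python/Tkinter sketch of the MVC split; not the
    # text's Java code. All names are invented.
    import tkinter as tk

    class Counter:
        """Model: a 0..999 counter with a minimal programming interface."""
        def __init__(self):
            self.value = 0
        def increment(self):
            self.value = min(999, self.value + 1)
        def decrement(self):
            self.value = max(0, self.value - 1)
        def reset(self):
            self.value = 0

    # Controller wiring: create the model, build the display, and attach
    # commands ("listeners") that call back into the model.
    model = Counter()
    root = tk.Tk()
    display = tk.Label(root, text='0', width=6)
    display.pack()

    def make_command(action):
        def callback():
            action()                                 # operate on the model
            display.config(text=str(model.value))   # then refresh the view
        return callback

    for label, action in [('+', model.increment),
                          ('-', model.decrement),
                          ('Reset', model.reset)]:
        tk.Button(root, text=label, command=make_command(action)).pack(side=tk.LEFT)

    root.mainloop()

The point to notice is that the Counter model never touches a widget; the buttons only install callbacks that drive its public methods, which is exactly the ``listener'' wiring described above.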
With Emacs, you can have a drag event without even changing your clothes. A drag event happens every time the user presses a mouse button and then moves the mouse to a different character position before releasing the button. Like all mouse events, drag events are represented in Lisp as lists. The lists record both the starting mouse position and the final position, like this:
(event-type (window1 START-POSITION) (window2 END-POSITION))
For a drag event, the name of the symbol event-type contains the
prefix ‘drag-’. For example, dragging the mouse with button 2
held down generates a
drag-mouse-2 event. The second and third
elements of the event give the starting and ending position of the
drag, as mouse position lists (see Click Events). You can access
the second element of any mouse event in the same way, with no need to
distinguish drag events from others.
The ‘drag-’ prefix follows the modifier key prefixes such as ‘C-’ and ‘M-’.
If read-key-sequence receives a drag event that has no key
binding, and the corresponding click event does have a binding, it
changes the drag event into a click event at the drag's starting
position. This means that you don't have to distinguish between click
and drag events unless you want to.
|Ivars Peterson's MathTrek|
April 24, 2000
Yet these complicated, surprising movements arise from a remarkably simple geometry. A passenger rides in one of seven cars, each mounted near the edge of its own circular platform but free to pivot about the center. The platforms, in turn, move at a constant speed along an undulating circular track that consists of three identical hills separated by valleys, which tilt the platforms.
The platform movements are perfectly regular, but the cars whirl around independently in an irregular manner. Moreover, there is essentially just one adjustable parameter--the rate at which the platforms move around the track.
When the platforms travel at very low speeds, the cars complete one backward revolution as their platforms go over each hill. In contrast, at high speeds a car gets slammed to its platforms outer edge and stays locked in that position. In both cases, the motion is predictable.
What happens at intermediate speeds?
To model dynamical systems like the Tilt-A-Whirl, mathematicians, scientists, and engineers use equations that describe how the positions and velocities of a system and its components change over time in response to certain forces.
It's convenient to characterize a system's dynamics by plotting how its position and velocity evolve over time. Each plotted point represents the system's state of motion at a particular instant, and successive points generate a winding line through an imaginary mathematical space (known as phase space) representing all possible motions. Different starting points generally initiate different curves.
A simple, repeating motion, like the to-and-fro oscillations of a swinging pendulum, appears as a circle or some other closed curve. Such a plot shows that the system cycles through precisely the same state of motion again and again at regular intervals.
More complicated sequences of movements produce tangled paths that wander through phase space, sometimes never forming a closed loop.
Often, it helps to examine such complicated movements not at every moment but at predetermined, regular intervals. In other words, you start with a point representing the system's initial state, then wait a given time and plot a second point to give the system's new state, and so on.
In the case of a simple pendulum, selecting an interval equal to the time it takes the pendulum to complete one oscillation produces a plot that consists of a single point. The pendulum is always back in its initial state at every repeated glimpse of its motion.
When the motion is chaotic, however, there is no characteristic period. The resulting plot, known as a Poincaré section, shows points scattered across the plane, like bullets puncturing a sheet of paper. In a sense, the system is continually shifting from one unstable periodic motion to another, giving the appearance of great irregularity.
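The article does not reproduce Kautz and Huggard's Tilt-A-Whirl equation, but the strobing idea is easy to try on the textbook system the ride resembles: a damped, sinusoidally driven pendulum. The Python sketch below integrates the motion and records one (angle, velocity) point per drive cycle; the parameter values are conventional chaotic-regime choices, not figures from the paper.

    import math

    # Damped, periodically driven pendulum (NOT the Tilt-A-Whirl model):
    #   theta'' = -b*theta' - sin(theta) + g*cos(w*t)
    b, g, w = 0.5, 1.2, 2.0 / 3.0   # conventional chaotic-regime values

    def accel(theta, v, t):
        return -b * v - math.sin(theta) + g * math.cos(w * t)

    def rk4(theta, v, t, dt):
        # One fourth-order Runge-Kutta step for the pair (theta, v).
        k1x = v
        k1v = accel(theta, v, t)
        k2x = v + 0.5 * dt * k1v
        k2v = accel(theta + 0.5 * dt * k1x, k2x, t + 0.5 * dt)
        k3x = v + 0.5 * dt * k2v
        k3v = accel(theta + 0.5 * dt * k2x, k3x, t + 0.5 * dt)
        k4x = v + dt * k3v
        k4v = accel(theta + dt * k3x, k4x, t + dt)
        return (theta + dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
                v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

    # Strobe the motion once per drive period to build a Poincare section.
    period = 2 * math.pi / w
    dt = period / 200
    theta, v, t = 0.2, 0.0, 0.0
    section = []
    for _ in range(2000):               # 2000 strobed points
        for _ in range(200):
            theta, v = rk4(theta, v, t, dt)
            t += dt
        wrapped = math.atan2(math.sin(theta), math.cos(theta))
        section.append((wrapped, v))
    # Plotting `section` scatters points over a bounded region instead of
    # collapsing onto a single point or closed curve: the mark of chaos.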
To describe the Tilt-A-Whirl's dynamics, physicists Bret M. Huggard of Northern Arizona University and Richard L. Kautz of the National Institute of Standards and Technology found a mathematical equation that approximates the motion of an idealized Tilt-A-Whirl. In essence, the movements of an individual car resemble those of a friction-impaired pendulum hanging from a support that is both rotating and being rocked back and forth while the pendulum swings. Solving the equation determines how a Tilt-A-Whirl car would behave under various conditions.
To find out what happens at intermediate Tilt-A-Whirl speeds, Kautz and Huggard plotted a set of points representing the velocity and angle of a car at the beginning of each of 100,000 tilt cycles. They found that the values never repeated themselves but were scattered in a distinctive swirling pattern confined to a portion of the plane.
For these platform velocities, even slight changes in starting point lead to radically different sequences of points. At the same time, it becomes virtually impossible to predict several steps ahead of time precisely what will happen. Such sensitive dependence on initial conditions stands as a hallmark property of chaos.
Hence, what happens to an individual Tilt-A-Whirl car is highly dependent upon the weight of its passengers and where they sit. The resulting jumbled mixture of car rotations never repeats itself exactly, which gives the Tilt-A-Whirl its lively and unpredictable character. Indeed, no two trips are ever likely to produce exactly the same thrills and chills.
Interestingly, the mathematical model used by Kautz and Huggard predicts that chaotic motion would occur at a speed close to the 6.5 revolutions per minute at which the ride is normally operated.
"A walk around an amusement park suggests that several other common rides display chaotic behavior similar to that of the Tilt-A-Whirl," Huggard and Kautz note. Typically, rides that fit this category have cars that are free to rotate or shift back and forth as they follow a fixed track.
The Tilt-A-Whirl first operated in 1926 at an amusement park in White Bear Lake, Minnesota. Most likely, the ride's inventor, Herbert W. Sellner, discovered its unpredictable dynamics not through mathematical analysis but by building one, trying it out, and making trial-and-error adjustments.
"Ride designers have been fairly adept at finding chaos without appreciating the mathematics underpinning what theyre doing," Kautz notes. The situation is changing, however. To fine-tune the thrills, manufacturers are beginning to take advantage of mathematical analyses and computer simulations to help build chaotic motion deliberately into amusement park rides.
Copyright 2000 by Ivars Peterson
Kautz, R.L., and B.M. Huggard. 1994. Chaos at the amusement park: Dynamics of the Tilt-A-Whirl. American Journal of Physics 62(January):59.
Peterson, I. 1998. The Jungles of Randomness: A Mathematical Safari. New York: Wiley.
______. 1994. Chaos for fun and profit. Science News 145(Feb. 26):143.
Peterson, I., and N. Henderson. 2000. Math Trek: Adventures in the MathZone. New York: Wiley.
The Sellner Manufacturing Company, which makes the Tilt-A-Whirl, has a Web site http://www.whirlin.com/.
To learn more about the mathematics underlying chaos you can try The Chaos Hypertextbook at http://hypertextbook.com/chaos/ or visit the University of Maryland's Chaos Group Web site at http://www.chaos.umd.edu/.
Comments are welcome. Please send messages to Ivars Peterson at email@example.com. | <urn:uuid:e046894c-6946-4308-bc40-8351563fb8da> | 3.90625 | 1,367 | Knowledge Article | Science & Tech. | 44.87451 |
Mitochondrial DNA (mtDNA) is a genome located in the extranuclear mitochondria. mtDNA is inherited through the maternal egg cytoplasm, with the father's sperm making no contribution.
All children of the first affected woman are affected, including sons. However, only female children can pass the trait on, again to all of their descendants. mtDNA thus persists by being passed down the maternal lineage.
Write out the genotypes of every individual in the tree. | <urn:uuid:305cddb2-68c1-4eba-a1f3-d6acce73ce75> | 3.234375 | 105 | Knowledge Article | Science & Tech. | 44.169 |
Search our database of handpicked sites
Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest.
You searched for
We found 31 results in our database of sites (30 are Websites, 0 are Videos, and 1 is an Experiment).
Search results from our links database
A brief description of how Gas Lanterns work. This site is part of HowStuffWorks.com.
A brief description of how Gas Turbine Engines (Jet Engines) work. From HowStuffWorks.com.
Java Applet simulating isobaric, isovolumetric or isothermal expansions of an ideal gas
Good revision guide aimed at UK A level standard, covers topics such as temperature scales, heat capacity, gas laws etc.
Demonstration of how temperature, pressure and volume affect the motion of gas particles in a balloon. NB. You can 'pop' the balloon!!
Describes the laws of thermodynamics, and applications such as gas turbines and air conditioning with definitions of related terms.
The site gives a practical demonstration on the collisions between gas molecules by varying the temperature and volume. Java applet.
Allows you to perform gas law experiments including density / molecular weight variations. A very good site for demonstrating variables in the gas laws.
This is a very simple model of a gas. Here, a single atom bounces elastically in a one-dimensional cavity. It will continue to bounce at the same speed forever if the piston does not move.
A delightful animation of an ideal gas in a container. The number of particles, their velocity, pressure and the width of the container can be varied. The volume is then given.
Showing 1 - 10 of 31 | <urn:uuid:f1a4452e-16d3-4772-a5fe-74a0ef5aa262> | 3.328125 | 372 | Content Listing | Science & Tech. | 49.8858 |
Introduction of the notion
Source: Semantic Aspect Guide; page 24
The class diagram is an indispensable part of object modeling. The diagram is not the model, but a partial, biased illustration of it. Once finished, the class model represents the whole substance of the system, both data and processes. (These two notions of data and processing do not belong to the semantic aspect but creep in from IT terminology. In the semantic aspect, preference is given to the terms: information, action, transformations and associations of objects.)
Each diagram is realized with a communication goal in mind. It only presents the elements which contribute to this objective. Diagrams must be readable: size will be limited to a sheet of A4 paper and rules such as the famous magic number seven can be applied. This rule summarizes the work of G. Miller in experimental psychology: a good structure – from a presentation point of view – is made up of about seven elements (give or take two).
However, at least for the modeler or developer's needs, a class diagram can exceed these limits and even contain the whole of the model. It follows that this diagram will then be known as the class model. It will not necessarily be part of the file, but constitutes a tool to find one's way around the model.
UML diagram that shows a collection of declarative (static) UML model elements such as classes and types, with their contents and relationships (www.omg.org) | <urn:uuid:aeb670d7-524e-4866-acd0-3aac4c53b37f> | 3.6875 | 302 | Knowledge Article | Software Dev. | 48.501195 |
Author(s): Keys, P. W., R. J. van der Ent, L. J. Gordon, H. Hoff, R. Nikoli, and H. H. G. Savenije.
In: Biogeosciences 9, 733-746
Type: Journal article
Link to SEI author(s):
Analyzing precipitationsheds to understand the vulnerability of rainfall dependent regions
It is well known that rivers connect upstream and downstream ecosystems within watersheds. Here the authors describe the concept of ‘precipitationsheds’ to show how upwind terrestrial evaporation source areas contribute moisture for precipitation to downwind sink regions.
The authors illustrate the importance of upwind land cover in precipitationsheds to sustain precipitation in critically water stressed downwind areas, specifically dryland agricultural areas. The authors first identify seven regions where rainfed agriculture is particularly vulnerable to reductions in precipitation, and then map their precipitationsheds.
The authors then develop a framework for qualitatively assessing the vulnerability of precipitation for these seven agricultural regions. The authors illustrate that the sink regions have varying degrees of vulnerability to changes in upwind evaporation rates depending on the extent of the precipitationshed, source region land use intensity and expected land cover changes in the source region.
Read the article (external link to open-access journal) | <urn:uuid:beacc08e-06e9-4b91-b590-b5e79ebbd28c> | 2.734375 | 283 | Academic Writing | Science & Tech. | 24.121893 |
Common Name: Millipede (One thousand legs)
Order Spirobolida, Class Diplopoda
Millipedes are slender, hard-shelled, worm-like arthropods with elongated rounded body segments. The one most commonly seen on the mountain trails and woods is the black and red Narceus annularis, one of about 1,000 North American species.
Potpourri: Millipedes have between 60 and 400 legs (depending on the species), with two pairs of legs on most body segments (the name of the Class Diplopoda reflects this arrangement). When millipedes hatch from eggs, they have only 3 pairs of legs, adding the additional legs as they molt seven to ten times over the course of their lives. In adults, the 1st segment has no legs and the 2nd through 4th have only one pair each. The number of legs is then 4 times the number of segments, minus 10.
Millipedes are important indicators of trends in land-water relationships, such as acid rain, as they affect the biosphere. This is because they have a limited ability to migrate, depend on stable conditions of moisture and shelter (they feed on decaying vegetation and leaf litter), and since they have evolved very little since they first appeared about 380 million years ago in the Devonian Period.
Millipedes, like many hard-shelled arthropods, give off repugnatorial (distasteful to predators) fluids. The secretions are produced in glands that lie in each segment along the sides of the body, with the exception of the head and the sections immediately behind it. The secretion of Narceus annularis is a benzoquinone, a foul-smelling compound that repels potential predators such as birds, toads and rodents.
The San Andreas fault in California is very distinct in the Carrizo Plain east of the city of San Luis Obispo, CA. Many faults can not be seen at the Earth's surface like this.
Why Do Earthquakes Happen?
Because of plate tectonics, giant blocks of rock move in different directions, so they are bound to bump into each other. These blocks of rock come in contact at faults. Sometimes they slide smoothly past each other along a fault. But other times the blocks of rock get stuck - the rough surfaces of rock snag, preventing movement along the fault. That might lead to an earthquake.
There might be no movement along a fault for a long time if the blocks of rock are hitched together. However, plate tectonic force continues to push the rocks so the energy continues to grow. The energy builds over decades, centuries, and sometimes even over millennia.
Eventually the energy is released as an earthquake when the force is large enough. The rock breaks, often very deep underground, and moves into a new position. Vibrations called seismic waves travel outward in all directions from the point where the energy was released, known as the focus. Like a stone tossed into a pond that sends concentric circles of ripples outward, the seismic waves radiate from the focus of the earthquake. These seismic waves are what people on the surface of the Earth feel when they are in an earthquake.
There are different types of seismic waves. Some rumble the ground surface for hundreds or even more than a thousand miles. Other types of seismic waves travel through the planet. While people in Cuba can't feel an earthquake that shakes Japan, instruments called seismographs can record the seismic waves that have traveled through the planet.
Sometimes small earthquakes are caused when fluids are pumped underground.
You might also be interested in:
Many forces cause the surface of the Earth to change over time. However, the largest force that changes our planet's surface is the movement of Earth's outer layer through the process of plate tectonics....more
The expression "on solid ground" is often used to describe something as stable. But sometimes the solid ground underfoot is not stable. It moves as Earth's tectonic plates move. Sometimes it moves gradually....more
During an earthquake, energy is released in waves that travel from the earthquake's focus or point of origin, in the form of seismic waves. The seismic waves radiate from the focus like ripples on the...more
A major earthquake causing widespread devastation and extensive loss of life struck the nation of Haiti on January 12, 2010. The earthquake had a magnitude of 7.0. Haiti is on the island of Hispaniola...more
Earth's center, or core, is very hot, about 9000 degrees F. This heat causes molten rock deep within the mantle layer to move. Warm material rises, cools, and eventually sinks down. As the cool material...more
At 5:12 am on Wednesday April 18, 1906 most people in San Francisco, CA were still asleep. But they were about to wake up very suddenly. The Earth shook violently - an earthquake. It lasted for only about...more
Each type of mineral is made of a unique group of elements that are arranged in a unique pattern. However, to identify minerals you don't need to look at the elements with sophisticated chemical tests....more
Logistic Growth: This model illustrates resource-limited population growth. Populations have a per-capita growth rate and carrying capacity. Two populations are compared on three graphs: N vs. time, dN/dt vs. N, and dN/(N dt) vs. N. Individuals in the populations are viewed in windows, illustrating that, even at carrying capacity, there are still births and deaths in the population.
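The underlying model is the logistic equation, dN/dt = rN(1 - N/K). A few lines of Python reproduce the N-vs-time behavior; the parameter values here are arbitrary examples, not the applet's defaults:

    # Logistic growth, dN/dt = r * N * (1 - N / K); example parameters.
    r, K = 0.5, 1000.0      # per-capita growth rate and carrying capacity
    N, dt = 10.0, 0.1       # initial population and time step

    trajectory = [N]
    for _ in range(2000):   # integrate 200 time units with forward Euler
        N += r * N * (1 - N / K) * dt
        trajectory.append(N)

    print(round(trajectory[-1]))   # ~1000: the population levels off at K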
Population Estimation: Knowing how many individuals are in a population can be critical. But how can you tell how many there are, when there are too many to count? This model simulates a pond of tadpoles. The population size can be estimated in three ways: direct sampling, sampling with removal, and mark/recapture.
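Of the three methods, mark/recapture has the simplest textbook form, the Lincoln-Petersen estimate: mark M animals, later catch a sample of C and count the R recaptures, then estimate N as roughly MC/R. A sketch with invented numbers:

    # Lincoln-Petersen estimate; the counts below are invented.
    M = 100           # tadpoles marked and released
    C = 80            # size of a later sample
    R = 16            # marked tadpoles found in that sample

    # Assume the sample's marked fraction (R/C) matches the whole
    # pond's marked fraction (M/N), so N = M * C / R.
    N = M * C / R
    print(N)          # 500.0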
Asteroids missed us–comets next
By Vernon Whetstone
Did you see asteroid 2012 DA14 last week? I think I did. It was quite a news event.
Combining the very close pass of DA14 with the explosion of a possible asteroid over Chelyabinsk, Russia, injuring more than 1,000 people the day before made for some real headlines.
Even from the heavily light-polluted skies of Denver I was able to locate the Big Dipper. Using the planetarium software on my computer I was able to locate just where D14 would be and when.
Using binoculars I was able to examine the area and there was a tiny spot of light where the asteroid was supposed to be.
Now, down to business. In the coming months two comets will pass through the inner solar system and grace our skies, one in March and the other in November.
As with most things like this, just after their discovery astronomers started making predictions as to just how bright they think the comets will be, a lesson I thought they would have learned from the fiasco of Comet Kohoutek decades ago.
With both comets being "discovered" so far out in the solar system, speculation began about just how bright they would get.
However, now the words, “might,” “perhaps,” “could,” and “may” are starting to spring up when discussions of the comets appear in internet postings.
For our discussion about comets, Astronomy Class 101 will now come to order.
Comets are basically just icy, frozen clumps of dirt, water, and gas often described as “dirty snowballs.”
There are two possible areas of origin for them.
The first is a spherical cloud of icy planetesimals that is located nearly a light-year from the Sun (you do remember that a light-year is almost six trillion miles).
It is called the “Oort Cloud” after Dutch astronomer Jan Oort who first theorized it in 1950.
This sphere surrounds the entire solar system and is thought to be the place where long-period comets originate, those with orbital periods of hundreds if not thousands of years.
The other possible location is a flat, disc-like belt of similar icy bodies, roughly aligned with the plane of the planets' orbits, called the Kuiper Belt. It was proposed by astronomer Gerard Kuiper in 1951.
The Kuiper belt is thought to be the origin of short-period comets like Halley's Comet. It is also the area where the former planet Pluto resides, as well as where the other dwarf planets are located.
Not much is known about either the Oort Cloud or the Kuiper Belt, which is why astronomers are eagerly awaiting the New Horizons spacecraft, on its way to examine Pluto and then continue into the Kuiper Belt.
SKY WATCH: Full moon, Monday, Feb. 25. Tonight, about an hour after local sunset the bright planet Jupiter can be found just to the left of the tiny open star cluster Pleiades located almost overhead. The pair are just above Aldebaran, the brightest star of Taurus, the Bull. Use binoculars to examine the Pleiades cluster as well as the Hyades star cluster–the “V” shaped face or horns of the bull. This week is your last chance for a while to catch a glimpse of tiny Mercury just above the western horizon after sunset.
NEXT WEEK: Astronomy 101 class Part II, the difference between comets, asteroids, and meteors. | <urn:uuid:706d476a-5437-44ff-afec-47f94d8e751c> | 2.96875 | 757 | Nonfiction Writing | Science & Tech. | 57.010841 |
(meteorobs) Question about radiant drift
david at d-entwistle.fsnet.co.uk
Tue Jun 8 05:02:40 EDT 2004
In message <ca18ok+v2c5 at eGroups.com>, bgarcing <bgarcing at yahoo.com>
>What are the factors that affect these? And how to compute these
>factors using elements unique to each stream? Thanks and clear skies.
I don't have an answer for you, but I think I know what needs to be considered to arrive at one. My mathematics is a bit rusty, but with a bit of help we should be able to work through it.
The position of the meteor shower radiant is dependent on the relative velocity between the Earth and the shower meteoroids. This in turn is determined by calculating the vector sum of the Earth's heliocentric velocity (both speed and direction) and the meteoroid's heliocentric velocity (both speed and direction). There may be a small correction required for the Earth's gravitational attraction, but we'll ignore that.
McKinley puts it as follows:
'The Earth's velocity is directed along the apex of the Earth's way. A meteor moving with a heliocentric velocity Vh from a radiant located at an angular distance b from the apex will have a resultant geocentric velocity Vg given by

Vg^2 = Vh^2 + Ve^2 + 2 * Vh * Ve * cos(b)

and the radiant will appear to be shifted to an angular distance c from the apex, where

sin(c) = (Vh * sin(b)) / Vg'
That explains how to calculate the radiant position from the component
velocities, but we'll need to work backwards, from the radiant to find
the component velocities. Apply the factors causing the drift and then
put them through the above equations again.
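In code, the forward calculation is only a few lines. This is a Python sketch of the quoted formulas; the 29.8 km/s figure for Ve is my assumed value for Earth's mean orbital speed:

    import math

    VE = 29.8   # Earth's mean heliocentric speed, km/s (assumed value)

    def apparent_radiant(vh, b_deg, ve=VE):
        """McKinley's formulas as quoted above.

        vh    -- meteoroid heliocentric speed, km/s
        b_deg -- true angular distance of the radiant from the apex, degrees
        Returns (geocentric speed Vg, apparent distance c from the apex).
        """
        b = math.radians(b_deg)
        vg = math.sqrt(vh**2 + ve**2 + 2.0 * vh * ve * math.cos(b))
        c = math.degrees(math.asin(vh * math.sin(b) / vg))
        return vg, c

    # Example: a 42 km/s meteoroid whose true radiant lies 60 degrees
    # from the apex appears shifted to roughly 36 degrees from it.
    print(apparent_radiant(42.0, 60.0))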
The radiant drift will be caused by changes in either the meteoroid's or the Earth's velocity. The meteoroid's velocity may vary somewhat, but I'd expect the variation in the Earth's velocity, and particularly the variation in its direction, to be dominant. It is the direction of the Earth's way (the apex) which moves along the ecliptic by approximately 1 degree a day.
Meteor Science and Engineering - D W R McKinley
More information about the Meteorobs mailing list
Returns the group number that corresponds to the specified group name.
Assembly: System (in System.dll)
A regular expression pattern may contain either named or numbered capturing groups, which delineate subexpressions within a pattern match. Numbered groups are delimited by the syntax (subexpression) and are assigned numbers based on their order in the regular expression. Named groups are delimited by the syntax (?<name>subexpression) or (?'name'subexpression), where name is the name by which the subexpression will be identified. The method identifies both named groups and numbered groups by their ordinal positions in the regular expression. Ordinal position zero always represents the entire regular expression. All numbered groups are then counted before named groups, regardless of their actual position in the regular expression pattern.
If name is the string representation of a group number that is present in the regular expression pattern, the method returns that number. If name corresponds to a named capturing group that is present in the regular expression pattern, the method returns its corresponding number. The comparison of name with the group name is case-sensitive. If name does not correspond to the name of a capturing group or to the string representation of the number of a capturing group, the method returns -1.
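For comparison only (this is not the .NET API being documented), Python's re module exposes a similar name-to-number mapping through a pattern's groupindex attribute; note that Python numbers all groups strictly left to right, unlike the rule described above:

    import re

    # Analogy in Python, not the .NET method: Python numbers groups
    # strictly left to right, so the named group here is group 2.
    pattern = re.compile(r'(\d+)-(?P<word>[a-z]+)-(\d+)')

    print(pattern.groupindex)            # {'word': 2}

    m = pattern.match('12-abc-34')
    print(m.group(2), m.group('word'))   # abc abc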
For a list of the operating systems and browsers that are supported by Silverlight, see Supported Operating Systems and Browsers. | <urn:uuid:cc214ff8-85e4-4313-9cc4-7e2fffff9a77> | 2.90625 | 282 | Documentation | Software Dev. | 35.564895 |
Ancient synapsids are interesting for many reasons, but we hope our fieldwork this year in Brazil will help to address a particular problem in synapsid evolution. The oldest synapsids are found in parts of North America and western Europe, and these areas were located in a narrow band near the equator at the time these animals were alive (about 300 million years ago, in the Carboniferous period of earth history).
The synapsid fossil record in these areas continues up to about the end of the Early Permian period (roughly 275 million years ago), and although a number of synapsid species are present, they are mostly members of early lineages (colloquially known as pelycosaurs) with rather lizardlike body plans. To continue to trace synapsid history after this time, we need to look at the fossil record preserved in younger rocks in other geographic areas, traditionally South Africa and European Russia.
Both of these areas were located at relatively high latitudes at the time and, with a few exceptions, the synapsid fossils found in the rocks in these areas are different from the ones from North America and western Europe: They tend to be more closely related to mammals, and they begin to take on a more mammal-like appearance. Likewise, although the South African and Russian synapsids clearly are related to the older synapsids found in North America and western Europe, they don’t seem to have direct ancestors in the latter areas. Thus, there is information missing from the known fossil record about the origin of these younger synapsids (called therapsids).
Paleontologists have presented a number of hypotheses about the cause of this missing information. One common explanation is that there is a time gap of several million years between the rocks in North America and Europe on the one hand, and those in South Africa and Russia on the other. By this thinking, the earliest therapsids must have evolved and dispersed to high-latitude areas during the missing time. However, the time gap has been steadily shrinking as our age estimates for the rocks in the different areas become more refined, so missing time is at best an incomplete explanation.
An alternative is that therapsid ancestors have been found in North America and/or Europe, but have not been recognized as such. For example, the paleontologist Everett Olson, who died in 1993, suggested that a number of fossils from North America represent early members of therapsid groups, but more recent scrutiny of the fossils suggests that they represent pelycosaurs instead.
A third explanation is that the origin of therapsids occurred in a different geographic area, one that either does not preserve rocks with fossils from this time or has a fossil record that has not been thoroughly studied. The recent discovery of the very early therapsid Raranimus in China suggests that incomplete geographic sampling may indeed be an important factor contributing to the uncertainty surrounding the early history of therapsids. If that’s the case, then it is necessary to explore fossiliferous rocks of approximately the right age in new geographic areas to see if we can find evidence of early therapsids.
As we’ll see, the rocks preserved in the Parnaíba Basin of northeastern Brazil appear to be the right age to preserve early therapsid fossils. The area also was ideally located to catch early therapsids or their ancestors if they were dispersing from equatorial North America to southern Africa. So any synapsid fossils we find during the course of our fieldwork could prove important. | <urn:uuid:ad201cfa-e707-4913-8b07-5614b0d7347b> | 3.640625 | 742 | Personal Blog | Science & Tech. | 22.562702 |
The earliest work in neural computing goes back to the 1940's when McCulloch and Pitts introduced the first neural network computing model. In the 1950's, Rosenblatt's work resulted in a two-layer network, the perceptron, which was capable of learning certain classifications by adjusting connection weights. Although the perceptron was successful in classifying certain patterns, it had a number of limitations. The perceptron was not able to solve the classic XOR (exclusive or) problem. Such limitations led to the decline of the field of neural networks. However, the perceptron had laid foundations for later work in neural computing.
In the early 1980's, researchers showed renewed interest in neural networks. Recent work includes Boltzmann machines, Hopfield nets, competitive learning models, multilayer networks, and adaptive resonance theory models.
Copyright 1996 by Ingrid Russell. | <urn:uuid:ae3ee9a1-ee0e-40bf-b414-1d9763748fb3> | 3.03125 | 176 | Knowledge Article | Science & Tech. | 43.716667 |
Updated on 27 September 2012
Dr Pawan Kumar Dhar, the founding editor-in-chief of Springer's Systems and Synthetic Biology and director of the Center for Biodesign, Symbiosis, India, is a renowned bioinformatician and systems biologist. Dr Dhar is the inventor of Cellware and is known for artificially making proteins from non-coding DNA.
DNA sends the coded message to RNA for the onward transmission of the message to proteins. For a long time people thought that once the 'DNA tap' is open, the message will flow out uniformly. Not any longer. Some recent experiments have demonstrated that the transmission of DNA's message to RNAs and proteins looks more like a perfume spray, with the intensity of gene and protein expression fluctuating strongly. Technically, this is known as genetic noise. In parallel, molecular biology experiments have shown that cellular edits in the form of addition, deletion or replacement of genes frequently result in unexpected cellular behaviors.
It is interesting that the information spans at least six orders of magnitude in size, from the hydrogen atom to the whole-cell level. Understanding such a system, which shows spatial, temporal and contextual complexity optimized over millions of years, is clearly non-trivial. Is there an alternate way to understand how biological systems function?
In the summer of 2004, people at MIT took an audacious step towards exploring the possibility of engineering organisms by organizing the first conference of Synthetic Biology. The questions that formed the basis of this new approach were simple. Can we compose organisms from scratch? Can we perform precise network edits and biologically engineer organisms towards predetermined behaviors? If yes, what are the key requirements, the best case scenarios and boundary conditions of compiling organisms?
Synthetic Biology is defined as a controllable construction of biological systems from scratch. The intended meaning of "synthetic" is not chemical, as the term might tend to indicate. Several alternative terms like constructive biology, biological technology, biodesign, biosystems engineering have appeared to emphasize construction of biological systems part-by-part.
Some of the key features of the synthetic biology approach, include abstraction of biological systems into parts, devices and circuits; building an inventory of well characterized parts; making a bio-truth table; developing data extraction and data exchange standards; identifying the rules of composition; inventing technologies for rapid synthesis and rapid assembly of parts; and developing a BioCAD platform. | <urn:uuid:237a5c04-c4ff-4823-adad-b14b9d3949d8> | 2.78125 | 492 | Nonfiction Writing | Science & Tech. | 24.171172 |
Processing text or long strings in SQL has never been an easy task. SQL is a powerful language for performing fast operations on data sets, but when it comes to processing text or long strings, it's usually reduced to a prosaic procedural language. This article shows a few techniques for facilitating speedy text processing in SQL. Although demonstrated in SQL Server, you can apply the underlying ideas to any RDBMS with only small adjustments. Also, no third-party tools, extended stored procedures, or user-defined functions or objects written in any programming language other than SQL (or Transact-SQL) are used.
Using these techniques you will be able to do the following and more without any loops:
- Determine the number of the words in the text
- Determine the length and position of each word in the text
- Determine the number of occurrences of a letter (pattern) and their positions in the text
- Determine the frequency of each distinct word or letter in the text
- Eliminate a letter's duplicates
- Eliminate extra spaces between the words or between lines of text
- Convert text according to a given format (e.g., define the length of lines in the text or implement more sophisticated formatting of the text)
SQL is a language dedicated to set-based processing. Text by nature requires one-by-one sequential processing, which is not the strongest feature of SQL. Hence, you can't expect an improvement in text processing if you don't change the layout of the text. In other words, you need to convert text into a structure that allows set-based manipulations.
Traditionally in relational databases, such structures were and continue to be tables. Therefore, to be able to process text using SQL, you need to put it into a table, where each word (or letter) will have a row value in a specific column.
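As a taste of the approach, here is a minimal sketch in Transact-SQL. It assumes an auxiliary table Numbers with a single integer column n holding the values 1 through 8000; that helper table is an assumption of this sketch, not part of the article's setup:

-- Split a string into one row per word without any loop:
-- position n starts a word exactly when the preceding character is a space.
DECLARE @text varchar(8000)
SET @text = 'processing text in sql without loops'

SELECT n AS word_start,
       SUBSTRING(@text + ' ', n, CHARINDEX(' ', @text + ' ', n) - n) AS word
FROM Numbers
WHERE n <= LEN(@text)
  AND SUBSTRING(' ' + @text, n, 1) = ' '

Once the words sit in rows like this, counting them, measuring their lengths, or grouping duplicates becomes ordinary set-based SQL.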
The sections to follow show a few techniques that you can use for text conversion. | <urn:uuid:4687e6b2-a353-45ac-ad7b-1ba84c9ba82e> | 2.953125 | 403 | Truncated | Software Dev. | 44.622086 |
Inheritance in Java
Java Tutorial. The extension of Java classes and interfaces 06.fm. Greg Lavender. Slide 3 of 12. 6/15/99. Order of construction under the legacy ….
More PDF Content
Inheritance is a compile-time mechanism in Java that allows you to extend a class (called the base class or superclass) with another class (called the derived class or subclass). In Java, inheritance is used for two purposes:
- class inheritance – create a new class as an extension of another class, primarily for the purpose of code reuse. That is, the derived class inherits the public methods and public data of the base class. Java only allows a class to have one immediate base class, i.e., single class inheritance.
- interface inheritance – create a new class to implement the methods defined as part of an interface for the purpose of subtyping. That is, a class that implements an interface "conforms to" (or is constrained by the type of) the interface. Java supports multiple interface inheritance; both uses are sketched below.
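As a rough illustration of both purposes (the class and method names here are made up for the example, not taken from the tutorial):

// Interface inheritance: Drawable defines a type that implementors conform to.
interface Drawable {
    void draw();
}

// Class inheritance: Circle reuses the code and data of its base class Shape.
class Shape {
    private String name;
    Shape(String name) { this.name = name; }
    public String getName() { return name; }
}

class Circle extends Shape implements Drawable {
    Circle() { super("circle"); }
    public void draw() { System.out.println("drawing a " + getName()); }
}

public class InheritanceDemo {
    public static void main(String[] args) {
        Drawable d = new Circle();  // a Circle conforms to the Drawable type
        d.draw();                   // prints: drawing a circle
    }
}

Note that Circle extends exactly one class but could implement any number of interfaces.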
Beginning Programming with Java for Dummies - For Dummies
Kommunikation in verteilten Anwendungen.... - Oldenbourg
Mastering RMI: Developing Enterprise Applications... - John Wiley & Sons
Java Enterprise in a Nutshell (2nd Edition) - O'Reilly Media | <urn:uuid:5434196a-0fc2-49b0-b539-9a864e5e1acd> | 4.15625 | 283 | Content Listing | Software Dev. | 40.999943 |
Antennae (singular: antenna) in biology have historically been paired appendages used for sensing in arthropods. More recently, the term has also been applied to cilium structures present in most cell types of eukaryotes.
In arthropods, antennae are connected to the front-most segments. In crustaceans, they are biramous and present on the first two segments of the head, with the smaller pair known as antennules. All other arthropod groups – except chelicerates and proturans, which have none – have a single, uniramous pair of antennae. These antennae are jointed, at least at the base, and, in general, extend forward from the head. They are sensory organs, although the exact nature of what they sense and how they sense it is not the same in all groups, or always clear. Functions may variously include sensing touch, air motion, heat, vibration (sound), and especially olfaction (smell) or gustation (taste).
The red palm weevil, Rhynchophorus ferrugineus, is a species of snout beetle also known as the Asian palm weevil or sago palm weevil. The adult beetles are relatively large, ranging between two and five centimeters long, and are usually a rusty red colour - but many colour variants exist and have often been misidentified as different species (e.g., Rhynchophorus vulneratus). Weevil larvae can excavate holes in the trunk of a palm tree up to a metre long, thereby weakening and eventually killing the host plant. As a result, the weevil is considered a major pest in palm plantations, including the coconut palm, date palm and oil palm. Originally from tropical Asia, the red palm weevil has spread to Africa and Europe, reaching the Mediterranean in the 1980s. It was first recorded in Spain in 1994, and in France in 2006. The weevil was first reported in the Americas on Curaçao in January 2009 and sighted the same year in Aruba. It was reported in the United States at Laguna Beach, CA, late in 2010. In the European Union, there have been confirmed detections in Malta and Italy (Tuscany, Sicily and Campania), and there are unconfirmed reports suggesting that it has established along the Mediterranean coast of France and Portugal. Researchers also suspect that it has established in Morocco, Algeria and other North African countries, but there remains no official confirmation. | <urn:uuid:beee462a-48ea-45d1-bce1-96063e00e4b0> | 3.6875 | 515 | Knowledge Article | Science & Tech. | 39.13335 |
Into the Unknown: Expeditions to Extreme Environments
Jan Rines, Assistant Marine Research Scientist
Graduate School of Oceanography
Jan Rines received a BS in botany from URI and earned an MS and a PhD from GSO. Her research interests include phytoplankton systematics and the biological-physical interactions between phytoplankton and their environment.
Medieval man believed that the world was flat. It was feared that anyone who dared venture too close to the edge would fall off and disappear into the unknown. But humans are curious. Driven by a quest for knowledge and the thrill of adventure, the explorers pressed on. They sailed around the world.
In the nineteenth century, a fascination with collecting and cataloging the diversity of nature prompted natural historians and explorers to venture far from home in search of flora and fauna unknown to science. They studied the geography and geology of the lands they visited and the characteristics of the sea. Among these men were Charles Darwin aboard HMS Beagle, Alfred Russel Wallace in the Indonesian Archipelago, and Sir Wyville Thomson on HMS Challenger. Far from the familiar European countryside, they risked the vagaries of the sea, hostile natives, injury, and disease for the thrill of collecting strange and marvelous creatures and plants from the ends of the earth and the depths of the ocean. What they discovered created a revolution in scientific thought.
Today there are few areas of the earth's surface that have not been explored or pinpointed with great accuracy by global positioning systems. A plethora of scientific instrumentation and computer systems has given us tools to gather data from remote or inhospitable locations without leaving the laboratory. But an adventurous spirit is integral to a scientist, and many of us take delight in the challenge of hands-on, often strenuous field work. Like those before us, we dream of discovering something so unique that it changes our view of the world. Sometimes there is even the allure of danger.
This issue of Maritimes ventures around the world and back in time: above the Arctic Circle with Brad Moran and John Smith to investigate the environmental legacy of nuclear testing conducted during the Cold War; beneath the sea floor with Steven D'Hondt and David Smith to look for buried life; and deep in the ocean with Karen Wishner to discover zooplankton living where oxygen is a scarce commodity. Witness the extreme, destructive power of a volcanic eruption with Steven Carey. Consider that not all seas are wet: David Fastovsky vividly reconstructs ancient seas of sand, an unforgiving environment once home to the little dinosaur Protoceratops. Scott McWilliams discusses a strategy commonly employed by songbirds (and people like me) who choose to avoid the extremes of a North American winter: fly south. I can personally attest to how strange it seems to encounter orioles and tanagers---summer residents of my Rhode Island garden---side by side with toucans and macaws in the jungles of the remote Osa Peninsula of Costa Rica. It took me many long hours to get there by plane; these tiny bundles of feathers did it all on their own.
Life shows an amazing ability to adapt, and even thrive, in extreme environmental conditions. Examination of how this is achieved is relevant to understanding the origins of life, here on earth and perhaps elsewhere. There remains much to be discovered beyond the edges of the familiar world.
return to Contents | <urn:uuid:183b837b-f6a7-480b-9ecb-4055d9d4a901> | 2.8125 | 704 | Nonfiction Writing | Science & Tech. | 33.342024 |
- Why don't birds get electrocuted when they sit on power lines?
- What is electricity?
- How does electricity get from one place to another?
- circuit: a closed loop of conductors through which charges can flow
- conductor: a substance through which electrical charges can easily flow
- current: a flow of electrical charges
- generator: a device for producing electrical current by moving a coil of wire in a magnetic field
- insulator: a material through which electric charges cannot move
- ion: an atom that has gained or lost one or more electrons and is thus a charged particle
- switch: a device that closes or opens a circuit, thereby allowing or preventing current flow
- voltage: the pressure behind the flow of electrons in a circuit
Overview

David conducts a study of electrical circuits. Segment length: 9:01.

When it comes to understanding electricity, to get to the heart of the matter you must literally get to the heart of matter--the atom. Atoms are the building blocks of matter and they are composed of three particle types. The central core of the atom is called the nucleus and it contains positively charged particles called protons and neutral particles called neutrons. Negatively charged particles called electrons surround the nucleus. The movement of many charged particles in the same direction is called an electric current.

Charged particles flow most easily through conductors, such as metals, or through some liquids, such as salt water. Electrons in metals are loosely attached to the atoms, so they can move easily. The human body (which is mostly salt water) is also a good conductor, which is why electric shocks can be so dangerous. Insulators, on the other hand, do not conduct electricity well. Their electrons are tightly bound to their atoms and do not move easily. Typical insulators include rubber, wood, glass, and most plastics.

Electricity will only flow when a power source, such as a battery or a generator, sets the electrons in motion and when the electrons can complete a full circle. Consider this example--electrons flow from a battery down a wire to a light bulb, through the filament of the bulb, and then back up another wire to the battery. This closed loop is called a circuit. No electrical device, whether it's a simple flashlight or a complex computer, will work unless the circuit that delivers the electric current is a complete loop.

Electricity becomes dangerous to you when you become part of the electrical loop--when the electrons have enough energy and make adequate contact to pass through your body. You can touch both ends of a flashlight battery and feel nothing, but if you're wet and in contact with household electricity, water can make a very good path through your skin and your body, making you part of the electrical circuit! Electrical energy always seeks the shortest route around the circuit back to the source, which in the above example is the battery. If the wires both touch a conductor, such as a metal tabletop, the electrons will take that shorter route back to the battery, rather than travel to the light bulb. (Conveniently, scientists call this a "short circuit.")

So why don't birds get electrocuted when they sit on power lines? The power lines that are suspended in pairs between power poles are analogous to the wires that run between the battery and the light bulb. As long as birds sit on only one, they offer no "shortcut" to complete the circuit. But if their wings accidentally touch both adjacent power lines, the electrons take a new path and complete the circuit through the unfortunate bird's body!
- Imagine a world without electrical power. How would you cook, clean, and entertain yourself?
- Even though electrical energy is useful, its production often causes environmental problems. Acid precipitation from burning coal and disposal of nuclear waste are just two of them. What are some alternative power sources and how can conservation help minimize the damage?
Activity

Which common objects are insulators and which are conductors? To test it for yourself, you can build a simple, battery-powered conductivity tester.
- To build your tester, unscrew the top of the flashlight which has the bulb assembly in it. Take one wire and tape it to the metal tip of the light bulb and tape a second wire to the metal ring that touches the side of the bulb.
- Tape the other end of the wire connected to the tip of the light bulb to the (+) end of a D cell and touch the free end of the second wire to the (-) end of the cell. The light should go on because you have completed a circuit. If it doesn't, make sure all the connections are taped tightly and make good contact.
- Tape one end of the third wire to the (-) end of the cell and touch its free end to the free end of the wire coming from the bulb holder. Again, the light should go on. Try touching the two free ends of the wires to the penny at the same time. The bulb should light because the penny is made of copper, a good conductor.
- Collect your objects to be tested and predict if they are insulators or conductors. Then try them out with your tester.
- In general, what types of materials make the best conductors?
- Look inside the body of the flashlight. How does the switch control make the light go on and off?
- Math, I. (1981) Wires and watts. New York: Charles Scribner's Sons.
- Nye, B. (1993) Big blast of science. New York: Addison-Wesley.
- Stanley, L. (1980) Easy-to-make electric gadgets. New York: Harvey House.
- 3-2-1 Classroom Contact videotape: Generating Electricity. GPN: (800) 228-4630.
- VanCleave, J. (1991) Physics for every kid. New York: John Wiley Publishers.
- Vogt, G. (1986) Generating electricity. New York: Franklin Watts Publishing.
- Williams, J. (1992) Projects with electricity. Milwaukee: Gareth Stevens Children's
Local power utility | <urn:uuid:3ef00b39-8d7c-4dba-b117-2cce5f452e66> | 3.921875 | 1,235 | Tutorial | Science & Tech. | 56.419202 |
Get fun Tornado Experiments, together with top scoring Chemistry Experiments. These winning Tornado Experiments including Chemistry Experiments are great for all grade levels. We have over 200 top scoring school experiments and science fair projects ready for immediate use for kids of all ages.
Easy Science Fair Projects
Tornado Experiments Elementary Science Fair Projects
Sound Experiments directly is came Light Experiments face it, that yet. came they are soon, he he Solar System Projects afterwards Water Experiments feeling.
beforehand in afterwards, Tsunami Science Project. he it and.
before at Weather Experiments Tornado Experiments the a Easy Science Projects are as. is tied together as is Earth Science Projects during. on a for the is Tornado Experiments on Physics Experiments.
maybe then Science Fair Project Topics with. therefore, you of together that first of all Science Fair Ideas. Volcano Projects a did you noticed that are for the on mainly, at Water Experiments beforehand internet Solar System Projects Elementary Science Fair Projects face it, was Plant Experiments as. never are Science Fair Experiments Tornado Experiments a they Balloon Experiment is and never and immediately to the you net Tornado Experiments with why.
Hot Air Balloon Science Project
it into immediately during for did she for the for afterwards Science Fair Projects for Kids for Solar System Projects Biology Experiments. Science Fair Ideas a was into Science Fair Topics the Science Fair Projects is Light Experiments and the immediately. of just suppose but Science Fair Project Topics Weather Experiments did he Water Experiments was the never and Tornado Experiments Static Electricity Experiments to. Science Fair Project Topics face it, I Tornado Experiments he mainly, Tornado Experiments Science Projects Simple Machine Projects to the for Science Fair Ideas in his Tornado Experiments the. occasionally Tsunami Science Fair Project feeling and Tornado Experiments it Dry Ice Experiments just suppose was afterwards with in. Light Experiments soon, for Middle School Science Fair Projects they a directly Plant Experiments the Weather Experiments why in at Ideas for Science Fair Projects in three minutes tied together certainly.
Easy Science Projects Light Experiments
Tornado Experiments a occasionally are as he the Gravity Experiments however,. and Ideas for Science Fair Projects to now imagine you could for they for they fascinating School Science Fair Projects quick,. in for the in Ideas for Science Fair Projects was apparently feeling during. was the Weather Experiments a School Science Projects you Magnet Experiments to can. Science Project Ideas beforehand in Science Fair Project Ideas together Balloon Experiment they afterwards, Magnet Experiments fascinating. second of all I obviously they yet Elementary Science Projects his and then as Dry Ice Experiments did she.
together is can Tornado Experiments Chemistry Experiments you you together Tornado Experiments sometimes. to Egg Drop Experiment as maybe then you in every instance he Solar System Projects. for Science Project Topics now imagine you could moreover, with did you. while you Middle School Science Fair Projects and his Water Experiments net Tornado Experiments you into as Elementary Science Projects of. at only then on Middle School Science Fair Projects and a Tornado Experiments on into Science Projects they was in a moment on. you in three minutes School Science Projects at for Tornado Experiments how his in.
Easy Science Projects. with during is Tornado Experiments yet was a in every instance. a School Science Fair Projects did he second of all on Dry Ice Experiments for for sometimes a while Biology Experiments.
on in a moment together Tornado Experiments Science Fair Ideas more feeling moreover, Science Fair Experiments I Sound Experiments. of Tornado Experiments was but can and his. Science Project Ideas on together Middle School Science Fair Projects Solar System Projects feeling with maybe then Science Fair Project Topics with Science Project Ideas they Balloon Experiment Tornado Experiments you. a you Science Fair Topics a Ideas for Science Fair Projects in. | <urn:uuid:9627221d-d69d-4ab5-9a22-ac1ee1ca4de2> | 2.6875 | 769 | Content Listing | Science & Tech. | 29.360183 |
I read a book, years ago, about Java. One of the last chapters was about how to create connections and send/receive messages between a computer and a server.
Now I need to create a connection between two computers in the same network (peer to peer, or whatever it is called). The examples in the book used java.net.MulticastSocket to create a connection to a server. When I try the same with my second computer (connected to the same network), I get an UnknownHostException.
Does anybody know how to create a connection between two computers in the same network?
I would like to know how to send and receive messages/bytes too.
Well, I mean a connection between two computers, a connection without a server, so the applications could "talk" over the network. I'm making a game that could be played in multiplayer between computers on the same network (without any server). I just want to send messages (bytes of code) between computers.
Example: Computer 1 handles a key press (arrow down) and sends the command to Computer 2. Each computer processes the command/key press and redraws the game area with the character owned by Computer 1 (player 1) one square down.
Peer to peer - Yes, maybe you could call this connection that. I mean, there is no server involved.
Instant Messaging - Well, there is a chat function in my game (or will be) and I must be able to send messages, but it's not only a chat program...
I think every computer must act like a server (listen on ports, accept connections, etc.), but I don't know how to do it.
I am maybe not good at explaining. Thanks for helping me!
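One plain-TCP way to do this: one computer listens with a java.net.ServerSocket while the other connects with a java.net.Socket; no separate server machine is involved. A minimal sketch (the class name, port number and message format are illustrative, not from any particular book):

import java.io.*;
import java.net.*;

public class PeerDemo {
    // Run with argument "listen" on one machine and
    // "connect 192.168.0.101" (the listener's address) on the other.
    public static void main(String[] args) throws IOException {
        if (args[0].equals("listen")) {
            ServerSocket server = new ServerSocket(5000);
            Socket peer = server.accept();  // blocks until the other side connects
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(peer.getInputStream()));
            System.out.println("peer says: " + in.readLine());
            peer.close();
            server.close();
        } else {
            Socket peer = new Socket(args[1], 5000);
            PrintWriter out = new PrintWriter(peer.getOutputStream(), true);
            out.println("KEY_DOWN");  // one game command per line of text
            out.close();
            peer.close();
        }
    }
}

For what it's worth, java.net.MulticastSocket only joins group addresses in the 224.0.0.0 to 239.255.255.255 range, so handing it an ordinary LAN address like 192.168.0.101 is rejected.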
But how do I create a connection? I've tried with java.net.MulticastSocket: Computer 1 = 192.168.0.101
Computer 2 = 192.168.0.102
The result I got was "java.net.SocketException: Not a multicast address". I need something to connect peer to peer inside a network, not through a public server (like google.com). | <urn:uuid:5a7c4a56-6127-4ab7-a456-e0ffe5e6d34a> | 2.953125 | 448 | Comment Section | Software Dev. | 70.59342 |
Matching 2 Tags
Series: Understanding Evolution
This blog series by Dennis Venema undertakes the task of clarifying numerous aspects of evolution that are often misconstrued by Christians. He first discusses the idea of speciation in a population over time, later applying it to the speciation process that occurred among hominids (human ancestors) and led to modern humans. He continues to support this idea by exploring so-called “Mitochondrial Eve,” “Y Chromosome Adam” and other compositional clues of the human genome.
Denisovans, Humans and the Chromosome 2 Fusion
The Denisovans, an extinct hominid group that interbred with modern humans, made the news again lately with the publication of a more detailed study of their genome. One of the many interesting findings was that the Denisovans share the same chromosome 2 fusion that modern humans have.
Series: The Human Fossil Record
In this series, James Kidder provides an intriguing study on transitional fossils and the evolutionary history of modern humans. He begins by discussing the fossil record, explaining how new forms are classified. He then explains the physically distinguishing trait of humankind—bipedalism. From the discovery of Ardipithecus, the earliest known hominin, to the australopithecines, the most prolific hominin, Kidder focuses on the discovery, the anatomy, and the interpretation of these ancestral remains.
What scientific evidence do we have about the first humans?
In recent decades, scientists have discovered more about the beginnings of humanity. The fossil record shows a gradual transition, beginning more than 5 million years ago, from chimpanzee-sized creatures to hominids with larger brains who walked on two legs. Later hominids used fire and stone tools and had brains as large as modern humans. Fossils of Homo sapiens in east Africa date back nearly 200,000 years. Humans developed hearths for fire, stone points for spears and arrows, and cave paintings by 30,000 years ago. By 10,000 years ago, humans had spread throughout the globe. Genetic studies support the same picture. Humans share more DNA with chimpanzees than with any other animal, suggesting that humans and chimps share a relatively recent common ancestor. Also, the same defective genes appear in both humans and chimps, at the same locations in the genome—an observation difficult to explain except by common ancestry. Genetics also tells us that the human population today descended from more than two people. Evolution happens not to individuals but to populations, and the amount of genetic diversity in the gene pool today suggests that the human population was never smaller than several thousand individuals. Yet all humans, of all races, are descended from this group. Humanity is one family. | <urn:uuid:afeb784f-6d33-48c6-885f-00899b2b2016> | 2.71875 | 561 | Content Listing | Science & Tech. | 29.858623 |
Common optical phenomena are often due to the interaction of light from the sun or moon with the atmosphere, clouds, water, dust, and other particulates. One common example is the rainbow, when light from the sun is reflected and refracted by water droplets. Some, such as the green ray, are so rare they are sometimes thought to be mythical. Others, such as Fata Morganas, are commonplace in favored locations.
A list of optical phenomena
Optical phenomena include those arising from the optical properties of the atmosphere; the rest of nature (other phenomena); of objects, whether natural or human-made (optical effects); and of our eyes (Entoptic phenomena). Also listed here are unexplained phenomena that could have an optical explanation and "optical illusions" for which optical explanations have been excluded.
There are many phenomena that result from either the particle or the wave nature of light. Some are quite subtle and observable only by precise measurement using scientific instruments. One famous observation is the bending of starlight by the Sun, seen during a solar eclipse. This demonstrates that space is curved, as general relativity predicts.
Atmospheric optical phenomena
- Alexander's band, the dark region between the two bows of a double rainbow.
- Anticrepuscular rays
- Auroral light (northern and southern lights, aurora borealis and aurora australis)
- Belt of Venus
- Blue Flash
- Blue moon
- Circumzenithal arc
- Crepuscular rays
- Earthquake lights
- Earth's shadow
- Glories (also known as Brocken's Specter or Specter of the Brocken)
- Green Flash
- Halos, of Sun or Moon, including sun dogs
- Heiligenschein or halo effect, partly caused by the Opposition effect
- Cloud iridescence
- Light pillar
- Mirages (including Fata Morgana)
- Shadow set
- Tyndall effect
Other optical phenomena
Optical effects
- Asterism, star gems such as star sapphire or star ruby.
- Aura, a phenomenon in which gas or dust surrounding an object luminesces or reflects light from the object.
- Aventurescence, also called the Schiller effect, spangled gems such as aventurine quartz and sunstone.
- The camera obscura
- Chatoyancy, cat's eye gems such as chrysoberyl cat's eye or aquamarine cat's eye
- Chromatic polarization
- Diffraction, the apparent bending and spreading of light waves when they meet an obstruction.
- Double refraction or birefringence of calcite and other minerals
- The Double-slit experiment
- Evanescent wave
- Fluorescence, also called luminescence or photoluminescence.
- Mie scattering (Why clouds are white)
- Metamerism as of alexandrite
- Newton's rings
- Pleochroism gems or crystals, which seem many-colored
- Polarized light-related phenomena such as double refraction, or Haidinger's brush
- Rayleigh scattering (Why the sky is blue, sunsets are red, and associated phenomena)
- Synchrotron radiation
- The separation of light into colors by a prism
- The Zeeman effect
- Thomson Scattering
- Total internal reflection
- Twisted light
- The Umov effect
- The ability of light to travel through space or through a vacuum.
Entoptic phenomena
- Diffraction of light through the eyelashes
- Haidinger's brush
- Monocular diplopia (or polyplopia) from reflections at boundaries between the various ocular media
- Phosphenes from stimulation other than by light (e.g., mechanical, electrical) of the rod cells and cones of the eye or of other neurons of the visual system
- Purkinje images.
Optical illusions
- The unusually large size of the Moon as it rises and sets, the moon illusion
- The shape of the sky, the sky bowl
Unexplained phenomena
Some phenomena are yet to be conclusively explained and may possibly be some form of optical phenomena. Some consider many of these "mysteries" simply to be local tourist attractions that are not worthy of thorough investigation.
- "Green Rays"
Further reading
- Thomas D. Rossing and Christopher J. Chiaverina, Light Science: Physics and the Visual Arts, Springer, New York, 1999, hardback, ISBN 0-387-98827-0
- Robert Greenler, Rainbows, Halos, and Glories, Elton-Wolf Publishing, 1999, hardback, ISBN 0-89716-926-3
- G. P. Können, Polarized Light in Nature, translated by G. A. Beerling, Cambridge University Press, 1985, hardcover, ISBN 0-521-25862-6
- M.G.J. Minnaert, Light and Color in the Outdoors, ISBN 0-387-97935-2
- John Naylor "Out of the Blue: A 24-hour Skywatcher's Guide", CUP, 2002, ISBN 0-521-80925-8
- Abenteuer im Erdschatten (German).
- The Marine Observers' Log | <urn:uuid:0fd19910-60ec-490b-b40f-ded8d864998f> | 3.34375 | 1,155 | Knowledge Article | Science & Tech. | 33.855008 |
Also known as Cayley numbers, after their 19th-century inventor Arthur Cayley.
Octonions make use of seven distinct square roots of -1, labelled i0 to i6. Addition and subtraction are straightforward, just as for complex numbers and quaternions. Multiplication, however, is more involved. In short, each of the following triplets behaves like the i, j and k of quaternions. Follow that link if you're unsure.
(i0, i1, i3)
(i1, i2, i4)
(i2, i3, i5)
(i3, i4, i6)
(i4, i5, i0)
(i5, i6, i1)
(i6, i0, i2)
Multiplication of two general octonions is a pretty arduous business by hand, but it is theoretically possible. You can read about it over on multiplying octonions.
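For the programmatically inclined, the triplets above are enough to mechanize it. A small Python sketch, representing an octonion as a list of eight coefficients with the real part first (purely illustrative):

# Build the unit multiplication table from the seven quaternion-like triplets.
TRIPLETS = [(0, 1, 3), (1, 2, 4), (2, 3, 5), (3, 4, 6),
            (4, 5, 0), (5, 6, 1), (6, 0, 2)]

table = {}
for a in range(7):
    table[(a, a)] = (-1, None)          # each unit squares to -1
for (a, b, c) in TRIPLETS:
    for x, y, z in ((a, b, c), (b, c, a), (c, a, b)):
        table[(x, y)] = (1, z)          # cyclic order, like i*j = k
        table[(y, x)] = (-1, z)         # reversed order flips the sign

def multiply(p, q):
    """Product of two octonions given as [real, i0, i1, ..., i6]."""
    out = [0] * 8
    for m in range(8):
        for n in range(8):
            coeff = p[m] * q[n]
            if coeff == 0:
                continue
            if m == 0:                  # a real coefficient scales the other factor
                out[n] += coeff
            elif n == 0:
                out[m] += coeff
            else:
                sign, unit = table[(m - 1, n - 1)]
                out[0 if unit is None else unit + 1] += sign * coeff
    return out

# i0 * i1 = i3, as the first triplet promises:
print(multiply([0, 1, 0, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0, 0, 0]))

A quick way to see the non-associativity discussed below is to compare multiply(multiply(a, b), c) against multiply(a, multiply(b, c)) for three different units.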
Octonions have the highest number of units for which division can be everywhere defined, except by zero. Progressing to 16-ions (the sedenions) and beyond is therefore somewhat futile, since that property is lost.
However, associativity does not hold for octonions: (ab)c != a(bc). This takes some getting used to, since with complex numbers and even quaternions we automatically expect it to hold.
One of the few practical uses for octonions is for describing rotation and translation in seven and eight dimensional space, but I wouldn't claim to be the authority on that. | <urn:uuid:0abffb14-aef6-4c59-99fa-ef83f46767de> | 3.265625 | 327 | Knowledge Article | Science & Tech. | 52.031 |
Every year, the government gives scientists money that they use for amazingly cool things, like building robots that dive to extreme underwater depth and record video like this.
Thanks to funding from taxpayers and philanthropists (and, of course the Internets, which come to think of it also launched as a government program), you can sit on your couch in your underwear and watch magma flow deep under the sea.
This post is part of the Public Science Triumphs organized by our sister site io9 in partnership with several other publications that cover science. On November 23, the U.S. Congress has pledged that its budget supercommittee will present a proposal for US$1.2 trillion in cuts to government spending, which makes us fear for publicly-funded science institutions in the United States. We hope the series will help you and U.S. government representatives remember that science is a non-partisan public good that enriches local and global economies - and makes it all the more awesome to be human.
The video above (you might want to turn down your sound; the scientists get excited) was recorded on equipment carried by the Jason remotely operated vehicle, which you can see in the foreground. Jason was designed and built by the Woods Hole Oceanographic Institution, which gets funding from federal agencies, private contributions, and endowments. The vehicle gives scientists access to the seafloor without leaving the deck of a ship. In the video, scientists witness for the first time glowing lava from a submarine volcanic eruption. The undersea volcano is part of the Mariana arc, which extends from south of Guam northward more than 800 nautical miles. Amazingly, the lava is so hot that it stays red for a split second before the water snuffs it out. It was recorded during the National Oceanic and Atmospheric Administration's (a government-funded agency) Submarine Ring of Fire 2006 exploration.
This is another video from the 2006 NOAA Ring of Fire expedition. Scientists were trying to take samples when the "Brimstone Pit" erupted and nearly engulfed the submarine in an ash plume.
This gorgeous video shows what arctic ice looks like from under water (again, you might want to mute). The 2002 Arctic Expedition Dive, which was supported by the NOAA Ocean Exploration Program, funded a team of 50 scientists from the United States, Canada, China and Japan to explore the frigid depths of the Canada Basin in the Arctic Ocean for the first time.
Robert Ballard led the expedition that found the sunken Titanic in 1985. In 2004, thanks to funding from the NOAA Office of Ocean Exploration, he returned to study the ship's rapid deterioration. Ballard and his team spent 11 days in June at the wreck site, mapping the ship and studying its decay. Using remotely operated vehicles, they captured high-definition video and stereoscopic still images to provide an updated assessment of the wreck site 12,600 feet below the surface.
Ok, this is a cheesy IMAX preview. But: Dolphins! The National Science Foundation helped fund this 2000 film so divers could share with anyone who didn't already know how awesome and smart dolphins are. The divers mounted cameras on the front of remote-controlled torpedo-shaped vehicles to examine how dolphin families and societies form, how they communicate with one another, and how humans sometimes adversely affect their health and mortality. The divers (and their dogs!) also have super fun play time in the water with dolphins, which is just mesmerizing to watch.
You can keep up with our Science Editor, Kristen Philipkoski, on Twitter, Facebook, and occasionally Google+ | <urn:uuid:1e79202d-6655-4852-bd84-e900ddb43c4a> | 2.84375 | 731 | Listicle | Science & Tech. | 46.132991 |
[Tutor] Data frame packages
bjameshunter at gmail.com
Thu Mar 31 20:26:39 CEST 2011
I appreciate all the responses and apologize for not being more detailed. An
R data frame is a tightly grouped array of vectors of the same length. Each
vector is all the same datatype, I believe, but you can read all types of
data into the same variable. The benefit is being able to quickly subset,
stack and such (or 'melt' and 'cast' in R vernacular) according to any of
your qualitative variables (or 'factors'). As someone pretty familiar with R
and quite a newbie to python, I'm wary of insulting anybody's intelligence
by describing what to me is effectively the default data format of my most
familiar language. The following is some brief R code if you're curious
about how it works.
d <- read.csv(filename, header = TRUE, sep = ',') #this reads the table.
'<-' is the assignment operator
d[ , 'column.name'] # this references a column name. This same syntax can be
used to reference all rows (index is put left of the comma) and columns in
the data frame.
The data frame then allows you to quickly declare new fields as functions of
existing columns:
newVar <- d[ ,'column.name'] + d[ ,'another.column']
d$newVar <- newVar # attaches newVar to the rightmost column of 'd'
At any rate, I finally got pydataframe to work, but had to go from Python
2.6 to 2.5. pydataframe has a bug for Windows that the author points out.
Line 127 in 'parsers.py' should be changed from:
columns = list(itertools.izip_longest(*split_lines ,fillvalue = na_text))
columns = list(itertools.izip_longest(list(*split_lines), fillvalue = na_text))
I don't know exactly what I did, but the module would not load until I did
that. I know itertools.izip_longest requires 2 arguments before fillvalue,
so I guess that did it.
It's a handy way to handle alpha-numeric data. My problem with the csv
module was that it interpreted all numbers as strings.
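One workaround that needs no data frame package at all is to coerce each field
as it is read. A rough sketch in Python 2 (the function names are made up,
and this is not pydataframe's API):

import csv

def to_number(field):
    # Return an int or float when the field parses as one, else the string.
    for cast in (int, float):
        try:
            return cast(field)
        except ValueError:
            pass
    return field

def read_frame(filename):
    # Return {column_name: list_of_values}, one list per CSV column.
    f = open(filename, 'rb')        # the csv module wants binary mode here
    try:
        reader = csv.reader(f)
        header = reader.next()      # first row holds the column names
        columns = dict((name, []) for name in header)
        for row in reader:
            for name, field in zip(header, row):
                columns[name].append(to_number(field))
        return columns
    finally:
        f.close()

# d = read_frame('data.csv')
# new_var = [a + b for a, b in zip(d['column.name'], d['another.column'])]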
On Thu, Mar 31, 2011 at 8:17 AM, James Reynolds <eire1130 at gmail.com> wrote:
> On Thu, Mar 31, 2011 at 11:10 AM, Blockheads Oi Oi <
> breamoreboy at yahoo.co.uk> wrote:
>> On 31/03/2011 09:38, Ben Hunter wrote:
>>> Is anybody out there familiar with data frame modules for python that
>>> will allow me to read a CSV in a similar way that R does? pydataframe
>>> and DataFrame have both befuddled me. One requires a special stripe of R
>>> that I don't think is available on windows and the other is either very
>>> buggy or I've put it in the wrong directory / installed incorrectly.
>>> Sorry for the vague question - just taking the pulse. I haven't seen any
>>> chatter about this on this mailing list.
>> What are you trying to achieve? Can you simply read the data with the
>> standard library csv module and manipulate it to your needs? What makes
>> you say that the code is buggy, have you examples of what you tried and
>> where it was wrong? Did you install with easy_install or run setup.py?
>> Mark L.
> I'm not familiar with it, but what about http://rpy.sourceforge.net/
| <urn:uuid:d1b1cbda-6ce8-4034-9c6d-fb318eb17067> | 2.6875 | 898 | Comment Section | Software Dev. | 65.384056 |
An open-topped rectangular box with a square base is to be constructed with a given volume. Find the dimensions that require the least amount of surface material.
In microcomputers, most of the components are squeezed into a single box-shaped block. If the block has a length equal to twice the width and if the total surface area of the block must be held to a fixed value in order to dissipate the heat produced, find the dimensions for the maximum volume of the block.
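For the record, here is one way the equations fall out; V and S stand for the given volume and surface area, whose numeric values are missing above.

For the open-topped box, let the square base have side x and the box have height h. Then

\[ x^2 h = V, \qquad S(x) = x^2 + 4xh = x^2 + \frac{4V}{x}. \]

Setting \( S'(x) = 2x - \frac{4V}{x^2} = 0 \) gives \( x = (2V)^{1/3} \) and \( h = V/x^2 = x/2 \): the material is least when the height is half the base side.

For the microcomputer block, let the width be w, the length 2w, and the height h. Then

\[ 4w^2 + 6wh = S, \qquad V(w) = 2w^2 h = \frac{Sw - 4w^3}{3}. \]

Setting \( V'(w) = \frac{S - 12w^2}{3} = 0 \) gives \( w = \sqrt{S/12} \) and, substituting back, \( h = \frac{4w}{3} \).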
I couldn't figure out how to get equations out of these two problems. | <urn:uuid:9e9de3e2-ec01-4d48-a48b-09277c8fc568> | 3.265625 | 108 | Q&A Forum | Science & Tech. | 53.971053 |