When we talk about climate change and the threat heading toward us, we usually talk about isolated events. The most common examples are the carbon dioxide emissions caused by our industry, our vehicles and our agriculture. What is not talked about so much is the positive and negative feedback that these events trigger. A positive feedback response is when one event amplifies another, while a negative feedback reduces it. Positive feedback loops speed up the warming of the planet, and negative ones cool it down.
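A toy numerical sketch (my own illustration, not from the text) of the difference: with a positive gain each step's change feeds a further change in the same direction, while a negative gain counteracts it.

```python
# Toy illustration of feedback, not a climate model.
# gain > 0 amplifies each step's change (positive feedback);
# gain < 0 counteracts it (negative feedback).

def run_feedback(initial_change, gain, steps):
    """Accumulate a perturbation where each step's change is
    `gain` times the previous step's change."""
    total = 0.0
    change = initial_change
    for _ in range(steps):
        total += change
        change *= gain
    return total

# With a gain of 0.5, an initial 1-degree push grows toward 2 degrees in total;
# with a gain of -0.5, the same push settles near 0.67 degrees.
warming_amplified = run_feedback(1.0, 0.5, 50)
warming_damped = run_feedback(1.0, -0.5, 50)
print(round(warming_amplified, 3), round(warming_damped, 3))  # 2.0 0.667
```

This is just the geometric series 1 + g + g² + …, which converges to 1/(1 − g) for |g| < 1; a gain at or above 1 would be a runaway feedback.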
A Biological Assessment of the Lakekamu Basin, Papua New Guinea: June, 1998
Andrew L. Mack
Conservation International, Department of Conservation Biology, 1998 - 187 pages
The Lakekamu Basin of Papua New Guinea encompasses roughly 2,500 square kilometers of pristine wilderness lowland rain forest. Exceptional both in terms of numbers of species (e.g. the greatest diversity of ant species known from anywhere in the world) and species found nowhere else, the basin is also sparsely populated, roadless, and so far unexploited by large-scale logging or mining projects, making it an excellent conservation opportunity.
This book provides the results of an intensive survey of the basin's plants, insects, fish, reptiles and amphibians, mammals, and birds. During just one month of field work, 35-47 new species and possibly new genera were discovered, highlighting the biological richness of this region.
Deoxyribonucleic acid, or DNA, stores all of our genetic information. It makes up the genes of all cells (animal and plant) as well as many viruses. In this module, we will examine the structure of DNA so that we can better understand its interaction with cisplatin. We will also examine replication and transcription, which are the processes by which genetic information is expressed. Finally, we will describe one mechanism by which damaged DNA is repaired in the cell.
The nitrogenous base is a derivative of either purine or pyrimidine.
In DNA, the purines are adenine (A) and guanine (G), and the pyrimidines are thymine (T) and cytosine (C).
The sugar present in a deoxyribonucleotide is deoxyribose, which lacks an oxygen that is present in its parent compound, ribose.
The sugars and phosphate groups make up the backbone of DNA and therefore provide DNA's structure; the backbone of DNA does not vary. The sequence of the bases does vary, however, and this is how genetic information is carried. The structure of part of a DNA chain is shown below.
The three-dimensional structure of DNA was deduced in 1953 by James Watson and Francis Crick. Their model of DNA had five important features: 1. Two helical chains of polynucleotides are coiled around a common axis; the chains run in opposite directions, as shown in Figure 1.
Figure 1. Model of a double-helical DNA. One chain is shown in green and the other in red. The purine and pyrimidine bases are shown in lighter colors than the sugar-phosphate backbone. (a) Axial view. The structure repeats along the helix axis (vertical) at intervals of 34 Å, which corresponds to 10 residues on each chain. (b) A schematic "ribbon" representation of an axial view of DNA. (Courtesy of M. Meselson and F. W. Stahl) (c) Radial view, looking down the helix axis. Reprinted with permission.1 2. The bases are on the inside of the helix, whereas the backbone (the sugars and phosphates) are on the outside. The bases are perpendicular to the helix axis and are said to stack (as plates are stacked when they are stored). The sugars are nearly parallel to the helix axis. See Figure 2.
Figure 2. Diagram of one of the strands of a DNA double helix, viewed down the helix axis. The bases (all pyrimidines here) are inside, whereas the sugar-phosphate backbone is outside. The bases are shown in blue and the sugars in red. Reprinted with permission.1

3. The diameter of the helix is 20 Å. The helical structure repeats after ten deoxyribonucleotide units on each chain (at intervals of 34 Å), as shown in Figure 1. 4. The two chains are held together by hydrogen bonds between pairs of bases. Adenine and thymine are always paired (and are joined by two hydrogen bonds); guanine and cytosine are always paired (and are joined by three hydrogen bonds), as shown in Figure 3.
Figure 3. Hydrogen bonding interactions (a) between adenine and thymine, and (b) between guanine and cytosine. Reprinted with permission.1

5. There are no restrictions on the sequence of bases along a polynucleotide chain; however, the precise sequence of bases carries the genetic information.

Replication1

DNA duplicates itself by a process called replication. In replication, a strand from the original DNA molecule acts as a template for the formation of a new strand. The template can be either single- or double-stranded DNA; single-stranded DNA is formed when the hydrogen bonds between base pairs are broken. More than 20 proteins are required for the process of DNA replication. One of these is DNA polymerase I, which catalyzes the step-by-step addition of activated deoxyribonucleotide units to a DNA chain; the activated deoxyribonucleotide units for adenine, guanine, thymine, and cytosine are represented as dATP, dGTP, dTTP, and dCTP, respectively. Divalent magnesium (Mg2+) is also required for replication to occur. A schematic representation of DNA replication is shown in Figure 4.
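Because of these fixed pairing rules, a template strand fully determines the new strand. A minimal sketch in Python (function name and layout are mine, for illustration only):

```python
# Watson-Crick pairing: A-T (two hydrogen bonds), G-C (three hydrogen bonds).
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def replicate(template):
    """Return the strand a template strand determines during replication.

    The two chains of the helix run in opposite directions, so the new
    strand is the complement read in reverse.
    """
    return "".join(PAIR[base] for base in reversed(template))

print(replicate("ATGC"))  # GCAT
```

Note that replicating the new strand in turn recovers the original sequence, which is why each strand can serve as a template for the other.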
In short, replication describes the process by which genetic information is stored and transmitted.

Transcription1

Genetic information is expressed through the processes of transcription, in which RNA (ribonucleic acid) is synthesized from a DNA template strand, and translation, in which proteins are synthesized from RNA (specifically, messenger RNA) templates. We will briefly discuss transcription here. Like DNA, RNA is made up of a sugar-phosphate backbone having four possible nitrogenous bases. Unlike DNA, however, ribose (shown below), not deoxyribose, is found in RNA. The four nitrogenous bases in RNA are adenine, guanine, cytosine, and uracil (instead of thymine), as shown below.
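The base substitution in RNA can be sketched the same way: transcription pairs each template base as in DNA, except that adenine in the template pairs with uracil rather than thymine. A small illustration (my own, not from the source):

```python
# Pairing during transcription: like DNA, except template A pairs with U.
RNA_PAIR = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template):
    """Return the RNA strand synthesized from a DNA template strand."""
    return "".join(RNA_PAIR[base] for base in template)

print(transcribe("TACGGT"))  # AUGCCA
```

The resulting RNA never contains thymine, which is one quick way to tell the two kinds of sequence apart.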
Figure 4. A schematic representation of DNA replication, showing the addition of a guanine residue first, and then a thymine residue. Reprinted with permission.1
An enzyme called RNA polymerase catalyzes the initiation and elongation of RNA chains, in which one activated ribonucleotide unit (represented by ATP, GTP, UTP, and CTP) is added at a time. The preferred template for transcription is double-stranded DNA. A divalent metal ion (either Mg2+ or Mn2+) is required for transcription. In short, transcription describes the first step in the expression of the genetic information stored by DNA.

Repair1,2

DNA is damaged in a variety of ways, and all cells have mechanisms by which to repair the damage. DNA repair is possible because genetic information is stored in both strands of the double helix; therefore, information lost from one strand can be retrieved from the other. One way in which DNA becomes damaged is by the covalent cross-linking of DNA bases; cross-links can be formed from bases on either the same strand of the helix (intrastrand cross-linking) or opposite strands of the helix (interstrand cross-linking). Covalent intrastrand cross-links can be formed by chemical agents, such as cisplatin, or by ultraviolet light, leading to the formation of a pyrimidine dimer, as shown in Figure 5.
Such a pyrimidine dimer cannot fit into the double helix, so the normal functions of the cell (such as replication and transcription) are blocked until the dimer is removed. The removal, or excision, of the dimer and the subsequent repair of the damaged DNA strand is referred to as the excision repair system and requires the action of several enzymes, three of which are described here and shown in Figure 6. After recognition of DNA damage by a damage recognition protein, the DNA strand containing the dimer is cut at two sites by an excinuclease. Then, the cut DNA strand is regenerated by DNA polymerase I. Finally, the newly replicated portion of the DNA strand is joined to the undamaged part of the DNA strand by DNA ligase.
Figure 5. A thymine dimer, which is an example of a pyrimidine dimer. Reprinted with permission.1
As we have seen from studies into its mode of action, covalent intrastrand cross-links can also be formed by chemical agents such as cisplatin. A similar excision mechanism is believed to repair the damage done to cancer cells by cisplatin, as we will see when we discuss drug resistance.
Figure 6. Repair of a region of DNA containing a thymine dimer by the sequential action of a specific excinuclease, a DNA polymerase, and a DNA ligase. The thymine dimer is shown in blue, and the new region of DNA in red. (Courtesy of P. C. Hanawalt) Reprinted with permission.1
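The three enzymatic steps can be mimicked in a toy string model (entirely my own construction; the function and strand names are hypothetical, and real excision repair is far more elaborate): the excinuclease cuts out the damaged span, DNA polymerase I resynthesizes it from the undamaged strand, and DNA ligase seals the joins.

```python
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def excision_repair(damaged, undamaged, start, end):
    """Toy model of excision repair: the lesion spans damaged[start:end]."""
    # Excinuclease: cut the damaged strand on both sides of the lesion.
    left, right = damaged[:start], damaged[end:]
    # DNA polymerase I: resynthesize the missing region from the other strand.
    patch = "".join(PAIR[base] for base in undamaged[start:end])
    # DNA ligase: join the new segment to the undamaged flanks.
    return left + patch + right

strand = "ATGGTTCA"
complement = "".join(PAIR[base] for base in strand)  # TACCAAGT
broken = "ATGG**CA"  # thymine dimer at positions 4-5 blocks replication
print(excision_repair(broken, complement, 4, 6))  # ATGGTTCA
```

The repair works only because the complementary strand still carries the lost information, which is the point made above about information being stored in both strands.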
(1) Stryer, L. Biochemistry, 4th ed.; W. H. Freeman and Company: New York, 1995; Chapters 4, 5, and 31. (2) Pil, P.; Lippard, S. J. In Encyclopedia of Cancer; Bertino, J. R., Ed.; Academic Press: San Diego, CA, 1997; Vol. 1, pp 392-410.
White sharks have a number of markings that may serve a social purpose. The pectoral fins, for instance, feature black tips on the undersurface and white patches on the trailing edge. Both markings are all but concealed when the sharks swim normally, but are flashed during certain social interactions. And a white patch that covers the base of the lower lobe of the shark’s two-pronged tail may be important when one shark follows another. But if those markings help white sharks signal to one another, they may also make the sharks more visible to their prey. And if so, the trade-off between camouflage and social signaling demonstrates the importance of social interactions among white sharks.
Complex social behaviors and predatory strategies imply intelligence. White sharks can certainly learn. The average shark at Seal Island catches its seal on 47 percent of its attempts. Older white sharks, however, hunt farther from the Launch Pad and enjoy much higher success rates than youngsters do. Certain white sharks at Seal Island that employ predatory tactics all their own catch their seals nearly 80 percent of the time. For example, most white sharks give up if a seal escapes, but a large female we call Rasta (for her extremely mellow disposition toward people and boats) is a relentless pursuer, and she can precisely anticipate a seal’s movements. She almost always claims her mark, and seems to have honed her hunting skills to a sharp edge through trial-and-error learning.
We are also learning that white sharks are highly curious creatures that systematically escalate their explorations from the visual to the tactile. Typically, they nip and nibble to investigate with their teeth and gums, which are remarkably dexterous and much more sensitive than their skin. Intriguingly, highly scarred individuals are always fearless when they make “tactile explorations” of our vessel, lines, and cages. By contrast, unscarred sharks are uniformly timid in their investigations. Some white sharks are so skittish that they flinch and veer away when they notice the smallest change in their environment. When such sharks resume their investigations, they do so from a greater distance. In fact, over the years we have observed remarkable consistency in the personalities of individual sharks. In addition to hunting style and degree of timidity, sharks are also consistent in such traits as their angle and direction of approach to an object of interest.
Any discussion of white sharks must acknowledge their occasional, though much-publicized, “attacks” on people. The vast majority of them, however, bear no resemblance to shark attacks on prey. The attacks on people are slow and deliberate, and the resulting wounds are relatively minor compared with the wounds inflicted on prey. About 85 percent of the victims survive. Deaths do occur from blood loss, but there are very few verified cases in which a white shark actually consumed a person. Clearly, we are not on their menu.
Klimley suggests that, compared with blubbery marine mammals, people are simply too muscular to constitute a worthwhile meal. Our view is different: we believe that white sharks probably bite people not to eat them but to satisfy their curiosity. Fortunately, the shark’s investigation of a person is usually interrupted by the victim’s brave companions.
For all the fear white sharks inspire, it is ironic that people probably pose the single greatest threat to white sharks. People kill them for sport and trophies, and hunt them to reduce their populations near swimming and surfing beaches. In addition, there’s a flourishing and lucrative black market in white-shark jaws, teeth, and fins, even though such trade is illegal under international law. White sharks take between nine and sixteen years to reach maturity, and females give birth to just two to ten pups every two or three years. Such a life in the slow lane makes the white shark extremely vulnerable to even moderate levels of fishing.
In recent studies, electronic tags attached to individual white sharks and monitored by satellites have shown that the animals can swim thousands of miles a year. One individual swam from Mossel Bay, South Africa, to Exmouth, Western Australia, and back—a round trip of 12,420 miles—in just nine months. Such long-distance swimming may take white sharks through the territorial waters of several nations, making the sharks hard to protect (not to mention hard to study). Yet a better understanding of their habitat needs, their movement patterns, their role in the marine ecosystem, and their social lives is critical to the species’ survival.
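A quick back-of-envelope check of the tagging figures quoted above (the nine months is approximate, so the result is too):

```python
# Mossel Bay to Exmouth and back: 12,420 miles in roughly nine months.
miles = 12_420
days = 9 * 30  # ~270 days, taking a month as 30 days

miles_per_day = miles / days
print(round(miles_per_day))  # ~46 miles per day, sustained for months
```

That sustained pace, crossing several nations' territorial waters, is what makes these animals so hard to protect under any single jurisdiction.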
As September approaches, the white sharks’ hunting season at Seal Island draws to a close. Soon most of them will depart, remaining abroad until their return next May. The Cape fur seal pups that have survived this long have become experienced in the deadly dance between predator and prey. They are bigger, stronger, wiser—and thus much harder to catch. The handful of white sharks that remain in False Bay year-round probably shift to feeding on fishes such as yellowtail tuna, bull rays, and smaller sharks. In effect, they seasonally switch feeding strategies from energy maximization to numbers maximization.
Next May we, too, will return. But fieldwork always has its surprises, and we cannot predict what the white sharks of Seal Island will have in store for us.
A floor is covered by a tessellation of equilateral triangles, each having three equal arcs inside it. What proportion of the area of the tessellation is shaded?
Investigate the different ways of cutting a perfectly circular pie into equal pieces using exactly 3 cuts. The cuts have to be along chords of the circle (which might be diameters).
The three corners of a triangle are sitting on a circle. The angles are called Angle A, Angle B and Angle C. The dot in the middle of the circle shows the centre. The counter is measuring the size of Angle A in degrees. What is the smallest Angle A can be? What is the largest Angle A can be? What else do you notice about Angle A as you move the corners of the triangle around the circle?
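One thing worth noticing is the inscribed angle theorem: as long as corners B and C stay fixed, Angle A is the same wherever corner A sits on its arc, and it equals half the central angle over the same arc. A numerical check (the helper names are mine):

```python
import math

def angle_at(p, q, r):
    """Angle at vertex p in triangle pqr, in degrees."""
    v1 = (q[0] - p[0], q[1] - p[1])
    v2 = (r[0] - p[0], r[1] - p[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def on_circle(theta_deg):
    """Point on the unit circle at the given angle from the centre."""
    t = math.radians(theta_deg)
    return (math.cos(t), math.sin(t))

# Fix corners B and C 100 degrees apart; slide corner A around the major arc.
B, C = on_circle(0.0), on_circle(100.0)
for t in (150.0, 200.0, 300.0):
    A = on_circle(t)
    print(round(angle_at(A, B, C), 6))  # 50.0 every time: half the central angle
```

Moving A onto the minor arc instead gives the supplementary angle, 130 degrees, which is the other pattern the activity invites you to spot.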
Remember that folding an equilateral triangle in half gives you a 30-60-90 triangle and the ratios of the lengths of the sides.
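Those ratios fall straight out of the Pythagorean theorem; a quick verification (my own, with side 2 chosen for convenience):

```python
import math

# Fold an equilateral triangle of side 2 in half: the fold gives a right
# triangle whose hypotenuse is the original side and whose short leg is
# half the base -- the 30-60-90 side ratio 1 : sqrt(3) : 2.
hypotenuse = 2.0
short_leg = hypotenuse / 2
long_leg = math.sqrt(hypotenuse**2 - short_leg**2)

print(short_leg, hypotenuse)  # 1.0 2.0; the long leg is sqrt(3) ~ 1.732
```

Scaling all three sides by the same factor preserves the ratio, so any 30-60-90 triangle has sides in proportion 1 : √3 : 2.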
One of the primary drawbacks to using solar energy as a source of electricity is the unreliability of the harvesting process. The intensity of the sun varies from place to place, and from one season to another in the same place on earth. There is also the obvious drawback that the sun is not accessible 24 hours a day when harvesting solar power from an earth-based location.
For over 40 years there have been studies performed to determine the feasibility of developing space solar power (SSP). A recent study group with the backing of the International Academy of Astronautics has completed a 3 year assessment of using Solar Power Satellites (SPS) to harvest sunlight in space and then deliver it via wireless power transmission to Earth.
The actual goals of this particular study were to determine the role SSP might play in meeting the increasing need for sustainable energy in high quantities over the next century, to assess the readiness and risks associated with the SSP concept and then attempt to frame a roadmap that might help to realize the concept.
What the study concluded was that while the concept of space solar power is technically feasible it will only become economically possible when the technology has been properly developed and matured. It will also only be possible with a significant international consortium prepared to inject considerable funding into the project. The potential figures that may be needed were not actually suggested in the study.
The ability to access the sun 24 hours a day would be a matter of setting up a series of satellites sent into specific orbits designed for the specific job of harvesting the sun's rays. The idea that solar energy is only a part-time energy solution would be overcome, turning it into a more reliable and consistent source of energy. It would be assumed that the technological advancements achieved to reach the point where this concept could become a reality would also mean that earth-based solar energy solutions will have been vastly improved as well.
Possible advantages for establishing a space based solar power system include:
- The ability to scale the network of power satellites as needed
- The access to solar power is constant 24 hours a day.
- The payback time would be shorter than for other solar power methods because of the constant feed of energy.
- Construction of the satellites is a known factor that has already been achieved.
- The materials used to construct the satellites are far lighter than earth-bound equivalents.
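A rough sketch of why round-the-clock sunlight shortens payback. The capacity figures below are my own illustrative assumptions, not numbers from the IAA study:

```python
# Illustrative assumptions, not figures from the IAA study.
hours_per_day_space = 24.0   # a well-chosen orbit sees the sun almost continuously
hours_per_day_ground = 6.0   # rough useful full-sun hours at a good ground site

advantage = hours_per_day_space / hours_per_day_ground
print(advantage)  # 4.0: each watt of space capacity yields ~4x the daily energy
```

Under these assumptions, each installed watt in orbit delivers several times the daily energy of the same watt on the ground, which is the "constant feed" argument behind the shorter payback bullet above.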
To read the final report of the IAA study in its entirety (it's a 249-page document), you can find the full text here.
Thursday, 3 June 2010
The stratosphere is cooling!
Look at the amazing figure at the top! This may very well be the most important figure displayed in this eminent blog ever, and we have already had some pretty important figures! The figure comes from the Remote Sensing Systems webpage. The different curves show how the temperature has changed since 1980 according to satellite measurements at different heights in the atmosphere. The people from Remote Sensing Systems say that the curves are from different channels, and where in the atmosphere those channels are can be seen in the figure to the right. Beware: in the figure at the top, the curves for the low channels are at the top and the curves for the high channels are at the bottom.
Look very carefully at the curves. The curve for the lowest part of the atmosphere (TLT, at the top) shows an increasing temperature, ie warming. However, the curve for the highest part of the atmosphere, i.e. the stratosphere (TLS, at the bottom), shows a decreasing temperature, ie cooling!
I'm going to repeat that again, because this is probably the most important thing I have ever written on this blog. The lower atmosphere is warming, but the higher atmosphere is cooling!
How could this be, if we have global warming? Then all the atmosphere should be warming! Anything else would be a violation of the second law of thermodynamics and the Stefan-Boltzmann law! I could explain that with complicated mathematical formulas, but then you, dear reader, couldn't follow my train of thought anymore. It suffices to say that one is not allowed to violate natural laws. If the so-called climate scientists knew as much about physics and mathematics as I do, they would also understand that, but they don't.
So it is apparent that Earth is not warming. We only have a redistribution of temperature in the atmosphere. This is an incontestable fact. But how could this be? The explanation is quite simple - an increase in the force of gravity. When gravity increases, the air atoms are drawn towards the ground, and that results in a higher air pressure. According to the ideal gas law, higher pressure leads to higher temperature. As there are fewer air atoms at the top of the atmosphere, there is a lower pressure and hence a lower temperature. It is all quite elementary if one only is aware of the basic laws of physics. Unfortunately, the so-called climate scientists are not, and their so-called climate models are completely and utterly useless; mere figments of their twisted imaginations!
But do we have independent evidence that the force of gravity is increasing? Yes, indeed we do. We are the evidence ourselves, each time we step up on the bathroom scale. Most people are actually getting heavier and heavier. Chances are that you are getting heavier and heavier, dear reader! Admit it! The graph to the right shows how the people in the Netherlands are getting heavier (percent of men (green), women (red) and both (blue) with a BMI of 30 or more). The same pattern can be seen in other countries. This can only be explained by an increase in the force of gravity. It clearly cannot have anything to do with a minuscule increase in the concentration of carbon dioxide atoms in the atmosphere, as the global warming scammers would like us to believe. Therefore, it has to be the force of gravity.
A bove maiore discit arare minor. (From the greater ox, the lesser learns to plough.)
Science Fair Project Encyclopedia
A crater (basin or impact crater) is a circular depression on a surface, usually referring to a planet, moon, asteroid, or other celestial body. Craters are caused by meteorite impacts or electrical discharge, although some are caused by volcanic activity (see volcano for more details), or karstic erosion (see Karst Crater for more details). In the center of craters on Earth a crater lake often accumulates, and in craters formed by meteorites a central island (caused by rebounding crustal rock after the impact) is usually a prominent feature in the lake.
Ancient craters whose relief has disappeared leaving only a "ghost" of a crater are known as palimpsests. Although it might be assumed that a major impact on the Earth would leave behind absolutely unmistakable evidence, in fact the gradual processes that change the surface of the Earth tend to cover the effects of impacts. Erosion by wind and water, deposits of wind-blown sand and water-carried sediment, and lava flows in due time tend to obscure or bury the craters left by impacts. Simple slumping of weak crustal material can also play a role, especially on outer solar system bodies such as Callisto which are covered in a crust of ice.
However, some evidence remains, and over 150 major craters have been identified on the Earth. Studies of these craters have allowed geologists to find the remaining traces of other craters that have mostly been obliterated. Impact craters are found on nearly all solid surface planets and satellites. As the number of impact craters increases on a surface, the appearance of the surfaces changes; this can be used to establish the age of extraterrestrial terrain. After a period of time, however, an equilibrium is reached in which old craters are destroyed as quickly as new craters form.
Daniel Barringer was one of the first to identify a geological structure as an impact crater, the Barringer Meteorite Crater (or the "Meteor Crater") in Arizona, but at the time his ideas were not widely accepted, and when they were, there was no recognition of the fact that Earth impacts are common in geological terms.
In the 1920s, the American geologist Walter H. Bucher studied a number of craters in the US. He concluded they had been created by some great explosive event, but believed they were the result of some massive volcanic eruption. However, in 1936, the geologists John D. Boon and Claude C. Albritton Jr. revisited Bucher's studies and concluded the craters he studied were probably formed by impacts.
The issue remained more or less speculative until the 1960s. A number of researchers, most notably Eugene M. Shoemaker, conducted detailed studies of the craters that provided clear evidence that they had been created by impacts, identifying the shock-metamorphic effects uniquely associated with impacts, of which the most familiar is Shocked quartz.
Armed with the knowledge of shock-metamorphic features, Carlyle S. Beals and colleagues at the Dominion Observatory, (Victoria, British Columbia, Canada), and Wolf von Engelhardt of the University of Tübingen in Germany began a methodical search for "impact structures". By 1970, they had tentatively identified more than 50.
Their work remained controversial, but the American Apollo Moon landings, which were in progress at the time, provided evidence of the rate of impact cratering on the Moon. Processes of erosion on the Moon are minimal and so craters persist almost indefinitely. Since the Earth could be expected to have roughly the same cratering rate as the Moon, it became clear that the Earth had suffered far more impacts than could be seen by counting evident craters.
The age of known impact craters on the Earth ranges from about a thousand (e.g. the Haviland crater in Kansas) to almost two billion years, though few older than 200 million years have been found, as geological processes tend to obliterate older ones. They are also selectively found in the stable interior regions of continents. Few underwater craters have been discovered because of the difficulty of surveying the sea floor; the rapid rate of change of the ocean bottom; and the subduction of the ocean floor into the Earth's interior by processes of plate tectonics.
Current estimates of the rate of cratering on the Earth suggest that from one to three craters with a width greater than 20 kilometers are created every million years. This indicates that there are far more relatively young craters on the planet than have been discovered so far.
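The arithmetic behind that claim is straightforward; taking the quoted production rate over just the most recent 100 million years (my choice of span, for illustration):

```python
# One to three craters wider than 20 km per million years, per the estimate above.
rate_low, rate_high = 1, 3   # craters (>20 km) per million years
span_myr = 100               # illustrative span: the last 100 million years

expected_low = rate_low * span_myr
expected_high = rate_high * span_myr
print(expected_low, expected_high)  # 100 to 300 such craters
```

Even the low end of that range exceeds the roughly 150 craters of all sizes identified so far, which is why many relatively young craters must still await discovery.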
Formation and structure
An object falling from open space hits the Earth with a minimum velocity of 11.6 km/s (7 mi/s). Since the energy from motion grows as the square of the velocity, this gives moving rock more energy per kilogram than ordinary chemical explosives. Massive objects can easily cause kiloton explosions that resemble nuclear explosions. Seismographs record about one multikiloton impact somewhere on the Earth each year, usually in mid-ocean.
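The comparison with chemical explosives follows directly from the kinetic energy formula ½mv². A quick check at the minimum impact speed quoted above, using roughly 4.2 MJ/kg for TNT (a commonly used figure, stated here as an assumption):

```python
# Specific kinetic energy of an impactor at the minimum Earth-impact speed.
v = 11_600.0                  # m/s (11.6 km/s)
energy_per_kg = 0.5 * v**2    # J per kg of impactor

tnt_per_kg = 4.2e6            # J released per kg of TNT (assumed figure)
ratio = energy_per_kg / tnt_per_kg
print(round(energy_per_kg / 1e6, 1), round(ratio))  # 67.3 MJ/kg, ~16x TNT
```

Since the energy grows as the square of the velocity, faster impactors (comets can arrive well above this minimum) carry proportionally far more energy per kilogram.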
If the object weighs more than 1,000 tonnes, an atmosphere does not slow it down much, though smaller bodies can be substantially slowed by atmospheric drag, as they have a higher ratio of surface area to mass. In any case, the temperatures and pressures on the object are extremely high. These temperature and pressure extremes can destroy chondritic or carbonaceous chondritic bodies before they ever reach ground, but metallic iron-nickel meteorites have more structural integrity and can strike the surface of the Earth in a violent explosion.
When the object hits, it compresses a column of air, water and rock into an extremely hot plasma. This plasma expands violently, and cools rapidly (i.e. it explodes). The plasma and other ejecta splashes at orbital or near-orbital speeds. It can be thrown off into space, or can travel several times around the planet before re-entering as secondary meteors. Airless planets usually preserve stains of the ejecta around impact craters as a pattern of "rays". It should be noted that other non-impact theories for crater-ray formation have been suggested in the scientific literature.
Very energetic chemistry occurs in the plasma. In an Earth impact, powerful acids can be formed from saltwater and air. The vaporized rock of the plasma condenses into characteristic cone-shaped droplets of glass called tektites, and these are widely distributed by the high speeds. Tektites are found in isolated strewnfields on Earth. Note: Several researchers reject the popular impact-origin theory of tektites based on comparisons to bona fide impactite glasses. Curiously, the largest and youngest (700,000 years ago) tektite strewnfield, known as the Australasian field, has no known crater associated with it; this fact strongly suggests that, at least in this case, the tektites are not linked to an impact. A giant "fresh" impact site, less than a million years old, should be visible on land or in the sea. No such Asian impact crater has ever been found.
Oceanic impacts can be considerably more damaging than those on land. Large objects will invariably penetrate or displace the water to impact the seabed, causing huge tsunamis over a large area. The impact at Chicxulub, Yucatán is believed to have produced tsunamis 50 to 100 metres (150-300 feet) high which deposited debris many miles inland.
The result of an impact on land or at sea is a crater. There are two forms, "simple" and "complex". The Barringer crater in Arizona is a perfect example of a simple crater, a straightforward bowl in the ground. Simple craters are generally less than four kilometers across.
Complex craters are larger, and have uplifted centers that are surrounded by a trough, plus broken rims. The uplifted center is due to the "rebound" of the earth after the impact. It is something like the ripple pattern created by a drop of water into a pool, frozen into the Earth when the melted rock cooled and solidified.
In either case, the size of the crater depends on the size of the impactor and the material in the impact regions. Relatively soft materials yield smaller craters than brittle materials. The size of craters invariably changes over time; in the short term, craters shrink as a result of slumping, and over the longer term erosion and other geological processes quickly hide impact craters on the Earth. The Barringer Crater is one of the best-preserved on the planet, but it is only about 50,000 years old. There are almost no signs of the 65 million year-old Chicxulub crater on the Earth's surface, despite it being the largest known on the planet.
Some volcanic features can resemble impact craters, and brecciated rocks are associated with other geological formations besides impact craters. Non-explosive volcanic craters can usually be distinguished from impact craters by their irregular shape and the association of volcanic flows and other volcanic materials. An exception is that impact craters on Venus often have associated flows of melted material.
The distinctive mark of an impact crater is the presence of rock that has undergone shock-metamorphic effects, such as shatter cones, melted rocks, and crystal deformations. The problem is that these materials tend to be deeply buried, at least for simple craters. They tend to be revealed in the uplifted center of a complex crater, however.
Impacts produce distinctive "shock-metamorphic" effects that allow impact sites to be distinctively identified. Such shock-metamorphic effects can include:
- A layer of shattered or "brecciated" rock under the floor of the crater. This layer is called a "breccia lens".
- Shatter cones, which are chevron-shaped impressions in rocks. Such cones are formed most easily in fine-grained rocks.
- High-temperature rock types, including laminated and welded blocks of sand, and tektites, or glassy spatters of molten rock. The impact origin of tektites has been questioned by some researchers; they have observed some volcanic features in tektites not found in impactites. Tektites are also drier (contain less water) than typical impactites. While rocks melted by the impact resemble volcanic rocks, they incorporate unmelted fragments of bedrock, form unusually large and unbroken fields, and have a much more mixed chemical composition than volcanic materials spewed up from within the Earth. They also may have relatively large amounts of trace elements that are associated with meteorites, such as nickel, platinum, iridium, and cobalt. Note: it is reported in the scientific literature that some "shock" features, such as small shatter cones, which are often reported as being associated only with impact events, have been found in terrestrial volcanic ejecta.
- Microscopic pressure deformations of minerals. These include fracture patterns in crystals of quartz and feldspar, and formation of high-pressure materials such as diamond, derived from graphite and other carbon compounds, or stishovite and coesite, varieties of shocked quartz.
Craters can also be created from underground nuclear explosions. One of the most crater-pocked sites on the planet is the Nevada Test Site, where a number of craters were purposely made during its years as a center for nuclear testing (see, for example, Operation Plowshare).
Lists of Craters
- List of craters on Mercury
- List of craters on the Moon
- List of craters on Mars
- List of features on Phobos and Deimos
- List of geological features on Jupiter's smaller moons
- List of craters on Europa
- List of craters on Ganymede
- List of craters on Callisto
- List of geological features on Saturn's smaller moons
- List of geological features on Mimas
- List of geological features on Enceladus
- List of geological features on Tethys
- List of geological features on Dione
- List of geological features on Rhea
- List of geological features on Iapetus
- List of craters on Puck
- List of geological features on Miranda
- List of geological features on Ariel
- List of craters on Umbriel
- List of geological features on Titania
- List of geological features on Oberon
- List of craters on Triton
Notable impact craters on Earth
- Barringer Crater (US)
- Carolina bays (Eastern US)
- Chesapeake Bay impact crater (Eastern US)
- Chicxulub Crater (Mexico)
- Haughton impact crater (Canada)
- Lonar crater (India)
- Mahuika crater (New Zealand)
- Manicouagan Reservoir (Canada)
- Manson crater (US)
- Nördlinger Ries (Germany)
- Panther Mountain New York, (US)
- Sudbury Basin (Canada)
- Silverpit crater (United Kingdom, located in the North Sea)
- Rio Cuarto craters (Argentina)
- The Siljan Ring (Sweden)
- The Vredefort Impact Structure (Vredefort, South Africa)
- Weaubleau-Osceola impact structure (Central US)
See the Earth Impact Database, a website concerned with over 160 identified impact craters on the Earth.
Some extraterrestrial craters
- Caloris Basin (Mercury)
- Hellas Basin (Mars)
- Mare Orientale (Moon)
- Petrarch crater (Mercury)
- South Pole-Aitken basin (Moon)
- Herschel crater (Mimas)
- Photographs of terrestrial impact craters.
- a study of a South Carolina crater
- electrical discharge as a cause of craters
- The Geological Survey of Canada Crater database, 172 impact structures
- A recent news report about tektites
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Hall of North American Mammals
More than 25 Museum expeditions across this continent produced the specimens displayed in this hall's magnificent dioramas. Many belong to the order of mammals called Carnivora (carnivorans), one of the most diverse orders within the mammal group.
Use these free online resources before or after your visit to further explore themes presented in the Hall of North American Mammals Exhibition.
Walk, hop, gallop, swim, glide, burrow, and even swing from trees—these are just some of the ways the planet's 5,400 species of mammals move. See how fast, and slow, they can move.
What would it be like to bite like a saber-toothed cat? Or to gnaw like a beaver? Explore other mammals' teeth with this matching game and coloring book!
From the extinct Cynognathus and Repenomamus to the plant-eating dugongs and manatees, explore some of Earth's most unusual mammals.
From a journey through the basics of evolution to a look at mammal detectives to a re-creation of the world of prehistoric predators, these kid-friendly book and DVD titles bring to life the world of extreme mammals.
Investigate circular and 3D cladograms to see how scientists keep track of species and their evolutionary relationships.
How do gazelles jump straight up into the air from a standstill? And why do dolphins swim differently than fish? Create six fun flipbooks to explore how mammals move!
Supersonic Air Flow
Throwing a stone onto the surface of motionless lake triggers a spectacular series of events, beyond what we can see. When the stone hits the lake, a thin sheet of liquid called the "crown splash" is thrown upwards along the stone's rim. Meanwhile, below the surface of the water, a large cavity forms in the stone's wake.
Because of the pressure of the surrounding water and the pull of gravity, the underwater cavity that is formed by the stone hitting the lake's surface immediately starts to collapse and elongate into an "hourglass" shape. The cavity then closes in a single point, ejecting a thin, almost needlelike, liquid jet. The air flow during these changes reaches supersonic speeds.
This research was performed by Stephan Gekle (University of Twente), Ivo R. Peters (University of Twente), José Manuel Gordillo (Universidad de Sevilla), Devaraj van der Meer (University of Twente), and Detlef Lohse (University of Twente).
Requires only basic geometry knowledge.
Puzzle # 110: Prof. Gibbus' Angle
Prof. Gibbus asks: find the angle a in the diagram below.
(Solve this puzzle using just basic geometry; no sines or cosines allowed!)
A reader sent us this funny solution... but, more seriously, the solution isn't as difficult as it appears. You just have to change your point of view (fig. 1) and look beyond the boundaries of the problem (fig. 2), as suggested in our tips to puzzle solving. Actually, you can find the solution without any calculation! Just extend the grid and draw two lines as shown in the picture, and you'll form a right isosceles triangle having two angles of 45 degrees. The base (the hypotenuse) of the triangle is parallel to the diagonal (the red line in fig. 2) of the previous 1x3 rectangle. One leg of the right triangle intersects both parallels. A transversal line which intersects a pair of parallel lines produces pairs of alternate interior angles which are EQUAL. So, angle a measures 45 degrees.
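The puzzle forbids trigonometry, but a numeric cross-check is still reassuring. Assuming the classic three-square grid configuration (an assumption on our part, since the diagram is not reproduced here), the 45-degree answer matches the exact identity arctan(1/2) + arctan(1/3) = 45°:

```java
public class AngleCheck {
    // Numeric sanity check (outside the spirit of the puzzle):
    // in the classic three-square grid puzzle, the two grid angles
    // arctan(1/2) and arctan(1/3) sum to exactly 45 degrees.
    static double sumDegrees() {
        return Math.toDegrees(Math.atan(1.0 / 2.0) + Math.atan(1.0 / 3.0));
    }

    public static void main(String[] args) {
        System.out.println(sumDegrees()); // ~45.0
    }
}
```

The tangent addition formula shows why the identity is exact: tan(α + β) = (1/2 + 1/3) / (1 − 1/2 · 1/3) = (5/6) / (5/6) = 1.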
The winner of the puzzle of the month is: John REIDY, Australia. Congratulations, John!
Volume 28, Issue 5 (September 2000)
Leaching Behavior of Indian Fly Ashes by an Oedometer Method
Thermal power stations use pulverized coal as fuel, producing enormous quantities of ash as a by-product of combustion. Currently, with very low utilization of the ash produced, the ash deposits at the thermal power stations are increasing rapidly. The disposal problem is expected to become alarming due to the limited space available for ash disposal near most thermal power stations. Among the various applications available for the use of fly ash, geotechnical application offers opportunity for its bulk utilization. However, the possibility of ground and surface water contamination due to the leaching of toxic elements present in the fly ash needs to be addressed. This paper describes a study carried out on two Indian fly ashes. It is found that pH is the controlling factor in the leaching behavior of fly ashes. | <urn:uuid:fd95d569-186f-47d6-919d-4cbab29de0f0> | 2.890625 | 178 | Academic Writing | Science & Tech. | 30.792416 |
When converting from a byte to an int, why do you have to do a logical AND with 0xFF?
For example: int i = 'A' & 0xff;
First of all, in that line of code there is no conversion from a byte to an int. The 'A' is a character literal (its type is char, not byte). The 0xff is an integer literal. Also, the AND operator & is not a logical AND, it is a bitwise AND. The logical AND operator is &&.
Doing a bitwise AND with 0xff means keeping the lower 8 bits and clearing all higher bits.
About the second question: you first shift i four bits to the right, and with the & operation you keep the lower 4 bits and mask the rest off.
In hexadecimal, each digit represents 4 bits in the value. Take for example the value 156 (decimal). In binary, this is: 10011100. It is easy to convert this to hexadecimal: split the binary number into groups of 4 bits: 1001 1100. Each group of 4 bits corresponds to 1 hexadecimal digit. 1001 = 9 and 1100 = C, so 156 = 10011100 = 0x9C. See this: Hexadecimal (Wikipedia)
Notice the following about the bitwise AND of two bits: the result is 1 when both bits are 1; otherwise the result is 0.
Suppose you have a value, and you want to keep the lower 8 bits of the value, and set all the other bits of the value to 0. You can do this by doing a bitwise AND operation, where you AND the value with a mask in which you set the bits you want to keep to 1, and the bits you want to clear to 0.
Note that bits 8-31 are 0 in the result, and bits 0-7 contain the lower 8 bits of the original value. | <urn:uuid:aa96d61c-ca54-4f33-a2cb-bd95196805e8> | 4.0625 | 394 | Q&A Forum | Software Dev. | 79.820916 |
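To see the sign-extension issue concretely, here is a small, self-contained sketch (the class and method names are ours, not from the original question):

```java
public class ByteMaskDemo {
    // In Java, byte is signed (-128..127). Widening a byte to int
    // sign-extends it, so (byte) 200 is stored as -56 and widens to
    // 0xFFFFFFC8. AND-ing with 0xff clears bits 8-31, leaving 0xC8 (200).
    static int toUnsigned(byte b) {
        return b & 0xff;
    }

    // Extracting the second hex digit of a value: shift right 4 bits,
    // then keep only the lower 4 bits with the mask 0xf.
    static int highNibble(int i) {
        return (i >> 4) & 0xf;
    }

    public static void main(String[] args) {
        byte b = (byte) 200;                 // stored as -56
        System.out.println(b);               // -56
        System.out.println(toUnsigned(b));   // 200
        System.out.println(highNibble(156)); // 156 = 0x9C, high nibble = 9
    }
}
```

Without the mask, printing the byte directly gives −56, because Java bytes are signed; the mask recovers the unsigned value 200. The `highNibble(156)` call reproduces the 0x9C example from the answer above.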
Have you ever thought about how clouds form? Can you make a cloud here on Earth? When warm moist air rises it expands and pressure drops. When this occurs, the water vapor in the air condenses around dust or other particles in the air and forms clouds. You can duplicate this process here on Earth now that you know the ingredients of a cloud: water vapor, dust and low pressure. | <urn:uuid:3dda0239-f260-4873-96d0-3b6e9dee72d5> | 3.015625 | 79 | Knowledge Article | Science & Tech. | 72.976429 |
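The vapor condenses because the rising parcel cools as it expands. A rough number can be put on that cooling with the dry-adiabatic relation T₂ = T₁ (P₂/P₁)^(R/cp), with R/cp ≈ 0.286 for dry air — a standard textbook approximation, not part of the passage above, and one that ignores the latent heat released once condensation actually begins:

```java
public class RisingParcel {
    // Dry-adiabatic cooling of a rising air parcel (Poisson's equation):
    // T2 = T1 * (P2 / P1)^(R/cp), with R/cp ~ 0.286 for dry air.
    // Temperatures in kelvin; pressures in any consistent unit.
    static double cooledTemperature(double t1Kelvin, double p1, double p2) {
        return t1Kelvin * Math.pow(p2 / p1, 0.286);
    }

    public static void main(String[] args) {
        // A 25 C (298 K) parcel lifted from 1000 hPa to 850 hPa
        // cools to roughly 284.5 K (~11.3 C), a drop of about 13.5 K.
        double t2 = cooledTemperature(298.0, 1000.0, 850.0);
        System.out.printf("Parcel cools to %.1f K (%.1f C)%n", t2, t2 - 273.15);
    }
}
```

That ~13.5 K drop over roughly 1.4 km of ascent is what pushes the parcel below its dew point and lets vapor condense onto dust particles.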
A fundamental advance in the theoretical understanding of the role of thermal noise in molecular motors has been made by Chinese scientists. From EurekAlert, this news release from Science in China Press “Thermal noise molecular ratchet mechanism found by researchers in the Chinese Academy of Sciences“:
Designing machines that can be driven by thermal noise is a dream for scientists. In 1912, Smoluchowski presented a gedankenexperiment consisting of an asymmetric ratchet with a pawl that could harness work from thermal noise, but the concept was disproved. In 1997, Kelly et al. experimentally designed a molecule and observed spontaneous unidirectional rotations of the molecule (Angew. Chem. Int. Ed. Engl. 36, 1866 (1997)). Later, in a paper entitled "Tilting at Windmills? The Second Law Survives", Davies argued that this observation did not challenge the second law of thermodynamics because the spontaneous unidirectional rotations happened only within a limited angle (Angew. Chem. Int. Ed. 37, 909 (1998)). In 2007, Fang et al. theoretically proposed a charge-driven molecular water pump, in which water spontaneously flows from one side to the other through a nanochannel with asymmetrically distributed charges that are adjacent to the nanochannel (Nat Nanotechnol. 2, 709 (2007)). The validity of this pump has been questioned, from the viewpoint of whether the second law of thermodynamics holds. Recently, Professor Fang Haiping and his group from the Shanghai Institute of Applied Physics, Chinese Academy of Sciences, theoretically showed that asymmetric transport is feasible in nanoscale systems experiencing thermal noise, without the presence of external fluctuations. The key to this theoretical advance is the recognition that thermal noise, previously considered to be white noise, is not white at the nanoscale, i.e., the autocorrelation time of thermal noise becomes significantly long in nanoscale systems. Their work, entitled "Asymmetric transportation induced by thermal noise at the nanoscale", was published in SCIENCE CHINA Physics, Mechanics & Astronomy, 2012, doi: 10.1007/s11433-012-4695-8.
Smoluchowski’s gedankenexperiment in 1912 is widely considered as the first identifiable contribution to ratchet theory. Later, Feynman recapitulated and extended this device in his lectures (The Feynman Lectures on Physics Vol. 1, Chapter 46). Ratchet theory for meso- and macroscopic systems was established based on this idea, in which long-range time correlation of the perturbation and broken spatial inversion symmetry in the systems are two necessary conditions for biased motion (Europhys. Lett. 28 459 (1994)). In this theory, thermal noise is regarded as white noise, and white noise alone cannot result in biased motion in an asymmetric meso-/macroscopic system.
However, down at the nanoscale, is thermal noise still unable to induce biased motion in asymmetric nanoscale systems, as is the case in meso- or macroscopic systems? If we stick to the traditional ratchet theory, thermal noise alone cannot induce biased motion in asymmetric systems because thermal noise is treated as white noise (zero autocorrelation time). Actually, thermal noise cannot be regarded as white noise for nanoscale systems, because at room temperature, the time duration between two collisions of molecules is on the order of 10 to 100 picoseconds, which, although negligibly small at a meso- or macroscopic view, is significantly large at the nanoscale. Based on a very simple model at nanoscale, Prof. Fang and his group show the feasibility of unidirectional transport in nanoscale systems with thermal noise by considering the finite autocorrelation time of the thermal noise. Although the autocorrelation time of thermal noise has not been measured directly by experiment, they find that the thermal noise in bulk water at room temperature has an autocorrelation time of the order of 10 ps from molecular dynamics (MD) simulations.
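The notion of an autocorrelation time can be illustrated with a generic synthetic signal. The sketch below is our own illustration, not the group's actual MD analysis: it builds exponentially correlated ("colored") noise as an AR(1) process and estimates its normalised autocorrelation, which decays as exp(−k/τ); white noise is the limit τ → 0.

```java
import java.util.Random;

public class NoiseCorrelation {
    // Exponentially correlated noise as an AR(1) process:
    // x[n] = a * x[n-1] + sqrt(1 - a^2) * xi, with a = exp(-dt/tau).
    static double[] colouredNoise(int n, double a, long seed) {
        Random rng = new Random(seed);
        double[] x = new double[n];
        double s = Math.sqrt(1.0 - a * a);
        for (int i = 1; i < n; i++) {
            x[i] = a * x[i - 1] + s * rng.nextGaussian();
        }
        return x;
    }

    // Normalised autocorrelation C(k) = <x[i] * x[i+k]> / <x^2>.
    static double autocorr(double[] x, int k) {
        double num = 0.0, den = 0.0;
        for (int i = 0; i + k < x.length; i++) num += x[i] * x[i + k];
        for (double v : x) den += v * v;
        return num / den;
    }

    public static void main(String[] args) {
        double a = Math.exp(-1.0 / 10.0);        // correlation time ~10 steps
        double[] x = colouredNoise(200_000, a, 42L);
        // For an AR(1) process, C(k) decays like a^k = exp(-k/10),
        // so C(10) should come out close to exp(-1) ~ 0.37.
        System.out.printf("C(10) = %.3f%n", autocorr(x, 10));
    }
}
```

In the physical picture above, the analogue of the ~10-step correlation time here is the ~10 ps over which successive molecular collisions remain correlated.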
Now, one may argue that such a nanoscale system with an asymmetric structure comprises a perpetual motion machine of the second kind, because the biased motion is induced from a single heat bath. Prof. Fang said: "At the nanoscale, we cannot keep the asymmetry of the systems against thermal motion without any input of extra energy. This is different from the situation of meso- and macroscopic systems, in which the length scale of the asymmetry is large enough to ignore the thermal motion. Thus, the biased motion driven by thermal noise in asymmetric nanoscale systems, together with the extra energy input needed to maintain the asymmetry, does not violate the second law of thermodynamics."
This work provides a seminal contribution to the study of the unique behavior of nanoscale systems, which usually have spatial inversion asymmetry. It is expected to be of fundamental importance in the understanding and prediction of the behaviors of nanoscale systems, including molecular motors. The results should be of interest to a wide-ranging community of scientists in the fields of physics, chemistry, biology, and nanotechnologies. However, experiments are called for to determine the distribution of the autocorrelation time of thermal noise and more studies are required on energy transformations in such nanoscale processes.
The simple take-home lesson from this is from the last sentence of the researchers’ abstract: “Our observation does not violate the second law of thermodynamics, since at the nanoscale, extra energy is required to keep the asymmetric structure against thermal fluctuations.” [Free Fulltext Article] Advances in the theoretical understanding of molecular ratchets are very welcome because a major issue in designing artificial molecular machines is whether in a particular case it should take advantage of Brownian motion, as do biological molecular machines, or deterministic motions transmitted by stiff molecular components, as in nanofactory designs.
—James Lewis, PhD | <urn:uuid:b082e545-1e8c-49ce-8468-6fdad5fb9edf> | 2.828125 | 1,259 | Knowledge Article | Science & Tech. | 24.204278 |
Somehow, the Earth is hotter than ever during the coldest winter since the 1970s. In the face of frigid world-wide temperature trends and mounting evidence that the Earth is entering a cooling phase, the pushers of man-made global warming theory are becoming more shrill with their warnings of runaway warming. Chris Field of the UN's Intergovernmental Panel on Climate Change (IPCC) recently told a gathering of scientists in Chicago that the climate is heating up faster than scientists had previously predicted (seems that we hear that every year – can't they ever get a prediction right?). "The consequence of that is we are basically looking now at a future climate that is beyond anything that we've considered seriously," he said.
The more things stay the same, or even when warming seems to have gone into reverse, the more often we are inundated with statements like Field's. Somehow, it's always getting worse than predicted, and faster. If we took all these alarmist claims seriously and added them up, we should probably expect 300 degree weather this June – make that April, it's accelerating faster than we thought.
Large elongate worms. Prostomium usually with 2 pairs of antennae but always with a pair of bi-articulate palps. Peristome with usually 4 but sometimes 3 pairs of tentacular cirri. Eversible pharynx with a pair of jaws; some genera armed with many chitinous paragnaths or papillae, while in several genera pharynx unarmed. Parapodia uniramous for first two setigers then usually biramous but some genera uniramous throughout. Most genera usually without branchiae/gills; where branchiae occur usually branched and arise on mid anterior segments of body. Setae mainly compound, both falcigers and spinigers. | <urn:uuid:4d5dbe41-38e9-482d-a8cf-d57b82273d36> | 3.09375 | 150 | Knowledge Article | Science & Tech. | 25.214565 |
Volcanic rock is an igneous rock of volcanic origin.
Volcanic rocks are usually fine-grained or aphanitic to glassy in texture.
They often contain clasts of other rocks and phenocrysts.
Phenocrysts are crystals that are larger than the matrix and are identifiable with the unaided eye.
They were created during fractional crystallization of magma before extrusion.
Rhomb porphyry is an example with large rhomb shaped phenocrysts embedded in a very fine grained matrix.
For more information about the topic Volcanic rock, read the full article at Wikipedia.org.
For convenience, certain coherent derived units have been given special names and symbols. There are 22 such units, as listed in Table 3. These special names and symbols may themselves be used in combination with the names and symbols for base units and for other derived units to express the units of other derived quantities. Some examples are given in Table 4. The special names and symbols are simply a compact form for the expression of combinations of base units that are used frequently, but in many cases they also serve to remind the reader of the quantity involved. The SI prefixes may be used with any of the special names and symbols, but when this is done the resulting unit will no longer be coherent.
Among these names and symbols the last four entries in Table 3 are of particular note since they were adopted by the 15th CGPM (1975, Resolutions 8 and 9), the 16th CGPM (1979, Resolution 5) and the 21st CGPM (1999, Resolution 12) specifically with a view to safeguarding human health.
In both Tables 3 and 4 the final column shows how the SI units concerned may be expressed in terms of SI base units. In this column factors such as m⁰, kg⁰, etc., which are all equal to 1, are not shown explicitly.
The values of several different quantities may be expressed using the same name and symbol for the SI unit. Thus for the quantity heat capacity as well as the quantity entropy, the SI unit is the joule per kelvin. Similarly for the base quantity electric current as well as the derived quantity magnetomotive force, the SI unit is the ampere. It is therefore important not to use the unit alone to specify the quantity. This applies not only to scientific and technical texts, but also, for example, to measuring instruments (i.e. an instrument read-out should indicate both the unit and the quantity measured).
A derived unit can often be expressed in different ways by combining base units with derived units having special names. Joule, for example, may formally be written newton metre, or kilogram metre squared per second squared. This, however, is an algebraic freedom to be governed by common sense physical considerations; in a given situation some forms may be more helpful than others.
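That algebraic freedom is easy to make mechanical. A common bookkeeping device (ours, not part of the SI Brochure) represents each unit as a vector of base-unit exponents, so that "joule = newton metre = kilogram metre squared per second squared" becomes an exact arithmetic statement:

```java
import java.util.Arrays;

public class SiDimensions {
    // Exponents over the seven base units, in the order (m, kg, s, A, K, mol, cd).
    static final int[] METRE    = {1, 0, 0, 0, 0, 0, 0};
    static final int[] KILOGRAM = {0, 1, 0, 0, 0, 0, 0};
    static final int[] SECOND   = {0, 0, 1, 0, 0, 0, 0};

    // Multiplying units adds their exponent vectors.
    static int[] mul(int[] a, int[] b) {
        int[] r = a.clone();
        for (int i = 0; i < r.length; i++) r[i] += b[i];
        return r;
    }

    // Raising a unit to a power scales its exponents (n may be negative).
    static int[] pow(int[] a, int n) {
        int[] r = new int[a.length];
        for (int i = 0; i < r.length; i++) r[i] = a[i] * n;
        return r;
    }

    // newton = kg m s^-2; joule = newton metre = kg m^2 s^-2.
    static int[] newton() { return mul(mul(KILOGRAM, METRE), pow(SECOND, -2)); }
    static int[] joule()  { return mul(newton(), METRE); }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(joule())); // [2, 1, -2, 0, 0, 0, 0]
    }
}
```

The same vectors show why the unit alone cannot identify a quantity: heat capacity and entropy, for instance, both reduce to the exponent vector of the joule per kelvin.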
In practice, with certain quantities, preference is given to the use of certain special unit names, or combinations of unit names, to facilitate the distinction between different quantities having the same dimension. When using this freedom, one may recall the process by which the quantity is defined. For example, the quantity torque may be thought of as the cross product of force and distance, suggesting the unit newton metre, or it may be thought of as energy per angle, suggesting the unit joule per radian. The SI unit of frequency is given as the hertz, implying the unit cycles per second; the SI unit of angular velocity is given as the radian per second; and the SI unit of activity is designated the becquerel, implying the unit counts per second. Although it would be formally correct to write all three of these units as the reciprocal second, the use of the different names emphasizes the different nature of the quantities concerned. Using the unit radian per second for angular velocity, and hertz for frequency, also emphasizes that the numerical value of the angular velocity in radian per second is 2π times the numerical value of the corresponding frequency in hertz.
In the field of ionizing radiation, the SI unit of activity is designated the becquerel rather than the reciprocal second, and the SI units of absorbed dose and dose equivalent are designated the gray and the sievert, respectively, rather than the joule per kilogram. The special names becquerel, gray, and sievert were specifically introduced because of the dangers to human health that might arise from mistakes involving the units reciprocal second and joule per kilogram, in case the latter units were incorrectly taken to identify the different quantities involved. | <urn:uuid:a85ec1f3-2d0f-4b76-aff1-2fc0c0c544dd> | 4.15625 | 815 | Knowledge Article | Science & Tech. | 35.224917 |
“Es ist mir ein prächtiges Licht über die Absorption und Emission der Strahlung aufgegangen ‒ es wird Dich interessieren. Eine verblüffend einfache Ableitung der Planck’schen Formel, ich möchte sagen die Ableitung. Alles ganz quantisch.”
“A splendid light has dawned on me about the absorption and emission of radiation ‒ it will be of interest to you. A stunningly simple derivation of Planck's formula, I might say the derivation. All completely quantical.”
Albert Einstein in a letter to his friend Michele Besso on August 11, 1916.
The “splendid light” refers to Einstein's insight that stimulated emission (also called induced emission) of light from excited atoms occurs in nature, and that this yields an elementary explanation of Planck's formula for the spectrum of black body radiation.
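The derivation Einstein was so pleased with fits in a few lines. Let N₁ and N₂ be the populations of the lower and upper levels of a two-level atom (equal degeneracies assumed), with Einstein coefficients A₂₁ for spontaneous emission, B₁₂ for absorption, and B₂₁ for stimulated emission, in equilibrium with radiation of spectral energy density ρ(ν):

```latex
% Detailed balance between absorption and (spontaneous + stimulated) emission:
N_1 B_{12}\,\rho(\nu) \;=\; N_2\left[A_{21} + B_{21}\,\rho(\nu)\right]

% Boltzmann statistics fixes the population ratio at temperature T:
\frac{N_2}{N_1} \;=\; e^{-h\nu/kT}

% Solving for the spectral energy density:
\rho(\nu) \;=\; \frac{A_{21}/B_{21}}{\dfrac{B_{12}}{B_{21}}\,e^{h\nu/kT} - 1}

% With B_{12} = B_{21} and A_{21}/B_{21} = 8\pi h\nu^3/c^3 this is Planck's law:
\rho(\nu) \;=\; \frac{8\pi h\nu^3}{c^3}\,\frac{1}{e^{h\nu/kT} - 1}
```

Without the stimulated-emission term B₂₁ρ(ν), the same balance yields Wien's law instead; the stimulated term is exactly what is needed to produce the −1 in the denominator. Alles ganz quantisch.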
And, of course, some 46 years later and 50 years ago this May, the “splendid light” of Einstein's idea became a real “splendid light” with the construction of the Laser, based on the principle of stimulated emission of radiation. | <urn:uuid:2c144d03-b1c3-49ff-a446-91bcf9c9a1de> | 3.078125 | 277 | Personal Blog | Science & Tech. | 48.472925 |
Today marks the 50th anniversary of the first laser, built and fired by Theodore Maiman at Hughes Research Labs in Malibu, Calif.
Physicists around the world are celebrating the event. In Houston, French laser pioneer Gérard Mourou will give a free public lecture at 8 p.m. May 26 at the George R. Brown Convention Center titled “50 Years of the Laser” (see details).
Today, lasers enable a host of technologies that we take for granted: computers, the Internet, mobile phones, restored vision, cancer treatments, CD and DVD players, precision manufacturing and a host of other advancements.
But when the first laser was made, that is, a device that emitted light at a single wavelength rather than the many wavelengths of natural light, it wasn’t clear of what practical use the new invention would be.
To get a sense of the times, I recently spoke with Rice University’s Frank Tittel, who has studied lasers for half a century.
How did you learn about the first laser?
I was at GE in Schenectady and I saw the article, which first came out in the New York Times. The establishment at Columbia University and Bell Labs, which were the powerhouses in the field, blocked his publication in a journal. I believe his superiors at Hughes Aircraft told him not to work on it. They believed what came out of Columbia and Bell Labs was the truth, that a ruby laser was impossible because it was a three-level system. It didn’t fit their model. When it appeared in the newspaper GE asked me if I could duplicate it, and I did. It was so simple in the end. I could do it just basically on the newspaper report.
When lasers first came out, didn’t a lot of people say they were completely impractical?
That was the classic statement, because of course it is wrong today.
When did you come to Houston?
I joined Rice in 1967. I’m a physicist. The physics department didn’t think lasers were going anywhere so I went into electrical engineering. Today half the physicists here use lasers. It shows you should always be open-minded in science.
What do you think is the next step for lasers?
As I'm getting older I'm excited about lasers in medical applications. Exhaled breath contains several hundred trace gas species that can serve as biomarkers of clinical disease.
I’ve been working for the last eight years in trying to develop inexpensive, robust chemical sensors. From the biochemistry side many diseases still are not well understood, the insurance companies still use biopsies to identify human diseases. As we understand more and more about the biochemistry, lasers will have great application in identifying disease.
From the instrumentation side the lasers are still expensive. But they’re sensitive, selective, and fast, and what I really like is they’re non-invasive. I’m a coward when it comes to surgery or taking a biopsy. Right now you have urine tests, blood tests. I predict that in a matter of years you will have breath analysis tests as well.
Give me some trivia about lasers and Houston.
Back in 1986 NASA’s Johnson Space Center was celebrating its 25th anniversary. So NASA commissioned a French musician, Jean Michel Jarre, to have a big laser light show in April 1986. I believe it was the biggest audience ever gathered in Houston with 1.5 million people watching laser beams illuminating downtown. The idea came from the Houston Grand Opera. I was down there and of course it was a giant traffic jam. It took me an hour to extract myself. | <urn:uuid:0b7b442c-f1f5-4d47-bce1-0693c72b0584> | 3.15625 | 769 | Audio Transcript | Science & Tech. | 58.291138 |
LARS Tech Report Number
By measuring certain physical properties of fifteen soils typical of Wisconsin-aged glacial till soils capped with less than 60 inches of loess in Indiana, the variations in spectral response observed in the laboratory were explained. Spectral reflectance measured with the Exotech 20-C can be significantly explained by the percent moisture, organic carbon, and clay content of these soils. The soils studied were predominantly silty, with a range of organic carbon from 0.60 to 1.33%. The moisture content of the soils was controlled by use of the pressure membrane at 15 bars, pressure plates at 1/3 bar, and oven drying at 105°C for 24 hours in a forced-air dryer. The moisture of the samples was equilibrated, and the samples were then illuminated artificially by a General Electric DXW lamp and spectrally measured from .53 µm to 2.32 µm with the Exotech 20-C.
Date of this Version | <urn:uuid:69b9956a-a960-4c30-9dd0-9059c846662d> | 2.984375 | 191 | Academic Writing | Science & Tech. | 56.462509 |
Today, lets look at two methods that use liquid salt.
The first one is a solar thermal power plant in Spain, a power plant that gets around one weakness of solar power: the Sun is not always visible due to weather, and it is not available at night.
There is a salty solution to that problem:
"It is the first station in the world that works 24 hours a day, a solar power station that works day and night!" said Santiago Arias, technical director of Torresol Energy, which runs the station.
The mechanism is "very easy to explain," he said: the panels reflect the sun's rays on to the tower, transmitting energy at an intensity 1,000 times higher than that of the sun's rays reaching the earth.
Energy is stored in a vat filled with molten salts at a temperature of more than 500 degrees C (930 F). Those salts are used to produce steam to turn the turbines and produce electricity.
It is the station's capacity to store energy that makes Gemasolar so different because it allows the plant to transmit power during the night, relying on energy it has accumulated during the day.
"I use that energy as I see fit, and not as the sun dictates," Arias explained.
(Solar Power Even At Night).Imagine a smart power grid built with these liquid salt storage facilities all along it.
As power needs fluctuate downward, energy going into the grid, whether solar thermal, solar photo-voltaic, wind, and other methods, is stored for later use when the fluctuation goes upward again.
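Back-of-the-envelope, the storage side is just sensible heat: E = m·cp·ΔT. The figures below are typical textbook values for "solar salt", our assumptions rather than numbers from the article:

```java
public class SaltStorage {
    // Sensible-heat storage in molten salt: E = m * cp * deltaT.
    // Assumed figures (typical "solar salt", not from the article):
    // cp ~ 1500 J/(kg K), operated between roughly 290 C and 565 C.
    static double storedJoules(double massKg, double cpJPerKgK, double deltaTKelvin) {
        return massKg * cpJPerKgK * deltaTKelvin;
    }

    public static void main(String[] args) {
        double massKg = 6_000_000;                       // 6,000 tonnes of salt
        double e = storedJoules(massKg, 1500.0, 275.0);  // ~2.5e12 J
        double mwh = e / 3.6e9;                          // 1 MWh = 3.6e9 J
        System.out.printf("~%.0f MWh of thermal storage%n", mwh);
    }
}
```

A tank of a few thousand tonnes thus holds hundreds of megawatt-hours of heat, which is the scale needed to keep turbines spinning through the night.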
Another place liquid salt is used for power generation has been around for quite a while.
The administrator of Oak Ridge National Laboratory, top physicist Alvin M. Weinberg, wanted a thorium Molten-Salt Reactor (MSR), but was fired by Nixon for advocating it.
The following video describes MSR: | <urn:uuid:5af4575d-f09f-480a-9195-b0f1cd8c2937> | 3.53125 | 395 | Personal Blog | Science & Tech. | 51.591574 |
100 amps of electricity crackle in a vacuum chamber, creating a spark that transforms carbon vapor into tiny structures. Depending on the conditions, these structures can be shaped like little 60-atom soccer balls, or like rolled-up tubes of atoms, arranged in a chicken-wire pattern, with rounded ends. These tiny carbon nanotubes, discovered by Sumio Iijima at NEC labs in 1991, have amazing properties. They are 100 times stronger than steel, but weigh only one-sixth as much. They are incredibly resilient under physical stress; even when kinked to a 120-degree angle, they will bounce back to their original form, undamaged. And they can carry electrical current at levels that would vaporize ordinary copper wires.
Learn more about carbon nanotubes from the many resources on this site, listed below. More information on Carbon nanotubes can be found here.
nanoHUB.org, a resource for nanoscience and nanotechnology, is supported by the National Science Foundation and other funding agencies. | <urn:uuid:7a414188-fd6e-49ed-bdd4-5bbf521acd5a> | 3.671875 | 283 | Content Listing | Science & Tech. | 39.752667 |
Background on ENSO
ENSO stands for El Niño and La Niña /Southern Oscillation.
El Niño occurs when the water temperature in the middle of the Pacific Ocean is warmer than normal. The opposite of El Niño is La Niña, which occurs when the water temperature in the middle of the Pacific Ocean is cooler than normal.
El Niño and La Niña affect other parts of the world through the Southern Oscillation and so-called "teleconnections." Teleconnections happen because El Niño and La Niña occur over a large area and a long period of time.
Links related to El Niño and La Niña:
|Hypothesis||Method||Background on ENSO| | <urn:uuid:077141f9-b8d8-45be-9ae5-bfeb3192b7f4> | 3.6875 | 133 | Knowledge Article | Science & Tech. | 29.497464 |
Advances in the study of irruptive migration
Newton, I. 2006. Advances in the study of irruptive migration. Ardea, 94(3), 433-460.
This paper discusses the movement patterns of two groups of birds which are generally regarded as irruptive migrants, namely (a) boreal finches and others that depend on fluctuating tree-fruit crops, and (b) owls and others that depend on cyclically fluctuating rodent populations. Both groups specialise on food supplies which, in particular regions, fluctuate more than 100-fold from year to year. However, seed-crops in widely separated regions may fluctuate independently of one another, as may rodent populations, so that poor food supplies in one region may coincide with good supplies in another. If individuals are to have access to rich food supplies every year, they must often move hundreds or thousands of kilometres from one breeding area to another. In years of widespread food shortage (or high numbers relative to food supplies) extending many thousands or millions of square kilometres, large numbers of individuals migrate to lower latitudes, as an ‘irruptive migration’. For these reasons, the distribution of the population, in both summer and winter, varies greatly from year to year. In particular breeding areas, many species of irruptive migrants fluctuate in density according to food supplies at the time. The facts that the response to food change is rapid, and that increases in numbers from one year to the next are often greater than can be explained by high survival and reproduction from the previous year, imply that such year to year density changes are due mainly to movements. Ring recoveries and other data lend support to this view. In irruptive migrants, in contrast to regular migrants, site fidelity is poor, and few individuals return to the same breeding areas in successive years (apart from owls in the increase phase of a rodent cycle). Moreover, ring recoveries and radio-tracking confirm that the same individuals can breed in different years in areas separated by hundreds or thousands of kilometres. 
Extreme examples are provided by Common Crossbills Loxia curvirostra, in which individual adults were found in localities up to 3200 km apart in different breeding seasons, and Snowy Owls Nyctea scandiaca found in localities up to nearly 2000 km apart. The implication from irruptive migrations, that individuals can winter in widely separated localities in different years, is also supported by ring recoveries, at least in seed-eaters, in which individuals have been found in one winter hundreds or thousands of kilometres from where they were ringed in a previous winter. Most such shifts could be regarded as lying at different points on the same migration axis, but some were apparently on different axes, as the birds were recovered in winter far to the east or west of where they were ringed in a previous winter. Extreme examples include a Bohemian Waxwing Bombycilla garrulus (6000 km, Ukraine to Siberia), a Siskin Carduelis spinus (3000 km, Sweden to Iran), a Pine Siskin Carduelis pinus (3950 km, Quebec to California), and a Common Redpoll Carduelis flammea (8350 km, Belgium to China). Compared to regular (obligate) migrants, irruptive (facultative) migrants show much greater year to year variations in the proportions of individuals that migrate, and greater individual and year to year variations in the timing, directions and distances of movements. The control systems are flexible in irruptive migrants, enabling individuals to respond to feeding conditions at the time. However, regular and irruptive migrants are probably best regarded, not as distinct categories, but as representing opposite extremes of a continuum of migratory behaviour found among birds, from narrow and consistent at one end to broad and flexible at the other. Both systems are adaptive, the one to conditions in which resource levels are predictable in space and time, and the other to conditions in which resource levels are unpredictable. 
Depending on the predictability and stability of its food supply, the same species may behave as a resident or regular migrant in one part of its range, and as an irruptive migrant in another, as exemplified by particular species of both seed-eaters and rodent-eaters.
Programmes: CEH Programmes pre-2009 publications > Other
CEH Sections: Ecological Risk
Additional Information: Free copy available by registering on website
Additional Keywords: invasion, irruption, migration, finches, owls
NORA Subject Terms: Zoology; Ecology and Environment
Date made live: 09 May 2008 10:31
Actions (login required) | <urn:uuid:962f8b33-f992-4f52-a607-bc0c158e86ae> | 3.4375 | 995 | Academic Writing | Science & Tech. | 23.327155 |
The Oceans: Carbon Sink or Source?
Part A: The Ocean Carbon Cycle
First, carbon dioxide gas enters the ocean by dissolving in the sea surface waters. The amount of CO2 that dissolves in the sea surface depends on variables such as wind, sea surface mixing, and temperature of the water. The colder the sea surface water, the more CO2 will dissolve. This is why more carbon dioxide will dissolve in the sea surface at the higher, colder latitudes. In warmer oceans, less CO2 will dissolve in the sea surface water.
Examine the image, right, of the ocean carbon cycle. As you examine the image, make note of the following:
- The ocean carbon cycle processes and how they compare to the terrestrial carbon cycle processes you explored in Labs 1 and 2.
- The amount of carbon moving into and out of the ocean per year (in GT/year of carbon).
Remember that one gigaton (Gt) = 1 billion metric tons. 1 metric ton = 2,200 lbs.
- The white arrows represent gigatons of carbon moving into and out of the ocean per year.
- The red arrow represents gigatons of carbon that have come from human carbon dioxide emissions.
- The numbers in ( ) represent gigatons of carbon stored in a reservoir.
- Net ocean uptake indicates that the amount of carbon in the ocean is increasing by 2 gigatons per year.
- The Atmospheric Carbon Net Annual Increase indicates that the amount of carbon in the atmosphere is increasing by 4 gigatons per year.
- Reactive sediments refer to layers of sediment at the bottom of the ocean.
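The unit conversions and net fluxes listed above can be checked with a few lines of arithmetic (a sketch using only the numbers given in this lab):

```python
# Unit conversions as stated in the lab text.
GT_TO_METRIC_TONS = 1e9         # 1 gigaton (Gt) = 1 billion metric tons
METRIC_TON_TO_LBS = 2200        # 1 metric ton = 2,200 lbs

def gigatons_to_lbs(gt):
    """Convert gigatons of carbon to pounds."""
    return gt * GT_TO_METRIC_TONS * METRIC_TON_TO_LBS

# Net annual increases stated above.
ocean_uptake_gt = 2         # ocean carbon rises by 2 Gt per year
atmosphere_increase_gt = 4  # atmospheric carbon rises by 4 Gt per year

print(gigatons_to_lbs(ocean_uptake_gt))  # prints 4400000000000.0
```

So the ocean's 2 Gt/year net uptake is about 4.4 trillion pounds of carbon per year.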
What happens to carbon dioxide once it dissolves in the ocean? Some carbon dioxide stays as a dissolved gas in the ocean and may be transported down to the deep ocean. If the water stays at the surface and warms up as it moves around the globe, the carbon dioxide will "undissolve" and move back into the atmosphere. However, if the water sinks to the deep ocean (downwelling), the carbon goes with it and can be stored for hundreds of years in slow moving deep ocean currents. Eventually, these deep ocean currents return to the surface along coastlines in a process of upwelling. Once reaching the surface, some carbon can be released back to the atmosphere. This process is what scientists call the "physical carbon pump."
- Locate the largest ocean carbon sinks on the map. Why do you think they are located there? The largest carbon sinks are located in the Northern Atlantic and off Antarctica for two reasons:
First, large amounts of carbon dioxide dissolve in cold water, so the high-latitude oceans absorb more CO2. Second, cold water is denser and sinks, so the dissolved carbon dioxide is brought down to the deep ocean currents as the colder water sinks.
- What happens to the "old" carbon dioxide dissolved in deep ocean water that eventually rises to the surface? Some of the carbon dioxide can be released into the atmosphere and some will stay dissolved in the ocean.
Stop and Think
1: Describe the physical pump's role in enabling the ocean to be a carbon sink.
The Biological Carbon Pump: Small Organisms, Big Effect!
Compared to the physical pump, the biological carbon pump plays a much bigger role in making the oceans a strong carbon sink. In the biological carbon pump, most carbon dioxide is chemically transformed by marine organisms into other carbon compounds which can then be recycled and transported to different parts of the ocean or buried in sea floor sediments.
Examine the image below of the biological and physical pumps, and compare the physical pump on the right with the biological pump on the left.
Algae. Image credit: SERC. Phytoplankton: the key driver of the ocean's biological pump.
If you live near a pond, lake or ocean and you have seen green scum on the water's surface, you are most likely looking at algae - just one of the many different types of phytoplankton that exist on Earth. Like land plants, phytoplankton contain chlorophyll and other photosynthetic pigments they use to capture sunlight's energy needed to power photosynthesis.
Phytoplankton are mostly microscopic and uni-cellular and come in many shapes and sizes - from extremely small photosynthetic cyanobacteria to larger eukaryotic protists such as the different types of algae in the image on the right. Some phytoplankton, such as Diatoms and Coccolithophores take chemicals out of sea water to make hard outer or inner shells. Coccolithophores are very important to the carbon cycle because they make their hard outer plates from calcium carbonate.
All phytoplankton photosynthesize. They can obtain CO2 from the air overlying the surface of sea water and from dissolved CO2 in the water. Using energy from the Sun, carbon dioxide, and important ocean nutrients such as nitrogen, phosphorus and iron, they convert the carbon dioxide and water into sugars and other carbon compounds. These carbon compounds enter the marine food web and eventually find their way into deep ocean currents and seafloor sediments. To help you visualize this process, view the PowerPoint on the ocean's biological pump below. You can click on the image to watch it in slideshare mode or download it by clicking on the download link.
When you view the PowerPoint, make note of the following:
- types of organisms involved in the biological pump
- the processes they use to move the carbon down through the pump
- different places the carbon can end up in
Don't forget to answer the two Stop and Think questions embedded in the PowerPoint. You will also find these questions at the end of this lab. When you are done with the PowerPoint, watch this short animation Ocean Biological Pump.
If you have the time and the interest, watch this TedEd video The Secret Life of Plankton or directly on YouTube YouTube- The Secret Life of Plankton. In this beautiful video, you will see many examples of phytoplankton and zooplankton. Remember that in marine food webs, phytoplankton photosynthesize and zooplankton eat phytoplankton, and each other.
Stop and Think
2. (PowerPoint Question) If phytoplankton populations decrease, you might expect:
A. the amount of CO2 in the atmosphere to decrease
B. the amount of CO2 in the atmosphere to increase
Explain why you chose your answer.
3. (PowerPoint question) Many mountain tops contain fossils of shelled creatures that once lived in the oceans. Which of the Earth's spheres could the carbon have traveled through on its journey to these mountain tops?
B. Geosphere and biosphere
C. Geosphere, biosphere, and hydrosphere
D. Geosphere, biosphere, hydrosphere and atmosphere
4. What is the role of phytoplankton in the biological carbon pump?
5. How are marine phytoplankton and forests similar in their role in the carbon cycle?
Read about new research on "Whale Poop" - an Upside-down Biological Pump.
Whale poop pumps up ocean health (Science Daily) or the original research at
Read about coccolithophores and their importance to the biological pump. What is a Coccolithophore? Fact Sheet : Feature Articles | <urn:uuid:46ec5a02-daeb-431c-aa58-7ea3fd733ea3> | 4.15625 | 1,551 | Tutorial | Science & Tech. | 45.951232 |
The Mexican Red Kneed Spider
The Mexican red kneed spider is known for its red spots. It can be found in Southwest Mexico. Learn more about it.
The Mexican red kneed spider can be found in Southwest Mexico. It makes its home in rocky abandoned burrows on hillsides near streams. The red kneed spider makes its burrows up to one yard long and up to two yards wide. This spider is very territorial and each spider spaces itself about 400 yards apart.
The red kneed spider has eight eyes, but its vision is not good. It has hairs on its legs that it uses to feel vibrations, and this is how it senses its prey, which consists of beetles, grasshoppers and other small invertebrates. This spider does not actively hunt; it waits patiently for its meals to come to it. When the spider senses an intruder, it reaches out, attacks with its long legs and sinks its venomous fangs into the prey, killing it. The spider takes the meal deep into its burrow and allows the venom time to liquefy it; having no way to chew solid food, the spider absorbs its liquefied meals.
The Mexican red kneed spider usually stays in its burrow until mating season, which occurs in the summer. The male seeks out the female, and when he finds his mate he deposits his sperm into her. The female will lay approximately 400 eggs, which take up to three months to hatch. Many of the young fall prey to other spiders and insects. Once it reaches maturity, the Mexican red kneed spider can live up to thirty years. This spider has very few predators because of its venom, but its main enemies are raccoons and skunks, scavengers that dig up the spider's burrow; the spider is usually killed during the digging process.
The Mexican red kneed spider has been hunted through the years for exotic pet shops. Although many are hunted each year, the Mexican red kneed spider is in no immediate danger of extinction. Most of its habitat is uninhabited by humans. | <urn:uuid:a2b564b6-0411-4c82-a32f-d336275e1e76> | 3.375 | 430 | Knowledge Article | Science & Tech. | 67.270367 |
Webjumps are created by the define_webjump function. The general form is as follows:
name - String that you type into the find-url prompt in order to follow this webjump.
spec - A string or a function.
string - For most webjumps, a string spec is sufficient. Give the url as a string, with the format code %s in the place where the webjump argument (typically a search term) can be substituted in. When your webjump is called with no argument, an alternative webjump will be auto-generated by trimming the path off of the url. For example:
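The example snippet itself appears to have been lost from this page. Based on the www.example.com alternative described just below, the missing definition was presumably something like this (a hypothetical reconstruction, for your Conkeror rc file):

```javascript
// A string spec with %s; calling it with no argument falls back to
// the auto-generated alternative http://www.example.com/
define_webjump("example", "http://www.example.com/search?q=%s");
```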
If you called the above webjump with no search term, Conkeror will go to http://www.example.com/. The alternative may instead be given explicitly using the $alternative keyword to define_webjump.
function - The spec may also be a function that takes the webjump argument as its parameter and returns the url to go to. By default, function webjumps require an argument, but that behavior can be controlled by supplying the $argument keyword to define_webjump. The value of $argument can be one of:
true - The webjump requires an argument.
false - The webjump does not require an argument, and minibuffer completion will not add a space.
"optional" - The web jump does not require an argument, but minibuffer completion will still add a space.
The default value of $argument for each type of webjump spec is as follows.
string - If the string contains the substitution pattern %s, $argument is set to "optional", and an alternative no-arg webjump is auto-generated by trimming the path from the url. If the string does not contain %s, $argument is set to false.
function - $argument defaults to true, meaning function webjumps require an argument unless specified otherwise in the call to define_webjump.
If the $alternative keyword is given, then the $argument defaults to "optional".
Webjumps that emulate filling in an html form can also be defined by providing the parameters in a $post_data array. For example, a language translation webjump could be defined as follows:
define_webjump("e2j", "http://www.freedict.com/onldict/onldict.php", $post_data = [['search', '%s'], ['exact', 'true'], ['selected', '10'], ['from', 'English'], ['to', 'Japanese'], ['fname', 'eng2jap1'], ['back', 'jap.html']], $alternative = "http://www.freedict.com/onldict/jap.html");
Webjumps that construct an index from links in a web page and then provide access to those links with completion can be defined using index webjumps.
All webjumps are in Conkeror's memory in a simple object called webjumps. This object can be manipulated for such things as deleting webjumps or creating aliases.
Making a Webjump Alias
webjumps.g = webjumps.google; | <urn:uuid:c361816c-3ac0-4204-aa4a-c09e10cc1193> | 2.75 | 676 | Documentation | Software Dev. | 54.12029 |
C++ problem with the effects of gravity
Jan 21, 2013 at 10:16pm UTC
Hi, I'm trying to write a code to display the results of gravity on an object being dropped from a certain height. The user inputs the tower's height and the output should be a table displaying the effects of gravity on the object for every second. Example (0s, 1s, 2s... etc.)
This is what I have so far.
#include<iostream>
using namespace std;
const double gravity=9.80665;
double height= tower_height - distance;
while(distance <= tower_height && distance >=0)
cout<< "Enter the height of the tower: "<<endl;
height = tower_height - distance;
cout<< "The height of the object at "<<time<< "/n is"<<height<< 'm'<<endl;
Jan 21, 2013 at 10:27pm UTC
double height= tower_height - distance;

tower_height contains some random value, and so does distance. distance is initialized by multiplying a constant times some random value and dividing by 2, which gives ... some random value. height is initialized by subtracting a random value from a random value.

Something tells me our output is going to look random.
Do you really think you should be asking for input every iteration of the loop that's meant to display the table in question?
The loop condition depends on distance, but distance is never updated within the loop.
Topic archived. No new replies allowed. | <urn:uuid:6a676ea6-592b-4730-bcb4-414af64a4003> | 3.0625 | 353 | Comment Section | Software Dev. | 81.32112 |
Environment / Climate / Range
Marine; bathypelagic; depth range 549 - 1342 m (Ref. 51262). Tropical
Size / Weight / Age
Maturity: Lm ? range ? - ? cm
Max length : 15.6 cm (female)
Morphology | Morphometrics
soft rays: 4. No males or larvae are known. Diagnostic characters refer to metamorphosed females, which differ from other species in the family Oneirodidae by having an unusually deep caudal peduncle (21.6-23.8% SL); blunt and short snout, highly convex frontals forming an extremely short head (29.8-30.5% SL); few teeth in jaws (20-32 in the upper jaw, 20-31 lower). Metamorphosed females are also distinguished by the following set of characters: presence of vomerine teeth; well-developed sphenotic spines (length 3.4-3.6% SL), directed dorsolaterally; a stout symphysial spine on lower jaw; hyomandibular with double head; well-developed quadrate spine (length 4.5-5.0% SL); articular spine less than half length of quadrate spine; posterior margin of opercle deeply notched; long and narrow subopercle, dorsal end tapering to a point (posterior margin without indentation), ventral end oval in shape (no anterior spine or projection); caudal-fin rays with no internal pigmentation; illicium distinctly longer than length of esca bulb; pterygiophore of illicium emerging on snout from between frontal bones, anterior end exposed, posterior end concealed beneath skin; well-developed first dorsal-fin ray; D 6; A 4; pectoral-fin rays 15-16; short and broad pectoral-fin lobe (length 8.6-8.9% SL) shorter than longest rays of pectoral fin (19.4-19.9% SL); skin apparently naked, no dermal spinules; darkly pigmented skin of caudal peduncle extending well past base of caudal fin (specimens 2: 13.4-15.1 cm SL) (Ref. 51262).
Countries | FAO areas | Ecosystems | Occurrences | Introductions | Faunafri
Western Pacific: the Philippines and Magellan Seamounts (east of Mariana Is.).
Pietsch, T.W. and V.E. Kharin, 2004. Pietschichthys horridus Kharin, 1989: a junior synonym of Dermatias platynogaster Smith and Radcliffe, in Radcliffe, 1912 (Lophiiformes: Oneirodidae), with a revised key to Oneirodid genera. Copeia 2004(1):122-127.
IUCN Red List Status (Ref. 90363)
Threat to humans
Estimates of some properties based on empirical models
Phylogenetic diversity index (Ref. 82805) = 1.0000 [Uniqueness, from 0.5 = low to 2.0 = high].
Bayesian length-weight: a=0.01995 (-0.15505 - 0.19495), b=3.01 (2.92 - 3.10), based on all LWR estimates for this BS (Ref. 93245).
Trophic Level (Ref. 69278): 0.0 ±0.0 se; based on size and trophs of closest relatives.
Resilience (Ref. 69278):
Vulnerability (Ref. 59153): Low vulnerability (10 of 100).
Price category (Ref. 80766 | <urn:uuid:c2c26413-487c-4fb5-9814-729074f9fdc9> | 3.0625 | 882 | Knowledge Article | Science & Tech. | 51.749607 |
A fossil brachiopod found in Carboniferous rocks from Ardclough, Ireland, about 330 million years old. Its true width is 10 cm, height 5 cm. Brachiopods are often confused with the two-shelled 'sea shell' animals called bivalves found on beaches today; however, they are a completely different group of animals. There are over 100 genera (groups of species) of brachiopod found in seas all around the world today and their fossil record goes back to the Cambrian, around 560 million years ago.
Their hard shell protects the soft parts of their body and is made up of two shells called 'valves', a dorsal and a ventral valve. This specimen has been prepared so that the dorsal valve has been removed revealing the inside of the fossil. When the brachiopod was alive the coiled spirals would have been covered in soft tissues and fine tentacles to form an organ called the lophophore which is used for feeding. | <urn:uuid:829f094d-6a47-4c53-8645-3c3d1bf460a2> | 3.859375 | 205 | Knowledge Article | Science & Tech. | 52.976538 |
As scientists have used their super-technos to discover more and more planets, they’ve come to notice a pattern between the distance of these planets from their stars. At first most thought it was because of an unfavorable smell the stars exuded, but it may turn out to be something more clever. The stars themselves put up barriers. Cosmic bumper bowling.
Hot radiation from young stars could explain why planets revolve at certain distances.
The search for planets outside our solar system has turned up more than 700 planets, with thousands more awaiting confirmation.
Exoplanet surveys have found that relatively few giant planets orbit their stars between 1 and 2 astronomical units (AU, the distance from the Earth to the sun), but a lot of planets orbit slightly further out.
What causes this planetary pile-up? These behemoths are thought to form further away from the star and migrate inwards through a surrounding disc of dust and gas, which drags them inwards. So why don’t they just keep going and plunge into the star?
Ilaria Pascucci of the University of Arizona in Tucson has used space and ground-based telescopes to watch gas escape from seven infant solar systems. High-energy photons from each young, active star heat the disc's dust and gas until it evaporates into space.
Not all of it, though: only the gas in a certain region around the star can escape, at a distance of between about 1 and 2 AU. Further away, the star’s radiation is too weak to heat the gas enough. Any closer, and the star’s gravity holds on too tightly.
“This can make a barrier for this migration,” says Pascucci, who presented the model on 19 March at the Lunar and Planetary Science Conference in The Woodlands, Texas. “If you remove the gas, the planet cannot pass beyond that gap. It gets stranded.”
Pretty amazing stuff. The dance of the cosmos, hidden strings! Tying us all together, keeping us all apart. Beautiful, man. Beautiful. | <urn:uuid:cbc73934-89a5-4a99-a07d-b17d4953876d> | 3.984375 | 428 | Personal Blog | Science & Tech. | 60.41883 |
Cosmic Diagnosis; June 1993; Scientific American Magazine; by Corey S. Powell; 1 Page(s)
Like doctors, astronomers are finding that x-rays offer an invaluable means for examining otherwise hidden structures. Last year Trevor Ponman and his colleagues at the University of Birmingham in England announced that x-ray observations of hot gas in the Coma galaxy cluster show that the cluster's mass follows a surprisingly complicated, lumpy distribution. "It supports the notion that clusters have grown by the accumulation of blobs of galaxy groups and that the process is still happening now," Ponman explains. That discovery is especially significant because the Coma cluster, located 300 million light-years away in the constellation Coma Berenices, is the nearest and one of the best-studied rich clusters of galaxies.
Simon D. M. White of the Institute of Astronomy at the University of Cambridge and his collaborators have since amplified and expanded on Ponman's findings. Using data collected by the Roentgen Satellite (ROSAT), White's group has produced an x-ray image of the Coma cluster revealing unprecedented detail (below). White describes his work as "x-ray archaeology" because it enables him to reconstruct the process by which the Coma cluster came together. "It's fairly clear that you can see the remnants of previous subclumps," White says. The bright extensions of the cluster, most clearly seen at the bottom right, consist of hot gas surrounding giant galaxies that probably were once the dominant objects in their own, smaller clusters before being swallowed and merging into Coma. | <urn:uuid:cb645a50-de26-405f-b3a9-c200393ce2ec> | 3.71875 | 330 | Truncated | Science & Tech. | 35.479323 |
Wobbegong is the common name given to the 12 species of carpet sharks in the family Orectolobidae. They are found in shallow temperate and tropical waters of the western Pacific Ocean and eastern Indian Ocean, chiefly around Australia and Indonesia, although one species (the Japanese wobbegong, Orectolobus japonicus) occurs as far north as Japan. The word wobbegong is believed to come from an Australian Aboriginal language, meaning "shaggy beard", referring to the growths around the mouth of the shark of the western Pacific.
In what is a soon-to-be classic picture taken by National Geographic, a shark is eating another poor shark whole. Daniela Ceccarelli, of Australia’s Research Council Center of Excellence for Coral Reef Studies took the picture, while conducting a “fish census” off Great Keppel Island, part of the country’s Great Barrier Reef. She thought she saw [...] | <urn:uuid:10ed995d-ff5c-4c81-94ae-896fc9c34210> | 3.53125 | 200 | Knowledge Article | Science & Tech. | 39.875088 |
Methods for Types Generated From Schema
As you may have seen in Getting Started with XMLBeans, you use the types generated from schema to access XML instances based on the schema. If you're familiar with the JavaBeans technology, the conventions used in the generated API will be recognizable.
In general, elements and attributes are treated as "properties" in the JavaBeans sense. In other words, as you would with JavaBeans properties, you manipulate parts of the XML through accessor methods such as getCustomer() (where you have a "customer" element), setId(String) (where you have an "id" attribute), and so on. However, because schema structures can be somewhat complex, XMLBeans provides several other method styles for handling those structures in XML instances.
Several methods are generated for each element or attribute within the complex type. This topic lists each method that could be generated for a given element or attribute.
Note that whether or not a given method is generated is based on how the element or attribute is defined in schema. For example, a customer element definition with a maxOccurs attribute value of 1 will result in a getCustomer method, but not a getCustomerArray method — after all, only one customer element is possible in an instance document.
Note, too, that there may be two sets of parallel methods: a plain set and a second set whose prototypes start with an "x". An "x" method such as xgetName or xsetName would be generated for elements or attributes whose type is a simple type. A simple type may be one of the 44 built-in simple types or may be a restriction in schema of one of those built-in types. Of course, an attribute will always be of a simple type. For built-in simple types, an "x" method will get or set one of the types provided with XMLBeans, such as XmlString, XmlInteger, XmlGDay, and so on. For derived types, the "x" method will get or set a generated type.
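As a concrete sketch of these conventions, suppose a schema defines a purchase-order document whose customer element carries an id attribute of type xs:string. Against the types XMLBeans would generate from such a schema, the accessors might be used as below (the names poDoc, PurchaseOrder, and Customer are hypothetical; this snippet needs the generated classes to compile):

```java
// Hypothetical generated types for a schema with a "customer" element
// carrying an "id" attribute of the built-in simple type xs:string.
PurchaseOrder po = poDoc.getPurchaseOrder();

Customer customer = po.getCustomer();  // element accessed as a JavaBeans-style property
customer.setId("C-1234");              // attribute setter

// An "x" method exists because id is a simple type: it returns the
// XMLBeans simple type (XmlString) rather than a plain String.
XmlString xid = customer.xgetId();
```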
Methods generated for elements or attributes that allow a single occurrence. An element is singular if it was declared with maxOccurs="1". An attribute is singular if it was not declared with use="prohibited".
Type getFoo() void setFoo(Type newValue)
Returns or sets the value of Foo. Generated when Foo is an attribute, or is an element that can occur only once as a child.
XmlType xgetFoo() void xsetFoo(XmlType newValue)
Returns or sets the value of Foo as an XMLBean simple type. These methods are generated if Foo's type is defined in schema as a simpleType.
boolean isNilFoo() void setNilFoo()
Determines or specifies whether the Foo element is nil (in other words, "null" in schema terms), meaning it can be empty. A nil element looks something like this: <foo/>. These methods are only generated when an element type is declared as nillable in schema — it has a nillable="true" attribute.
Adds a new Foo as an XMLBean simple to the document, or returns Foo's value if one exists already.
boolean isSetFoo() void unSetFoo()
Determines whether the Foo element or attribute exists in the document; removes Foo. These methods are generated for elements and attributes that are optional. In schema, an optional element has a minOccurs attribute set to "0"; an optional attribute has a use attribute set to "optional".
Methods generated for elements that allow multiple occurrences.
An element may occur multiple times if it has a maxOccurs attribute set to "unbounded" or greater than 1. Attributes can't occur multiple times.
Type getFooArray() void setFooArray(Type newValue)
Returns or sets all of the Foo elements.
// Get an array of all of the purchase-order element's item children.
Item[] items = myPO.getItemArray();
Type getFooArray(int index) void setFooArray(Type newValue, int index)
Returns or sets the Foo child element at the specified index.
// Sets the value of the third item child element.
myPO.setItemArray(newItem, 2);
int sizeOfFooArray()
Returns the number of Foo child elements.
// Returns the number of item child elements.
int itemCount = myPO.sizeOfItemArray();
void removeFoo(int index)
Removes the Foo child element at the specified index.
XmlType[] xgetFooArray() void xsetFooArray(XmlType[] arrayOfNewValues)
Returns or sets all of the Foo elements as XMLBeans simple types. Generated only when the Foo element is defined as a simple type.
/*
 * Returns values of all the phone child elements of an employee element,
 * where the phone element has been defined as xs:string.
 */
XmlString[] empPhones = currentEmployee.xgetPhoneArray();
XmlType xgetFooArray(int index) void xsetFooArray(int index, XmlType newValue)
Returns or sets the Foo element at the specified index, using an XMLBeans simple type value. Generated for an element defined as a simple type in schema.
void insertFoo(int index, FooType newValue)
Inserts the specified Foo child element at the specified index.
void addFoo(FooType newValue)
Adds the specified Foo to the end of the list of Foo child elements.
XmlType insertNewFoo(int index)
Inserts a new Foo at the specified index, returning an XMLBeans simple type representing the new element; returns the existing Foo if there's already one at index.
XmlType addNewFoo()
Adds a new Foo element to the end of the list of Foo child elements, returning an XMLBeans simple type representing the newly added element.
boolean isNilFooArray(int index) void setNilFooArray(int index)
Determines or specifies whether the Foo element at the specified index is nil. | <urn:uuid:cd21ac73-7e83-45bf-8e81-f51011a3090a> | 3.046875 | 1,291 | Documentation | Software Dev. | 39.589213 |
From CreationWiki, the encyclopedia of creation science
Homologous chromosomes are a matching pair of chromosomes containing the same genetic loci in the same order. One of the homologous chromosomes is received from the father, and the other is received from the mother.
Because there are two chromosomes in the pair, the pair carries two alleles at every locus, one on each chromosome (the two alleles may be the same or different). This allows for a great deal of variation among individuals by means of dominant, recessive, and polygenic traits, as well as masking genes. This type of variation, which accounts for the bulk of population diversity, comes not from mutation but from the process known as genetic recombination.
Ordinarily, humans have 23 homologous pairs of chromosomes, for a total chromosome count (the diploid number) of 46. Down syndrome is a notable exception: people with Down syndrome have an extra copy of chromosome 21, giving them 22 homologous pairs plus a single set of three, for a total of 47 chromosomes.
What does that tell us about how thunderstorms make tornadoes?
We’ve known for decades that all supercell thunderstorms have a gust front, which is the boundary between the moist, warm air that is flowing into the storm and the generally cooler air coming down out of the storm. But what we noticed in several cases recently is that thunderstorms that are making, or are about to make, tornadoes, have a secondary front, which is like a second wave of air rushing down from aloft. A strong downdraft has an important function: It brings the rotation to the ground. But for a tornado to form, you still need to tilt the rotation into the vertical, and this requires a nearby updraft. The intensity of the downdrafts and updrafts is vital, because in the end there needs to be a lot of stretching, which is when you take that existing rotation and turn it into something really violent like a tornado. It’s like a figure skater pulling in her arms and spinning faster and faster.
In the Goshen County tornado, we have a strong suspicion that the development of this secondary surge or front sparked the genesis of the tornado. We need to test this. If, after looking at more cases, we can demonstrate a causal link, then perhaps in the future a forecaster observing the development of a secondary surge will have an increased ability to forecast tornadogenesis.
The data analysis emerging from VORTEX2 also identifies another possible trigger, a “descending reflectivity core.” What is that, and how does it work?
Some supercell thunderstorms have a descending core of intense rain and hail wrapping around the west side of the storm. That’s what we call a descending reflectivity core, or DRC. This DRC drags rotating air downward from maybe four or five kilometers up and might cool the air in various places. As you drag the air downward, you create rotation and antirotation in different parts of the storm, and that seems to occur around the time of tornadogenesis. Right now these two features, the DRC and the secondary surge, hold the most hope for explaining why some supercells are able to generate rotation near the ground and why the low-level rotation is turning into a tornado when it does.
Why was 2011 the deadliest tornado season we’ve seen in 75 years? Were the storms stronger than usual last year?
In recent years, we’ve become very used to tornadoes causing a relatively small number of deaths. A few dozen is typical. Unfortunately, while some of that may be due to better forecasts, some of it is also due to luck. Last year, the tornadoes hit larger places. They hit Tuscaloosa, they hit Joplin. The total number of tornadoes may have reached 1,800, which is exceptional, but the big spike in deaths was really based on a few individual points. Just one tornado in Joplin killed almost three times the yearly average of the last few decades. The Joplin tornado was rated EF5, but there wasn’t some added degree of destruction. The difference between Greensburg, Kansas [where an EF5 tornado killed 11 people in 2007], and Joplin was how many people got hit, not the strength of the tornado.
You were busy sifting through data from VORTEX2 in 2011. Was it frustrating to sit out such a volatile tornado season?
We did go out a couple of times. One day we got some fascinating data in a strong tornado in Oklahoma that had winds of about 200 miles per hour. We observed this tornado as it crossed a lake, and in the radar we saw this very clear central eye and a very strange wind, because it was lifting up a huge amount of water. We saw for the first time ever, I think, a tornado surge, like a hurricane surge. Then, as the tornado made landfall—and that’s a term we usually use with hurricanes—it just started shredding the forest. The eye suddenly filled up with a big ball of debris. From a scientific perspective, it’s very interesting because one of the great limitations we have in meteorology is that we’re not a laboratory science. But in this case, we had a tornado that was experiencing pure, simple conditions: lake for a few minutes and then pretty simply, woods. The structure of the tornado changes dramatically right when it crosses from the lake to the forest.
You have warned that the risk of a tornado-caused catastrophe in this country is underestimated, or even overlooked altogether.
My colleagues and I wrote a paper in 2007 (pdf) that asked, what if one of those large tornadoes that we’ve observed with the Doppler on Wheels went through the suburbs of Chicago or St. Louis? This is a worst-case scenario, but I’d say it was a plausible worst-case scenario. Tens of thousands, even 100,000 homes could be destroyed. I think we should have at least some degree of preparation. | <urn:uuid:93b3774d-fff0-45bf-8e61-d5c2e7d9cb64> | 3.625 | 1,049 | Audio Transcript | Science & Tech. | 52.603792 |
Tired of smiling and nodding along while your electrical engineering buddies debate the finer points of electromagnetic theory? If you're taking a critical eye to the definitive guide to batteries, you've got to understand what electricity is to begin with. Here's a crash course on the fundamental force that's driving our digital revolution.
Electricity is a manifestation of the electromagnetic force...
For the countless permutations of matter and the infinite variety of ways it can behave, all of creation is governed by a quartet of fundamental forces. We hardly notice the Strong Nuclear force, which binds quarks together into neutrons, or the Weak Nuclear force, which accounts for many forms of radioactive decay, because they operate on the subatomic level. On the macro scale, conversely, we can easily appreciate the effects of Gravity every time we step out of doors and are not flung off the surface of the Earth, but other than keeping us grounded, it doesn't do a whole lot. Electromagnetism, whose quantum theory is known as quantum electrodynamics, is the driving force of the world as we know it. Responsible for the bonds that weld individual atoms together into molecules, virtually every physical phenomenon we can sense without technological aids, be it heat from a fire or light from the sun—not to mention all of chemistry—can be attributed to the Electromagnetic force.
...that results from the movement of charged particles through a conductive material.
Electromagnetism exists as either a magnetic or an electrical field and relies on another intrinsic property of atoms—their electrical charge. Charge, both the positive and the negative varieties, is a property of subatomic particles such as protons and electrons. Like photons, it exists in discrete units, and it is measured in coulombs (one coulomb is the amount of charge moved by a current of one ampere in one second).
Charge also determines whether particles exhibit electrostatic attraction or repulsion—similar charges repel one another, opposite charges attract one another. The strength of this interaction is governed by Coulomb's Law, which states that the electrostatic force between two particles is proportional to the product of their charges, and inversely proportional to the square of the distance between them.
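A quick numeric sketch of Coulomb's law. Coulomb's constant k ≈ 8.99 × 10⁹ N·m²/C² is standard; the charges and distance below are made-up examples:

```python
K = 8.99e9  # Coulomb's constant, N*m^2/C^2

def coulomb_force(q1, q2, r):
    """Electrostatic force in newtons between point charges q1 and q2
    (in coulombs) separated by r meters.
    Positive result = repulsion, negative = attraction."""
    return K * q1 * q2 / r**2

# Two 1-microcoulomb charges 10 cm apart repel with about 0.9 N:
f = coulomb_force(1e-6, 1e-6, 0.1)

# Doubling the distance quarters the force (the inverse-square law):
f_far = coulomb_force(1e-6, 1e-6, 0.2)
```

Note the inverse-square falloff: the second call returns exactly one quarter of the first.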
The volume of electrical current is measured in Coulombs...
Normally, an atom contains the same number of positively-charged protons in its nucleus as negatively-charged electrons orbiting it. This results in a stable and neutral net charge of 0. However, if a few electrons are dislodged from their respective atoms via chemical reactions, these subatomic particles will drift a short distance and resettle on nearby atoms that are themselves short an electron. Now if a large contingent of electrons happens to flow through, say, a metal wire, well, you've got yourself an electric current.
...its magnitude is measured in Amperes...
An electric current is a movement of electric charge. The magnitude of flowing current is measured in amperes, and amperes are calculated by timing how long it takes one coulomb of electrons to pass a set point in an electric circuit. One coulomb of electrons (6.241 × 10¹⁸ of them) per second makes one ampere.
Now, the direction of this flow is a little tricky to understand, since an electric circuit technically has two currents—negative charge moving in one direction and an equal, positive charge moving in the other. For simplicity, scientific tradition arbitrarily dictates that a "conventional" current flows in the same direction as the positive charge—opposite that of the electrons—towards the most negative part of the circuit, the ground. However, since the flow of positive charge in one direction forces negative charge to flow in the other direction, the circuit just operates normally.
...and its pressure is measured in Volts.
There are two types of current. Direct current, borne from Thomas Edison's workshop, always flows unidirectionally from the power source to the ground. Alternating current, the brainchild of Nikola Tesla, regularly reverses the direction of its flow over time. While DC currents can be transformed into AC using an inverter and the process can be reversed with the help of a rectifier, their unique flow properties make AC and DC currents useful on very different scales.
DC's steady unidirectional current is very handy over the short distances of circuits, electric vehicles, and renewable resources such as solar cells. But it cannot easily step its voltage up or down like AC can. In a DC power transmission system, the 100V created by a distant power plant and distributed across the power grid will be the same 100V coursing through your house. This is massively inefficient over any appreciable distance because resistance grows within great lengths of transmission wires. That means much of the electrical load is lost as waste heat. As such, DC power plants would need to be built close to the communities they serve in order to be economically feasible.
In an AC power transmission system, however, the high-voltage current passing through transmission lines is stepped down to as little as a tenth of the original with the help of transformers and then rectified to DC for use in the home. This allows customers to run devices on much smaller, safer voltages and allows power plants to be located further away from customers while transmitting power without nearly as much of it lost to waste heat.
The shortcomings of DC power over long distances are due to a pair of electrical forces—voltage and resistance. Voltage, measured in volts, is defined as "the practical meter-kilogram-second unit of electrical potential difference and electromotive force equal to the difference of potential between two points in a conducting wire carrying a constant current of one ampere when the power dissipated between these two points is equal to one watt and equivalent to the potential difference across a resistance of one ohm when one ampere is flowing through it." In English, that means voltage measures the electric potential difference between two points—essentially, the amount of work or total energy needed to push one ampere of current between them, divided by the magnitude of the charge. It can be thought of as the amount of electrical pressure, or tension, on the transmission line.
Resistance, on the other hand, determines how hard a current will have to work to move through a line—think of it as electrical friction—and is measured in ohms. One ohm is equivalent to the resistance generated within a circuit where one volt of potential difference produces one ampere of current. Every material known to science has some degree of resistance—save of course for super-conducting materials. Longer lines exhibit higher resistance while wider ones are less resistive—just as water flows more easily through wider pipes, so does electricity. | <urn:uuid:455ff672-0ce8-48b6-8750-275df68e1d40> | 3.53125 | 1,388 | Nonfiction Writing | Science & Tech. | 34.765711 |
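The waste-heat argument above is just P = I²R arithmetic: for a fixed power delivery, raising the line voltage lowers the current, and the loss falls with the square of that current. A sketch with illustrative (not real-grid) numbers:

```python
def line_loss(power_delivered_w, line_voltage_v, line_resistance_ohm):
    """Power lost as heat in a transmission line, in watts.
    Current drawn: I = P / V; resistive loss: P_loss = I^2 * R."""
    current = power_delivered_w / line_voltage_v
    return current**2 * line_resistance_ohm

# Sending 1 MW through a line with 5 ohms of total resistance:
low_v = line_loss(1e6, 1_000, 5)      # at 1 kV: I = 1000 A -> 5 MW lost as heat
high_v = line_loss(1e6, 100_000, 5)   # at 100 kV: I = 10 A -> only 500 W lost
```

Stepping the voltage up by a factor of 100 cuts the resistive loss by a factor of 10,000, which is why AC transformers made long-distance transmission practical.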
binistream provides an input-only binary stream.
Int readInt(unsigned int size)
Reads an integer of size (in bytes) from the stream and returns it. The return value is undefined if an error occurred. The maximum number of bytes that can be read at once equals the size (in bytes) of the largest integer type supported by the system.
Float readFloat(FType ft)
Reads a floating-point number in the format given by ft. Refer to the list of public types of the binio class for information about what floating-point formats are supported. The return value is undefined if an error occurred. The value from the stream is always rendered to the biggest floating-point type supported by the system.
If your architecture is incompatible with the floating-point number that has just been read, readFloat() tries to convert it. This is sometimes not possible, or not as accurate as the original value, and an error will be issued in these cases. Refer to the list of public types of the binio class for information about what errors could be issued.
unsigned long readString(char *str, unsigned long maxlen, char delim = '\0')
std::string readString(char delim = '\0')
Reads a string from the stream. Both ASCIIZ string buffers and std::string objects are supported.

The ASCIIZ version takes a pointer to the pre-allocated string buffer as its first argument. maxlen specifies the maximum number of characters to be read from the stream (not including the trailing \0 that is always appended to the string buffer). The optional argument delim is a delimiter character. If this character is encountered in the stream, no more characters will be read. The delimiter character itself is discarded; it will not appear in the final string. If the argument is omitted, it defaults to \0.

The std::string object version just takes one optional argument, the delimiter character, explained above. Characters are always read until the delimiter character or the end of the stream is encountered. If delim is omitted, it defaults to \0. The method returns a std::string object containing the final string.
void ignore(unsigned long amount = 1)
Ignores (skips) amount bytes from the stream. If amount is omitted, it defaults to 1, ignoring exactly 1 byte from the stream.
virtual Byte getByte() | <urn:uuid:0ef97712-5a52-4c20-8ca0-a1ff0b01ff7e> | 2.828125 | 476 | Documentation | Software Dev. | 49.945215 |
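The delimiter behavior of readString described above can be illustrated with a small self-contained sketch. This is not libbinio code; it is a stand-in that reads from a memory buffer rather than a stream, but it follows the same rules (stop at the delimiter, discard it, always append a trailing \0, return the count of stored characters):

```cpp
#include <cstddef>
#include <string>

// Sketch of readString's ASCIIZ semantics over a memory buffer:
// copies at most maxlen characters into str, stops at (and discards)
// the delimiter, always appends a trailing '\0', and returns the
// number of characters stored.
std::size_t read_string_z(const unsigned char* buf, std::size_t buflen,
                          std::size_t& pos, char* str, std::size_t maxlen,
                          char delim = '\0') {
    std::size_t n = 0;
    while (pos < buflen && n < maxlen) {
        char c = static_cast<char>(buf[pos++]);
        if (c == delim) break;   // delimiter is consumed but not stored
        str[n++] = c;
    }
    str[n] = '\0';               // trailing NUL, not counted in the result
    return n;
}
```

Reading "hi|x" with '|' as the delimiter stores "hi", returns 2, and leaves the read position just past the delimiter.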
Dynamic languages offer a taste of object-relational mapping that eases application code.

BY CHRIS RICHARDSON
Active Record and GORM use these dynamic capabilities in ways that can significantly simplify an application.
This article looks at how GORM works. It compares and contrasts GORM with Hibernate, focusing on three areas: defining object-relational mapping; performing basic save, load, and delete operations on persistent objects; and executing queries. It describes how GORM leverages the dynamic features of Groovy to provide a different flavor of ORM that has some limitations but for many applications is much easier to use.
A major component of most enterprise applications is the code that transfers objects in and out of a relational database. The easiest solution is often to use an ORM (object-relational mapping) framework, which allows the developer to declaratively define the mapping between the object model and database schema and express database-access operations in terms of objects. This high-level approach significantly reduces the amount of database-access code that needs to be written and boosts developer productivity.

Several ORM frameworks are in use today. For example, the Hibernate[11] and OpenJPA[1] frameworks are popular with Java developers, and NHibernate[10] is used by many .NET developers. Two newer ORM frameworks that have recently received a lot of attention from enterprise developers are Active Record for Ruby and GORM (Grails Object Relational Mapping)[12] for Groovy.[7] These new frameworks differ from traditional ORM frameworks in that they are written in dynamic languages that allow new program elements to be created at runtime.
Groovy, Grails, and GORM
GORM is the persistence component of Grails, which is an open source framework that aims to simplify Web development. Grails is written in Groovy, a dynamic, object-oriented language that runs on the JVM (Java Virtual Machine). Because Groovy interoperates seamlessly with Java, Grails can leverage several mature Java frameworks. In particular, GORM uses Hibernate, a popular and robust ORM framework.

GORM, however, is much more than a simple wrapper around the Hibernate framework. Instead, it provides a very different kind of API. GORM is different in two ways. First, the dynamic features of the Groovy language enable GORM to do things that are impossible in a static language. Second, the pervasive use of CoC (Convention over Configuration) in Grails reduces the amount of configuration required to use GORM. Let's look at each of these reasons in more detail.
Dynamic Groovy. GORM relies heavily on the dynamic capabilities of the Groovy language. In particular, it makes extensive use of Groovy's ability to define methods and properties at runtime. In a static language such as Java, a property access or a method invocation is resolved at compile time. In comparison, Groovy does not resolve property accesses and method invocations until runtime. A Groovy application can dynamically define methods
Communications of the ACM, April 2009, Vol. 52, No. 4
Beating Species Extinction
As wildlife spectaculars go, it doesn’t get much better than Madagascar, and if scientists have their way, many of the island’s most biologically rich areas will soon be protected. That’s because a mammoth effort to collect data on the island’s wildlife has yielded one of the world’s most detailed conservation proposals to date.
A consortium of 22 international researchers led by Claire Kremen, professor of environmental science, policy, and management, conducted a survey of 2,315 species to identify which areas the government of Madagascar should protect in order to conserve as many plants and animals as possible. Madagascar has already committed to protecting 10 percent of its land by 2012. The new analysis will help them identify the most species-diverse areas.
“Conservation planning has historically focused on protecting one species or one group of species at a time,” says Kremen. This may help the charismatic species, but the “behind-the-scenes” species that are essential to ecosystem function are often neglected. “In our race to beat species extinction, the old approach is not going to be quick enough,” she says.
The proposed strategy would extend the same protection to creepy crawlies as it would to large and cuddly mammals such as the island’s famous lemurs, which are not seen in the wild anywhere else on Earth. Kremen and her colleagues collected information on the exact location of over 2,300 species of plants, insects, frogs, geckos and mammals. They then built a computer model to extrapolate the range of each species, and used a second model to identify which regions are most vital for saving the largest number of species, giving priority to the most endangered species.
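The article doesn't say how the second model chooses regions, but reserve-selection problems of this kind are commonly approached with a greedy, weighted set-cover heuristic: repeatedly protect the site whose species add the most not-yet-covered priority weight. The site names and weights below are made up purely for illustration:

```python
def greedy_reserves(site_species, weights, budget):
    """Pick up to `budget` sites; each round takes the site whose species
    add the most total weight not yet covered.
    site_species: dict mapping site -> set of species found there.
    weights: dict mapping species -> priority (e.g. higher if endangered)."""
    covered, chosen = set(), []
    for _ in range(budget):
        best = max(site_species,
                   key=lambda s: sum(weights[sp] for sp in site_species[s] - covered))
        gain = sum(weights[sp] for sp in site_species[best] - covered)
        if gain == 0:
            break  # no site adds anything new
        chosen.append(best)
        covered |= site_species[best]
    return chosen, covered

sites = {"A": {"lemur", "gecko"}, "B": {"gecko", "frog"}, "C": {"ant"}}
w = {"lemur": 3, "gecko": 1, "frog": 2, "ant": 1}
picks, cov = greedy_reserves(sites, w, 2)  # picks "A" first (weight 4), then "B"
```

Real planning tools handle thousands of species and spatial constraints, but the core trade-off (maximum coverage under a limited protection budget) is the same.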
Their survey yielded a detailed map of the most biologically valuable areas in Madagascar.
“Never before have biologists and policy makers had the tools that allow analysis of such a broad range of species, at such fine scale, over this large a geographic area,” says Kremen. “Our analysis raises the bar on what’s possible in conservation planning.” | <urn:uuid:2183c640-1488-42f3-b28d-05b30667bd3b> | 4.15625 | 446 | Knowledge Article | Science & Tech. | 36.51185 |
Instantiating the View Controllers
Your project should now contain content for each of the views and all the view controller classes it needs to function. The classes, however, still need to be instantiated so that we have actual view controllers and view objects to use in the application.
Open MultipleViewsViewController.xib in Interface Builder. This file contains the parent view that we will be using for the toolbar interface element, and it is also a logical place to add our other view controller instances.
Using the Library (Tools, Library), drag a view controller (UIViewController) into the Document window. We want this view controller to be an instance of our FirstViewController class. With the controller selected, press Command+4 to open the Identity Inspector. Use the drop-down menu to choose FirstViewController, as shown in Figure 5.
Figure 5. Update the view controllers to point to the classes you created earlier.
Next, the view controller must be updated to point to the correct XIB file (FirstViewController.xib) for its view. Select the controller in the Document window and press Command+1 to open the Attributes Inspector. Within the NIB Name drop-down menu, choose FirstViewController, as shown in Figure 6.
Figure 6. Associate every view controller with the appropriate XIB file.
Repeat these steps for the SecondViewController and ThirdViewController classes. (That is, add a new view controller instance, set the class, and associate the view.) When finished, your MultipleViewsViewController.xib should look very similar to Figure 7.
Figure 7. Add three view controller instances to the XIB file.
With these changes, our project will build and instantiate the controllers, but there is still no way of displaying the different views. It's time to add the toolbar controls and code to make that happen!
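Interface Builder does this wiring visually; the same instantiation can also be sketched in code. The snippet below is hypothetical but uses the class and XIB names from this tutorial:

```objectivec
// Create each controller in code instead of dragging instances into the XIB.
// initWithNibName:bundle: pairs a controller with its matching XIB file.
FirstViewController *firstVC =
    [[FirstViewController alloc] initWithNibName:@"FirstViewController"
                                          bundle:nil];
SecondViewController *secondVC =
    [[SecondViewController alloc] initWithNibName:@"SecondViewController"
                                           bundle:nil];
ThirdViewController *thirdVC =
    [[ThirdViewController alloc] initWithNibName:@"ThirdViewController"
                                          bundle:nil];

// Display one of them by adding its view to the parent view hierarchy.
[self.view addSubview:firstVC.view];
```

Either approach yields the same result; the XIB route simply keeps the instantiation declarative.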
Researchers at MIT’s Picower Institute for Learning and Memory may have found the key to controlling how the brain is wired while studying the bursts of activity that occur after communication between neurons.
First, I will give an overview of neural communication. Neural cells communicate with each other at a synapse, which is the point of contact between the cells at which signals are transmitted. The action potential stimulates the input cell (presynaptic) to release neurotransmitters. These neurotransmitters travel across the synaptic cleft and bind to neurotransmitter receptors on the receiving (postsynaptic) cell. However, the action of the neurotransmitter needs to be controlled so that the cell is not continually activated.
That is where this new research, conducted by Sarah Huntwork and J Troy Littleton, comes in. These scientists have identified a molecule, called complexin, which acts as a gatekeeper to help control the release of neurotransmitters. As it turns out, a few cells will continue to release neurotransmitters even after the major electrical stimulus has passed. They call these events “minis”, which are regulated by complexin. However, they have discovered that in the absence of complexin, these minis can occur without regulation, and when they do, it can lead to rewiring of the brain and synaptic growth.
So what does this mean in terms of neurological diseases? The activity of complexin can be controlled, and if properly regulated, may allow synaptic growth to be stimulated and rewiring of the brain to occur. | <urn:uuid:a125c375-7807-4a15-97a6-e07c15eb0c69> | 3.703125 | 315 | Personal Blog | Science & Tech. | 32.799805 |
Most of us don’t think of sound when we think of mathematics, but math makes up the very basis of sound, which is constructed on an array of numerical properties. In other words, math mainly has to do with the acoustics rather than the composition of the sound itself.
The answer has to do with wave patterns. When you pluck a string of an instrument, it vibrates back and forth creating sound, much like your vocal cords do when you sing or speak. The number of times per second that those vibrations reach your ears is called frequency. Each note gives off its own frequency, and in any musical piece these frequencies have to work together.

When sound is consonant, the frequencies of different notes match up so that the sound waves overlap with one another at regular intervals. When they do not overlap at regular intervals, dissonant sounds are heard. All sound frequencies and wave patterns are rooted in mathematics.
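That "overlap at regular intervals" can be made concrete with a little arithmetic: two tones realign after a whole number of cycles given by their reduced frequency ratio, so simple ratios (like the 3:2 of a perfect fifth) realign quickly and tend to sound consonant. A sketch, with the caveat that treating consonance as purely ratio-based is a simplification:

```python
from fractions import Fraction

def cycles_until_realign(f1, f2):
    """Return (n1, n2): how many cycles of each tone pass before the
    two waves line up again. Small numbers -> consonant interval."""
    ratio = Fraction(f1, f2)  # reduced to lowest terms automatically
    return ratio.numerator, ratio.denominator

# Perfect fifth, 3:2 (e.g. 660 Hz over 440 Hz): realigns every 3 vs 2 cycles.
fifth = cycles_until_realign(660, 440)

# A near-tritone pairing shares no common factor and realigns far less often:
rough = cycles_until_realign(589, 440)
```

The first pair lines up every 3 cycles of the upper tone; the second only after hundreds, which the ear hears as roughness.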
So then, what is the sound of math?
It is every sound you have ever heard. | <urn:uuid:578420a3-f5fc-4cf6-9009-b3f8daa8feb0> | 3.84375 | 208 | Personal Blog | Science & Tech. | 62.252955 |
International Year of Biodiversity
- January 8, 2010 3:40 PM |
- By Quirks
By Bob McDonald, host of the CBC science radio program Quirks & Quarks
Following on the International Year of Astronomy, the United Nations is continuing its scientific theme this year with a salute to the hugely important, but often misunderstood, concept of biodiversity.
This term, also known as natural diversity, species richness, or natural heritage, is generally defined as the "totality of genes, species, and ecosystems of a region."
It’s a holistic concept that goes beyond the usual poster children for the environmental movement: polar bears, penguins, snowy owls or eagles. While protecting them is important, they only represent a small part of the food chain. What’s really important are all the other creatures needed to support them: the sea urchins, bacteria, tree fungus, rodents, bugs, the forms of life that don’t look so great blown up to poster size. It’s that entire web of life, the foundation, which is seriously crumbling because of the human tendency to prefer monoculture.
Biodiversity is nature’s backup program. By providing a diverse variety of life in any one environment, the system will be strong enough to withstand stress. If a species is wiped out, whether by hurricane, disease or even climate change, another form of life will fill in the gap. If there is only one form of life, such as a field of corn, the whole thing can be wiped out at once by a single parasite. So, for biodiversity to work, there has to be a huge stock of diverse species to begin with.
Humans, on the other hand, like to focus on one or a few varieties of life and cultivate them. We like yellow corn and golden wheat stretching uniformly all the way to the horizon. We like our apples red and bananas yellow. We cut down the rich diversity of the forest and replace it with green grass, pastures or pavement. We like our food to be consistent wherever we go, lawns free of weeds; in fact, even areas that have been protected from development and designated as parks are often stripped down to the bare minimum of mature trees and grass, so we can spread out our picnic blankets and barbecues.
At the same time, these monoculture landscapes are costing us billions every year in pesticides and herbicides because they’re so vulnerable to attack.
Scientists refer to the current time period as the Holocene Extinction because humans are causing species to disappear worldwide at a rate that hasn’t been seen since a big asteroid hit the planet, 65 million years ago.
Scientist E.O. Wilson calls it HIPPO: Habitat Destruction, Invasive species, Pollution, over-Population, Over-harvesting.
This week on Quirks, Dr. Nancy Shackell describes the impact overfishing has had on the food web in the Atlantic, but also points to more environmentally sensitive fishing practices that would ease the stress on the system.
Preserving biodiversity involves more than saving rainforests from clear cutting. It means including ourselves as one of the species in the mix. Yes, we are the source of the problem, but we also have the unique ability to be a big part of the solution. Farmers have learned the lesson of inter-cropping, animal corridors allow migration, gene banks are attempting to preserve species before they disappear. Many new communities incorporate wetlands to filter runoff water. Solutions come in many forms.
By the most pessimistic estimates, if the current rate of extinction continues, we could wipe the planet clean in the next hundred years. And since we depend on other life for food and medicine, I guess that scenario involves wiping ourselves out as well. But even if that happens, (I think we’re smarter than that) it doesn’t mean we’ve destroyed the planet. The Earth has been sterilized by extinction events many times in the past. Life always renews itself. But what returns is different than what went before, as nature continues to endlessly experiment - with diversity. There’s an irony to that.
Primary and Secondary Colors
Other Online Vision Experiments are available at the Exploratorium website.
Concepts to Investigate: Additive properties of light, primary colors.
Materials: Red, green, and blue lights (use Christmas lights for a small scale investigation, and flood lights for a classroom demonstration), electric drill (any rapidly spinning spindle will work), bolt, nuts, washer, disk, paints or markers (red, green and blue).
Principles and Procedures: Examine a color computer or television monitor using a magnifying lens, and note that the screen is composed of numerous pixels (picture elements) which occur in triplets. Each triplet has a red, green, and blue dot. By illuminating different combinations of these dots at varying intensities, the monitor produces a wide range of colors. Red, green, and blue light are considered the primary colors because they can be projected in different combinations to produce all other colors. Some restaurants have large screen projection monitors on which they broadcast sports events. By casting beams of red, green and blue light onto a screen these monitors generate enough colors to produce a life-like effect. In this activity you will make your own "big screen" monitor.
Part 1: Additive Colors: Darken the room, and adjust the red, green, and blue lights so they shine on the same white surface as illustrated in figure I. Note that the screen appears white where the three beams overlap. Now turn on just the red and the blue lights and describe the color produced. Place an object close to the screen and note the color of the two shadows. Repeat with the red and green lights followed by the blue and green lights. Now turn on all three lights and place an object such as your hand or a ball in front of the beams as shown on figure I. Since the object is illuminated from three different locations, it will cast three separate shadows. Where red is blocked, the shadow will be composed of only blue and green light. What color is produced when blue and green light are combined? Where green light is blocked, the shadow will be composed of red and blue light. What color is produced when red and blue light are combined? Where blue light is blocked, the shadow will be composed of red and green light. What color is produced when red and green light are combined? Record your findings.
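The additive mixing in Part 1 can also be checked numerically: treat each lamp as an RGB triple and add the beams channel by channel, clipping at full intensity. This is a simplified sketch assuming idealized, full-intensity primary lamps, not a model of the actual apparatus:

```python
def mix(*lights):
    """Additively mix RGB lights (each a (r, g, b) tuple on a 0-255
    scale), clipping each channel at 255."""
    return tuple(min(255, sum(channel)) for channel in zip(*lights))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(mix(RED, GREEN))        # (255, 255, 0)  -> yellow
print(mix(RED, BLUE))         # (255, 0, 255)  -> magenta
print(mix(GREEN, BLUE))       # (0, 255, 255)  -> cyan
print(mix(RED, GREEN, BLUE))  # (255, 255, 255) -> white
```

These are exactly the shadow colors the demonstration produces: wherever one beam is blocked, the remaining two mix into the complementary color.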
Part 2: Color Wheel: The image on a computer monitor is created as a beam of electrons streams rapidly back and forth across the screen in rows from top to bottom. Phosphor zones on the screen glow when they are hit by electrons, and remain black when they are not, creating an image from a series of bright and dark dots. The electron beam sweeps back and forth across the screen so rapidly that our eyes cannot detect that the glowing dots are actually turning on and off, so the picture appears smooth and continuous to our eyes. In a similar fashion, the spokes of a rapidly spinning bicycle wheel appear to blend into a smooth blur. If a disk painted in the three primary colors is spinning rapidly, will we be able to see all of the colors distinctly, or will they blend into a composite image? If they blur together, will they combine to form white light the way red, green, and blue dots do on a computer screen?
Cut a circular disc from a piece of heavy cardboard or thin plywood. Divide it into three equal sections and paint these red, green, and blue as illustrated in figure J. Drill a hole in the center of the disk and mount it on a long bolt using lock washers and nuts. Tighten the nuts securely so they will not loosen while the bolt rotates. Tighten the bolt in the chuck of the drill and turn it on (figure K). View the disk in bright light and note its appearance. Can you see the three colors, or do they blend together into gray or white?
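The spinning disk in Part 2 behaves differently from the lamps in Part 1: paint reflects light rather than emitting it, and the eye averages the reflected light from each sector over time instead of summing full intensities. A minimal sketch, under the idealized assumption that each paint reflects a pure primary:

```python
RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

def time_average(*sectors):
    """Channel-wise average of the light reflected from equal-sized
    disk sectors, as the eye perceives it during rapid rotation."""
    n = len(sectors)
    return tuple(round(sum(channel) / n) for channel in zip(*sectors))

print(time_average(RED, GREEN, BLUE))  # (85, 85, 85): a muted gray, not white
```

Averaging, unlike addition, can never exceed the brightness of any single sector, which is one reason the spinning disk cannot reproduce the white produced by the three overlapping lamps.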
(1) What color is produced by mixing green and blue light? Red and blue? Red and green?
(2) How was the green shadow produced? The red shadow? The blue shadow?
(3) What is the color of the background? Why?
(4) Why must big screen projection monitors be viewed in dimly lit rooms?
(5) Do the colors on the rotating disk (part 2) blend into white? Explain. | <urn:uuid:e5048628-70fd-4094-9b92-60c210718660> | 3.984375 | 888 | Tutorial | Science & Tech. | 62.839131 |
1. The Hubble Telescope travels around the Earth at a speed of 5 miles per second.
2. It was launched on April 24, 1990 from the space shuttle Discovery.
3. The Hubble Telescope should remain in space for 20 years.
4. The optical Hubble Telescope was named after Dr. Edwin P. Hubble, the astronomer whose observations confirmed that the universe is expanding. This provided the foundation for the Big Bang theory.
5. The Hubble Telescope is 43.5 feet long and 14 feet wide. It weighs 24,500 pounds.
6. It cost 1.5 billion dollars.
7. It travels 353 miles above the Earth.
8. The Hubble Telescope transmits about 120 gigabytes of data every week.
9. It receives its energy from the sun through two 25-foot solar panels.
10. The Hubble Telescope is able to lock onto an object that it is photographing. This allows it to remain steady in order to produce a near-perfect image.
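Facts 1 and 7 are enough for a quick consistency check on the telescope's orbit. Assuming a circular orbit and a mean Earth radius of about 3,959 miles (an assumed figure, not from the list above):

```python
import math

EARTH_RADIUS_MI = 3959   # mean Earth radius in miles (assumed)
ALTITUDE_MI = 353        # fact 7
SPEED_MI_PER_S = 5       # fact 1

# Circumference of the orbit, then time to traverse it at 5 mi/s.
circumference_mi = 2 * math.pi * (EARTH_RADIUS_MI + ALTITUDE_MI)
period_min = circumference_mi / SPEED_MI_PER_S / 60
print(f"One orbit takes about {period_min:.0f} minutes")  # about 90 minutes
```

The roughly 90-minute result matches Hubble's well-known orbital period, so the speed and altitude figures are consistent with each other.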
An Introduction to Extreme Programming
by chromatic
Extreme Programming (XP) improves the efficiency of writing software. This is accomplished by streamlining complexity, delivering top business value early and consistently, and reducing the cost of nearly inevitable changes to the business rules, programming environment, or software design. Many of these practices have been part of conventional wisdom for years, but rethinking their interaction is the value of XP.
In its purest form, Extreme Programming is simple. The central tenet is, "Find the essential elements of creating good software, do them all of the time, and discard everything else." Programmers should program and make schedule estimates. Managers should make business decisions. Customers should choose the features they want and rank them by importance.
Have you had experience using the XP approach to building software? If so, what are the pros and cons from your perspective?
XP offers several compelling features:
- Comprehensive unit tests,
- Short release cycles,
- Adding only what's needed for the current task,
- Collective code ownership,
- Continual improvement, and
- Adding features in the order of importance.
Basic XP approach
Here's how a typical Extreme Programming scenario might look from the programmer's point of view. This is a very generic procedure outlined here, but it will give you some idea of the work flow if you're a developer in an XP environment.
- Customer lists the features that the software must provide.
- Programmers break the features into stand-alone tasks and estimate the work needed to complete each task.
- Customer chooses the most important tasks that can be completed by the next release.
- Programmers choose tasks, and work in pairs.
- Programmers write unit tests.
- Programmers add features to pass unit tests.
- Programmers fix features/tests as necessary, until all tests pass.
- Programmers integrate code.
- Programmers produce a released version.
- Customer runs acceptance tests.
- Version goes into production.
- Programmers update their estimates based on the amount of work they've done in release cycle.
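Steps 5 through 7 are the test-first cycle at the heart of XP. Here is a minimal, hypothetical illustration using Python's built-in unittest module; the order_total function and its tax rule are invented for the example, not taken from any real project:

```python
import unittest

# Step 5: the programmer writes the unit test first. Run before
# order_total exists, it fails -- and that failure defines the task.
class TestOrderTotal(unittest.TestCase):
    def test_total_includes_tax(self):
        self.assertAlmostEqual(order_total(100.0, tax_rate=0.10), 110.0)

# Step 6: just enough code is added to make the test pass -- no
# speculative discounts or currency handling until a task calls for them.
def order_total(subtotal, tax_rate):
    return subtotal * (1 + tax_rate)

# Step 7: run the suite; fix code or tests until everything passes.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestOrderTotal)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because every feature arrives with a test like this, the suite doubles as a safety net for the refactoring and integration steps that follow.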
A contrast of styles between XP and "traditional" programming
Most experienced programmers are familiar with the "large scale up front" design approach. XP programming doesn't embrace this approach for a couple reasons.
A large up-front design means that the whole programming team sits down and plans out the entire program before writing code. That would work if two things were true. First, the team would have to know exactly which features were to be implemented. Second, the team would have to be able to anticipate all issues that would come up when actually writing code.
There aren't a lot of domains where that's possible. (NASA comes to mind.) Customers change their minds or explain things differently, or someone comes up with a better way to code something, and the grand design changes. If it changes more than a little bit, someone will have to update it, and the extra time spent on the initial design is wasted.
The same goes for coding things you don't yet need. If the project has enough uncertainty that things could change, plan on them changing. Don't spend time on features and designs that might be thrown away next week or next month, and that no one will need until then anyway.
Instead, XP embraces the notion that you should spend your time on the features that provide the most business value to the customer. If you have good unit tests and a simple design, and if you've been working with other programmers, then adding features when you need them isn't nearly as expensive as you would think.
We've seen some of these principles applied to high-quality open source projects. Freed from traditional constraints like market pressure and ever-changing business demands, coders can explore various approaches, picking the best available solution. With the short feedback loop between developers and users, and a culture that encourages savvy users to add features and fix bugs, a project can evolve in new directions very quickly.
Not everyone thinks XP is the best approach
Software produced this way also has its detractors. Commercial developers see it as a competitive threat. Users burned by low-quality freeware and shareware see it as a toy. Experienced managers look at the morass of e-mail folders, IRC logs, and CVS commits, wondering how anything ever gets done.
But when you get right down to it, XP goes against conventional wisdom. Detractors look at XP and often say, "It will cost more to change things in the future, so we'd better add everything we might need right now."
Others look at pair programming and say, "I'm a good programmer. I don't want someone looking over my shoulder and slowing me down, and I don't need somebody pointing out bugs."
And skeptics often embrace the attitude, "Programmers can't be trusted to make good estimates, and managers can't be trusted to take programmers seriously. Customers can't be trusted to know what they want."
Conventional wisdom is based on experience, and therefore has some merit. Where it misses the point is in its failure to grasp the concept that XP practices actually reinforce the traditional process, not degrade it.
The open source connection
Open source development has similar advantages to Extreme Programming, though the motivations may be different. Consider phrases like "scratching his own itch," "release early and often," and "patches welcome." Distributed development, open for the world to see, allows potential customers and contributors to evaluate the evolution of a project at any point. In some cases, given the right motivation, they can even redirect development focus.
These two methodologies overlap in some areas and lock horns in others. Where XP promotes two programmers working together at one computer as a way to reduce bugs, write clearer code, and create better designs, open source collaborators may rarely meet in person, if ever! On the other hand, both approaches often have short release cycles, adding a few features and rapidly integrating user feedback.
Can the XP and open source worlds merge?
Is it possible to merge open source and XP methodology? Nothing in XP prevents source code from being given away freely. Nothing in open source development precludes a feature-at-a-time development style. In many ways, they complement each other -- user-driven focus, continual code review, short release cycle, and tight feedback loops. There's more in common than one might expect at first glance.
From the outside, both methods look like pure chaos. Somehow, though, order emerges, and programmers produce good code that meets requirements and meets or beats the schedule and the budget. At least, when applied properly. The secret is knowing when, where, and how much.
Search our database of handpicked sites
Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest.
You searched for
We found 15 results on physics.org and 67 results in our database of sites
(66 are Websites, 0 are Videos, and 1 is an Experiment)
Search results on physics.org
Search results from our links database
Very simple experiment showing the reaction between two nuclei at different temperatures. The temperature is set with a cursor in millions of degrees. The reaction is then played.
Shows states of different elements at different temperatures
Deals with black body radiation and stellar temperature to show the relationship between colour and temperature, and how temperature depends on the energy emitted.
A good overview of temperature, including the history of temperature measurement.
A very comprehensive site on all aspects of temperature and temperature measurement.
These pages aim at explaining what heat and temperature are, including different ways of measuring temperature and plenty more information.
Brief description of the Curie temperature. Table of Curie temperature provided for various substances.
Good revision guide aimed at UK A level standard, covers topics such as temperature scales, heat capacity, gas laws etc.
Demonstration of how temperature, pressure and volume affects the motion of gas particles in a balloon. NB. You can 'pop' the balloon!!
Temperature scale information for scientists, meteorologists and calibration engineers. Very technical
Showing 1 - 10 of 67 | <urn:uuid:f1da48d6-95de-4611-bc65-b83b10f2657d> | 3.734375 | 317 | Content Listing | Science & Tech. | 35.190714 |
Nanoscale and Biomolecular Imaging
Femtosecond Time Delay X-ray Holography
Conventional time-resolved optical methods require highly synchronized photon pulses to initiate a transition and then probe it at a precisely defined time delay. In the X-ray regime, these methods are challenging since they require complex optical systems and diagnostics. We have invented a holographic measurement scheme, inspired by Newton's “dusty mirror” experiment to monitor the X-ray-induced explosion of microscopic objects. The time delay is encoded in the diffraction pattern to an accuracy of better than one femtosecond, and the sample depth is holographically recorded to sub-wavelength accuracy. We applied this technique to follow the X-ray induced explosion of a sample irradiated by an intense pulse from FLASH, and observed that sample explosion occurs picoseconds after the femtosecond X-ray pulse traversed the sample. By varying the distance between the reflecting multilayer and the object, different time delays can be set (a new sample is needed for each shot). These results gave the first glimpses into early steps of plasma formation by X-rays.
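The delay in this scheme comes from simple geometry: the reflected reference pulse travels an extra round trip between the sample and the multilayer mirror, so the time delay is roughly 2d/c. A back-of-envelope sketch (the 150 micron spacing is an illustrative value, not a figure from the experiment):

```python
C = 2.998e8           # speed of light, m/s
spacing_m = 150e-6    # assumed mirror-to-sample distance (illustrative)

# Extra round-trip path divided by the speed of light.
delay_s = 2 * spacing_m / C
print(f"delay = {delay_s * 1e12:.2f} ps")  # ~1.00 ps
```

A spacing of a few tens to a few hundreds of microns thus naturally yields the picosecond-scale delays over which the sample explosion was observed, and moving the mirror by microns changes the delay by femtoseconds.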
Chapman, H.; Hau-Riege, S.; Bogan, M.; Bajt, S.; Barty, A.; Boutet, S.; Marchesini, S.; Frank, M.; Woods, B. W.; Benner, W. H.; London, R. A.; Rohner, U.; Szoke, A.; Spiller, E.; Moller, T.; Bostedt, C.; Shapiro, D.; Kuhlmann, M.; Treusch, R.; Plonjes, E.; Burmeister, F.; Bergh, M.; Caleman, C.; Huldt, G.; Seibert, M. M.; Hajdu, J., "Femtosecond time-delay X-ray holography," Nature, 2007, 448, 676-679
Title: Using micropropagation to conserve threatened rare species in sustainable forests.
Author: Edson, J.L.; Wenny, David L.; Leege-Brusven, A.D.; Everett, R.L.
Source: The Haworth Press, Inc.: 279-291
Description: For forests to be sustainable, viable populations of rare plants should be maintained. Where habitat management alone cannot conserve species threatened by human activity, micropropagation may advance species recovery. Micropropagation protocols were developed for Pacific Northwest endemics: Hackelia venusta, Douglasia idahoensis, Astragalus species, and Cornus nuttallii. Microshoots and seed were multiplied and rooted on nutrient media containing minimal levels of cytokinin and auxin growth regulators to maintain stable gene expression in plantlets. Acclimatized plantlets were reintroduced to protected habitat or propagated for further environmental experiments. Micropropagation serves a useful offsite role in sustaining Pacific Northwest forests by maintaining viability of certain threatened rare plants.
- This article was written and prepared by U.S. Government employees on official time, and is therefore in the public domain.
Edson, J.L.; Wenny, David L.; Leege-Brusven, A.D.; Everett, R.L. 1997. Using micropropagation to conserve threatened rare species in sustainable forests.. The Haworth Press, Inc.: 279-291. | <urn:uuid:08e8fb0f-bbea-4ddc-81b3-f26030bb7f7c> | 3.453125 | 367 | Truncated | Science & Tech. | 32.973352 |
Figure 4. Horse flies are large, heavy-bodied flies with large eyes. The wings are swept back at rest and the abdomen is pointed. The females suck blood and are strong, fast fliers. Larvae live in the mud on the bottom of ditches or in moist soil, feeding on other organisms. They are very prevalent at certain times of the year. | <urn:uuid:f3985e47-ec5d-405d-bea5-d0cb5b3e5f1b> | 2.9375 | 74 | Knowledge Article | Science & Tech. | 73.083046 |
Pi (π) is a mathematical constant defined as the ratio of a circle’s circumference to its diameter. It is approximately equal to 3.14159, but since it is an irrational number (one that cannot be expressed as a ratio of integers), the decimal places go on and on with no repeating segments. The history of pi extends back almost 5000 years, as it plays such a crucial role in geometry, such as finding the area of a circle (A = πr²). It is not an overstatement to say that pi is among the top five most important numbers discovered in history (0, 1, i and e being the others).
The interesting thing about pi is that it is an irrational number. As mentioned above, this means that pi has an infinite number of non-repeating decimal places. For example, pi to 30 decimal places is 3.141592653589793238462643383279… Mathematicians widely believe, though it remains unproven, that pi is also a "normal" number, meaning its digits behave like a random sequence. If that is true, then pi contains every possible sequence and combination of numbers at some point, and if pi is converted into binary code (a number system of only 0 and 1, used by computers to encode information), somewhere in that infinite string of digits is every combination of digits, letters and symbols imaginable. The name of every person you will ever love. The date, time and manner of your death. Answers to all the great questions of the universe. All of this encoded in one letter: π.
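Because pi's digits never settle into a pattern, they must be computed rather than looked up. A short sketch using exact integer arithmetic and Machin's 1706 formula, pi/4 = 4*arctan(1/5) - arctan(1/239), reproduces the 30 decimal places quoted above:

```python
def pi_digits(n):
    """Return pi truncated to n decimal places, as a string, using
    Machin's formula: pi/4 = 4*arctan(1/5) - arctan(1/239)."""
    scale = 10 ** (n + 10)  # work with ten extra guard digits

    def arctan_inv(x):
        """arctan(1/x) scaled by `scale`, via the alternating Taylor series."""
        total, power, k, sign = 0, scale // x, 1, 1
        while power:
            total += sign * (power // k)
            power //= x * x
            k += 2
            sign = -sign
        return total

    pi_scaled = 4 * (4 * arctan_inv(5) - arctan_inv(239))
    digits = str(pi_scaled // 10 ** 10)  # drop the guard digits
    return digits[0] + "." + digits[1:]

print(pi_digits(30))  # 3.141592653589793238462643383279
```

The same function yields hundreds of digits in well under a second, since Python integers have arbitrary precision.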
That, is the power of infinity. | <urn:uuid:4a8aaf14-392d-474d-96ea-73f531db81f6> | 3.46875 | 319 | Personal Blog | Science & Tech. | 53.556471 |
Acarids are mites belonging to the suborder Astigmata in the Order Acariformes. Most species are terrestrial, among them serious stored-product pests such as Acarus siro. In general, acarids appear to be detritivorous or fungivorous. A number of genera are aquatic or semi-aquatic, and inhabit a wide range of habitats including treeholes and bromeliads (Fashing 1994; OConnor 1994), and slow-flowing streams (pers. obs., HCP). They also have been found on fish and leeches, where they may feed on dead skin or fungus (Proctor et al. 1997).
Aquatic astigmatans of the families Acaridae, Algophagidae and Hyadesiidae are similar in appearance. However, acarids have tarsal claws that attach directly to the tarsi, whereas algophagids and hyadesiids have tarsal claws at the ends of flattened pulvilliform pretarsal stalks. In algophagids, the pretarsi of all legs are the same length, while in hyadesiids the pretarsi of legs I and II are much longer.
Fashing, N.J. 1994. Life-history patterns of astigmatid inhabitants of water-filled treeholes. pp. 160-185 in M.A. Houck (ed.) Mites: ecological and evolutionary studies of life-history patterns. Chapman & Hall, New York.
Krantz, G.W. 1978. A Manual of Acarology. 2nd edition. Oregon State University Book Stores, Corvallis, Oregon.
OConnor, B.M. 1994. Life-history modifications in astigmatid mites. pp. 136-159 in M.A. Houck (ed.) Mites: ecological and evolutionary studies of life-history patterns. Chapman & Hall, New York.
Proctor, H.C., H.M. Gray and B.M. OConnor. 1997. Subaquatic mites (Acari: Astigmata) associated with adult freshwater leeches (Hirudinea: Erpobdellidae). J. Nat. Hist. (Lond.) 31: 539-544.
Walter, D.E. and H.C. Proctor. 1999. Mites: Ecology, Evolution and Behaviour. University of New South Wales Press, Sydney, New South Wales. | <urn:uuid:8b9952f4-c95e-4b18-9664-ad238690f553> | 3.296875 | 519 | Knowledge Article | Science & Tech. | 52.829516 |
Cats are famous for landing on their feet after a fall, but they aren’t the only animals that do so. The tiny pea aphid can also right itself in mid-air, and it does so in a way that’s far simpler than a falling feline.
Pea aphids face many dangers, including parasitic body-snatchers and predators. And since they spend their time sitting on plants, they could be inadvertently eaten by hungry grazing mammals. The aphids have no aggressive defences to deploy. Instead, they escape by falling. If they smell the breath of a grazer, they’ll release their hold on their plant, and tumble to safety.
Moshe Gish from the University of Haifa was studying this behaviour when he noticed that the aphids always rotate their bodies during their descent to land on their feet. To test this ability, he teamed up with Gal Ribak from the Israel Institute of Technology. They placed aphids on fava beans hanging over a layer of petroleum jelly, and threatened them with a ladybird—one of their natural predators. The aphids fell off and the jelly preserved the outline of their impact. Up to 95 percent of the insects landed on their feet.
How do they manage? Cats do it by twisting their flexible backbones, but the aphids have no such specialised structures. Instead, the secret is in the way they hold their limbs. When Ribak and Gish dropped dead aphids from a height, only 52 percent landed feet-first. If the duo amputated the insects’ limbs, things got even worse—just 28 percent landed the right way up. A limbless aphid is even worse at landing than a dead one.
With high-speed videos, Ribak and Gish found that a falling aphid always move its antennae forward and upwards, while holding its back legs above its body. This odd posture makes it look like a tiny insect base-jumper. It also ensures that the only stable orientation is a feet-down one. As the aphid falls, the forces acting upon it automatically rotate its body so that it’s the right way up. Unlike a cat, it doesn’t need to actively twist anything. By splaying out its appendages, it relies on physics to passively right itself.
Why does this matter? At their small size, the fall won’t kill or injure them. However, the ground is full of danger too, including a different set of predators and a lack of food. But falling aphids might never hit it. Ribak and Gish found that when the aphids right themselves, they can use their sticky feet to grab onto lower parts of their host plant as they fall. If they hit those same parts with their backs, they simply bounce off. By righting themselves mid-air, they escape from danger above, without falling into danger below.
Many other insects can control their falls. For example, some species of gliding ants and bristletails, which live in the rainforest canopy, can steer their descent well enough to land upon the trunk of their home tree. It’s possible that these primitive aerobatics were the forerunners to true insect flight, providing a stage for the evolution of wings.
Reference: Ribak, Gish, Weihs & Inbar. 2013. Adaptive aerial righting during the escape dropping of wingless pea aphids. Current Biology http://dx.doi.org/10.1016/j.cub.2012.12.010 | <urn:uuid:e0bceee2-4f5a-4b82-b331-5a135be75ecb> | 3.765625 | 735 | Knowledge Article | Science & Tech. | 65.424545 |
For thousands of years, octopuses have been known for their unique appearance, with their massive bulging heads, electric eyes that we can’t ignore, and eight long arms. But the most interesting facts about the octopus are its life span in the wild and how it tries to survive predators.
The average life span of common octopuses in the wild is 1 to 2 years. When it comes to surviving in the wild, the octopus is a total genius. Octopuses can hide in plain sight because they are able to change the color of their skin to match the colors, patterns or textures surrounding them. A predator can swim right past an octopus without ever seeing its meal. If an octopus is spotted by a predator, it releases a cloud of black ink, leaving the predator blinded and giving the octopus plenty of time to swim away and hide. It has also been shown that the ink contains a substance that deadens a predator’s sense of smell, making it very difficult to track the fugitive octopus. Octopuses are fast swimmers as well, and their soft, boneless bodies can squeeze into cracks far too small for predators to follow.
If an octopus is having a truly bad day and a predator grabs hold of an arm, it can give up that arm to escape; the arm regrows without permanent damage to the body. Octopuses also resemble other venomous animals in that their beaklike jaws let them deliver a bite laced with venomous saliva, used to subdue prey and deter attackers, and they are known to use their ink to blind victims before striking. Octopuses are considered by many scientists to be among the most intelligent invertebrates. The most common octopuses are found in tropical climates where ocean waters are mild. Most octopuses are average in size, but many can grow to about 3.5 feet in length and weigh as much as 22 pounds.
Wednesday, August 10, 2005
Looking For Signs Of Water On Mars Today So Humans Can Survive There Tomorrow.
But with the recent deployment of MARSIS on Europe's Mars Express in orbit around Mars and the pending launch of the Mars Reconnaissance Orbiter (MRO) on Thursday, the search for water on Mars today begins in earnest. MARSIS is a radar instrument on the Mars Express that is capable of probing and mapping up to five kilometers below the surface of Mars. NASA's MRO will carry its own instrument that will be capable of mapping up to a kilometer below the surface. Enrico Flamini, an Italian scientist involved with both projects, says,
"With MARSIS we are going to have the broad picture of the distribution of water on Mars, while with SHARAD we are going to have the defined picture. We will be able to provide the position, the depth, and the extension of the possible ice and water layers that are under the Martian surface at a depth that can be reached by future Mars exploration."
It is interesting to learn about the ancient geology and climate of Mars, and the rovers have been incredibly helpful in that regard. But for planning human exploration of Mars, it is even more important to understand the current condition of the planet. Knowing where water lies below the surface of Mars is vital if human exploration is to be successful. The money being spent on these two projects is more than just funding for science; it is an investment in humanity's future. All upcoming robotic missions to Mars should focus more on understanding Mars today in order to pave the way for human exploration of the Red Planet.
Current Biology, Volume 18, Issue 16, R686, 26 August 2008
It may not feature highly on the ‘must-see’ lists of the thousands of tourists who visit three of the most popular islands in the Caribbean, but until now Barbados, St Lucia and Martinique have all been able to claim to be home to the world's smallest snake. But a new report suggests Barbados may steal a march on its neighbours: a new, even smaller snake has been discovered there.
The new threadsnake was discovered by Blair Hedges, an evolutionary biologist at Penn State University. He found it in a forest fragment on the east side of Barbados. He believes the species is rare because most of its potential forest habitat has been lost to buildings and agriculture. “The Caribbean is particularly vulnerable because it contains an unusually high percentage of endangered species and, because these animals live on islands, they have nowhere to go when they lose habitat,” he says.
Hedges determined that the Barbados species is new to science on the basis of genetic differences from other species and its unique colour pattern and scales. He also determined that some old museum specimens had been misidentified and actually belong to this new species. Hedges published his study of the new snake in the journal Zootaxa earlier this month.
Hedges proposes the name Leptotyphlops carlae for the new snake, which he believes is the smallest of the 3,100 known snake species. At an adult length of only around 10 cm, it is considerably shorter than the previous record holder — which reaches a length of around 14 cm.
Hedges believes the newly described snake may be at the minimum possible size for snakes, “Snakes may be prevented by natural selection from becoming too small because, below a certain size, there may be nothing for their young to eat,” he said. This snake, like its relatives, likely feeds on the larvae of ants and termites.
Another constraint on smallness appears to be reproduction; while large snakes can produce dozens of eggs, the smallest snakes appear able to produce just one egg or offspring. And these offspring are proportionately very large compared to the adults. The hatchlings of the smallest snakes can be up to half the size of the adult whereas the largest snakes produce offspring just one-tenth of the adult size. “The fact that tiny snakes produce just one massive egg — relative to the size of the mother — suggests that natural selection is trying to keep the size of the hatchlings above a critical limit in order to survive,” says Hedges.
He also describes a new threadsnake from St Lucia that is almost, but not quite, as small as the one from Barbados. Although St Lucia has vastly more rainforest and potentially suitable habitat, for the time being it appears that Barbados holds the title in the small snake stakes. | <urn:uuid:317ca90f-8d88-40c9-a11c-a782baab7d7e> | 3.46875 | 617 | Truncated | Science & Tech. | 41.101254 |
Tidal marshes, a type of wetland, can be found along protected coastlines in middle and high latitudes worldwide. They are most prevalent in the United States on the eastern coast from Maine to Florida and continuing on to Louisiana and Texas along the Gulf of Mexico. Some are freshwater marshes, others are brackish (somewhat salty), and still others are saline (salty), but they are all influenced by the motion of ocean tides. Tidal marshes are normally categorized into two distinct zones—the lower or intertidal marsh, and the upper or high marsh.
In saline tidal marshes, the lower marsh is normally covered and exposed daily by the tide. It is predominantly covered by the tall form of smooth cordgrass (Spartina alterniflora). The upper marsh is covered by water only sporadically, and is characterized by the short form of smooth cordgrass, spike grass, and black grass (Juncus gerardii). Saline marshes support a highly specialized set of life adapted for saline conditions. Brackish and fresh tidal marshes are also associated with specific plants and animals, but they tend to have a greater variety of plant life than saline marshes.
Functions & Values
Tidal marshes serve many important functions. They buffer stormy seas, slow shoreline erosion, and are able to absorb excess nutrients before they reach the oceans and estuaries. High concentrations of nutrients can drive oxygen levels low enough to harm wildlife, as in the "Dead Zone" of the Gulf of Mexico. Tidal marshes also provide vital food and habitat for clams, crabs, and juvenile fish, as well as offering shelter and nesting sites for several species of migratory waterfowl.
Pressure to fill in these wetlands for coastal development has led to significant and continuing losses of tidal marshes, especially along the Atlantic coast. Pollution, especially near urban areas, also remains a serious threat to these ecosystems. Fortunately, most states have enacted special laws to protect tidal marshes, but much diligence is needed to assure that these protective measures are actively enforced.
Disclaimer: This article is taken wholly from, or contains information that was originally published by, the Environmental Protection Agency. Topic editors and authors for the Encyclopedia of Earth may have edited its content or added new information. The use of information from the Environmental Protection Agency should not be construed as support for or endorsement by that organization for any new information added by EoE personnel, or for any editing of the original content. | <urn:uuid:45a03931-e38b-437b-b72e-4286a291e19e> | 3.890625 | 511 | Knowledge Article | Science & Tech. | 25.361079 |
The Singularity, Infomania, and Programmed Reality (cont.)
By Jim Elvidge
Figure 1 is a recreated chart from the data presented by Ray Kurzweil in his book The Singularity is Near. It demonstrates the exponentially accelerating pace of change in human evolution. When plotted on log-log paper, exponential trends appear as straight lines. The trend shown in this particular chart is the time between successive significant evolutionary events, both biological and technological. Ostensibly, each successive event is equivalently significant in terms of evolution or technical advancement compared to the previous one. Therefore, it shows how evolutionary events and technology are accelerating and will reach a point of singularity. In an article in Washington Monthly, the same year that Kurzweil’s book was published, Steve Benen notes that if you extrapolate the graph for a few more orders of magnitude, it “indicates that major, paradigm-busting inventions should be spaced about a week apart these days.”
I’ve shown that segment of the graph in grey. Kurzweil has correctly pointed out that you can’t project a log-log graph into the future, but nevertheless, this graph seems to be implying that we are on a trend to a Singularity in the current year. If it were not the case, the trend would be diverging to the right.
Another way of looking at it is to redraw the graph with the x-axis being “Time before 2045” instead of “Time Before Present.” Figure 2 shows such a graph using the same events. If the Singularity were really to happen at 2045 and the events are indeed chosen correctly, they should fall on the straight line. However, they do not.
As can be seen, the paradigm-shifting events are diverging toward the current day. What’s going on? One possibility is that the events chosen are wrong. Perhaps the technological advance from the computer to the PC, for example, is not as significant as the evolution between Homo Erectus and Homo Sapiens. Certainly, the choice of significant events is somewhat arbitrary. Unfortunately, we are looking at the problem through the lens of present-day biases.
Even so, it does seem that the PC and the World Wide Web have been the two most significant technological paradigm shifts in recent years. The time between the Computer and the PC was 38 years. The time between the PC and WWW was 13 years. If progress were truly exponential, the next major invention should have occurred in 2001. What was it? It seems to me that we might be looking at a couple of possible significant events in our near future:
- A computer passes the Turing Test (true AI)
- Artificial Life is created
- Brain-Computer Interfaces
In the recent Loebner Prize competition at the University of Reading, one computer system came within 5% of fooling enough judges into thinking it was human during a natural language conversation. Given that, one might suspect that we will get AI within a few years.
A year ago, Craig Venter announced that he was on the brink of creating artificial life.
And, although Brain-Computer Interfaces are a technology at its infancy, it certainly has begun, with 60-pixel bionic eyes a reality, and recent successful experiments in determining sensory stimuli merely by analyzing brain waves.
So, it seems that the next paradigm-shifting event may occur about 17 years after the last (WWW), give or take. But the most recent one occurred about 13 years after the previous one (PC). So, by that rationale, the pace of exponential technological evolution is slowing down.
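The interval arithmetic above can be sketched in a few lines of code. This is an illustrative check only: under a pure exponential trend, each gap between paradigm shifts shrinks by a roughly constant ratio, so the next gap is the last gap times the most recent shrink ratio. The event years below are assumptions chosen so the gaps match the 38- and 13-year figures quoted in the text; they are not Kurzweil's exact dataset.

```java
// Sketch: check whether inter-event gaps shrink geometrically, the
// signature of exponential acceleration discussed in the article.
public class PaceCheck {

    // Gaps (in years) between consecutive event years.
    static int[] gaps(int[] years) {
        int[] g = new int[years.length - 1];
        for (int i = 0; i < g.length; i++) {
            g[i] = years[i + 1] - years[i];
        }
        return g;
    }

    // Under a pure exponential trend, the next gap equals the last gap
    // multiplied by the most recent shrink ratio.
    static double nextGap(int[] g) {
        double ratio = (double) g[g.length - 1] / g[g.length - 2];
        return g[g.length - 1] * ratio;
    }

    public static void main(String[] args) {
        // Assumed years: computer, PC, WWW -- picked to reproduce the
        // 38-year and 13-year gaps quoted in the text.
        int[] years = {1946, 1984, 1997};
        int[] g = gaps(years);
        System.out.println(java.util.Arrays.toString(g)); // [38, 13]
        System.out.printf("next gap ~ %.1f years%n", nextGap(g)); // ~4.4
    }
}
```

With these assumed years, the extrapolated next gap is about 4.4 years, which would put the next paradigm shift around 2001, matching the article's figure.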
- Lanier, Jaron, “One-Half of a Manifesto,” Wired, December 2000. http://www.wired.com/wired/archive/8.12/lanier.html?pg=1
- Zeldes, Nathan, David Sward, and Sigal Louchheim, “Infomania: Why we can’t afford to ignore it any longer,” First Monday, 2007.
- Kurzweil, Ray, “The Singularity is Near,” Viking Penguin, 2005.
- Benen, Steve, “The Singularity,” Washington Monthly, 21 September 2005. http://www.washingtonmonthly.com/archives/individual/2005_09/007172.php
- Williams, Ian, “Artificial intelligence gets a step closer,” vnunet.com, 13 October 2008. http://www.vnunet.com/vnunet/news/2228123/ai-gets-step-closer
- Pilkington, Ed, “I am creating artificial life, declares US gene pioneer,” The Guardian, 6 October 2007. http://www.guardian.co.uk/science/2007/oct/06/genetics.climatechange
Down from the Mountains
by Berry Wijdeven
Picture if you will, a young German
palaeontologist, roaming the mountain ranges of Europe, examining the
fossilized remains of a giant sponge reef deposited some 150 million
years ago. Remnants of the reef can be found from Russia all the way to
Spain and Portugal. Portions have even been found in Newfoundland. They
were part of a giant reef system, 7,000 km long and up to 60 meters
thick which was the largest living structure ever created.
While the existence of the reefs had been known
for some time, little was known about their ecology, how the sponges
lived and interacted with their environment. These were the answers the
young palaeontologist was looking for as he examined the exposed
remnants of the giant reef, studying the thick layers of fossilized
sponge, searching for clues to ecosystems which had become extinct more
than 40 million years ago.
Then one day in 1996, the palaeontologist happened
upon an article which would change his life and bring him down from the
mountains. Published in 1991 by four Canadian scientists from the
Pacific Geoscience Centre in Sidney, BC, the article described the
discovery of anomalies picked up during sonar scans of the Hecate
Strait and Queen Charlotte Sound. According to the scientists, the
anomalies were sponge reefs.
Palaeontologist Dr. Manfred Krautter still
sounds slightly overwhelmed as he describes his first reaction to the article.
“At first I couldn’t believe
it,” he says. “We
palaeontologists had thought they had died out.”
Dr. Krautter contacted the scientists at the
Geoscience Centre and suggested a joint venture to study the reefs. He
received an enthusiastic response and by 1999 the team was ready to
start their study.
Just getting a glimpse of the sponges
wasn’t easy. Located at depths of 150 to 250 meters, the
reefs can’t be reached with scuba gear.
So the team secured the services of the research
vessel CCGS John P. Tully and the two-person submersible Delta. The sub
was cramped and cold, with water temperatures hovering around four
degrees Celsius at those depths, but the scientists were rewarded with
the opportunity to finally sneak a peek.
In July 1999, the sub, with Dr. Krautter on board,
made the first of 18 dives. For the first time ever, anywhere in the
world, living, thriving sponge reefs were being studied by direct observation.
“It was like a time machine,”
says Dr. Krautter. “Like a dive back millions of years. You
get in the sub, in the present time, dive down and arrive 140 million
years ago.” The journey didn’t disappoint.
Videos of the dives provide a glimpse of this
hitherto unknown world. As the sub descends it passes endless numbers
of jellyfish. Then it gets dark. Near the sea bottom at a depth of some
198 meters, the sub’s lights are turned on. At first, the
camera focuses on a section of muddy sea floor, but as the sub begins
to move, the sponge reef comes into view. Masses of white, ghostlike
shapes emerge. There are sponges everywhere.
Some look like giant wine goblets. Others have
multiple finger-like protrusions or bouquets of slender tubes. It looks
eerie, alien, and stunningly beautiful. Over the next three years,
using the sub, a remote controlled vehicle and side-scan sonar data,
the team discovered four reef structures covering about 700 km² of
seafloor in Queen Charlotte Sound and Hecate Strait. The mounds are up
to 21 metres high and many kilometres wide. Radiocarbon dating has
determined the reefs to be 9,000 years old.
Sponges are among the oldest life forms on earth.
They have been found fossilized in rock layers 600 million years old.
Sponges still flourish today with more than 7,000 known species in both
fresh and marine waters all over the world.
Unlike the soft sponges we sometimes find washed
up on the beach or those in our bathtubs, the sponges that make up the
reefs are siliceous or glass sponges that have a rigid structure
created by using silica dissolved in the water. The waters off the BC
coast have some of the highest silica content in the world. This silica
originates in feldspar deposits in the BC interior and is brought to
the coast as sediment in rivers and streams.
Each siliceous sponge can grow up to 1.5 meters
tall. The sponge walls are thin and brittle, two to three millimetres
in thickness, making them extremely fragile. Touch them and they break
apart. When a sponge dies, its skeleton becomes part of the structure
of the reef. Because the sponges are so delicate, the structure of the
reef is dependent on the sponge’s ability to trap sediment
for strength and support.
“If you have too high a sediment input,
the sponges won’t like it, they will die. And if it would be
too low, the structure will collapse. It’s a very balanced
system. And that’s just dealing with the sediment.
It’s also very balanced speaking about nutrients. There are
many, many factors playing together in forming this little niche.
That’s why it’s so unique.”
Though the reefs have survived for 90 centuries,
they have not escaped recent damage. The culprit? Bottom trawling.
“There’s a LOT of
damage,” says Dr. Krautter. “Before we did the
cruise in ’99, we studied all these side-scan sonograms and
based on this data we choose certain areas to go with the submersible.
When we came here and went down with the sub we couldn’t find
any sponge anymore. It had been erased, like a desert.”
According to Dr. Krautter much of the most
southerly reef has already been lost.
“… we know it is bottom
trawling, because we could see the trawl marks. We could see the trawl
marks on the side-scan and we saw it on the screen of the digital …”
Damage to the reefs is impossible to restore. Once
a reef area is gone, it is gone forever. The reason for this is that
the sponges need firm ground to settle on. The scouring action of the
glaciers provided this solid base when the reefs were established
following the last age of glaciation. Since then, however, much of the
ocean floor has gradually been covered with a thick layer of sediment,
making it impossible for the sponges to anchor themselves.
For Dr. Krautter the saddest part is that bottom
trawlers shouldn’t even be on the reefs.
“Trawling makes no sense in these areas
because we saw a lot of fish in the reefs, but only juvenile fish.
Juvenile rock fish. It’s a nursery, a kind of kindergarten.
They use the niches, the caves, the areas between the sponges to hide
from predators. And after getting a certain size, they get out. If you
erase the reefs, you will absolutely erase the fish stock.”
In July 2002, the sponge reefs got a reprieve of
sorts. Robert Thibault, the Minister of Fisheries and Oceans, declared
the sponge reefs closed to bottom trawlers. These fishery closures are
a good start, but they lack permanence. Long-term protection is still
needed for the reefs.
Dr Krautter displays a lot of passion when it
comes to the sponge reefs. He is clearly anxious not to lose this
opportunity, an opportunity he never dreamed possible.
“These are the only siliceous sponge
reef systems in the world,” he says. “We have an
exceptional opportunity and maybe a last chance to learn more about
these "living fossil reefs" and their environment and therefore we
should use it!”
Years ago, a young palaeontologist started looking
for answers in the mountains of Europe. Now, the depths of Hecate
Strait may finally provide him with those answers.
Trouble with Trolling
by Lynn Lee and Berry Wijdeven
Trolling. It’s considered by many to be
one of the least invasive methods of commercial fishing along the BC
coast. There are no nets involved, by-catch isn’t much of a
problem and there’s little impact on fish habitat. Fishers
are dependent on fish striking the lures, providing at least a
semblance of fairness.
So what then is the trouble with trolling? Well,
it turns out people sometimes confuse trolling with trawling. And
trawling, now there’s a whole different kettle of fish.
Trawling’s got “issues”. Bottom trawling,
commonly known as dragging, has come under attack for its impact on
fish habitat as the heavy weights and big nets crush and flatten the
sea bottom, damaging and destroying fish habitat and delicate ocean
life. Equally troublesome is the catch of a substantial volume of
undersized or unwanted fish, the so-called by-catch, while in pursuit
of the legal-sized targeted fish species. There are also mid-water
trawlers who don’t damage fish habitat but can still produce bycatch.
By now you probably get an idea why trollers
don’t want to be confused with trawlers. To further clear the
muddy waters, here is a brief description of these two commercial
fishing practises in BC. Stay tuned for upcoming issues to learn about
other coastal fisheries.
First Nations people were the first on the coast
to troll for fish. Using dugout canoes, they would hold the fishing
line in their hand or wrapped around their paddle as they moved through
the water. Nowadays, commercial fishing boats with poles and multiple
fishing lines have replaced this traditional hand-lining technique.
Some boats deliver fresh salmon to the packing plants, packing ice to
chill fish caught over a 10-day trip. Other, generally larger trollers,
have flash freezers on-board, allowing them to freeze their freshly
caught salmon and stay out on the water for several weeks.
On the BC coast, commercial trolling for salmon
involves dragging up to six 300 metre long weighted stainless steel
fishing lines with multiple hooks behind a moving vessel. The fishermen
can be very selective and use knowledge about the behaviour of each
salmon species to adjust lures, fishing depth, fishing location,
fishing speed and other intuitive factors to catch the salmon of choice.
North Coast commercial salmon trollers target all
five species of Pacific salmon: chinook, sockeye, coho, chum and pink,
depending on the numbers of each species expected to return to major
spawning streams. Of the different commercial methods of catching
salmon (gillnet, seine and troll), trolling provides the highest
quality salmon to markets. Although the majority of the BC troll fleet
targets salmon, trolling is also used to catch albacore tuna further
offshore and sometimes to catch lingcod and halibut. The BC
recreational fishery also uses trolling to catch salmon and halibut.
Recreational fishers can only use one single hooked line per fisher.
Trawlers are industrial fishing vessels designed
to catch a lot of fish in one fell swoop. The boats drag a long
wedge-shaped net that narrows into a funnel shaped bag called the
“cod end”. On an otter trawl, the mouth of the net
is kept open by water pressure on two “otter doors”
situated on either side of the net. As the net is dragged along, fish
in front of the net are forced into the cod end. Different sized nets
are used depending on the fish species targeted in order to allow
undersized fish to escape. Trawl tows can last for up to 3 hours.
Trawlers can drag a net in mid-water (pelagic
trawling) and/or along the seafloor (bottom trawling). Mid-water
trawling is used to catch fish like hake that school in large groups
within the water column. Bottom trawlers target bottom-dwelling fish
like cod, sole and flounder. In BC, trawl nets are often fitted with
heavy pieces of rubber tire that roll the net along rough, rocky
seafloor. There are also smaller otter and beam trawls used to catch
shrimp. As their name suggests, beam trawlers use a metal beam instead
of otter doors to keep the mouth of the net open. They also have a
“tickler chain” in front of the net to cause shrimp
to jump up out of the soft bottom. In “dredging”,
nets with chain-mesh bottoms are dragged through soft bottom to catch
scallops (common on Canada’s East Coast).
Trawling provides high volume, lower quality fish
to markets. An otter trawl can bring in 60 tonnes (about 132,000 lbs) of fish
or more in one haul of the net. Good skippers can generally target and
capture the fish species they are after, but it is impossible to avoid
at least some bycatch. At times, trawlers can be responsible for a lot
of bycatch, including fish and shellfish that have no commercial value,
undersized commercial fish and commercial fish that they cannot retain
either because they are not allowed to keep them or because they are
over allotted quotas.
Then there is the habitat issue. Trawlers have a
big impact on the sea bottom. Everything in the path of the trawl net
is disturbed and all animals at or near the bottom are removed or
destroyed. Although some of these animals are quick to re-colonize a
trawl area, many are slow to return and, if an area is constantly
disturbed, some may never return.
As you can see, there’s a sea of
difference between trolling and trawling. They just sound the same. And
for trollers, that’s a bit of a drag.
by Lynn Lee and Berry Wijdeven
On July 19th, 2002, the Minister of Fisheries and
Oceans, Robert Thibault, announced that the four sponge reef areas in
Hecate Strait and Queen Charlotte Sound would be closed to groundfish
trawl fishing. Fishing industry groups, including the Canadian
Groundfish Research & Conservation Society and the Groundfish
Trawl Advisory Committee, proclaimed their support for this action and
vowed not to fish the reefs again. Case closed, sponge reefs saved, right?
Well, maybe. Fisheries closures are temporary,
defined in management plans that must be renewed on a yearly basis. If
sponge reefs are to receive meaningful long-term protection, they would
need Marine Protected Area (MPA) status with fishing activity
restrictions under Canada’s Oceans Act.
The difference? Marine Protected Areas are
proactive and planned with long-term objectives, while fisheries
closures are generally reactionary and short-term. MPAs are created
with active participation from interested and affected parties
including fishers and local communities, whereas fisheries closures are
implemented and maintained by government, with little or no community
involvement. And while fisheries closures tend to be isolated and
focussed on a single species or a few commercially important species, MPAs have a
broader focus that can deal with many species, habitats and ecosystems.
In short, MPAs can provide more secure, long-term protection.
While at the national level MPAs do not
specifically exclude any human activities, in the Pacific Region
agencies have agreed that all MPAs should share minimum protection
standards prohibiting ocean dumping, dredging, and the exploration for
or development of non-renewable resources. There must also be a
specific prescription in each MPA to limit other human activities. In
the case of the sponge reefs, an effective MPA designation must
additionally prohibit harmful fishing activities such as bottom trawling.
A quick review of the Oceans Act suggests that
the sponge reefs are ideal candidates for MPA status. Section 35 (1) of
the Act states:
A marine protected area is an area of the sea that
forms part of the internal waters of Canada, the territorial sea of
Canada or the exclusive economic zone of Canada and has been designated
under this section for special protection for one or more of the following reasons:
(a) the conservation and protection of commercial
and non-commercial fishery resources, including marine mammals and their habitats;
(b) the conservation and protection of endangered or threatened marine
species, and their habitats;
(c) the conservation and protection of unique habitats;
(d) the conservation and protection of marine areas of high
biodiversity or biological productivity;
(e) the conservation and protection of any other marine resource or
habitat as is necessary to fulfill the mandate of the Minister (of
Fisheries and Oceans Canada).
The sponge reefs are unique and should be
protected. If you are concerned about their continued well-being and
support their designation as MPAs, your voice will make a difference.
Please send your valuable opinions to:
The Honourable Minister
House of Commons
No postage is required. | <urn:uuid:524cc269-a863-4dcb-93a9-79b73214bc56> | 3.734375 | 3,854 | Content Listing | Science & Tech. | 53.114351 |
Constantly scanning the Earth’s surface, the TRMM Microwave Imager (TMI) allows scientists to both track tropical cyclones and forecast their progression. Used by NOAA’s National Hurricane Center (NHC), the Joint Typhoon Warning Center (JTWC), and tropical cyclone centers in Japan, India, Australia and other countries, detailed microwave information provides data on the location, pattern and intensity of rainfall.
Complementing the TMI is the TRMM’s Precipitation Radar (PR), which turns two-dimensional images into 3D by providing data on vertical rainfall structure. Scientists use PR data to verify their tropical cyclone computer models. They also use the data to understand the distribution and movement of latent heat throughout the storm, particularly in the development of hot towers in the wall of clouds around the eye, which have been linked to rapid intensification. Together, TRMM TMI and PR data help scientists establish key characteristics of where, how and why rain falls in tropical cyclones as well as to better understand storm structure, intensity and the environmental conditions that cause them.
Currently the TRMM Mission observes cyclones at mid to low latitudes around the equator, flying in an orbit that moves between 35°S and 35°N — the distance from the southern tip of Africa to the Mediterranean Sea. The GPM Mission extends tropical cyclone tracking and forecasting capabilities into the middle and high latitudes, covering the area from 65° S to 65°N — from about the Antarctic Circle to the Arctic Circle. This orbit will provide new insight into how and why some tropical cyclones intensify and others weaken as they move from tropical to mid-latitude systems. The sensors onboard other satellites within the GPM constellation along with GPM Core Observatory sensors provide the detailed and global observations needed to estimate, monitor and forecast extreme rainfall that may trigger natural hazards, such as flooding or landslides. | <urn:uuid:6a253274-239f-4e73-bf3f-c5f749d92dde> | 3.203125 | 390 | Knowledge Article | Science & Tech. | 23.40287 |
THE ancestors of plants that use seeds to reproduce began diverging from other plants around 385 million years ago.
When land plants appeared about 450 million years ago they reproduced by releasing spores. The oldest known seeds date from 365 million years ago, but several types are known, so they must have evolved earlier. Now a new examination of a little-known, 385-million-year-old plant fossil called
Philippe Gerrienne of the University of Liège in Belgium reports that
The whole structure is a precursor of later seeds, Gerrienne says, because in later plants ...
Guest Commentary by George Tselioudis (NASA GISS)
In the past few years several attempts have been made to assess changes in the Earth’s planetary albedo, and claims of global dimming and more recently brightening have been debated in journal articles and blogs alike. In a recent article entitled “Can the Earth’s Albedo and Surface Temperatures Increase Together,” that appeared in EOS, Enric Palle and co-authors use recently released cloud data from the International Satellite Cloud Climatology Project (ISCCP) to explain how it is possible for the Earth to be warming even as it’s albedo is increasing. The need for an explanation arises from the author’s claim that the earth’s albedo has increased since the year 2000, an increase that was not followed by a decrease in surface temperature. They base this claim on Earthshine data (a measurement of the glow of the dark side of the moon that they use to deduce the earth’s reflectance) and on an albedo proxy derived from ISCCP parameters after they are regressed with two years of overlapping, but not global, earthshine observations. Subsequently they claim that the rising reflectance of the Earth has not led to a reversal of global warming because the difference between low and middle-plus-high ISCCP clouds has increased in the last four years. This they say implies that as the low-level, cooling clouds have decreased during the most recent years, the high-level, warming clouds have increased even more negating any potential cloud-induced cooling.
There are several issues connected to the use of earthshine data to calculate the earth’s albedo that have been discussed in peer-reviewed publications and that I will not discuss in this posting. I will say a few things, however, about the selective use of ISCCP data in this article to construct qualitative arguments that do not stand up to detailed quantitative analysis .
First, let’s take the claim that the Earth’s albedo has increased in the last four years. This is based primarily on the huge earthshine-derived albedo increase in 2003, which the authors now admit may be caused by undersampling of the data but was the the highlight of the authors’ recent Science paper (Palle et al, 2004). The other three years have values close to zero (relative to the reference year) with two years having error bars extending into the negative territory. The earthshine-trained ISCCP reconstruction of the albedo is a purely statistical parameter that has little physical meaning as it does not account for the non-linear relations between cloud and surface properties and planetary albedo and does not include aerosol related albedo changes such as associated with Mt. Pinatubo, or human emissions of sulfates for instance. Even this albedo reconstruction, however, shows only a weak positive trend in the last four years.
The ISCCP group produces an independent estimate of the albedo, from performing a full radiative flux calculation that takes into account observations of all radiative forcings and produces top of the atmosphere, surface, and in-atmosphere fluxes (data, figure right). This has been shown to be in excellent quantitative agreement with satellite measurements at the top-of-atmosphere and with surface measurements. The year-to-year variations of these values show some qualitative agreement with the earthshine-trained ISCCP reconstruction but very large quantitative differences.
The ISCCP estimate (right) shows a decreasing albedo trend of 1-2% in the 80s and 90s (as opposed to 7-8% in the earthshine-based proxy), a small increase of 1% form 1999 to 2001 and a flattening of the curve in the last three years. Quantitatively similar trends are derived from radiative flux retrievals by the ERBS and Terra and Aqua satellites.
Next consider the difference in trends between low-level and high-level clouds. It’s important that definitions be used carefully when we interpret satellite retrievals. First, the satellite can see actual low clouds only when higher cloud layers are not present. Second, the satellites retrieve the radiative, not the physical top of the clouds. As a result, a low cloud with a cirrus cloud overhead can be classified as a midlevel cloud in satellite observations. All these issues must be taken into account when calculating the radiative effect of clouds, as is done in the radiative calculations by the ISCCP group. More importantly, not all high-level and almost none of the middle-level clouds are radiative-warming agents. There is an optical depth threshold that depends on the cloud top height, above which the cloud becomes a cooling agent even with tops at high altitudes. Therefore the use of combined middle-plus-high clouds as a measure of the warming potential of the cloud field is a substantial overestimate of the effect. Moreover, a more careful look at the changes of ISCCP clouds by cloud type shows that the increase in total cloud cover from 2000 to 2004 is due to a small increases in high-level clouds and a larger increase in middle-level clouds that are mostly thermally neutral and therefore could not cause warming (see figures, data).
The increases in both high-level and middle level clouds (right) are caused by increases in the optically thicker cloud types, cirrostratus and cumulonimbus for the high-level and altostratus and nimbostratus for the mid-level clouds, that due to their large optical depths, cause radiative cooling. In fact, the same radiative calculations performed by the ISCCP group show that the outgoing longwave radiation increases during this time, opposite to the effect claimed. Therefore, the qualitative explanation given in the article is contrary to the quantitative analysis results derived from the ISCCP data.
The reconstruction of radiative fluxes from atmospheric properties is a very difficult and tedious job and both the ISCCP and ERBE/CERES groups are putting a great deal of effort into producing detailed and carefully evaluated radiative flux datasets. Both datasets show little or no albedo trend in the last four years. Thus explanations for how the albedo trends of the last four years are consistent with the surface warming and the ocean heat content increases are not necessarily required at this point in time.
About EL example program and useBean example program
Please show the best example of an Expression Language program and a JSP useBean program in JSP concepts.
About Java Books
Hi.. I want free online books for Core Java (PDF or DOC format). Where can I find these books?
Thanks and regards
Please run the EJB on the Eclipse IDE, because I have a problem with it.
How can I speed up the execution of a thread?
That example was very nice and intuitive.
I want more examples on all topics.
I would like to say thanks to Roseindia for giving the beautiful solution to my question.
In Java, any thread can be a daemon thread. Daemon threads are like service threads: they are used for background supporting tasks and are only needed while normal (non-daemon) threads are running. The isDaemon() method tells whether a thread is a daemon thread or not. The following program demonstrates the daemon thread.
Java Daemon Thread
Daemon thread is a supporting thread.
It runs in the background.
Daemon threads get terminated if no non-daemon threads are running.
Any thread can be set as a daemon thread.
Java Daemon Thread Example
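The example itself did not survive in this scrape, so here is a minimal sketch of a daemon thread (the class name and messages are mine, not from the original post):

```java
public class DaemonThreadExample {
    public static void main(String[] args) throws InterruptedException {
        Thread daemon = new Thread(() -> {
            while (true) {
                System.out.println("daemon: doing background work...");
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        daemon.setDaemon(true);                 // must be called before start()
        daemon.start();
        System.out.println("is daemon? " + daemon.isDaemon());  // is daemon? true
        Thread.sleep(300);                      // let the daemon run a little
        // main (a non-daemon thread) ends here, so the JVM exits
        // and the daemon thread is terminated abruptly
    }
}
```

Note that setDaemon(true) must be called before start(); calling it on a live thread throws IllegalThreadStateException.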
Daemon thread - Java Beginners
Daemon thread Hi,
What is a daemon thread?
Please provide me example code if possible.
Hi Friend,
Daemon threads are the service providers for other threads.
Explain about threads: how do you start a program with threads?
A thread is a path of execution of a program. A program can have more than one thread; every program has at least one thread. Threads used for background services should be marked as daemon threads (example: a timer thread). If only daemon threads remain, the program is terminated. Sleeping or yielding in a thread allows other threads to execute.
daemon thread hello,
What is a daemon thread?
These are threads which can run without user intervention. The JVM can exit when only daemon threads remain, killing them abruptly.
Java Thread Tutorials
In this tutorial we will learn about Java threads in detail. A thread is a simple path of execution of a program; the Java Virtual Machine allows an application to have multiple threads running concurrently.
Thread: Write a Java program to create three threads. Each thread should produce the sum of 1 to 10, 11 to 20, and 21 to 30, respectively. Main thread....
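One way this exercise can be answered (a sketch; class and field names are mine): each worker thread sums its range, and the main thread joins all three before printing the results.

```java
public class ThreeSums {
    // Worker that sums the integers in [from, to]
    static class SumThread extends Thread {
        final int from, to;
        volatile int sum;
        SumThread(int from, int to) { this.from = from; this.to = to; }
        @Override public void run() {
            int s = 0;
            for (int i = from; i <= to; i++) s += i;
            sum = s;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SumThread a = new SumThread(1, 10);
        SumThread b = new SumThread(11, 20);
        SumThread c = new SumThread(21, 30);
        a.start(); b.start(); c.start();
        a.join(); b.join(); c.join();   // wait for all three to finish
        System.out.println("1..10  = " + a.sum);  // 55
        System.out.println("11..20 = " + b.sum);  // 155
        System.out.println("21..30 = " + c.sum);  // 255
    }
}
```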
Java Thread Example
Main Thread and Child Thread
There are two types of threads in a Java program:
In Java there are main and child threads used in programming.
The main thread is automatically created when the program runs.
A child thread gets created by the main thread.
Please visit the following links:
If you are facing any programming issue, such as compilation errors, or are not able to find the code you are looking for, ask your questions; our development team will try to give answers to your questions.
What we did not learn in our class was that, even back at that time, there had been several clever experiments with neutrons which demonstrate the influence of the gravitational potential on the phase of the neutron wave function using interferometers. Neutrons, of course, are ideal particles to perform such experiments, since they have no electric charge and are not subject to the influence of the ubiquitous electromagnetic fields.
But only over the last few years, new experiments have been realised that show directly the quantisation of the vertical "free fall" motion of neutrons in the gravitational field of the Earth. I had heard about them some time ago in connection with their possible role in the detection of non-Newtonian forces, or modifications of Newtonian gravity at short distances. Then, earlier this year, I heard a talk by one of the experimenters at Frankfurt University, and I was quite fascinated when I followed the papers describing the experiments.
The essential point of these experiments is the following: If you prepare a beam of very slow neutrons, with velocities of about 10 m/s, you can make them hop over a reflecting plane much like you can skip a pebble over the surface of a lake. Then, you can observe that the vertical part of the motion of the neutrons, with velocities smaller than 5 cm/s, is quantised. In fact, one can detect the quantum states of neutrons in the gravitational field of the Earth! Let me explain in more detail...
Free Fall in Classical Mechanics ...
In order to better understand the experiment, let's go back one step and consider the very simple motion of an elastic ball which is dropped on the ground. If the ground is plane and reflecting, and the ball is ideally elastic such that there is no dissipation of energy, the ball will jump back to the height of where it was dropped from, fall down again, jump back, fall, and so on. The height of the ball over ground as a function of time is shown as the blue curve in the left of this figure: it is simply a sequence of nice parabolas.
We can now ask: what is the probability to find the bouncing ball at a certain height above the floor? For example, we could make a movie of the bouncing ball, take a still at some random time, and check the distribution of the height of the ball if we repeat this for many random stills. The result of this random sampling of the bouncing motion of the ball is the probability distribution shown in red on the right-hand side of the figure. The probability to find the ball at a certain height in this idealised, "stationary" situation, where the elastic ball is bouncing forever, is highest at the upper turning point of the motion, and lowest at the bottom, where the ball is reflected.
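For completeness, this classical distribution can be written in closed form (a short derivation not spelled out in the post): the time the ball spends near height x during one bounce is inversely proportional to its speed there, v(x) = √(2g(h−x)) for drop height h, so

```latex
p_{\mathrm{cl}}(x) \;=\; \frac{2}{T\,v(x)}
\;=\; \frac{1}{2\sqrt{h\,(h-x)}}\,,
\qquad 0 \le x \le h, \qquad T = 2\sqrt{2h/g}\,,
```

which is smallest at the floor and diverges (integrably) at the upper turning point x = h, exactly the shape of the red curve; one checks that it integrates to 1 over [0, h].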
... and in Quantum Mechanics
So much for classical mechanics, as we know it from every-day life. In quantum mechanics, unfortunately, there is no longer such a thing as the path of a particle, with position and velocity as well-defined quantities at any instant in time. However, it still makes sense to speak of stationary states, and of the probability distribution to find a particle at a certain position. In quantum mechanics, it is the wave function which provides us with this probability distribution by calculating its square. And the law of nature determining the wave function is encoded in the famous Schrödinger equation. The Schrödinger equation for a stationary state is an "eigenvalue equation", whose solution yields, at the same time, the wave function and the value of the energy of the corresponding state. For the motion of a particle in a linear potential - such as the potential energy mgx of a particle with mass m at height x above ground in the gravitational field with acceleration g at the surface of the Earth - it reads
−(ℏ²/2m) ψ''(x) + m g x ψ(x) = E ψ(x).
In some cases, there are so-called "exact solutions" to the Schrödinger equation - wave functions that are given by certain functions one can look up in thick compendia, or at MathWorld. These functions usually are some beautiful beasts out of the zoo of so-called "special functions". Such is the case for the motion of a particle in a linear potential, where the solution of the Schrödinger equation is given by the Airy function Ai(x). Interestingly, this function first showed up in physics when the British astronomer George Airy applied the wave theory of light to the phenomenon of the rainbow...
Quantum States of Particles in the Gravitational Field
As a result of solving the Schrödinger equation, there is a stationary state with a minimal energy - the ground state - and a series of excited states with higher energies. Here is how the wave function of the second excited state of a particle in the gravitational field looks like as a function of the height above ground:
The wave function, shown on the left in magenta, oscillates through two nodes, and goes down to zero exponentially above the classical limiting height, which corresponds to the upper turning point of the parabola of a classical particle with the same energy. For neutrons in this state, this height is 32.4 µm above the plane. The green curve on the right shows the probability density corresponding to the wave function. It is quite different from the classical probability density, shown in red. As a characteristic property of a quantum system, there is, besides the two nodes, a certain probability to find the particle above the classical turning point. This is an example of the tunnel effect: there is a chance to find a quantum particle in regions where, by the laws of classical physics, it would not be allowed to be because of insufficient energy.
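The numbers quoted here can be checked against the standard Airy-function solution (a sketch using textbook values for ℏ, the neutron mass m and g, which the post does not list explicitly). The bound states are ψₙ(x) ∝ Ai(x/l₀ − zₙ), where −zₙ are the zeros of the Airy function:

```latex
% characteristic length for a neutron (m \approx 1.675\times 10^{-27}\,\mathrm{kg}):
l_0 = \left(\frac{\hbar^2}{2 m^2 g}\right)^{1/3} \approx 5.87\,\mu\mathrm{m}.
% With the Airy zeros z_1 \approx 2.338,\; z_2 \approx 4.088,\; z_3 \approx 5.521,
% the energies and classical turning heights are
E_n = m g\, l_0\, z_n, \qquad x_n = l_0\, z_n
\;\Rightarrow\;
x_1 \approx 13.7\,\mu\mathrm{m},\quad x_2 \approx 24.0\,\mu\mathrm{m},\quad x_3 \approx 32.4\,\mu\mathrm{m}.
```

The 32.4 µm turning height of the third state (the second excited state) matches the value quoted above, and the ground-state energy comes out as E₁ = m g x₁ ≈ 1.4 peV.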
However, going from the ground state to ever higher excited states eventually reproduces the probability distribution of classical physics. This is what is called the correspondence principle, and you can see what it means if you have a look at the wave function for the 60th excited state: here, the probability distribution derived from the quantum wave function already follows the classical distribution very closely.
So far, we have been talking about theory: the Schrödinger equation and its solutions in the guise of the Airy function. There is no reason at all to doubt the validity of the Schrödinger equation: it has been thoroughly tested in innumerable situations, from the hydrogen atom to solid state physics. However, in all these situations, the interaction of the particles involved is electromagnetic, not gravitational. For this reason, it is extremely interesting to think about ways to check the solution of the Schrödinger equation for particles in the gravitational field. As we have seen before, the best way to do this is to work with neutrons, in order to avoid spurious electromagnetic effects.
Bouncing Neutrons in the Gravitational Field
Unfortunately, it is so far not possible to scan directly the probability distribution of neutrons in the gravitational field. However, in a clever experimental setup, one can look at the transmission of neutrons through a channel between a horizontal reflecting surface, where they can bounce like pebbles over a lake, and an absorber above. This is a rough sketch of the setup:
The decisive idea of the experiment is to vary the height of the absorber above the reflecting plane, and to monitor the transmission of neutrons as a function of this height. If the height of the absorber is too low, the ground state for the vertical motion of the neutrons does not fit into the channel, and no neutrons will pass the channel. Transmission sets in once the height of the channel is sufficiently large to accommodate the ground state wave function of the vertical motion of the neutron. Moreover, whenever, with increasing height of the channel, one more of the excited wave functions fits in, the transmission should increase. The first of these steps, and the corresponding wave functions and probability densities, are shown in this figure:
The interesting point now is, can this stepwise increase of transmission be observed in actual experimental data? Here are measured data, and indeed - the first step is clearly visible, and the second and third step can be identified:
Adapted by permission from Macmillan Publishers Ltd: Nature (doi:10.1038/415297a), copyright 2002.
This has been the first verification of quantised states of particles in the gravitational field!
What can be learned
You may wonder if the experiment may not have shown just some "particle in a box" quantisation, since the channel for the neutrons formed by the reflecting plane and the absorber may make up such a box. This objection has been raised, indeed, in a comment paper, and has been answered by detailed calculations, and improved experiments: the conclusion about quantisation in the gravitational field remains fully valid!
However, the limits on modifications of Newtonian gravity that can be derived from this experiment remain modest. Such a modification would change the potential the neutrons are moving in. For example, a short-range force caused by the matter of the reflecting plane could contribute to the potential of the neutrons. However, as it turns out, such an additional potential would be very weak and have nearly no influence at all on the overall wave function of the neutron.
Moreover, it is clear that in this experiment, the gravitational field is always a classical background field, which itself is not quantised at all. There may be the possibility that a neutron undergoes a transition from, say, the second to the first quantised state, thereby emitting a graviton - similar to the electron in an atom, which emits a photon when the electron makes a transition. Unfortunately, this probability is so low that it is not reasonable to expect that it may ever be measured....
But all these restrictions do not change at all the main point that this a very exciting, elementary experiment, which could find its way into textbooks of quantum mechanics!
Here are some papers about the "bouncing neutron" experiment:
Quantum states of neutrons in the Earth's gravitational field by V.V. Nesvizhevsky, H.G. Boerner, A.K. Petoukhov, H. Abele, S. Baessler, F. Ruess, Th. Stoeferle, A. Westphal, A.M. Gagarski, G.A. Petrov, and A.V. Strelkov; Nature 415 (2002) 297-299 (doi: 10.1038/415297a) - The first description of the result.
Measurement of quantum states of neutrons in the Earth's gravitational field by V.V. Nesvizhevsky, H.G. Boerner, A.M. Gagarsky, A.K. Petoukhov, G.A. Petrov, H.Abele, S. Baessler, G. Divkovic, F.J. Ruess, Th. Stoeferle, A. Westphal, A.V. Strelkov, K.V. Protasov, A.Yu. Voronin; Phys.Rev. D 67 (2003) 102002 (doi: 10.1103/PhysRevD.67.102002 | arXiv: hep-ph/0306198v1) - A more detailed description of the experimental setup and the first results.
Study of the neutron quantum states in the gravity field by V.V. Nesvizhevsky, A.K. Petukhov, H.G. Boerner, T.A. Baranova, A.M. Gagarski, G.A. Petrov, K.V. Protasov, A.Yu. Voronin, S. Baessler, H. Abele, A. Westphal, L. Lucovac; Eur.Phys.J. C 40 (2005) 479-491 (doi: 10.1140/epjc/s2005-02135-y | arXiv: hep-ph/0502081v2) - Another more detailed discussion of the experimental setup, possible sources of error, and the first results.
Quantum motion of a neutron in a wave-guide in the gravitational field by A.Yu. Voronin, H. Abele, S. Baessler, V.V. Nesvizhevsky, A.K. Petukhov, K.V. Protasov, A. Westphal; Phys.Rev. D 73 (2006) 044029 (doi: 10.1103/PhysRevD.73.044029 | arXiv: quant-ph/0512129v2) - A long and detailed discussion of point such as the "particle in the box" ambiguity and the role of the absorber.
Constrains on non-Newtonian gravity from the experiment on neutron quantum states in the Earth's gravitational field by V.V. Nesvizhevsky, K.V. Protasov; Class.Quant.Grav. 21 (2004) 4557-4566 (doi: 10.1088/0264-9381/21/19/005 | arXiv: hep-ph/0401179v1) - As the title says: a discussion of the constraints for Non-Newtonian forces.
Spontaneous emission of graviton by a quantum bouncer by G. Pignol, K.V. Protasov, V.V. Nesvizhevsky; Class.Quant.Grav. 24 (2007) 2439-2441 (doi: 10.1088/0264-9381/24/9/N02 | arXiv: quant-ph/0702256v1) - As the title suggests: the estimate for the emission of a graviton from the neutron in the gravitational field.
TAG: physics, quantum mechanics
Cicadas are large insects that spend most of their lives underground. Cicadas are known for their loud mating calls.
Cicadas tend to congregate (but act independently) in certain areas that contain many sap-yielding deciduous trees. In the summer their mating season begins, and they start to emit loud mating calls. Cicadas are known to come in "plagues" at set intervals (cicadas remain underground for over a decade).
Adapting to extreme weather: Part 1
Written by James J. Hoorman
Thursday, February 14, 2013 1:49 PM
By James J. Hoorman
In 2011, the wettest and warmest year on record occurred, followed by the hot weather and drought of 2012. Weather experts say that the last 50 years were rather “mild” in terms of weather changes, but we are now entering an era in which we should expect “extreme” weather changes.
Weather and climate both deal with atmospheric conditions like temperature, cloud cover, and precipitation. Weather describes short-term events, like what the temperature is today, while climate deals with average weather changes over time, like what the average temperature is. Global warming is a term used to describe the increase in average temperatures due to greenhouse gases. Climate change describes changes in precipitation (rainfall or snow), wind patterns, sea levels, and extreme events, and includes temperature changes. In the future, while the Midwestern USA is expected to heat up, the west coast may actually be cooler than normal. In the next several decades, due to “climate change”, we should expect average temperatures to rise, more intense precipitation and storms, longer growing seasons, earlier snow melts, and changes in plant and animal migrations.
Among the most powerful but as yet undetected events in the universe are the mergers of supermassive black holes. These billion-solar-mass monsters, residing in the centers of large galaxies, can glom together during galactic collisions after a spiraling dance, as shown here. The colorful bands represent propagating gravitational fields, while the gray spheres indicate the black holes' event horizons, the boundary from within which not even light can escape.
Albert Einstein's general theory of relativity predicts that black hole mergers should send out intense blasts of gravitational waves, ripples in space-time. Scientists are learning how to detect and recognize those waves by studying supercomputer models run at two NASA campuses, the Ames Research Center at Moffett Field, California, and the Goddard Space Flight Center in Greenbelt, Maryland. The simulations reveal that the recoil from the combining of black holes could shoot the resulting merged supermassive black hole right out of its galaxy.
Solid state detectors
Nuclear Instruments & Methods in Physics Research Section a-Accelerators Spectrometers Detectors and Associated Equipment 623,1 (2010) 35-41;
Solid state detectors play a fundamental role in particle physics providing precision vertex detection, identification of heavy quarks and leptons, and momentum measurement. The status of the LHC experiments will be reviewed including their performance and the lesson that we have learned from their construction. The development of extremely radiation hard sensors for the LHC luminosity upgrade will also be discussed. Progress in hybrid, monolithic or semi-monolithic pixel detectors will be presented including the application of monolithic active pixels (MAPS) and DEPFET pixels for RHIC and SuperBelle upgrades.
Experimental Hall C - Under the Beamline
When electrons from the accelerator strike atoms in the target, fragments will fly throughout this room. Two sets of detectors will be used to measure the fragments. The largest detector goes off to the right and is used to measure electrons from the beam that have bounced off of the target. The second detector goes off to the left and is used to measure any fragments that might be knocked out of the nucleus when the beam hits it. These fragments might be protons or other particles.
Here is the same with the pentagramma, but without the sphere (play with the sliders, and try to get a sense of how the octahedra interconnect):
And here are just the five octahedra:
The five octahedra together define what is called a “compound” solid, and, as a compound, exist as an artifact of the pentagramma mirificum. Play with the animations above, and make one for your self.
(*Note that this is not the well-known compound of five octahedra; the author knows of a different compound of five octahedra, and there could well be others. But this is a very interesting compound of five octahedra, as we will see below; we can call it the compound mirificum.)
Take note of how the different octahedra connect. Begin by looking (on the animations above, or the one you made) at a corner of the pentagon of the pentagramma mirificum: at each of these five corners of the pentagon, two octahedra come together, and, as should be evident from the work above, the same two octahedra will also come together at a vertex on the opposite side of the sphere. Thus, each octahedron shares two opposite vertices, and an axis, with another octahedron; this actually occurs twice for each octahedron. In the picture below: the red octahedron connects to the brown and wood colored octahedra at the indicated places (which are corners of the pentagon of the pentagramma).
However, the red octahedron does not connect to the white or silver octahedra; the wood and brown do connect to the silver and white octahedra though. So traveling along the edge of the red, one could, through crossing at a shared vertex, pass to the wood, and from the wood travel to the white, across the vertex they share. So all five are interconnected in this chain-like fashion. Looks complex, at least as it appears visually; but we could, as we did in another way in part one, develop a map. How many faces, edges and vertices does an octahedron have? (If you haven't built one, build one and find out.) The following image will suffice as a planar map of one octahedron, where we have the proper number of sides, vertices, and faces, and they all connect properly, though the sizes are distorted (note that, for this planar map, we treat the largest, outer triangle (EFD) as one of the faces; then, along with the other 7 irregular triangles, the 8 faces of an octahedron are accounted for):
In the compound mirificum, one octahedron connects to another by sharing one pair of opposite vertices, so we could take two such planar maps, and place them vertex to vertex, and we have one of the connections:
However, a connected pair of octahedra share two vertices; no worry, we just have to find the second set of vertices which are be connected, and we can represent this second connection with a dashed line like so:
Thus, the map of the compound of five octahedra will look like this (the colors correspond to the one pictured above; also note that when two dashed lines cross they do not represent a connection, any one dashed green line is only connecting two vertices of two octahedra):
What may initially look like a jumbled mess, once its relations are mapped out, shows a beautiful symmetry; but what can we make of this compound mirificum now that we have this map? The specific way the octahedra connect to one another (any one octahedron is not connected directly to all the others, only to two specific others) is shown clearly here. An investigation of these interconnections leads to the question of “topology.”
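As a small cross-check of the counts used for the planar map above (a program of my own, not part of the original construction), one can place the octahedron's six vertices at the unit vectors ±e₁, ±e₂, ±e₃, join every non-antipodal pair, and recover the face count from Euler's formula V - E + F = 2, itself an early result of topology:

```java
public class OctahedronCounts {
    /** Returns {V, E, F} for the regular octahedron, with F from Euler's formula. */
    static int[] counts() {
        // Vertices: the six unit vectors ±e1, ±e2, ±e3
        int[][] v = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
        int V = v.length, E = 0;
        // Two vertices are joined by an edge exactly when they are not antipodal
        for (int i = 0; i < V; i++)
            for (int j = i + 1; j < V; j++)
                if (v[i][0] != -v[j][0] || v[i][1] != -v[j][1] || v[i][2] != -v[j][2])
                    E++;
        int F = 2 - V + E;                  // Euler's formula: V - E + F = 2
        return new int[] { V, E, F };
    }

    public static void main(String[] args) {
        int[] vef = counts();
        System.out.println(vef[0] + " vertices, " + vef[1] + " edges, " + vef[2] + " faces");
        // -> 6 vertices, 12 edges, 8 faces
    }
}
```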
WHAT WE KNOW ABOUT HURRICANES; Several Theories Have Been Put Forth to Explain Them
By ROBERT K. PLUMB
September 05, 1954, Section REVIEW OF THE WEEK, Page E8
Four to six tropical cyclones build up to hurricane force each year and move out of the region of the equatorial doldrums into the Gulf of Mexico or up to the north and east along the Atlantic Coast. Fortunately, not many hurricanes come ashore.
We know that different languages can have different alphabets. The first step in localizing an alphabet is to find a way to represent, or encode, all its characters. In general, alphabets may have different character encodings.
The 7-bit ASCII codeset is the traditional code on UNIX systems.
The 8-bit codesets permit the processing of many Eastern and Western European, Middle Eastern, and Asian languages. Some are strictly extensions of the 7-bit ASCII codeset; these include the 7-bit ASCII codes and additionally support 128 character codes beyond those of ASCII. Such extensions meet the needs of Western European users. To support languages that have completely different alphabets, such as Arabic and Greek, other 8-bit codesets have been designed.
Multibyte character codes are required for alphabets of more than 256 characters, such as kanji, which consists of Japanese ideographs based on Chinese characters. Kanji has tens of thousands of characters, each of which is represented by two bytes. To ensure backward compatibility with ASCII, a multibyte codeset is a superset of the ASCII codeset and consists of a mixture of one- and two-byte characters.
For such languages, several encoding schemes have been defined. These encoding schemes provide a set of rules for parsing a byte stream into a group of coded characters.
Handling multibyte character encodings is a challenging task. It involves parsing multibyte character sequences, and in many cases requires conversions between multibyte characters and wide characters.
Understanding multibyte encoding schemes is easier when explained by means of a typical example. One of the earliest and probably biggest markets for multibyte character support is in Japan. Therefore, the following examples are based on encoding schemes for Japanese text processing.
In Japan, a single text message can be composed of characters from four different writing systems. Kanji has tens of thousands of characters, which are represented by pictures. Hiragana and katakana are syllabaries, each containing about 80 sounds, which are represented by distinct phonetic characters. The Roman characters include some 95 letters, digits, and punctuation marks.
Figure 1 gives an example of an encoded Japanese sentence composed of these four writing systems:
The sentence means: "Encoding methods such as JIS can support texts that mix Japanese and English."
A number of Japanese character sets are common:
JIS C 6226-1978
JIS X 0208-1983
JIS X 0208-1990
JIS X 0212-1990
There is no universally recognized multibyte encoding scheme for Japanese. Instead, we deal with the three common multibyte encoding schemes defined below:
JIS (Japanese Industrial Standard)
Shift-JIS
EUC (Extended UNIX Code)
The JIS, or Japanese Industrial Standard, supports a number of standard Japanese character sets, some requiring one byte, others two. Escape sequences are required to shift between one- and two-byte modes.
Escape sequences, also referred to as shift sequences, are sequences of control characters. Control characters do not belong to any of the alphabets. They are artificial characters that do not have a visual representation. However, they are part of the encoding scheme, where they serve as separators between different character sets, and indicate a switch in the way a character sequence is interpreted. The use of the shift sequence is demonstrated in Figure 2.
For encoding schemes containing shift sequences, like JIS, it is necessary to maintain a shift state while parsing a character sequence. In the example above, we are in some initial shift state at the start of the sequence. Here it is ASCII. Therefore, characters are assumed to be one-byte ASCII codes until the shift sequence <ESC>$B is seen. This switches us to two-byte mode, as defined by JIS X 0208-1983. The shift sequence <ESC>(B then switches us back to ASCII mode.
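To make the shift-state bookkeeping concrete, here is a small sketch (not from the article) that walks a JIS-style byte stream and counts characters. For simplicity it assumes every shift sequence is the three-byte <ESC>$B or <ESC>(B form discussed above; the two-byte value 0x24 0x22 used in the demo is the JIS X 0208 code for hiragana "a" (illustrative only).

```java
public class JisShiftState {
    /** Counts the characters in a JIS (ISO-2022-JP style) byte stream,
     *  tracking the current shift state while parsing. */
    static int countChars(byte[] b) {
        boolean twoByteMode = false;       // start in one-byte (ASCII) state
        int count = 0;
        for (int i = 0; i < b.length; ) {
            if (b[i] == 0x1B) {            // ESC introduces a shift sequence
                // <ESC>$B switches to two-byte mode, <ESC>(B back to ASCII
                twoByteMode = (b[i + 1] == '$');
                i += 3;                    // both shift sequences are 3 bytes here
            } else if (twoByteMode) {
                count++; i += 2;           // a two-byte JIS X 0208 character
            } else {
                count++; i += 1;           // a one-byte ASCII character
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // "Hi" + <ESC>$B + one two-byte character + <ESC>(B + "!"
        byte[] stream = { 'H', 'i',
                          0x1B, '$', 'B', 0x24, 0x22,
                          0x1B, '(', 'B', '!' };
        System.out.println(countChars(stream));   // -> 4
    }
}
```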
Encoding schemes that use shift state are not very efficient for internal storage or processing. Sometimes shift sequences require up to six bytes. Frequent switching between character sets in a file of strings could cause the number of bytes used in shift sequences to exceed the number of bytes used to represent the actual data!
Encodings containing shift sequences are used primarily as an external code, which allows information interchange between a program and the outside world.
Despite its name, Shift-JIS has nothing to do with shift sequences and states. In this encoding scheme, each byte is inspected to see if it is a one-byte character or the first byte of a two-byte character. This is determined by reserving a set of byte values for certain purposes. For example:
Any byte having a value in the range 0x21-7E is assumed to be a one-byte ASCII/JIS Roman character.
Any byte having a value in the range 0xA1-DF is assumed to be a one-byte half-width katakana character.
Any byte having a value in the range 0x81-9F or 0xE0-EF is assumed to be the first byte of a two-byte character from the set JIS X 0208-1990. The second byte must have a value in the range 0x40-7E or 0x80-FC.
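These byte-range rules translate directly into code; the following is a sketch (enum and method names are mine) of the lead-byte classification just listed:

```java
public class ShiftJisLead {
    // Classification of a byte in Shift-JIS, per the ranges given above.
    enum Kind { ASCII_JIS_ROMAN, HALFWIDTH_KATAKANA, DOUBLE_BYTE_LEAD, OTHER }

    static Kind classify(int b) {            // b is an unsigned byte value 0..255
        if (b >= 0x21 && b <= 0x7E) return Kind.ASCII_JIS_ROMAN;
        if (b >= 0xA1 && b <= 0xDF) return Kind.HALFWIDTH_KATAKANA;
        if ((b >= 0x81 && b <= 0x9F) || (b >= 0xE0 && b <= 0xEF))
            return Kind.DOUBLE_BYTE_LEAD;    // first byte of a two-byte character
        return Kind.OTHER;
    }

    public static void main(String[] args) {
        System.out.println(classify('A'));   // ASCII_JIS_ROMAN
        System.out.println(classify(0xB1));  // HALFWIDTH_KATAKANA
        System.out.println(classify(0x88));  // DOUBLE_BYTE_LEAD
    }
}
```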
While this encoding is more compact than JIS, it cannot represent as many characters as JIS. In fact, Shift-JIS cannot represent any characters in the supplemental character set JIS X 0212-1990, which contains more than 6,000 characters.
Extended UNIX Code (EUC) is not peculiar to Japanese encoding. It was developed as a method for handling multiple character sets, Japanese or otherwise, within a single text stream.
The EUC encoding is much more extensible than Shift-JIS since it allows for characters containing more than two bytes. The encoding scheme used for Japanese characters is as follows:
Any byte having a value in the range 0x21-7E is assumed to be a one-byte ASCII/JIS Roman character.
Any byte having a value in the range 0xA1-FE is assumed to be the first byte of a two-byte character from the set JIS X0208-1990. The second byte must also have a value in that range.
Any byte having the value 0x8E is assumed to be followed by a second byte with a value in the range 0xA1-DF, which represents a half-width katakana character.
Any byte having the value 0x8F is assumed to be followed by two more bytes with values in the range 0xA1-FE, which together represent a character from the set JIS X0212-1990.
The last two cases involve a prefix byte with values 0x8E and 0x8F, respectively. These bytes are somewhat like shift sequences in that they introduce a change in subsequent byte interpretation. However, unlike the shift sequences in JIS which introduce a sequence, these prefix bytes must precede every multibyte character, not just the first in a sequence. For this reason, each multibyte character encoded in this manner stands alone and EUC is not considered to involve shift states.
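A sketch of the corresponding lead-byte dispatch for EUC (method names are mine): note that, unlike JIS, no state needs to be carried from one character to the next, since each prefix byte stands alone.

```java
public class EucJpLength {
    /** Returns the byte length of the EUC character starting with lead byte b,
     *  following the four rules listed above. */
    static int charLen(int b) {               // unsigned lead-byte value 0..255
        if (b == 0x8E) return 2;              // prefix: half-width katakana
        if (b == 0x8F) return 3;              // prefix: JIS X 0212 character
        if (b >= 0xA1 && b <= 0xFE) return 2; // JIS X 0208 character
        return 1;                             // one-byte ASCII/JIS Roman
    }

    public static void main(String[] args) {
        // 'a' + one JIS X 0208 character (0xA4 0xA2) + one JIS X 0212 character
        byte[] s = { 'a', (byte)0xA4, (byte)0xA2,
                     (byte)0x8F, (byte)0xA1, (byte)0xA1 };
        int chars = 0;
        for (int i = 0; i < s.length; i += charLen(s[i] & 0xFF)) chars++;
        System.out.println(chars);            // -> 3
    }
}
```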
The three multibyte encodings just described are typically used in separate areas:
JIS is the primary encoding method used for electronic transmission such as email because it uses only 7 bits of each byte. This is required because some network paths strip the eighth bit from characters. Escape sequences are used to switch between one- and two-byte modes, as well as between different character sets.
Shift-JIS was invented by Microsoft and is used on MS-DOS-based machines. Each byte is inspected to see if it is a one-byte character or the first byte of a two-byte character. Shift-JIS does not support as many characters as JIS and EUC do.
EUC encoding is implemented as the internal code for most UNIX-based platforms. It allows for characters containing more than two bytes, and is much more extensible than Shift-JIS. EUC is a general method for handling multiple character sets; it is not peculiar to Japanese encoding.
Multibyte encoding provides an efficient way to move characters around outside programs, and between programs and the outside world. Once inside a program, however, it is easier and more efficient to deal with characters that have the same size and format. We call these wide characters.
Here is an example that illustrates how wide characters make text processing inside a program easier. Consider a filename string containing a directory path with adjacent names separated by a slash, such as /CC/include/locale.h. To find the actual filename in a single-byte character string, we can start at the back of the string: when we find the first separator, we know where the filename starts. If the string contains multibyte characters, we must scan from the front so we don't inspect bytes out of context. If the string contains wide characters, however, we can treat it like a single-byte character string and scan from the back.
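Python's built-in str type can stand in for the wide-character string here; the Japanese directory name below is an illustrative value, not from the original text.

```python
# On disk the path is a multibyte string. In encodings whose trail bytes
# overlap ASCII -- Shift-JIS famously allows 0x5C, the Windows path
# separator, as a trail byte -- a backwards byte scan can land in the
# middle of a character, so multibyte code scans from the front.
raw = "/CC/インクルード/locale.h".encode("euc_jp")  # illustrative name

# Once decoded to fixed-width characters (str plays the role of the
# wide-character buffer here), scanning from the back is safe.
wide = raw.decode("euc_jp")
filename = wide[wide.rfind("/") + 1:]
print(filename)  # locale.h
```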
Conceptually, you can think of wide character sets as being extended ASCII or EBCDIC; each unique character is assigned a distinct value. Since they are used as the counterpart to a multibyte encoding, wide character sets must allow representation of all characters that can be represented in that multibyte encoding. As multibyte encodings support thousands of characters, wide characters are usually larger than one byte -- typically two or four bytes. All characters in a wide character set are of equal size; that size is not universally fixed, however, but depends on the particular wide character set.
There are many wide character standards; Unicode and ISO/IEC 10646 are among the best known.
The programming language C++ supports wide characters; their native type in C++ is called wchar_t. The syntax for wide character constants and wide character strings is similar to that for ordinary, narrow character constants and strings:
L'a' is a wide character constant, and
L"abc" is a wide character string.
Since wide characters are usually used for internal representation of characters in a program, and multibyte encodings are used for external representation, converting multibytes to wide characters is a common task during input/output operations. Input to and output from a file is a typical example. The file usually contains multibyte characters. When you read such a file, you convert these multibyte characters into wide characters that you store in an internal wide character buffer for further processing. When you write to a multibyte file, you have to convert the wide characters held internally into multibytes for storage on an external file. Figure 3 demonstrates how this conversion during file input is done:
The conversion from a multibyte sequence into a wide character sequence requires expansion of one-byte characters into two- or four-byte wide characters. Escape sequences are eliminated. Multibytes that consist of two or more bytes are translated into their wide character equivalents. | <urn:uuid:72e3f1c7-04f4-4b77-8054-f3232c25f0e7> | 4.25 | 2,331 | Documentation | Software Dev. | 37.724055 |
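The size change involved in this expansion is easy to make visible with Python's codecs; the choice of EUC-JP as the file encoding is just an assumption for the example.

```python
# What the multibyte file holds versus what the program keeps internally.
raw = "漢字 kanji".encode("euc_jp")  # multibyte form: kanji take 2 bytes each
wide = raw.decode("euc_jp")          # fixed-width form: one element per character

print(len(raw), len(wide))           # 10 bytes become 8 characters
```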
How much of planet Earth is made of water?
Very little, actually. While oceans of water cover about 70 percent of Earth's surface, these oceans are shallow compared to the Earth's radius. The above illustration shows what would happen if all of the water on or near the surface of the Earth were bunched up into a ball. The radius of this ball would be only about 700 kilometers: less than half the radius of the Earth's Moon, but slightly larger than Saturn's moon Rhea, which, like many moons in our outer Solar System, is mostly water ice.
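The 700-kilometer figure is easy to check with the sphere-volume formula r = (3V / 4π)^(1/3); the ocean-volume estimate used below (~1.386 billion cubic kilometers, a commonly cited value) is an assumption of this sketch.

```python
import math

ocean_volume_km3 = 1.386e9  # assumed estimate of Earth's surface water
radius_km = (3 * ocean_volume_km3 / (4 * math.pi)) ** (1 / 3)
print(round(radius_km))     # ~692, i.e. about 700 km as stated
```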
How even this much water came to be on the Earth, and whether any significant amount is beneath Earth's surface, remain topics of research.
superimposed epoch method
study of biosphere and temperature impact
...leafing on the buildup of humidity in the lower atmosphere has received the attention of researchers in recent years. In the late 1980s, American climatologists M.D. Schwartz and T.R. Karl used the superimposed epoch method to study the climate before and after the leafing out of lilac plants in the spring in the U.S. Midwest. (This method uses time series data from multiple locations, which...
Deividson on Databases: Triggers
Triggers are pieces of code executed automatically when a certain action happens. That action can be any kind of data manipulation (insertion, update, or deletion). It can also be executed before or after the actual data manipulation, having different options and uses (validating data, allowing or disallowing data manipulations, changing other data, etc.).
In PostgreSQL, triggers are special stored procedures - so everything we saw on the last article can be used here, too. Let's go down to an example:
Example 1: Hit Counting
This can be used on a web page, or something similar. We will have a table to store the ID and IP of each access (you can store any information you want here, such as referrer, time and date, etc.), and another table with a single row to store the actual page views (this can be extended to have an ID for each page on your site, storing individual hits.) Here is the SQL to create these tables:
CREATE TABLE access (
    access_id serial NOT NULL PRIMARY KEY,
    access_ip text
);

CREATE TABLE hit (
    hit_id serial NOT NULL PRIMARY KEY,
    hit_value integer
);

insert into hit values(0, 0);
We'll want to increment the "hit_value" of ID 0 every time an access is recorded. We could use a simple "select count(*)" to count accesses, but that would mean losing the count when you do a clean-up on the access table. (We will not want all that data there forever, will we?) To do it right, we first need to create a procedure that increments the "hit" table when "hit_id" = 0. This is the SQL to create this procedure:
create or replace function add_hit() returns TRIGGER as $$
begin
    if (TG_OP = 'INSERT') then
        update hit
            set hit_value = (select hit_value from hit where hit_id = 0) + 1
            where hit_id = 0;
    end if;
    return new;
end;
$$ Language PLPGSQL;
Here, we see three new commands/features in addition to what we used in the last article in this series: the first one is "returns TRIGGER as $$". This is a trigger-specific return type to hold the changed data that will be stored/updated/deleted from the database, useful when you need to add or change the data before inserting into the database. The other new command is "if(TG_OP='INSERT')". TG_OP will store the operation being executed in the database - useful when you use the same trigger on more than one event (insert/update/delete). And finally, we have "return new". "New" is an internal variable that stores the data after the changes. (In an insert, new is the data being inserted; on an update, new is the existing data after the update; on a delete, new does not exist.) Along with "new", there is also "old", which stores the data before the changes: on delete, old is the data that will be deleted; on an update, old is the data that will be changed, before the change; on an insert, old does not exist.
Now, we will turn our stored procedure into a trigger and activate it. Here is the SQL to do that:
create TRIGGER tg_add_hit
    before insert on access
    for each row
    execute procedure add_hit();
The syntax is pretty simple - "create TRIGGER <trigger name> <before/after> <event(s)> on <table> for each <row/statement> execute procedure <procedure name>([parameters])". trigger name is a unique name identifying the trigger; before/after defines whether the procedure runs before or after the actual data change; event(s) are the events that fire the trigger - 'insert', 'update', 'delete', or a mix of them ("on insert or update"). for each row means the trigger runs for each row of data that gets changed, while for each statement means it runs only once, no matter how many rows a single statement modifies. At the end come the procedure name and its parameters, if it takes any.
Now, to test this trigger, we'll run "select * from hit" to check the current count (should be 0). Then, insert an access with "insert into access(access_ip) values('111');". Then, do a "select * from hit" again, and you will notice that the count changed.
Example 2: Stock/Inventory Control
A classic use of triggers is stock/inventory control - keeping a record of how many of each product you have in stock, and using triggers to change the number of remaining items when some are sold. We will use the following tables in this example:
create table product(
    pro_id serial primary key,
    pro_name varchar(50),
    pro_quant integer);

create table sale(
    sale_id serial primary key,
    sale_value date default current_date);

create table sale_product(
    sp_id serial primary key,
    sale_id integer references sale(sale_id),
    pro_id integer references product(pro_id),
    sp_quant integer);

insert into product(pro_name, pro_quant) values ('Computer', 10);
insert into product(pro_name, pro_quant) values ('Printer', 15);
insert into product(pro_name, pro_quant) values ('Monitor', 10);
insert into sale(sale_id) values (0);
Pretty simple - although I left some "details" (prices, clients, etc.) out of it so we could focus on the quantities and on our trigger. I've also created some basic test data for the products and sales tables. Now, let's create the stored procedure to remove products when they are sold, and activate the trigger for it. Here's the SQL to do that:
create or replace function upd_stock() returns TRIGGER as $$
begin
    if ((TG_OP = 'DELETE') OR (TG_OP = 'UPDATE')) then
        update product
            set pro_quant = pro_quant + OLD.sp_quant
            where pro_id = OLD.pro_id;
    end if;
    if ((TG_OP = 'UPDATE') OR (TG_OP = 'INSERT')) then
        update product
            set pro_quant = pro_quant - NEW.sp_quant
            where pro_id = NEW.pro_id;
    end if;
    if (TG_OP = 'DELETE') then
        return old;
    else
        return new;
    end if;
end;
$$ Language PLPGSQL;

create TRIGGER tg_upd_stock
    before insert or update or delete on sale_product
    for each row
    execute procedure upd_stock();
OK, this one is a bit more complex, so let's go through it slowly. First, it's a trigger that runs on every event ("on insert or update or delete"). If the user is deleting data, it will only give the amount sold back to the stock. If it's an insert, then it will remove only the products being sold from the stock. Finally, if it's updating (changing), then the trigger will first add the old amount back into the product table, then it will remove the new quantity. This is done to prevent data corruption. Even if your system does not support data deletion, for example, this ensures that your database will remain correct, no matter what happens.
Now, if you do want to practice stored procedures and triggers, there are two additions you need to make to this last example. The first one will add a table to store data when you buy stuff and a trigger to add the products to the stock; the second one will add a total to the sales table, add the price of the product to the products table and the price of the product when it was sold to the sale_product table, and create a trigger to add the price of the sold products to the sale total.
PostgreSQL is a very advanced database system, and some of its features can aid you greatly in developing systems, eliminate the need for a considerable amount of external code, and usually result in a faster solution, reduced bandwidth requirements, etc. The options we saw in this series of articles are very powerful but are usually under-used - so it's good to remember that they exist. Who knows - next time you are developing something, they might be exactly what you need.
I hope you enjoyed these articles. In case of any questions or suggestions, make sure to send a Talkback message by clicking the link below.
Deividson was born in União da Vitória, PR, Brazil, on 14/04/1984. He became interested in computing when he was still a kid, and started to code when he was 12 years old. He is a graduate in Information Systems and is finishing his specialization in Networks and Web Development. He codes in several languages, including C/C++/C#, PHP, Visual Basic, Object Pascal and others.
Deividson works in Porto União's Town Hall as a Computer Technician, and specializes in Web and Desktop system development, and Database/Network Maintenance. | <urn:uuid:6baebc09-ab52-4b8d-976e-ec65cbe77395> | 3.40625 | 1,937 | Personal Blog | Software Dev. | 43.25528 |
Bloodsuckers they may be, but who's to say they can't be useful too? Leeches store blood from their most recent meal for months, betraying the identity of their prey to those who care to look – which could help find and count endangered species.
Tom Gilbert of the University of Copenhagen in Denmark fed goat blood to leeches in the lab, and found that some of the goat DNA stuck around for more than four months. Next, his team collected 25 leeches from a remote tropical forest in Vietnam which is rich in rare but shy animals.
Four leeches yielded DNA from a rare striped rabbit, one from a rare muntjac, six from a rare badger and three from a rare goat. The team say the rabbit's presence in the sampled area has been suspected since 1996 but 2000 nights of camera surveillance couldn't confirm it. The badger and goat DNA is the first confirmation they live in the area.
Gilbert says that at $5 to $10 per sample, the method is cheaper than alternatives. And as an added bonus, leeches don't need to be sought out. "They look for you," says Gilbert.
Journal reference: Current Biology, DOI: 10.1016/j.cub.2012.02.058
- guardian.co.uk, Monday 21 February 2011 00.01 GMT
On 12 April 1981, Columbia blasted off from Cape Canaveral. It was the first shuttle to go into space.
Thirty years on, the shuttle fleet is finally being taken out of service. The three remaining shuttles - Discovery, Atlantis and Endeavour - each have one mission left.
At the time of recording, Nasa has scheduled STS-133 for 24 February: Discovery is destined for the international space station.
Back in the 1980s and 1990s, Dr Jeffrey Hoffman flew five times on the shuttle - he was the first astronaut to log more than a thousand hours of flight time on board and travelled more than 20 million miles in space.
But as this era of spaceflight comes to a close, what is the shuttle's legacy and what's next for human spaceflight?
We spoke to Jeff from his new workplace, the Department of Aeronautics and Astronautics at MIT.
He has presented a new documentary on the BBC World Service, The Last Chance to Fly the Space Shuttle.
Harvard applied physics professor David Keith is building a machine that can suck carbon dioxide from the air. Keith has started a company called Carbon Engineering that has attracted venture capitalists that see a future for this technology.
The machine uses a three-step process to filter the air and separate and sequester the carbon dioxide. First, a fan sucks air into the machine, where it enters a 31-foot-long chamber filled with wavy plastic material. A sodium hydroxide solution runs down that plastic and reacts with the CO2, pulling it out of the air and turning it into carbonate solids. Those solids then go into a 900-degree-Celsius kiln where they're broken down into a stream of pure CO2. That pure CO2 is then captured, and can go on to be stored underground or used for other purposes.
The machine reuses ash left behind in the kiln to regenerate the sodium hydroxide solution and the process continues.
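For a rough feel for the capture chemistry, the first step is essentially 2 NaOH + CO2 → Na2CO3 + H2O, so molar masses give the solution's theoretical capacity. Treating this reaction as the dominant pathway is my assumption for illustration, not a detail reported in the article.

```python
# Theoretical CO2 uptake of the sodium hydroxide solution, by mass.
M_NaOH = 39.997  # g/mol
M_CO2 = 44.009   # g/mol

# 2 mol NaOH capture 1 mol CO2.
co2_per_kg_naoh = M_CO2 / (2 * M_NaOH)  # kg CO2 per kg NaOH
print(round(co2_per_kg_naoh, 2))        # ~0.55
```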
Of course, the removal of the CO2 from the air is never the tricky part of these projects; rather, it's what is done with the captured CO2 that leaves people feeling unsure. The permanence of underground storage is still untested.
But the potential for the technology has generated some interest. Bill Gates and other billionaire investors have given money to Keith's project and Keith himself hopes that it can be scaled up to a size that could actually make a positive impact on the environment. | <urn:uuid:ff5aeccb-c117-4995-bdf0-d221761b62f2> | 3.390625 | 292 | Knowledge Article | Science & Tech. | 46.065141 |
Past Time Linear Temporal Logic
Past time Linear Temporal Logic (ptLTL) is a logic for specifying properties of reactive and concurrent systems. ptLTL provides temporal operators that refer to the past states of an execution trace relative to a current point of reference. The logic plugin here is based on an rewriting-based algorithm for generating an optimized monitoring program from an ptLTL formula.
The PTLTL formula follows the following syntax:
<Fm> ::= true | false | A | ! <Fm> | <Fm> /\ <Fm> | <Fm> \/ <Fm> | <Fm> ++ <Fm> | <Fm> -> <Fm> | <Fm> <-> <Fm> | [*] <Fm> | <*> <Fm> | (*) <Fm> | <Fm> Ss <Fm> | <Fm> Sw <Fm> | [ <Fm> , <Fm> )s | [ <Fm> , <Fm> )w | start(<Fm>) | end(<Fm>)
The different operators in decreasing order of precedence are [*], <*>, (*) > ! > /\ > ++ > \/ > -> > <-> > Ss, Sw. A is an event or a predicate. The propositional binary operators are the standard ones: conjunction (/\), disjunction (\/), exclusive disjunction (++), implication (->), and equivalence (<->).
The standard past time and the monitoring operators are called "temporal operators" because they refer to other (past) moments in time. The operator (*) F should be read "previously F"; its intuition is that F held at the immediately previous step of execution. <*> F should be read "eventually in the past F", with the intuition that there is some past moment in time when F was true. [*] F should be read "always in the past F", with the obvious meaning. The operator F1 Ss F2, which should be read "F1 strong since F2", reflects the intuition that F2 held at some moment in the past and, since then, F1 held all the time. F1 Sw F2 is a weak version of "since", read "F1 weak since F2", saying that either F1 was true all the time or otherwise F1 Ss F2. The monitoring operators start, end, [ )s, and [ )w were inspired by work in runtime verification. We found these operators often more intuitive and compact than the usual past time operators in specifying runtime requirements, despite the fact that they have the same expressive power as the standard ones. The operator start(F) should be read "start F"; it says that the formula F just started to be true, that is, it was false previously but it is true now. Dually, the operator end(F), which is read "end F", carries the intuition that F ends being true, that is, it was previously true but it is false now. The operators [F1,F2)s and [F1,F2)w are read "strong/weak interval F1,F2", and they carry the intuition that F1 was true at some point in the past but F2 has not been seen to be true since then, including that moment.
Example: [*](start(dialing) -> !((*)(busyTone\/connected))) (one cannot dial when the phone is busy or connected) | <urn:uuid:cc672424-decf-4ca7-81eb-e5343d0e7b2d> | 3.0625 | 759 | Documentation | Software Dev. | 64.985812 |
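The recursive semantics of the past-time operators lend themselves to constant-space monitoring: each operator needs only its own value at the previous step. The sketch below shows this for (*) F and F1 Ss F2; the class names are mine, not taken from the rewriting-based tool described above.

```python
class SinceMonitor:
    """Tracks f1 Ss f2: f2 held at some past point and f1 ever since."""

    def __init__(self):
        self.holds = False  # strong since is false before any event

    def step(self, f1, f2):
        # Recursive characterisation:
        #   (f1 Ss f2) now = f2 now, or (f1 now and (f1 Ss f2) before)
        self.holds = bool(f2 or (f1 and self.holds))
        return self.holds


class PreviouslyMonitor:
    """Tracks (*) f: the value f had at the immediately previous step."""

    def __init__(self):
        self.prev = False  # convention: false before the first step

    def step(self, f):
        out, self.prev = self.prev, bool(f)
        return out
```

The same per-step state threading extends to the other operators; for example, a weak-since monitor differs only in starting with holds = True.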
"If you have ants in your house," the great Harvard ecologist EO Wilson once said, "be kind to them." Keep this in mind the next time you want to flick one off the kitchen table: The tiny critters, which collectively weigh about as much as all of humanity, could wield a big weapon in the fight against climate change.
In the U.S., corn-based ethanol is a big business, consuming 40 percent of the domestic corn crop and providing roughly 10 percent of the fuel supply, which would otherwise be dirty fossil fuels. But the practice of topping your tank off with corn is fraught with problems: Some argue that the crop should be used for food; it's sensitive to drought; and the ethanol-making process might be contributing to an E. coli epidemic, to name a few. That's why the Obama administration recently announced a plan to invest $2 billion in organic fuels that rely on things other than corn, including switchgrass and gas from cattle poo.
But this weekend, a group of scientists discovered a chemical key that could revitalize corn-based ethanol by allowing it to be made from stalks, leaves, and other bits besides the cob itself. This won't help much with the drought problem (less corn is still less corn), but it could alleviate the food-vs.-fuel debate and the E. coli problem as more kernels are saved to go straight to livestock. Turns out, the savior of ethanol could be the South American leafcutter ant.
What is Global Warming?
There is a good chance that you have heard of global warming before. After all, global warming is often discussed on the radio, on television, and on the internet. Even so, you may not necessarily know what it is. A considerable amount of debate surrounds the idea of global warming, and these debates often lead to confusion.
In scientific terms, global warming is defined as the process that results in a rise in the earth's temperature. This rise is attributed to an increase in greenhouse gases, which are raising the earth's temperature to levels that concern many meteorologists and scientists.
When it comes to global warming and the increase in the earth's temperature, many people wonder why there is so much concern. In many respects a slight rise in temperature is not surprising, but there appears to be no relief in sight. A continued rise in the earth's temperature could lead to the melting of large ice masses. The fear is that this melting will result in a sea level rise that could cause flooding throughout the world. If so, millions of people could be displaced from their homes, with their cities and towns underwater.
Another question often asked about global warming is how it affects us now. While we are beginning to see many changes in our climate and in the earth's temperature, these changes may or may not be attributable to global warming. That is where the debate once again enters the picture. Regardless, if the earth's temperature continues to rise, you may not see the changes firsthand, but your children will. These changes may include warming temperatures, stronger storms, and shifting climate conditions around the globe.
Speaking of the global warming debate, there are many scientists, as well as politicians and other well-known figures, who claim that global warming is nothing to be concerned about. In fact, many claim that reducing greenhouse gas emissions, especially from industrial activities, would have a worse impact on our economy than global warming itself. What makes the debate difficult for many is that both sides claim to have evidence backing up their views and theories.
Many also wonder who will be affected most by global warming. As previously stated, you may not see its impacts in your lifetime, but your children or their children may. That is why you are urged to take action, or at least to examine global warming in depth to see what you can do to help. Global warming may affect where your future family members are able to live, the activities they can take part in, and the economy in general.
In addition to your future family, global warming may also affect sea life and wildlife. It has been said that coral reefs are becoming unstable because of the large quantities of toxins in our waters, and many animals rely on coral reefs for shelter and food. There is also concern for other wildlife, including bears: there have been some unverified reports that bears are hibernating later in the year. This could have a significant impact on wildlife cycles, as well as on the outdoor activities enjoyed by many humans.
As previously stated, if you are concerned about the impact that global warming may have on your future family, you may want to take action. Although global warming itself is widely debated, many note that the commonly suggested prevention steps do the earth good in any case. These steps include keeping your heating at a moderate level, using energy-efficient light bulbs, limiting your car use, and purchasing a hybrid or another energy-efficient vehicle.
Planning on colonizing the moon? Cyanobacteria may be your best friend:
The cyanobacteria were taken from hot springs in Yellowstone National Park in Wyoming, US. When put in a container with water and simulated lunar soil, the cyanobacteria were found to produce acids that are amazingly good at breaking down tough minerals, including ilmenite.
They use the nutrients freed up this way to grow and reproduce. "This is unbelievable," Brown told New Scientist. Breaking down the same minerals artificially would require heating them to very high temperatures, which uses enormous amounts of energy, he says. Cyanobacteria, on the other hand, use only sunlight for energy, though they do their extraction work more slowly than heating the soil artificially. | <urn:uuid:cbc79529-751d-4130-a089-5d884de579d8> | 3.703125 | 150 | Personal Blog | Science & Tech. | 34.863983 |
Qualifies for the list as a Declining Yellow List Species
Photo: Jeff Higgott
Formerly breeding in the U.S. from the Gulf Coast and in the Ohio and Mississippi Valleys as far north as Minnesota, the Swallow-tailed Kite now breeds from coastal South Carolina south to Florida and west to Louisiana and east Texas, where it ranges over freshwater and brackish marshes and lowland and swamp forests.
Winter range includes roughly the northern half of South America where it favors humid forests and avoids arid areas and the higher elevations. The kite also breeds in southern Mexico and in Central and South America.
The birds are gregarious, and several pairs may nest close to each other. For nesting it needs tall, accessible trees (most often pines) and nearby open areas to hunt prey. The birds may roost communally at night, and some premigration roosts may draw hundreds of kites.
Flying insects are the main food items for this species, which often eats its catch on the wing. Nesting birds also feed a variety of small vertebrates to their young.
In 1990 the estimated U.S. population was 800 to 1,150 pairs, with probably 60-65% in Florida. At present the population is thought to be stable.
The main threat to this lovely kite is habitat loss and degradation, largely due to agricultural and urban development, particularly in Florida. Vital habitat is also threatened by logging and flood control, which results in altered hydrology in coastal lowlands. An analysis in the early 1990s in Florida indicated there were about 2,400 square miles of suitable habitat available for the species, of which only 742 square miles were on lands managed for conservation. This is sufficient area for only about 200 pairs of kites.
Recommended conservation measures include avoiding cutting of pines around active nests and protection of large premigration communal roosts, which are used year after year. | <urn:uuid:c2115d58-4474-4f96-bcc6-becebed6dfe2> | 3.453125 | 403 | Knowledge Article | Science & Tech. | 48.31503 |
Editor's note: This activity is both on the Astrocappella web site and the CD-ROM featured in this article. You can read the lyrics to "Doppler Shifting" and hear it performed by the Chromatics at the same site.
Here It Comes, There It Goes!
An activity by Kara C. Granger related to: 'Doppler Shifting'
Every student can demonstrate the Doppler effect! During this interactive outdoor procedure, students will use an ordinary toy to reveal the Doppler effect. The connection is also made to moving cars, and to the shifting of the lines in the absorption or emission spectrum when the distance between a star and Earth is increasing or decreasing.
Students will perform an experiment in which they will demonstrate the Doppler effect. They will also understand the connection to everyday-life examples.
Materials for each group of students:
1. Twist the 'splash out' ball open.
2. Thread one end of the jump rope through the holes of the 'splash out' ball and tie the end back to the rope. Next, twist the wires of the electronic noise making mechanism together with the wires of the battery clip. Plug the battery into the battery clip, and tape this assembly to the inside of the 'splash out' ball. You now have a 'Doppler ball assembly'. See the illustration below.
3. The teacher should stand about 5 meters away from the students twirling the Doppler ball assembly in a circle above his or her head. In order to gain enough speed, let out about 1.5 meters of jump rope as you twirl it.
4. Students should observe, record and describe what they hear as the Doppler ball approaches, passes, and goes away from them.
5. Let different students try twirling the Doppler ball. Ask them to describe what they hear.
This is a demonstration of a phenomenon called the Doppler effect. It results from the motion of a source toward or away from an observer. This effect can occur with both sound and light, because both sound and light exhibit wave-like behavior. For instance, if a source of light were a star moving relative to an observer on Earth, the star's spectrum would be shifted toward the red (moving away) or toward the blue (moving toward) end of the spectrum.
The number of waves reaching an observer in one second is called the frequency. For a given speed, frequency depends upon the length of the wave. Long waves have a lower frequency than short waves. As long as the distance between the source of the waves and the observer remains constant, the frequency remains constant. However, if the distance between the observer and the source is increasing, the frequency will decrease. If the distance is decreasing, the frequency will increase. The images below illustrate this effect.
Now imagine an everyday-life example such as observing and listening to an approaching car. The sound waves coming from the engine are squeezed closer together than they would be if the car were still. This happens because the car is moving in your direction. This squeezing of the waves increases the number of waves (i.e., the frequency) that reach your ear every second. But after the noise of the car's engine passes, the frequency diminishes. In actuality, the sound waves are stretched apart by the car's movement in the opposite direction. As the observer, you perceive these frequency changes as changes in the pitch of the sound. The sound's pitch is higher as the car approaches, and lower as it travels away. An image contained in this activity plan, located below the title of this activity, illustrates what happens.
A similar situation takes place with stars. If the distance between a star and Earth is increasing, the lines in the absorption or emission spectrum will shift slightly to the lower frequency, or red, end of the spectrum. If the distance is decreasing, the lines will shift toward the blue end. The image below shows a simplified star spectrum with red and blue shifting. Notice how the entire spectrum is shifted to the blue (left) or red (right).
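For a moving source and a stationary observer, the shift described above follows f′ = f·v/(v − v_s) when the source approaches and f′ = f·v/(v + v_s) when it recedes, where v is the speed of sound. A small illustrative sketch (the 440 Hz buzzer frequency and 15 m/s twirl speed are assumed values, not figures from the lesson):

```python
# Illustrative sketch: observed frequency for a moving sound source
# and a stationary observer. Values below are assumptions for demo only.
V_SOUND = 343.0  # m/s, speed of sound in air at roughly 20 C (assumed)

def observed_frequency(f_source, v_source, approaching):
    """Doppler-shifted frequency heard by a stationary observer.

    f' = f * v / (v - v_s) when the source approaches,
    f' = f * v / (v + v_s) when it recedes.
    """
    sign = -1.0 if approaching else 1.0
    return f_source * V_SOUND / (V_SOUND + sign * v_source)

# A 440 Hz buzzer twirled at 15 m/s, like the Doppler ball:
f_toward = observed_frequency(440.0, 15.0, approaching=True)   # pitch rises
f_away = observed_frequency(440.0, 15.0, approaching=False)    # pitch falls
print(round(f_toward, 1), round(f_away, 1))
```

The listener by the twirling ball hears the pitch sweep between these two values once per revolution.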
Extensions or Further Discussion:
1. Does the person swinging the Doppler ball assembly hear the Doppler shift? Why or why not?
2. Can the red/blue shift technique be used for objects other than stars? Can you tell which way an emergency vehicle is traveling by the pitch of its siren?
3. What has the Doppler shift told astronomers about the expansion of the universe?
The general idea for this lesson plan was adapted from a Doppler effect lesson located within "Space-Based Astronomy: A Teacher's Guide with Activities" associated with the Office of Space Science Astrophysics division of NASA.
Thank you to John Wood for suggesting the use of the Doppler ball assembly.
**This toy is made by Galoob and can be bought at your local toy store. For more information, contact Galoob Customer Service at 1-800-442-5662.
The type of HTML input tag that we will discuss in this tutorial is the password input tag. The password input tag is very similar to the text input tag that we discussed in the last tutorial. You should read that tutorial before you continue. When you sign in to any private or protected area such as a customer account you encounter a form asking for your username and password. The input field for your username is created with the text input tag and the password input field is created with the password input tag. The most notable difference between these two types of input tags is the built-in security feature for the password input tag. When you type in your password you cannot see the characters of the password on the computer screen. This is because the password input tag tells the web browser to show a dot or an * in place of the characters that you type in. Give it a try below in the sample password input field.
Below is the basic code for this type of input tag.
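The code sample itself did not survive in this copy, so here is a minimal sketch of a basic password input field (the `action`, `id`, and `name` values are illustrative placeholders, not from the original tutorial):

```html
<form action="/login" method="post">
  <label for="pwd">Password:</label>
  <!-- The browser shows a dot or an * in place of each typed character -->
  <input type="password" id="pwd" name="pwd">
</form>
```

Apart from `type="password"`, the tag takes the same attributes as the text input tag from the previous tutorial.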
So how did 2011’s weather shake out in the grand scheme of things? First off, let’s make sure we are all on the same page regarding the difference between weather and climate.
Weather refers to the short term, while climate is about the long term. Weather can be what’s happening outside your window right now or what the past year has been like. Climate is the longer term, big picture view; it describes weather patterns across a number of years. Add all the weather together, and you get climate.
Let’s describe 2011’s weather and compare it with the climate for our region, beginning with snow. Many recall the winter of 2010-11, which, in terms of snow, began in December 2010. A whopping 68 inches of snow fell, making it the third snowiest winter in our 19 years of data collection at the Cary Institute. The winter of 1995-96 brought 92 inches of snow and the winter of 2002-03 brought 91 inches, so, although last winter was a humdinger, it wasn’t outstanding.
Our least snowy winter was 2001-02 with 19 inches. So far, this winter is turning out to be a calm one — with a total of 20.5 inches so far. This is despite a very unusual start, with a freak pre-Halloween snowstorm of 15 inches.
Total precipitation for 2011 tells us a different story. It was the wettest year in our 28-year precipitation data record, with 64.69 inches of rain and snow. Two big events associated with Tropical Storm Irene and then Tropical Storm Lee brought a total of 12.13 inches of rain in late August (Aug. 27-28) and early September (Sept. 5-8). These events resulted in major flooding in the region. While the total volume of precipitation we received did not break records, the already saturated soils and full lakes, ponds and wetlands caused the water to run off into streams and rivers, causing the major floods that we experienced. The long-term record of precipitation at the Cary Institute shows an increase in precipitation, which is consistent with climate change predictions.
How about temperature? While the average temperature in 2011 did not break records, it was the 19th year in a row with an average temperature above normal. Every month in 2011 except January had higher than normal temperatures. While the overall trend in temperature in our 24-year record shows only a slight increase over time, some other indicators show a definite warming trend over time.
The length of the growing season has increased in the last 24 years. That is, the number of frost-free days in the summer has gone up. The date of first autumn frost is coming later and later, but the date of the last frost in the spring isn’t changing.
Another measure of our changing climate can be seen in stream temperatures. Our 19-year record of average stream temperatures shows an increasing trend in the summer, but not the winter months.
While stream temperatures reflect overall air temperature to some extent, they can also be driven by water level and the temperature of water inputs to the stream among other factors. Lower water levels usually result in warmer stream temperatures in the summer.
Vicky Kelly manages the Environmental Monitoring Program for The Cary Institute in Millbrook.
A remote-controlled robot may stop satellites in space from running on empty.
As part of a NASA project, researchers at Johns Hopkins University have modified a robotics console normally used in surgery so it could be used to operate a filling station in space. By refueling aging satellites, their owners can get more useful life out of their expensive hardware. If it works, satellites can be repaired or refueled without having to send out human repair crews.
Johns Hopkins was tapped to address the problem of operating the fuel tanker in space from Earth because of its experience in robotically-enhanced …
A view of Hyperolius nimbae revealing its red legs.
The Mount Nimba reed frog (Hyperolius nimbae) is known only from lowlands at the south-eastern foot of Mount Nimba in Côte d'Ivoire. It may occur in Guinea and Liberia, but this has never been confirmed. With its bright reddish limbs this frog is highly conspicuous, so it probably inhabits a very small range; it would have been seen if it were more widely spread.
After 43 years, it has been rediscovered by Dr. N'goran Germaine Kouame in a swampy field in Danipleu, an Ivorian village near the Liberia border.
Hyperolius nimbae lives in clearings of lowland forest and calls at the edge of large swamps, in which it probably breeds. Although part of Mount Nimba is protected as the Mount Nimba Strict Nature Reserve (which was added to the list of World Heritage Sites in 1981) the site is urgently in need of stricter protection, particularly given that it represents the only known site of several highly threatened species.
PRESS RELEASE: Learn more about the rediscovery of this species.
From the moon to the earth
A rocket taking off from the surface of a planet or other celestial body will escape its gravitational field and continue moving in space, only if it reaches a speed that exceeds the “escape velocity”. Then, even without further propulsion, the rocket will keep moving indefinitely in space, unless it fires again, comes near the gravitational field of another celestial body, is hit by an asteroid or space junk, etc. For the earth’s moon, the escape velocity is 2.4 km/s.
Consider a spaceship that has landed on the surface of the moon and needs to return to earth. The first step would be to fire a rocket on the spaceship and bring it to a speed that is at least equal to the escape velocity. The mass of the spaceship and rocket with empty tanks (namely, without any fuel and oxidizer) is 40 kg. The mass flow rate of the exhaust gas from the rocket’s nozzle is controlled at 40 kg/s. The density of the exhaust gas is 0.50 kg/m3 and the exit diameter of the rocket nozzle is 320 mm. The gravitational acceleration on the moon’s surface is 1.6 m/s2 and its variation may be neglected over the distance that the spaceship travels while the rocket is fired. As you know, the earth’s moon has no atmosphere (unlike some moons of other planets). Determine the minimum mass of fuel/oxidizer that the spaceship must carry in order to escape the moon and the time required for the spaceship to reach the escape velocity. Also determine the height that the spaceship would reach when it attains the escape velocity.
Contributed by Stavros Tavoularis,
Department of Mechanical Engineering, University of Ottawa, Ottawa, Canada. Image from NASA
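One way to attack the problem numerically is sketched below (illustrative only, not an official solution; it adopts the statement's idealizations: constant lunar gravity, vertical flight, constant mass flow rate, and no atmosphere). The exhaust velocity comes from mass conservation at the nozzle, the propellant mass from the rocket-equation gain minus the gravity loss, and the height from integrating the velocity over the burn:

```python
import math

# Given data from the problem statement
M_DRY = 40.0     # kg, spaceship + rocket with empty tanks
MDOT = 40.0      # kg/s, exhaust mass flow rate
RHO_E = 0.50     # kg/m^3, exhaust gas density
D_EXIT = 0.320   # m, nozzle exit diameter
G_MOON = 1.6     # m/s^2, lunar surface gravity
V_ESC = 2400.0   # m/s, lunar escape velocity

# Exhaust velocity from mass conservation: mdot = rho * A * V_e
A_EXIT = math.pi * D_EXIT**2 / 4.0
V_E = MDOT / (RHO_E * A_EXIT)  # roughly 995 m/s

def delta_v(m_prop):
    """Tsiolkovsky gain minus gravity loss over the burn time."""
    t = m_prop / MDOT
    return V_E * math.log((M_DRY + m_prop) / M_DRY) - G_MOON * t

# Bisection for the propellant mass that just reaches escape velocity
# (delta_v is monotonically increasing over this bracket).
lo, hi = 0.0, 1.0e4
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if delta_v(mid) < V_ESC:
        lo = mid
    else:
        hi = mid
m_prop = 0.5 * (lo + hi)
t_burn = m_prop / MDOT

# Height at burnout: midpoint-rule integration of v(t)
n = 10_000
dt = t_burn / n
height = 0.0
for i in range(n):
    t = (i + 0.5) * dt
    m = M_DRY + m_prop - MDOT * t
    v = V_E * math.log((M_DRY + m_prop) / m) - G_MOON * t
    height += v * dt

print(f"V_e ~ {V_E:.0f} m/s, fuel+oxidizer ~ {m_prop:.0f} kg, "
      f"burn ~ {t_burn:.1f} s, height ~ {height / 1000:.1f} km")
```

Under these assumptions the propellant mass comes out on the order of 400 kg, a burn of about ten seconds, and a burnout height of several kilometres; the gravity-loss term is small because the burn is short.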
Rich Zwelling is one of the expert teachers in Knewton’s GMAT course. “Combinatorics” is a word he throws around casually.
I was recently discussing a particular GMAT problem with a friend, and as so often happens with standardized-test nerds, the discussion turned into an extended analysis. We can’t help ourselves, I suppose.
The question went something like this:
Jim and John are workers in a department that has a total of six employees. Their boss decides that two workers from the department will be picked at random to participate in a company interview. What is the probability that both Jim and John are chosen?
Now, with many GMAT problems, there are multiple ways to skin a cat. As it turns out, my friend and I chose completely different strategies that arrived at the same answer. But interestingly enough, our different strategies got us to hit upon some key distinctions between probability and combinatorics.
1. My friend chose to go strictly with probability:
There is a 1/6 chance that Jim will be selected first. Then, there are 5 workers left, so the probability that John is chosen next is 1/5. Therefore, the probability of Jim being chosen first, then John being chosen second is simply 1/6 * 1/5 = 1/30.
However, we also have to consider the possibility that John is chosen first and Jim second. That still leads to the same number: 1/6 * 1/5 = 1/30.
So, because we are interested in each of these possibilities (and nothing else), we must add the two probabilities to get the final answer:
1/30 + 1/30 = 1/15.
2. I chose to bring combinatorics into the picture:
There are 15 possible combinations of 2 people that you can choose from a group of 6. You can find this using the combination formula:
n! / [k! * (n-k)!]
In this case, n = 6, since there are six people total, and k = 2, since we’re finding a subgroup of two. Therefore:
6! / (2! * 4!) = 6 * 5 / 2 = 15 total combinations.
Now, out of those 15 combinations, we are interested in only one — Jim and John. And recall that this is a combination (where order does not matter), as opposed to a permutation (where order does matter). Jim and John is the same combination as John and Jim, since the same two people are involved.
(For clarification, it would be a permutation if, say, John and Jim were running a race, and we awarded different prizes for 1st and 2nd. In that case, Jim finishing first is different from John finishing first. But in our problem, we’re not concerned with who is picked first; we only care about who’s in the group of two).
Back to the problem: We’re interested in only one combination (Jim and John) out of a total of 15. Therefore, the final answer is 1/15.
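As a quick sanity check (an illustrative sketch, not part of the original discussion), both approaches can be verified by brute force in a few lines of Python:

```python
# Brute-force check that the probability and combinatorics views agree.
from itertools import combinations, permutations
from fractions import Fraction

workers = ["Jim", "John", "A", "B", "C", "D"]  # placeholder names for the other four

# Combinatorics view: one favorable pair out of C(6, 2) = 15 combinations.
pairs = list(combinations(workers, 2))
p_comb = Fraction(sum(set(p) == {"Jim", "John"} for p in pairs), len(pairs))

# Sequential-probability view: ordered draws of two distinct workers,
# 6 * 5 = 30 equally likely outcomes, of which Jim-John and John-Jim count.
orders = list(permutations(workers, 2))
p_seq = Fraction(sum(set(o) == {"Jim", "John"} for o in orders), len(orders))

print(p_comb, p_seq)  # both reduce to 1/15
```

The ordered count has two favorable outcomes out of 30, the unordered count one out of 15; both reduce to the same 1/15.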
“But wait,” said my friend, “It’s a combination, so that means order shouldn’t matter. Jim and John is the same combination as John and Jim. So how come in my solution, we added two different probabilities for Jim-John and John-Jim? Order shouldn’t matter, but in my solution, it did.”
What we realized is that order mattered in my friend’s solution because he was considering two different events, not two different combinations. Jim and John is the same combination as John and Jim, so if we were restricting ourselves to finding information solely about combinations, then order would not matter.
However, we were not only finding information about combinations; we were also interested in probability. The situation of Jim and John being chosen first and second, respectively, is a distinct event from that of John and Jim being chosen first and second, respectively. So even though both events involve the same combination of people, the events themselves are different.
What makes problems like this a little bit tricky is that they can involve both probability and combinatorics, and it might be easy to confuse the two. But always remember, combinatorics on their own deal solely with finding the number of combinations or permutations in a given set of data, while probability deals with finding the likelihood that an event or events will occur.
Return to Lesson List
Chapter 7: Air, Water, and Energy
NOAA Photo Library: National Undersea Research Program (NURP) Album:
Dive into the deep with this photo album full of images of ocean life, sea floor formations, and scientific submersibles.
Can you really get electricity from water? Find out how here and learn where hydroelectricity is most common in our country.
BrainPop: What is Weather?:
Check out the movie on this site to learn about the atmosphere.
Antimatter: The Mirror of the Universe: Kid’s Corner:
Did you ever wonder about “anti-matter”? How does it relate to matter? Check out this site to learn more!
Ancient Egypt: Geography:
Few rivers have been so influential to human history as the Nile. Take a virtual journey along its banks and experience the riches of ancient Egyptian culture.
Pulsars are very small, dense stars known as neutron stars, as small as 20 km in diameter. We can detect the regular, periodic bursts of electromagnetic radiation that these stars emit as they spin. Some of them spin very fast - up to 1000 revolutions per second!
The very first pulsar was accidentally discovered in 1967 by Jocelyn Bell and Antony Hewish. Bell and Hewish were studying known radio sources with a large radio telescope at Cambridge University, when they detected periodic bursts of radio noise, apparently originating from one of these sources. At first the regularity of the pulses led scientists to speculate that they might be signals from extraterrestrial life, but as more similar sources were discovered, an explanation of their behavior became more clear.
[Image caption: Galaxies sprinkle the sky in this image obtained by the Hubble Space Telescope. The area of sky in this image could be covered by a dime held 75 feet away. (c) 1996 STScI]
The discovery of this pulsar and three others from Cambridge was soon followed by more discoveries at observatories around the world. All of the new objects behaved the same way, emitting short pulses of noise at a specific period that remained constant for each pulsar. The first pulsar, later named PSR 1919+21 because of its location on the sky, emitted a pulse every 1.33 seconds and the others had signature periods in the neighborhood of one to a few seconds. More recently, pulsars have been discovered which emit as many as 1000 pulses per second.
Since 1967 over one thousand pulsars have been discovered and catalogued, and it is now estimated that our own galaxy, the Milky Way, contains perhaps as many as one million pulsars. So why do we continue to look for new pulsars? What could possibly be so interesting about them that one thousand of them is not enough? Why do we even use radio telescopes to observe some the known pulsars as often as twice a month?
Variables (VB Classic and C# Cross Reference Guide)
By Mike Prestwood
VB Classic versus C#: A side by side comparison between VB Classic and C#.
Language basics is kind of a catch all for absolute beginner stuff. The items (common names) I chose for language basics is a bit random and include items like case sensitivity, commenting, declaring variables, etc.
A variable holds a value that you can use and change throughout your code so long as the variable is within scope. With variable declaration, you not only want to know the syntax of how you declare a variable but you also want to know where. Are you allowed to declare a variable inline? What are the available scopes: local vs. global. Can you assign a value at the same time you declare a variable?
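For illustration (this example is not from the original article), here is the same local variable declared, assigned, and reassigned in each language:

```
' VB Classic: Dim declares the variable; its type follows the As keyword.
Dim total As Integer
total = 10            ' assignment is a separate statement
total = total + 5

// C#: the type precedes the name, and declaration and initialization
// can be combined in a single statement anywhere in a method body.
int total = 10;
total = total + 5;
```

Note the answer to the last question differs between the two: VB Classic cannot initialize a variable in its Dim statement, while C# allows inline initialization at the point of declaration.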
In general, a metamorphic rock is coarser and has a higher density and lower porosity than the rock from which it was formed. Under low grade metamorphic conditions, the original rocks may only compact, as in the formation of slate from shale. High grade metamorphism changes the rock so completely that the source rock often cannot be readily identified.
Foliation
Alteration of rock texture by metamorphism commonly results in a rearrangement of mineral particles into a parallel alignment, called foliation, as a result of directed stress. Foliation, called banding or layering, is probably the single most characteristic property of metamorphic rocks. For example, slate is a metamorphic rock in which there has been little recrystallization of fine-grained sedimentary shale, but mineral realignment gives the rock a tendency to break along smooth planes termed slaty cleavage. Further higher-grade metamorphic conditions lead to a foliation called schistosity, resulting in schists, formed when tabular minerals, such as hornblende, graphite, mica, or talc are aligned and tightly packed in a parallel fashion. High grade metamorphism can segregate minerals, thereby forming bands. This foliation is called gneissic layering and forms gneiss from such rock as granite. Foliation does not always occur during metamorphism.
Changes in Chemical Constituents
Chemical changes occurring during metamorphism also can rearrange the chemical constituents into assemblages stable in their new environment, thus often forming new minerals of essentially the same chemical composition as those occurring in the rock prior to metamorphism. For example, hornblende can be changed into garnet or pyroxene. The mineral composition of rocks may also be altered by the addition of new elements or by the removal of elements formerly present through the action of circulating liquids or gases or by recrystallization under pressure.
Contact metamorphism occurs when local rocks are metamorphosed by the heat from an igneous intrusion, such as limestone turning to marble along the contact zone. Some of the changes that occur in the older rock are due simply to the heat radiated from the igneous mass and to the pressures it creates. More extensive alterations are produced by the fluids and gases given off by the igneous mass; metamorphism of this type rarely causes foliation. Rocks around hot springs, or mineral-rich water, both of which are common along active plate boundary ridges (see plate tectonics), are often changed by hydrothermal metamorphism (or metasomatism), which may, for example, transform granite into china clay; black smokers, which occur along mid-ocean ridges, are the exit vents for extensive hydrothermal systems that alter basalts and can deposit mounds of metalliferous sediments on the seafloor. Metamorphic rocks that develop by shearing and crushing of the rock at low temperature are called cataclastic and are usually associated with the mechanical forces, especially pressure, involved in faulting (see fault).
Regional Metamorphism
Metamorphism on a grander scale, called regional metamorphism, accompanies mountain-building activity. These metamorphic rocks pervade regions that have been subjected to intense pressures and temperatures during the development of mountain chains along boundaries between crustal plates. Large scale, intense regional metamorphism is particularly great in the "roots" of these mountains, which were at considerable depths when the pressures forming the mountains were active. These kinds of metamorphic rocks are most commonly exposed in old mountain chains, like the Blue Ridge Mts., that have substantially eroded away over time, leaving only disturbed structure and regional metamorphic rocks.
Mineralogic and structural changes in solid rocks caused by physical conditions different from those under which the rocks originally formed. Changes produced by surface conditions such as compaction are usually excluded. The most important agents of metamorphism are temperature (from 300°–2,200°F, or 150°–1200°C), pressure (from 10 to several hundred kilobars, or 150,000 to several million lbs. per sq in.), and stress. Dynamic metamorphism results from mechanical deformation with little long-term temperature change. Contact metamorphism results from increases in temperature with minor differential stress, is highly localized, and may occur relatively quickly. Regional metamorphism results from the general increase, usually correlated, of temperature and pressure over a large area and a long period of time, as in mountain-building processes. See also metamorphic rock.
Metamorphism produced with increasing pressure and temperature conditions is known as prograde metamorphism. Conversely, decreasing temperatures and pressure characterize retrograde metamorphism.
The upper boundary of metamorphic conditions is related to the onset of melting processes in the rock. The maximum temperature for metamorphism is typically between 700 - 900°C, depending on the pressure and on the composition of the rock. Migmatites are rocks formed at this upper limit, which contain pods and veins of material that has started to melt but has not fully segregated from the refractory residue. Since the 1980s, it has been recognized that rarely, rocks are dry enough, and of a refractory enough composition, to record without melting "ultrahigh" metamorphic temperatures of 900 - 1100°C.
Metamorphic facies are recognizable terranes or zones with an equilibrium assemblage of key minerals that were in equilibrium under specific range of temperature and pressure during a metamorphic event. The facies are named after the metamorphic rock formed under those facies conditions from basalt. Facies relationships were first described by Eskola (1920).
Low grade → Intermediate → High grade
Contact metamorphism is greater adjacent to the intrusion and dissipates with distance from the contact. The size of the aureole depends on the heat of the intrusive, its size, and the temperature difference with the wall rocks. Dikes generally have small aureoles with minimal metamorphism whereas large ultramafic intrusions can have significantly thick and well-developed contact metamorphism.
The metamorphic grade of an aureole is measured by the peak metamorphic mineral which forms in the aureole. This is usually related to the metamorphic temperatures of pelitic or aluminosilicate rocks and the minerals they form. The metamorphic grades of aureoles are andalusite hornfels, sillimanite hornfels, and pyroxene hornfels.
Magmatic fluids coming from the intrusive rock may also take part in the metamorphic reactions. Extensive addition of magmatic fluids can significantly modify the chemistry of the affected rocks. In this case the metamorphism grades into metasomatism. If the intruded rock is rich in carbonate the result is a skarn. Fluorine-rich magmatic waters which leave a cooling granite may often form greisens within and adjacent to the contact of the granite. Metasomatic altered aureoles can localize the deposition of metallic ore minerals and thus are of economic interest.
The textures of dynamic metamorphic zones are dependent on the depth at which they were formed, as the confining pressure determines the deformation mechanisms which predominate. Within depths less than 5km, dynamic metamorphism is not often produced because the confining pressure is too low to produce frictional heat. Instead, a zone of breccia or cataclasite is formed, with the rock milled and broken into random fragments. This generally forms a mélange. At depth, the angular breccias transit into a ductile shear texture and into mylonite zones.
Within the depth range of 5-10km pseudotachylite is formed, as the confining pressure is enough to prevent brecciation and milling and thus energy is focused into discrete fault planes. The frictional heating in this case may melt the rock to form pseudotachylite glass or mylonite, and adjacent to these zones, result in growth of new mineral assemblages.
Within the depth range of 10-20km, deformation is governed by ductile deformation conditions and hence frictional heating is dispersed throughout shear zones, resulting in a weaker thermal imprint and distributed deformation. Here, deformation forms mylonite, with dynamothermal metamorphism observed rarely as the growth of porphyroblasts in mylonite zones.
Overthrusting may juxtapose hot lower crustal rocks against cooler mid and upper crust blocks, resulting in conductive heat transfer and localised contact metamorphism of the cooler blocks adjacent to the hotter blocks, and often retrograde metamorphism in the hotter blocks. The metamorphic assemblages in this case are diagnostic of the depth and temperature and the throw of the fault and can also be dated to give an age of the thrusting.
Metamorphism is further divided into prograde and retrograde metamorphism. Prograde metamorphism involves the change of mineral assemblages (paragenesis) with increasing temperature and (usually) pressure conditions. These are solid state dehydration reactions, and involve the loss of volatiles such as water or carbon dioxide. Prograde metamorphism results in a rock representing the maximum pressure and temperature experienced. These rocks often return to the surface without undergoing retrograde metamorphism, where the mineral assemblages would become more stable under lower pressures and temperatures.
Retrograde metamorphism involves the reconstitution of a rock under decreasing temperatures (and usually pressures) where revolatisation occurs; allowing the mineral assemblages formed in prograde metamorphism to return to more stable minerals at the lower pressures. This is a relatively uncommon process, because volatiles must be present for retrograde metamorphism to occur. Most metamorphic rocks return to the surface as a representation of the maximum pressures and temperatures they have undergone.
Winter J.D., 2001. An introduction to Igneous and Metamorphic Petrology. Prentice-Hall Inc. , 695 pages. ISBN 0-13-240342-0.
From Tracy Churchill, 2000. Impacts of fire and grazing on invertebrates. In Managing for healthy country in the VRD, eds Tropical Savannas CRC. Tracy Churchill is from CSIRO Wildlife and Ecology.
A project undertaken at Mt Sandford and Kidman Springs has shown that invertebrates such as spiders, ants, beetles and grasshoppers can respond quite differently to the effects of grazing and fire. Variations across seasons within these particular invertebrate groups were also explained by clear responses at the species level.
Once data analysis is complete, a summary of responses within each invertebrate group, and an overall comparison of similar or contrasting responses across the groups, will be available. This information will assist in the development of fire and grazing strategies that minimise impacts on invertebrate biodiversity.
Sites were selected at each location at an increasing distance from water in order to represent a gradient of grazing pressure. These sites were associated with 100 metre linear transects used by another CRC project investigating changes in landscape function (see completed CRC research project Modelling, monitoring and managing landscape change).
Sampling was undertaken at six sites at Mt Sandford in April 1998 and at five sites at Kidman Springs in April and October 1998. At each of these sites, ground active fauna was sampled using pitfall traps and foliage dwelling fauna was sampled using a sweep net. The pitfall trap grid was placed parallel to the landscape function transect at each site, and the sweep netting conducted adjacent to these. In April 1999 another six sites were selected at Kidman Springs. At this time, all 11 sites were only sampled. Invertebrates were preserved and identified so that the abundance of species in each group was available as a total from each given site.
[Image: golden orb spider]
Grazing impacts on favoured habitats
The work has shown that some invertebrate groups respond quite differently to changes in grazing pressure, while others respond quite similarly. This is probably because different aspects of the environment that change as a result of grazing, such as vegetation cover and soil compaction, have different levels of importance to each group, or species within them.
At Mt Sandford, ant and spider numbers determined from ground-based sampling were high in areas of high grazing intensity near water. By contrast, grasshopper and spider numbers in foliage were at their lowest close to water, with the recovery of numbers increasing with distance from water. A particular species of grasshopper (Austracris guttalosa) was found to increase in abundance with decreasing grazing pressure.
Contrasting responses to grazing pressures were shown between
two dominant spider families at Mt Sandford. This demonstrates how
even at the family level, spiders can show changes according to
grazing pressure. Where grazing impacts are high, oxyopids (or lynx
spiders), which move freely across vegetation to feed, are more
prevalent. Where grazing pressure is lower however, araneids (or
orb weavers), which require undisturbed structural supports to hold
their webs in place to feed, occur in greater numbers. Close to
water, the abundance of lynx spiders seemed to coincide with the
amount of grass cover. Decreased grass cover led to a parallel
decrease in the number of lynx spiders. This suggests that impacts on
cover due to grazing will affect this type of spider.
The project has also shown that the effects of grazing may vary
across seasons for a given invertebrate group because different
species within the group have different seasonal responses. For
example, seasonal differences were apparent in the overall
abundance of spiders and beetles sampled from the foliage at Kidman
Springs. Although these foliage dwellers responded similarly to
grazing (reduced in numbers in areas of high grazing) on all three
surveys (April and October 1998, April 1999) the response was
affected by season due to the presence of different dominant
species. Beetles were notably more abundant in April 1998 and
1999 than in October 1998. These changes were greatest within the
first few kilometres of impact.
Fire and ants
The impact of various fire regimes on the total number of ants
(abundance) and the number of ant species (richness) was examined
at the Kidman Springs fire plots. This work showed that on black
soil sites the abundance and richness of ants were enhanced after
fire compared to ants on unburnt sites. This effect was not seen on
red loam soils. Comparing all burnt sites, the abundance and
richness of ants on both soil types were generally unaffected by the
fire regime - i.e. by the season and frequency of fire. | <urn:uuid:3b311f1a-8a80-4edf-ae8c-4e2fdc435902> | 3.546875 | 1,018 | Knowledge Article | Science & Tech. | 35.236454 |
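The burnt-versus-unburnt comparison above comes down to two summary statistics per treatment: total ant abundance and species richness (the number of distinct species). A minimal sketch with invented sample data (not the Kidman Springs dataset):

```python
# Hypothetical per-site species counts, grouped by treatment
samples = {
    "burnt":   [{"sp_a": 10, "sp_b": 4, "sp_c": 2}, {"sp_a": 8, "sp_d": 5}],
    "unburnt": [{"sp_a": 6, "sp_b": 1}, {"sp_a": 4}],
}

def summarise(sites):
    """Return (total abundance, species richness) across a set of sites."""
    abundance = sum(sum(counts.values()) for counts in sites)
    richness = len(set().union(*(counts.keys() for counts in sites)))
    return abundance, richness

for treatment, sites in samples.items():
    print(treatment, summarise(sites))  # burnt → (29, 4), unburnt → (11, 2)
```

Note that richness pools species across sites within a treatment, so a species counted at two sites contributes once to richness but twice to abundance.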
There's a good chance that pile of snow in your yard contains bacteria -- but not because it's dirty. The bacteria may have played an important role in helping those snow crystals form. New work published this week in the journal Science suggests that bacteria may play a surprisingly important role in guiding the formation of the snow- and rain-forming ice crystals found in high-level clouds.
The researchers looked at snow samples from around the globe, including Montana, France, and Antarctica, and found that cells and cell fragments were a significant part of the ice-nucleating aerosol particles that lead to the formation of ice and raindrops. In this segment, Joe Palca talks with a member of the research team about connecting microbiology to meteorology.
Produced by Molly Webster | <urn:uuid:7035a43d-0f04-409d-b9b2-39fd9b5e2218> | 3.234375 | 155 | Truncated | Science & Tech. | 40.961727 |
Coronal Hole on the Sun
This image of a coronal hole on the sun bears a remarkable resemblance to the 'Sesame Street' character Big Bird. Coronal holes are regions where the sun's corona is dark. These features were discovered when X-ray telescopes were first flown above the Earth's atmosphere to reveal the structure of the corona across the solar disc. Coronal holes are associated with 'open' magnetic field lines and are often found at the sun’s poles. The high-speed solar wind is known to originate in coronal holes. The solar wind escaping from this hole will reach Earth around June 5-7, 2012.
Text and Image Credit: NASA/AIA