What are some examples of man-made materials that bacteria do or do not break down? Where could I find more information on this topic?
Bacteria can degrade so many compounds that your question is nearly impossible to answer, but I'll give some interesting web sites on the applications of bacterial degradation. Take a look at the Virtual Museum of Bacteria (www.bacteriamuseum.org) for the general properties and diversity of bacteria. A display on applied microbiology is planned for this museum, which will treat some of the topics you may be interested in; check back in a month or so. Following are just some examples that I hope are useful. Note that there is no omnipotent bacterium that can do all of this: the degraders are all different species, in part not even well characterized, or combinations of species working in concert.
Bacteria can degrade oil (http://www.uscar.org/techno/bacteria.htm) and other toxic organic compounds.
They can degrade (that is, corrode) metals, which is mostly unwanted.
They can degrade biological material in, and thus clean, waste water; active research continues in this field.
Bacteria can detoxify chemicals in which heavy metals are present, though they cannot get rid of the heavy metals themselves. Similarly, Deinococcus radiodurans is used to detoxify radioactive waste, which is often mixed with highly toxic chemicals; not because D. radiodurans can 'destroy' radioactivity, but because it is highly resistant to it. See the museum display.
So in conclusion, probably every organic compound can
be degraded by bacteria.
Maybe we haven't identified the proper bacteria for some compounds (some PCBs and DDT are very stable in the environment), but that is not to say that there are no microbes around that can do it. Maybe we haven't looked properly.
What bacteria can't do is change the atoms: radioactive isotopes and toxic elements cannot be eliminated. At best, the latter can be incorporated into metallo-organic compounds that are less toxic.
Dr. Trudy Wassenaar
Update: June 2012
Northern Prairie Wildlife Research Center
Of all the winds that sweep this planet, tornadoes are the most violent. Tornadoes are local storms of short duration consisting of winds rotating at very high speeds, usually counter-clockwise. These small, severe storms form several thousand feet above the earth's surface, usually during warm, humid, unsettled weather, and usually in conjunction with a severe thunderstorm. Sometimes a series of two or more tornadoes is associated with a parent thunderstorm, such as the series of tornadoes which struck the Fargo vicinity in 1957.
In the period from 1953 to 1970, 232 tornadoes were reported in North Dakota for an average of 12.9 tornadoes per year (Table 13). Yearly occurrence of tornadoes has varied widely, ranging from only two in 1961 to 41 in 1965. July is the peak month for tornado activity, and in June and July nearly two-thirds of all tornadoes occur. No part of the state is safe from a tornadic event, although statistics indicate that tornado frequency is higher in the southeast than in other areas of the state. Since 1916, 19 persons have lost their lives in tornadoes and 182 have been injured. The most notable tornado in the state's history slammed into Fargo on June 20, 1957 causing more than $5 million damage and 10 fatalities.
Table 13. North Dakota tornado statistics.
No reliable statistics are available concerning the frequency of windstorms in North Dakota, mainly because the reporting of windstorms is based almost entirely on damage to property. In large areas of the state the potential for wind damage is small because the countryside is sparsely settled. Therefore, many vigorous thunderstorms with high winds go unreported in North Dakota that would be detected through property damage in more populous states. The same argument may apply in part to tornadoes, but it is believed that in this era of easy and rapid communication nearly all tornadoes are reported because of their spectacular nature.
Severe thunderstorms with outflow winds strong enough to cause widespread property and tree damage leave their signature at many places about the state every year. Statistics are not available but it would be reasonable to assume that the seasonal frequency distribution of wind storms would parallel that of tornadoes, although the number of occurrences would be much higher. Probably the most destructive windstorm to strike the state happened on July 12, 1943. The windstorm was said to have destroyed 1,847 buildings with damage to 5,678 others. It seems likely that North Dakota windstorms in nearly all years cause several times more property damage than do tornadoes.
A climatology of hail in the central United States was recently published (4). Figure 53, showing the hail climatology for North Dakota, was redrawn from that publication.
Over a 20-year period, the number of days with hail ranged from less than 30 days in the central portion of the state to more than 70 days around Selfridge in the south central. July is the major hail month in North Dakota as it is for most states in the upper Great Plains, followed by June and then August. The lower number of hail days in August, when the state is considered as a whole, was a consequence of about a 50 percent decrease in hail days in many parts of the west and north.
The diurnal frequency of tornadoes, windstorms with winds more than 57 miles per hour (mph), and hailstones 3/4-inch in diameter or larger is given in Table 14. The data were compiled by the Severe Local Storms Unit at the National Severe Storms Forecast Center in Kansas City, Missouri (12).
Table 14. Frequency distribution by hour of day of tornadoes, windstorms with winds 58 mph or over, and hailstorms with hailstones 3/4-inch diameter or larger.
(Data for years 1955-1967.)
Tornadoes, windstorms and hailstorms have well-developed daily cycles in North Dakota. Peak activity occurs between 2:00 p.m. and 8:00 p.m. During these hours, about 75 percent of the tornadoes, 62 percent of the windstorms, and 83 percent of the hailstorms take place. It should be emphasized that although three-fourths of the tornadoes are reported between the hours of 2:00 p.m. and 8:00 p.m., a scattering of tornadoes has occurred for nearly all hours of the day during the 12-year period for which the data were processed. Fewer windstorms with winds greater than 57 mph occurred during the six-hour peak activity period than tornadoes or hailstorms, but the windstorm activity continued at a substantially higher rate between the hours of 8:00 p.m. and 1:00 a.m. than the other two types of severe storms. Hailstorms producing large hail appear to be rare between the hours of 5:00 a.m. and 2:00 p.m. However, after 2:00 p.m. hailstorm activity increases sharply and continues at a high rate until dropping off at 8:00 p.m.
More than a century ago, in 1895, two Smithsonian scientists described a new kind of deep sea creature living at least 1000 m (3,280 ft) below the ocean’s surface—a part of the ocean that we still know very little about.
The scientists named their find the whalefish because of its whale-like appearance. Little did they know that this fish would become one of the prime suspects in a mystery that took scientists from around the world decades to solve.
The Mystery Develops
Flash forward to 1956, when scientists described another new kind of fish. It was named the tapetail because of its long, streamer-like tail. It also had a large upturned mouth.
Unlike the whalefish, the tapetail was found living near the ocean’s surface. And there was something very curious about this sea creature: Every single one of the 120 tapetail specimens scientists studied was a larva or juvenile.
Where were all the adults?
The Plot Thickens
In 1966, based on 11 specimens, scientists added another deep sea creature to the list of mystery suspects: the bignose fish, found living deep in the sea like the whalefish. It has an unusual nose-like bulge on its snout with large organs for smelling. Its upper jaw can’t move. And something else proved odd about the bignose fish: Of the 65 specimens now collected, every one is a male. Where were the females?
Then, in 1989, the whalefish also became a suspect. An Australian scientist studied all the whalefish specimens collected so far—a total of over 500 from all over the world. Every adult was a female. Where were the males?
In 2003, a team of Japanese scientists analyzed the DNA of tapetails and whalefish. The results suggested that these two very different looking fishes were almost identical in one specific gene. But more clues were needed. An international team of marine biologists took a closer look at specimens of tapetails, bignose fish, and whalefish in museum collections. The team included Dave Johnson, an ichthyologist at the Smithsonian. Here’s what the team found.
Aha! They’re All in the Family
It may be hard to believe because they look so different, but tapetails, bignose fish, and whalefish are actually all members of the same family (Cetomimidae).
There are other examples of males and females with very different shapes (sexual dimorphism) and of animals changing from one shape to another as they grow older (metamorphosis). But this is one of the most amazing examples of sexual dimorphism combined with metamorphosis ever found among vertebrates.
Museum Collections Hold the Clues
“This is an incredibly exciting finding,” says Smithsonian ichthyologist Dave Johnson. “The answer to the puzzle was right under our noses all along—in the specimens. We just needed to study them more carefully.”
This scientific mystery clearly demonstrates the importance of museum collections. Many years after a specimen was collected, it may provide biologists with the answer to a new question raised by science.
“The study also shows the need for continued exploration and collection in the open ocean—from the surface to the deep sea,” says Johnson. “Who knows what other mysteries remain to be solved there?”
Newton's Second Law of Motion
Finding Individual Forces
As learned earlier in Lesson 3 (as well as in Lesson 2), the net force is the vector sum of all the individual forces. In Lesson 2, we learned how to determine the net force if the magnitudes of all the individual forces are known. In this lesson, we will learn how to determine the magnitudes of all the individual forces if the mass and acceleration of the object are known. The three major equations that will be useful are the equation for net force (Fnet = m•a), the equation for gravitational force (Fgrav = m•g), and the equation for frictional force (Ffrict = μ•Fnorm).
The process of determining the values of the individual forces acting upon an object involves an application of Newton's second law (Fnet = m•a) and an application of the meaning of the net force. If mass (m) and acceleration (a) are known, then the net force (Fnet) can be determined by use of the equation

Fnet = m • a

If the numerical value and the direction of the net force are known, then the values of all the individual forces can be determined. Thus, the task involves using the above equations, the given information, and your understanding of net force to determine the value of individual forces. To gain a feel for how this method is applied, try the following practice problems. The problems progress from easy to more difficult. Once you have solved a problem, click the button to check your answers.
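To see the bookkeeping in one place, here is a minimal sketch in Python (ours, not part of the original page) for an object on a horizontal surface with no air resistance; the function name and argument conventions are our own, and rightward is taken as the positive direction:

g = 9.8  # m/s^2, the value used throughout these problems

def analyze(m, a, mu=None, f_frict=None, f_app=None):
    """Given mass m (kg) and acceleration a (m/s^2), fill in the
    individual forces (N). Supply exactly one of mu, f_frict, or f_app."""
    f_grav = m * g             # Fgrav = m * g
    f_norm = f_grav            # horizontal surface, no vertical acceleration
    f_net = m * a              # Fnet = m * a (Newton's second law)
    if mu is not None:
        f_frict = mu * f_norm  # Ffrict = mu * Fnorm
    if f_app is None:
        f_app = f_net + f_frict    # Fnet = Fapp - Ffrict (rightward positive)
    elif f_frict is None:
        f_frict = f_app - f_net    # friction is whatever is left over
    return dict(Fgrav=f_grav, Fnorm=f_norm, Fnet=f_net,
                Ffrict=f_frict, Fapp=f_app)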
Free-body diagrams for four situations are shown below. The net force is known for each situation. However, the magnitudes of a few of the individual forces are not known. Analyze each situation individually and determine the magnitude of the unknown forces.
A rightward force is applied to a 6-kg object to move it across a rough surface at constant velocity. The object encounters 15 N of frictional force. Use the diagram to determine the gravitational force, normal force, net force, and applied force. (Neglect air resistance.)
A rightward force is applied to a 10-kg object to move it across a rough surface at constant velocity. The coefficient of friction between the object and the surface is 0.2. Use the diagram to determine the gravitational force, normal force, applied force, frictional force, and net force. (Neglect air resistance.)
A rightward force is applied to a 5-kg object to move it across a rough surface with a rightward acceleration of 2 m/s/s. The coefficient of friction between the object and the surface is 0.1. Use the diagram to determine the gravitational force, normal force, applied force, frictional force, and net force. (Neglect air resistance.)
A rightward force of 25 N is applied to a 4-kg object to move it across a rough surface with a rightward acceleration of 2.5 m/s/s. Use the diagram to determine the gravitational force, normal force, frictional force, net force, and the coefficient of friction between the object and the surface. (Neglect air resistance.)
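As a sanity check (again, our own sketch rather than the page's official answers), the last situation above can be run through the helper defined earlier:

forces = analyze(m=4, a=2.5, f_app=25)
print(forces)  # Fgrav = 39.2 N, Fnorm = 39.2 N, Fnet = 10 N, Ffrict = 15 N
print(forces["Ffrict"] / forces["Fnorm"])  # coefficient of friction, ~0.38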
A couple more practice problems are provided below. You should make an effort to solve as many problems as you can without the assistance of notes, solutions, teachers, and other students. Commit yourself to individually solving the problems. In the meantime, an important caution is worth mentioning:
Avoid forcing a problem into the form of a previously solved problem. Problems in physics will seldom look the same. Instead of solving problems by rote or by mimicry of a previously solved problem, utilize your conceptual understanding of Newton's laws to work towards solutions. Use your understanding of weight and mass to find the m or the Fgrav in a problem. Use your conceptual understanding of net force (the vector sum of all the forces) to find the value of Fnet or the value of an individual force. Do not divorce the solving of physics problems from your understanding of physics concepts. If you are unable to solve physics problems like those above, it does not necessarily mean that you are having math difficulties; it is more likely that you are having difficulty with the physics concepts.
1. Lee Mealone is sledding with his friends when he becomes disgruntled by one of his friend's comments. He exerts a rightward force of 9.13 N on his 4.68-kg sled to accelerate it across the snow. If the acceleration of the sled is 0.815 m/s/s, then what is the coefficient of friction between the sled and the snow?
2. In a Physics lab, Ernesto and Amanda apply a 34.5 N rightward force to a 4.52-kg cart to accelerate it across a horizontal surface at a rate of 1.28 m/s/s. Determine the friction force acting upon the cart.
The appearance property enables web authors to change the appearance of HTML elements to resemble native User Interface (UI) controls.
Note that the appearance property has now been dropped from the CSS3 specification. However, I've decided to keep this page here for reference. You can also check out the -webkit-appearance property for more information on formatting HTML elements as UI controls.

The examples on this page include browser-specific properties that start with extensions such as -moz-. This is for browser compatibility reasons. See the bottom of this article for more on this.
appearance: button; /* CSS3 */
-webkit-appearance: button; /* Safari and Chrome */
-moz-appearance: button; /* Firefox */
-ms-appearance: button; /* Internet Explorer */
-o-appearance: button; /* Opera */
Note that this example includes the CSS3 appearance property as well as other CSS extensions. This is for browser compatibility.
Note that at the time of writing, the appearance property had been dropped from the CSS3 draft specification; however, these are the values that had been proposed. You might also like to see the possible values for the -webkit-appearance property.
- icon: a small picture representing an object, often with a name or label.
- window: a viewport, a framed surface used to present objects and content for user viewing and interaction. There are several specific types of windows:
- desktop: a window used to represent a system as a whole that often contains other windows.
- workspace: a window used to represent a project or application that may contain other windows, typically with a titlebar that shows the name of the project or application.
- document: a window used to represent a user document, typically with a titlebar that shows its name. May also be used to represent folders or directories in a file system.
- tooltip: a window that is used to temporarily display information or help about an object. Also called "info" in the CSS2 system colors.
- dialog: a window used to present a notification or alternatives for the user to choose as part of an action taken by the user. Also called "message-box" in the CSS2 system fonts.
- button: a small object usually labeled with text that represents a user choice.
- push-button: a button that has a border surrounding it, often beveled to appear three dimensional, as if it is raised. Also called "caption" in CSS2 system fonts.
- hyperlink: a button that represents a hypertext link, often as simple as normal text that is underlined and perhaps colored differently.
- radio-button: a button that displays whether or not it is checked with a small circle next to the button label. There may be a disc inside the circle when the button is checked. An indeterminate (neither checked nor unchecked) state may be indicated with some other graphic in the circle.
- checkbox: a button that displays whether or not it is checked with a small box next to the button label. There may be an 'x' or check mark inside the box when the button is checked. An indeterminate (neither checked nor unchecked) state may be indicated with a dash '-' or a square or some other graphic in the box.
- menu-item: a choice within a menu, which may also act as a label for a nested (hierarchical) menu.
- tab: a button representing the label for a pane in a tabbed interface.
- menu: a set of options for the user to choose from, perhaps more than one at a time. There are several specific types of menus:
- menu-bar: a menu of menus, typically arranged linearly, in a horizontal bar.
- pull-down-menu: a menu where the name of the menu is displayed and the options remain hidden until the user activates the menu. When the user releases or deactivates the menu, the options are hidden again.
- pop-up-menu: a menu where all but the currently selected option remains hidden until the user activates the menu. When the user releases or deactivates the menu, all but the selected option are hidden again.
- list-menu: a list of options for the user to choose from, perhaps more than one at a time.
- radio-group: a menu where the options are displayed as radio-buttons.
- checkbox-group: a menu where the options are displayed as checkboxes.
- outline-tree: a menu where the options can be shown or hidden with small widgets, often represented by a small triangle or plus and minus signs.
- range: a control that displays the current option, perhaps graphically, and allows the user to select other options, perhaps by dragging a slider or turning a knob.
- field: an area provided for a user to enter or edit a value, typically using a keyboard. There are several special fields:
- combo-box: a field which is accompanied by a menu of preset values that can be used to quickly enter common or typical values.
- signature: a field for entering a signature.
- password: a field for entering a password. Typically the text is rendered as a set of bullets or boxes to obscure the value.
At the time of writing, CSS3 was still under development and browser support for many CSS3 properties was limited or non-existent. For maximum browser compatibility, many web developers add browser-specific properties by using extensions such as -webkit- for Safari and Google Chrome, -ms- for Internet Explorer, -moz- for Firefox, -o- for Opera, etc. As with any CSS property, if a browser doesn't support a proprietary extension, it will simply ignore it.

This practice is not recommended by the W3C; however, in many cases, the only way you can test a property is to include the CSS extension that is compatible with your browser.

Be aware that if you choose to use the proprietary CSS extensions in a live environment, your code will not pass W3C CSS validation, as the browser-specific properties are not valid W3C properties.
Many of the CSS3 examples on this website include these browser specific properties. If they weren't included, most of the examples wouldn't work for most users (at least, not until possibly years after the article was written).
The major browser manufacturers are working to support the W3C properties, and eventually, you will be able to omit these browser-specific properties.
Young Earther Dino Blood Claim
The AiG article starts off like this:
"Actual red blood cells in fossil bones from a Tyrannosaurus rex? With traces of the blood protein hemoglobin (which makes blood red and carries oxygen)? It sounds preposterous—to those who believe that these dinosaur remains are at least 65 million years old."
That's funny, because if you read the Q&A session NOVA held with Dr. Schweitzer, you would come away with the impression that red blood cells were NOT found:
"Q: It looks as if the T. rex may have nucleated red cells. Is this so?
Judith Chester, Santa Fe, New Mexico
A: Well, there are small, red structures within the vessels that look like nucleated red cells. So on the surface, this is a case of "if it looks like a duck…." But after 70 million years, just because something looks familiar doesn't mean that that is what it is. The fossil record can mimic many things, so without doing the chemistry to show that there are similarities to blood cells at the molecular level, I do not make any claims that they are cells. "
No Hemoglobin. No red blood cells. Remnants of hemoglobin and what appeared to be red blood cells were found. AiG has responded to a reader's doubts on this matter:
"This seems rather disingenuous, since they saw what appeared to be red blood cells under the microscope. Obviously, this was stunning, and it was Dr Horner who, as we cited, suggested to Mary Schweitzer that she try to disprove that they were red blood cells that were being seen by these people under the microscope. The immunological reaction was the factor that, coupled with the histological appearance, made it more than reasonable to claim that these were actual red blood cells (i.e. their remains). As you will see from the rest of this, they have most definitely not succeeded in disproving that these are red cells."
Note that they were unable to disprove that these were red blood cells; they did not prove that they were red blood cells. We do not have to prove a negative; it is on them to prove the positive. So basically, AiG is arguing that this point is "rather disingenuous"; but if it is, why did they have to inflate the finding in the first place?
He goes on:
"It should surely qualify as ‘wishful thinking’ to try to believe that red blood cells and at least part of some hemoglobin molecules could last 65 million years."
Again, not according to her. A North Carolina University News Release had this to say:
"She [Dr. Schweitzer] believes that heavy metals, specifically iron, may have played a role in preserving these structures. Hemoglobin, the protein inside a red blood cell, contains iron, and when this protein breaks down, the iron is released and becomes unstable. When the iron attempts to restabilize, it creates free radicals, which cause “cross linking,” or the binding together, of tissues. In living creatures, this cross linking explains why your skin loses elasticity as you age.
Once cross linking occurs in a cell or vessel, the structure usually becomes insoluble, meaning that it won’t dissolve, and may not degrade further. Schweitzer believes that heavy metal cross linking could be one mechanism by which soft tissues may be preserved within the fossils she’s studied."
Finally, Dr. Schweitzer answered the question about whether this was evidence for a young earth or not:
"Q: Many creationists claim that the Earth is much younger than the evolutionists claim. Is there any possibility that your discoveries should make experts on both sides of the argument reevaluate the methods of established dating used in the field?
Carl Baker, Billings, Montana
A: Actually, my work doesn't say anything at all about the age of the Earth. As a scientist I can only speak to the data that exist. Having reviewed a great deal of data from many different disciplines, I see no reason at all to doubt the general scientific consensus that the Earth is about five or six billion years old. We deal with testable hypotheses in science, and many of the arguments made for a young Earth are not testable, nor is there any valid data to support a young Earth that stands up to peer review or scientific scrutiny. However, the fields of geology, nuclear physics, astronomy, paleontology, genetics, and evolutionary biology all speak to an ancient Earth. Our discoveries may make people reevaluate the longevity of molecules and the presumed pathways of molecular degradation, but they do not really deal at all with the age of the Earth."
The original article is here:
Feature Article - February 1997
by Do-While Jones
You may hear it said that certain rocks are so many million (or billion) years old. Most people assume that scientists really know how old the rocks are. The truth is, they don't. The more you study about the various methods for determining the age of the rocks, the more you will realize how unreliable those methods are. The accuracy of these dates is important because they are used to establish the theory of evolution. If these dates are wrong, then the theory of evolution is wrong.
The radioactive dating controversy of a fossil known as Skull KNM-ER 1470 is well-documented. 1 Skull 1470 was discovered by Richard Leakey's team in 1972. This skull, very modern in appearance, was found in a layer of rock that was believed to be too old to contain a modern skull. Since evolutionists considered this to be important evidence that would tell them when apes evolved into men, they wanted to know exactly how old the skull was. Fortunately, the skull was found beneath a layer of volcanic ash which they believed could be accurately dated. Since Skull 1470 was found in rocks under this layer of ash, the skull must be slightly older than this layer of ash.
Samples of the layer of ash were sent off to the laboratory. Richard Leakey hoped the lab would confirm his estimate of 2.9 million years. (That would make him the discoverer of the oldest human fossil.) But the laboratory results gave dates ranging from 212 to 230 million years old. This was far too old to fit the theory of evolution, so the lab results were rejected.
Over the next ten years the rocks surrounding Skull 1470 were dated dozens of times, using various methods, giving widely varying results. For example, two specimens from the same layer were analyzed by the same people (Fitch and Miller) using the same technique during the same analysis. One specimen was dated at 0.52 to 2.64 million years old. The other was dated at 8.43 to 17.5 million years old.
It is tempting to include a chart of all the different ages given for the rocks surrounding Skull 1470, but the numbers really don't mean much unless you know who did the measurements and what age they were trying to get. The Lubenow reference 1 gives all the numbers and puts them in perspective.
Of all the radioactive dating techniques, only the carbon 14 (abbreviated 14C) method gives generally accurate results for recent dates. We know this because 14C dates compare well with historical data. But 14C dating isn't of much interest to evolutionists because it only works for things that were once alive, and therefore doesn't work for rocks. Even if it did work for rocks, the evolutionists wouldn't care because the half-life of 14C is so short that it is all gone in several thousand years. It would not work on anything a million years old.
The radioisotope methods used for rocks, potassium-argon (K-Ar), rubidium-strontium (Rb-Sr), and lead-lead (Pb-Pb) don't give reliable results. That's because they actually measure the present ratio of elements in the rocks, which is more greatly influenced by the initial ratio of the elements in the rocks than it is by the age of the rocks.
Potassium decays to argon at a known rate. Therefore, if you know the initial amount of potassium and argon when the rock was formed, then you can measure the amount of potassium and argon that is still in the rock to see how much potassium has decayed to argon. Knowing this you can compute the age of the rock. (The same reasoning holds for the Rb-Sr and Pb-Pb methods.) A typical geology textbook will tell you,
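To make the arithmetic concrete, here is a rough sketch (ours, not from the article) of the standard decay-age equation t = ln(1 + D/P) / λ, where D/P is the measured daughter-to-parent ratio. The ratio used below is made up for illustration, and a real K-Ar calculation would also include a branching correction, since only about 11 percent of 40K decays to 40Ar:

import math

HALF_LIFE_K40 = 1.25e9                # years (approximate)
LAMBDA = math.log(2) / HALF_LIFE_K40  # decay constant, per year

def age(daughter_parent_ratio, initial_daughter_fraction=0.0):
    """Age implied by a measured D/P ratio, assuming some fraction of the
    measured daughter isotope was already present when the rock formed."""
    radiogenic = daughter_parent_ratio * (1 - initial_daughter_fraction)
    return math.log(1 + radiogenic) / LAMBDA

print(age(0.10))        # assume zero initial argon -> about 172 million years
print(age(0.10, 0.99))  # assume 99% was already there -> about 1.8 million years

The same measurement yields wildly different ages depending on the assumed initial amount of daughter product, which is exactly the assumption at issue here.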
The K-Ar method of dating differs from the other common methods by involving a decay product [argon] that is an inert gas. Even at moderately low temperatures (see discussion below), this gas is a fugitive component and is typically not incorporated in minerals. Thus a newly formed mineral contains no argon to begin with, but with time, 40K decays slowly to 40Ar; this argon remains in place as long as the system is not disturbed. The method, in principle, then is not affected by initial isotopic ratios, as is the Rb-Sr method. For an age determination by the K-Ar method to be accurate, the assumption that no radiogenic argon was present to begin with must be valid.2 [emphasis supplied]
So, the assumption is that when lava comes out of a volcano, all the argon gas escapes from the lava before the lava cools enough to harden. Therefore, all the argon trapped in the lava comes from decayed potassium. That is a plausible assumption, but is it correct?
One way to test this assumption is to measure the K-Ar age of several recent lava flows.3 The Sunset Crater lava flows (from an eruption around 1065 A.D.) have been dated at 210,000 to 230,000 years old. 4 Lava from the Mt. Rangitoto eruption which happened 300 years ago has been dated at 485,000 years old. 5 The Kaupelehu Flow (1800 - 1801 A.D.) has been dated several times, yielding 12 dates ranging from 140 million years to 2.96 billion years, with an average date of 1.41 billion years. 6
These references are nearly 30 years old, so you might think that radioactive dating has improved in recent years. It hasn't. Lava from a 1986 Mount St. Helens lava dome has just been dated at 2.8 +/- 0.6 million years old. 7
The previously quoted textbook said, "The [potassium-argon] method, in principle, then is not affected by initial isotopic ratios, as is the Rb-Sr method." In other words, for radioactive dating methods to work, you must know the initial ratio of the isotopes. The popularity of the potassium-argon method is due to the belief that you can assume the initial ratio of argon to potassium is zero. Laboratory tests, as we have just seen, have repeatedly shown that the initial ratio isn't zero. The assumption that young rocks are free of argon is wrong.
But the difficulties are even worse for Rb-Sr, Pb-Pb, and other radioactive methods because you don't even have the slightest justification for assuming any initial ratio. The evolutionist simply guesses an initial value that is likely to yield a date in the desired ball park. If the resulting date supports the evolutionist's theory, the date becomes gospel. If the date doesn't, then it is rejected as "discordant." It is scandalous that results are accepted or rejected simply on the basis of whether a scientist likes the answer or not.
Radioactive elements with short half-lives, like 14C, can only be used to determine young ages. Carbon 14 doesn't last long enough to measure old ages. Isotopes with very long half-lives, such as uranium-238 (which decays to stable lead-206), are used in age calculations that yield values in billions of years. They can't be used for short intervals because not enough of the element decays in a short time to be measured. This means that the range of possible outputs from the calculations will depend upon the half-life of the element you choose. Therefore, the choice of the dating method determines how old the rock will appear to be. One geology teacher said it this way,
After all field relationships have been established (i.e. stratigraphy, cross-cutting relationships, relative dating, etc.), samples from strata in question are thoroughly examined for their geochronological appropriateness. After sample(s) are deemed worthy of further analysis, then only the appropriate dating technique with an appropriate effective dating range is used. 8 [emphasis his]
This is not a valid approach for a scientist to take. It does not give an independent confirmation of the age of the rock. Selecting a dating method based on the presumed age of the rock merely puts a numerical value on a subjective prejudice.
Radioactive methods cannot determine the age of rocks because there is a fundamental flaw in the method. Yes, we know how rapidly radioactive elements decay. Yes, we can measure the amount of the isotopes in the rock now. But without knowing how much of each isotope was there to begin with, it isn't possible to tell how long the decay has been going on because we don't know how much of the daughter product is the result of decay.
1 Lubenow, Marvin L., Bones of Contention, Appendix: The Dating Game
2 Philpotts, Anthony R., Principles of Igneous and Metamorphic Petrology, page 430 (Ev)
3 Morris, John, The Young Earth, pages 54-55 (Cr+)
4 Dalrymple, G. B., "40Ar/36Ar Analyses of Historical Lava Flows," Earth and Planetary Letters, Vol. 6, 1969, pages 47-55 (Ev)
5 McDougall, I., et al., "Excess Radiogenic Argon in Young Subaerial Basalts from Auckland Volcanic Field, New Zealand," Geochimica et Cosmochimica Acta, Vol. 33, 1969, pages 1485-1520 (Ev)
6 Funkhouser, John G., and Naughton, John J., "Radiogenic Helium and Argon in Ultramafic Inclusions from Hawaii," Journal of Geophysical Research, Vol. 73, No. 14, July 1968, pages 4601-4607 (Ev)
7 Austin, S. A., "Excess Argon within Mineral Concentrates from the New Dacite Lava Dome at Mount St Helens Volcano", Creation Ex Nihilo Technical Journal, Vol. 10, No. 3, 1996, page 355 (Cr+)
8 Sabin, A. "Geochronology Overview, Ch. 9", October 30, 1996 (Ev-)
Query and order satellite images, aerial photographs, and cartographic products through the U.S. Geological Survey. Log in as a guest or as a registered user with more privileges. Uses JavaScript or applet versions of the program for PC, Macintosh, and Unix.
Fact sheet on the historic and current conditions of mangroves of Dry Tortugas National Park, a cluster of islands and coral reefs west of Key West, Florida. Mangroves and nesting frigate bird colonies are at risk to destruction by hurricanes.
Photographic survey of the impacts of Hurricane Katrina on the barrier islands, barrier shoreline, and the Mississippi River Delta along the Louisiana coastline. Primary focus is on ecosystem features such as fish, rookeries, and seagrass beds.
Information on video and still photography used to supplement laser altimetry measurements of the coast. The photography is used for recognizing geomorphic and cultural features impacted by storms. Links to photo collections of hurricanes and El Nino.
Description of three types of severe coastal storm impacts: hurricane impacts on the southeast U.S., extra-tropical storm impacts on the U.S. west coast during El-Nino winters, and 'northeaster' impacts on the U.S. east coast.
Quantifies the landscape changes and consequences of natural gas extraction by digitizing indications of disturbance on NAIP aerial photographs and using these with the NLCD to show land use-land cover change.
This is an image of ducks - lifeforms on Earth.
Perfect Vision Graphics
Life on Earth
The Earth is unique among known planets because it is the only one known to have life. Our planet is the right distance from the Sun to make it possible for animals and plants to live. The first signs of life, blue-green algae and bacteria, formed in the seas about 3.5 billion years ago. More complex plants and animals did not develop until about 570 million years ago. There are about 1.3 million known species of animals on Earth today, and probably millions more that are unknown.
This is an image of a terrestrial storm system blowing in.
Image from: International Cloud Atlas, Volume II
An Overview of Motions in Jupiter's Atmosphere
Motions in the atmosphere include wind. The major winds in Jupiter's atmosphere are the zonal winds, which flow west to east, and east to west again.
Russian physicists O. G. Sorokhtin, G. V. Chilingar, and L. F. Khilyuk noted in their book Global Warming and Global Cooling: Evolution of Climate on Earth (Developments in Earth & Environmental Sciences, Elsevier, 2007) that conventional greenhouse theory is not based on a sound physical derivation, with most calculations and predictions based on intuitive models using numerous poorly defined parameters and unproven positive feedback forcing from CO2.
Most conventional interpretations and models, such as those of the IPCC, consider only one component of heat transfer (radiation) to create a flat-earth radiation budget of the atmosphere, ocean, and land masses, and do not adequately address the impact of, e.g., convection and circulation on a rotating sphere. In contrast, the Sorokhtin et al. adiabatic theory considers Earth as an open, dissipative system that can be described by non-linear equations of mathematical physics, taking into account the formation of stable thermodynamic structures in each compartment and between compartments, ruled by strong negative feedbacks (e.g. convection, water cycles, clouds). They devised a model based on well-established relationships among physical fields describing the mass and heat transfer in the atmosphere, and subsequently published the paper "Cooling of Atmosphere Due to CO2" in Energy Sources.
The Search for Extraterrestrial Intelligence, commonly known as SETI, is back on. SETI operates by turning an array of radio telescopes to the skies to listen for signals that would indicate intelligent life, for example, a pattern of radio waves that is distinct from the background noise of the Universe. SETI went offline in April of this year after failing to secure enough funding to continue operation. It now has a new agreement with the Air Force, which will lease the telescope array when needed.
Found: Most Massive Black Holes Ever
331 million light years away, a black hole 9.7 billion times the mass of the sun is causing havoc at the center of a galaxy. The black hole has a diameter that could encompass our solar system… ten times over, from end to end. The previous record holder was a paltry 6.3 billion solar masses. In addition to the new black hole, researchers are trying to determine the mass of another black hole 336 million light years away. That black hole could be more than twice as large, at 21 billion solar masses.
Ultra-Red Galaxies Found
In a distant corner of the universe, a mere 13 billion light years away, scientists have found four “ultra-red” galaxies. These galaxies are some of the earliest to form after the big bang. Their ultra-redness may be accounted for by their extreme distance, by a large amount of dust surrounding the galaxies, by the galaxies being made up mostly of red stars… or by a combination of all three.
Quantum entanglement is a mysterious phenomenon in which two objects, once entangled, behave as one: what happens to one object will immediately happen to the other, even if the two objects are separated by millions of light years. Until recently, scientists had only been able to entangle tiny particles, like photons. This week research was published showing that scientists were able to entangle two millimeter-sized diamonds. The two objects were only entangled for 7 picoseconds, but they were entangled nonetheless. Oh yeah, they managed to entangle the diamonds at room temperature, which is also a first; normally, entangling objects requires extremely cold temperatures. This is hopefully just the first step toward entangling large objects for long periods of time, which will lead to better supercomputers, as well as help unravel the mysteries of the universe.
If you’ve been on any science-related website in the last 24 hours, you’ve inevitably heard about Kepler-22b, the most earth-like planet we’ve found so far. The planet is 2.4 times the size of earth, so it is considerably bigger, and it also sits closer to its star than we sit to the sun. However, Kepler-22b’s star is much smaller, dimmer, and cooler than the sun, so the temperature on the surface of Kepler-22b is estimated at 72 degrees Fahrenheit. From here, researchers will inevitably try to determine the makeup of the planet, and possibly try to determine the make-up of the atmosphere, which is no small feat since Kepler-22b is 600 light years, or more than 3.5 quadrillion miles, away.
Breaking Physics… Again
The lab that produced the neutrinos that traveled faster than light reproduced its findings the other day. This time they controlled for a possible source of error, so that's one less reason to think their findings are wrong. However, an Italian lab is disputing this whole faster-than-light business, saying that the first lab failed to account for the neutrinos' energy properly. So, for now we're still stuck debating whether or not the speed of light can be broken.
Engineering the Heaviest Element
Two teams of scientists are both trying to create the heaviest elements ever by firing titanium beams at a wafer of berkelium. The premise is that the 22 protons in titanium will mesh with berkelium’s 97 protons to create the new element 119. However, detecting that element could be difficult, since element 118 lasted only 1.8 milliseconds and element 119 isn’t expected to last that long before decaying.
Probing Mars for Life
The Curiosity rover launched Saturday and will probe Mars for signatures of life. The rover won't reach Mars until August 2012. Once it's there, it will use the most sophisticated tools ever used on the Martian surface to help scientists better understand Mars' geology and see if life was once feasible on the red planet.
Every day, scientists are finding new planets, called exoplanets, some of which may have the ability to harbor life. While most of these exoplanets are gas giants, like Jupiter, earth-like rocky planets and moons of gas giants are being discovered more frequently. So, with all of these discoveries, how can we narrow down which planets are most likely to have signatures of life? Leave that to astrobiologist Dirk Schulze-Makuch. In a recent interview, he outlined a two-dimensional plan to map the likelihood of finding life on candidate planets.
The first dimension is called the Earth Similarity Index… which, just as it sounds, compares newly found planets to earth in terms of size, whether or not there is water, proximity to a star, and atmosphere. The second dimension is called the Planetary Habitability Index. This index takes into account the idea that life may not need earth-like conditions to exist. For example, Saturn's largest moon, Titan, has seas of methane. Titan scores high on the PHI because life there may be able to utilize methane the way life on our planet utilizes water.
As technology becomes more advanced and we start exploring these exoplanets, I think it’s only a matter of time until we find signatures of life on other planets.
Justin Hall is trying to solve our big, big energy problems by going very, very… small. Hall gathered some of the best and brightest scientists from around the globe to find cheap, flexible solutions to the energy problems plaguing the globe. At a TED conference, Hall presented his work. You should definitely check the video out. It’s remarkable.
Now, whether or not Hall’s solutions will be adopted on a grand scale is another matter entirely. For whatever reason, cutting edge energy solutions have been hard to find adopters. Maybe Hall’s solutions will be different… but only time will tell.
Entropy is, in short, a measure of how ordered things are in the universe. The universe is continually moving toward a less ordered state. Entropy may also be the reason that we experience time, as the change in entropy is the only change that distinguishes the past from the present and the future. The video below gives a much better explanation:
Also, one of my favorite short stories, The Last Question, deals with the problem of entropy. I highly suggest you read it.
So now you know what the deal with entropy is, and why you experience time.
Strides are continuing to be made in the world of quantum computing. The most advanced quantum computer contains about 12 qubits… meaning it can hold 4,096 pieces of data simultaneously. So how does quantum computing work? In a normal computer, a bit may be represented by a group of electrons. In a quantum computer, information is stored by a single particle, maybe just a single electron. Because the rules of quantum mechanics dictate that a single particle can be in two places at once, that single particle can store two pieces of information. Information is exchanged by hitting these particles with microwaves. Because they can hold so much more information, quantum computers can perform calculations much faster. As a result, quantum computing will wind up pushing the current limits of computing power.
Well, that didn’t take long. The Dept of Defense is already planning a new initiative to cut down on new space debris. I guess they took that report seriously. The new initiative would work by launching new satellites without heavier parts readily available from defunct satellites (like antennas). This would allow satellites to be launched with less weight, reducing transportation costs, materials costs, and the amount of stuff going into space. While this doesn’t solve the problem of all the stuff that’s up there now, at least we’re reducing the number of new things we’re sending up.
Scientists also managed to figure out a 2,000-year-old mystery this week. The mystery centered around a supernova (the explosion resulting from a giant star) witnessed 2,000 years ago by the Chinese. When modern scientists went to look for this supernova, they found the remnant was much bigger than it should have been. Supernovas usually only occur from the deaths of larger stars after they collapse in on themselves. Smaller stars become extremely dense and turn into white dwarfs, which burn small and hot for a long, long time. In this particular instance, the star had become a white dwarf. However, after it stole material from a nearby star, it destabilized and exploded violently, causing the huge remnant.
You may or may not have heard. A satellite is about to crash land on earth. No, not the UARS satellite; there’s another one. For those of you keeping score at home, that’s two falling satellites in a one month span. You may be wondering why, all of a sudden, space junk is endangering our lives. The simple answer is, because there’s a lot of it. There are 22,000 pieces of useless space junk that are big enough to be tracked from earth. In addition to those, there are more than 100,000 pieces of stuff bigger than 1 cm. That might not seem big, but when it’s moving at hundreds or thousands of miles per hour, it can certainly do some damage. The picture to the left, by the European Space Operations Centre, shows how big this problem really is.
According to a report released in September, the problem is now at the tipping point. If we don’t do something soon, the space clutter could pose extreme threats to working satellites (which control GPS, Cell Phones, and anyone who has a satellite dish for cable), future space missions, and us here on the ground. Imagine if something the size of a school bus, traveling at over 15,000 mph, slammed into a sky scraper. The odds aren’t good, but it could certainly happen. The European Space Operations Centre also released an image of what space will look like if we curb the problem vs. if we continue on our current path:
In the mean time, keep an eye on the sky for a giant satellite.
Image Credits: http://www.universetoday.com/13587/space-debris-illustrated-the-problem-in-pictures/
Scientists Discover Birds Hold 'Funerals' For Dead
Animals, Pets, Wildlife | Sunday, September 2nd, 2012
(BBC) When western scrub jays encounter a dead bird, they call out to one another and stop foraging.
The jays then often fly down to the dead body and gather around it, scientists have discovered. The behaviour may have evolved to warn other birds of nearby danger, report researchers in California, who have published the findings in the journal Animal Behaviour.
The revelation comes from a study by Teresa Iglesias and colleagues at the University of California, Davis, US.
They conducted experiments, placing a series of objects into residential back yards and observing how western scrub jays in the area reacted.
The objects included different coloured pieces of wood, dead jays, as well as mounted, stuffed jays and great horned owls, simulating the presence of live jays and predators.
The jays reacted indifferently to the wooden objects.
But when they spied a dead bird, they started making alarm calls, warning others long distances away.
The jays then gathered around the dead body, forming large cacophonous aggregations. The calls they made, known as “zeeps”, “scolds” and “zeep-scolds”, encouraged new jays to attend to the dead.
The jays also stopped foraging for food, a change in behaviour that lasted for over a day.
When the birds were fooled into thinking a predator had arrived, by being exposed to a mounted owl, they also gathered together and made a series of alarm calls.
They also swooped down at the supposed predator, to scare it off. But the jays never swooped at the body of a dead bird.
Read the full article:
Comments in HTML
<div id="header">
  <p>Stuff</p>
</div> <!-- END div-header -->
The <!-- --> markup is the HTML comment. It is a way to add notes into the code which will not display when the HTML is rendered by the browser. In the example above, the comment is used to signify which opening div tag the closing tag is actually closing.
The basic achievements in studying infinite series were made in the 18th and 19th centuries, when mathematicians investigated issues regarding the convergence of different types of series. In particular, they found that the famous geometric series:

$\sum_{k=0}^{\infty} z^{k}$

converges inside the unit circle $|z| < 1$ to the function $\frac{1}{1-z}$, but the function can be analytically extended outside this circle by re-expanding the series about other centers $a$, using $\frac{1}{1-z} = \sum_{k=0}^{\infty} \frac{(z-a)^{k}}{(1-a)^{k+1}}$, which converges for $|z-a| < |1-a|$. The sums of all such series produce the same function $\frac{1}{1-z}$, but the restriction on convergence of each series strongly depends on the distance between its center of expansion and the nearest singular point $z = 1$ (where the function has a first-order pole).
The properties of the series $\sum_{k=0}^{\infty} z^{k^2}$ lead to similar results, which attracted the interest of J. Bernoulli (1713), L. Euler, J. Fourier, and other researchers. They found that this series cannot be analytically continued outside the unit circle $|z| < 1$, because its boundary $|z| = 1$ contains not one, but an infinite, dense set of singular points. This boundary was called the natural boundary of analyticity of the corresponding function, which is defined as the sum of the series.
Special contributions to the theoretical development of these series were made by C. G. J. Jacobi (1827), who introduced the elliptic amplitude and studied the twelve elliptic functions sn, cn, dn, cd, sd, nd, dc, nc, sc, ns, ds, and cs. All these functions were later named for Jacobi. C. G. J. Jacobi also introduced four basic theta functions, each expressed through an infinite series.
These Jacobi elliptic theta functions, notated by the symbols $\vartheta_1$, $\vartheta_2$, $\vartheta_3$, and $\vartheta_4$, have the following representations:
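In the standard convention, with nome $q$, $|q| < 1$ (these are the usual textbook definitions, matching Mathematica's EllipticTheta):

$\vartheta_1(z, q) = 2 \sum_{n=0}^{\infty} (-1)^n q^{(n+1/2)^2} \sin((2n+1)z)$

$\vartheta_2(z, q) = 2 \sum_{n=0}^{\infty} q^{(n+1/2)^2} \cos((2n+1)z)$

$\vartheta_3(z, q) = 1 + 2 \sum_{n=1}^{\infty} q^{n^2} \cos(2nz)$

$\vartheta_4(z, q) = 1 + 2 \sum_{n=1}^{\infty} (-1)^n q^{n^2} \cos(2nz)$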
A more detailed theory of elliptic theta functions was developed by C. W. Borchardt (1838), K. Weierstrass (1862–1863), and others. Many relations in the theory of elliptic functions include derivatives of the theta functions with respect to the variable $z$: $\vartheta_1'$, $\vartheta_2'$, $\vartheta_3'$, and $\vartheta_4'$, which cannot be expressed through other special functions. For this reason, Mathematica includes not only the four well-known theta functions, but also their derivatives.
On the applicability of Darwinian principles to chemical evolution that led to life
Chemical evolution at the primitive prebiotic level may have proceeded toward increased diversity and complexity by the adjacent possible process (originally proposed by Kauffman). Once primitive self-replicating systems evolved, they could continue evolution via Eigen's hypercycles and via Prigogine's emergence of order in far-from-equilibrium, non-linear systems. We envisage a gradual transition from a complex pre-life system to life, through what we call the transition zone. In this zone we find a mixture of complex chemical cycles that reproduce and secure energy. Small incremental changes in the structure and organization of the transition zone eventually lead to life. However, the chemical systems in this zone may or may not lead to life. It is possible that the transition to life might be the result of an algorithm, but it is uncertain whether an algorithm could be applied to systems in which chance plays a role. (Published Online August 5 2004)
(Received November 26 2003)
(Accepted March 24 2004)
Key Words: algorithm for evolution; chemical selectivity; Darwinian evolution; origin of life; prebiotic evolution; transition zone.
c1 Randall S. Perry; or Vera M. Kolb, Phone: 262-595-2133. Fax: 262-595-2056. | <urn:uuid:391c2f77-b6ed-44a3-b835-bc042e659f00> | 2.875 | 284 | Academic Writing | Science & Tech. | 35.889172 |
Find information on common issues.
Ask questions and find answers from other users.
Suggest a new site feature or improvement.
Check on status of your tickets.
Electromagnetism is one of the four fundamental interactions of nature, along with the strong interaction, the weak interaction, and gravitation. It is the force that causes the interaction between electrically charged particles; the regions in which this happens are called electromagnetic fields.
Learn more about electromagnetism from the many resources on this site, listed below. More information can be found here.
Numerical Simulations of Quantum and Electromagnetic Systems for Energy Applications
Introduction to computational techniques employed in research on quantum electronic and electromagnetic systems. Students will learn the strengths and weaknesses of each approach, and what types of …
nanoHUB.org, a resource for nanoscience and nanotechnology, is supported by the National Science Foundation and other funding agencies. | <urn:uuid:948a2c6c-b649-4396-b045-36752bd9a289> | 3 | 193 | Content Listing | Science & Tech. | 21.921661 |
In this episode of O Wow Moments featuring Mr. O from the Children’s Museum of Houston, we play a little game called "Guess That Smell!" where we explore how our sense of smell works. It turns out that we are nanosensors - our noses actually sense molecules - things close to a billionth of a meter in size! But, as you'll see in this video, be careful about with whom you play the game... Many thanks to the Nanoscale Informal Science Education Network (NISE Net) and the National Science Foundation for funding this video. | <urn:uuid:aae48781-72ab-4b97-aa34-10a155663df0> | 2.703125 | 120 | Truncated | Science & Tech. | 63.50875 |
1. Gene Variability. The study of R alleles which Fogel and I reported in the 1943 News Letter has been continued, with the addition of a series of rr types and with further study of specific modifiers of R action and of environmental conditions affecting it. All or nearly all of the 22 Rr's originally included appear to be distinguishable in their effect upon plant color, but since some of these differences are slight they require confirmation in experiments in which modifier action may be excluded more critically than is possible by repeated parallel backcrossing.
For this purpose we have used colorless aleurone mutants of several of the original Rr alleles, since as previously reported spontaneous mutations of Rr → rr have no appreciable effect upon the plant-color action. For example six Rr alleles (Boone, 997, Cornell, Quapaw, Ponca, and Black Beauty) form a group characterized by rather strong pigmentation, though distinguishable in parallel backcrosses by slight though consistent differences. Colorless aleurone mutants of Cornell and Quapaw were crossed with other members of the group, and backcrossed by rg. This yields progenies in which the Cornell or Quapaw phenotype may be compared with the phenotypes of similar alleles in sib plants, the aleurone color difference providing a completely linked marker. Such comparisons, so far as they have gone, confirm the reality of the small differences observed between members of this group. A similar method may be used for the study of "non-linear" variation in the action of the different alleles (News Letter 1943, page 20), and here the mutant rr's may be supplemented by naturally occurring rr's. We are using the latter chiefly for this purpose.
The alleles of B (News Letter 1943, page 22) appear to be fully as variable as those of R, and since the range in plant-color phenotype is even wider, they may be better suited to the identification of small differences. Among 14 Bw's compared, 6 were selected as standards to represent distinct levels spaced roughly between b and B, and in each of these a stock of B-gl rg was established. These alleles listed in ascending order of effectiveness, are designated as follows:
1. Bw (Boone)
2. Bw (Young)
3. Bw (Clarage)
4. Bw (La Paz)
5. Bw (Lookout)
6. Bw (Seattle)
Additional Bw's, both from existing stocks and from mutations of various B's, have been crossed each with the standard Bw-gl strains which appear to be just below and above them in effectiveness, and backcrosses of these hybrids will determine their position in the series. For further mutation work, Anderson's In2 (v4 B Gl lg) stock is being extracted in homozygous combination with rg since Bw mutations induced in this stock may be crossed with the naturally-occurring alleles to produce backcross progenies with virtually complete linkage of marker genes.
Miss Elizabeth Somers is making a detailed histological study of the development and distribution of anthocyanin under the action of R and of B. | <urn:uuid:f9dd6660-f82d-4dff-a0aa-0907fe380d85> | 2.75 | 668 | Academic Writing | Science & Tech. | 43.292474 |
Why do objects have mass? To help find out, CERN has built the Large Hadron Collider (LHC), the most powerful particle accelerator yet created. Since 2008, the LHC has smashed protons into each other with unprecedented impact speeds. The LHC is exploring the leading explanation that mass arises from ordinary particles slogging through an otherwise invisible but pervasive field of virtual Higgs particles. Were high energy colliding particles to create real Higgs bosons, the explanation for mass creation will be bolstered. Last week, two LHC groups reported on preliminary indications that the Higgs boson might exist around 120 GeV in mass. Data from the LHC collisions is also being scanned for micro black holes, and to explore the possibility that every type of fundamental particle we know about has a nearly invisible supersymmetric counterpart. You can help -- the LHC@Home project will allow anyone with a home computer to help search archived LHC data for these strange beasts. In the picture, a person stands in front of one of the six huge detectors attached to the LHC.
Credit & Copyright:
Submitted by kfesenma on Wed, 2012-09-05 07:00
Today, September 5, marks the 35th anniversary of the launch of Voyager 1, which lifted off in 1977 on a Titan III–Centaur launch system just 16 days after its twin, Voyager 2. Now 11 billion and 9 billion miles from the sun, respectively, the spacecraft are the farthest-flung man-made objects, traveling every 100 days a distance equal to that between the sun and Earth.
Submitted by ksvitil on Mon, 2012-09-03 19:01
Scientists and engineers around the world are working to find a way to power the planet using solar-powered fuel cells. Such green systems would split water during daylight hours, generating hydrogen that could be stored and used later to produce water and electricity. But robust catalysts are needed to drive the water-splitting reaction. Now Caltech chemists have determined the mechanism by which some highly effective cobalt catalysts work.
Submitted by mwoo on Tue, 2012-08-28 07:00
As an animal develops from an embryo, its cells take diverse paths, eventually forming different body parts—muscles, bones, heart. In order for each cell to know what to do during development, it follows a genetic blueprint, which consists of complex webs of interacting genes called gene regulatory networks. Now, for the first time, biologists at Caltech have built a computational model of one of these networks.
Submitted by mwoo on Sun, 2012-08-26 07:00
A team led by scientists at the California Institute of Technology (Caltech) has made the first-ever mechanical device that can measure the mass of individual molecules one at a time.
Submitted by kfesenma on Thu, 2012-08-23 18:00
Caltech researchers have shown for the first time that a specific sugar, known as GlcNAc ("glick-nack"), plays a key role in helping cancer cells grow rapidly and survive under harsh conditions. The finding suggests new potential targets for therapeutic intervention.
Submitted by katien on Tue, 2012-08-21 07:00
The frontal lobes are the largest part of the human brain, and damage to this area can result in profound impairments in reasoning and decision making. To find out more about what different parts of the frontal lobes do, neuroscientists at Caltech teamed up with researchers at the world's largest registry of brain-lesion patients. By mapping the brain lesions of these patients, the team was able to show that reasoning and behavioral control are dependent on different regions of the lobes than the areas called upon when making a decision.
Submitted by Anonymous (not verified) on Fri, 2012-08-10 07:00
When Curiosity touched down safely on Mars on August 5, John Grotzinger, the mission's chief scientist and the Fletcher Jones Professor of Geology at Caltech, was given the "keys" to the car-sized rover. Since then, most of Curiosity's time has been taken up by a series of checkouts, but she has relayed hundreds of images back to Earth, giving the science team plenty to study and discuss.
Submitted by Anonymous (not verified) on Tue, 2012-08-07 07:00
The mood in von Karman Auditorium at the Jet Propulsion Laboratory (JPL) late Sunday night was overwhelmingly, almost deliriously, celebratory. The Mars Science Laboratory (MSL) rover, Curiosity, touched down safely on Mars at 10:32 p.m. PDT and minutes later relayed its first black-and-white thumbnail images back to Earth, showing one of its wheels firmly planted on Martian soil.
Submitted by lorio on Sun, 2012-08-05 07:00
The "seven minutes of terror" are over, and members of NASA's Mars Science Laboratory (MSL) team have finally let out a collective sigh of relief.
Submitted by cnk on Mon, 2012-07-30 07:00
Since launching in November 2011, NASA's Mars Science Laboratory (MSL) has been traveling full steam ahead on a journey that will traverse over 350 million miles, ending on the Red Planet at 10:31 p.m. on Sunday, August 5. Tucked into a spacecraft for safekeeping during flight, MSL contains a rover named Curiosity. Here are some facts about Curiosity and the mission. | <urn:uuid:a4509428-838c-425c-95c0-4603c5031eba> | 2.828125 | 926 | Content Listing | Science & Tech. | 51.624892 |
“Simple and cheap, like onion dip.” That’s how Seth Shostak (SETI Institute) refers to our early optical search systems, which have involved limited equipment in the hunt for extraterrestrial intelligence, at least when compared to the much more demanding resources deployed by the radio search. Cheap is good, but not when you can only check one part of the sky at a time. All this gets Shostak pondering in a recent article about the parameters of a laser signal from an extraterrestrial civilization.
For if we might miss a faint signal, what about a really big one? Suppose an intelligent species somewhere out there is deliberately trying to contact our planet. Wouldn’t it make sense, Shostak muses, to create a huge optical impression, a signal that would catch our attention so obviously that we could then focus in to detect whatever message might be streaming from that same location? Bright objects in the sky do appear and are usually recorded, as witness historical records of supernovae.
And so it may be telling us something that we have no records of recurring bright objects. Sure, it would take huge resources to make a signal from such a civilization bright enough for the average person to see it without any equipment (Shostak estimates 5 × 10^25 watts to push such a signal from 1000 light years away). That’s well beyond our resources, but not those of a Kardashev Type II civilization (one capable of using the entire power output of its Sun), which could imply there is nothing more advanced than a Type I civilization near us.
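For scale, here is a back-of-envelope sketch (my own illustrative numbers, not Shostak's calculation) of the isotropic power a source 1000 light years away would need to reach the naked-eye limit of apparent magnitude +6. A tightly beamed laser needs far less transmitter power than the isotropic figure, which is presumably why directed-signal estimates come out lower.

import math

LY_PER_PC = 3.2616   # light years per parsec
L_SUN = 3.828e26     # solar luminosity in watts
M_SUN = 4.83         # Sun's absolute visual magnitude

d_pc = 1000 / LY_PER_PC              # ~307 parsecs
m_limit = 6.0                        # naked-eye limiting magnitude
# Distance modulus: m - M = 5 log10(d / 10 pc)
M_needed = m_limit - 5 * math.log10(d_pc / 10)
L_needed = L_SUN * 10 ** ((M_SUN - M_needed) / 2.5)

print(f"absolute magnitude needed: {M_needed:.2f}")    # about -1.4
print(f"isotropic power needed:    {L_needed:.1e} W")  # about 1e29 W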
But whatever its Kardashev type, an advanced civilization may have no interest in beaming a signal to us in the first place. Or perhaps we remain simply undiscovered in a galactic backwater. Whatever the case, ‘naked eye SETI’ adds another twist to the ‘where are they’ question that Fermi posed, and at least seems to be saying that if a Type II culture wanted to reach us, it could have made its presence so blindingly obvious that we would be sure not to miss it. “…it strikes me as paradoxical,” says Shostak, “given the vastness of the cosmos, that such a simple signal has not been recognized, a signal that even a cow could see.” Welcome to the ‘Cow Paradox.’
Centauri Dreams‘ take: Long-time readers know I think there are few technological civilizations in our galaxy to be detected. When asked, I always settle on a number like 5-10 instead of Sagan’s 1 million. That’s the thought of a writer with no scientific qualifications other than a keen interest in these topics. But we’re all just guessing at this point, and this writer is not at all surprised our SETI efforts have so far come up short. | <urn:uuid:552fdb3c-6706-4f39-b497-847a24adaff9> | 2.78125 | 598 | Personal Blog | Science & Tech. | 46.0025 |
How big is an atom? A simple question maybe, but the answer is not at all straighforward. To a first approximation we can regard atoms as "hard spheres", with an outer radius defined by the outer electron orbitals. However, even for atoms of the same type, atomic radii can differ, depending on the oxidation state, the type of bonding and - especially important in crystals - the local coordination environment.
Take the humble carbon atom as an example: in most organic molecules a covalently-bonded carbon atom is around 1.5 Ångstroms in diameter (1 Ångstrom unit = 0.1 nanometres = 10^-10 metres); but the same atom in an ionic crystal appears much smaller: around 0.6 Ångstroms. In the following article we'll explore a number of different sets of distinct atomic radius sizes, and later we'll see how you can make use of these "preset" values with CrystalMaker.
Atomic radii represent the sizes of isolated, electrically-neutral atoms, unaffected by bonding topologies. The general trend is that atomic sizes increase as one moves downwards in the Periodic Table of the Elements, as electrons fill outer electron shells. Atomic radii decrease, however, as one moves from left to right, across the Periodic Table. Although more electrons are being added to atoms, they are at similar distances to the nucleus; and the increasing nuclear charge "pulls" the electron clouds inwards, making the atomic radii smaller.
Atomic radii are generally calculated, using self-consistent field functions. CrystalMaker uses Atomic radii data from two sources:
VFI Atomic Radii:
Vainshtein BK, Fridkin VM, Indenbom VL (1995) Structure of Crystals (3rd Edition). Springer Verlag, Berlin.
CPK Atomic Radii:
Clementi E, Raimondi DL, Reinhardt WP (1963). Journal of Chemical Physics 38:2686-
The covalent radius of an atom can be determined by measuring bond lengths between pairs of covalently-bonded atoms: if the two atoms are of the same kind, then the covalent radius is simply one half of the bond length.
Whilst this is straightforward for some molecules such as Cl2 and O2, in other cases one has to infer the covalent radius by measuring bond distances to atoms whose radii are already known (e.g., a C--X bond, in which the radius of C is known).
Van-der-Waals radii are determined from the contact distances between unbonded atoms in touching molecules or atoms. CrystalMaker uses Van-der-Waals Radii data from:
Bondi A (1964) Journal of Physical Chemistry 68:441-
These are the "realistic" radii of atoms, measured from bond lengths in real crystals and molecules, and taking into account the fact that some atoms will be electrically charged. For example, the atomic-ionic radius of chlorine (Cl-) is larger than its atomic radius.
The bond length between atoms A and B is the sum of the atomic radii,
d_AB = r_A + r_B
CrystalMaker uses Atomic-Ionic radii data from:
Slater JC (1964) Journal of Chemical Physics 41:3199
Perhaps the most authoritative and highly-respected set of atomic radii are the "Crystal" Radii published by Shannon and Prewitt (1969) - one of the most cited papers in all crystallography - with values later revised by Shannon (1976). These data, originally derived from studies of alkali halides, are appropriate for most inorganic structures, and provide the basis for CrystalMaker's default Element Table. The data are published in:
Shannon RD Prewitt CT (1969) Acta Crystallographica B25:925-946
Shannon RD (1976) Acta Crystallographica A32:751-767
Colour-coding atoms by element type is an important way of representing structural information. Of course, atoms don't have "colour" in the conventional sense, but various conventions have been established in different disciplines.
Many organic chemists use the so-called CPK colour scheme. These colours are derived from those of the plastic space-filling models developed by Corey and Pauling, and later improved by Koltun ("CPK").
Whilst the standard CPK colours are limited to the elements found in organic compounds, CrystalMaker's VFI Atomic Radii, CSD Default Radii and Shannon & Prewitt Crystal Radii Element Tables provide a more diverse range of contrasting colours.
You can easily change the colour and/or radius of a crystal site, or group of sites, using CrystalMaker's Site Browser (to make this visible, choose: Window > Sidebar > Site Browser). The pane shows an hierarchical listing of element types and sites. Each element row has a colour button, which you can use to change the colours for all atoms with that element type. You can edit the radius of atoms of that element type using the radius field "r [Å]".
Editing the radii for all oxygen atoms in a structure, using CrystalMaker's Site Browser.
You can edit the colours and/or radii for specific crystal sites, by using the colour/radius fields on a site row. You can also change the colours of individually-selected atoms in your structure, using the Selection > Atoms > Colour command.
Whilst CrystalMaker lets you edit individual atomic radii (and colours), for greater convenience you'll probably want to specify a default set of atomic radii and colours. CrystalMaker includes a number of different "Element Tables", and you can edit these or create your own, using the Element Editor (Edit > Elements).
Editing the default radius of hydrogen, using CrystalMaker's Element Editor.
This floating window displays the currently-active Element Table: a list of element symbols, atomic radii and colours. At the top of the window is a popup menu, which lists the different Element Tables that are included with the program; you can switch between any of these by choosing them from the popup menu.
Once you've loaded an Element Table (e.g., by choosing its name from the popup menu), you can make this your default set by clicking the Save button. The default set is saved in your CrystalMaker Preferences file, ready for use the next time you use the program.
You can apply the current colours and radii to a currently-displayed structure, by clicking the Apply button.
You can also import or export tables of element data (see the CrystalMaker User's Guide for more information on the format required).
It is important to choose the correct, default, Element Table for more than just aesthetic reasons. When auto-generating bonds, CrystalMaker uses the sum of atomic radii (plus 15%) to estimate the maximum search distances. If your default set isn't right, then you may find that not all bonds are generated in the way you'd expect.
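As a sketch of the rule just described (my own illustration, not CrystalMaker's source), the maximum search distance is simply the sum of the two radii plus 15%, so the radius table in force directly changes which bonds get generated:

# Illustrative sketch of the stated bond-search rule: two atoms are
# candidate-bonded when their separation is at most (r_A + r_B) * 1.15.
# Radii below are example Shannon & Prewitt-style "crystal" values in
# Angstroms; check your Element Table for the real numbers.
RADII = {"Si": 0.40, "O": 1.26}

def max_bond_distance(elem_a, elem_b, tolerance=0.15):
    return (RADII[elem_a] + RADII[elem_b]) * (1.0 + tolerance)

def is_bonded(elem_a, elem_b, separation):
    return separation <= max_bond_distance(elem_a, elem_b)

print(max_bond_distance("Si", "O"))   # ~1.91 A
print(is_bonded("Si", "O", 1.62))     # True: a typical Si-O bond is ~1.6 A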
Organic Structures Alert! CrystalMaker's default Element Table is the Shannon & Prewitt "Crystal" radii, which is appropriate for most inorganic structures. When working with organic structures, one of the covalent or Van-der-Waals sets will be more appropriate.
Mark Winter's Web Elements web site.
The following table contains some of the atomic radius data used by CrystalMaker. This is a brief summary of a far more extensive body of work - please see the notes at the end of this page for more information.
Atomic Radii: values are calculated from:
E Clementi, D L Raimondi, W P Reinhardt (1963) J Chem Phys. 38:2686.
Ionic Radii: these data are taken from an empirical system of unified atomic-ionic radii, which is suitable for describing anion-cation contacts in ionic structures. The data were derived by the comparison of bond lengths in over 1200 bond types in ionic, metallic, and covalent crystals and molecules by:
J C Slater (1964) J Chem Phys 41:3199
J C Slater (1965) Quantum Theory of Molecules and Solids. Symmetry and Bonds in Crystals. Vol 2. McGraw-Hill, New York.
Note that calculated data have been used for the following elements: He, Ne, Ar, Kr, Xe, At and Rn. These data were taken from:
E Clementi, D L Raimondi, W P Reinhardt (1963) J Chem Phys 38:2686
Covalent Radii: Data given here are taken from WebElements, copyright Mark Winter, University of Sheffield, UK.
Van-der-Waals Radii: Van der Waals radii are established from contact distances between non-bonding atoms in touching molecules or atoms. Most data here are from:
A Bondi (1964) J Phys Chem 68:441
"Crystal" Radii: These data are taken from Shannon & Prewitt's (S&P) seminal work on "physical" ionic radii, as determined from measurements of real structures.
Note that in most cases S&P quote different radii for the same element: the radii vary according to charge and coordination number. We have chosen the most-common charges (oxidation states) and coordination numbers. The details are given in the element text file after each data entry.
R D Shannon and C T Prewitt (1969) Acta Cryst. B25:925-946
R D Shannon (1976) Acta Cryst. A32:751-767
Problem 437: Tangent Circles, Diameter, Chord, Perpendicular, Congruence
The figure shows circle 1 with diameter AB and circle 2 with diameter AC, tangent at A. Line APD is a chord of circle 1, line DFE is perpendicular to AB, the extension of AF intersects circle 1 at G, line GMH is perpendicular to AB, and AM intersects FE at N. Prove that AN and AP are congruent.
Hi, I recently came across some code which had the lines below.
unsigned int n = 10;
n ^= (unsigned int) - 1 ;
It is not clear to me what the second line does.
Anyone can help please :)
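For what it's worth, here is a small self-contained sketch of what that line computes (my own example, not from the original thread): (unsigned int)-1 is -1 converted to unsigned, which the C standard guarantees to be UINT_MAX, the all-ones bit pattern; XOR-ing with all ones flips every bit, so the statement is equivalent to n = ~n.

#include <stdio.h>
#include <limits.h>

int main(void) {
    unsigned int n = 10;
    /* -1 converted to unsigned int wraps to UINT_MAX (all bits set). */
    printf("%d\n", (unsigned int)-1 == UINT_MAX);  /* prints 1 */

    n ^= (unsigned int)-1;  /* XOR with all ones flips every bit */
    printf("%u\n", n);      /* 4294967285 where unsigned int is 32 bits */

    n = 10;
    printf("%u\n", ~n);     /* same result */
    return 0;
}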
on 03/05/2010 – Made popular on 03/05/2010
When a data type is considered signed or unsigned, does that simply refer to negative numbers for signed and positive numbers for unsigned? Further, if that is the case, would multiplying and dividing etc. with two signed numbers, like (-2)*(-2) = 4, result in an unsigned?
I need to write the implementation of __sync_fetch_and_sub atomic operation myself in assembly language based on GCC 3.4 which doesn't have __sync_fetch_and_sub builtins. But I know little about assembly. | <urn:uuid:3bdc3a67-f20a-4310-a413-7e9d807ffff7> | 2.9375 | 175 | Q&A Forum | Software Dev. | 62.535 |
Nutrient-rich slurry from farms has been causing coral populations on Australia's Great Barrier Reef to crash for 90 years.
The corals collapsed between the 1920s and 1950s, say John Pandolfi at the University of Queensland in Brisbane and his colleagues. The team took cores from three reefs and worked out when the corals died. Two had little coral left after the 1950s, while the third had been colonised since then by different types.
By the 1920s, European settlers were farming intensively near rivers flowing onto the reef, boosting agricultural run-off by up to a factor of 20. Events like cyclones kill coral, but the extra nutrients in the water help seaweed move in afterwards, preventing coral from regenerating, says Terry Done of the Australian Institute of Marine Science in Townsville, Queensland.
The reefs were already in decline again when monitoring began in the 1980s, says Joana Figueiredo of James Cook University, also in Townsville. Pandolfi's work shows that it was pristine until the 1920s.
Journal reference: Proceedings of the Royal Society B, DOI: 10.1098/rspb.2012.2100
Have your say
Thu Nov 08 13:31:57 GMT 2012 by Anna Dyer
I have only read the online synopsis of this article but would like to know what remedial actions are being taken to prevent the situation deteriorating still further. Are the invading species being actively removed? Or indigenous corals being re-introduced? Is this issue a high priority for the Australian Government?
This section illustrates how to draw different curves with the class QuadCurve2D.
The QuadCurve2D class represents a quadratic parametric curve segment. We provide an example which shows several different curves.
In the given example, we have used the Vector class to implement an array of objects. The components of a Vector can be accessed using an index. The method vector.size() returns the number of components in the vector, and the method vector.elementAt(k) returns the component at the specified index; in this example we have used 'k' as the index.
Here is the code of QuadCurveExample.java
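The original listing did not survive in this copy of the page, so here is a minimal stand-in sketch consistent with the description above (coordinates, curve count, and window sizing are my own):

import java.awt.*;
import java.awt.geom.QuadCurve2D;
import java.util.Vector;
import javax.swing.*;

public class QuadCurveExample extends JPanel {
    // Store several curves in a Vector, as described above.
    private final Vector<QuadCurve2D> curves = new Vector<QuadCurve2D>();

    public QuadCurveExample() {
        // Same start/end points; only the control point's y varies.
        for (int i = 0; i < 5; i++) {
            curves.add(new QuadCurve2D.Double(
                    40, 150,           // start point (x1, y1)
                    200, 20 + 60 * i,  // control point (ctrlx, ctrly)
                    360, 150));        // end point (x2, y2)
        }
    }

    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2 = (Graphics2D) g;
        // Iterate with size() and elementAt(k), as in the description.
        for (int k = 0; k < curves.size(); k++) {
            g2.draw(curves.elementAt(k));
        }
    }

    public static void main(String[] args) {
        JFrame frame = new JFrame("QuadCurve2D Example");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(new QuadCurveExample());
        frame.setSize(420, 340);
        frame.setVisible(true);
    }
}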
Output will be displayed as:
The one objection I would have is the title – fortunately, extinction is not yet an issue…
Canada and the US have about 50 species of native bumblebees. For five of them, a rapid decline has been observed since the 1990s. Three species — Bombus affinis, B. terricola, and B. occidentalis — will now be submitted to the International Union for Conservation of Nature (IUCN) Red List of Threatened Species (cf NatureNews: Plight of the bumblebee) (via evolvimus).
Two main reasons for the decline are discussed. One is a fungal pathogen, Nosema bombi, that might have been introduced with commercially used bumblebees from Europe. The other might be climate change, which may cause a shift in flowering times and nectar flow that bumblebees are not adapted to.
Our special friend B. griseocollis still seems to do okay, though :)
Also this month, Anna Morkeski and Anne Averill of the University of Massachussetts published “Wild Bee Status and Evidence for Pathogen ‘Spillover’ with Honey Bees” in the American Bee Journal and in Bee Culture with a very good overview over the current research into bumblebee-decline.
(photo: A. Morkeski) | <urn:uuid:41ef243a-ac7b-437f-a868-6c1266d291f2> | 2.953125 | 283 | Content Listing | Science & Tech. | 37.941843 |
• Title: Post-common envelope binaries from SDSS – XVI. Long orbital period systems and the energy budget of CE evolution
• Authors: Rebassa-Mansergas et al.
• First Author’s Institution: Universidad de Valparaiso, Chile
There is no doubt that a high fraction of stars are in binary (or even higher order!) systems. Many of these systems are what we refer to as wide binaries, meaning the stars are far enough apart that neither star affects the evolution of the other. However, this isn’t always the case. For example, one class of binary stars, Algols, is defined by a primary star (the more massive star) which is less evolved than the secondary (the less massive star). If you remember, basic single star evolution tells us the more massive a star is, the more quickly it evolves. The discovery of Algols created the so called Algol paradox, which was ultimately resolved by realizing that when two stars are close enough, one can transfer mass to the other. This suggests the star which is observed to be more massive in the present day was actually the smaller star when the system formed, hence why it has evolved more slowly.
Mass transfer in a binary system begins when a star overflows its Roche lobe. The Roche lobe tells us the point at which the material from one star is more gravitationally attracted to its companion than to itself. This material will normally be accreted by the companion to the donor star in a slow and stable manner. In a stable mass transfer case, an accretion disk develops around the accreting star. However, sometimes this mass transfer is unstable because it is occurring at an extremely high rate. In these cases, the overflowing mass builds up so much that it overflows the companion's Roche lobe as well and engulfs the entire system. This is what we refer to as a common envelope (CE).
No systems have been observed in the CE phase, but there are many binary systems which are believed to be post-CE binaries. This is because CEs are essential to explain close binaries where one member is fully evolved. Let's think of it in terms of the sun. When the sun evolves to be a red giant, its radius will expand by around a factor of 100, completely engulfing the orbit of Mercury. So if we see a white dwarf star which has a sun-like progenitor, and a companion within the orbit of Mercury, we know that the companion must have been engulfed by the envelope when the white dwarf was a red giant star.
The physics of CE evolution is poorly understood. This is because proper modelling of the CE phase requires time consuming 3-dimensional simulations. While efforts are being made in this regard, it is still not feasible to simulate the entire evolution of CE systems. Therefore, it is common to discuss CE evolution in terms of the energy budget of the system. An efficiency parameter, α, is used to characterize the fraction of available orbital energy which is used to expel the CE (since no observable post-CE binaries still have the CE). There has been some disagreement in the literature on whether the orbital energy can possibly provide enough energy to expel the CE, and some researchers believe that energy from another source (most likely recombination energy) is needed. Recombination energy arises if the CE is ionized because ionization energy is released when atoms recombine. The authors of this paper investigate two “long period” post-CE systems to determine whether orbital energy would have been sufficient to explain the ejection of the CE. These longer period systems are of particular interest because they have lost less orbital energy than post-CE systems on shorter period orbits.
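In the commonly used form of this energy budget (a standard formulation along the lines of Webbink's; the paper's exact prescription may differ in details such as an envelope structure parameter), the envelope's binding energy is set against the orbital energy released as the orbit shrinks from an initial separation $a_i$ to a final separation $a_f$:

$E_{\mathrm{bind}} = \alpha \left( \frac{G\, M_{\mathrm{core}}\, M_2}{2\, a_f} - \frac{G\, M_1\, M_2}{2\, a_i} \right)$

where $M_1$ and $M_{\mathrm{core}}$ are the donor's total and core masses and $M_2$ is the companion's mass; $\alpha \le 1$ expresses that only a fraction of the released orbital energy goes into ejecting the envelope.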
Observations and Analysis
The authors examine two post-CE binaries selected from the Sloan Digital Sky Survey: SDSSJ 1211-0249 and SDSSJ 2221+0029. Rebassa-Mansergas et al. determine the periods of these systems using the radial velocity technique. Because these are double-lined spectroscopic binaries, the authors know not only the total mass of each system, but also the mass ratio of the two stars in each system.
The authors then separate out the spectral features of both stars in each system and examine them separately. This separation allows Rebassa-Mansergas et al. to determine the surface gravity (log g), effective temperature (Teff), and mass of the white dwarf, as well as the spectral type, mass, and radius of the secondary, using the mass-radius relations for white dwarfs and for M dwarfs. Once they know the present-day constraints on the systems, the authors determine the age of the systems (using white dwarf cooling tracks) and attempt to reconstruct the evolutionary history of the systems.
Rebassa-Mansergas et al. find that the evolution of both of the systems presented in this paper can be understood with only energy contributions from the initial orbital energy of the systems. However, it is important to note that this finding does not rule out energy contributions from recombination energy (or other energy sources). In fact, the authors suggest the high efficiency required (α = 0.42–1) for SDSSJ 2221+0029 suggests another energy source, although this system does not provide direct evidence. See the figure for the constraints on direct evidence.
Latest posts by Kim Phifer (see all)
- The effect of magnetic fields on star formation - February 5, 2013
- A deep X-ray observation of Hickson Compact Group 62 - November 6, 2012
- Sgr A*: A flickering black hole - October 5, 2012 | <urn:uuid:bf77ce00-f9a9-4545-8dee-375816802fee> | 3.15625 | 1,174 | Academic Writing | Science & Tech. | 40.940261 |
As today’s San Antonio earthquake attests, such temblors can occur in Texas, although they are relatively rare.
According to the University of Texas Institute of Geophysics, there were about 100 earthquakes in Texas large enough to be felt during the last century. Just four of these earthquakes have had magnitudes between 5 and 6, making them large enough to be felt over a wide area and produce significant damage near their epicenters.
Here’s a map showing key earthquakes in Texas between 1847 and 2001.
Most earthquakes happen in four regions: near El Paso and in the Panhandle, in northeastern Texas, and finally, though less commonly, in south-central Texas. Outside of these areas, including the Houston region, earthquakes are exceedingly rare.
Compared to the rest of the country Texas sees very few earthquakes:
Earthquake hazard map for the continental United States.
Clearly, then, in other parts of the country, and the world for that matter, earthquakes of 3.7 magnitude are quite common. According to the U.S. Geological Survey, an estimated 130,000 earthquakes of magnitude 3.0 to 3.7 occur every year in the world. And 250 earthquakes of such magnitude have already occurred this year in the United States.
Astronomers have witnessed one of the rarest and most extreme galaxy clusters in the universe! That’s not all; behind it, they have found an object that ought not exist!
Using NASA’s Hubble Space Telescope, cosmologists have uncovered an extremely massive cluster of galaxies 10 billion light-years away and, behind it, a faint arc of light. The galactic cluster, first spotted by NASA’s Spitzer Space Telescope, formed during an era when the universe was a quarter of its current age of 13.7 billion years.
The humongous arc is the stretched shape of a more remote galaxy whose light is distorted by the powerful gravity of this huge cluster; this is an effect called “gravitational lensing”. In case you’re wondering what galaxy clusters actually are, they are collections of galaxies that orbit one another and are the most massive objects in the universe. What is troubling, though, is the fact that this arc should not exist in the first place.
“When I first saw it, I kept staring at it, thinking it would go away,” said study leader Anthony Gonzalez of the University of Florida in Gainesville, whose team includes researchers from NASA’s Jet Propulsion Laboratory, Pasadena, Calif. “According to a statistical analysis, arcs should be extremely rare at that distance. At that early epoch, the expectation is that there are not enough galaxies behind the cluster bright enough to be seen, even if they were ‘lensed,’ or distorted by the cluster. The other problem is that galaxy clusters become less massive the further back in time you go. So it’s more difficult to find a cluster with enough mass to be a good lens for gravitationally bending the light from a distant galaxy.”
The latest uncovered cluster, named IDCS J1426.5+3508, is extraordinary because during this period in cosmic history, massive collections of galaxies were just starting to form. Only one other cluster of comparable size has been spotted at such a distance, and it is considerably less massive than the new cluster.
What’s even more puzzling about this newly found cluster is the bizarre arc of blue light spotted right behind it. Astronomers believe this is an indication of yet another huge star-forming galaxy located further away, at an even earlier era.
Astronomers hope to understand how these objects came to exist in order to chart the actual history of galactic evolution. An X-ray telescope scheduled to launch next year (the eROSITA mission) might bring the team answers about these peculiar findings.
For more on this story, check here.
Haskell monads form a type class. And Haskell type classes are essentially interfaces shared by some types. So the Monad type class is a common API for talking to a bunch of different types. So the question is this: what's so special about this API?
One way to grasp an API is to see concrete examples. Usually the monad API is introduced through containers or IO. But I think that the prototypical monad through which all others can be understood is much simpler - it's the trivial monad.
APIs often capture design patterns, and the design pattern here is a one-way wrapper. We can wrap things but we can't unwrap them. We still want to be able to whatever we like to the wrapped data, but anything that depends on anything wrapped should itself be wrapped.
Without further ado, here's a wrapper type:
data W a = W a deriving Show
(The Show bit is just so we can play with these things interactively.)
Note how it doesn't add anything except some wrapping. And the first thing we need is to be able to wrap anything we like. We can do that easily with this function:
return :: a -> W a
return x = W x
(We could have just used W instead of return. But I'm heading towards a common API that will work with other types too, so it obviously can't be called W.)
And now we need one more thing - a way to manipulate wrapped data leaving it wrapped. There's an obvious idea. Given any function a -> b we write a function that converts it to a function W a -> W b. This is guaranteed to keep things under wraps. So here goes
fmap :: (a -> b) -> (W a -> W b)
fmap f (W x) = W (f x)
It seems that we've finished. For example we can wrap a number and then increment it:
a = W 1
b = fmap (+1) a
But here's something we can't do. Define a function f like this:
f :: Int -> W Int
f x = W (x+1)
It increments and returns a wrapped result. Now suppose we want to apply the underlying operation here twice, i.e. we'd like to increment a number twice and return a wrapped result. We can't simply apply f twice because (1) it'll doubly wrap our result and (2) it'll attempt to add 1 to a wrapped result, something that doesn't make sense. fmap doesn't do what we want either. Try it in an interactive Haskell session. It seems we need a way to apply f, unwrap the result and apply f again. But we've already said that we don't want to allow people to unwrap these things. So we need to provide a higher order function that does the unwrapping and application for us. As long as this function always gives us back something that is wrapped, our end users will never be able to unwrap anything. Here's an idea for such a function:
bind :: (a -> W b) -> (W a -> W b)
bind f (W x) = f x
Notice how it's very similar to fmap but is even simpler. So now we can try doubly, or triply, incrementing
c = bind f (f 1)
d = bind f (bind f (f 1))
Notice how bind is more general than fmap. In fact, fmap f = bind (return . f).
And that's it. Using return and bind we have achieved our goal of wrapping objects and freely manipulating wrapped objects while keeping them wrapped. What's more, we can chain functions that wrap without getting bogged down in multiple layers of wrapping. And that, really, sums up what a Haskell monad is all about.
So here are a couple of exercises:
(1) define a function g :: Int -> W Int -> W Int so that g x (W y) = W (x+y). Obviously that definition won't do - the left hand side has a W y pattern so it's actually unwrapping. Rewrite this function so that the only unwrapping that happens is carried out by bind.
(2) define a function h :: W Int -> W Int -> W Int so that h (W x) (W y) = W (x+y). Again, no unwrapping.
I'm hoping that after you've done these exercises you'll see how you can still work freely with data even though it's wrapped.
In Haskell, it is more usual to use the operator >>= instead of bind where bind f x = x >>= f.
So the last question is this: why would you ever wrap data like this? In practice people tend not to use the trivial monad very much. Nonetheless, you can see how it might be used to represent tainted data. Wrapped data is considered tainted. Our API never lets us forget when data is tainted and yet it still allows us to do what we like with it. Any time we try to do anything with tainted data the result is also tainted, exactly as we might expect. What I find interesting is that almost every monad, including IO, lists and even probability, can be thought of quite naturally as variations on taint. I hope to say more about this in the near future.
Anyway, the code above fails to compile because of some namespace clashes. Here's a complete definition of W that really works. Note also that fmap, which we showed we don't really need, allows us to make W an instance of Functor too:
> data W x = W x deriving Show
> instance Functor W where
> fmap f (W x) = W (f x)
> instance Monad W where
> return x = W x
> W x >>= f = f x
Exercise 3: Prove the three monad laws for W. This should be almost trivial.
Exercise 4: We can't completely unwrap things using the monad API. But we can unwrap one layer from things that are wrapped twice. So here's a nice puzzle: define a function join :: W (W a) -> W a using the Monad API and no explicit unwrapping.
In conclusion, most monad tutorials show what monads are, and how to use them. I hope I've given some additional insight into why the Monad interface consists precisely of the two functions it does. That insight should become clearer when I say more about taint.
PS I've been a bit busy lately with planning for kitchen remodelling, lots of work and a new cat(!). I probably won't be posting very frequently for a few months.
Solutions to Exercises
Some people have found the exercises tricky so I'm posting some solutions. I think it's a good sign that they're tricky, it means that there is some non-trivial stuff to be learnt from playing with this monad. It ought to figure more strongly in tutorials. In a sense it allows people to learn about monadicity in its purest form.
Anyway, enough of that.
Exercise 1: We want g :: Int -> W Int -> W Int so that g x (W y) = W (x+y). Start the definition like this
g x y = ...
We want to apply (+x) to y. But y is wrapped so we must use >>= to apply it. But (+1) doesn't have the right type to be applied with >>=, it's an Int -> Int not an Int -> W Int. So we must compose with return and we get
g x y = y >>= (return . (+x))
Exercise 2: We want h :: W Int -> W Int -> W Int so that h (W x) (W y) = W (x+y). But g is already fairly close. So again write
h x y = ...
The difference from g is that x is wrapped. So we want to apply \x -> g x y to our wrapped value. This function is already of type Int -> W Int. So we can just write
h x y = x >>= (\x -> g x y)
or just h x y = x >>= flip g y.
Exercise 3: I'll just do the last one: (m >>= f) >>= g = m >>= (\x -> f x >>= g)
m is of the form W x for some x. The left-hand side:
(W x >>= f) >>= g
= (f x) >>= g
And the right-hand side:
W x >>= (\x -> f x >>= g)
= (\x -> f x >>= g) x
= f x >>= g
So the LHS equals the RHS.
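For Exercise 4, one possible solution using only the Monad API and no explicit unwrapping (not worked in the original post):

join :: W (W a) -> W a
join x = x >>= id

Here >>= strips off the outer layer of wrapping, and id hands back the inner W a unchanged.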
Incidentally, you can get a long way with these problems by not thinking about what's going on! Instead, just write out the type signatures of all of the functions involved and try to stick them together like a jigsaw puzzle so that the final result has the right type. At each stage the arguments of each function must match the signature of the function so it really is like fitting the shapes of jigsaw pieces together. This isn't a foolproof strategy, but it often works.
Anyway, feel free to ask questions if you're still stuck. Some of your questions may be answered here. | <urn:uuid:4412f62b-6c45-4371-992c-068a3244945b> | 3.15625 | 1,982 | Personal Blog | Software Dev. | 81.863159 |
“Almost limitless, clean power…” Yes, it is cold fusion!
The March 27, 2009 episode of Brink, a weekly show on the Science Channel, featured an update on the 2009 results of nuclear particle detection by the SPAWAR group at the American Chemical Society meeting that year. Read about the news on ScienceDaily.com.
Yes, it’s an OLD video, but for those of us new to the scene, it’s excavating the history.
Speaking from Washington, D.C., nuclear physicist Dwight Williams, Senior Science Advisor for the Department of Energy and a contributor to the show, gives the news cautiously, but open-mindedly.
He says of the broader mainstream science community, “All the jurors are still out.”
There is undeniable evidence that conclusively establishes the existence of the Fleischmann-Pons Effect (FPE), the production of excess heat when hydrogen reacts with a small piece of metal.
New designs for commercial hot-water boilers and steam heaters now in development use a powder made of nickel and hydrogen gas to create the same effect.
“If you think that the excess heat effect is not real, you’re being oblivious to data,” said Dr. Robert Duncan, Vice Chancellor for Research at University of Missouri in a recent talk at National Instruments.
This is the breakthrough for which the world’s been waiting. | <urn:uuid:41c479ac-3cfd-4740-b2d4-8ec4525cbf3d> | 2.734375 | 298 | Personal Blog | Science & Tech. | 55.140455 |
Sonars resolve targets in range and bearing. A sonar records the arrival time and bearing of the echo from a target. When an echo is detected at time t, measured relative to the time of pulse transmission, the range to the target is r = ct/2, where c is the speed of sound in water. The detection of an echo implies that there is a target in a cell with area r Δθ ΔR, where Δθ is the angular resolving capability of the sonar and ΔR = cτ/2 is the range extent of the sonar pulse; the length in time of the sonar pulse is denoted by τ. Modern sonars use short pulses and have good angular resolving capability (small Δθ). Thus the sonar detection cell is small enough in area to contain only one target.
Consider a sonar operating in a region in which there are only two types of targets: fish (F) and an enemy submarine (S). Both the fish and the submarine are capable of producing an echo that the sonar can detect. Suppose that a sonar detection occurs at a particular range r. A question of great practical significance: what is the probability that this observed detection at range r is caused by a fish? If the echo is caused by a fish, it is a false alarm and should be ignored. However, if the echo comes from a submarine it is a very important piece of information.

In the situation that we are considering there are only two types of targets present: fish and submarines. When an echo is detected, we have two pieces of information: 1) there was a target in the sonar detection cell, otherwise there would have been no echo; and 2) the echo from the target was detectable. Since we are only considering two types of targets, the echo must come from one or the other. Let F and S respectively denote the events of fish in the cell and submarines in the cell, given that there is a target in the cell. Since the cell is small enough in area that it can only contain one target, the events F and S are disjoint and it is the case that P(F) + P(S) = 1. If there are 9999 fish in the area of sonar operation and one submarine, then P(F) = 0.9999 and P(S) = 0.0001.
In order to address our question, we proceed in the following fashion. Let P(F|r) denote the probability that there is a fish in the cell, given that an echo is observed at range r, and let P(S|r) denote the probability that there is a submarine in the cell, given that an echo is observed at range r. The probabilities P(F|r) and P(S|r) depend on two types of information: the sonar's performance capability against fish and submarines, and the likelihood of encountering fish and submarines. In most practical situations the sonar is more likely to encounter a fish than a submarine, since fish are much more numerous. On the other hand, submarines are easier to detect because they produce larger echoes. Bayes's theorem provides a convenient means of combining these different types of information. Bayes's theorem tells us that

P(F|r) = Pd(F, r) P(F) / [Pd(F, r) P(F) + Pd(S, r) P(S)],

where Pd(F, r) and Pd(S, r) denote the sonar's detection capability against fish and submarines, and the probabilities P(F) and P(S) are the a priori probabilities of encountering fish and submarines. Their values are based upon the relative concentrations of fish and submarines in the area in which the sonar is operating.
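As a concrete illustration of the computation (a toy sketch with made-up detection curves, not the Demonstration's actual sonar model):

import math

# A priori probabilities: 9999 fish and 1 submarine in the area.
P_FISH, P_SUB = 0.9999, 0.0001

def p_detect(r, r50):
    """Toy detection curve: near 1 at short range, rolling off around r50
    (yards), with a small floor standing in for the noise/false-alarm level."""
    return max(1.0 / (1.0 + math.exp((r - r50) / 200.0)), 1e-3)

# Submarines echo more strongly than fish, so they stay detectable
# out to longer ranges (the r50 values are illustrative).
R50_FISH, R50_SUB = 2000.0, 4000.0

def p_fish_given_echo(r):
    """Bayes's theorem: probability that an echo at range r is a fish."""
    num = p_detect(r, R50_FISH) * P_FISH
    den = num + p_detect(r, R50_SUB) * P_SUB
    return num / den

for r in (1000, 2500, 3500, 6000):
    print(f"range {r:5d} yd: P(fish | echo) = {p_fish_given_echo(r):.4f}")

With these stand-in curves the probability dips at intermediate ranges, where the sonar still detects submarines well but fish poorly, and returns toward the prior at long range: the same qualitative shape the Demonstration plots.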
Use the upper two sliders on the graph to control the selection of the reflectivity (TS is target strength) of the submarine and the fish. Use the lower slider to control the a priori density of fish targets. Blue and red are used to denote performance against fish and submarines, respectively. The dashed curves show sonar probability of detection as a function of range against a given type of contact, either fish or submarine. The solid curves are the probabilities P(F|r) and P(S|r).

For the default settings in the Demonstration and with ranges inside 2000 yards, if a contact occurs then the probability is nearly unity that it comes from a fish. Beyond 2000 yards, sonar performance against fish becomes poor, so P(F|r) decreases and reaches a minimum at 3000 yards. At longer ranges, where the sonar works poorly against both target types, Bayes's theorem simply tells us that contacts (if they occur) come from the most prevalent type of target (fish). The Bayes probability P(S|r) is directly related to P(F|r), since P(S|r) = 1 − P(F|r). The shape of the plot of P(F|r) clearly lends itself to the concept of false target reduction. For additional information, see:
P. Gregory, Bayesian Logical Data Analysis for the Physical Sciences, New York: Cambridge University Press, 2005.
E. T. Jaynes, Probability Theory: The Logic of Science (G. Larry Bretthorst, ed.), New York: Cambridge University Press, 2003.
R. J. Urick, Principles of Underwater Sound, New York: McGraw–Hill, 1983.
A. D. Whalen, Detection of Signals in Noise, San Diego: Academic Press, 1971.
Allows one or more procedural SQL statements to be iteratively executed.
The WHILE statement can be used to iteratively execute a sequence of one or more procedural-sql-statements.
The iteration continues as long as search-condition evaluates to true.
For information on procedural-sql-statements, see Procedural SQL Statements.
If label appears at the beginning and at the end of the WHILE statement, the same value must be specified in both places.
Specifying label is optional, however, if label appears at the end of the WHILE statement, it must also appear at the beginning.
A label is required at the beginning if the LEAVE statement is to be used to terminate the WHILE statement.
The WHILE statement may be terminated by executing the LEAVE statement using label. It will also terminate if an exception condition is raised, in accordance with the normal exception handling process.
Example:
SET I = 0;
L1: WHILE I <= 10 DO
   ...
   SET I = I + 1;
END WHILE L1;
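As a further sketch (the label L2, the condition, and the values are arbitrary, following the rules above), LEAVE can terminate a labelled WHILE early:

SET I = 0;
L2: WHILE I < 100 DO
   SET I = I + 1;
   IF I = 10 THEN
      LEAVE L2;
   END IF;
END WHILE L2;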
For more information, see the Mimer SQL Programmer's Manual, chapter 12, Iteration Using WHILE.
Special & General Relativity Questions and Answers
Does time stop when you travel at the speed of light?
Since no matter can ever travel at exactly the speed of light, you are asking a hypothetical question. We know from experiment that the half-lives of unstable particles such as muons are prolonged in OUR reference frame by an amount predicted from their speeds by special relativity. So, by simple extrapolation, we expect that for particles moving at nearly the speed of light, their times are greatly increased so that 1 minute of their time might be weeks or months or even longer in our rest frame. So, time does 'stop' at relativistic speeds, but you have to get to practically the speed of light itself to get the most extreme situation.
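For reference, the standard special-relativistic time-dilation factor (a textbook relation, not part of the original answer) is

γ = 1 / √(1 − v²/c²),

so a particle moving at 99.99% of the speed of light has γ ≈ 71: about 71 seconds pass in our frame for each second of the particle's time, and γ grows without bound as v approaches c.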
Return to the Special & General Relativity Questions and Answers page.
All answers are provided by Dr. Sten Odenwald (Raytheon STX) for the NASA Astronomy Cafe, part of the NASA Education and Public Outreach program. | <urn:uuid:8cb9701c-df08-4bb0-8310-3d54175b43a2> | 3.1875 | 205 | Q&A Forum | Science & Tech. | 45.199451 |
Energy returned on energy invested
In physics, energy economics and ecological energetics, energy returned on energy invested (EROEI or ERoEI); or energy return on investment (EROI), is the ratio of the amount of usable energy acquired from a particular energy resource to the amount of energy expended to obtain that energy resource. When the EROEI of a resource is less than or equal to one, that energy source becomes an "energy sink", and can no longer be used as a primary source of energy.
Non-manmade energy inputs
The natural or original sources of energy are not usually included in the calculation of energy invested, only the human-applied sources. For example in the case of biofuels the solar insolation driving photosynthesis is not included, and the energy used in the stellar synthesis of fissile elements is not included for nuclear fission. The energy returned includes usable energy and not wastes such as heat.
Because much of the energy required for producing oil from oil or tar sands (bitumen) comes from low value fractions separated out by the upgrading process, there are two ways to calculate EROEI, the higher value given by considering only the external energy inputs and the lower by considering all energy inputs, including self generated. See: Oil sands#Input energy
Relationship to net energy gain
EROEI and Net energy (gain) measure the same quality of an energy source or sink in numerically different ways. Net energy describes the amounts, while EROEI measures the ratio or efficiency of the process. They are related simply by
For example given a process with an EROEI of 5, expending 1 unit of energy yields a net energy gain of 4 units. The break-even point happens with an EROEI of 1 or a net energy gain of 0.
Economic influence of EROEI
|EROI (for US)||Fuel|
|3.0||Bitumen tar sands|
|35.0||Oil imports 1990|
|18.0||Oil imports 2005|
|12.0||Oil imports 2007|
|10.0||Natural gas 2005|
|10.0||Nuclear (with diffusion enrichment)|
|50.0||Nuclear (with centrifuge enrichment, with fast reactor or thorium reactor)|
|30.0||Oil and gas 1970|
|14.5||Oil and gas 2005|
|1.9||Solar flat plate|
|35.0||World oil production|
High per-capita energy use has been considered desirable as it is associated with a high standard of living based on energy-intensive machines. A society will generally exploit the highest available EROEI energy sources first, as these provide the most energy for the least effort. With non-renewable sources, progressively lower EROEI sources are then used as the higher-quality ones are exhausted.
For example, when oil was originally discovered, it took on average one barrel of oil to find, extract, and process about 100 barrels of oil. That ratio has declined steadily over the last century to about three barrels gained for one barrel used up in the U.S. (and about ten for one in Saudi Arabia). Currently (2006) according to the Danish Wind Energy Association, the EROEI of wind energy in North America and Europe is about 20:1.
Although many qualities of an energy source matter (for example oil is energy-dense and transportable, while wind is variable), when the EROEI of the main sources of energy for an economy fall energy becomes more difficult to obtain and its value rises relative to other resources and goods. Therefore the EROEI gains importance when comparing energy alternatives. Since expenditure of energy to obtain energy requires productive effort, as the EROEI falls an increasing proportion of the economy has to be devoted to obtaining the same amount of net energy.
Since the invention of agriculture, humans have increasingly used exogenous sources of energy to multiply human muscle-power. Some historians have attributed this largely to more easily exploited (i.e. higher EROEI) energy sources, which is related to the concept of energy slaves. Thomas Homer-Dixon demonstrates that a falling EROEI in the Later Roman Empire was one of the reasons for the collapse of the Western Empire in the fifth century CE. In "The Upside of Down" he suggests that EROEI analysis provides a basis for the analysis of the rise and fall of civilisations. Looking at the maximum extent of the Roman Empire, (60 million) and its technological base the agrarian base of Rome was about 1:12 per hectare for wheat and 1:27 for alfalfa (giving a 1:2.7 production for oxen). One can then use this to calculate the population of the Roman Empire required at its height, on the basis of about 2,500-3,000 calories per day per person. It comes out roughly equal to the area of food production at its height. But ecological damage (deforestation, soil fertility loss particularly in southern Spain, southern Italy, Sicily and especially north Africa) saw a collapse in the system beginning in the 2nd century, as EROEI began to fall. It bottomed in 1084 when Rome's population, which had peaked under Trajan at 1.5 million, was only 15,000. Evidence also fits the cycle of Mayan and Cambodian collapse too. Joseph Tainter suggests that diminishing returns of the EROEI is a chief cause of the collapse of complex societies. Falling EROEI due to depletion of non-renewable resources also poses a difficult challenge for industrial economies.
Criticism of EROEI
|This section does not cite any references or sources. (May 2010)|
Measuring the EROEI of a single physical process is unambiguous, but there is no agreed-upon standard on which activities should be included in measuring the EROEI of an economic process. In addition, the form of energy of the input can be completely different from the output. For example, energy in the form of coal could be used in the production of ethanol. This might have an EROEI of less than one, but could still be desirable due to the benefits of liquid fuels.
How deep should the probing in the supply chain of the tools being used to generate energy go? For example, if steel is being used to drill for oil or construct a nuclear power plant, should the energy input of the steel be taken into account, should the energy input into building the factory being used to construct the steel be taken into account and amortized? Should the energy input of the roads which are used to ferry the goods be taken into account? What about the energy used to cook the steelworker's breakfasts? These are complex questions evading simple answers. A full accounting would require considerations of opportunity costs and comparing total energy expenditures in the presence and absence of this economic activity.
However, when comparing two energy sources a standard practice for the supply chain energy input can be adopted. For example, consider the steel, but don't consider the energy invested in factories deeper than the first level in the supply chain.
Energy return on energy invested does not take into account the factor of time. Energy invested in creating a solar panel may have consumed energy from a high power source like coal, but the return happens very slowly, i.e. over many years. If energy is increasing in relative value this should favour delayed returns. Some believe this means the EROEI measure should be refined further.
Conventional economic analysis has no formal accounting rules for the consideration of waste products that are created in the production of the ultimate output. For example, differing economic and energy values placed on the waste products generated in the production of ethanol makes the calculation of this fuel's true EROEI extremely difficult.
EROEI is only one consideration and may not be the most important one in energy policy. Energy independence (reducing international competition for limited natural resources), decrease of greenhouse gas emissions (including carbon dioxide and others), and affordability could be more important, particularly when considering secondary energy sources. While a nation's primary energy source is not sustainable unless it has a use rate less than or equal to its replacement rate, the same is not true for secondary energy supplies. Some of the energy surplus from the primary energy source can be used to create the fuel for secondary energy sources, such as for transportation.
Richards and Watt propose an Energy Yield Ratio for photovoltaic systems as an alternative to EROEI (which they refer to as Energy Return Factor). The difference is that it uses the design lifetime of the system, which is known in advance, rather than the actual lifetime. This also means that it can be adapted to multi-component systems where the components have different lifetimes.
EROEI under rapid growth
A related recent concern is energy cannibalism where energy technologies can have a limited growth rate if climate neutrality is demanded. Many energy technologies are capable of replacing significant volumes of fossil fuels and concomitant green house gas emissions. Unfortunately, neither the enormous scale of the current fossil fuel energy system nor the necessary growth rate of these technologies is well understood within the limits imposed by the net energy produced for a growing industry. This technical limitation is known as energy cannibalism and refers to an effect where rapid growth of an entire energy producing or energy efficiency industry creates a need for energy that uses (or cannibalizes) the energy of existing power plants or production plants.
The solar breeder overcomes some of these problems. A solar breeder is a photovoltaic panel manufacturing plant which can be made energy-independent by using energy derived from its own roof using its own panels. Such a plant becomes not only energy self-sufficient but a major supplier of new energy, hence the name solar breeder. Research on the concept was conducted by Centre for Photovoltaic Engineering, University of New South Wales, Australia. The reported investigation establishes certain mathematical relationships for the solar breeder which clearly indicate that a vast amount of net energy is available from such a plant for the indefinite future. The solar module processing plant at Frederick, Maryland was originally planned as such a solar breeder. In 2009 the Sahara Solar Breeder Project was proposed by the Science Council of Japan as a cooperation between Japan and Algeria with the highly ambitious goal of creating hundreds of GW of capacity within 30 years. Theoretically breeders of any kind can be developed.
See also
- Embodied energy
- Energy balance
- Energy cannibalism
- Jevon's paradox 1880s observation of the efficiency effect multiplier
- Khazzoom-Brookes Postulate 1980s updating of Jevon's paradox
- Net energy gain
- Levelized cost of energy
- Murphy, D.J.; Hall, C.A.S. (2010). "Year in review EROI or energy return on (energy) invested". Annals of the New York Academy of Sciences 1185: 102–118. doi:10.1111/j.1749-6632.2009.05282.x.
- Cutler, Cleveland (2011-08-30). "Energy return on investment (EROI)". The Encyclopedia of Earth. Retrieved 2011-09-02.
- Hall, Charles A.S. "EROI: definition, history and future implications" (PowerPoint). Retrieved 2009-07-08.
- "Energy Payback Period for Wind Turbines". Danish Wind Energy Association. Retrieved 2010-08-18.
- Homer-Dixon, Thomas (2007). The Upside of Down; Catastrophe, Creativity and the Renewal of Civilisation. Island Press.
- Tainter, Joseph (1990). The Collapse of Complex Societies. Cambridge University Press.
- Richards, B.S.; Watt, M.E. (2006). Renewable and Sustainable Energy Reviews. doi:10.1016/j.rser.2004.09.015 http://www.inference.phy.cam.ac.uk/sustainable/refs/solar/Myth.pdf
|url=missing title (help). Text "Permanently dispelling a myth of photovoltaics via the adoption of a new net energy indicator" ignored (help)
- Pearce, J.M. (2008). "Limitations of Greenhouse Gas Mitigation Technologies Set by Rapid Growth and Energy Cannibalism". Klima. Retrieved 2011-04-06.
- "The Azimuth Project: Solar Breeder". Retrieved 2011-04-06.
- Lindmayer, Joseph (1978). "The solar breeder". Proceedings, Photovoltaic Solar Energy Conference, Luxembourg, September 27–30, 1977. Dordrecht: D. Reidel Publishing. pp. 825–835. Retrieved 2011-04-06.
- Lindmayer, Joseph (1977). The Solar Breeder. NASA. Unknown parameter
- "The BP Solarex Facility Tour in Frederick, MD". Sustainable Cooperative for Organic Development. Retrieved 28 February 2013.
- Koinuma, H.; Kanazawa, I.; Karaki, H.; Kitazawa, K. (Mar. 26, 2009), Sahara solar breeder plan directed toward global clean energy superhighway, G8+5 Academies' meeting in Rome, Science Council of Japan
Further reading
- Heinberg, Richard (2003). The Party's Over: Oil, War, and the Fate of Industrial Societies. Gabriola, BC: New Society Publishers. ISBN 0-86571-482-7.
- Gupta, Ajay K; Hall, Charles A.S (2011). "A Review of the Past and Current State of EROI Data". Sustainability 3 (10): 1796–1809. doi:10.3390/su3101796. Retrieved 25 March 2012.
- World-Nuclear.org, World Nuclear Association study on EROEI with assumptions listed.
- Web.archive.org, Wayback Archive of OilAnalytics.org, "EROI as a Measure of Energy Availability"
- Dematerialism.net, "Energy in a Mark II Economy".
- EOearth.org, Energy return on investment (EROI)
- EOearth.org, Net energy analysis
- H2-pv.us, Essay on H2-PV Breeder Synergies
- explanation of EROI and peak oil | <urn:uuid:911ff905-2dd8-437a-a9fb-648fd5965f54> | 3.390625 | 3,038 | Knowledge Article | Science & Tech. | 44.762058 |
Making string-formatting smarter by handling generators?
python.list at tim.thechases.com
Wed Feb 27 21:41:32 CET 2008
>> Is there an easy way to make string-formatting smart enough to
>> gracefully handle iterators/generators? E.g.
>> transform = lambda s: s.upper()
>> pair = ('hello', 'world')
>> print "%s, %s" % pair # works
>> print "%s, %s" % map(transform, pair) # fails
>> with a """
>> TypeError: not enough arguments for format string
> Note that your problem has nothing to do with map itself. String
> interpolation using % requires either many individual arguments, or a
> single *tuple* argument. A list is printed as itself.
> py> "%s, %s" % ['hello', 'world']
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
I hadn't ever encountered this, as I've always used tuples
because that's what all the example code used. I thought it had
to do with indexability/iteration, rather than tuple'ness.
Apparently, my false assumption. People apparently use tuples
because that's the requirement, not just because it reads well or
is better/faster/smarter than list notation.
> TypeError: not enough arguments for format string
> py> "%s" % ['hello', 'world']
> "['hello', 'world']"
> So the answer is always use tuple(...) as others pointed.
I'll adjust my thinking on the matter, and mentally deprecate
map() as well.
Thanks to all who responded.
More information about the Python-list | <urn:uuid:28557a23-cd90-4626-880b-1db0e8a0526a> | 2.8125 | 390 | Comment Section | Software Dev. | 63.276398 |
The National Zoo's Antarctica Expedition is sponsored by the National Science Foundation's Office of Polar Programs.
All photographs depicting Weddell seals were taken under NMFS Permit No. 763-1485-00 issued under the authority of the Marine Mammal Protection Act.
Weddell Seal Fact Sheet
Species: Leptonychotes weddelli
Adult Weddell seals have dark backs, mottled sides, and mottled and white undersides. Females are generally larger than males. The largest individuals are nearly 11 feet (3.3 meters long). Weights vary considerably from season to season and at different stages of reproduction.
Weddell seals inhabit the fast ice (sea ice attached to the shore or connecting two icebergs) all around the Antarctic and nearby islands and as far north as the Falkland Islands. The first scientific specimen of this species was collected in the South Orkney Islands .
Ecology and Behavior
The Weddell seal lives on fast ice and is not migratory. Local movements may be stimulated by changing location of prey or changing condition of the ice on which they haul out. Weddell seals haul out near natural cracks in the fast ice where the ice is thin enough to maintain suitable breathing holes. During the winter, seals chew at the ice with their slightly forward-pointing canine and incisor teeth to keep breathing holes open. As a result, Weddell seal teeth show considerable wear and the rate of wear may affect longevity. Pupping localities are largely determined by the presence of adequate breathing holes.
Weddell seals dive through breathing holes to forage below the ice. Their eyes are well developed for low-light conditions and they are able to hold their breath for an extended period of 20 minutes or more. They can dive quite rapidly at a rate that may exceed 120 meters per minute. Research conducted with acoustic tracking equipment has shown that most dives are less than 20 minutes and that there are two dive levels: 0-160 meters, occurring primarily during the night, and 340-450 meters, primarily during the day. Underwater navigation appears to be excellent but the exact mechanism by which they negotiate a dark underwater environment and are able to return to breathing-holes is not known.
During the mating season, males defend access to breathing holes and attempt to exclude other adult and sub-adult males. Mating occurs in the water. During the pupping season, individuals space themselves more widely on the ice and become somewhat aggressive toward intruders.
Mating occurs mid-November through December. There is then a short period delayed implantation followed by a 9-10 month gestation. Usually, a single pup is born but twins have been reported. Newborns weigh about 30 kg at birth, are covered with soft, gray pelage that is molted in four to six weeks to the adult pelage. At birth, young have a full compliment of adult teeth. Pup survival appears to increase with age and experience of the mother with the highest pup mortality occurring to first-time mothers.
Weddell seal mothers remain with their pups and fast for at least part of lactation and lose considerable body mass during lactation. When lactating females resume foraging, they may dive deeper and longer to catch in an effort to restore their own energy stores, which have been depleted during lactation. Pups are weaned at about six weeks and reach full adult size in about three years. Reproduction generally appears to be delayed until age four to five for females and six to seven for males.
After weaning, Weddell seal pups leave their natal area and move along the Antarctic continent shoreline gradually perfecting their foraging skills. They will sometimes use pack ice for hauling out, but prefer to remain closer to the coastline than adults do.
The exact life span is not known but may average around 12 to 15 years in the wild; the oldest female recorded over a 40-year study monitoring Weddell seals at McMurdo Station was 27 years old and the oldest male 24 years.
The diet is primarily fish and squid. The Antarctic silverfish and the emerald rock-cod are preferred species. In the summer, Weddell seals forage slightly more at night than during the day and they apparently eat their food underwater. In the summer and winter, when there are few environmental time cues, Weddell seals may use tidal movements to determine the best hunting opportunities. While Weddell seals may get all the water they need from their food or from metabolizing sea water, individuals have occasionally been seen eating snow.
When sleeping and resting, Weddell seals may remain in the same spot for hours melting a hollow in the ice underneath them with their own body heat. Weddell seals have also been observed sleeping under the ice, but it is not known how common this phenomenon is and for how long individuals can remain sleeping underwater.
Weddell seals have a variety of underwater vocalizations that are apparently made with the vocal cords and larynx. Most calls are made at a depth of ten to 35 meters in parts of the water column where light penetrates—where the vocalizer can also be seen.
Weddell seals groom parts of their bodies they can reach with nails on their fore-flippers; they roll and rub themselves on the ice to groom areas the flippers cannot reach. | <urn:uuid:d1c7973e-75ff-459a-baf4-1b09bf08da1e> | 4.09375 | 1,102 | Knowledge Article | Science & Tech. | 46.425234 |
Hydrogen Isn't Always the Renewable It Seems To Be
Ever since President Bush announced his hydrogen car initiative in his State of the Union address, hydrogen as a fuel source has received more press. But hydrogen isn't necessarily the renewable energy it's portrayed to be - whether it is or not depends on the underlying source of energy used to produce it. For instance, hydrogen can be produced from offshore technologies such as OTEC as described at this website
. And as this article
entitled "Renewables Key to Hydrogen Economy" (Brussels May 1, 2003) notes, the success of hydrogen as a renewable in Europe, depends on the successful development of renewable technologies first. Meanwhile, as for trends in the U.S., this column
by Dave Zweifel entitled "Big Oil Latches on to Hydrogen" Madison.com (May 5, 2003) notes that it's likely that most hydrogen in the U.S. will come from non-renewable sources such as coal and oil. So at least here in the United States, classifying hydrogen as a renewable isn't quite accurate. | <urn:uuid:8d69dd78-9893-45fe-a863-1f5a32b923be> | 2.71875 | 229 | Personal Blog | Science & Tech. | 46.63997 |
How Iridium data are processed
A down-sampled data stream from the Iridium system is sent to the Johns Hopkins University Applied Physics Laboratory (USA). Algorithms developed by Dr Brian Anderson subtract off the ambient magnetic field, correct for cross-talk, and filter the data. The process is described further here (http://dysprosium.jhuapl.edu/introduction.html)
Since 2010, a higher data sampling of the Iridium magnetometer data has been made available through a National Science Foundation research grant and the Johns Hopkins University Applied Physics Laboratory. The project is called AMPERE. Newcastle (C. L. Waters) provides the software to take the processed magnetic field data and estimate the Birkeland currents. | <urn:uuid:9f31ea75-01f7-4117-b439-845c65be78f8> | 2.703125 | 153 | Knowledge Article | Science & Tech. | 37.03877 |
aRecaptures reported as 26 June took place between the evening of 25 June and the morning of 26 June.
Hooper has noted that the number of recaptures increased sharply on 1 July, the same day that E.B. Ford sent a letter to Kettlewell. Ford's letter commiserated with Kettlewell for the low recapture rates but suggested that the data would be worthwhile anyway. The letter is unremarkable, and two facts militate against a finding of fraud. First, Kettlewell finished collecting data in the wee hours of the morning and therefore could not have received the letter before collecting his data on 1 July. He markedly increased the number of moths he released on 30 June, the day before the letter was mailed, not 1 July. Additionally, as Hooper admits, he continued to release more moths after 30 June. Not surprisingly, he also captured more moths: more moths released, more captured.
Indeed, Figure 1 plots recapture rate as a function of the number of moths released on any day. The line is a line of best fit constrained to pass through the origin on the assumption that no moths are recaptured if none are released. Figure 1 shows that the recapture rate is very nearly a linear function of the number released. The square r2 of the correlation coefficient is 0.80 and suggests that most of the variation of the number of recaptures is accounted for by variation of the number of releases. The fit improves only slightly if the line is not constrained.
Why did Kettlewell release more moths beginning on 30 June? He released both moths he had reared and moths he had captured. Because the moths were just hatching, he had limited control of the number he could release on any given day. There is no reason to suspect that the increased numbers of releases reflect anything other than the number of moths that were available. At any rate, Ford's letter could not have influenced his decision to release more moths because it arrived after Kettlewell's first big release on 30 June.
Still, his recapture rate, as well as the absolute number of moths recaptured, increased from 12% over the first 3 days of his experiment to 26% over the last 3 days. More pointedly, if we plot his recapture rate as a function of time, as in Figure 2, we find what looks to the eye as a sudden increase.
Figure 2 omits those days, 27 June and 30 June, that were preceded by 0 releases. It is hard to make much out of a mere 8 data points, but the recapture rate certainly appears to the casual observer to increase sharply after 1 day of inactivity. Biological field data, however, display significant random variation, and the eye often infers patterns in random data, so let us perform a quantitative analysis to see whether Kettlewell's data are what we would expect given normal experimental variations. Specifically, let us construct a mathematical model and see how well it describes Kettlewell's data.
Kettlewell recaptured most of his moths after they had been in the wild for only 1 day, but he recaptured some after 2 days. Let us therefore define a 1-day recapture rate R1 and a 2-day recapture rate R2 as the ratios of the numbers of moths recaptured after 1 and 2 days in the wild. Kettlewell reported no 3-day recaptures.
We may estimate the 2-day recapture rate by looking at the 4 moths captured on days 2 and 5. No moths were released on the preceding days, but 2 days before, a total of 63 +32 =95 moths had been released, so R2 =4/95, or approximately 4 %. The overall recapture rate is given by the row labeled "Totals" and is R =149/630, or approximately 24%. The 1-day recapture rate is the difference R1=R-R2 between the two values, or about 19% (the numbers do not add exactly because of round-off error). R2 is very nearly equal to the square of R1, as we would expect if the model is appropriate.
Our mathematical model is straightforward: The number of moths captured on any given night is equal to the number of moths released the day before times the 1-day recapture rate, plus a similar term, the number of moths released 2 days before times the 2-day recapture rate. The results of a calculation based on this model are shown as the solid curve in Figure 3. Note that I have made no artificial assumptions, such as adjusting the recapture rates to get a good fit to the data, in constructing Figure 3.
The points in Figure 3 are Kettlewell's data, and the solid curve is the model. How well does the model fit the data? To answer that question, we have to estimate the normal range of variability in the data. In statistical terms, we calculate the standard uncertainty of the data points. The standard uncertainty is a number that tells us, in this case, how much variation we might expect if we repeated the experiment many times.
By way of introduction, suppose that you toss N marbles at a hole in a table. Count the number of marbles that fall through the hole, and repeat the experiment many times. Suppose that the average number of marbles that fall through the hole is M.You will not count M marbles every time you perform the experiment; to the contrary, the number will vary about M and very possibly will never exactly equal M. Thus, we talk of the probability p that any one marble passes through the hole and set it equal to the ratio M/N. The mean number of marbles that pass through the hole is equal to Np.
How much will any one toss differ from M? Assume that the number of marbles that pass through the hole is described by a binomial distribution. Then the standard deviation of M is . On approximately 19 tosses in 20, you will record a number that is between M- and M+, so is most commonly used as a measure of uncertainty.
The uncertainty can be surprisingly large. For example, if p is 0.24 (the average recapture rate in Kettlewell's experiment) and N is 102 (the number of moths Kettlewell released on 30 June), then M is 24 and is about 8. You can expect anywhere between 16 and 32 marbles to fall through the hole on any given toss. You should not be especially surprised by any number unless it is much less than 12 or much more than 36. Thus, the day-to-day variation in an experiment such as Kettlewell's can easily be 100% or more. This fact alone should militate against a charge of fraud.
The result is shown in Figure 2 as a series of error bars. The error bars represent ±2u, an interval called the 95% confidence interval. If we take a single measurement, then we may estimate that the true value (the average of a great many measurements) falls within the error bars, with 95% probability. Inasmuch as the model (the solid curve) passes through virtually every error bar, it may be said to be a nearly perfect fit to the data, however poor it might appear in the absence of error bars.
The points on days 7, 8, and 9 lie noticeably above the curve. If the data were completely unbiased, then we would expect about a 50-50 chance that any one of those points lay above the curve. The odds that 3 consecutive points lie above the curve are 1 in 8 -- exactly the same as the odds against tossing 3 heads in a row and by no means improbable enough to base a charge of cheating. Even if 5 points lay above the curve, the odds against would be 1 in 32, again, not very impressive in its improbability. Additionally, 2 consecutive data points lie noticeably below the curve.
In summary, the last 5 of Kettlewell's data points are higher than the first 5. This meager fact, combined with the anecdotal evidence of Ford's letter,is all that led Hooper (2003) to infer that Kettlewell cheated. In reality, the timing of Ford's letter belies Hooper's inference, and Kettlewell's data are completely consistent with normal experimental variation.
The differences between the data and the curve are not statistically significant; the observed variation very probably is the result of chance. It is, however, possible that the deviations from the curve are "real" -- that is, due to some systematic effect, or systematic error, not due solely to random error. It is very hard, unfortunately, to track down a source of systematic error when that error is itself less than the standard uncertainty of the data set; the systematic error is said to be lost in the noise.
Hooper tells us that the weather was stable and could not have accounted for the increase in the number of recaptures (though her description suggests somewhat variable winds). We have, nevertheless, a strong candidate that can account for the systematic deviations of our simple model from the curve: the phase of the moon. Shapiro (2002), in his review of Hooper's book, suggests that moonlight interferes with moth trapping, a possibility that Hooper and her informant, biologist Ted Sargent, should have investigated. The moon was full on 27 June (that is, the night of 26-27 June). By 2 July, the moon was 5 days past full but visible for only part of the night. Thus, the total exposure to the moon -- the product of illuminance (brightness) and time -- was approximately one-quarter what it was during the full moon, and it dropped steadily over the next few days.
Clarke and his colleagues (1990) have investigated the effect of the phase of the moon on capture rates of peppered moths in a single environment over 30 years and concluded that the moon does not affect capture rates. Unfortunately, theirs was a retrospective study, and they did not record weather data, that is, did not control for cloudy or rainy days. They averaged the data over 5-day periods surrounding the full moon and did not use the actual exposure to the moonlight (as defined above). All of these factors will reduce the correlation between capture rates and exposure to moonlight. Even so, they calculated a small but not statistically significant correlation that suggests a slight increase of capture rate around the full moon. In addition, when they checked the new moon against the full moon, they calculated a small, barely significant increase, which they discounted. Possibly the effect is due to the presence of streetlights, to which they refer obliquely, and which may attract moths away from the stronger mercury vapor light only when the moon is dark. At any rate, they conclude that moonlight does not affect capture rates. Kettlewell worked on clear days only; I do not think that the conclusion of Clarke and colleagues is necessarily pertinent.
Thus, I examined Kettlewell's data in hope of quantifying the effect of the moon on his recapture rates (1955:332, Table 5). I obtained data that gave the moon's magnitude (an astronomical term that is related to its brightness) and the duration during which the moon was visible each night during Kettlewell's experiment. I plotted Kettlewell's daily recapture rate as a function of the exposure to the moon (the product of brightness and time, as defined above). I made no effort to control for the elevation of the moon. The result is shown in Figure 4, which plots Kettlewell's daily recapture rate as a function of lunar exposure normalized to the value 1 on the night of the full moon. The equation in Figure 4 is the equation of the line of best fit to the data. The daily recapture rate rises by a factor of 3 as the brightness of the moon decreases. (We could perform a similar calculation using Kettlewell's total captures [1955:333,Table 6], but such a calculation is complicated by the fact that the moths emerge from their cocoons haphazardly, whereas the recapture rate is based on a known distribution of released moths. Still, the calculation based on total captures yields much the same result as that outlined below.)
Using the line in Figure 4, I adjusted the calculated recapture rates according to the equation,
Kettlewell's data are simply accounted for by the unsurprising fact that you can recapture more moths when you release more -- that and normal experimental variation. When the effect of moonlight is included in the calculation, the calculated curve fits even closer to Kettlewell's data.We have no need of Hooper's perverse, ad-hoc hypothesis.
Hooper's claims are moonshine; they are based on a lack of understanding of Kettlewell's experiments in particular and experimental science in general. Hooper evidently did not consider the most likely cause of the changes she saw, exposure to moonlight, let alone realize that the change in recapture numbers began before Kettlewell could have read the letter that supposedly triggered this change. Hooper and Sargent should have performed a careful analysis before Hooper presumptuously insinuated fraud.
Kettlewell's conclusion -- that predation by birds was a major factor in promoting industrial melanism -- was based on at least 4 lines of inquiry, as detailed above. It did not rely on the release-recapture experiments alone. It is also supported by at least 30 studies of different moth species that also developed melanic forms (Grant,1999). In other words, an enormous body of evidence supports Kettlewell's conclusion. Even if Kettlewell's release-recapture experiments were ruled out, we would still be forced to conclude that industrial melanism is the result of natural selection due to bird predation, possibly among other causes.
Thus, there is no foundation for assuming that Kettlewel's data were manipulated. The variations in his data are no more than the uncertainties associated with sampling and other factors, possibly including exposure to the moon. It is an irresponsible leap to accuse a distinguished naturalist of fraud on the basis of a single letter and a wholly imperfect, offhand analysis of his data. The peppered moth properly remains a valid paradigm -- no, an icon -- of evolution.
Acknowledgements. Ian Musgrave provided the lunar data. I am further indebted to Pete Dunkelberg and Bruce Grant for helping me understand the uncertainties of field work in biology. Musgrave, Laurence Cook, and Nicholas Matzke reviewed the paper and made many helpful suggestions regarding both clarity and content.
Copyright © 2004 by Matt Young. All rights reserved. This paper may be reproduced on the Worldwide Web on condition that it be reproduced in its entirety and that the author be notified. Print or hard-copy reproduction requires the express written consent of the author.
Matt Young is a former physicist with the US National Institute of Standards and Technology and now teaches physics and engineering at the Colorado School of Mines. He is the author of No Sense of Obligation: Science and Religion in an Impersonal Universe (1st Books Library, 2001) and coeditor of Why Intelligent Design Fails: A Scientific Critique of the New Creationism (Rutgers University Press, 2004).
Clarke, Cyril A., Frieda M.M. Clarke, H.C. Dawkins, and Susannah Kahtan (1990). "The Role of Moonlight in the Size of Catches of Biston betularia in West Kirby, Wirral, 1959–1988," Bulletin of the Amateur Entomologists' Society 368:19–29.
Cook, L.M. (2000). "Changing Views on Melanic Moths," Biological Journal of the Linnean Society 69:431–441.
Cook, Laurence M. (2003). "The Rise and Fall of the Carbonaria Form of the Peppered Moth," The Quarterly Review of Biology 78(4):1–19.
Coyne, Jerry (2002). "Evolution under Pressure," Nature 418:20–21.
Grant, Bruce (1999). "Fine Tuning the Peppered Moth," Evolution 53:980–984.
Grant, Bruce (2002). "Sour Grapes of Wrath," Science 297:940–941.
Forrest, Barbara, and Paul R. Gross (2004). Creationism's Trojan Horse: The Wedge of Intelligent Design. New York: Oxford University Press.
Hooper, Judith (2002). Of Moths and Men: An Evolutionary Tale. New York: W.W.Norton.
ISO (1993). Guide to the Expression of Uncertainty in Measurement. Geneva: International Organization for Standardization.
Kettlewell, H.B.D. (1955). "Selection Experiments on Industrial Melanism in the Lepidoptera," Heredity 9:323–342.
Kettlewell, H.B.D.(1956). "Further Selection Experiments on Industrial Melanism in the Lepidoptera," Heredity 10 (Part 3):287–301.
Majerus, M.E.N.(1998). Melanism: Evolution in Action. Oxford: Oxford University Press. Chapter 6.
Majerus, M.E.N. (2002). Moths. London: HarperCollins. Chapter 9.
Mallet, Jim (2002) ."The Peppered Moth: A Black and White Story after All," Genetical Society Newsletter, in press. Available at http://abacus.gene.ucl.uk/jim/pap/malletgensoc03.pdf.
Musgrave, Ian (2004). "Paint It Black: The Peppered Moth Story," in press.
Shapiro, Arthur M. (2002). "Paint It Black," Evolution 56:1885–1886. | <urn:uuid:6c61e5c8-eb00-4d18-99db-70763f10cd23> | 2.953125 | 3,724 | Academic Writing | Science & Tech. | 54.134953 |
Location of experimental equipment for Lake-ICE and SNOWBAND. The experiment headquarters is in Ann Arbor, and the aircraft are also based there.
The two experiments, the Lake-Induced Convection Experiment (Lake-ICE) and Snow Band Dynamics (SNOWBAND), will study a group of weather phenomena that occur around the Great Lakes. These range from the well-known lake-effect storms, which heap snow on communities just south and east of the Great Lakes, to the snow bands northwest of low-pressure systems, which bury the Midwest in snow, to the recently discovered mesoscale aggregate vortices, which influence weather as far afield as the East Coast. The scientists are studying the storms using NCAR and the University of Wyoming's aircraft; NCAR, Pennsylvania State University, and the University of Wisconsin's remote sensing equipment; NCAR and the National Severe Storms Laboratory's sounding systems; and the National Weather Service's WSR-88D (Nexrad) radars (see sidebar).
The lakes and their surrounding land, as it happens, have another unique advantage as an experiment site: they are a microcosm of the earth. "We're trying to understand how the atmosphere over water bodies responds to intense heat and moisture input from the surface," says Kristovich (Illinois State Water Survey). This is important in understanding the atmospheric boundary layer, the kilometer of air closest to the earth. "The Great Lakes allow us to study how marine boundary layers evolve in these circumstances without having to sample over the ocean," Kristovich explains. A better understanding of these processes could improve weather and climate forecasting.
Lake-ICE will study lake-effect snowfall in a concentrated effort around Lake Michigan. Scientists plan to use the data from the experiment to address a number of outstanding scientific questions. For example, how does the rapid exchange of moisture and heat affect the growth of the boundary layer? What controls the organization, intensity, and location of heavy lake-effect snowfall? How do lake-effect snow bands interact with small-scale turbulence to transport heat, moisture, and momentum vertically? How do clouds and precipitation affect the boundary layer? "Lake-effect events are convenient laboratories for studying the marine boundary layer," Kristovich emphasizes.
|(NCAR file photo by Charles Semmer.)|
SNOWBAND will focus on snowstorms that occur on the western side of Lake Michigan. When slow-moving cyclones track northeastward along a path extending about 300 km south of the lake, surface winds north of the cyclone cross Lake Michigan from east to west, producing lake-effect conditions on its western shore--the opposite side from the more familiar lake effect described above. These lake-enhanced snows occur simultaneously with the larger-scale snow bands, contributing to very heavy snows on the west side of the lake.
Sousounis picked a single, well-observed cold spell, called a cold-air outbreak, of two days' length in November 1982. Working with Michael Fritsch of Pennsylvania State University, he modeled the cold weather in two ways, with and without the Great Lakes, using the Penn State/NCAR MM4 model. The results were so surprising, he says, that "I was somewhat sceptical that the model was behaving properly." The simulations suggested that places as far away as Philadelphia were being warmed by as much as 2 degrees Celsius. "I grew up in Philadelphia," says Sousounis, "and if you had told me it was warmed by the Great Lakes, I would have laughed."
This year, Sousounis identified a cyclonic circulation that develops within the warm pool of air that is generated by the lakes. He named this circulation a mesoscale aggregate vortex (MAV), because it develops from heating and moistening from all of the Great Lakes. Sousounis's simulations suggest that an MAV has a warm core (something like a tropical system), winds on the order of 15-20 knots, and a pressure disturbance at the surface of 6-7 millibars. The simulations indicate that it is as wide as the upper Great Lakes (Superior and Huron) and about 4 km deep. The modeled MAVs tend to develop most strongly just northeast of Lake Huron.
Although observational data give some confirmation of MAVs, upper-air soundings from around the Great Lakes are few and far between, especially on the sparsely populated Canadian side. Sousounis will gather data with a network of upper-air stations across southern Ontario during Lake-ICE and will do additional modeling to understand how MAVs develop and where they go. "It's not impossible that they can affect weather over the North Atlantic," he says.
"During the last two major El Niños," says Rauber, "a split flow pattern in the jet stream set up over North America during December and January. In those years, storms tended to form east of the Rockies in two places, north of the U.S./Canadian border (the Alberta Clippers) and around Texas and New Mexico. The clippers moved across the Great Lakes into the Northeast U.S., bringing short bursts of cold air and lake-effect snows. Storms along the southern branch of the jet stream moved northeastward into Illinois or Wisconsin, producing snow bands. Although long-lasting cold-air outbreaks were not common, the conditions required for the success of the projects were present." However, a split flow pattern this year is by no means certain. Rauber points out, "The atmosphere usually ignores our best predictions and does the unpredictable."
|A CLASS station, with a weather-balloon launch in progress.|
Forecasters are predicting temperatures as low as -45 degrees C. Murphy isn't sure how the CLASS computers will handle that kind of cold. "The trailers [housing the computers and other equipment] are not well insulated. We're just going to keep them as warm as we can." Murphy has the responsibility for any on-site repairs needed. Although he's hoping to avoid Murphy's Law, that anything that can go wrong will, "I think the cold is probably going to add to the typical problems. And driving around in that type of weather, it tends to get a little treacherous."
During the operating periods, students from regional universities and seven or eight local people will be hired to do the balloon launches, but "we try to have some NCAR personnel at the more remote locations," Murphy explains. That's why he'll be in Beattyville, Quebec, during the January leg of the experiment. You can easily find it on a map, but finding it on the road is another story. "The first time I went up there, I drove all the way to the next town. I didn't see how I could have missed it; there was nothing there to miss." When he called his contact, he was instructed to turn off the highway at a certain mile marker. There was still nothing to see, but "I went over a hill, and there was this hotel on a lake." That was Beattyville.
The owner of Aux berges des 11 rapides, Murphy discovered, had bought the land for one dollar and built the inn as a stop along the trans-Canadian snowmobile trail. Though the site was isolated, the owner was thrilled to be hosting a CLASS station, even arranging local television coverage for Murphy when he set up the station in October. So although January temperatures are cold in Quebec, at least the hsopitality is warm. | <urn:uuid:8b08777e-5578-404e-9818-47ef89363ec6> | 3.984375 | 1,548 | Knowledge Article | Science & Tech. | 46.500998 |
The process of detecting and monitoring the physical characteristics of an area by measuring its reflected and emitted radiation at a distance from the targeted area. Remote sensing is used in this thesaurus to refer to methods that are solely or primarily deployed through air or space. Included in this concept are studies of biological populations using remote imaging techniques. Related methods which are used most frequently on the ground (e.g. photography), whether underwater, from airplanes or satellites, are not included in the term remote sensing.
The U.S. Geological Survey uses remote sensing to improve fire-management databases in the Everglades, gain insights into post-fire land-cover dynamics, and develop spatial and temporal fire-scar data for habitat and hydrologic modeling. | <urn:uuid:972e96ac-fe81-43c2-9b9f-0b75c6d2515e> | 3.4375 | 150 | Knowledge Article | Science & Tech. | 24.332814 |
On the surface, it would seem that using an e-reader would be a sure-fire way to up your green factor. After all, they’re pretty much the Superman of the tree world, saving innocent victims from the paper mill. But as an electronic device, e-readers aren’t without their footprint. There are the carbon emissions that result from powering the device, and the consumer electronics industry is infamous for using toxic materials.
So given these considerations, are e-readers actually greener than paper books?
The answer is a resounding yes. Between the paper consumption and the carbon emissions associated with production, printing, shipping and disposal of paper books, there is no doubting the environmental impact of traditional publishing. Don’t believe me? Just take a look at this powerful infographic, which explores the full impact of the two billion printed books produced every year. A few key facts:
- Printed books require 3 times more raw materials to produce books than e-readers and 7 times more water
- The 125 million trees cut down every year for the newspaper and book industries result in the emission of 44 million tons of CO2 each year versus just 7.3 million from cars.
In fact, the research this infographic cites estimates that e-readers will prevent the emission of 10 million tons of CO2 between 2009 and the end of 2012, which is the equivalent of the yearly emissions from roughly 800,000 cars. What’s more, when we leave more trees standing to absorb more CO2, our e-reading habit can help offset emissions from other high footprint technologies.
When considering the eco-credentials of e-readers, it’s important to take a look at the full supply chain as well as the raw materials. Fortunately for us book lovers, e-readers allow us to feed our book addiction while also doing something good for the environment. Though (sorry, I have to say it!) there’s nothing quite like the weight and smell of a good old-fashioned book… | <urn:uuid:7d09c975-d853-4468-bb0d-5bfa7797c95f> | 3.0625 | 422 | Personal Blog | Science & Tech. | 52.643309 |
EU-funded study calls for better protection for freshwater ecosystems
Current methods used to plan conservation strategies are not providing adequate protection for freshwater ecosystems and the ecosystem services they provide, according to African and European researchers in a new study published in the journal Conservation Letters.
The scientists, from Belgium, Germany, the Netherlands, Senegal, South Africa, Switzerland and the United Kingdom, were supported by the EU-funded BIOFRESH ('Biodiversity of freshwater ecosystems: Status, trends, pressures, and conservation priorities') project, which is supported by a EUR 6,465,406 grant under the 'Environment' Theme of the EU's Seventh Framework Programme (FP7).
Freshwaters are one of the most threatened ecosystems globally and although they occupy less than 1% of the Earth's surface they are home to over a third of the world's known species and around a third of all vertebrates. Human population growth and economic development have continued to threaten the health of many global freshwater ecosystems, throwing into jeopardy their ability to support biodiversity and provide ecosystem services such as irrigation, sanitation and food supply to humans.
The international researchers are calling for more primary information on freshwater biodiversity status and distribution to support more effective conservation planning and investment. The study's findings are based on a comprehensive assessment of freshwater biodiversity across Africa, the most in-depth study of freshwater biodiversity across an entire continent ever carried out.
They cross-checked data on range maps for 4,203 freshwater species and 3,521 land species across Africa with data on protected area coverage, large dam presence, rural poverty and the IUCN (International Union for Conservation of Nature) Red List, a catalogue that ranks plants and animals at risk of global extinction as either Critically Endangered, Endangered or Vulnerable. With all this data they were then able to analyse the status, threats and protection for freshwater biodiversity.
The researchers found that the problem lies in where research is focused, with most support directed to land species and so-called 'charismatic species'. Charismatic species refers to the act of raising support for the protection of one particular well known and 'charismatically' appealing species, such as the panda for example. Environmental groups often try to raise support for a whole ecosystem by using one species as the 'poster species'.
However, in reality, as the new study shows, with so much emphasis on charismatic and land species, efforts to highlight the distribution and threat towards many freshwater species go relatively unnoticed. The team found that conservation priorities and investment targets based on our knowledge of birds, mammals and amphibians alone may not be appropriate for freshwater species such as fish, molluscs and crabs. Often, protection plans for freshwater ecosystems are drafted using 'surrogate' species. However, the team warns this leaves them under-protected from a variety of human and
climate-based threats. Freshwater ecosystems are dynamic and transboundary in nature meaning that their conservation needs are often not met by protected areas planned around terrestrial ecosystems.
Their study flags up a research bias towards terrestrial and charismatic species that leaves our knowledge of global freshwater biodiversity patterns and trends fragmented and incomplete, and the team therefore wants to see targeted and tailored freshwater biodiversity research and funding.
BIOFRESH aims to build a global information platform for scientists and ecosystem managers with access to all available databases describing the distribution, status and trends of global freshwater biodiversity. The project, which started in 2009 and runs until 2014, brings together 19 research institutions from Austria, France, Germany, Hungary, Malaysia, the Philippines, Slovenia, Spain, Sweden, Switzerland and the United Kingdom.
For more information, please visit:
Category: Project results
Data Source Provider: IUCN (International Union for Conservation of Nature)
Document Reference: Darwall, W.R.T., et al. (2011) 'Implications of bias in conservation research and investment for freshwater species', Conservation Letters, 00, 1-9. DOI: 10.1111/j.1755-263X.2011.00202.x.
Subject Index: Climate change & Carbon cycle research; Coordination, Cooperation; Environmental Protection; Regional Development; Scientific Research; Resources of the Sea, Fisheries; Social Aspects; Water resources and management | <urn:uuid:e32ee6df-467f-4bda-8cd1-10ed311e088f> | 3.265625 | 864 | Knowledge Article | Science & Tech. | 20.353172 |
What part of Florida experiences the most hurricane strikes?
- spreadsheet or notebook within which to record data collected
This exercise uses an online searchable database to discover the locations of historic hurricane tracks. The database ends in 2006. At this National Oceanic and Atmospheric Admistration website (http://maps.csc.noaa.gov/hurricanes/), click "query storm tracks". Queries are based on location (state/county/city), intensity of storm, years (it is recommended to begin in 1950, when names were first routinely used), and months.
Part I: Search for the tracks of all tropical cyclones passing within a radius from the given location (for example, within 50 km of Union County). Students can be assigned different counties to query so that they can compare different parts of the state. Here are some of the data that they can research for their county/counties using this website (http://maps.csc.noaa.gov/hurricanes/):
- How many all-intensity occurrences have there been since 1950?
- How many hurricanes versus tropical depressions and tropical storms have occurred?
- How many major hurricanes (Category 3 or higher) have occurred?
- What is the frequency of occurrence for all storms, tropical storms, tropical depressions, hurricanes, and major hurricanes? (frequency = total number of events divided by number of years in period)
- When was the most recent occurrence?
- In what years did multiple occurrences happen?
- How many tracks actually passed through the county?
- How many tracks passed within 50 km to the right of the county? To the left?
- How many storms made landfall in the county?
- How many storms moved into the county from another county?
Part II: Potential economic impacts of tropical cyclone landfalls in Florida
Use this website (http://www.eflorida.com/floridasregionsSubpage.aspx?id=284) to obtain population and other socio-economic data about Florida's counties.
Use the data gained from the previous exercise pertaining to the number of tropical cyclones passing within 50 km of a county to answer the following questions:
- Which counties are the largest in area? Do the largest counties receive the most tracks?
- What counties contain the highest populations? Do counties with the largest populations also receive the most tracks?
- Do more people live in coastal counties versus inland counties? Do more tracks pass through coastal or inland counties?
- Which counties are projected to have the highest populations in 2030? Do these counties historically have frequent tracks?
Tracking Hurricanes: Where do hurricanes come from, and where do they go?
- storm position information available from National Hurricane Center (NHC) storm reports at http://www.nhc.noaa.gov/pastall.shtml or from current forecast advisories at http://www.nhc.noaa.gov/
- tracking chart downloaded from the NHC website
This exercise will improve map reading skills, teach students how to plot the locations of tropical cyclones using latitude and longitude information, and help them learn the names of countries in the northern Atlantic basin and where they are located.
Latitude and longitude data are listed in decimal degrees. These locations represent the center of circulation, or “eye” if one is present. This is where the lowest pressure occurs. If the storm is over the ocean, we can use satellite data to determine where the eye is located. If it is closer to land, we can fly aircraft through the storm and use instruments aboard the aircraft to measure where the lowest pressure is encountered. Aircraft can also drop instrument packages that measure pressure, temperature, humidity, and winds as they fall towards the Earth’s surface.
The National Hurricane Center, located in Miami, Florida, has the responsibility to monitor, forecast, and post watches and warnings for tropical cyclones occurring in both the North Atlantic and Northeast Pacific Ocean basins. Their forecasts and advisories are updated every six hours. If you live in the Eastern time zone, you can log onto the NHC website (http://www.nhc.noaa.gov/) and read the latest information at 5:00 AM, 11:00 AM, 5:00 PM, and 11:00 PM. If a storm nears landfall, updates are issued every three hours.
| Date/Time (UTC) | Latitude (°N) | Longitude (°W) | Pressure (mb) | Wind Speed (kt) | Stage |
| 05/2300 | 30.4 | 81.4 | 1002 | 45 | landfall near Atlantic Beach, Florida |
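To plot positions like the advisory row above on a simple chart, the latitude/longitude pairs can be passed to a plotting library. A minimal Python sketch follows; only the first point comes from the sample row, the rest are invented to suggest a track, and matplotlib is assumed to be available.

import matplotlib.pyplot as plt
# Best-track points as (latitude N, longitude W).
track = [(30.4, 81.4), (31.2, 80.9), (32.1, 80.1)]
lats = [p[0] for p in track]
lons = [-p[1] for p in track]  # degrees West -> signed (negative) longitude
plt.plot(lons, lats, marker="o")
plt.xlabel("Longitude (degrees)")
plt.ylabel("Latitude (degrees)")
plt.title("Sample tropical cyclone track")
plt.show()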
To obtain circulation center positions from previous storms, click on the link to “Past Seasons”, select the year, then select the storm. The data available include the six-hourly coordinates plus the pressure, wind speed, and intensity category. Listed at the bottom are data from when it was most intense, and any landfalls made. | <urn:uuid:9a7af853-717a-4bc3-aeba-88163bdeb266> | 3.109375 | 1,042 | Tutorial | Science & Tech. | 50.394916 |
Mangroves excel at storing climate-warming carbon
WASHINGTON (Reuters) - Tropical mangrove trees are better at storing climate-warming carbon than most other forests, so cutting them down unleashes far more greenhouse gas than deforestation elsewhere, scientists reported on Sunday.
Mangroves are so efficient at keeping carbon dioxide out of the atmosphere that when they are destroyed, they release as much as 10 percent of all emissions worldwide attributable to deforestation -- even though mangroves account for just 0.7 percent of the tropical forest area, researchers said.
They store two to four times the carbon that tropical rainforests do, said Daniel Donato, of the U.S. Agriculture Department's Forest Service and lead author of a study published in the journal Nature Geoscience.
"Mangroves store a lot of carbon, much more so than most forests on Earth, on a per-hectare basis," Donato said by phone. "Since they store so much carbon, there's probably a lot being released from all the mangrove deforestation that's going on."
Mangroves live where many people want to live, along ocean coastlines in the tropics, and their areal extent -- the amount of land they grow on -- has declined by 30 percent to 50 percent over the last 50 years, the study found.
Coastal development, aquaculture and over-harvesting have all contributed to mangrove deforestation. Rising sea levels expected this century are also a threat, the study said.
Besides storing carbon, mangrove forests act as fisheries, keep sediment in place, produce fiber and protect inhabited areas against storms and tsunamis, the researchers said.
Coastal mangrove forests and the ecological services they provide could be gone in as little as 100 years, according to the researchers.
Part of the reason for mangroves' efficiency in keeping carbon locked away lies in their location in tidal zones, where their roots are often covered with sea water.
They need complex root systems to keep them breathing even when the tides come in, Donato said. This same complexity traps sediment that comes in from rivers and it builds up.
SLOW DECAY MEANS CARBON STORAGE
In most forests, this kind of sediment and litter would decay rather quickly, but because much of it gets submerged in a coastal mangrove forest, there is not enough oxygen to break it down, so the breakdown of materials is much slower.
A slower decay means more carbon dioxide gets stored.
Scientists have previously studied the rate at which mangroves could sequester carbon, but this latest research looked at how much of a pool of stored carbon was locked away in the trees, in their root systems and in the soil decomposing slowly around them.
To figure this out, Donato and his colleagues went to 25 mangrove forests stretching from the Ganges Delta in Bangladesh to Micronesia in the western Pacific, and from the Malay Peninsula in southeast Asia to northern Australia.
Mangroves grow in 118 countries, but the region the scientists chose has the greatest mangrove area and diversity.
They figured out how much total carbon these trees kept out of the air by measuring the trees' size, the tree litter on the forest floor, the amount of carbon in the soil around them and the depth of the soil.
"Mangroves store about two to four times what (tropical rainforests) store, and it's mostly in that thick organic muck layer in the soil," Donato said. "That's really what sets mangroves apart in terms of carbon storage."
(Editing by Eric Walsh)
| <urn:uuid:12da6b9a-fd16-4a0a-8e4d-e1a1c7c0813e> | 3.21875 | 811 | Truncated | Science & Tech. | 39.459354 |
Have you ever wondered if there are other planets in the universe (besides the nine in our solar system)? Everyday astronomers search for planets that orbit other stars. These planets are called extrasolar because they do not orbit our Sun.
How do astronomers find extrasolar planets?
Astronomers can’t just look up, point their finger, and say "Oh look! There’s another new planet!"
Astronomers use different techniques to locate extrasolar planets. It is hard to find extrasolar planets because the stars they orbit are light years away and are VERY bright! With even the most powerful telescopes you can’t see these planets orbiting near their glowing stars. Sometimes astronomers detect the dimming of a star’s light as its planet passes in front of it.
One of the techniques used is to watch stars to see if they slightly wobble. This works because a star wobbles from the slight gravity of an orbiting planet. Even though stars are much bigger than planets, the planets have just enough gravity to tug a little bit on the star that they orbit. This causes the star to wobble. Some stars have wobbles inside of wobbles, which means that there is probably more than one planet orbiting the star.
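For a rough sense of how small this wobble is, conservation of momentum gives v_star ≈ (m_planet / m_star) × v_planet. Here is a short Python sketch using approximate Sun-and-Jupiter numbers (illustrative values only, not measurements of any particular star):

# Estimate a star's wobble speed from momentum balance:
# m_star * v_star = m_planet * v_planet
m_star = 2.0e30     # kg, roughly the Sun's mass
m_planet = 1.9e27   # kg, roughly Jupiter's mass
v_planet = 13000.0  # m/s, roughly Jupiter's orbital speed
v_star = (m_planet / m_star) * v_planet
print("Star wobble speed: about", round(v_star, 1), "m/s")  # ~12 m/s

A wobble of only about a dozen meters per second is why astronomers need very precise instruments to detect it.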
Another way astronomers find extrasolar planets is to use radio telescopes to detect the radio waves that are created while a solar system is forming. The radio waves are from the hot dust and gas in the forming solar system. The radio telescopes that are on Earth can detect those radio waves.
How big are the extrasolar planets?
All of the extrasolar planets found so far are several times larger than Jupiter, which is the largest planet in our solar system. This is a bit amazing because that type of planet is usually made of gases and takes a long time to form.
What is the history of the search for extrasolar planets?
In the 1900s astronomers found out that our solar system is not the center of the galaxy, and that our galaxy is not the center of the universe. Earth is just one tiny planet orbiting one star out of billions. This made it seem possible that there were other planets that are just like Earth.
The first extrasolar planet was found in 1995. Its name is 51 Pegasi b, and it orbits a star that is 45 light years from Earth.
The American astronomers Geoffrey Marcy and Paul Butler discovered 13 more stars with planets orbiting them between 1996 and 1998.
Geoffrey Marcy and his team of astronomers have found 18 other extrasolar planets since 1995, including one announced on November 7, 1999. This planet is much bigger than Jupiter, and it orbits the star HD209458 in the constellation Pegasus.
On November 29, 1999 six more extrasolar planets were found. This brought the number of extrasolar planets that were discovered by astronomers to 28.
Upsilon Andromedae and its 3 planets
In June look up in the sky and you might see Upsilon Andromedae. Now imagine that far away, a little yellow dot has 3 planets orbiting it. Two of its planets are at least twice the size of Jupiter, and the third is at least four times as large as Jupiter. They are all gas planets with very elliptical orbits. This was the first multi-planet solar system discovered besides our own.
Could life exist near Upsilon Andromedae?
The orbit of one of Upsilon Andromedae’s planets is very close to the area where life could develop. Upsilon Andromedae is a bit brighter and bigger than our Sun but is otherwise similar to it.
All of Upsilon Andromedae’s planets are made of gas, so life would have to be on their moons (if they had any). It would be on the outer two planets. It couldn’t be on the planet closest to Upsilon Andromedae, because that planet is only 5.5 million miles from its star; anything on it would fry (Earth is 93 million miles from the Sun).
What about water?
Upsilon Andromedae’s outermost planet is on the edge of the area where life could develop. If it had large enough moons with substantial atmospheres, those moons could hold liquid water, and that means they could support carbon-based life forms like us.
Life in our solar system
Water on Mars?
Water could also exist on Mars. Scientists have found more evidence of water on the fourth planet in our solar system, Mars. The Mars Global Surveyor took pictures of newly formed channels in Mars’s surface. Scientists believe some sort of liquid formed these channels.
One of Jupiter’s moons, Europa, may have liquid water under its surface. If this is true, it could support life (such as microscopic organisms). The information that is collected by the Europa probe, along with information from the Galileo spacecraft, might give us more clues to life on Europa.
Scientists plan to send a spacecraft called the Europa Probe to Europa. It will use radio waves to measure the thickness of the ice on Europa’s surface, which scientists now think is ¾ of a mile thick. The tool that sends the radio waves through the ice is called a radar sounder. This radar sounder will be able to detect if there is any liquid water under the surface of ice.
Unless otherwise noted, all images courtesy of NASA. Permission for use at http://www.nasa.gov/gallery/photo/guideline.html.
| <urn:uuid:7342085d-9044-47fc-9b2a-0234902e47a7> | 4.15625 | 1,190 | Knowledge Article | Science & Tech. | 56.994616 |
Click on icon in upper right corner of slideshow to enlarge images.
On the show this week, we'll talk with James Watson, of "Watson and Crick" fame, two of the scientists* credited with discovering the helical structure of DNA molecules.
The scientists published their discovery in the journal Nature in 1953. And while the work earned the men the Nobel Prize, the credit for that now iconic double-helix image (pictured at left below) belongs to another Crick -- Francis's wife Odile.
Odile Crick was an artist who painted mostly nude women, according to Scott Christianson, author of the new book 100 Diagrams That Changed the World. It was Mrs. Crick who took her husband's crude DNA drawing (pictured at right, above) and transformed it into the image that has come to serve as a stand-in for the concept of molecular biology.
"That something so complex can be boiled down to a single graphic is what this book is about," Christianson says. "Here was an effort to understand this phenomenon that became so important. This structure hadn't been seen before. It took the drawing to convey it to people, " Christianson says.
The diagrams in the book span centuries, and include images from astronomy, physics and technology. There's the first drawing of a car by Karl Benz, the first electrical circuit (by Volta) and the first drawing of a lunar eclipse by Abu Rayhan al-Biruni. (See the slide show above.) There's also Tim Berners-Lee's early schematic for the World Wide Web and Steve Jobs's iPod sketches.
While it might not achieve "changed the world" status, it is interesting to see the origins of an IKEA assembly diagram in the first exploded-view drawings of Mariano Taccola, circa-1450. (Although I would argue that IKEA diagrams have probably altered the course of many a Sunday afternoon, and possibly a few marriages.)
Nearly everything we use or rely on today has it origins in one of the diagrams in the book. Or as Christianson puts it in the introduction "It all begins with a diagram. Everything from family trees to seating arrangements at a wedding to bank heists starts with a roughly sketched plan."
*You can read more about the discovery of DNA's shape, and the contributions of scientists Maurice Wilkins and Rosalind Franklin here. | <urn:uuid:cd1fbcc0-7777-43b5-962b-f4a4264a54bd> | 3.171875 | 497 | Nonfiction Writing | Science & Tech. | 52.652214 |
With C++ (and other languages), for each variable or string you want to output, you must have a concatenation operator between them.
With C++ it's "<<" for output streams; with PHP it's "."; with C# it's "+".
Your code is correct, just the line which outputs the text has one small error.
if (x > y)
    cout << x << " is the greater variable";
else
    cout << y << " is the greater variable";
The << operators between each variable and its string are what fix it; note also the added space inside the quotes and the else, so only one message prints. | <urn:uuid:dcfe29ab-30ec-4d35-8e43-9e25b6d43bb8> | 2.6875 | 120 | Q&A Forum | Software Dev. | 82.662544 |
In the Kingdom of the Blue Whale, the researchers came across a dead blue whale while at sea. The tell-tale signs of death were all-around. The floating bloating body of the dead whale. The circling and squawking scavengers from the sky - in this case sea gulls. Finally, the feeding frenzy of blue sharks in the ocean. Death is never a pleasant sight or smell in any natural environment; however it is a part of the life cycle and the food cycle.
Dead animals provide a valuable bulk source of nutrition to the remaining animals in the food web. Meat is a very valuable source of protein, of which the sea gulls and sharks were partaking. And had the whale sunk to the ocean floor, which normally happens, a frenzy of other animals would have had the opportunity to feast as well. Though the death and location of a whale are completely random, there are in fact thousands of organisms, large, small, and microscopic, that greatly depend on them.
However, the dead whale from the special washed up on shore. In fact, the special showed another blue whale that had washed up on shore after death. Though sea gulls and other land animals are willing and capable of feasting on the whale, it won't be handled nearly as efficiently as it would have been had the carcass remained out at sea. When a dead whale washes to shore, people usually have a hard time dealing with the sight and especially the smell of the decomposing animal. In urban areas this is a big problem. As a result, teams of people come to assist. Though it was sad to see the dead baby whale (which had been born too soon from the shock of its mom's death), scientists now know more about fetal and baby blue whales.
This is a great opportunity for researchers to collect samples or learn more about whale anatomy. For example the researcher interested in whale hearing behavior was able to collect an intact whale ear bone structure. Her research in whale ear anatomy may prove beneficial in helping us understand how whales make and receive sounds - from each other and the huge shipping vessels that cause whale death. | <urn:uuid:b56fdc02-c449-42b2-a4a8-35c217ed79df> | 3.390625 | 428 | Personal Blog | Science & Tech. | 56.582462 |
Lazy loading, also known as dynamic function loading, is a mode that allows a developer to specify what components of a program should not be loaded into storage by default when a program is started. Ordinarily, the system loader automatically loads the initial program and all of its dependent components at the same time. In lazy loading, dependents are only loaded as they are specifically requested. Lazy loading can be used to improve the performance of a program if most of the dependent components are never actually used.
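The idea can be sketched in code. Here is a minimal, illustrative Python version (the LazyModule class is invented for this example; the article itself describes loader-level behavior rather than any particular library):

import importlib

class LazyModule:
    # Defer importing a module until one of its attributes is used.
    def __init__(self, name):
        self._name = name    # module to load on first access
        self._module = None  # nothing loaded at startup
    def __getattr__(self, attr):
        if self._module is None:
            # The dependent component is loaded only when requested.
            self._module = importlib.import_module(self._name)
        return getattr(self._module, attr)

# 'json' is loaded the first time json.dumps is called, not at startup.
json = LazyModule("json")
print(json.dumps({"lazy": True}))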
A developer can enable lazy loading on a component-by-component basis in both threaded and non-threaded applications. The disadvantage of lazy loading is that if a component calls most of its dependents, every function call to a lazily loaded component requires extra instructions and time. Consequently, if a program can be expected to use most of its dependent components, then lazy loading will probably not improve performance. | <urn:uuid:18a1cec4-be48-4046-b64b-9c9168ad25bc> | 2.75 | 243 | Knowledge Article | Software Dev. | 36.329032 |
Lampreys (sometimes also called lamprey eels) are a family of jawless fish, whose adults are characterized by a toothed, funnel-like sucking mouth. Translated from a mixture of Latin and Greek, lamprey means stone lickers (lambere: to lick, and petra: stone). While lampreys are well known for those species which bore into the flesh of other fish to suck their blood, most species of lamprey are not parasitic and never feed on other fish. In zoology, lampreys are sometimes not considered to be true fish because of their distinctive morphology and physiology. The lampreys are the basal group of Vertebrata (hagfishes are actually not vertebrates, but craniates).
Lampreys live mostly in coastal and fresh waters, although some species (e.g. Geotria australis, Petromyzon marinus, Entosphenus tridentatus) travel significant distances in the open ocean, as evidenced by their lack of reproductive isolation between populations. They are found in most temperate regions except those in Africa. Their larvae (ammocoetes) have a low tolerance for high water temperatures, which may explain why they are not distributed in the tropics.
Adults physically resemble eels, in that they have no scales, and can range anywhere from 13 to 100 centimetres (5 to 40 inches) long. Lacking paired fins, adult lampreys have large eyes, one nostril on the top of the head, and seven gill pores on each side of the head. The unique morphological characteristics of lampreys, such as their cartilaginous skeleton, suggest they are the sister taxon (see cladistics) of all living jawed vertebrates (gnathostomes), and are usually considered the most basal group of the Vertebrata. They feed on prey as adults by attaching their mouthparts to the target animal's body, then using their teeth to cut through surface tissues until they reach blood and body fluid. They will generally not attack humans unless starved. Hagfish, which superficially resemble lampreys, are the sister taxon of the true vertebrates (lampreys and gnathostomes). | <urn:uuid:3beeb0c1-31ac-458d-abe2-4059c43b1c12> | 4 | 460 | Knowledge Article | Science & Tech. | 32.470423 |
So how bad was Hurricane Irene? Some commentators seem to think Irene didn’t match up to the media, yet preliminary assessments suggest Irene will be one of the top 10 costliest hurricanes ever in the United States. New Yorkers are indeed fortunate that the worst case scenario did not play out in their fair city, but that doesn’t mean there were no worst case scenarios elsewhere.
The worst fears about wind intensity did not play out, but a different devastating outcome did occur: historic inland flooding across a huge swath of the interior Northeast. From New Jersey to Vermont, as much as 12 inches of rain fell in a matter of hours, swelling creeks and streams to well beyond flood stage. Paterson, New Jersey, is still under several feet of water five days after the storm passed, and many residents have not been able to return home. Thirteen towns in Vermont were cut off from the outside world, and relief workers were unable to reach one town for days. More than 250 Vermont roadways are damaged and 30 bridges were destroyed.
“Don’t wait, don’t delay, we all hope for the best and prepare for the worst.” President Obama’s statement on Hurricane Irene urges the public to take precautions before one of the most significant northeast hurricanes in recent history. Mandatory evacuations have been ordered for much of the Atlantic seaboard, including coastal areas of New York City. All lanes of one major highway in New Jersey are headed in one direction only – west. The safest course of action is always to get out of the way of an approaching storm – to minimize the risk of harm when you can.
Texas climatologists have recently stated that the ongoing dry spell is the worst one-year drought since Texas rainfall data started being recorded in 1895. The majority of the state has earned the highest rating of “exceptional” drought and the remaining areas are not far behind with “extreme” or “severe” ratings by the U.S. Drought Monitor. So far, Texas has received only 6.5 inches of the 16 inches that normally accumulate by this time of year.
Cattle deaths have been mounting in the central U.S. as the recent heat wave has pushed heat indices above 120 degrees in a number of states. Faced with dry pastures, rapidly depleting hay supplies and drought stressed surface water sources, ranchers in Texas are engaging in a significant livestock sell-off, referred to in one press account as culling into “the heart of the herd.” The size of the U.S. herd is now at a record low as farmers liquidate, enticed by high beef prices and expensive feed. The situation is dire enough that the government has stepped in with low interest loans to ranchers and direct payments for farmers that lost animals due to the extreme weather. Under the Livestock Indemnity Program, cattle lost to extreme weather are reimbursed by the government at 75 percent of their value, a significant expenditure when cattle losses are counted in the thousands. Texans are already looking for ways to adapt to the drought and improve their climate resilience. Henderson County is hosting a training session on August 22 entitled “Managing the Effects of Drought for Beef Producers.”
Over the weekend, the National Weather Service issued an excessive heat warning across a huge swath of the country, putting 132 million people under a heat alert. This warning is only issued when a heat index of at least 105°F is expected for more than three hours per day on two consecutive days or when the heat index is expected to rise above 115°F for any length of time. Recently in Iowa, the heat index reached 131°F, a level normally found only along the Red Sea in the Middle East. Scientists warn that these types of events could become much more common in the future, thanks to climate change.
Press Release: Pew Center on Global Climate Change Chief Scientist Wins Prestigious Scientific Organization Award
July 19, 2011
Contact: Rebecca Matulka, 703-516-4146
Pew Center on Global Climate Change Chief Scientist Wins
Prestigious Scientific Organization Award
WASHINGTON, D.C. – Pew Center on Global Climate Change Senior Scientist, Dr. Jay Gulledge, is this year’s recipient of the Charles S. Falkenberg Award for his work communicating climate change science to decision-makers and the public. The award is presented jointly by the American Geophysical Union (AGU) and the Earth Science Information Partnership (ESIP).
Since joining the Pew Center in 2005, Dr. Gulledge, who directs the Center’s science and impacts program, has worked to build public awareness of climate change science. In this role, he has communicated both an understanding of climate science and the need for urgent action to a diverse audience of non-scientists including policy-makers, the business community, and the media. Dr. Gulledge’s recent work uses a risk management framework to help explain that uncertainty over climate science is not a reason for inaction, rather it is a reason to act now to minimize both the risk that comes with climate change and the cost of mitigating it.
“He has the unique ability to translate scientific uncertainty into useful information for decision-makers and the public,” said Eileen Claussen, President of the Pew Center on Global Climate Change. “Jay often says, ‘Uncertainty is information.’ For the public, that notion is nothing short of revolutionary.”
In December Dr. Gulledge will be honored for his achievements at the 2011 AGU Fall Meeting in San Francisco. Established in 2002, the Falkenberg Award honors a scientist under age 45 who has contributed to the quality of life, economic opportunities, and stewardship of the planet through the use of Earth science information, and to the public awareness of the importance of understanding our planet.
Dr. Gulledge manages the Pew Center’s efforts to assess and communicate the latest scholarly information about the science and environmental impacts of climate change. In Pew Center reports, on the Climate Compass blog, and in numerous media interviews, Dr. Gulledge connects the dots between climate change and extreme weather, explains scientific developments in accessible terms, and delivers straight answers that increase public understanding of climate change.
Dr. Gulledge has also forged new ground in his work on the relationship between climate change and national security. As a non-resident Senior Fellow at the Center for a New American Security, he has co-authored influential reports, including The Age of Consequences: The Foreign Policy and National Security Implications of Global Climate Change.
Dr. Gulledge is a Certified Senior Ecologist with two decades of experience teaching and conducting research in the biological and environmental sciences. He earned a PhD from the University of Alaska Fairbanks and was a Life Sciences Research Foundation Postdoctoral Fellow at Harvard University. He has held faculty posts at Tulane University and the University of Louisville.
“The ability to effectively communicate Earth science to a wide range of audiences is rare, and Jay ranks among the very few who possess that skill,” said Claussen. “His dedication to transparency and accuracy and his unflagging defense of the scientific process in the face of political shenanigans have earned him the respect of his peers.”
For more information about global climate change and the activities of the Pew Center, visit www.c2es.org.
The Pew Center on Global Climate Change was established in May 1998 as a non-profit, non-partisan, and independent organization dedicated to providing credible information, straight answers, and innovative solutions in the effort to address global climate change. The Pew Center is led by Eileen Claussen, the former U.S. Assistant Secretary of State for Oceans and International Environmental and Scientific Affairs.
Scientific American published a three-part series authored by award-winning science journalist John Carey and commissioned by the Pew Center on Global Climate Change that reports on the link between extreme weather and climate change. Editorial control was held by the author and Scientific American.
The series details the impacts of extreme weather events, the science behind extreme weather and global warming, and the risks and how to respond to the increase in extreme weather. Through enterprising reporting, this series provides an in-depth and accessible account of extreme weather affecting communities across America, why it’s happening, and what can be done about it.
More violent and frequent storms, once merely a prediction of climate models, are now a matter of observation.
In North Dakota the waters kept rising. Swollen by more than a month of record rains in Saskatchewan, the Souris River topped its all-time record high, set back in 1881. The floodwaters poured into Minot, North Dakota's fourth-largest city, and spread across thousands of acres of farms and forests. More than 12,000 people were forced to evacuate. Many lost their homes to the floodwaters. Read more.
How rising temperatures change weather and produce fiercer, more frequent storms.
Extreme floods, prolonged droughts, searing heat waves, massive rainstorms and the like don't just seem like they've become the new normal in the last few years—they have become more common, according to data collected by reinsurance company Munich Re. But has this increase resulted from human-caused climate change or just from natural climatic variations? After all, recorded floods and droughts go back to the earliest days of mankind, before coal, oil and natural gas made the modern industrial world possible. Read more.
Adapting to extreme weather calls for a combination of restoring wetland and building drains and sewers that can handle the water. But leaders and the public are slow to catch on.
Extreme weather events have become both more common and more intense. And increasingly, scientists have been able to pin at least part of the blame on humankind's alteration of the climate. What's more, the growing success of this nascent science of climate attribution (finding the telltale fingerprints of climate change in extreme events) means that researchers have more confidence in their climate models—which predict that the future will be even more extreme. Read more.
[Image: Glaciers on the summit of Mount Kilimanjaro]
I recently returned from climbing Mount Kilimanjaro in Tanzania for a great cause, and I was reminded why I left engineering to work on climate change. Mount Kilimanjaro, or Kili, is the tallest peak in Africa, and its summit is covered with beautiful glaciers (see the picture to the right). But those glaciers are rapidly disappearing, and scientists estimate Kili’s summit will be ice free by 2022. This trend is a prime example of forced adaptation to climate change and provides a serious warning of things to come unless we work together to reduce our global greenhouse gas emissions. The action we need has to come from government at all levels, businesses, and individuals as we explain in our Climate Change 101 series.
The Pew Center's July 2011 newsletter explores how climate change and extreme weather are connected, highlighting our new extreme weather map, a series by Scientific American on extreme weather, and updated science Q&As.
Undoubtedly, it’s a different climate for talking about climate change this year. Extreme weather events have replaced legislative proposals as the big hook for discussing the issue. What hasn’t changed much is that we are still talking about it, and much of the talk still centers on the costs.
When climate legislation was before Congress last year, much of the discussion focused on the costs of reducing greenhouse gas emissions. This year we are seeing a new set of headlines. Story after story describes communities across our country being hit by extreme weather events – the floods in the Mississippi, Missouri and Souris rivers, the drought in Texas, and the wildfires in Florida and Arizona. We see vivid photos of temporary levees being built around nuclear power plants and wildfires threatening stored plutonium in New Mexico. The increasing number of extreme weather events is a wake-up call of the costs we will incur if we fail to address climate change. | <urn:uuid:d8e3153f-af3a-4e9b-8afa-587b92c2b66f> | 2.703125 | 2,489 | Content Listing | Science & Tech. | 41.193651 |
In a Compton scattering experiment, an x-ray photon scatters through an angle of 17.8° from a free electron that is initially at rest. The electron recoils with a speed of 1,240
(a) Calculate the wavelength of the incident photon.
(b) Calculate the angle through which the electron scatters.
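For part (a), the standard Compton relation (given here as a hint; it is not stated in the problem) connects the wavelength shift to the photon scattering angle:

\Delta\lambda = \lambda' - \lambda = \frac{h}{m_e c}\,(1 - \cos\theta),
\qquad \frac{h}{m_e c} \approx 2.43 \times 10^{-3}\ \text{nm}

Combining this with conservation of energy and momentum, together with the given recoil speed, fixes the incident wavelength.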
An electron has a kinetic energy of 6.07 eV. Find its
An electron is located on a pinpoint having a diameter of 4.29 µm. What is the minimum uncertainty in the speed of the electron? | <urn:uuid:63e3ad58-6bf5-4415-8eeb-04161bdb6575> | 3.34375 | 121 | Q&A Forum | Science & Tech. | 71.974913 |
The living coelacanths, Latimeria chalumnae and Latimeria menadoensis, are possibly the sole remaining representatives of a once widespread family of Sarcopterygian (fleshy-finned) coelacanth fishes (more than 120 species are known from fossils), all but one of which disappeared at the end of the Cretaceous, 65 million years ago. The classification of coelacanths is a murky business with more than one variation in the class category, but we'll give it a shot. Kingdom: Animalia; Phylum: Chordata; Class: Pisces (fishes); Sub class: Gnathostomata (jawed fishes); Sub class: Teleostei (bony fishes; though cartilaginous, coelacanths are usually classed with the teleosts); Sub class: Sarcopterygii (lobed-finned fishes); Order: Crossopterygii; Family: Actinistia (coelacanths); Genus: Latimeria; Species: chalumnae and menadoensis.
The coelacanth appears to be a cousin of Eusthenopteron, the fish once credited with growing legs and coming ashore 360 million years ago. Today, scientists prefer to cite the tongue-twisting fossil candidates Ichthyostega, Panderichthys, Acanthostega, and the newly discovered Tiktaalik roseae (2004) as the ancestor(s) of all tetrapods: amphibians, reptiles, and mammals, including ourselves.

But this view is controversial. Debate still rages as to whether the coelacanths, presumed to be close relatives of the Rhipidistia fishes from which tetrapod amphibians supposedly arose, are our closest tetrapod ancestors, or whether lung fishes, another very ancient line, are more closely related to tetrapods than the Rhipidistia and thus claim the title of oldest closest living relative. (There are three living genera of lung fishes.) Good genetic and morphological evidence points in both directions. Another line of thinking, based on physiological and anatomical analysis, identifies coelacanths with sharks and other cartilaginous fishes, but this view seems to have fallen from favor.
Fossils of ancient coelacanths have been found on every continent except Antarctica. They were first identified from an English fossil by naturalist Louis Agassiz in 1836. (Ironically, Agassiz became a firm opponent of Charles Darwin's theory of evolution!) 250 million years ago there were as many as 30 species living at the same time, about a third of them in fresh water. With a couple of exceptions, ancient coelacanths were small, seldom exceeding 55 cm. In a recent issue of the Journal of Vertebrate Paleontology, Andrew Wendruff and Mark Wilson described an aggressive, fast-swimming species they call Rebellatrix. It had a muscular, forked tail which allowed it to chase prey in the open seas. Rebellatrix lived from about 250 mya until it lost out to the better-adapted sharks at an undetermined time in the deep past.
Today's coelacanths can reach almost six feet (2 meters) in length and weigh up to 150 or more lbs (the giant Mozambique female shown on this site was 180 centimeters long and 95 kg), but they are usually somewhat smaller, particularly the males, which average under 165 cm.
Coelacanths are opportunistic feeders, scarfing up prey probably on or near the bottom. Stomach contents have included lantern fishes, stout beard fishes, cardinal fishes, cuttle fishes, deep water snappers, squids, deepsea witch eels, snipe eels, swell sharks, and other fishes normally found in their deep reef and volcanic slope habitats - and now even garbage (!). (There are as yet no reports on the specific prey of the South African canyon-dwelling coelacanths.)
Coloration is dark blue with distinctive white flecks that can even be used by researchers to designate individuals. (Indonesian coelacanths may be more brown than blue.) The white flecks afford camouflage against a backdrop of dark lava walls encrusted with white oyster shells.
Scientists believed individual coelacanths may live as long as 60 years, but there is still confusion as to how many scale growth rings are laid down each year: one or two, possibly resulting from multiple nutrient cycles in the shifting currents. Researcher Hans Fricke, however, writing in Marine Biology, takes a different tack. He says it is virtually impossible to detect age or growth changes in coelacanths observed by submersible for 20-plus years at the Comoros. On top of that, the rate of replacement in observed colonies indicated only two or three deaths per year. Comparing coelacanths to a well-known species of grouper, Dr. Fricke estimated individuals may live to the age of 103!
Coelacanths are ovoviviparous, giving birth to as many as 26 live pups, which develop from eggs in the oviduct, feeding off a large yolk sac until birth.
Nothing is known about mating behavior or even juvenile habitat.
GENERAL DESCRIPTION AND SIGNIFICANCE
The coelacanths date back 410 million years, to the beginning of the Devonian epoch. One of the incredible aspects of the living coelacanth, Latimeria, is that it offers a genetic and anatomical snapshot of life in those times. The backbone of this fish is composed of a fluid-filled cartilaginous tube, which provides a firm yet flexible support for muscles. Hollow fin spines, identified in fossils, are what got the fish its name: "coelacanth" literally means 'hollow spine' in Greek. The sucking maws of jawless predecessors have transformed, through a modification of one of the gill arches, into hinged, rigid structures with teeth on the bottom ridge and upper palate: true jaws. The tiny brain is encased in a hardened skull, which hinges in the middle to increase the gape of the mouth while feeding (a feature also found in frogs!). The eyes are well developed, with reflecting cells called tapeta to enhance night vision. A chambered heart pumps blood in prototype to our own. Three indentations on either side of the snout lead to a peculiar cavity, a jelly-filled rostral organ, which very likely functions as an electro-receptor to help in the location of prey. Along the sides, a pressure-sensitive lateral line is well developed to sense the proximity of other fishes and surrounding structures, no doubt useful in the submarine caves where coelacanths pass their days.

Two back, or dorsal, fins and one protruding beneath the nape of the tail are complemented by paired lobed pectoral and pelvic fins. These contain in their trunks bones mimicking those of Eusthenopteron, which later developed into arms and legs. While coelacanths have not been observed to "walk" on the bottom, their pectoral and pelvic fins can be seen as "pre-adaptations" to land locomotion. Used under water, their action maintains stability and balance; in their cousin Eusthenopteron, the same action became four-legged land walking. Coelacanth scales are thick, and lined with serrated rows of hardened, toothpick-pointed denticles. Perhaps most distinctive of all is the trilobate tail, with its extra trunk and fin protruding from the middle. It was this feature that made fossil coelacanths so easily recognizable and helped clinch the case for the identification of the first living specimen. While the living coelacanths retain many ancient features, they have also, contrary to their public image, done some evolving along the way. Live bearing, for example, would seem to be a modern feature. | <urn:uuid:54b5815c-536d-463c-88bb-fac7e646672e> | 3.53125 | 1,724 | Knowledge Article | Science & Tech. | 32.310719 |
EPA Global Warming Site
Extensive website discussing all aspects of global warming. Discover what global warming is, what the greenhouse gases are and how much we emit, what the potential future impacts are, and what is being done to correct the problem. Site features public, educator, student, and kid resources. Explore how global warming and sea level rise will affect your state, as well as learn what you can do to help.
Intended for grade levels:
Type of resource:
No specific technical requirements, just a browser required
Cost / Copyright:
Copyright and other use information is unknown. Please consult the resource directly for the latest information.
DLESE Catalog ID: BRIDGE-1593
Resource contact / Creator / Publisher: | <urn:uuid:d3a00669-5b09-425b-b251-3633945bc6d8> | 3.0625 | 154 | Content Listing | Science & Tech. | 31.707895 |
Coasts and seas
Human activities are causing unprecedented environmental changes for coastal and marine ecosystems. Pressures from fishing, pollution from land- and sea-based sources, urbanisation, loss and degradation of valuable habitat, and invasions of non-native species are growing worldwide. All these impacts are likely to be exacerbated by the changing climate.
- Key facts and messages
- Observed global mean sea level rise has accelerated over the past 15 years. From 2002 to 2009 the contributions of the Greenland and West Antarctic ice sheets to sea level rise increased. In 2007 the IPCC projected a sea level rise of 0.18 to 0.59 m above the 1990 level by 2100 excluding the...
- Unsustainable fishing occurs in all European Seas and is threatening the viability of European fish stocks. 21 to 60% of the commercial fish stocks in the North-East Atlantic, the Baltic Sea and the Mediterranean are considered to be outside safe biological limits.
- Sea surface temperatures and sea level are rising and likely to rise further. The resulting shifts in the geographical and seasonal distribution of marine and coastal species will require adaptations in the management of fisheries and natural habitats to ensure environmental sustainability...
- Sustainable use of the seas and the conservation of marine ecosystems through an ecosystem-based approach are being pursued through the Integrated Maritime Policy and its environmental pillar, the 2008 Marine Strategy Framework Directive, under which ‘good environmental status’ in European...
- Nutrient enrichment is a major problem in the coastal and marine environment, where it accelerates the growth of phytoplankton and can lead to oxygen depletion. Concentrations of some heavy metals and persistent organic contaminants in marine biota exceed food stuff limits in all Europe’s seas.
- Designation of coastal and marine sites as part of Natura 2000, although improving, has been slow and difficult. The conservation status of some coastal and most marine habitats still needs to be assessed, while 22 % of marine mammals are threatened with extinction. The available data suggest...
- Degradation of marine and coastal ecosystems is observed in the Black, Mediterranean, Baltic, North East Atlantic Seas and in the Arctic. This trend is caused by fishing, agriculture, the industrial use of chemicals, tourist development, shipping, energy exploitation and other maritime activities...
- Growth of the maritime, agriculture and tourism sectors is expected to continue. An important future objective for the Marine Strategy Framework Directive will be to ensure that this growth is sustainable to achieve and then maintain ‘Good Environmental Status’ of the marine environment...
- By 2100, ocean acidity could be higher than during the past 20 million years.
- The third lowest minimum of Arctic summer sea ice occurred in September 2010.
- In 2007, the IPCC projected a sea level rise of 0.18 to 0.59 m above the 1990 level by 2100.
- Recent projections show a maximum increase of sea level of about 1.0 m by 2100, while higher values up to 2.0 m cannot be excluded.
- 30 % of Europe’s fish stocks (for which information exists) are fished outside safe biological levels.
- The consumption of fish in Europe has been increasing over the last 15 years while fish catches from European waters have decreased.
- Where marine species and habitat types have been assessed, the majority are found to be in unfavourable or unknown condition; only 10 % of habitats and 2 % of species are found in good condition.
- The sea surface temperature changes in the European regional seas have been up to six times greater than in the global oceans in the past 25 years.
- The current reduction of 0.1 in pH that has occurred over the industrial era translates to a 30 % increase in ocean acidity. This change has occurred at a rate that is about a hundred times faster than any change in acidity experienced during the past 55 million years. A further decline...
Ninety-four per cent of bathing sites in the European Union meet minimum standards for water quality, according to the European Environment Agency's annual report on bathing water quality in Europe. Water quality is excellent at 78 % of sites and almost 2 % more sites meet the minimum requirements compared to last year's report.
Water pollution and excessive water use are still harming ecosystems, which are indispensable to Europe’s food, energy, and water supplies. To maintain water ecosystems, farming, planning, energy and transport sectors need to actively engage in managing water within sustainable limits.
Climate change is affecting all regions in Europe, causing a wide range of impacts on society and the environment. Further impacts are expected in the future, potentially causing high damage costs, according to the latest assessment published by the European Environment Agency today.
More than 21 % of the land has some kind of protected status in the 39 countries which work with the European Environment Agency (EEA). However, only 4 % of the sea controlled by countries of the European Union is included within the Natura 2000 network of protected areas, according to a new report from the EEA.
The quality of bathing water across Europe declined slightly between 2009 and 2010, but the overall quality was still high. More than nine out of 10 bathing water sites now meet the minimum requirements.
Europe’s coastal zones are under increasing pressure from erosion, pollution, climate change, urbanisation and tourism. Such pressures threaten entire ecosystems — vital not only for wildlife but also for the economy and human well-being. The European Environment Agency (EEA) takes a closer look at the state of coastal ecosystems and policy responses to the pressures affecting them.
Clean bathing waters are vital for key economic sectors such as tourism and for plant and animal life. The annual bathing water report presented by the European Commission and the European Environment Agency shows that 96 % of coastal bathing areas and 90 % of bathing sites in rivers and lakes complied with minimum standards in 2009. It also describes where to obtain detailed and up-to-date information on bathing sites.
Clean fresh water is essential to life. Unfortunately, almost all human activities affect water quality. On World Water Day, 22 March, the European Environment Agency (EEA) is enriching the information on the web-based Water Information System for Europe (WISE) with two new sets of data on urban waste water and pollutant releases. | <urn:uuid:a1645ca2-ebc9-429d-9091-bcf9587e1b39> | 3.4375 | 1,458 | Content Listing | Science & Tech. | 39.419445 |
A major advance in molecular robotics and structural DNA nanotechnology occurred last May with the publication in Nature of two papers, each of which describes a new DNA nanorobot that walks across a landscape made from DNA origami (see Nanodot post “DNA-based ‘robotic’ assembly begins“). In one paper the nanorobot moved 50 steps autonomously. In the other paper a more complex but less autonomous nanorobot picked up nanoparticle cargo as it moved. Physorg.com presents a Caltech news feature written by Dave Zobel. Voyage of the DNA Treader announces that one of the coauthors of the first paper will present at January’s TEDxCaltech conference:
Make way for the incredible shrinking robot!
Richard Feynman was right: there is plenty of room at the bottom, and the beeping, lumbering trashcans of 1950s science fiction are gradually giving way to micro-droids the size of a speck of dust . . . or even a molecule.
But this new breed of invisibly tiny robots raises a new question: how can even rudimentary intelligence be squeezed into something whose largest moving part consists of a handful of atoms? One solution, says Caltech graduate student in computation and neural systems Nadine Dabby, is to build the smarts into the environment instead.
At January’s TEDxCaltech conference, Dabby will present a one-molecule robot capable of following a trail of chemical breadcrumbs. A paper she co-authored in Nature last May describes a “molecular spider” that can be coaxed to “walk” down a predetermined path. …
And what does a nano-bot in action look like? Using fluorescent markers and atomic-force microscopy, the team successfully produced a short and rather grainy “movie” of a spider actually making its sticky-footed way up the garden path.
With a pace measured in nanometers per minute, the tiny tripper isn’t likely to break any land speed records. Nevertheless, Dabby muses, given a few enhancements to its ability to interpret and alter its molecular environment, the robot could function as a biological computer, executing arbitrarily complex algorithms.
That first small step down a tiny trail of DNA just might represent one giant leap for bot-kind.
The list of speakers/performers for TEDxCaltech—Feynman’s Vision: The Next 50 Years includes, besides the category “Nanoscience and Future Biology”, the categories “Conceptualization and Visualization in Science” and “Frontiers of Physics”. | <urn:uuid:1e564df8-19ef-4c22-8af1-f4b3a77f5af0> | 3.140625 | 556 | Content Listing | Science & Tech. | 33.769343 |
Not all algebras contain a (left and right) multiplicative neutral identity element, but if an algebra contains such an identity element it is unique.
If an algebra A contains a multiplicative neutral element then in general it cannot be derived from an arbitrary element a of A by forming a / a or a^0, since these operations may not be defined for the algebra A.
More precisely, it may be possible to invert a or raise it to the zero-th power, but A is not necessarily closed under these operations. For example, if a is a square matrix in GAP then we can form a^0 which is the identity matrix of the same size and over the same field as a.
On the other hand, an algebra may have a multiplicative neutral element (see Zero and One for Algebras).
In many cases, however, the zero-th power of algebra elements is well-defined, with the result again in the algebra. This holds for finitely presented algebras (see Finitely Presented Algebras) and all those matrix algebras whose generators are the generators of a finite group.
For practical purposes it is useful to distinguish general algebras and unital algebras.
A unital algebra in GAP is an algebra U that is known to contain zero-th powers of elements, and all functions may assume this. An algebra A that is not unital may contain zero-th powers of elements or not, and no function for A should assume existence or nonexistence of these elements in A. So it may be possible to view A as a unital algebra using AsUnitalAlgebra( A ) (see AsUnitalAlgebra), and of course it is always possible to view a unital algebra as an algebra using AsAlgebra( U ) (see AsAlgebra).
A can have unital subalgebras, and of course U can have subalgebras that are not unital.
The images of unital algebras under operation homomorphisms are either unital or trivial, since the identity of the source acts trivially, so its image under the homomorphism is the identity of the image.
The following example shows the main differences between algebras and unital algebras.
gap> a:= [ [ 1, 0 ], [ 0, 0 ] ];;
gap> alg1:= Algebra( Rationals, [ a ] );
Algebra( Rationals, [ [ [ 1, 0 ], [ 0, 0 ] ] ] )
gap> id:= a^0;
[ [ 1, 0 ], [ 0, 1 ] ]
gap> id in alg1;
false
gap> alg2:= UnitalAlgebra( Rationals, [ a ] );
UnitalAlgebra( Rationals, [ [ [ 1, 0 ], [ 0, 0 ] ] ] )
gap> id in alg2;
true
gap> alg3:= AsAlgebra( alg2 );
Algebra( Rationals, [ [ [ 1, 0 ], [ 0, 0 ] ], [ [ 1, 0 ], [ 0, 1 ] ] ] )
gap> alg3 = alg2;
true
gap> AsUnitalAlgebra( alg1 );
Error, <D> is not unital
We see that if we want the identity matrix to be contained in an algebra that is not known to be unital, it might be necessary to add it to the generators. If we would not have the possibility to define unital algebras, this would lead to the strange situations that a two-generator algebra means an algebra generated by one nonidentity generator and the identity matrix, or that an algebra is free on the set X but is generated as algebra by the set X plus the identity.
| <urn:uuid:f3c27a0d-a96f-4405-9911-60bd3a22283b> | 3.21875 | 806 | Documentation | Science & Tech. | 30.27655 |
arg takes one of the following forms:
var_list start-end [type_spec]
WRITE writes text or binary data to an output file.
See PRINT, for more information on syntax and usage. PRINT
and WRITE differ in only a few ways:
- WRITE uses write formats by default, whereas PRINT uses print formats.
- PRINT inserts a space between variables unless a format is
explicitly specified, but WRITE never inserts space between
variables in output.
- PRINT inserts a space at the beginning of each line that it
writes to an output file (and PRINT EJECT inserts ‘1’ at
the beginning of each line that should begin a new page), but
WRITE does not.
- PRINT outputs the system-missing value according to its
specified output format, whereas WRITE outputs the
system-missing value as a field filled with spaces. Binary formats
are an exception. | <urn:uuid:9c4f223f-00db-42d2-9d57-4f9608b6c688> | 3.09375 | 198 | Documentation | Software Dev. | 43.482524 |
To understand how the Earth and our environment work, we need to have access to the water and the atmosphere and the solid earth, the bottom of the ocean. Scripps vessels have played a very important part in providing access to those areas for our scientists. Competitions for the nation's research vessels come along very rarely, so it's important that when they do, that we compete well and in this case we were lucky enough to win the competition for a vessel called AGOR-28.
AGOR-28 is being funded by the United States Navy, it's about an $88 million project and we've been able to be integrally involved in the design and now the early phases of construction of AGOR-28. AGOR-28 is going to be a new class of ocean vessel, called an ocean class. The ocean class research vessel is going to be a little bit smaller than our biggest global class research vessels. It's going to have a suite of instruments on it that's similar to our biggest and most capable vessels, yet it's going to have a smaller footprint. So, it'll take a little bit fewer scientists to sea, have a smaller crew and hopefully be a little bit more efficient than our biggest ships are right now.
One of the biggest investors in the ocean is of course, the U.S. Navy and many of the things we want to know for all of our projects are things that they want to know for their safe and efficient operations. One of them is national security. So, being able to listen quietly to the ocean and understand who is out there and whether they are our friends or our foes is a very important part of our research. Over the decades, it's something that Scripps has excelled and something that we're going to continue to develop over the decades. There are many other aspects of our work, especially in climate change that are increasingly relevant to national security.
So, when there are shortages of drinking water or floods, these often lead to both direct impacts and also geopolitical instabilities. All of those things are consequences of the changing planet. We need to understand them and predict them as best we can and feed them into our government so we know what is waiting for us down the road. And so over the decades, we have developed a pipeline of ideas and inventions that are first used by the U.S. Navy and when they're comfortable with it, they make the transition into the environmental side of what we do. So, we're always going to need access to the ocean, we need slightly different equipment to optimize the experiments we're doing this decade compared to previous decades and that's why it's important every now and then to build a new research vessel.
Right now, the vessel is about to enter its initial construction phase, so we've been involved with detailed design and following that, we're going to go through a year-and-a-half of construction and we hope that the vessel will be delivered in the early part of 2015.
Navy ceremony dedicates new research ship | <urn:uuid:3cf97203-f039-4d78-92ef-049177fff60c> | 3.125 | 624 | Audio Transcript | Science & Tech. | 53.93526 |
As dark matter particles stream through the detector, scientists hope that a few will collide with the argon atoms. This will generate two flashes of light - one in the liquid argon and another in the gas - which will be detected by the receptors. — BBC News
Rebecca Morelle visits the Gran Sasso National Laboratory, a man-made cavern deep beneath a mountain, designed by scientists hoping to shed light on one of the most mysterious substances in our Universe: dark matter. Physicists are hoping to detect WIMPs (Weakly Interacting Massive Particles).
| <urn:uuid:b8f19959-ba37-48ba-bd90-5462c9251144> | 2.953125 | 128 | Truncated | Science & Tech. | 43.70325 |
Partial solar eclipse on January 4th
Since we know the Moon's orbital plane right now, we can extrapolate to the appearance of the Moon in upcoming months and phases. We know that the first and third quarter Moons will be either higher or lower relative to the ecliptic, since at new moon the Moon was obviously on the ecliptic (to produce the eclipse). So we need one other piece of information to give us the appearance of the Moon in the sky: is the node ascending or descending? Recall the Moon passing Jupiter about two weeks ago.
Was it above or below Jupiter in the sky?
Sadly, I had to check via a planetarium program because I couldn't remember. It passed above Jupiter. So the node is ascending, and the First Quarter Moon in January will be higher than usual in the sky and the Third Quarter Moon lower (but keep in mind only in deviation from the ecliptic, not in absolute elevation in the sky). In three months (aka March-April) the full Moon will be lower in the sky than usual and rise and set further south than normal. | <urn:uuid:c7aacfd8-2428-4f40-9db6-bc8fb66244c7> | 3.046875 | 228 | Personal Blog | Science & Tech. | 56.078542 |
Cool Deep Space Stuff
Below are sites where you can view and download digital images of space taken from equipment such as Earth- (ground-) based telescopes, satellites, the space shuttle, and the Hubble Space Telescope. Visit the NASA Image exchange to view a wide range of space images.
Picture of the Day
Images from High Energy Astrophysics Missions
Space Telescope Public Pictures
National Space Science Data Center (NSSDC) Photo Gallery
Image Gallery: post-mission pictures of the Hubble Servicing Mission 3A
These sites offer background information on the Hubble Space Telescope, our solar system, and space phenomena. Sites about space trivia, facts, and frequently asked questions are also listed.
to Space Exploration: The Universe
Sciences Laboratory, at the University of California
Learn more about the origins of our universe through The Positioned Half-Degree Anisotropy Telescope (TopHat)
The Gamma-ray Large Area Space Telescope (GLAST) will study supermassive black holes, neutron stars, and gamma-ray bursts.
Site: Space images, news, science and technology, information and activities.
Live from the Hubble Space Telescope (HST): NASA Quest archived project focusing on men and women using the HST to learn about our universe.
Amazing Space - web-based activities designed for classroom use. Subjects include light and color, the solar system, and the Hubble Space Telescope.
Picture of the Day - a different image or photograph of the universe is featured, along with a brief explanation written by a professional astronomer.
- an astronomy web page devoted to airing out myths and misconceptions in astronomy and related topics.
- links and information about the STS-103 mission.
DLTK Space Crafts - printable craft templates for the anniversary of the first time man set foot on the moon. Suitable for preschool, kindergarten and grade school kids.
Future Astronauts of America Foundation - great space science site with info on spacecraft, astronomy, current news, model rocketry, and more. Contains images, links, a newsletter, and printable activities.
- Collectable Hubble Stamps from the U.S. Postal Service.
Space Station - tons of space science resources for grades K-6, including information about the solar system, astronauts, meteors, comets, and backyard science activities.
itty bitty scienceguides - cool animated guides on subjects like Asteroids, Earthquakes, Eclipses, Planets, Tides & Waves, and lots more. From itty bitty blackboard.
2D Satellite Tracking - 2D plot showing the position of satellites.
3D HST Tracking - 3D plot showing the position of satellites.
pass predictions - Using your location and the latest available tracking data, predict the times and paths of a satellite.
Liftoff Academy - explanations of some of the knowledge NASA uses for the space program. Learn about being an astronaut, the Space Shuttle, space stations, and general earth and space science.
Mysteries of Deep Space - from black holes to supernovas, PBS examines the unexplained facets of the universe.
Kids - this neat site from NASA is a fun way to learn about NASA's projects and science with news stories, info, and activities designed for kids.
Outer Space - ThinkQuest site with information on the solar system, NASA missions, and astronauts. It includes quizzes, word searches, and crosswords.
Space Day - May 3, 2001 is Space Day. Check out the activities and play some games.
Place - check out this site from NASA and JPL and learn all about different space topics, make some spacy things, solve an extraterrestrial riddle, and dive below the surface of Mars!
SpaceZone - all about space flight and the space program, including live audio and video events.
StarChild - want to learn more about our solar system and the universe? Come try these activities.
Hubble Space Telescope - easy-to-understand information about the Hubble Space Telescope by TheTech.
U.S. Space & Rocket Center - experience the "G" forces of a shuttle launch, try to land the space shuttle or see what weightlessness is like.
U.S. Space Camp - news on camp programs, plus images, sound clips, video, and other space links.
Your Weight on Other Worlds - find out your weight on other planets, moons, and stars while learning about the difference between mass and weight. | <urn:uuid:6bbb2b13-cd3f-4e3e-a59a-9c01ccaee46e> | 3.171875 | 977 | Content Listing | Science & Tech. | 36.113436 |
Nearly one-third of the world’s amphibian species are at risk of extinction. The rescue project aims to save more than 20 species of frogs in Panama, one of the world’s last strongholds for amphibian biodiversity. While the global amphibian crisis is the result of habitat loss, climate change and pollution, chytridiomycosis is likely at least partly responsible for the disappearances of 94 of the 120 frog species thought to have gone extinct since 1980.
Read more about the Panamanian Amphibian Rescue and Conservation Project, and check out their blog, here.
(h/t Smithsonian Institution) | <urn:uuid:435a40f6-7d1c-4517-89ec-6afd7e076883> | 3.484375 | 131 | Truncated | Science & Tech. | 40.058363 |
Accessing and Changing Data Fundamentals
The primary purpose of a Microsoft® SQL Server™ 2000 database is to store data and then make that data available to authorized applications and users. While database administrators create and maintain the database, users work with the contents of the database:
- Accessing, or retrieving, existing data
- Changing, or updating, existing data
- Adding, or inserting, new data
- Deleting existing data
Accessing and changing data in Microsoft SQL Server is accomplished by using an application or utility to send data retrieval and modification requests to SQL Server. For example, you can connect to SQL Server using SQL Server Enterprise Manager, SQL Query Analyzer, or the osql utility to begin working with the data in SQL Server.
Applications and utilities use two components to access SQL Server:
- Database application programming interfaces (APIs) send commands to SQL Server and retrieve the results of these commands. The APIs can be general-purpose database APIs such as ADO, OLE DB, ODBC, or DB-Library. They can also be APIs designed specifically to use special features in SQL Server, such as SQL-DMO, SQL-DTS, or the SQL Server replication components.
- Commands sent to SQL Server are Transact-SQL statements.
Transact-SQL statements are built using the SQL language defined in the Transact-SQL Reference. Most of these operations are implemented using one of four Transact-SQL statements:
- The SELECT statement is used to retrieve existing data.
- The UPDATE statement is used to change existing data.
- The INSERT statement is used to add new data rows.
- The DELETE statement is used to remove rows that are no longer needed.
These four statements form the core of the SQL language. Understanding how these four statements work is a large part of understanding how SQL works.
Graphical or forms-based query tools require no knowledge of SQL. They present the user with a graphical representation of the table. The user can graphically select the columns to be retrieved and easily specify how to qualify the rows to be retrieved.
Some applications, such as SQL Query Analyzer and the osql utility, are tools for executing Transact-SQL statements. These statements are entered interactively or read from a file. To use these tools, you must be able to build Transact-SQL statements.
Applications written to the general-purpose database APIs, such as ADO, OLE DB, ODBC, or DB-Library, also send Transact-SQL statements to SQL Server. These applications present the user with an interface reflecting the business function they support. When the user has indicated what business function should be performed, the application uses one of the database APIs to pass SQL statements to SQL Server. You must be able to build Transact-SQL statements to code these types of applications.
Other applications, such as SQL Server Enterprise Manager, use an object model that increases efficiency in using SQL Server. SQL Server Enterprise Manager uses an object model that eases the task of administering SQL Servers. APIs such as SQL-DMO, SQL-DTS, and the replication components also use similar object models. The objects themselves, however, communicate with SQL Server using Transact-SQL. Knowing the Transact-SQL language can help you understand these objects.
Building Transact-SQL Statements
Accessing and Changing Data Fundamentals contains information about the basic elements used to build Transact-SQL statements. It also provides information about the functions Transact-SQL can perform, as well as similar functionality offered by the database APIs.
A SELECT statement contains the common elements used in Transact-SQL statements. For example, to select the names, contact names, and telephone numbers of customers who live in the USA from the Customers table in the Northwind database, these elements are used:
- The name of the database containing the table (Northwind)
- The name of the table containing the data (Customers)
- A list of the columns for which data is to be returned (CompanyName, ContactName, Phone)
- Selection criteria (only for customers living in the USA)
This is the Transact-SQL syntax to retrieve this information:
SELECT CompanyName, ContactName, Phone
FROM Northwind.dbo.Customers
WHERE Country = 'USA'
Additional elements used in Transact-SQL statements include:
Functions are used in SQL Server queries, reports, and many Transact-SQL statements to return information, similar to functions in other programming languages. They take input parameters and return a value that can be used in expressions. For example, the DATEDIFF function takes two dates and a datepart (weeks, days, months, and so on) as arguments, and returns the number of datepart units there are between the two dates.
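As a quick sketch (this particular query is illustrative and is not part of the original reference), DATEDIFF can count the days between two literal dates:

SELECT DATEDIFF(day, '2000-01-01', '2000-02-01') AS DaysBetween
-- Returns 31, the number of day boundaries crossed between the two dates.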
Identifiers are the names given to objects such as tables, views, databases, and indexes. An identifier can be specified without delimiters (for example, TEST), with quoted delimiters ("TEST"), or in brackets ([TEST]).
Comments are nonexecuting remarks in program code.
Expressions include constants or literal values (for example, 5 is a numeric literal), functions, column names, arithmetic, bitwise operations, scalar subqueries, CASE functions, COALESCE functions, or NULLIF functions.
- Reserved keywords.
Words that SQL Server reserves for its own functionality. It is recommended that you avoid using these reserved keywords as identifiers.
- Null values.
Null values are values that are unknown. You can use values of NULL to indicate that this information will come later. For example, if the contact at the Leka Trading company changes and the new contact is unknown, you could indicate the unknown contact name with a value of NULL.
- Data types.
Data types define the format in which data is stored. For example, you can use any of the character or Unicode data types (char, varchar, nchar, or nvarchar) to store character data such as customer names.
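For instance, a local variable declaration names one of these types; the length of 40 used here is an arbitrary illustrative choice:

DECLARE @ContactName nvarchar(40)
SET @ContactName = N'Maria Anders'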
Batches are groups of statements transmitted and executed as a unit. Some Transact-SQL statements cannot be grouped in a batch. For example, to create five new tables in the pubs database, each CREATE TABLE statement must be in its own batch or unit. This is an example of a Transact-SQL batch:
USE Northwind
SELECT * FROM Customers
WHERE Region = 'WA' AND Country = 'USA'
ORDER BY PostalCode ASC, CustomerID ASC
UPDATE Customers SET City = 'Missoula'
WHERE CustomerID = 'THECR'
GO
- Control-of-flow language.
Control-of-flow language allows program code to take action, depending on whether a condition is met. For example, IF the amount of products ordered are equal to or less than the amount of products currently on hand, THEN we must order more products.
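A minimal sketch of that idea in Transact-SQL, using the Northwind sample tables (the product and the threshold of 10 units are arbitrary illustrative choices):

DECLARE @OnHand int
SELECT @OnHand = UnitsInStock FROM Products WHERE ProductID = 1
IF @OnHand <= 10
    PRINT 'Reorder this product.'
ELSE
    PRINT 'Stock is sufficient.'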
SQL Server includes operators, which allow certain actions to be performed on data. For example, using arithmetic operators, you can perform mathematical operations such as addition and subtraction on your data. | <urn:uuid:41de0c9d-7aac-420b-b1f0-04c552183aad> | 3.421875 | 1,480 | Documentation | Software Dev. | 34.562303 |
Why is this nebula so complex?
When a star like our Sun is dying, it will cast off its outer layers, usually into a simple overall shape. Sometimes this shape is a sphere, sometimes a double lobe, and sometimes a ring or a helix. In the case of planetary nebula NGC 5189, however, no such simple structure has emerged. To help find out why, the Earth-orbiting Hubble Space Telescope recently observed NGC 5189 in great detail. Previous findings indicated the existence of multiple epochs of material outflow, including a recent one that created a bright but distorted torus running across image center. Results appear consistent with a hypothesis that the dying star is part of a binary star system with a precessing symmetry axis. Given this new data, though, research is sure to continue. NGC 5189 spans about three light years and lies about 3,000 light years away toward the southern constellation of the Fly (Musca).
Hubble Heritage Team | <urn:uuid:91cff925-751f-4b65-b39c-17b08f090034> | 3.140625 | 208 | Knowledge Article | Science & Tech. | 44.31351 |
Performing all of your application development with a single language on a single platform may be ideal, but it's not always practical. There are times when you may need to integrate a new application with a legacy one, and communication between the two can be an issue. For instance, you may desire to isolate the two applications so that the new application's design isn't compromised, and the older one can be upgraded later without impacting the newer application.
In the past, I've explored distributed computing solutions that have discussed integrating Java and C++ applications via the Java Native Interface (JNI), the Java Message Service (JMS), and web services; see the "Conclusion" section at the end of this article. Although these approaches are good in the right situations, there may be times where these solutions are too complicated or just not ideal. For instance, calling into native code from Java via JNI can be complex, time-consuming, and error-prone. Using JMS requires a JMS provider be licensed, installed, and configured. And a web service requires significant development and web-based infrastructure.
Another solution is to use socket-based network communication directly between the Java and C++ applications. Although this is a relatively low-level approach, it's still an effective solution. With XML as the message protocol between them, you can maintain a degree of platform and language independence. Figure 1 illustrates this in a very simple way; here we show that a C++ application that uses Windows sockets can communicate with a Java application that uses Java IO (scenario c) just as easily as in the two homogeneous examples (scenarios a and b), and with no code changes.
Figure 1: A Java application can communicate directly with a C++ winsock application.
The Socket Solution
Let's explore a sample integration solution that includes a Java application that uses java.io to communicate with a C++ Windows application that uses Windows sockets. The Windows application supports three simple requests for the following data:
- The Windows host name
- The amount of memory installed
- A pseudorandom number
These requests are simple for illustration only, but in reality they may be requests for data only available from a legacy application or a native platform interface. All requests and responses are formatted as simple XML character strings. Specifically, both the client and server wait for data and read bytes from the network stream, with each full message delineated by a NULL — or '\n' — character.
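As a sketch of how a receiver can honor this framing (the helper method below is an illustrative assumption, not the article's actual listing), the reader simply accumulates bytes until the delimiter arrives:

// Accumulate bytes for one message; a NULL or '\n' byte marks end-of-message.
static String readMessage(java.io.InputStream in) throws java.io.IOException {
    StringBuilder message = new StringBuilder();
    int b;
    while ((b = in.read()) > 0 && b != '\n') {
        message.append((char) b);
    }
    return message.toString(); // empty if the stream ended before any data
}

The same loop works unchanged whether the peer is the Java client or the C++ winsock server, which is the point of the byte-oriented framing.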
The simple XML request messages are in Example 1. Each message has the same basic XML structure, with only the request name differing between them.
<Request>
  <Name>GetHostname</Name>
</Request>
<Request>
  <Name>GetMemory</Name>
</Request>
<Request>
  <Name>GetRandomNumber</Name>
</Request>
Example 1: The XML request messages sent from the client, to the server.
If you need to send data along with each request, simply add one or more XML elements to the request message. For example, if you want to change the GetMemory message to indicate the type of memory (physical or virtual) to request, the XML can be changed to look like the following:
<Request>
  <Name>GetMemory</Name>
  <Type>Physical</Type>
</Request>
The response messages are similar to the requests with some obvious changes; see Example 2. The main differences are the XML response types, and the inclusion of the data being requested.
<Response>
  <Name>HostnameResponse</Name>
  <Hostname>MyServer</Hostname>
</Response>
<Response>
  <Name>MemoryResponse</Name>
  <TotalPhysicalMemory>1073201152</TotalPhysicalMemory>
</Response>
<Response>
  <Name>RandomNumberResponse</Name>
  <Number>20173</Number>
</Response>
Example 2: The XML response messages sent from the server, in response to client requests.
In this sample implementation, the actual data returned in each response will vary according to the computer it's run on. Let's begin to dive into the implementation of the solution, beginning with the Java client application.
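The article is truncated at this point, so what follows is only a rough sketch of what such a client could look like; the host name, port number, and single request/response exchange are placeholder assumptions rather than the article's actual listing:

import java.io.*;
import java.net.Socket;

public class XmlClient {
    public static void main(String[] args) throws IOException {
        // Host and port are placeholders; adjust them for your own server.
        try (Socket socket = new Socket("localhost", 8080);
             PrintWriter out = new PrintWriter(
                     new OutputStreamWriter(socket.getOutputStream(), "UTF-8"), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream(), "UTF-8"))) {

            // Send one XML request; println appends the '\n' message delimiter.
            out.println("<Request> <Name>GetHostname</Name> </Request>");

            // readLine() blocks until the server's '\n'-terminated reply arrives.
            String reply = in.readLine();
            System.out.println("Server replied: " + reply);
        }
    }
}
| <urn:uuid:8ca5ef80-761b-4f84-a07a-420a6975d6ef> | 2.890625 | 854 | Documentation | Software Dev. | 26.669221 |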
Crust Aging and Mechanisms of Crust Death
In observing Mastocarpus crusts, about the biggest you will see is 1.0 meter in diameter. I found a few that were almost 70 cm on the shore along the marine station. I heard about people making estimates for crustose lichen ages by knowing their growth rate and measuring crust diameters. Mastocarpus tetrasporophytes can grow 1-2 cm in diameter in a year under laboratory-regulated conditions. Taking this as a rough estimate of natural growth rate, some of those large crusts could be almost 70 years old (a 70 cm crust at about 1 cm of added diameter per year). Considering that most of the algae are annuals or live only a few years, 70 years is ancient. Considering that bull kelp will grow a thick stipe about 20 m in one year, 1-2 cm per year is really slow. An experiment showed that flies with a higher metabolism died earlier than flies with a lower metabolism. Could the same thing be true for algae?

Finding out that an alga might grow to be 70 years old is really neat, but why not older? What are the mechanisms of crust death? Epiphyte smothering can occur, and other intertidal organisms could crowd out the slow-growing crust, but how then do we get a 70 cm wide crust? Some other crustose algae have been observed to shed their cuticle to slough off epiphytes. Mastocarpus has not been observed to do the same, but it is a possibility. The crust could also produce secondary metabolites that ward off epiphyte colonization. There has to be some explanation.
Mastocarpus pages copyright W. Ludington 1999 | <urn:uuid:467de4a6-3c75-48bf-91b1-4240d64d9db5> | 3.078125 | 389 | Knowledge Article | Science & Tech. | 52.584056 |
This bizarre-looking concoction of glass, liquid and tubes could one day bring a whole new meaning to the idea of natural lighting.
The new "bio-light" concept designed by Dutch electronics company Philips creates light in the same way that bioluminescent living organisms like fireflies and glow worms do.
The phenomenon of bioluminescence is created by a chemical reaction where an enzyme called luciferase interacts with a light-emitting molecule called luciferin.
In the bio-light a collection of hand-blown jars -- held in place by a steel frame -- contains a measure of bioluminescent bacteria which glow green when fed methane gas, supplied in this case through individual silicone tubes from a household waste digester.
Harnessing these biological techniques could help redefine how we consume energy in the home, says Philips. | <urn:uuid:4e66f1af-be85-478e-8e5a-64fe0daea0ff> | 3.28125 | 177 | Knowledge Article | Science & Tech. | 24.298091 |
What is it like to work in the remote forests of Papua New Guinea? Biologist Vojtech Novotny knows better than most. He tells Rowan Hooper about dealing with disease and warring tribes in one of the most linguistically and biologically diverse regions on Earth
Tell me about your work in Papua New Guinea.
We've built a research station on the northern coast of New Guinea, only 50 kilometres from Astrolabe Bay, where Nicholai Miklukho-Maklai, the first modern biologist and anthropologist to work in New Guinea, spent more than a year in 1871-72. After almost 150 years, there are still great forests and coral reefs we can study. New Guinea is also the most linguistically diverse place on the planet and there are more than 20 different languages within a 20-kilometre radius of our station.
About 5 per cent of all species live in New Guinea. With the Amazon and the Congo, ...
| <urn:uuid:8f24d2ec-9434-4e1b-9a0a-d392b215a837> | 2.78125 | 226 | Truncated | Science & Tech. | 55.70606 |
Nature Bulletin No. 273-A September 9, 1967
Forest Preserve District of Cook County
Richard B. Ogilvie, President
Roland F. Eisenbeis, Supt. of Conservation
In the summer of 1956 we had a plague of Periodical Cicadas, or "17-year locusts", in the Chicago region. There were countless millions of them during June and early July, clattering through the air or crawling on trees and bushes. Insect-eating birds and mammals got fat. The "song" of just one male cicada sounds like a buzz saw going through a log, and the metallic screeching of millions -- sometimes a continuous clamor, sometimes rising and falling in waves -- made a nerve-wracking din. People were frightened when those big insects, actually harmless but fearsome in appearance, lit and crawled upon them. It was like a bad dream. Then they disappeared but their progeny will return in 1973.

That was Brood XIV, so labeled by the U.S. Department of Agriculture. There are many broods of the 17-year Periodical Cicada -- some very numerous and widely distributed; others small and local -- which emerge in different years. There are also several broods, mostly in the south, which emerge every 13 years. One of these, a whopper, extends as far north as Illinois.

They are not locusts. They are by far the largest members of a great family of insects which have mouth parts for sucking the juices of plants but not for biting and chewing. This includes the spittle bugs, leaf hoppers, scale insects, and the aphids or plant lice.

The adult Periodical Cicada is about an inch long, with a stout brownish-black body and a large head with prominent reddish-orange eyes. The transparent wings, when held roof-like above the abdomen, extend beyond it. They have reddish margins and veins, with a black "W" near the end of each front wing.

The song of the male is produced by vibrating drum-like membranes over a pair of sound chambers on the abdomen. The female, laying from 400 to 600 eggs, has a dagger-like ovipositor with which she makes a series of slits on the underside of tender twigs -- usually on an oak, a wild crab, or an apple tree. In each slit she lays from 12 to 20 eggs. In 6 or 7 weeks these hatch into ant-like nymphs which drop to the ground and burrow until each finds a tree rootlet -- anywhere from 1 to 10 feet down. There it remains for 13 to 17 years, sucking juice, until full-grown. Then it tunnels to the surface and emerges, usually at night, crawls up on a tree or a weed stem and clings there. Presently, its skin splits down the back and an adult appears. From 10,000 to 40,000 may emerge from the ground beneath one large tree but usually there is no noticeable damage from the feeding of the nymphs upon its rootlets, nor from the sucking of sap from its twigs by the short-lived adults. However, the twigs punctured to receive the eggs usually die, drop off, and cause some injury.

There are many species of cicadas in North America. The common dog-day cicada, or harvest fly, which we hear whirring continuously every August, is much larger, has greenish margins on its wings, and is believed to have a 2-year life cycle. After so long underground, no wonder they celebrate.
Update: June 2012 | <urn:uuid:50cbdb71-f316-45be-95de-7dd61a4e8eaa> | 3.390625 | 809 | Knowledge Article | Science & Tech. | 62.387266 |
Centipedes’ Reproductive Cycle
Centipedes do not undergo a process of metamorphosis, though their young may pass through several molts during growth. Centipedes mate in warm months and stay dormant through winter. A centipede may live up to six years.
The centipede reproductive cycle involves distinct rituals. The female centipede first releases pheromones to attract a male, who, in some species, then weaves a silk pad deposited with sperm, known as a spermatophore. The spermatophore is either left for her to find and take up or is brought to her attention via a courtship dance, during which the male taps the female's posterior legs with his antennae. A typical indoor centipede's reproductive cycle produces up to 35 eggs. Other species of centipedes give birth to living young.
Centipedes lay their eggs in the hollows of rotting logs or in the soil. Most females will tend to their eggs and hatchlings, curling their bodies around their brood for protection. In addition, eggs are prone to the growth of fungi and require grooming to ensure that they reach adulthood. However, some species may abandon or eat their eggs.
Upon hatching, many centipede young have fewer pairs of legs than the adults and acquire the additional body segments and legs each time they molt. Because centipedes have outer skeletons, they must undergo a series of molts, shedding their exteriors. However, the hatchlings of the Scolopendromorphae and Geophilomorphae are born with a complete set of legs. | <urn:uuid:168f7168-dc53-4f32-9ab7-6808ddbee176> | 4.09375 | 339 | Knowledge Article | Science & Tech. | 43.63322 |
Traveling Waves vs. Standing Waves
A mechanical wave is a disturbance that is created by a vibrating object and subsequently travels through a medium from one location to another, transporting energy as it moves. The mechanism by which a mechanical wave propagates itself through a medium involves particle interaction; one particle applies a push or pull on its adjacent neighbor, causing a displacement of that neighbor from the equilibrium or rest position. As a wave is observed traveling through a medium, a crest is seen moving along from particle to particle. This crest is followed by a trough that is in turn followed by the next crest. In fact, one would observe a distinct wave pattern (in the form of a sine wave) traveling through the medium. This sine wave pattern continues to move in uninterrupted fashion until it encounters another wave along the medium or until it encounters a boundary with another medium. This type of wave pattern that is seen traveling through a medium is sometimes referred to as a traveling wave.
Traveling waves are observed when a wave is not confined to a given space along the medium. The most commonly observed traveling wave is an ocean wave. If a wave is introduced into an elastic cord with its ends held 3 meters apart, it becomes confined in a small region. Such a wave has only 3 meters along which to travel. The wave will quickly reach the end of the cord, reflect and travel back in the opposite direction. Any reflected portion of the wave will then interfere with the portion of the wave incident towards the fixed end. This interference produces a new shape in the medium that seldom resembles the shape of a sine wave. Subsequently, a traveling wave (a repeating pattern that is observed to move through a medium in uninterrupted fashion) is not observed in the cord. Indeed there are traveling waves in the cord; it is just that they are not easily detectable because of their interference with each other. In such instances, rather than observing the pure shape of a sine wave pattern, a rather irregular and non-repeating pattern is produced in the cord that tends to change appearance over time. This irregular looking shape is the result of the interference of an incident sine wave pattern with a reflected sine wave pattern in a rather non-sequenced and untimely manner. Both the incident and reflected wave patterns continue their motion through the medium, meeting up with one another at different locations in different ways. For example, the middle of the cord might experience a crest meeting a half crest; then moments later, a crest meeting a quarter trough; then moments later, a three-quarters crest meeting a one-fifth trough, etc. This interference leads to a very irregular and non-repeating motion of the medium. The appearance of an actual wave pattern is difficult to detect amidst the irregular motions of the individual particles.
It is however possible to have a wave confined to a given space in a medium and still produce a regular wave pattern that is readily discernible amidst the motion of the medium. For instance, if an elastic rope is held end-to-end and vibrated at just the right frequency, a wave pattern would be produced that assumes the shape of a sine wave and is seen to change over time. The wave pattern is only produced when one end of the rope is vibrated at just the right frequency. When the proper frequency is used, the interference of the incident wave and the reflected wave occur in such a manner that there are specific points along the medium that appear to be standing still. Because the observed wave pattern is characterized by points that appear to be standing still, the pattern is often called a standing wave pattern. There are other points along the medium whose displacement changes over time, but in a regular manner. These points vibrate back and forth from a positive displacement to a negative displacement; the vibrations occur at regular time intervals such that the motion of the medium is regular and repeating. A pattern is readily observable.
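A compact way to see why certain points stand still (a standard derivation in textbook notation, not part of the original lesson): let the incident wave and its reflection be

$$y_1(x,t) = A\sin(kx - \omega t), \qquad y_2(x,t) = A\sin(kx + \omega t).$$

Their superposition is

$$y_1 + y_2 = 2A\sin(kx)\cos(\omega t),$$

so every point of the medium oscillates in time through the shared factor $\cos(\omega t)$, while the local amplitude $2A\sin(kx)$ depends only on position. Wherever $\sin(kx) = 0$, the medium never moves at all; these motionless points are the nodes discussed below.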
The diagram at the right depicts a standing wave pattern in a medium. A snapshot of the medium over time is depicted using various colors. Note that point A on the medium moves from a maximum positive to a maximum negative displacement over time. The diagram only shows one-half cycle of the motion of the standing wave pattern. The motion would continue and persist, with point A returning to the same maximum positive displacement and then continuing its back-and-forth vibration between the up to the down position. Note that point B on the medium is a point that never moves. Point B is a point of no displacement. Such points are known as nodes and will be discussed in more detail later in this lesson. The standing wave pattern that is shown at the right is just one of many different patterns that could be produced within the rope. Other patterns will be discussed later in the lesson. | <urn:uuid:a2995be8-c223-458d-b8f3-13735f8cb08d> | 4.3125 | 1,037 | Tutorial | Science & Tech. | 41.898152 |
Adding minus infinity to plus infinity gives mathematicians nightmares and even makes theoretical physicists worry a little. Fortunately, nature does not worry about what the mathematicians or physicists think and does the job for us automatically. Consider the grand total vacuum energy (once we have added in all quantum fields, all particle interactions, kept everything finite by hook or by crook, and taken all the proper limits at the end of the day). This grand total vacuum energy has another name: it is called the "cosmological constant," and it is something that we can measure observationally.
In its original incarnation, the cosmological constant was something that Einstein put into General Relativity (his theory of gravity) by hand. Particle physicists have since taken over this idea and appropriated it for their own by giving it this more physical description in terms of the ZPE and the vacuum energy. Astrophysicists are now busy putting observational limits on the cosmological constant. From the cosmological point of view these limits are still pretty broad: the cosmological constant could potentially provide 60 to 80 percent of the total mass of the universe.
From a particle physics point of view, however, these limits are extremely stringent: the cosmological constant is smaller than one would naively estimate from particle physics equations by a factor of more than 10^123. The cosmological constant could quite plausibly be exactly zero. (Physicists are still arguing on this point.) Even if the cosmological constant is not zero it is certainly small on a particle-physics scale, small on a human-engineering scale, and too tiny to be any plausible source of energy for human needs--not that we have any good ideas on how to accomplish large-scale manipulations of the cosmological constant anyway.
Putting the more exotic fantasies of the free lunch crowd aside, is there anything more plausible that we could use the ZPE for? It turns out that small-scale manipulations of the ZPE are indeed possible. By introducing a conductor or a dielectric, one can affect the electromagnetic field and thus induce changes in the quantum mechanical vacuum, leading to changes in the ZPE. This is what underlies a peculiar physical phenomenon called the Casimir effect. In a classical world, perfectly neutral conductors do not attract one another. In a quantum world, however, the neutral conductors disturb the quantum electromagnetic vacuum and produce finite measurable changes in the energy as the conductors move around. Sometimes we can even calculate the change in energy and compare it with experiment. These effects are all undoubtedly real and uncontroversial but tiny.
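For a sense of scale (this is the standard textbook result for idealized parallel plates, not a formula quoted from this article), two perfectly conducting plates separated by a distance $a$ attract with a force per unit area of

$$\frac{F}{A} = -\frac{\pi^{2}\hbar c}{240\,a^{4}},$$

which for plates one micrometer apart works out to roughly $10^{-3}$ newtons per square meter. That smallness is exactly the 'tiny' referred to above.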
More controversial is the suggestion, made by the physicist Julian Schwinger, that the ZPE in dielectrics has something to do with sonoluminescence. The jury is still out on this one and there is a lot of polite discussion going on (both among experimentalists, who are unsure of which of the competing mechanisms is the correct one, and among theorists, who still disagree on the precise size and nature of the Casimir effect in dielectrics.) Even more speculative is the suggestion that relates the Casimir effect to "starquakes" on neutron stars and to gamma ray bursts.
In summary, there is no doubt that the ZPE, vacuum energy and Casimir effect are physically real. Our ability to manipulate these quantities is limited but in some cases technologically interesting. But the free-lunch crowd has greatly exaggerated the importance of the ZPE. Notions of mining the ZPE should therefore be treated with extreme skepticism.
From the way some enthusiasts talk about the zero-point energy, one might think that unlimited power is lying all around just waiting to be harnessed. Like many ideas that seem too good to be true, this one falls apart on closer examination, although the concept of the zero-point energy is quite fascinating in and of itself. John Obienin, a materials science researcher at the University of Nebraska at Omaha, explains: | <urn:uuid:90c1174b-6833-4743-82e0-6c66b94b7d93> | 2.9375 | 809 | Nonfiction Writing | Science & Tech. | 25.440344 |
Firstly, what is friction?
Friction is the force that opposes the relative motion or tendency of such motion of two surfaces in contact.
Friction between two objects in contact can also create static electricity.
It is not, however, a fundamental force, as it originates from the electromagnetic forces and exchange force between atoms.
In situations where the surfaces in contact are moving relative to each other, the friction between the two objects converts kinetic energy into thermal energy, or heat (atomic vibrations).
Friction between solid objects and fluids (gases or liquids) is called fluid friction.
Friction is the force of two surfaces in contact, or the force of a medium acting on a moving object (e.g. air on an aircraft).
It is not a fundamental force, as it is derived from electromagnetic forces between atoms and electrons. When contacting surfaces move relative to each other, the friction between the two objects converts kinetic energy into thermal energy, or heat. Friction between solid objects is often referred to as dry friction or sliding friction and between a solid and a gas or liquid as fluid friction. Both of these types of friction are called kinetic friction. Contrary to popular belief, sliding friction is not caused by surface roughness, but by chemical bonding between the surfaces. Surface roughness and contact area, however, do affect sliding friction for micro- and nanoscale objects where surface area forces dominate inertial forces. Internal friction is the motion-resisting force between the surfaces of the particles making up the substance.
The Coulomb approximation models dry friction with the inequality Ff ≤ μFn, where:
μ is the coefficient of friction, which is an empirical property of the contacting materials,
Fn is the normal force exerted between the surfaces, and
Ff is either the force exerted by friction, or, in the case of equality, the maximum possible magnitude of this force.
For surfaces in relative motion, μ is the coefficient of kinetic friction, the Coulomb friction is equal to Ff, and the frictional force on each surface is exerted in the direction opposite to its motion relative to the other surface.
For surfaces at rest relative to each other, μ is the coefficient of static friction (generally larger than its kinetic counterpart), the Coulomb friction may take any value from zero up to Ff, and the direction of the frictional force against a surface is opposite to the motion that surface would experience in the absence of friction. Thus, in the static case, the frictional force is exactly what it must be in order to prevent motion between the surfaces; it balances the net force tending to cause such motion. In this case, rather than providing an estimate of the actual frictional force, the Coulomb approximation provides a threshold value for this force, above which sliding would commence.
This approximation mathematically follows from the assumptions that surfaces are in atomically close contact only over a small fraction of their overall area, that this contact area is proportional to the normal force (until saturation, which takes place when all the area is in atomic contact), and that the frictional force is proportional to the applied normal force, independently of the contact area (see, for example, Leonardo da Vinci's experiments on friction). Such reasoning aside, however, the approximation is fundamentally an empirical construction. It is a rule of thumb describing the approximate outcome of an extremely complicated physical interaction. The strength of the approximation is its simplicity and versatility: though in general the relationship between normal force and frictional force is not exactly linear (and so the frictional force is not entirely independent of the contact area of the surfaces), the Coulomb approximation is an adequate representation of friction for the analysis of many physical systems.
Coefficient of friction
The coefficient of friction (also known as the frictional coefficient) is a dimensionless scalar value which describes the ratio of the force of friction between two bodies and the force pressing them together. The coefficient of friction depends on the materials used;
for example, ice on steel has a low coefficient of friction (the two materials slide past each other easily), while rubber on pavement has a high coefficient of friction (the materials do not slide past each other easily).
Coefficients of friction range from near zero to greater than one. Under good conditions, a tire on concrete may have a coefficient of friction of 1.7. When the surfaces are conjoined, Coulomb friction becomes a very poor approximation (for example, Scotch tape resists sliding even when there is no normal force, or a negative normal force). In this case, the frictional force may depend strongly on the area of contact. Some drag racing tires are adhesive in this way.
The force of friction is always exerted in a direction that opposes movement (for kinetic friction) or potential movement (for static friction) between the two surfaces. For example, a curling stone sliding along the ice experiences a kinetic force slowing it down. For an example of potential movement, the drive wheels of an accelerating car experience a frictional force pointing forward; if they did not, the wheels would spin, and the rubber would slide backwards along the pavement. Note that it is not the direction of movement of the vehicle they oppose, it is the direction of (potential) sliding between tire and road.
The coefficient of friction is an empirical measurement: it has to be measured experimentally, and cannot be found through calculations. Rougher surfaces tend to have higher values. Most dry materials in combination have friction coefficient values between 0.3 and 0.6. Values outside this range are rarer, but Teflon, for example, can have a coefficient as low as 0.04. A value of zero would mean no friction at all, an elusive property; even magnetic levitation vehicles have drag. Rubber in contact with other surfaces can yield friction coefficients from 1.0 to 2.0.
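As a quick numerical sketch of how the coefficient is used (the mass and the coefficient below are arbitrary illustrative values, and the box is assumed to rest on a horizontal floor so that Fn = m g):

# Maximum static friction on a box resting on a horizontal floor: Ff <= mu * Fn.
g = 9.8          # gravitational acceleration, m/s^2
mass = 10.0      # kg (illustrative value)
mu_static = 0.5  # typical dry-material coefficient (illustrative value)

normal_force = mass * g                  # Fn in newtons
max_friction = mu_static * normal_force  # largest friction force before sliding starts

print(f"Fn = {normal_force:.1f} N, max static friction = {max_friction:.1f} N")
# -> Fn = 98.0 N, max static friction = 49.0 N

Any applied horizontal force below 49 N is simply balanced by static friction; beyond that, the box begins to slide and the (smaller) kinetic coefficient takes over.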
Static friction is the force between two objects that are not moving relative to each other. For example, static friction can prevent an object from sliding down a sloped surface. The coefficient of static friction, typically denoted as μs, is usually higher than the coefficient of kinetic friction. The initial force to get an object moving is often dominated by static friction.
Another important example of static friction is the force that prevents a car wheel from slipping as it rolls on the ground. Even though the wheel is in motion, the patch of the tire in contact with the ground is stationary relative to the ground, so it is static rather than kinetic friction. The maximum value of static friction, when motion is impending, is sometimes referred to as limiting friction,although this term is not used universally.The value is given by the product of the normal force and coefficient of static friction.
Rolling friction is the frictional force associated with the rotational movement of a wheel or other circular objects along a surface. Generally the frictional force of rolling friction is less than that associated with kinetic friction. Typical values for the coefficient of rolling friction are around 0.001. One of the most common examples of rolling friction is the movement of skateboard wheels on a road, a process which generates heat and sound as by-products.
Kinetic (or dynamic) friction occurs when two objects are moving relative to each other and rub together (like a sled on the ground). The coefficient of kinetic friction is typically denoted as μk, and is usually less than the coefficient of static friction. Since friction is exerted in a direction that opposes movement, kinetic friction usually does negative work, typically slowing something down. There are exceptions, for instance if the surface itself is under acceleration. One can see this by placing a heavy box on a rug, then pulling on the rug quickly.
In this case, the box slides backwards relative to the rug, but moves forward relative to the floor.
Thus, the kinetic friction between the box and rug accelerates the box in the same direction that the box moves, doing positive work.
Examples of kinetic friction:
Rubbing dissimilar materials against one another can cause a build-up of electrostatic charge, which can be hazardous if flammable gases or vapours are present. When the static build-up discharges, explosions can be caused by ignition of the flammable mixture.
Devices - Devices such as tires, ball bearings, air cushions or roller bearings can change sliding friction into a much smaller type of rolling friction. Many thermoplastic materials such as nylon, HDPE and PTFE are commonly used for low friction bearings. They are especially useful because the coefficient of friction falls with increasing imposed load.
Techniques - One technique used by railroad engineers is to back up the train to create slack in the linkages between cars. This allows the train engine to pull forward and only take on the static friction of one car at a time, instead of all cars at once, thus spreading the static frictional force out over time.
Lubricants - A common way to reduce friction is by using a lubricant, such as oil, water, or grease, which is placed between the two surfaces, often dramatically lessening the coefficient of friction. The science of friction and lubrication is called tribology. Lubricant technology is when lubricants are mixed with the application of science, especially to industrial or commercial objectives.
Superlubricity, a recently discovered effect, has been observed in graphite: it is the substantial decrease of friction between two sliding objects, approaching zero levels. A very small amount of frictional energy would still be dissipated. Lubricants to overcome friction need not always be thin, turbulent fluids or powdery solids such as graphite and talc; acoustic lubrication actually uses sound as a lubricant.
friction - In physics, the force that opposes the movement of two bodies in contact as they move relative to each other. The coefficient of friction is the ratio of the force required to achieve this relative motion to the force pressing the two bodies together.
Two materials with rough surfaces rubbing together will change kinetic energy into heat and sound energy. Friction is greatly reduced by the use of lubricants such as oil, grease, and graphite. A layer of lubricant between two materials reduces the contact, allowing them to slide over each other smoothly.
For example, engine oil used in cars reduces friction between metal parts as they move against each other. Air bearings are now used to minimize friction in high-speed rotational machinery. In joints in the human body, such as the knee, synovial fluid plays a key role as a lubricant.
In other instances friction is deliberately increased by making the surfaces rough – for example, brake linings, driving belts, soles of shoes, and tyres.
Friction is also used to generate static electric charges on different materials by rubbing the materials together.
Energy of friction
According to the law of conservation of energy, no energy is destroyed due to friction, though it may be lost to the system of concern. Energy is transformed from other forms into heat. A sliding hockey puck comes to rest because friction converts its kinetic energy into heat. Since heat quickly dissipates, many early philosophers, including Aristotle, wrongly concluded that moving objects lose energy without a driving force.
Physical deformation is associated with friction. While this can be beneficial, as in polishing, it is often a problem, as the materials are worn away, and may no longer hold the specified tolerances.
The work done by friction can translate into deformation, wear, and heat that can affect the contact surface's material properties (and even the coefficient of friction itself). The work done by friction can also be used to mix materials such as in the technique of friction welding. | <urn:uuid:b7d28f0e-ac10-4695-80ef-4e3debfdd901> | 4.28125 | 2,369 | Knowledge Article | Science & Tech. | 35.254505 |
- Mathieu Isidro
public information officer, European Southern Observatory
Are we losing interest in space and astronomy, and if so, how can we inspire the next generation?
In times of budget cuts and crisis, science is often left behind for more pressing issues. Costly projects, such as the James Webb Space Telescope, are now endangered. Astronomy and space science in general requires enormous amounts of funding and cooperation between agencies such as NASA, ESA, JAXA, etc.
As projects have become less ambitious because they are too costly, and risk management has prevented agencies from attempting complicated and innovative missions (the loss of Columbia grounded the shuttle fleet for over two years), we have lost people's interest in space and astronomy. People need to be inspired, and it is through such images as those of Armstrong on the moon (now more than 40 years ago), or the images from the rovers on Mars or the Huygens probe on Titan, that we can inspire the next generation of astronomers and space administrators, and reignite interest (and thus funding) in space.
Now that the shuttle program has ended (and thus freed a sizable portion of NASA's budget) and even though the scientific usefulness of astronauts on planets has been put into question, shouldn't we at least make Man on Mars a reality for the sake of our children? Shouldn't we attempt more daring and innovative missions to new places, like Europa? Enceladus? Io? | <urn:uuid:d875e9b6-3d18-461f-8eb8-9fcb224aa544> | 3.21875 | 288 | Nonfiction Writing | Science & Tech. | 34.997222 |
Irish Chronicles Document Links Between Volcanoes and Weather
A study of over 40,000 written entries in Irish Annals and ice core measurements shows a strong correlation between the occurrence of volcanic eruptions and extreme cold weather in Ireland over a 1200 year period. Data analyzed in this study cover the period from 431 to 1649, during which time up to 48 volcanic eruptions are identified in Greenland ice core records through deposition of volcanic sulfate in annual layers of ice. You can find the study (open access), published on 6 June 2013 in IOP Publishing's journal Environmental Research Letters, at http://iopscience.iop.org/1748-9326/8/2/024035/article. Find out more about how volcanoes can influence climate.
EF-5 Tornado in El Reno, Oklahoma Widest Ever Recorded in US
The EF-5 tornado that hit El Reno, Oklahoma on May 31st was the widest ever recorded in the US, according to the National Weather Service in Norman Oklahoma. The tornado, which remained on the ground for 40 minutes and reached 2.6 miles across (4.2 km), took the lives of 18 people including storm chasers Tim Samaras, Paul Samaras and Carl Young. For more information on the tornado, visit http://ow.ly/i/2hfDG.
During the week of May 13th, the CO2 level at the Mauna Loa Observatory in Hawaii topped 400 ppm repeatedly. Daily levels of CO2 can vary due to weather, and there are seasonal trends as well. The level of atmospheric greenhouse gases continues to increase, now over 120 ppm since the Industrial Revolution began. For more on the Keeling Curve, see http://keelingcurve.ucsd.edu/. Find out more about greenhouse gases and warming.
Even though the sleeping man is no longer on the bed, you can still see where he was lying down. The heat from his body warmed up the bed sheets, which are now radiating infrared light toward your eyes.
All warm objects (not just people) radiate in the infrared. Warmer objects give off more infrared radiation. Very hot objects radiate other types of light in addition to infrared.
Your eye is a wonderful detector of visible light. Different frequencies of light produce different sensations in the eye which we interpret as colors. Our eyes detect light by using light-sensitive components.
Imagine you found a pair of special glasses that not only gave you telescopic vision but gave you the ability to see all forms of radiant energy. The universe in visible light contains all the familiar...
This is a volcano on the island of Miyake in Japan. It has erupted, sending hot lava and ash into the air, a total of ten times. The time after one eruption until the next occurred was about twenty years.
This is a picture of a galaxy in visible light. A galaxy is a large number of stars, some like our sun, some bigger, some smaller, and all moving together through space. This galaxy is called Centaurus...
This is a plant in Gary, Indiana where power is made. We use power to run things like television sets, radios, lights, and microwave ovens. The picture looks very strange because it was taken in infrared. | <urn:uuid:46d0e049-0b44-4d32-9d11-11acb6dc9866> | 3.25 | 678 | Content Listing | Science & Tech. | 59.241806 |
The physics lesson for today.
The question is: how much do you weigh? So you get on your scale and see that your weight is 200 pounds, let's say.
The number you are looking at is your mass, not your weight. But the scale is not really measuring mass, because mass does not change noticeably at non-relativistic speeds. To prove this, check your readings, with the same scale, in an accelerating elevator; the number you read would change. Confused?
Well, let's describe weight, mass, and acceleration.
The equation for weight is F = m a, one of Newton's laws.
F = weight force; it is measured in newtons, whose symbol is N.
m = mass; it is measured in kilograms.
a = acceleration; it is measured in meters per second squared.
If you measure your weight with the home scale, while not moving, on the ground at sea level, that will give you your mass, or m (more or less). The scale is really set up to read newtons, but the numbers are fudged to give you your mass (more or less) when gravity is the only acceleration acting on you. They tell you, and you believe, it's your weight, due to some bad education.
To learn your real weight, measured in newtons (N) -- not fig newtons -- take the number your home scale gives you as weight, convert it to kilograms, and multiply it by 9.80.
The number you get will be your true weight measured in newtons.
Acceleration due to the earth's gravity is around 9.8 meters per second squared.
Taking the equation from above F = m a
A mass of one kilogram multiplied by the acceleration of gravity gives you around 9.8 newtons of weight force.
9.8 newtons (F) = 1 kilogram (m) * 9.8 meters per second squared (a -- in this case g)
Very confusing to many, but only because the educational system in America feels it's necessary to tell untruths to children. Many would think my concern is just stupid, but a child just starting out may feel he's at fault when things don't add up. Could things like this be the reason America is last when it comes to math and science?
Would it be so hard to think of your weight as your mass? An overweight person on the Earth, if you put him on the Moon, would weigh only one-sixth as much, but his health concerns would still be there because his mass did not change.
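A small sketch of the conversion described above (the 200-pound reading is just the example figure used earlier; 0.4536 kg per pound is the standard conversion factor):

# Convert a scale reading in pounds to mass (kg) and true weight (newtons).
LB_TO_KG = 0.4536   # kilograms per pound
g = 9.80            # m/s^2, acceleration due to Earth's gravity

scale_pounds = 200.0
mass_kg = scale_pounds * LB_TO_KG   # about 90.7 kg
weight_newtons = mass_kg * g        # F = m * a

print(f"mass = {mass_kg:.1f} kg, weight = {weight_newtons:.0f} N")
# -> mass = 90.7 kg, weight = 889 N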
Installing Solar Panels
When installing solar panels, it is a must that batteries be connected to the system before any product that will be powered by the solar panel is connected to the system. Don't make the mistake of connecting anything directly to the output of the solar panel without at least having batteries connected to the system first, or your product may be severely damaged.
Problems with Home Wind Power
Wind power uses generators or alternators to produce electricity. Unlike solar, which by its means of producing power lends itself to a constant voltage over a wide range of conditions, alternators and generators traditionally use up to 40% of their produced power in regulating their voltage output.
Batteries are the most important link in the chain of any home-powered, off-grid system. Batteries are where your hard-earned power is stored to be used when needed most. There are many types of batteries and all of them are different. Learn what batteries are needed to store power for the purpose you need the power for. more...

Home Made Energy

Affordable alternative power is the goal, alternative electricity to be more precise. There are many ways of producing electrical power, but which is best for you? Alternative power sources that are realistic are limited. We will be discussing the three main ways of producing electrical power in your home, off grid.

So why would anyone bother with "home made energy"? Well, thousands of small businesses in California wish they had. Thousands of small businesses went out of business when the cost of electrical power went through the roof a few years ago. I was personally affected when my meat place, which offered meat at a great price well below the supermarket price, was forced out of business when its freezer electrical costs went from a few hundred dollars a month to a few thousand a month. These were businesses that had spent years developing their name, reputation, and product, just to be wiped out in the blink of an eye because there was no alternative energy.

Can a viable, long-term, cost-effective mini home power plant be developed? Well, let's look at the facts. The major power plants have costs that a mini home unit would not have. There are huge losses over hundreds of thousands of miles of transmission lines. Ever hear them buzzing and crackling at night? That's electricity being given away into the air. Also there is line resistance, electricity thrown away on heat. In big cities all over the world, in summer time, that heat almost melts the power lines off their poles. That heat is due to resistance in the hundreds of thousands of miles of transmission line. These are huge losses. Then there are profits: the power company needs to make a profit, the middle man has to make a profit, the stockholders need to make a profit, and the CEOs and upper management people need to make out-of-this-world profits. | <urn:uuid:6824e460-d75c-4ea0-9b3c-8129dc51942e> | 3.609375 | 1,111 | Personal Blog | Science & Tech. | 60.488396 |
One of the nice things about the Extrasolar Planets Encyclopedia, which I came across whilst writing about Kepler, is that you can produce plots which summarise the properties of all the extra-solar planets discovered so far. I was particularly interested in the plot below, of orbital period (in days) against mass (scaled relative to the mass of Jupiter), to which I’ve added Earth, Venus and Jupiter for reference.
In terms of mass, it’s clear that most of the planets discovered so far are big gas giants, on the scale of Jupiter or bigger, with a scattering of smaller gas giants and Super Earths. If, as I’ve argued before, this reflects the limits of our current detection technology, in the next few years Kepler and CoRoT should increase the number of points in this range, and below it (for those of you wondering about the ultra-small outlier, it’s orbiting a pulsar).
More interesting is the distribution of orbital periods, which is actually bimodal: there's a cluster in the 1-10 day range, representing the ‘hot Jupiters’, and another, larger, cluster of gas giants with orbital periods of about 1-10 years, which Jupiter is just about on the edge of. It's probably my fault for not paying enough attention, but I was genuinely unaware that so many of this latter type had been discovered: I was under the impression that almost all of the extra-solar planets discovered thus far were sun-skimming hot Jupiters, if only because they are much easier to detect. The fact that we're detecting gas giants in a much more familiar place (when compared to our own galactic neck of the woods) is quite encouraging, and actually increases my confidence that we don't need to chuck out the Copernican Principle just yet.
Update: for those in the comments who astutely pointed out that many of these planets have a much higher eccentricity than planets in our solar system (~0.05 on average), it turns out this may indeed be an issue:
“If Jupiter’s orbit around the sun was just a bit more eccentric (oblong), it would have scattered a lot of the material that delivered water to the Earth, kicking it out of the solar system instead,” [Sean Raymond, who has been modelling extra-solar planet formation] says. “The result would have been an Earth that had only 10 percent of the water it does now.”
Interestingly, the passage of “hot Jupiters” from the outer solar system where they form to their sun-skimming orbits, whilst disruptive to the formation of the rocky inner planets, does not preclude their survival. | <urn:uuid:385133da-ae37-442e-8c0b-5c9b73a3bd4d> | 3.109375 | 580 | Personal Blog | Science & Tech. | 30.814547 |
Coral Reef Thriving in Sediment-laden Waters
July 31, 2012
"Rapid rates of coral reef growth have been identified in sediment-laden marine environments, conditions previously believed to be detrimental to reef growth. A new study has established that Middle Reef – part of Australia's iconic Great Barrier Reef – has grown more rapidly than many other reefs in areas with lower levels of sediment stress.
Led by the University of Exeter, the study by an international team of scientists is published today in the journal Geology.
Middle Reef is located just 4 km off the mainland coast near Townsville, Australia, on the inner Great Barrier Reef shelf. Unlike the clear waters in which most reefs grow, Middle Reef grows in water that is persistently 'muddy'. The sediment comes from waves churning up the muddy sea floor and from seasonal river flood plumes. The Queensland coast has changed significantly since European settlement, with natural vegetation cleared for agricultural use increasing sediment runoff. High levels of sediment result in poor water quality, which is believed to have a detrimental effect on marine biodiversity."
| <urn:uuid:fd6f1bf7-cc93-4447-a2ae-fa2dc71e1ae2> | 3.484375 | 232 | Truncated | Science & Tech. | 37.355177 |
I missed that bit, mainly due to the badly broken quoting. Transpower, please wrap your quotes in quote tags, don't just stick your responses into the quote...
Originally Posted by Strange
Before TTL logic came along, very few things used 5V power...tube radio circuits work at voltages from millivolts to hundreds of volts. Experiments with Leyden jars often dealt with hundreds or thousands of volts. Most audio circuits work at tens of volts. Your typical PC power supply has capacitors operating at several hundred volts. Even most digital electronics these days works at 3.3V or lower. (It also generally uses MOSFET logic, with an integral part of each transistor being...guess what? A capacitor.)
More than that, the very idea that it by chance produces the same results in 5V circuits is nonsensical. Even in 5V circuits, capacitors used as a functional component of the circuit (not just something like a power supply filtering or decoupling capacitor) charge and discharge through a range of voltages below 5V, quite often less than a volt or even just a few millivolts. We couldn't possibly build modern electronic devices with such a severe misunderstanding. Or mechanical devices for that matter, considering that the same math applies to springs, air tanks, etc. Virtually nothing would work right.
Transpower, one of your basic claims is that the energy stored in a capacitor is proportional to the voltage it is charged to, right? So the difference in stored energy from full voltage to half voltage should equal that from half voltage to zero voltage, right?
Well, what is the voltage waveform of a capacitor charging or discharging at constant current? A simple linear ramp up or down. Power is current times voltage, current is constant, so the power waveform is also a simple linear ramp. Energy is total power over time, equivalent to the area under the power waveform. To illustrate, with a vertical line drawn at the point where the capacitor reaches half voltage, and a horizontal to clarify the diagram:

P |\
  |  \
  |    \
  |-----+\
  |     |  \
  |     |    \
  +-----+------\---> t
       half    zero
     voltage  voltage

Two identical triangles and a rectangle. The conclusion of your claim of stored energy being proportional to voltage is that the sum of the areas of the triangle and rectangle on the left half equals the area of the triangle on the right half. The areas of the triangles alone are equal, and the rectangle has twice the area of one of the triangles, so your claim leads to the conclusion that 1 + 2 = 1. Or perhaps that rectangles have zero area, take your pick. Personally, I conclude that your theory is bunk, and that stored energy is in fact proportional to the square of voltage...
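The calculus says the same thing, by the way: with a constant current I the voltage ramps linearly, so the energy delivered is E = integral of I*V dt, which works out to E = (1/2)CV^2 between full charge and zero. Discharging from V down to V/2 therefore releases three quarters of the stored energy, and the last quarter comes out between V/2 and zero; energy goes as the square of the voltage, exactly as the areas show.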
(also note that full charge isn't 5V, or 1V, or 1kV or 1µV, it's whatever you want it to be) | <urn:uuid:a4cb8621-4bba-46f5-9c3a-56255385de85> | 2.875 | 585 | Comment Section | Science & Tech. | 56.255385 |
Dust storms generally call to mind places like the Sahara Desert or the Gobi Desert, but dust storms occur at high latitudes as well. One such storm left streamers over the Gulf of Alaska in mid-November 2010.
The Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite captured this natural-color image on November 17, 2010. Thin plumes of beige dust blow off the Alaskan coast toward the south-southwest. Farther to the south, lines of clouds mimic the shape and direction of the dust plumes, and even cast shadows on them. The dust plumes and clouds were likely shaped by the same winds.
Malaspina is just one of many glaciers fringing the Alaskan coastline. As glaciers grind over rocks, they pulverize some of the rock into glacial flour. Melt water percolating through glaciers often deposits glacial flour in mud plains. When the plains dry out, winds sometimes carry dust particles aloft.
- University Corporation for Atmospheric Research. (2003). Forecasting dust storms. (Registration required). Accessed November 18, 2010.
NASA image courtesy Jeff Schmaltz, MODIS Rapid Response Team at NASA GSFC. Caption by Michon Scott.
- Terra - MODIS | <urn:uuid:e7bdd8e2-dbe9-40c8-b420-727192fef25b> | 4.0625 | 265 | Knowledge Article | Science & Tech. | 50.44754 |
Organic Chemistry 4e, Carey
Online Learning Center
Chapter 10: Conjugation in Alkadienes and Allylic Systems
Resonance is probably one of the most important concepts that one needs to master in order to understand organic chemistry, yet it is often under-appreciated or misunderstood. We first met resonance in Ch 1 (review).

Things to remember about resonance:
- It's a property of π systems; therefore double or triple bonds must be present.
- Only the position of π electrons changes in resonance contributors.
- Resonance structures can (best) be derived by pushing curved arrows.
- The actual molecular structure is a composite of all the resonance contributors, with the more favorable ones contributing the most character.
- Delocalisation increases the stability of systems (especially for charged systems), whether of a cation or of an anion.
- Functional groups next to π systems (i.e. conjugated functional groups) have some reactivity trends that are modified compared to those of the non-conjugated system. Here are a few simple examples:
  - allyl chloride: very reactive in nucleophilic substitution reactions
  - 1,3-butadiene: can undergo addition via two different modes
  - propenal: nucleophiles can add to the C=C due to the presence of the conjugated C=O | <urn:uuid:ec4ccb1d-0711-4953-9088-95e47f0e373b> | 3.671875 | 330 | Tutorial | Science & Tech. | 22.507745 |
Taking a closer look at light
Nov 1, 2001
Two teams of European physicists have developed techniques that can measure optical fields more accurately than ever before. Wolfgang Lange and colleagues at the Max Planck Institute for Quantum Optics in Germany have built a single-ion probe that can measure a standing light wave with a resolution of better than a wavelength, while Niek van Hulst and co-workers at the University of Twente in the Netherlands have followed changes in the shape of an ultrashort laser pulse as it travels through a waveguide.
One of the problems that arises when measuring an optical field is that the measuring device can disturb the field being studied. The Munich team overcome this problem by using a single calcium ion in a radio-frequency trap to measure the intensity of a standing light wave inside a cavity (G Guthöhrlein et al 2001 Nature 414 49). The standing wave causes the ion to fluoresce at a certain wavelength, and the intensity of this fluorescence is proportional to the strength of the optical field in the cavity.
By detecting the fluorescence from the ion when it is at different positions inside the cavity, it is possible to map out the intensity of the optical field in three dimensions. Lange's team achieved a resolution of about 60 nanometres in measurements of the standing wave produced by radiation with a wavelength of 397 nanometres. 'This approach takes all the probabilistic elements out of the atom-field interaction,' Lange told PhysicsWeb. The team plans to use the technique in fundamental tests of quantum theory where it is important to have maximum control over the position of single ions.
Meanwhile, van Hulst and co-workers have used measurements of the light fields on surfaces to track the progress of laser pulses in a silicon-based waveguide (M Balistreri et al 2001 Science 294 1080). Laser pulses that last just femtoseconds - or 10^-15 seconds - are used in a wide range of optoelectronic and optical fibre systems. But these pulses get distorted as they travel through devices.
Existing methods compare the final shape of the pulse as it leaves a device with its original shape. But such 'black box' methods cannot pinpoint when or where the shape changes. Now van Hulst and colleagues have found that surface light waves produced by the laser pulse as it travels through the waveguide have the same group and phase velocities as the pulse itself.
Using a fibre-optic probe to measure intensity variations in these surface fields, the Twente team was able to monitor changes in the shape of a laser pulse as it travelled through the waveguide. Van Hulst and colleagues believe that their method will allow physicists to see how nonlinear effects - which can be both helpful and destructive - emerge in different systems.
About the author
Katie Pennicott is Editor of PhysicsWeb | <urn:uuid:d7062a9b-03f2-4c8d-9442-5e4976524bfb> | 3.8125 | 587 | Truncated | Science & Tech. | 33.493953 |
Cleaning Up Wastes
Excerpted from BIO. "Protecting Our Environment" Washington, D.C.: Biotechnology Industry Organization, 1992.
The use of biotechnology to solve environmental problems, according to William K. Reilly, former head of the Environmental Protection Agency, "could be - should be - an environmental breakthrough of staggering positive dimensions."
Everything under the sun degrades, or breaks down, into different materials. Fallen leaves become compost, iron rusts, milk turns sour, and food 'goes bad.' Just as light, heat, and moisture can degrade many materials, biotechnology relies on naturally occurring, living bacteria to perform a similar function. Some bacteria naturally 'feed' on chemicals and other wastes, including some hazardous materials. They consume those materials, digest them, and excrete harmless substances in their place.
For decades now, municipalities have used biological methods to treat their sewage, and industry has used secondary aerobic treatment to remove harmful materials from liquid wastes. Biological treatment is not a new idea. What is new is the expanded range of biotreatment capabilities offered by the science of biotechnology.
Bioremediation uses natural as well as recombinant microorganisms to break down toxic and hazardous substances already present in the environment. Biotreatment is a broader term, which refers to all biological treatment processes, including bioremediation. Biotreatment can be used to detoxify process waste streams at the source - before they contaminate the environment - rather than at the point of disposal. This approach involves carefully selecting organisms and biocatalysts - the enzymes that degrade specific compounds - and defining the conditions that accelerate the degradation process.
Living Off a Landfill
Vast numbers of bacteria exist naturally in the prevailing conditions in landfills and other solid waste sites. Some of those bacteria consume, or degrade, different types of waste present at the site. But they do it slowly.
Scientists today can examine a landfill and determine not only what bacteria are degrading which materials in it - including any hazardous materials - but which do it fastest, most completely, and under what optimum conditions.
Armed with this knowledge, they can clone the most efficient strains of naturally occurring bacteria, reproduce them in quantity, and apply them to the site. In effect, they can create a customized army of waste eaters.
Oil for Dinner
Some bacteria literally 'live on oil,' just as some people live on meat and potatoes. And they consume it with just as much relish.
Following the major oil spill in Alaska's Prince William Sound, the Environmental Protection Agency brought in natural oil-eating bacteria to help clean up the mess. Follow-up studies suggest that the microbes did as good a job in cleaning up soiled beaches as high-pressure hoses and detergents could have done. "It was almost as if we had brought in fresh rock," stated the EPA's project manager after visiting the site.
Such bioremediation cannot only help to clean up oil spills, but also chlorinated chemicals and leaks from storage tanks.
Using naturally occurring bacteria for environmental purposes is a relatively simple procedure of identification, cloning, and mass production.
Biotechnologists using recombinant DNA technology - the principal tool of genetic engineering - can recombine, or mix-and-match, the most desirable traits of several bacterial species. They can, for instance, extract the gene from one strain that allows it to 'feed' on PCBs or other hazardous wastes, then take the genes that allow another bacterial strain to withstand wide temperature ranges - lack of oxygen or other environmental extremes - and transplant them into a common, harmless bacterium that can be mass produced easily. The result is an organism custom-made to 'eat up' a specific problem waste at a specific site under specific conditions.
This technology holds the potential to solve many environmental problems from the past, and leave our children an environment cleaner than we inherited from our parents.
Of equal, or perhaps even greater importance, biotechnology can eliminate hazardous pollutants at their source before they enter the environment. Every year, some 5 billion pounds of 320 potentially harmful chemicals are released into the environment. The EPA has targeted 17 of those chemicals for massive reductions. Biotreatment with naturally occurring biocatalysts has been demonstrated to almost completely eliminate one of these chemicals, methylene chloride, a suspected carcinogen, from industrial process streams. About 130 million pounds of this compound are currently discharged each year in manufacturing process wastes.
Special bacteria in a bioreactor can virtually eliminate methylene chloride from industrial waste water. They reduce concentrations from over 1,000,000 parts per billion to less than 5 parts per billion - far below the EPA's permissible guidelines. The bacteria in the bioreactor consume the chemical and convert it to water, carbon dioxide, and salt. They permanently destroy the hazardous material and eliminate any need to recover it, transfer it, or dispose of it. | <urn:uuid:3717ef16-29c0-4dfd-854b-b496ba68b0e8> | 3.4375 | 1,009 | Knowledge Article | Science & Tech. | 24.949636 |
Science Fair Project Encyclopedia
|Name, Symbol, Number || Ununbium, Uub, 112|
|Chemical series || Transition metal|
|Group, Period, Block || 12, 7, d|
|Appearance || Unknown; probably a metallic, silvery-white or grey colour|
|Atomic weight || amu|
|Electron configuration || [Rn] 5f14 6d10 7s2 (a guess based on mercury)|
|e-s per energy level || 2, 8, 18, 32, 32, 18, 2|
|State of matter || Probably liquid|
Ununbium (eka-mercury) is a chemical element in the periodic table that has symbol Uub and has the atomic number 112. Element 112 is one of the superheavy elements; its longest-lived isotope has a mass of 285 and a half-life of 11 min. Some research has referred to it as "eka-mercury". Following periodic trends, it is expected to be a liquid metal more volatile than mercury.
It was first created on February 9, 1996 at the Gesellschaft für Schwerionenforschung (GSI) in Darmstadt, Germany. This element was created by fusing a zinc atom with a lead atom by accelerating zinc nuclei into a lead target in a heavy ion accelerator.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | <urn:uuid:666eb84d-0b2f-4ad0-81e1-4d7d9cc38e68> | 3.375 | 321 | Knowledge Article | Science & Tech. | 49.392659 |
Feb 23, 2007 00:00
Biologist Steven Amstrup discusses global warming, polar bears in
Scientists agree: If greenhouse-gas emissions remain uncurbed, the consequences to the planet will be devastating. Wildlife biologist Steven Amstrup discusses the effects of global warming on polar bears as part of the Wildlife Conservation Lecture Series on Tuesday, Feb. 27, at the Oregon Zoo.
Amstrup, a research wildlife biologist with the
Over the past 25 years, the summer sea-ice melt period has lengthened, and the summer sea-ice cover has declined by more than a half million square miles. "Longer ice-free seasons have resulted in reduced survival of young and old polar bears and a chronic population decline over the past 20 years," says Amstrup.
"Recent observations of nutritionally driven cannibalism and unexpected mortalities of prime age polar bears in | <urn:uuid:d82881a3-85b3-442f-b847-c9a79fad2f6c> | 3.046875 | 182 | Truncated | Science & Tech. | 31.437139 |
- Conical Epidermal Cells Allow Bees to Grip Flowers and Increase Foraging Efficiency
Current Biology, Volume 19, Issue 11, 9 June 2009, Pages 948-953
Heather M. Whitney, Lars Chittka, Toby J.A. Bruce and Beverley J. Glover
Summary: The plant surface is by default flat, and development away from this default is thought to have some function of evolutionary advantage. Although the functions of many plant epidermal cells have been described, the function of conical epidermal cells, a defining feature of petals in the majority of insect-pollinated flowers, has not [1,2]. The location and frequency of conical cells have led to speculation that they play a role in attracting animal pollinators [1,3,4]. Snapdragon (Antirrhinum) mutants lacking conical cells have been shown to be discriminated against by foraging bumblebees. Here we investigated the extent to which a difference in petal surface structure influences pollinator behavior through touch-based discrimination. To isolate touch-based responses, we used both biomimetic replicas of petal surfaces and isogenic Antirrhinum lines differing only in petal epidermal cell shape. We show that foraging bumblebees are able to discriminate between different surfaces via tactile cues alone. We find that bumblebees use color cues to discriminate against flowers that lack conical cells—but only when flower surfaces are presented at steep angles, making them difficult to manipulate. This facilitation of physical handling is a likely explanation for the prevalence of conical epidermal petal cells in most flowering plants.
Summary | Full Text | PDF (557 kb)
- Social Learning in Insects — From Miniature Brains to Consensus Building
Current Biology, Volume 17, Issue 16, 21 August 2007, Pages R703-R713
Ellouise Leadbeater and Lars Chittka
Summary: Communication and learning from each other are part of the success of insect societies. Here, we review a spectrum of social information usage in insects — from inadvertently provided cues to signals shaped by selection specifically for information transfer. We pinpoint the sensory modalities involved and, in some cases, quantify the adaptive benefits. Well substantiated cases of social learning among the insects include learning about predation threat and floral rewards, the transfer of route information using a symbolic ‘language’ (the honeybee dance) and the rapid spread of chemosensory preferences through honeybee colonies via classical conditioning procedures. More controversial examples include the acquisition of motor memories by observation, teaching in ants and behavioural traditions in honeybees. In many cases, simple mechanistic explanations can be identified for such complex behaviour patterns.
Summary | Full Text | PDF (1287 kb)
- Speed-Accuracy Tradeoffs and False Alarms in Bee Responses to Cryptic Predators
Current Biology, Volume 18, Issue 19, 14 October 2008, Pages 1520-1524
Thomas C. Ings and Lars Chittka
Summary: Learning plays a crucial role in predator avoidance [1,2,3], but little is known about how the type of experience with predators molds future prey behavior. Specifically, is predator-avoidance learning and memory retention disrupted by cryptic coloration of predators, such as crab spiders [4,5]? How does experience with different predators affect foraging decisions? We evaluated these questions by exposing foraging bumblebees to controlled predation risk from predators (robotic crab spiders) that were either cryptic or highly contrasting, as assessed by a quantitative model of bee color perception. Our results from 3D tracking software reveal a speed-accuracy tradeoff: Bees slow their inspection flights after learning that there is a risk from cryptic spiders. The adjustment of inspection effort results in accurate predator detection, leveling out predation risk at the expense of foraging time. Overnight-retention tests reveal no decline in performance, but bees that had experienced cryptic predators are more prone to “false alarms” (rejection of foraging opportunities on safe flowers) than those that had experienced conspicuous predators. Therefore, bees in the cryptic-spider treatment made a functional decision to trade off reduced foraging efficiency via increased inspection times and false-alarm rates against higher potential fitness loss from being injured or eaten.
Summary | Full Text | PDF (372 kb)
Copyright © 2003 Elsevier Science Ltd All rights reserved.
Current Biology, Volume 13, Issue 12, R463, 17 June 2003
Magazine
Plight of the bumblebee
- One group of insects highlights some of the potential economic damage that loss of biodiversity may entail for the future of human food crops. Nigel Williams reports. | <urn:uuid:6f927c1b-49b9-43cb-aac5-c25aa4de9be9> | 2.921875 | 969 | Content Listing | Science & Tech. | 22.395757 |
When C was first invented by Dennis Ritchie in the early 1970s, his main objective was to create a language that was easy to read, fast, powerful, easy to learn and, most of all, extendable. Before that, programming required either knowing the byte codes for the processor you were developing for, or using FORTRAN and COBOL. Now FORTRAN and COBOL are nice, human-readable languages like BASIC, but BASIC is interpreted and the other languages were verbose. When Dennis Ritchie derived C from BCPL (by way of B), the keywords were concise, fast, flexible and portable.
When you write code in machine or byte code, you know
exactly how many instruction cycles or ticks it takes to execute the
instruction. Generally, it’s one. Assembly language is a way to write byte
code without knowing the actual number for an instruction. So the instruction
mov ax, 3 sets the value of 3 into the ax register. It costs one tick. Are
you lost? Ok, let me explain.
The processor of any computer has registers, or small spots
of memory directly accessible that it uses as it performs instructions. Some
registers are general purpose, others are special purpose but the values in
them are used by the processor to aid in its work. Want to add two values
together? The processor will add two registers together. Add ax,bx for
example adds bx to ax and stores the result in ax. Think of it (for the
C-style language initiated) as ax += bx; OKAY! Why is this important?
OK, take the instruction idx = 5; assume idx is an integer. It's fairly straightforward: set idx to 5. But what happens under the hood? Well, the data segment and an offset register (okay, assembly language experts, I am oversimplifying, but work with me) are set to point to the memory location of idx. Then the value of 5 is assigned to that memory location. Depending on the processor, the “5” may be MOVed into another register which then gets written to idx. All in all, it takes three to four processor ticks to accomplish this instruction. That's the beauty of C. It's meant to take the power of assembly language and make it more concise. The compiler translates the instructions directly into machine code and the instructions run.
In an object oriented paradigm however, things are not so
simple. The simple assignment I have been discussing can become amazingly more
complex. Because the assignment operator is overloadable (that is, for the
statement idx = 5, depending on what idx is, I can write a function that is
called when I say idx =), the amount of code increases. Add implicit copy constructors
and cast operators and what is a simple statement can turn under the hood into
a fair amount of code. Imagine if you will that I have a class called
Integer. Integer has an assignment operator and copy constructor defined. It also has a typecasting operator (operator int) defined. So idx = 5 can become: typecast 5 to Integer, creating an “implicit” temporary Integer object that then gets passed to the assignment operator.
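To make this concrete, here is a minimal sketch of the kind of Integer class described above; the class and the names in it are illustrative, not taken from any real library:

    #include <iostream>

    class Integer {
        int value;
    public:
        Integer(int v) : value(v) {}                          // conversion constructor: builds an Integer from an int
        Integer(const Integer &other) : value(other.value) {} // copy constructor
        Integer &operator=(const Integer &rhs) {              // overloaded assignment operator
            value = rhs.value;
            return *this;
        }
        operator int() const { return value; }                // typecast operator back to a plain int
    };

    int main() {
        Integer idx(0);
        idx = 5;   // not a single MOV: a temporary Integer(5) is constructed,
                   // then operator= runs to copy it into idx
        std::cout << static_cast<int>(idx) << std::endl;  // the typecast operator runs here
        return 0;
    }

Each of those steps is real work at runtime, which is the point being made here.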
We say garbage collection is a good thing, but if you use a
lot of memory, you still are asked to “plan” it properly. My argument is that while garbage collection can help guard against memory leaks, it still doesn't make a good substitute for proper code planning and programming in the first place.
As processors have gotten faster and memory cheaper and more expansive, the
Windows OS for example has become bloated! I remember (back when I was a teenager) using Windows 286 and Windows 3.0. I was amazed at how many programs I could run with 640K of memory, two floppy drives, and a 40 MB HDD (yes, megabytes!). Older programmers in the time of Ritchie, Thompson, and Stroustrup
knew how to maximize the use of 16 bytes of memory. Today, we will throw away
16K with no thought.
The other thing about the newer Java and C# is that both languages are still interpreted. Yes, they get compiled to a bytecode, but since the processor doesn't understand any other instruction set but its own, the bytecode has to be interpreted or recompiled to run. What do you think the Java Virtual Machine and .NET Just-In-Time Compiler are written in? They require native code, not managed code.
Above all else, what do you think your operating system is
written in? Major applications like Microsoft Office? Ever since DOS 3.0, C
and C++ have become the standard in writing operating systems (not discounting
C’s birth with UNIX and their continued coexistence and codependence. While
assembly language is still used to a limited extent, especially when wanting to
connect with a piece of hardware with no overload, C is the next best thing.
You can write the lowest code in assembly language and from C call the function
in question. You can set the address of the assembly language function to a
pointer declared naked so no prolog or epilog code is assumed if needed (functions
usually have code to set up parameters and local variables on the stack and
save the state of the system and then reverse the stack changes after the
function runs). An operating system kernel doesn’t have time to be recompiled
before execution. When you’re at the heart of a preemptive multitasking
operating system, every tick counts! You cannot have any unnecessary overhead.

When C was first developed forty years ago, it gave us just what we needed and little of what we didn't. Now I'm not against the newer
languages. They serve a valuable purpose in the age of Rapid Development. But
let’s not forget what these languages and their tools are written in. Combine
that with the operating system and all our major applications and you’ll see C and
C++ are still very useful indeed. | <urn:uuid:305c26b8-65e8-4b04-80f7-2cb8554f4931> | 4 | 1,314 | Personal Blog | Software Dev. | 54.066266 |
Reynolds number through a pipe
Hello, can someone tell me how the Reynolds number for flow through a pipe changes along its length? Is the only mechanism that the boundary layer gets thicker along the pipe, so the free-stream velocity increases? Assume no gas temperature change along the pipe's length.

From thinking about the actual definition of the Reynolds number, the ratio of inertial to viscous forces, I can see that the higher free-stream velocity in the regions where the boundary layer is thicker would increase the local inertial force of the gas.
I would like someone just to clarify.
Thanks a lot. | <urn:uuid:56f76523-6449-477a-88f1-46657c4e50cb> | 2.9375 | 126 | Comment Section | Science & Tech. | 56.350639 |
Question about silver staining
Posted 16 February 2005 - 10:12 PM
Posted 17 February 2005 - 01:41 AM
that was discussed at bioforum on topic
The process relies on differential reduction of silver ions that are bound to the side chains of amino acids or nucleic acids. It is supposed to be at least 100- to 1000-fold more sensitive than Coomassie staining and capable of detecting as little as 0.1-1 ng of polypeptide or DNA.
hope that helps you
Posted 17 February 2005 - 02:04 AM
Thanks a lot for your answer, but I want to know the details: which regions of proteins and DNA bind silver? Which amino acids can bind silver? As everyone knows, DNA consists of phosphate groups, deoxyribose sugar and bases; which of these binds silver?
Could you tell me again, Thanks | <urn:uuid:ded8e50a-fbb3-4694-878b-ec21c8ff5b76> | 2.71875 | 185 | Comment Section | Science & Tech. | 64.449282 |
A right circular, conical vessel of altitude 21 cm and base radius 11 cm is kept with its vertex downwards. If 2 litres of water is poured into it, how high above the vertex will the level of the water be?
I think I need to convert the 2 L to 2000 cc's, and use similar triangles to find the height of the water cone. Does that sound right? Thanks!
p.s. I'm supposed to use 22/7 for π, which makes the answer come out very "neat". If the answer is supposed to be 19 and 1/11 cm, then I've done it right. | <urn:uuid:a5e8bc9c-930f-447d-a47f-e082b610f7f3> | 2.921875 | 129 | Q&A Forum | Science & Tech. | 92.821667 |
“during this year a most dread portent took place. For the sun gave forth its light without brightness… and it seemed exceedingly like the sun in eclipse, for the beams it shed were not clear.”
This quote from Procopius of Caesarea is matched by other sources from around the world pointing to something – often described as a ‘dry fog’ – and accompanied by a cold summer, crop failures and a host of other problems. There’s been a TV special, books and much newsprint speculating on its cause – volcanoes, comets and other catastrophes have been suggested. But this week there comes a new paper in GRL (Larsen et al, 2008) which may provide a definitive answer….
It’s long been known that tree-rings (such as the one pictured from Arizona) often show an extremely small growth ring for AD 536 (you can count back from the marked AD 550 ring). In fact, if you look at the mean anomaly in a whole range of tree ring constructions, this event stands out along with 1601 and 1815 (known volcanic events) as being exceptional over the last 2000 years.
Average of the high-frequency components of 7 northern European tree ring reconstructions from Larsen et al, 2008. The filtering ensures that uncertainties in long term trends (which are not important in this context) don’t confuse the issue.
These data match the written sources quite well. However, tying it to a cause has always been plagued with problems of chronology. An initial attempt to tie this event to a volcanic pulse in the Dye3 ice core in Greenland foundered when the chronology was revised to put it 20 years earlier. However, there has recently been a concerted effort to place all the Greenland ice cores on a common timescale based on annual layer counts (Vinther et al, 2006). Because all the cores are being counted together, ambiguities in one can be corrected by reference to the others. Once the dates have been better established, the sulphate records (which generally show the impact of volcanic aerosols) can be examined to see if they line up. And lo and behold, they do:
The second peak in the picture is dated at 534 AD which is close enough to 536 AD given the one or two year uncertainty in counting. Note that the 534 AD peak is actually smaller than the one a few years earlier. In assessing the importance of an eruption though, it isn’t enough to have just a peak in Greenland. That could simply signify an eruption that was close by. Instead, people look for a matching peak in Antarctica. This signifies that the eruption was likely tropical and the aerosols were carried into both hemispheres by the stratospheric circulation. Here is where previous attempts often faltered. The dating of ice cores in Antarctica is less exact than in Greenland because the accumulation is slower (it doesn’t snow as much). However, the relatively new Dronning Maud Land (DML) core has comparable resolution to the Greenland ones, and this one does have a clear sulphate peak at about 542 +/- 17 years. That is good enough to be a match to the 536 AD peak in Greenland. The correction you’d need to make to align them exactly would also fix some other apparent offsets for smaller events in the subsequent 100 years.
So it probably was a volcano, somewhere in the tropics, and it was likely the size of Tambora in 1815. There has been some speculation that it was an earlier eruption of Krakatoa (which went off again in 1883), but that is uncertain, as are the numerous consequences such as the fall of Rome or the rise of Islam which have been attributed to this event. While not exploring that too deeply, this quote from Michael the Syrian indicates dramatically the potential for climate events like this one to really spoil your day:
“The sun was dark and its darkness lasted for eighteen months; each day it shone for about four hours; and still this light was only a feeble shadow … the fruits did not ripen and the wine tasted like sour grapes.” | <urn:uuid:05708725-9bc1-414f-a6bb-d43839a5c8fd> | 3.484375 | 863 | Comment Section | Science & Tech. | 51.532158 |
If you have specific Questions/Numerical problems in which you need help please post them here:
Section I- Force, Work, Energy and Power
- What are the forces acting on a box when it is pulled along a frictional surface?
- What are the forces acting on a moving rocket?
- Is mass conserved in a moving rocket?
- Two bodies A and B of mass 10kg and 15 kg are at rest. Which body will require greater force to start moving? Explain.
- What is the cause of acceleration in a body?
- State the energy changes that take place in (i) an inverter (ii) a solar battery
- What is the SI unit of energy, momentum, Force?
- What is the physical quantity derived from the change of momentum?
- (a) What is the magnitude of the gravitational force that the Earth exerts on the Moon?
(b) What is the magnitude of the gravitational force that the Moon exerts on the Earth?
10. A sailboat is tied to a mooring with a line. The wind is from the southwest. Identify all the forces acting on the sailboat.
11. A box slides down an incline with uniform acceleration. It starts from rest and attains a speed of 2.7 m/s in 3.0 s. Find (a) the acceleration and (b) the distance moved in the first 6.0 s.
12. A rock is swung on the end of a rope in a horizontal circle at constant speed. The rope breaks. Immediately after the rope breaks, the ball will ________
13. A 40,000-kg freight car is coasting at a speed of 5.0 m/s along a straight track when it strikes a 30,000-kg stationary freight car and couples to it. What will be their combined speed after impact?
14. An ice skater goes into a fast spin by holding her arms tightly to her body. Explain why this makes her spin faster.
15. A 90-g ball moving at 100 cm/s collides head-on with a stationary 10-g ball. Determine the speed of each after impact if (a) they stick together, (b) the collision is perfectly elastic
16. Just before striking the ground, a 2.0-kg mass has 400 J of KE. If friction can be ignored, from what height was it dropped?
17. Compute the power output of a machine that lifts a 500-kg crate through a height of 20.0 m in a time of 60.0 s.
18. A 200-kg cart is pushed slowly up an incline. How much work does the pushing force do in moving the cart up to a platform 1.5 m above the starting point if friction is negligible?
- Weight of the body acting downwards, Normal reaction acting upward, pulling force, frictional force acting opposite to the direction of the pulling force.
- Drag, weight, thrust, resistance.
- No. Mass is not conserved in a moving rocket. Fuel and oxidizer are consumed by combustion and expelled as exhaust, resulting in the loss of overall mass of the rocket.
- The body with mass 15 kg will require greater force to make it move from rest. The body of 15kg mass will have more inertia than the body of 10 kg as inertia is a function of mass.
- A body is accelerated when an unbalanced force acts on the body and the body is not in equilibrium under different forces.
- (i) Electrical energy to chemical energy to electrical energy. (ii) Solar energy to electrical energy.
- Joule (energy), kg·m/s (momentum), Newton (force).
- Force or impulse.
- Same in both cases (a) and (b), by Newton's third law: 1.98 × 10^20 N
10. 1) the force of gravity; 2) the force of water opposing gravity and the force of water currents; 3) the force of the wind; 4) the force of the line tied to the mooring
- (a) 0.90 m/s^2, (b) 16 m
12. move outward tangent to the circle from the point the rope broke
13. 2.9 m/s (momentum conservation: v = 40,000 × 5.0 / 70,000 m/s)
14. Her angular momentum remains constant during the spin. Holding her arms tightly to her body decreases her moment of inertia, so her angular velocity increases; with the arms stretched out, the moment of inertia is larger and the spin is slower.
15. (a) 90 cm/s, (b) 80 cm/s; 1.8 m/s
16. 20.0 m
17. 1.63 kW (P = mgh/t = 500 × 9.8 × 20.0 / 60.0 W)
18. 2.94 kJ (W = mgh = 200 × 9.8 × 1.5 J)
Section II- Light
- A ray of light travels through Light Flint glass to Crown Glass. Will the angle of refraction be greater than the angle of incidence? Explain.
- On what factors does the lateral shift of the light ray depend while it passes through a glass plate?
- Why does diamond glitter under light?
- Under which conditions does the phenomenon of total internal reflection occur?
- Why does a glass beaker filled with glycerin seem to disappear when it is immersed in a larger beaker containing glycerin?
- A person standing in water sees his feet bent. Explain why.
- With angle of prism remaining constant, the angle of deviation will only depend on the angle of incidence. Justify the statement.
- Under what conditions will the angle of deviation be minimum?
- What is the use of fiber optics?
10. State the conditions on which the critical angle of a medium depend?
11. Light rays travelling from Flexi glass to crown glass are incident at an angle greater than the critical angle, but total internal reflection is not observed. Explain.
12. Explain why white cloth washed with indigo appears whiter?
13. What is achieved by rotating the camera lens while capturing a photograph?
- The angle of refraction is greater than the angle of incidence. The refractive index of Light Flint glass is 1.58 whereas that of crown glass is 1.52, so the ray travels from a denser to a rarer medium and bends away from the normal on entering the crown glass.
- The factors are the thickness of the glass plate and the angle of incidence of the light on the glass surface.
- Diamond is cut in such a way that the faces of the diamond act as prisms. Due to total internal reflection, the light rays get trapped inside the prism and have successive reflections inside the diamond which makes it glitter.
- Total internal reflection occurs when a ray of light travels from a denser medium to a rarer medium and the angle of incidence becomes greater than the critical angle.
- This phenomenon occurs because glass and glycerin have equal refractive indices. An object is visible only when light is reflected or refracted at its surface. But as the μ values are almost equal, the speed of light in both media is the same, making them act like a single body. Hence no reflection or refraction takes place at the beaker's surfaces, and it disappears.
- Assume that X is the tip of the foot immersed in water. The rays emerging from point X are bent away from the normal as they cross the surface and come to the eyes of the observer, giving the illusion that they are coming from point Y, located above X. Every point is likewise raised, making it appear as if the foot is bent, as seen in the picture.
- Since the deviation of a prism is d = i + e - A, for A constant the deviation will depend on the angle of incidence only.
- Hint: for this case i = e and r1 = r2.
- Fiber optics are used to transmit light signals over long distance. They work on the principle of total internal reflection.Read on to understand the principle
Fiber optics (optical fibers) are long, thin strands of very pure glass about the diameter of a human hair. They are arranged in bundles called optical cables and are used to transmit light signals over long distances. The parts of a single optical fiber are as follows:
Core– Thin glass center of the fiber where the light travels
Cladding– Outer optical material surrounding the core that reflects the light back into the core (like walls)
Buffer coating– Plastic coating that protects the fiber from damage and moisture
A fiber optic is so thin that it becomes extremely flexible even though it is made completely of glass. In a fiber optic, light enters one end and, because its diameter is so small, the light is never able to strike the inside walls at less than the critical angle, even when bent. This means that the light undergoes total internal reflection each time it strikes the wall of the fiber optic. Therefore, the light is only able to exit at the other end of the fiber. Fiber optic cables are used to carry telephone and computer communications. Fiber optics are advantageous to the user because they can carry much more information in a much smaller cable, have no interference from electromagnetic fields (which results in "clearer" connections), have no electrical resistance, and there is no hazard of electrocution if a fiber optic cable breaks.
- The critical angle depends on the colour (i.e. the wavelength) of the light, the pair of media concerned, and the temperature of the medium.
- For total internal reflection to occur, light rays have to travel from a denser medium to a rarer medium. Since crown glass is the denser of the two, total internal reflection will not occur.
- Indigo imparts a bluish tinge on the white cloth which reflects more light and appears whiter.
- The lens is adjusted so that a sharp image of the subject is formed on the film; for a distant subject this means the film lies at the focal point of the lens.
See More Questions, Diagrams, Solutions in the Forum | <urn:uuid:10d20d81-4b92-46f6-a3ac-40b0ff937a6b> | 4.03125 | 1,988 | Q&A Forum | Science & Tech. | 69.360378 |
MS Access: CInt Function
In Microsoft Access, the CINT function converts a value to an integer.
The syntax for the CInt function is:
CInt( expression )
expression is the value to convert to an integer.
- Access 2013, Access 2010, Access 2007, Access 2003, Access XP, Access 2000
Dim LValue As Integer
LValue = CInt(8.45)
In this example, the variable called LValue would now contain the value of 8.
Be careful using CInt. If you were to use the following code:
Dim LValue As Integer
LValue = CInt(8.5)
The variable LValue would still contain the value of 8. CInt uses "banker's rounding": when the fraction is exactly .5, it rounds to the nearest even number, so CInt(8.5) returns 8 while CInt(9.5) returns 10.
If the fraction is less than .5, the result will round down.
If the fraction is greater than .5, the result will round up.
We've found it useful in the past to add 0.0001 to the value before applying the CInt function to simulate the regular rounding process.
For example, CInt(8.50001) will result in a value of 9. | <urn:uuid:7e4d39c7-c550-4f0c-aadc-41ec5783fc7a> | 3.53125 | 252 | Documentation | Software Dev. | 81.772228 |
This module defines a class which implements the client side of the
HTTP protocol. It is normally not used directly -- the module
urllib uses it to handle URLs that use HTTP.
The module defines one class, HTTP:
- HTTP ([host[, port]])
An HTTP instance
represents one transaction with an HTTP server. It should be
instantiated passing it a host and optional port number. If no port
number is passed, the port is extracted from the host string if it has
the form host:port, else the default HTTP port (80)
is used. If no host is passed, no connection is made, and the
connect() method should be used to connect to a server. For
example, the following calls all create instances that connect to the
server at the same host and port:
>>> h1 = httplib.HTTP('www.cwi.nl')
>>> h2 = httplib.HTTP('www.cwi.nl:80')
>>> h3 = httplib.HTTP('www.cwi.nl', 80)
Once an HTTP instance has been connected to an HTTP server, it
should be used as follows:
- Make exactly one call to the putrequest() method.
- Make zero or more calls to the putheader() method.
- Call the endheaders() method (this can be omitted if
step 4 makes no calls).
- Optional calls to the send() method.
- Call the getreply() method.
- Call the getfile() method and read the data off the
file object that it returns.
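Putting the steps together, a session along these lines fetches a page with the GET method (the host and path here are only examples):

>>> import httplib
>>> h = httplib.HTTP('www.cwi.nl')
>>> h.putrequest('GET', '/index.html')
>>> h.putheader('Accept', 'text/html')
>>> h.endheaders()
>>> errcode, errmsg, headers = h.getreply()
>>> print errcode # Should be 200
>>> f = h.getfile()
>>> data = f.read() # Get the raw HTML
>>> f.close()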
| <urn:uuid:ca89aec9-cd35-4d7d-b620-bcc44c560ee5> | 3.5 | 353 | Documentation | Software Dev. | 79.715162 |
How to Think Like a Computer Scientist
In the last chapter we had some problems dealing with numbers that were not integers. We worked around the problem by measuring percentages instead of fractions, but a more general solution is to use floating-point numbers, which can represent fractions and other non-integers. In Java, the floating-point type is called double.
You can create floating-point variables and assign values to them using the same syntax as the other types. For example:
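    double pi;
    pi = 3.14159;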
It is also legal to declare a variable and assign a value to it at the same time:
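    double pi = 3.14159;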
In fact, this syntax is quite common.
Although floating-point numbers are quite useful, they are often a source of confusion because there seems to be an overlap between integers and floating-point numbers. For example, if you have the value 1, is that an integer, a floating-point number, or both?
Strictly speaking, Java distinguishes the integer value 1 from the floating-point value 1.0, even though they seem to be the same number. Nevertheless, they belong to different types, and strictly speaking, you are not allowed to make assignments between types. For example, the following is illegal:
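    int x = 1.1;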
because the variable on the left is an int and the value on the right is a double. But it is easy to forget this rule, especially because there are places where Java will automatically convert from one type to another. For example:
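    double y = 1;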
should technically not be legal, but Java allows it because converting an int to a double does not lose any information. This leniency is convenient, but it can cause problems; for example:
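    double y = 1 / 3;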
You might expect the variable y to be given the value 0.333333, which is a legal floating-point value, but in fact it will get the value 0.0. The reason is that the expression on the right appears to be the ratio of two integers, so Java does integer division, which yields the integer value 0. Converted to floating-point, the result is 0.0.
One way to solve this problem (once you figure out what it is) is to make the right-hand side a floating-point expression:
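    double y = 1.0 / 3.0;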
This sets y to 0.333333, as expected.
All the operations we have seen so far--addition, subtraction, multiplication, and division--also work on floating-point values, although you might be interested to know that the underlying process is completely different. In fact, most processors have special hardware just for performing floating-point operations.
In addition, the Java library comes with a variety of special functions for performing mathematical operations on doubles. These functions are invoked using a syntax that is similar to the print commands we have already seen:
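    double root = Math.sqrt(17.0);
    double angle = 1.5;
    double height = Math.sin(angle);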
The first example sets root to the square root of 17. The second example finds the sine of 1.5, which is the value of the variable angle. Java assumes that the values you use with sin and the other trigonometric functions (cos, tan) are in radians. To convert from degrees to radians, you can divide by 360 and multiply by 2*PI. Conveniently, Java provides PI as a built-in value:
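    double degrees = 90;
    double angle = degrees * 2 * Math.PI / 360.0;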
Notice that PI is in all capital letters. Java does not recognize Pi, pi, or pie.
So far we have only been using the functions that are built into Java, but it is also possible to add new functions. In other languages, these functions are sometimes called procedures, or subroutines, but in object-oriented languages like Java they are usually called methods.
Actually, we have already seen one method definition, main. The method named main is special in that it indicates where the execution of the program begins, but the syntax for main is the same as for any other method definition:
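    public static void NAME(LIST OF PARAMETERS) {
        STATEMENTS
    }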
You can make up any name you want (except, of course, that you can't call it main). The list of parameters specifies what information, if any, you have to provide in order to use (or invoke) the new function.
The single parameter for main is String args, which indicates that whoever invokes main has to provide an array of Strings (we'll get to arrays in Chapter 10). The first couple of methods we are going to write have no parameters, so the syntax looks like this:
This method is named newLine, and the empty parentheses indicate that it takes no parameters. It contains only a single statement, which prints an empty String, indicated by "". Printing a String with no letters in it may not seem all that useful, except remember that println skips to the next line after it prints, so this statement has the effect of skipping to the next line.
In main we can invoke this new method using syntax that is similar to the way we invoke the built-in Java commands:
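    public static void main(String[] args) {
        System.out.println("First line.");
        newLine();
        System.out.println("Second line.");
    }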
The output of this program is
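    First line.

    Second line.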
Notice the extra space between the two lines. What if we wanted more space between the lines? We could invoke the same procedure repeatedly:
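    public static void main(String[] args) {
        System.out.println("First line.");
        newLine();
        newLine();
        newLine();
        System.out.println("Second line.");
    }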
Or we could write a new method, named threeLine, that prints three new lines:
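    public static void threeLine() {
        newLine();  newLine();  newLine();
    }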
You should notice a few things about this program:
- You can invoke the same procedure repeatedly. In fact, it is quite common and useful to do so.
- You can have one method invoke another method. In this case, main invokes threeLine and threeLine invokes newLine.
- In threeLine I wrote three statements all on the same line, which is syntactically legal, although it is usually a better idea to put each statement on a line by itself, to make the program easier to read.
So far, it may not be clear why it is worth the trouble to create all these new methods. Actually, there are a lot of reasons, but this example only demonstrates two:
- Creating a new method gives you an opportunity to give a name to a group of statements. Methods can simplify a program by hiding a complex computation behind a single command, and by using English words in place of arcane code. Which is clearer, newLine or System.out.println("")?
- Creating a new method can make a program smaller by eliminating repetitive code. For example, how would you print nine consecutive new lines? You could invoke threeLine three times.
Pulling together all the code fragments from the previous section, the whole class definition might look like this:
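    public class NewLine {

        public static void newLine() {
            System.out.println("");
        }

        public static void threeLine() {
            newLine();  newLine();  newLine();
        }

        public static void main(String[] args) {
            System.out.println("First line.");
            threeLine();
            System.out.println("Second line.");
        }
    }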
The first line indicates that this is the class definition for a new class called NewLine. So far I haven't said much about what a class is. For now, let's just say that a class is a collection of methods. In this case, the class named NewLine contains three methods, named newLine, threeLine, and main.
Another class we've seen is the Math class. It contains methods named sqrt, sin, and many others. When we invoke a mathematical function, we have to specify the name of the class, Math, and the name of the function. That's why the syntax is slightly different for built-in methods and the methods that we write:
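    Math.pow(2.0, 10.0);
    newLine();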
The first statement invokes the pow method in the Math class (which raises the first value to the power of the second value). The second statement invokes the newLine method, which Java assumes (correctly) is in the NewLine class, which is what we are writing.
If you try to invoke a method from the wrong class, the compiler will generate an error. For example, if you type:
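    pow(2.0, 10.0);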
The compiler will say something like, "Can't find a method named pow in class NewLine." If you have seen this message, you might have wondered why it was looking for pow in your class definition. Now you know.
When you look at a class definition that contains several methods, it is tempting to read them from top to bottom, but that is likely to be confusing, because that is not the order of execution of the program.
Execution always begins at the first statement of main, regardless of where it is in the program (in this case I deliberately put it at the bottom). Statements are executed one at a time, in order, until you reach a method invocation. Method invocations are like a detour in the flow of execution. Instead of going to the next statement, you go to the first line of the invoked method, execute all the statements there, and then come back and pick up again where you left off.
That sounds simple enough, except that you have to remember that one method can invoke another. Thus, while we are in the middle of main, we might have to go off and execute the statements in threeLine. But while we are executing threeLine, we get interrupted three times to go off and execute newLine.
For its part, newLine invokes the built-in method println, which causes yet another detour. Fortunately, Java is quite adept at keeping track of where it is, so when println completes, it picks up where it left off in newLine, and then gets back to threeLine, and then finally gets back to main so the program can terminate.
Actually, technically, the program does not terminate at the end of main. Instead, execution picks up where it left off in the program that invoked main, which is the Java interpreter. The Java interpreter takes care of things like deleting windows and general cleanup, and then the program terminates.
What's the moral of this sordid tale? When you read a program, don't read from top to bottom. Instead, follow the flow of execution.
Some of the built-in methods we have used have parameters, which are values that you provide to let the method do its job. For example, if you want to find the sine of a number, you have to indicate what the number is. Thus, sin takes a double value as a parameter. To print a string, you have to provide the string, which is why println takes a String as a parameter.
Some methods take more than one parameter, like pow, which takes two doubles, the base and the exponent.
Notice that in each of these cases we have to specify not only how many parameters there are, but also what type they are. So it shouldn't surprise you that when you write a class definition, the parameter list indicates the type of each parameter. For example:
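    public static void printTwice(String phil) {
        System.out.println(phil);
        System.out.println(phil);
    }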
This method takes a single parameter, named phil, that has type String. Whatever that parameter is (and at this point we have no idea what it is), it gets printed twice. I chose the name phil to suggest that the name you give a parameter is up to you, but in general you want to choose something more illustrative than phil.
In order to invoke this method, we have to provide a string. For example, we might have a main method like this:
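    public static void main(String[] args) {
        printTwice("Don't make me say this twice!");
    }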
The string you provide is called an argument, and we say that the argument is passed to the method. In this case we are creating a string value that contains the text "Don't make me say this twice!" and passing that string as an argument to printTwice where, contrary to its wishes, it will get printed twice.
Alternatively, if we had a string variable, we could use it as an argument instead:
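    String argument = "Never say never.";
    printTwice(argument);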
Notice something very important here: the name of the variable we pass as an argument (argument) has nothing to do with the name of the parameter (phil). Let me say that again:
The name of the variable we pass as an argument has nothing to do with the name of the parameter.
They can be the same or they can be different, but it is important to realize that they are not the same thing, except that they happen to have the same value (in this case the string "Never say never.").
The value you provide as an argument must have the same type as the parameter of the method you invoke. This rule is very important, but it often gets complicated in Java for two reasons: some methods, like print and println, accept arguments of many different types, so the rule seems not to apply to them; and when you do violate the rule, the compiler's error message does not always make the mismatch obvious.
One last thing you should realize is that parameters and other variables only exist inside their own methods. Within the confines of main, there is no such thing as phil. If you try to use it, the compiler will complain. Similarly, inside printTwice there is no such thing as argument.
In the last chapter, I showed a few ways in which expressions and statements can be composed, meaning that you use one expression as part of another. Well, you can apply the same idea to methods and method invocations. For example, you can use any expression as an argument to a method:
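    double x = Math.cos(angle + Math.PI / 2);   // assumes angle is a double declared earlier; the assignment to x is incidental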
This statement takes the value Math.PI, divides it by two, and adds the result to the value of the variable angle. The sum is then passed as an argument to the cos method. (Notice that PI is the name of a variable, not a method, so it takes no arguments, not even the empty argument list ().)
You can also take the result of one method and pass it as an argument:
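    double x = Math.exp(Math.log(10.0));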
This statement finds the logarithm (base e) of 10 and then raises e to that power. The result gets assigned to x; I hope you know what it is.
You might have noticed by now that some of the methods we are using yield results, like the Math methods. But others don't, like the print methods. That raises some questions:
- What happens if you invoke a method and don't do anything with the result (that is, you don't assign it to a variable or use it as part of a larger expression)?
- What happens if you use a method that yields no result, like println, as part of an expression?
- Can we write methods that yield results, or are we stuck with methods like newLine and printTwice?
The answer to the third question is "yes, you can write methods with results," and we'll do it in a couple of chapters. I will leave it up to you to answer the other two questions by trying them out. In fact, any time you have a question about what is legal or illegal in Java, a pretty good way to find out is to ask the compiler. | <urn:uuid:b5be2285-723d-4898-84e6-d0c545599f9d> | 4.4375 | 2,626 | Documentation | Software Dev. | 53.698047 |