What can astronomy enthusiasts look forward to in the upcoming year? Here are some events, due to the motions of solar system bodies, that could have been predicted a century before now. Here also are events due entirely to human ingenuity that would have been thought possible only by the most far-sighted dreamers of 1912. A robot landing on Mars and then rolling off to explore the landscape! Another robot leaving orbit around one asteroid and setting out to explore a second one! We truly live in an age of wonders. But then we always have, for as long as we have looked up at the sky.
The World Won’t End In December
If you enjoy watching an “astronomy person” grit their teeth, or try really, really hard to be polite and not roll their eyes, just ask them about the end of the Mayan calendar and the end of the world in December 2012. AARRGGHH!! If you want the truth (and some folks just can’t handle the truth; thank you, Jack Nicholson), there is a web page from NASA that provides it.
Or you can just watch the movie. No, not THAT movie!
Is The Moon Up Tonight?
So much of what you can see in the sky depends on the phase of the moon. The full moon is in the sky all night, and it is really bright! Meteor showers will be less obvious if the moon is bright, and the full glory of the Milky Way is washed out unless the night is moonless. So here is your handy moon phase calendar generator, so you won’t have to guess.
A quick guide to when various moon phases rise and set:
Waxing crescent: low in the west after sunset; sets early in the night.
First quarter: high in the south at sunset; sets around midnight.
Waxing gibbous: in the southeast at sunset and in the sky most of the night.
Full moon: rises at sunset and sets at sunrise; in the sky all night.
Waning gibbous: rises a little after sunset and sets a little after sunrise.
Last (or third) quarter: rises at midnight and sets at noon.
Waning crescent: visible only in the early morning hours before dawn.
New moon: not visible, although it rises and sets with the sun.
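The pattern behind this list is that the moon rises roughly 50 minutes later each day as it moves through its 29.5-day cycle. A rough rule of thumb (emphatically not an ephemeris calculation) can be sketched in a few lines of Python:

```python
LUNAR_CYCLE_DAYS = 29.53  # synodic month

def approx_moonrise_delay_hours(age_days: float) -> float:
    """Rough delay of moonrise after sunrise, in hours.

    The moon rises ~50 minutes later each day; this is a
    planning rule of thumb, not a real ephemeris.
    """
    return age_days * 24.0 / LUNAR_CYCLE_DAYS

# A full moon is ~14.8 days old, so it rises ~12 hours after
# sunrise -- i.e., around sunset, matching the list above.
print(round(approx_moonrise_delay_hours(14.77), 1))  # 12.0
```

Real rise times shift with latitude and season, so treat this only as a first guess.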
Here are the most easily seen showers (not a comprehensive list). To avoid confusion, I’ve given the days and times as the night you need to stay up: times before midnight fall on the earlier date, times after midnight on the later date, and all times are for the Eastern time zone.
Tuesday/Wednesday, January 3rd/4th: Quadrantids. The moon will be in a waxing gibbous phase, but when it sets around 3:20 a.m., you should get some good viewing in. The forecast temperature for Lynchburg of 19° F means only the truly dedicated will be watching!
Saturday/Sunday, April 21st/22nd: Lyrids. A new moon means conditions are near-perfect. Peak should be around midnight.
Saturday/Sunday, August 11th/12th: Perseids. These are everyone’s favorite meteors, because the weather is seldom severe. The waning crescent moon (actually pretty close to last quarter) rises around midnight but shouldn’t be too much of an interference.
Saturday/Sunday, October 21st/22nd: Orionids. Meteor showers are associated with comets that leave debris strewn all along their orbits. When the Earth crosses the orbit, that debris enters our atmosphere and creates “shooting stars”. The Orionids are due to the famous Halley’s Comet. The moon will set around midnight.
Sunday/Monday, November 11th/12th: Taurids. These meteors are famous for producing fireballs! I observed one myself one Halloween night that was really impressive—it was as though someone had fired off an old-fashioned flash bulb in my face. It left a smoke trail that was visible for several minutes. There is an earlier peak at the end of October (hence my Halloween experience), but the full moon will interfere with that in 2012. The moon will be new by the date given here, however.
Friday/Saturday, November 16th/17th: Leonids. The moon sets around 8 p.m., so will not interfere.
Wednesday/Thursday, December 12th/13th: Geminids. New moon for the peak! This shower gives fairly predictable rates of around 100 per hour, one of the best there is.
Meteors from these showers are generally seen both before and after the dates given, although they do peak on these dates. Although meteors in a given shower all appear to emerge from a point in the sky (the “radiant”), this is an optical illusion. They follow parallel paths to the Earth, and their paths appear to converge for the same reason that parallel railroad tracks do. But they will appear all over the sky, and the best viewing tip is simply to lie flat, make yourself comfortable, and look up.
Lunar eclipses can only occur when the moon is full, and can be seen anywhere on Earth if the moon is visible. The Earth is between the sun and the moon, and its relatively large shadow falls on the moon.
Solar eclipses can only occur when the moon is new. The moon is between the Earth and the sun, and since it is much smaller than the Earth, its shadow does not fall on all the Earth’s surface. A total solar eclipse is visible (if at all) only along a narrow strip, while partial phases (where the moon covers some, but not all, of the sun) are seen along either side of this central path.
Lunar eclipses do not occur with every full moon, and solar eclipses do not occur with every new moon. The two bodies must line up as seen from Earth for an eclipse to be seen.
Complicating this (and leading to total solar eclipses of varying duration) is the fact that the apparent sizes of the moon and the sun as seen from Earth are very close to the same. The moon is 400 times smaller but it is also 400 times closer. If they were exactly the same, of course, totality would last for only an instant. But the moon’s path around the Earth is elliptical, and sometimes it is closer and therefore appears larger. If a total solar eclipse were to occur when the moon is nearer the Earth, the duration of totality from a given spot on Earth would be longer. Conversely, if the eclipse occurs when the moon is farther from Earth (and therefore appears smaller), it would cover only the central part of the sun, and leave a “ring of fire” around the edge of the moon’s disk. This is an annular eclipse, and there will be one of these in 2012.
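The near-coincidence of apparent sizes is easy to check with a little arithmetic. A quick sketch (approximate diameters and distances; the formula is exact, the inputs rounded):

```python
import math

def angular_diameter_deg(diameter_km: float, distance_km: float) -> float:
    """Apparent angular size of a body, in degrees."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

MOON_DIAMETER_KM = 3474.8
SUN_DIAMETER_KM = 1_391_400.0

# Moon at perigee vs. apogee (approximate distances):
print(round(angular_diameter_deg(MOON_DIAMETER_KM, 363_300), 3))    # 0.548
print(round(angular_diameter_deg(MOON_DIAMETER_KM, 405_500), 3))    # 0.491
# The sun, at ~1 AU:
print(round(angular_diameter_deg(SUN_DIAMETER_KM, 149_600_000), 3)) # 0.533
```

At perigee the moon (0.548°) can fully cover the sun (0.533°), giving a total eclipse; at apogee (0.491°) it cannot, leaving the annular “ring of fire.”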
For those of us in the eastern U.S., 2012 is not a year for eclipses, unless we travel!
Lunar Eclipses: Partial eclipse on June 4th, visible before dawn in central and western US.
Penumbral eclipse on November 28th, most visible in Pacific states and west before dawn. A penumbral eclipse is one where no part of the moon’s surface is entirely shaded from the sun by the Earth. But part of the moon will be partially shaded by the Earth. In other words, part of the full moon will be slightly less bright than the rest. You will probably have to look closely to notice it.
Solar Eclipses: Annular, evening of May 20th. The sun will set while eclipsed in some places (Mountain time zone) and the entire eclipse will be visible in others (Pacific time zone). The sun will be higher in the sky during eclipse as one goes farther west—a point in the Pacific just southwest of the Aleutian Islands would be an ideal location for viewing it, except for the likely cloud cover.
Total: November 14th in eastern Australia, just after sunrise. Maximum eclipse in open Pacific Ocean, duration 4 minutes.
Planets In The Night Sky
Mercury: This closest planet to the sun is hard to spot because of that very proximity. From our point of view, it never appears more than 28° away from the sun, usually less. Your fist at arm’s length is about 10° wide.
How good a viewing opportunity we get depends on how far Mercury swings from the sun (which varies because of Mercury’s elliptical orbit) and the angle the sun and Mercury make with the horizon. The two best opportunities to glimpse Mercury with your naked eye will come just after sunset in early March, and just before sunrise in early December.
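The 28° figure follows from simple geometry. Assuming circular, coplanar orbits (a simplification; the real geometry varies), the greatest elongation of an inferior planet is asin(r_planet / r_earth), and plugging in Mercury's perihelion and aphelion distances brackets the range:

```python
import math

def max_elongation_deg(r_planet_au: float, r_earth_au: float = 1.0) -> float:
    """Greatest elongation of an inferior planet, assuming circular,
    coplanar orbits (a simplification)."""
    return math.degrees(math.asin(r_planet_au / r_earth_au))

# Mercury's distance from the sun ranges ~0.31-0.47 AU:
print(round(max_elongation_deg(0.3075), 1))  # 17.9
print(round(max_elongation_deg(0.4667), 1))  # 27.8
```

So depending on where Mercury is in its elliptical orbit, it strays between roughly 18° and 28° from the sun, never more.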
Venus: Venus is currently the bright “evening star” in the southwest, rising ever higher (farther from the sun) each night and reaching its maximum elongation from the sun in late March. It then begins its descent toward an historic transit of the sun on June 6th (see below). It emerges into the morning sky a few days later, and spends the rest of 2012 as a “morning star”.
Mars: Mars reaches opposition on March 3rd. This is the point when it is exactly opposite the sun in our sky, so that the sun, Earth, and Mars make a straight line.
This is when it is nearest to us, and therefore brightest, although just how near varies, once again because of an elliptical orbit (Mars’, this time). At this opposition, Mars is not especially close to us compared with others. Mars is notably ruddy (more orange than red), and as it brightens early in the year, this will be even more obvious. Look for Mars above the full moon on the night of March 7th.
Jupiter: The largest planet in our system begins the year high in the south in the early evening, drifting a little farther west on each successive night. In May it is invisible, passing behind the sun from our point of view. It comes back into view in the early morning sky a couple of weeks later, and on July 1st it will pass within 5° of Venus before sunrise. Starting in September it is again an evening sight, and on December 3rd Jupiter will be at opposition. A planet at opposition is highest in our southern sky at midnight.
Saturn: The ringed planet is currently an early morning sight, rising about 1:40 a.m. in early January. But that rising time gets a little earlier every night, and by the time pleasant weather arrives in May, it will be visible in the east after sunset. It should be a great sight all summer before it disappears behind the sun in October.
In 1882, and again in 2004, Venus passed across the face of the sun, an event known as a transit. It will do so again in 2012, and then not again until 2117. Unless you have some secret longevity formula, this is your last chance.
These events are as rare as they are predictable. Transits come in pairs separated by eight years, with the pairs themselves separated by more than a century (gaps of 105.5 and 121.5 years, alternating). The 1882 transit was preceded by one in 1874, and the 2117 event will be followed by one in 2125.
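The spacing can be checked directly from the transit years mentioned here:

```python
# Venus transit years mentioned in the text.  Transits come in pairs
# eight years apart, with each pair separated by more than a century.
transits = [1874, 1882, 2004, 2012, 2117, 2125]
gaps = [b - a for a, b in zip(transits, transits[1:])]
print(gaps)  # [8, 122, 8, 105, 8]
```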
From the U.S., the transit begins at the end of the day, and the sun sets before it is over. The safest way to see it is with a pinhole in a card, held in front of a large white card. Adjust the distance between the two until you see a sharp image. If you want to get a little fancier, you can use a shoebox as a projector.
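How big will the projected image be? The sun's disk spans about 0.53°, so the image diameter is roughly the pinhole-to-screen distance times that angle in radians, i.e. just under 1% of the distance. A quick sketch:

```python
import math

SUN_ANGULAR_DIAMETER_DEG = 0.533  # approximate average

def pinhole_image_mm(screen_distance_mm: float) -> float:
    """Diameter of the projected solar disk for a simple pinhole projector."""
    return screen_distance_mm * math.radians(SUN_ANGULAR_DIAMETER_DEG)

# A shoebox ~300 mm long gives a solar image ~2.8 mm across:
print(round(pinhole_image_mm(300), 1))  # 2.8
```

Venus's silhouette is about 1/30 the sun's width, so a longer box gives a bigger, easier-to-see image.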
The entire transit from start to finish will be visible from Alaska and Hawaii, northwestern Canada, northern areas of Europe, most of China, Japan, eastern Australia, and essentially all of the Pacific Ocean. The map below shows the global zones of visibility.
Spacecraft Launches And Rendezvous
Finally, what about our robot solar system explorers? Here is a rundown of some upcoming highlights:
In July, the Dawn mission will complete its survey of the asteroid Vesta and will depart for Ceres, the largest asteroid. It will arrive at Ceres in 2015.
The Curiosity rover will be landing (softly, we hope!) on Mars just after midnight on August 6th. The rover is too big and heavy to land with airbags as the Spirit and Opportunity rovers did in 2004. There is a great animation showing the frighteningly complex sequence of events required to land the rover safely, from atmospheric entry to successful touchdown to the beginning of its scientific mission.
Its landing site on Mars is Gale Crater, whose central mountain shows layered rocks that look intriguing. The nuclear-powered rover will have plenty of energy to explore extensively; it won’t be dependent on solar cells that provide less power in the winter and get progressively more dusty as they continue to operate. Here is Gale Crater and the landing ellipse for Curiosity.
There are no other planetary missions scheduled to launch or rendezvous this year, but the Cassini orbiter at Saturn will complete several very close (less than 50 miles) flybys of the ice moon Enceladus. Enceladus surprised us all with its ice fountains, captured beautifully in this image.
Save this post! I wish you all clear skies and happy viewing in 2012.
Web edition: August 2, 2007
Members of an established ecosystem develop a sense of balance, usually permitting at least limited biodiversity and a stable structure. When interlopers arrive that aren't responsive to the same environmental checks and balances, they can overrun the ecosystem, eliminating some members and quickly dominating others. Such bullying immigrants are known as invasive species, and they can be newly introduced garden plants, fish, insects, trees, worms, even fungi. The U.S. Department of Agriculture offers one-stop shopping for news and impacts of such invasive species.
Using Solar Power

When you think of solar power, you think of heating and light for your home. That is just one of the many things we use solar power for. Solar power is everywhere, and it is growing every day. Many different products are made to run on solar power. This article lists these products and their uses, and explains how solar power makes them work. Solar power uses the sun's natural energy to produce electricity, heat, and more. When you use solar power, you are drawing on a natural resource in a way that cannot harm the earth as other methods can.
There are more products that use solar power than we realize. Many electronics use some type of solar power in order to function completely and accurately. For example, calculators are solar power products. These calculators may or may not have on and off switches; some rely entirely on the solar panel to turn on and off. A solar-powered calculator needs a certain amount of light on its solar panel in order to turn on and perform what you want it to do: add, subtract, divide, multiply, and more. The solar panel in a calculator is not as big as the one you would use to power your home; its size is chosen to provide just the power the device needs. Solar power products can also be found among travel products, outdoor recreation gear, safety products, emergency products, and more.
Radios are produced with a solar panel that transforms sunlight into energy, allowing you to listen to your radio while you are outside. You may also find solar power in flashlights, battery chargers, mobile phone chargers, watches, lanterns, and emergency products such as sirens and lights. As you can see, many products now use solar power technology. Portable solar chargers are great because they charge your devices from sunlight as easily as turning on a calculator. Camping equipment and supplies work well with solar power because daytime sunlight can charge lanterns, flashlights, and radios for use at night.
Cooking outdoors can also be done using solar power to heat the cooking element and allow for even cooking. Because more people are turning to solar power as their future energy source, companies are marketing products designed for solar-powered homes. Appliances such as refrigerators, stoves, and dishwashers are being made that work well in a home powered by solar energy; they are built to conserve energy even more than the standard products available to everyone.
In the future, when everything runs on solar power, we will be prepared, using the knowledge and the products available today. We can't predict the future of solar power, but we can all do our best to make it happen.
by Anthony Carpi, Ph.D.
Chemical reactions happen all around us: when we light a match, start a car, eat dinner, or walk the dog. A chemical reaction is the process by which substances bond together (or break bonds) and, in doing so, either release or consume energy (see our Chemical Reactions module). A chemical equation is the shorthand that scientists use to describe a chemical reaction. Let's take the reaction of hydrogen with oxygen to form water as an example. If we had a container of hydrogen gas and burned this in the presence of oxygen, the two gases would react together, releasing energy, to form water. To write the chemical equation for this reaction, we would place the substances reacting (the reactants) on the left side of an equation with an arrow pointing to the substances being formed on the right side of the equation (the products). Given this information, one might guess that the equation for this reaction is written:
H + O → H2O
The plus sign on the left side of the equation means that hydrogen (H) and oxygen (O) are reacting. Unfortunately, there are two problems with this chemical equation. First, because atoms like to have full valence shells, single H or O atoms are rare. In nature, both hydrogen and oxygen are found as diatomic molecules, H2 and O2, respectively (in forming diatomic molecules the atoms share electrons and complete their valence shells). Hydrogen gas, therefore, consists of H2 molecules; oxygen gas consists of O2. Correcting our equation we get:
H2 + O2 → H2O
But we still have one problem. As written, this equation tells us that one hydrogen molecule (with two H atoms) reacts with one oxygen molecule (two O atoms) to form one water molecule (with two H atoms and one O atom). In other words, we seem to have lost one O atom along the way! To write a chemical equation correctly, the number of atoms on the left side of a chemical equation has to be precisely balanced with the atoms on the right side of the equation. How does this happen? In actuality, the O atom that we "lost" reacts with a second molecule of hydrogen to form a second molecule of water. During the reaction, the H-H and O-O bonds break and H-O bonds form in the water molecules, as seen in the simulation below.
Concept simulation - Reenacts the reaction of hydrogen and oxygen in formation of water.
The balanced equation is therefore written:
2H2 + O2 → 2H2O
In writing chemical equations, the number in front of the molecule's symbol (called a coefficient) indicates the number of molecules participating in the reaction. If no coefficient appears in front of a molecule, we interpret this as meaning one.
In order to write a correct chemical equation, we must balance all of the atoms on the left side of the reaction with the atoms on the right side. Let's look at another example. If you use a gas stove to cook your dinner, chances are that your stove burns natural gas, which is primarily methane. Methane (CH4) is a molecule that contains four hydrogen atoms bonded to one carbon atom. When you light the stove, you are supplying the activation energy to start the reaction of methane with oxygen in the air. During this reaction, chemical bonds break and re-form and the products that are produced are carbon dioxide and water vapor (and, of course, light and heat that you see as the flame). The unbalanced chemical equation would be written:
CH4 (methane) + O2 (oxygen) → CO2 (carbon dioxide) + H2O (water)
Look at the reaction atom by atom. On the left side of the equation we find one carbon atom, and one on the right.
Reactants: 1 carbon. Products: 1 carbon.
Next we move to hydrogen: There are four hydrogen atoms on the left side of the equation, but only two on the right.
Reactants: 4 hydrogen. Products: 2 hydrogen.
Therefore, we must balance the H atoms by adding the coefficient "2" in front of the water molecule (you can only change coefficients in a chemical equation, not subscripts). Adding this coefficient we get:
CH4 + O2 → CO2 + 2H2O
Reactants: 4 hydrogen. Products: 4 hydrogen.
What this equation now says is that two molecules of water are produced for every one molecule of methane consumed. Moving on to the oxygen atoms, we find two on the left side of the equation, but a total of four on the right side (two from the CO2 molecule and one from each of two water molecules H2O).
Reactants: 2 oxygen. Products: 4 oxygen.
To balance the chemical equation we must add the coefficient "2" in front of the oxygen molecule on the left side of the equation, showing that two oxygen molecules are consumed for every one methane molecule that burns.
CH4 + 2O2 → CO2 + 2H2O
Reactants: 4 oxygen. Products: 4 oxygen.
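This atom bookkeeping can be checked mechanically. A small sketch (the helper and data names are my own, not from the module) that counts atoms on each side of the balanced methane equation:

```python
from collections import Counter

def atom_count(side):
    """Total atoms for a list of (coefficient, formula-as-dict) terms."""
    total = Counter()
    for coeff, atoms in side:
        for element, n in atoms.items():
            total[element] += coeff * n
    return total

# CH4 + 2O2 -> CO2 + 2H2O
reactants = [(1, {"C": 1, "H": 4}), (2, {"O": 2})]
products = [(1, {"C": 1, "O": 2}), (2, {"H": 2, "O": 1})]

print(atom_count(reactants) == atom_count(products))  # True
```

An equation is balanced exactly when the two counts match for every element.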
Proust's law of definite proportions holds true for all chemical reactions (see our Matter module). In essence, this law states that a chemical reaction always proceeds according to the ratio defined by the balanced chemical equation. Thus, you can interpret the balanced methane equation above as reading, "one part methane reacts with two parts oxygen to produce one part carbon dioxide and two parts water." This ratio always remains the same. For example, if we start with two parts methane, then we will consume four parts O2 and generate two parts CO2 and four parts H2O. If we start with excess of any of the reactants (e.g., five parts oxygen when only one part methane is available), the excess reactant will not be consumed:
CH4 + 5O2 → CO2 + 2H2O + 3O2 (excess reactants will not be consumed)
In the example seen above, 3O2 had to be added to the right side of the equation to balance it and show that the excess oxygen is not consumed during the reaction. In this example, methane is called the limiting reactant.
Although we have discussed balancing equations in terms of numbers of atoms and molecules, keep in mind that we never talk about a single atom (or molecule) when we use chemical equations. This is because single atoms (and molecules) are so tiny that they are difficult to isolate. Chemical equations are discussed in relation to the number of moles of reactants and products used or produced (see our The Mole module). Because the mole refers to a standard number of atoms (or molecules), the term can simply be substituted into chemical equations. Thus, the balanced methane equation above can also be interpreted as reading, "one mole of methane reacts with two moles of oxygen to produce one mole of carbon dioxide and two moles of water."
The law of conservation of matter states that matter is neither lost nor gained in traditional chemical reactions; it simply changes form. Thus, if we have a certain number of atoms of an element on the left side of an equation, we have to have the same number on the right side. This implies that mass is also conserved during a chemical reaction. The water reaction, for example:
2H2 + O2 → 2H2O: (2 × 2.02 g) + 32.00 g = 2 × 18.02 g
The total mass of the reactants, 36.04g, is exactly equal to the total mass of the products, 36.04g (if you are confused about these molecular weights, you should review the The Mole lesson). This holds true for all balanced chemical equations.
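The same mass check can be done in code. A sketch using approximate atomic masses (rounded values, enough to verify the bookkeeping):

```python
# Approximate atomic masses (g/mol), enough to check the bookkeeping.
ATOMIC_MASS = {"H": 1.01, "O": 16.00}

def molar_mass(formula):
    """Molar mass of a formula given as an {element: count} dict."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

H2, O2, H2O = {"H": 2}, {"O": 2}, {"H": 2, "O": 1}

left = 2 * molar_mass(H2) + molar_mass(O2)   # 2H2 + O2
right = 2 * molar_mass(H2O)                  # 2H2O
print(round(left, 2), round(right, 2))       # 36.04 36.04
```

Because atoms are conserved, any correctly balanced equation will pass this mass check automatically.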
Anthony Carpi, Ph.D. "Chemical Equations," Visionlearning Vol. CHE-1 (8), 2003.
Back to the Big Bang: Inside the Large Hadron Collider
Venture deep inside the world’s biggest physics machine, the Large Hadron Collider. This extraordinary feat of human engineering took 16 years and $10 billion to build, and recently began colliding particles at energies unseen since a fraction of a second after the big bang. We’ll explore this amazing apparatus that could soon reveal clues about nature’s fundamental laws and even the origin of the universe itself. John Hockenberry moderates a discussion among physicists including Marcela Carena, Monica Dunford, Jennifer Klay, and Nobel laureate Frank Wilczek.
This program is part of The Big Idea Series, made possible with support from the John Templeton Foundation.
10.24.12 - A new study using data from NASA's Spitzer Space Telescope suggests a cause for the mysterious glow of infrared light seen across the entire sky.
10.23.12 - NASA's newest set of X-ray eyes in the sky, the Nuclear Spectroscopic Telescope Array (NuSTAR), has caught its first look at the giant black hole parked at the center of our galaxy.
10.17.12 - The following is a statement about the European Southern Observatory's latest exoplanet discovery from NASA's Science Mission Directorate Associate Administrator, Dr. John Grunsfeld.
10.17.12 - Forty community college students from across the United States have been selected to travel to NASA's Marshall Space Flight Center in Huntsville, Ala., to participate in the 2012 National Community College Aerospace Scholars (CCAS) project.
10.16.12 - NASA will host a media teleconference at 3 p.m. EDT on Thursday, Oct. 18, about the latest status of the Curiosity rover's mission to Mars.
10.16.12 - Media representatives are invited to attend a NASA History symposium about past, present and future solar system exploration.
10.11.12 - The first Martian rock NASA's Curiosity rover has reached out to touch presents a more varied composition than expected from previous missions.
10.10.12 - NASA Deputy Administrator Lori Garver will visit Lockheed Martin in Littleton, Colo., on Monday, Oct. 15 to view the next spacecraft to launch to Mars and a part of the Orion vehicle that will carry astronauts farther into space than ever before.
10.09.12 - NASA will host a media teleconference at 11 a.m. PDT (2 p.m. EDT) on Thursday, Oct. 11, to provide a status update on the Curiosity rover's mission to Mars' Gale Crater.
10.04.12 - NASA's Curiosity rover is in a position on Mars where scientists and engineers can begin preparing the rover to take its first scoop of soil for analysis.
Alaska's Mount Redoubt Volcano erupted on March 22, 2009, spewing ash into the atmosphere and obscuring the skies. Four more eruptions followed. According to scientists at the Alaska Volcano Observatory, the ash plume reached a height of 50,000 feet above sea level.
Because they happened at night, the eruptions could only be detected in thermal infrared imagery; high temperatures appear black, while lower temperatures are white. The ash plumes become very cold as they rise high in the atmosphere, making them appear white in the image. The MODIS instrument on NASA's Aqua satellite captured the image on March 23, just as the fifth eruption was about to start.
Long-tailed Duck - Dave Menke, National Digital Library; Great Black-backed Gull - Ken Wilmington, Wikimedia Commons
Friday, March 9, 2012
Great Lakes waterbirds and warming water temperatures
We know that the Great Lakes are used extensively by wintering and migrating birds of many species. What influence does the warming of the Lakes' surface waters have on these species? See graphs displaying Lake Michigan's warmer-than-normal temperature at this link. (Just wondering - I don't have an answer).
The Western Great Lakes Bird & Bat Observatory's staff are surveying transect blocks along the west shoreline of the Lake again this year, and finding numbers of Long-tailed Ducks in waters out to 6-8 miles offshore. Other species found recently include Common Goldeneyes, Red-breasted Mergansers, Glaucous Gulls, and an adult Great Black-backed Gull ten miles offshore from southern Door County.
Posted by Bill Mueller at 5:17 AM
It’s also a perfect opportunity to revisit the old saw that Kansas is flatter than a pancake — an issue addressed scientifically, if serendipitously, half a dozen years ago in the ever-entertaining and illuminating Annals of Improbable Research. Writing for the May/June issue of 2003, scientists from the Departments of Geography at Texas State University and Arizona State University reported conclusively that “Kansas is Flatter than a Pancake.”
As they explain, “barring the acquisition of either a Kansas-sized pancake or a pancake-sized Kansas, mathematical techniques are needed to do a proper comparison . . . . One common method of quantifying ‘flatness’ in geodesy is the ‘flattening’ ratio. The length of an ellipse’s (or arc’s) semi-major axis a is compared with its measured semi-minor axis b using the formula for flattening, f = (a – b) / a. A perfectly flat surface will have a flattening f of one, whereas an ellipsoid with equal axis lengths will have no flattening, and f will equal zero.
“For example, the earth is slightly flattened at the poles due to the earth’s rotation, making its semi-major axis slightly longer than its semi-minor axis, giving a global f of 0.00335. For both Kansas and the pancake, we approximated the local ellipsoid with a second-order polynomial line fit to the cross-sections. These polynomial equations allowed us to estimate the local ellipsoid’s semi-major and semi-minor axes and thus we can calculate the flattening measure f.” See the article for a further — and hilarious – description of their methodology.
Their conclusion? “Mathematically, a value of 1.000 would indicate perfect, platonic flatness. The calculated flatness of the pancake transect from the digital image is approximately 0.957, which is pretty flat, but far from perfectly flat. The confocal laser scan showed the pancake surface to be slightly rougher, still.
“Measuring the flatness of Kansas presented us with a greater challenge than measuring the flatness of the pancake. The state is so flat that the off-the-shelf software produced a flatness value for it of 1. This value was, as they say, too good to be true, so we did a more complex analysis, and after many hours of programming work, we were able to estimate that Kansas’s flatness is approximately 0.9997. That degree of flatness might be described, mathematically, as ‘damn flat.’”
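The flattening arithmetic is simple enough to check yourself. A quick sketch using Earth's equatorial and polar radii reproduces the global value the authors quote (note their convention runs the opposite way for "perfect" flatness, where f approaches 1):

```python
def flattening(a: float, b: float) -> float:
    """f = (a - b) / a, the measure used in the quoted study."""
    return (a - b) / a

# Earth's equatorial and polar radii (km):
print(round(flattening(6378.137, 6356.752), 5))  # 0.00335
```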
For further coverage of the study and its conclusions, see “Zero Gravity: The Lighter Side of Science” from the American Physical Society, and the article “Holy Hotcakes! Study Finds Kansas Flatter than a Pancake” in the Lawrence Journal-World for July 27, 2003.
It is worth pointing out, of course, that by the very measure the geographers used, Florida (with a variation of only 345 feet from sea level to its highest point at Britton Hill), Delaware (with a variation from sea level to its highest point of 448 feet), Louisiana (with a variation from sea level to the top of Driskill Mountain at 535 feet), and 18 other states are in fact flatter than both pancakes and Kansas. And, as the Journal-World avers, by measuring in terms of the elevation changes in one-kilometer sections, Kansas ranks all the way down at 32nd in flatness. But perhaps that all is just a matter of comparing apples to oranges. | <urn:uuid:c7669446-98ad-4503-bad6-88a96e52167a> | 3 | 794 | Personal Blog | Science & Tech. | 45.783088 |
"The sensitivity of the ECS [Earth's climate system] to changes in radiative forcing at the top of the atmosphere is equal to 0.41±0.05 K W−1 m2." The IPCC, however, claims that a change in the top of the atmosphere [TOA] radiation of 3.7 W/m2 from a doubling of CO2 will lead to a 3°C ± 1.5°C temperature increase. The IPCC climate sensitivity is therefore 3K/(3.7W/m2) or 0.81 K/W/m2, about double the amount [0.41 K/W/m2] determined by this new paper.
Note, however, this paper calculates sensitivity on the basis of solar radiation, which is significantly different from infrared radiation from greenhouse gases. Unlike UV and visible radiation from the Sun, infrared radiation from greenhouse gases cannot heat the oceans [70% of Earth's surface area], therefore the sensitivity to greenhouse gases on the land surface is calculated to be
(1-.70)*0.41 = 0.12 K/W/m2
indicating the effect of doubling CO2 levels on Earth's climate is trivial and confirming the low sensitivities obtained by others from observations without computer gaming: see here, here, and here, as well as Lindzen & Choi, Paltridge, Spencer & Braswell, and others.
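The arithmetic behind the comparison above is simple enough to check directly (illustrative only; the figures are those quoted in the post):

```python
# IPCC implied sensitivity: 3 C of warming per 3.7 W/m^2 of TOA forcing.
ipcc_sensitivity = 3.0 / 3.7
print(round(ipcc_sensitivity, 2))      # 0.81 K per W/m^2

# Land-only sensitivity under the post's assumption that IR from
# greenhouse gases cannot heat the 70% of the surface that is ocean.
land_sensitivity = (1 - 0.70) * 0.41
print(round(land_sensitivity, 2))      # 0.12 K per W/m^2
```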
How good are you at reading faces? Scientists have found that your gender may affect this ability.
Why are there males and females? Why are there two sexes instead of three, or twelve, or one?
Close your eyes and imagine that you’re a Mormon cricket. Why, you ask? Well, Mormon crickets are interesting. Like desert locusts, they sometimes form large bands that march across the landscape of northeastern Utah and northwestern Colorado, and basically eat everything in their way. Learn more on this Moment of Science.
Have you ever wondered why cannibalism isn’t more popular? Just think about it: each animal is made of a complex variety of chemical ingredients. As animals, we can either try to assemble these ingredients haphazardly, eating other animals and plants and hoping these assorted meals will add up to exactly what we need. Or we can get all our essential nutrients in one complete package by dining on our next-door neighbor! Learn more on this Moment of Science.
Very few people actually sit down and make up jokes, yet everyone is always telling jokes. Where do they come from? Learn more on this Moment of Science.
Is a fever always bad? Find out on this Moment of Science.
Was Elvis really in a burrito? Was Princess Di found in a cookie? Find out on this Moment of Science.
Surprisingly, even though there is no light to catch, the sunflower will continue to bend every day just as it did when it was outside. This is a classic example of what scientists call a circadian rhythm — it’s a daily cycle of behavior that is internal to the organism, rather than being solely triggered by the environment.
So sometimes parents or stronger offspring will eat the weaker offspring in order to increase their own chances at survival.
The female wood mouse has multiple mates, so the sperm of this wood mouse may be competing with sperm from other males. | <urn:uuid:f425c05a-26c8-4aea-80ff-49bcaecb20a4> | 3.453125 | 394 | Content Listing | Science & Tech. | 57.329165 |
Date: March 02, 2011
Python is an easy to learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python’s elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms.
The Python interpreter and the extensive standard library are freely available in source or binary form for all major platforms from the Python Web site, http://www.python.org/, and may be freely distributed. The same site also contains distributions of and pointers to many free third party Python modules, programs and tools, and additional documentation.
The Python interpreter is easily extended with new functions and data types implemented in C or C++ (or other languages callable from C). Python is also suitable as an extension language for customizable applications.
Jython is an implementation of Python for the JVM. Jython takes the Python programming language syntax and enables it to run on the Java platform. This allows seamless integration with the use of Java libraries and other Java-based applications. The Jython project strives to make all Python modules run on the JVM, but there are a few differences between the implementations. Perhaps the major difference between the two implementations is that Jython does not work with C extensions. Therefore, most Python modules will run without changes under Jython, but if they use C extensions then they will probably not work. Likewise, Jython code can use Java directly, but CPython code cannot. Jython code should run seamlessly under CPython unless it contains Java integration.
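As a sketch of that portability point, the snippet below uses only the cross-implementation standard library, so it should run unchanged under both CPython and Jython; the commented-out line shows the kind of Java import that would tie it to Jython alone.

```python
import sys

def implementation():
    # Under Jython, sys.platform starts with "java"; under CPython it is
    # the OS name ("linux", "win32", ...).
    return "Jython" if sys.platform.startswith("java") else "CPython"

print("Hello from %s!" % implementation())

# A Jython-only dependency would look like this; uncommenting it breaks
# CPython, which cannot import Java packages:
# from java.util import ArrayList
```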
This tutorial currently references the Jython wiki, where the most up-to-date references and examples reside. Over time, this tutorial will be reconstructed to show Jython examples. For now, the wiki and the open source Jython book are the most current references for 2.5 and previous releases.
The glossary is also worth going through. | <urn:uuid:cf447887-a002-4635-b770-2ca392b13511> | 3.265625 | 413 | Knowledge Article | Software Dev. | 41.043593 |
Mathematics is not a deductive science -- that's a cliche. When you try to prove a theorem, you don't just list the hypotheses, and then start to reason. What you do is trial and error, experimentation, guesswork.
I Want to Be a Mathematician, Washington: MAA Spectrum, 1985.
What's in Convergence?
Contents of Volume 9 - 2012 (Loci - Volume 4)
Editors: Janet Beery, Kathleen Clark
Algebraic Formalism within the Works of Servois and Its Influence on the Development of Linear Operator Theory, by Anthony Del Latto and Salvatore Petrilli
This article describes how Servois’ failed attempt to construct a foundation for the calculus nevertheless helped shape modern mathematics.
Teaching the Fundamental Theorem of Calculus: A Historical Reflection, by Jorge López Fernández and Omar Hernández Rodríguez
The authors argue that the teaching of elementary integration should better reflect its historical development.
Georg Cantor at the Dawn of Point Set Topology, by Nicholas Scoville
How the history of analysis, and in particular that of Fourier series, can be used to motivate the study of point-set topology
Considering non-unique representation of Maya calendar numbers may help your students understand their own number system better.
Download the two winning essays to learn about the history of using indivisibles to find the area under an arch of the cycloid in the 17th century and of the Radon transform and its use in x-ray tomography in the 20th century.
An image of an early 19th century perpetual calendar, together with a translation and explanation of its creator’s instructions for its use
Maya calendars as they were developed over time and the Maya modified base 20 number system used in the calendars
A discussion of the context and content of the 15th century Pamiers manuscript, with translations of its problems, including one for which negative solutions were acceptable
A comparison of five circa-1900 proofs of the famous theorem with a view toward improving student understanding of compactness
A comparison of the geometry found in two 18th century copybooks written with two very different purposes
Who's That Mathematician? Images from the Paul R. Halmos Photograph Collection, by Janet Beery and Carol Mead
An expanding and interactive feature with new photos every week throughout the year and the opportunity for you to provide additional information about them
In Pursuit of the Traveling Salesman, by William J. Cook. Reviewed by Christopher Thompson.
Author William Cook recounts the history of and computational progress on the traveling salesman problem, emphasizing connections within mathematics and with other disciplines.
The Man of Numbers: Fibonacci's Arithmetic Revolution, by Keith Devlin. Reviewed by Frank J. Swetz.
Author Keith Devlin brings to life the impact of the Pisan merchant and his Arabic numbers on medieval Europe.
Mathematics Emerging: A Sourcebook 1540–1900, by Jacqueline Stedall. Reviewed by Frank J. Swetz.
Our reviewer praises the selection of excerpts, the use of facsimiles rather than transcriptions, and the commentary and English translation in this collection.
This book suggests that the accepted historical chronology is fundamentally flawed.
Our reviewer finds this collection of translations of Babylonian mathematical texts to be both "remarkable" and accessible. | <urn:uuid:48e5bb6b-a084-4a3c-899f-c91a85c1c0b1> | 2.875 | 715 | Content Listing | Science & Tech. | 31.214035 |
High Plains Aquifer Water-Level Monitoring Study
Water-Level Changes in the High Plains Aquifer—Predevelopment to 1993
By Jack T. Dugan and Dale A. Cox
U.S. Geological Survey
Water-Resources Investigations Report 94-4157
Regional variability in water-level change in the High Plains aquifer underlying parts of South Dakota, Wyoming, Nebraska, Colorado, Kansas, New Mexico, Oklahoma, and Texas results from large regional differences in climate, land use, and ground-water withdrawals for irrigation. From the beginning of significant development of the High Plains aquifer for irrigation (1940) to 1980, substantial water-level declines have occurred in several areas. The estimated average area-weighted water-level decline from predevelopment to 1980 for the High Plains was 9.9 feet, an average annual decline of about 0.25 foot. Declines exceeded 100 feet in some parts of the Central and Southern High Plains. Declines were much smaller and less extensive in the Northern High Plains, largely as a result of later irrigation development.
Since 1980, water levels in those areas of large declines in the Central and Southern High Plains have continued to decline, but at a much slower annual rate. The estimated average area-weighted water-level decline from 1980 to 1993 for the entire High Plains was 2.09 feet, which is an average annual decline of about 0.16 foot. The slower rate of decline since 1980, in relation to the rates prior to 1980, is associated partly with above-normal precipitation and a decrease in the average ground-water application rates for irrigated agriculture. Water-conserving practices and technology, in addition to reductions in irrigated acreages in areas of large consumptive irrigation requirements, contributed to the decrease in ground-water withdrawals for irrigation.
Water-level declines exceeding 20 feet since 1980, however, are widespread in parts of southwestern Kansas, east-central New Mexico, and the Oklahoma and Texas Panhandles. Widespread declines of 10 to 20 feet and exceeding 20 feet in smaller areas occurred in northeastern Colorado, northwestern Kansas, southwestern Nebraska, and the Nebraska Panhandle from 1980 to 1993. Water-level rises exceeding 20 feet occurred in the extreme Southern High Plains in Texas where precipitation was much greater than normal from 1981-92. Widespread water-level rises of 10 to 20 feet occurred in southeastern and southcentral Nebraska during the same period in association with above-normal precipitation.
The average area-weighted water-level in the High Plains rose 0.21 foot from 1992 to 1993 in apparent association with average area-weighted precipitation that was 2.80 inches above normal in 1992. Water-level rises of 3 or more feet from 1992 to 1993 were widespread in eastern and southern Nebraska, northwestern and south-central Kansas, and the southern two-thirds of the Southern High Plains of Texas. All of these areas of rise coincide with areas of well-above normal precipitation in 1992. The large rises in some of these areas, particularly in the Southern High Plains of Texas, may be partly the result of delayed recharge from above-normal precipitation in 1991 in association with the well-above normal precipitation in 1992.
Water levels continued to decline from 1992 to 1993 in northeastern Colorado, southwestern Nebraska, southwestern Kansas, the central Panhandle of Oklahoma, the northern Panhandle of Texas, and the northern part of the Southern High Plains. The size of the area and magnitude of these declines, however, appear to be considerably smaller than in previous years.
This report (WRIR 94-4157) is available online.
To obtain a copy of this report, please contact:
USGS Nebraska Water Science Center
5231 South 19th Street
Lincoln, Nebraska 68512-1271
Phone (402) 328-4100
FAX (402) 328-4101
Back to High Plains Aquifer main page | <urn:uuid:b7c8b138-2ed3-455c-91b8-b0a7326a6618> | 3.046875 | 797 | Knowledge Article | Science & Tech. | 39.238993 |
Tag: "clays" at biology news
Scientists propose the kind of chemistry that led to life
...were nothing more complicated than the surfaces of clays or other minerals. In its simplest form, the model shows how two catalysts in a solution, A and B, each acting to catalyze a different reaction, could end up forming what the scientists call a complex, AB. The deciding factor is the relative conce...
ASU researchers test antibacterial effects of healing clays
...ydel's research on the antibacterial properties of clays realizes its full potential, smectite clay could o...ASU duo will examine the mechanisms that allow two clays mined in France to heal Buruli ulcer, a flesh-eating bacterial disease found primarily in central an...
Ancient raindrops reveal the origins of California's Sierra Nevada range
... and other minerals that form on the ground. These clays provide scientists with a geologic record of ancie...mbedded in the crystalline structure of these soft clays is a continuous record of Eocene precipitation along the western flank of the Sierra Nevada--a recor...
Clay material may have acted as 'primordial womb' for first organic molecules
...e have only started investigating the influence of clays on the origin of life," Williams said....
New evidence indicates biggest extinction wasn't caused by asteroid or comet
...nction, though they looked specifically for impact clays or material ejected from a crater left by such an impact. They contend that if there was a comet or asteroid impact, it was a minor element of the Permian extinction. Evidence from the Karoo, they said, is consistent with a mass extinction resulting...
The purpose of our experiment was to measure the rate of evaporation of anhydrous isopropyl alcohol. The experiment involved measuring the amount of alcohol that evaporated from a glass, funnel-shaped container every twelve hours. These values were then plotted, and the resulting equation was found to be y = 101.38e^(-0.0026t).
When alcohol is exposed to air, it evaporates. The more concentrated the alcohol, the more rapidly evaporation should occur. When the surface area of the alcohol exposed to the air is reduced, evaporation slows as a function of time. The rate of evaporation should obey the differential equation:

dy/dt = -ky

If one solves the differential equation above for y, one finds the solution to be:

y = Ae^(-kt)
In an attempt to prove this theory, we exposed one hundred milliliters of anhydrous isopropyl alcohol to the atmosphere in a container in which the surface area varies as a function of time.
Description of Experiment:
One hundred milliliters of anhydrous isopropyl alcohol (at least 99% alcohol) were poured into a clean, dry, funnel-shaped glass container. The room temperature was kept relatively constant at 23 degrees Celsius, so that the temperature would not be a variable of the solution. The container had clear milliliter markings on it to make taking measurements easier. Measurements of the amount of alcohol evaporated were taken at 7:00 a.m. and 7:00 p.m. each day for 12 and a half days.
Results and Discussion:
After obtaining all of our data, we entered the data into a computer using the spreadsheet program Microsoft Excel. To find the constant k for our solution

Y = Ae^(-kt) (2)

to our differential equation

dY/dt = -kY, (1)

we first took the natural log of our data. Knowing from previous experience in our differential equations class and the required text Exploring Differential Equations via Graphics and Data, we prepared a graph of ln(amount) vs. time. The line shown on this graph has a slope m equal to -k, the negative of our constant. We then plotted the amount of alcohol vs. time to achieve the second graph below. Using a curve-fitting feature of the program Excel, we obtained the equation

ln Y(t) = -0.0026t + 4.6189, i.e., Y(t) = 101.38e^(-0.0026t) (3)

for the plotted data. The data and graphs follow on the next page.
TIME (hrs) | AMT (in mL) | ln(AMT)
The differential equation that was expected for the evaporation of alcohol was dy/dt = -ky (equation 1). A solution to this equation, fit to the graph derived from our data points, is y = 101.38e^(-0.0026t) (equation 3). Our experimental data revealed a graph that was more linear than expected. This could be attributed to the experimental error in measuring the alcohol that evaporated. Another factor could be the amount of time elapsed between measurements. Future experimentation could possibly correct this error by obtaining data every six hours instead of twelve.
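The ln-then-line-fit procedure described above can be sketched in a few lines of Python. The data points here are hypothetical values generated from the reported fit itself (y = 101.38e^(-0.0026t)), purely to illustrate the method; they are not the experiment's actual readings.

```python
import math

# Hypothetical readings generated from the fitted model, one per 12 hours.
t = [12.0 * i for i in range(25)]                      # hours
y = [101.38 * math.exp(-0.0026 * ti) for ti in t]      # mL

# Linearize: ln y = ln A - k t, then least-squares fit a line.
ln_y = [math.log(v) for v in y]
n = len(t)
mean_t = sum(t) / n
mean_ln = sum(ln_y) / n
slope = (sum((ti - mean_t) * (li - mean_ln) for ti, li in zip(t, ln_y))
         / sum((ti - mean_t) ** 2 for ti in t))
intercept = mean_ln - slope * mean_t
k, A = -slope, math.exp(intercept)
print("k = %.4f per hour, A = %.2f mL" % (k, A))       # k = 0.0026, A = 101.38
```

With real measurements, the recovered k would differ slightly from reading to reading; the fit averages that noise out.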
Pronunciation: /pruf/
A proof is a logical argument that shows that a claim is always true. A mathematical proof must show that a claim is true in all cases, without any exceptions. The opposite of a proof is a disproof: an argument that shows a claim is always false. A claim that has not been proved always true and has not been proved always false, but is generally believed to be true, is called a conjecture.
Types of Proofs
There are many types of proofs. Some of the types that are commonly used are:
- Direct proof: A direct proof is built on definitions, axioms, and previously proved theorems.
- Proof by induction: An inductive proof is used to prove claims about infinite sets. An inductive proof shows that if a claim is true for the first case and for an arbitrary case, it is always true for the next case after the arbitrary case.
- Proof by transposition: A proof by transposition shows that the contrapositive of a statement is true. Since the contrapositive of a statement is true exactly when the original statement is true, the original statement is taken to be true.
- Proof by contradiction: A proof by contradiction starts with a claim. The assumption is made that the claim is false. The proof then shows that if the claim is false, a contradiction is reached. The claim must then be true.
- Proof by exhaustion: In proof by exhaustion, a claim is divided into a number of cases, and each of the cases is individually proved.
- Proof by construction: In proof by construction, a concrete example is 'constructed' with a property that shows that something with that property exists. A proof by construction can also be called a proof by example.
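As a small worked illustration of the inductive pattern described above (the claim and its proof are a standard textbook example, not taken from this article):

```latex
\textbf{Claim.} For every integer $n \ge 1$,
\[ 1 + 2 + \cdots + n = \frac{n(n+1)}{2}. \]

\textbf{Base case.} For $n = 1$: $1 = \frac{1 \cdot 2}{2}$.

\textbf{Inductive step.} Assume the claim holds for an arbitrary $n = k$,
i.e., $1 + \cdots + k = \frac{k(k+1)}{2}$. Then
\[ 1 + \cdots + k + (k+1) = \frac{k(k+1)}{2} + (k+1) = \frac{(k+1)(k+2)}{2}, \]
which is the claim for $n = k + 1$. By induction, the claim holds for all
$n \ge 1$.
```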
- proof. http://wordnet.princeton.edu/. WordNet. Princeton University. (Accessed: 2011-01-08). http://wordnetweb.princeton.edu/perl/webwn?s=proof&sub=Search+WordNet&o2=&o0=1&o7=&o5=&o1=1&o6=&o4=&o3=&h=.
- Cupillari, Antonella. Nuts and Bolts of Proof: An Introduction to Mathematical Proofs, 3rd edition. Academic Press, June 3, 2005.
- Disproof. allmathwords.org. All Math Words Encyclopedia. Life is a Story Problem.org. 2010-01-11. http://www.allmathwords.org/article.aspx?lang=en&id=Disproof.
- Mastering the Formal Geometry Proof (video). dummies.com. Wiley. 2010-01-23. http://www.dummies.com/how-to/content/mastering-the-formal-geometry-proof.html.
Cite this article as:
Proof. 2010-01-11. All Math Words Encyclopedia. Life is a Story Problem.org. http://www.allmathwords.org/en/p/proof.html.
2010-01-11: Initial version (McAdams, David).
cnidarian
- General features
- Natural history
- Form and function
- Contributors & Bibliography
cnidarian, also called coelenterate, any member of the phylum Cnidaria (Coelenterata), a group made up of more than 9,000 living species. Mostly marine animals, the cnidarians include the corals, hydras, jellyfish, Portuguese men-of-war, sea anemones, sea pens, sea whips, and sea fans.
The phylum Cnidaria is made up of four classes: Hydrozoa (hydrozoans); Scyphozoa (scyphozoans); Anthozoa (anthozoans); and Cubozoa (cubozoans). All cnidarians share several attributes, supporting the theory that they had a single origin. Variety and symmetry of body forms, varied coloration, and the sometimes complex life histories of cnidarians fascinate layperson and scientist alike. Inhabiting all marine and some freshwater environments, these animals are most abundant and diverse in tropical waters. Their calcareous skeletons form the frameworks of the reefs and atolls in most tropical seas, including the Great Barrier Reef that extends more than 2,000 kilometres along the northeastern coast of Australia.
Only cnidarians manufacture microscopic intracellular stinging capsules, known as nematocysts or cnidae, which give the phylum its name. The alternative name, coelenterate, refers to their simple organization around a central body cavity (the coelenteron). As first defined, coelenterates included not only the animals now designated cnidarians but also sponges (phylum Porifera) and comb jellies (phylum Ctenophora). In contemporary usage, “coelenterate” generally refers only to cnidarians, but the latter term is used in order to avoid ambiguity.
Size range and diversity of structure
Cnidarians are radially symmetrical (i.e., similar parts are arranged symmetrically around a central axis). They lack cephalization (concentration of sensory organs in a head), their bodies have two cell layers rather than the three of so-called higher animals, and the saclike coelenteron has one opening (the mouth). They are the most primitive of animals whose cells are organized into distinct tissues, but they lack organs. Cnidarians have two body forms—polyp and medusa—which often occur within the life cycle of a single cnidarian.
The body of a medusa, commonly called a jellyfish, usually has the shape of a bell or an umbrella, with tentacles hanging downward at the margin. The tubelike manubrium hangs from the centre of the bell, connecting the mouth at the lower end of the manubrium to the coelenteron within the bell. Most medusae are slow-swimming, planktonic animals. In contrast, the mouth and surrounding tentacles of polyps face upward, and the cylindrical body is generally attached by its opposite end to a firm substratum. The mouth is at the end of a manubrium in many hydrozoan polyps. Anthozoan polyps have an internal pharynx, or stomodaeum, connecting the mouth to the coelenteron.
Most species of cubozoans, hydrozoans, and scyphozoans pass through the medusoid and polypoid body forms, with medusae giving rise sexually to larvae that metamorphose into polyps, while polyps produce medusae asexually. Thus, the polyp is essentially a juvenile form, while the medusa is the adult form. In contrast, anthozoans are polypoid cnidarians and do not have a medusa stage. Commonly polyps, and in some species medusae too, can produce more of their own kind asexually.
One body form may be more conspicuous than the other. For example, scyphozoans are commonly known as true jellyfishes, for the medusa form is larger and better known than the polyp form. In hydrozoans, the polyp phase is more conspicuous than the medusa phase in groups such as hydroids and hydrocorals. Hydromedusae are smaller and more delicate than scyphomedusae or cubomedusae; they may be completely absent from the life cycle of some hydrozoan species. Some other species produce medusae, but the medusae never separate themselves from the polyps. Cubozoans have medusae commonly known as box jellyfish, from their shape. Some of these are responsible for human fatalities, mostly in tropical Australia and Southeast Asia, and include the so-called sea wasps. The polyp is tiny and inconspicuous.
Many cnidarian polyps are individually no more than a millimetre or so across. Polyps of most hydroids, hydrocorals, and soft and hard corals, however, proliferate asexually into colonies, which can attain much greater size and longevity than their component polyps. Certain tropical sea anemones (class Anthozoa) may be a metre in diameter, and some temperate ones are nearly that tall. Anthozoans are long-lived, both individually and as colonies; some sea anemones are centuries old. All medusae and sea anemones occur only as solitary individuals. Scyphomedusae can weigh more than a ton, whereas hydromedusae are, at most, a few centimetres across. Tentacles of medusae, however, may be numerous and extensible, which allows the animals to influence a considerably greater range than their body size might suggest. Large populations of hydroids can build up on docks, boats, and rocks. Similarly, some medusae attain remarkable densities—up to thousands per litre of water—but only for relatively brief periods.
Distribution and abundance
Many of the world’s benthic (bottom-dwelling) ecosystems are dominated by anthozoans. Although soft and hard corals coexist in virtually all tropical areas appropriate for either, coral reefs of the tropical Indo-Pacific are built mainly by members of the anthozoan order Scleractinia (hard corals); whereas on coral reefs of the Caribbean members of the anthozoan subclass Alcyonaria (soft corals) are much more prominent. Aside from being the most numerous and covering the greatest area of any animals on the reef, the corals structure their environment, even after death. Soft corals contribute greatly to reef construction by the cementing action of the skeletal debris (spicules), filling in spaces between hard coral skeletons.
Soft-bodied anthozoans are similarly dominant in other seas. Temperate rocky intertidal zones in many parts of the world are carpeted with sea anemones. They sequester the space that is therefore made unavailable to other organisms, thus having a profound impact on community structure. The curious hemispherical anemone Liponema is the most abundant benthic invertebrate in the Gulf of Alaska, in terms of numbers and biomass. Parts of the Antarctic seabed are covered by anemones, and they occur near the deep-sea hot vents.
Prominent among organisms that foul water-borne vessels are sedentary cnidarians, especially hydroids. The muscles that make scyphomedusae strong swimmers are dried for human consumption in Asia. Sea anemones are eaten in some areas of Asia and North America.
Throughout the tropics where reefs are accessible, coral skeletons are used as building material, either in blocks or slaked to create cement. Another use for cnidarian skeletons is in jewelry. The pink colour known as “coral” is the hue of the skeleton of a species of hydrocoral. Other hydrocorals have purplish skeletons. Skeletons vary in hue, and those considered most desirable command a high price. The core of some sea fans, sea whips, and black corals are cut or bent into beads, bracelets, and cameos.
All cnidarians have the potential to affect human physiology owing to the toxicity of their nematocysts. Most are not harmful to humans, but some can impart a painful sting—such as Physalia, the Portuguese man-of-war, and sea anemones of the genus Actinodendron. These, and even normally innocuous species, can be deadly in a massive dose or to a sensitive person, but the only cnidarians commonly fatal to humans are the cubomedusae, or box jellyfish. Anaphylaxis (hypersensitivity due to prior exposure and subsequent sensitization) was discovered with experiments on Physalia toxin. Extracts of many cnidarians, mostly anthozoans, have heart-stimulant, antitumour, and anti-inflammatory properties.
Reproduction and life cycles
All cnidarian species are capable of sexual reproduction, which occurs in only one phase of the life cycle, usually the medusa. Many cnidarians also reproduce asexually, which may occur in both phases. In asexual reproduction, new individuals arise from bits of tissue that are budded off from a parent, or by a parent dividing lengthwise or crosswise into two smaller individuals. Polyps that remain physically attached to one another or embedded in a common mass of tissue constitute a colony. In some colonies, polyps share a common coelenteron through which food captured by any member is distributed to others. Hydrozoan polyp colonies, called hydroids, are prostrate, bushy, or feathery in form. Examples of other colonies are anthozoan soft corals and most reef-forming hard corals. Polyps that are produced asexually and then physically separate are called clones, or ramets. In this way, a single genotype can be represented by many separate “individuals.”
Although genetically identical, colony members of many hydrozoans and some anthozoans are polymorphic, differing in morphology (form and structure) and/or physiology. Each zooid within the colony has a specific function and varies somewhat in form. For example, gastrozooids bear tentacles and are specialized for feeding. Some colonies possess dactylozooids, tentacleless polyps heavily armed with nematocysts that seem primarily concerned with defense. Gonozooids develop reproductive structures called gonophores. Members of the order Siphonophora, free-floating colonial hydrozoans, display an even greater variety of polymorphs. These include gas-filled floats called pneumatophores, pulsating, locomotory structures called nectophores, and flattened, protective individuals called bracts or phyllozooids.
Although the medusa stage is absent in anthozoans, polyps produce additional polyps sexually and, in some species, asexually as well. Hydromedusae are budded from polyps that, in some colonial species, are specialized for this function; each polyp produces numerous medusae. The major distinguishing feature of the cubozoans is that each polyp transforms entirely into a medusa. Before this metamorphosis occurs, however, each cubozoan polyp may divide asexually to produce numerous genetically identical polyps, and each of these subsequent polyps can then produce a medusa. In most scyphozoans, a scyphistoma (scyphopolyp) produces immature medusae (ephyrae) by asexual fission at its oral end. This process, called strobilation, results in eight-armed, free-swimming ephyrae.
Gametes differentiate in parts of the body referred to as gonads, despite the fact that cnidarians cannot be said to have true ovaries and testes because they lack organs. In anthozoans, cubozoans, and scyphozoans, gametes develop in the endoderm, whereas in hydrozoans they ripen in the ectoderm, although they do not necessarily originate there. Sexes are commonly separate, but hermaphroditism is known. Some hermaphroditic species are capable of self-fertilization. Gametes are generally shed into the sea, where the eggs are fertilized. Cleavage produces a ciliated ball of cells that elongates and develops a tuft of cilia at one end to become a planula larva, which may be free-swimming and planktonic, or crawling and benthic. Its ciliated tuft, which may have sensory abilities, is directed forward in locomotion. After a motile period, the planula attaches by its forward end to a solid object and develops tentacles around its posterior end, thereby transforming into a polyp. In some anthozoans and a few scyphomedusae, eggs are fertilized without being released. Embryonic development passes either partly or entirely within the mother’s coelenteron or, as in the case of some anemones and some members of the anthozoan subclass Alcyonaria (octocorals), attached to the outside of her body. In some species of hydroids that lack a free medusa stage, eggs are fertilized and the embryo develops in specialized zooids that are essentially attached medusae. Such brooding species may release offspring as very advanced larvae or as miniature adults, so that a planktonic stage is absent from the life cycle.
2001: When Green Tech was Born
Every few weeks, someone asks me "When did green technology begin?"
Technically, it started around 2000 B.C. when the Egyptians began to design buildings with passive air conditioning. The Roman Emperor Varius Avitus followed by having snow brought from the mountains to cool his palace, thereby kicking off a craze for ice-powered air conditioners. He's often ranked as the worst emperor but for other reasons.
The 19th Century experienced a burst of alternative energy activity, particularly in France. In 1839, Edmond Becquerel discovered the photovoltaic effect while experimenting with an electrolytic cell, paving the way for the first silicon solar cell, made at Bell Labs in 1954. In 1860, Augustin Mouchot proposed the idea of solar-powered steam engines. In 1859, a third Frenchman, Gaston Planté, invented the lead-acid battery and demonstrated it at the French Academy of Sciences a year later.
In the 1970s, Japan, Denmark and others decided to invest in solar, biomass, wind and other technologies to wean themselves from fossil fuels. In some countries like the U.S. the effort sputtered a few years later.
But the current wave we're living in can be traced back to 2001. Consider these historical tidbits in sometimes chronological order:
1. In January 2001, then-California Governor Gray Davis declares a state of emergency because of rolling power blackouts. Enron, rightly, gets the blame and implodes. In February 2001, Enron's Ken Lay hands the hot seat to Jeff Skilling. Inadvertently, all three men become green Hall of Famers for what happens next. Davis gets recalled and replaced by Arnold Schwarzenegger, who subsequently helps install one of the most ambitious green programs in the world and renews interest in one-liners from "The Terminator."
2. Cypress Semiconductor CEO T.J. Rodgers invests $750,000 of his own money in a struggling outfit called SunPower after nearly everyone else in Silicon Valley, including Cypress' board, turned the company down. SunPower holds a successful IPO in 2005, and Rodgers' investment, later bought out by SunPower, eventually turns into $2.5 billion.
3. Bloom Energy, which has created a futuristic fuel cell that can generate heat, electricity and even oxygen for submarine crews or people trapped in buildings, is founded. An early investor and board member is T.J. Rodgers, who can also rightly claim to be Silicon Valley's first and most successful green tech investor. Side note: Rodgers is a fundamental free marketer and opposes federal subsidies, including subsidies for scientific research. Both SunPower and Bloom, however, depend on federal and state tax credits and other subsidies to woo customers. Consistency isn't everything, kids, but at least he admits it. Bloom also puts the world on notice that green isn't cheap: over $400 million has been invested in the company and sales have only just begun.
4. Suntech Power Holdings is founded in September 2001 by a Chinese professor named Zhengrong Shi from the University of New South Wales in Australia. Again, traditional VCs don't pay attention: the primary early investor is China's Communist Party. Suntech, based in Wuxi, goes public in 2005 and now jostles with First Solar for the top spot in the solar market. Suntech marks China's entry into solar, and in the process the company becomes one of the first brand names to come out of China.
5. Tim Healy and David Brewster from the Tuck School of Business at Dartmouth found EnerNoc, which wants to sell demand response services. VCs initially balk: why do utilities need a third party like EnerNoc to curb power demand at factories in the middle of the afternoon? A few years later, Foundation Capital puts money into the company and EnerNoc goes on to hold a successful IPO. Demand response services become an industry.
6. GreenFuel Technologies is founded. The company, spun out of research at Harvard and MIT, proposed making biofuel from algae and capturing carbon dioxide from smokestacks. The company helps establish algae as a potential biofuel source. GreenFuel raises $79 million but goes under in 2008, setting what could be an ominous trend for noble and expensive failures.
7. General Electric starts sniffing around green and announces in February 2002 that it will buy the wind division of bankrupt Enron (see above). From those ashes, GE goes on to become one of the biggest wind turbine manufacturers in the world.
8. Consulting firm Clean Edge predicts that the market for clean energy (fuel cells, solar panels, turbines) will grow from $7 billion in 2000 to over $82 billion in 2010, while the clean-vehicle market will go from $2 billion to $48 billion. Only a few pay attention. The prediction turns out to be somewhat close. (In 2007, the firm says green tech came to $77.3 billion.) By 2017, it says the market will be $254.5 billion.
9. Konarka, a company specializing in solar dyes and other futuristic solar technologies, gets founded. Despite hoovering up over $170 million from investors and government agencies, its technologies are still in the testing stage. It's a trendsetter in the really, really long runway department. But there are lengthier runways still: wave power specialist Pelamis got started in 1998.
10. Pat Gelsinger, then an Intel exec, tells an audience at an engineering conference that computer processors will put out as much heat, proportionally, as nuclear reactors by 2015. Intel, among others, ramps up efforts to cool off chips so computers won't melt. As power prices climb, cooler chips start to get marketed as a way to save energy and money. It marks the beginning of the accidentally green movement, which has grown surprisingly large.
And the fun continued in 2002. In that year, First Solar shipped its first cadmium telluride solar cells, a technology others had tried but stopped pursuing. First Solar goes on to become one of the largest manufacturers in the world. NanoSolar, which specializes in copper indium gallium selenide solar cells, is founded, and AC Propulsion founder Tom Gage tells a then-unemployed entrepreneur he's not interested in making electric sports cars. Gage, though, gives him the number of another guy who asked Gage the same thing. Martin Eberhard contacts Elon Musk. The two go on to found Tesla Motors and author one of Silicon Valley's best love stories ever.
It is a true example of mind control in nature, and though scientists are well aware of the method of infection, they are uncertain exactly how the mind control is accomplished. When a wasp successfully attacks a host spider, the spider is temporarily paralyzed as the wasp lays eggs on the tip of the spider's abdomen. Once the wasp departs, the spider regains its ability to move, and it continues its daily web construction for the next two weeks as though nothing has changed. Meanwhile, the wasp's growing larvae cling to the spider's belly and feed on its juices through a number of small punctures.
This behavior was first observed by Dr. William G. Eberhard at the University of Costa Rica. His observations have led him to believe that the mind control is most likely accomplished through a fast-acting chemical secreted by the larvae, but what that chemical is, and how it works, is a mystery. What he has found is that the spider's usual five-step web-building process is reduced to two steps when it is held captive by these larvae, resulting in the alternate design; he has also discovered that if he removes the larvae on the last day, just before the spider is killed, the spider will often recover after a few days of spinning the abnormal web.
It is true that many parasites are able to shape their host's behavior subtly, but never before has science observed a parasite that can manipulate its host in such a detailed, specific way. As evidenced by this finding, biology certainly has many fascinating secrets yet to be discovered.
New England Regional Assessment (NERA)
The purpose of this regional assessment of potential climate change impacts on the New England Region (the six New England states plus upstate New York) is to provide a local perspective on a global issue. The intent in producing this Overview document is to provide the most current insight on the topic of climate change, focused on local issues and concerns, in a relevant and accessible format of use to the public. The New England Regional Assessment (NERA) is one of 16 regional assessments conducted for the U.S. Global Change Research Program (USGCRP) as part of the National Assessment of climate change impacts on the United States. The National Assessment is being conducted in response to the Congressional Act of 1990, at the request of the President's Science Advisor.
Each chapter of the NERA document can be downloaded below and printed using Adobe Acrobat Reader.
Chapters from the document:
- Table of Contents (download)
- Executive Summary & Acknowledgements (download)
The carbon supermaterial graphene is already known for its exotic electronic properties. Now two studies suggest that the material is also one of the strongest, most elastic and stiffest materials known to science.
Graphene crystals are atom-thick sheets of carbon atoms connected together in hexagons, like chicken wire.
Graphene flakes are produced every time we put pencil to paper - the graphite in pencils is simply a 3D structure comprising multiple stacked layers of graphene. And yet graphene was only isolated for the first time in 2004.
In the graphene "gold rush" since then, scientists have scrambled to uncover the material's properties and discover potential applications. The large surface-to-volume ratio and high conductivity already suggest uses in ultra-small electronics.
Now, researchers have discovered that graphene has remarkable mechanical properties too. Changgu Lee and Xiaoding Wei at Columbia University, New York, took flakes of graphene 10 to 20 micrometers in diameter and laid them across a silicon wafer patterned with holes just 1 to 1.5 micrometers in diameter, like a microscopic muffin tray.
The graphene above the tiny holes was unsupported, and Lee and Wei poked at these with the diamond tip of an atomic force microscope to see how readily the graphene deformed and ruptured.
They found that the graphene could be pushed downwards by 100 nanometres with a force of up to 2.9 micronewtons before rupturing. The researchers estimate that graphene has a breaking strength of 55 newtons per metre.
"As a way of visualising the force needed to break the membranes, imagine trying to puncture a sheet of graphene that is as thick as ordinary plastic food wrap - typically 100 micrometers thick," says James Hone, head of the laboratory at Columbia in which Lee studies. "It would require a force of over 20,000 newtons, equivalent to the weight of a 2000 kilogram car."
That strength puts graphene literally "off the chart" of the strongest materials measured, Hone says. "These measurements constitute a benchmark of strength that a macroscopic system will never achieve, but can hope to approach," he says.
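Hone's figures can be sanity-checked with a quick back-of-envelope calculation. The sketch below converts the measured 2D breaking strength into a conventional 3D stress by dividing by an assumed monolayer thickness (the ~0.335 nm graphite interlayer spacing, which is not stated in the article), and checks the car-weight comparison:

```python
# Back-of-envelope check on the graphene strength figures quoted above.
# Assumption (not from the article): a single graphene sheet is assigned an
# effective thickness equal to the ~0.335 nm graphite interlayer spacing.
sigma_2d = 55.0          # measured breaking strength in 2D units, N/m
t = 0.335e-9             # assumed monolayer thickness, m

sigma_3d = sigma_2d / t  # equivalent 3D strength in pascals
print(f"effective 3D strength ~ {sigma_3d / 1e9:.0f} GPa")

# Hone's comparison: ~20,000 N is roughly the weight of a 2,000 kg car.
g = 9.81                 # m/s^2
print(f"a 2,000 kg car weighs ~ {2000 * g:.0f} N")
```

The result, on the order of 100 GPa, is indeed far beyond the tensile strength of structural steels, which is why Hone calls graphene "off the chart."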
In separate work, Tim Booth and Peter Blake at the University of Manchester, UK, are well on the way to bringing atomically perfect graphene out of the nanoscopic and into the macroscopic world. Their team has patented a new method to produce free-standing graphene flakes up to 100 micrometers in diameter.
Researchers use sticky tape to remove tiny flakes of graphene from graphite, and as their technique improves they are beginning to produce larger and larger flakes. The flakes can be picked off the sticky tape manually or the tape can be dissolved away with acetone.
A problem with this method is that the sticky tape picks up flakes of multi-layered graphite at the same time. Finding the graphene is like searching for a needle in a haystack.
The key is to place all of the flakes on a silicon wafer where the properties of the graphene make it easy to spot under an optical microscope.
However, graphene flakes bond to the silicon and are easily damaged when the scientists try to remove them. One solution is to use aggressive chemicals such as hydrofluoric acid to eat away the silicon and free up the graphene but this tends to chemically contaminate the graphene and alter its properties.
Now, Booth and Blake have realised that acrylic glass (PMMA) has the same optical properties as silica and can also highlight graphene flakes. PMMA, however, dissolves easily in acetone, a less aggressive chemical that doesn't alter graphene. Using their technique, Booth and Blake can easily isolate large crystals.
"We are limited only by the size of graphene flakes available," says Booth. "There is no reason that the method will not scale up to much larger flakes."
Using these flakes, Booth and Blake have also found that graphene is extraordinarily stiff. A crystal supported on just one side extends nearly 10 micrometers without any support - equivalent to an unsupported sheet of paper 100 metres in length. It had previously been assumed that graphene would curl up if left unsupported.
Although counter-intuitive, the team has shown that this should actually be expected from a theoretical measure of material stiffness, says Booth.
Graphene could be added to polymers to form super-strength composites, Booth says. "However, it is likely the most interesting applications will result from a unique combination of graphene's properties: transparency, electronic structure, stiffness, thermal conductivity," he says. "That could help achieve science-fiction applications."
Have your say
Fri Jul 18 14:34:08 BST 2008 by Colin Stanwell-smith
It would be helpful to have "strength" in a conventional meaningful unit such as Newtons per square metre. (i.e. Pascals) If this is the unit intended, then 55 is not a high score. A Pascal is a tiny unit. A Newton per Metre might be used for a spring rate, not for a general material property. It makes the subsequent comparison dubious.
If you are going to quote units please check with a scientist or engineer whether the units make sense.
Fri Jul 18 15:00:34 BST 2008 by Colin Barras
Thanks for the comment. Your concern about the units was one I had myself, and did indeed check with the scientists. But Newtons per metre is indeed the correct unit.
To quote Professor Hone: "Three-dimensional materials have a strength given in N/square meter (= 1 Pascal) because the strength is proportional to the cross-sectional area. In the case of graphene, which is a 2D material, the cross-section is a line, so the stiffness and strength are measured in newtons/meter."
Colin Barras, online technology reporter
Sat Jul 19 03:56:30 BST 2008 by Current User
It isn't a 2D material though, not really... Even an atom has some thickness...
Mon Feb 02 18:04:36 GMT 2009 by colin Stanwell-Smith
Thanks for reply, sorry about delay. I agree it is useful for comparison with film (i.e.very thin) materials to to use a force per unit width but it is still confusing to use the words strength and stiffness which have 3D meanings and are still relevant even to very thin materials, because as another has pointed out everything has thickness. I still don't understand the extrapolation. 2.9 micro newtons on a circumference of about 3 microns is a shear stress of about 1 N/m, not 55 and why scale to a thickness (kitchen film) if the strength is per length?
Fri Jul 18 17:51:19 BST 2008 by Mark
I say let the Science-fiction applications begin!
Sat Jul 19 02:49:07 BST 2008 by Stephen J. Brown
The novel nature of graphene appears to be due to it being an example of a two-dimensional crystal and maybe less so than it being a hexagram arrangement of carbon atoms. Maybe one day we will find even more amazing 2-d crystals made from other arrangements of one or more different types of atom.
Physics consists mainly of particles and paradoxes. The double-slit experiment, for example, shows that light can behave as both a wave and a particle, and that the way we observe it makes it one or the other.
According to the experiment, if you observe which of the two slits the light passes through, it behaves like a particle but if you observe the screen which it falls on, it behaves like a wave.
Oh, but the confusion doesn’t end just there. According to an experiment proposed by John Wheeler in 1978 and executed by scientists in 2007, observing a particle now can change what happened to another one in the past.
If you let light pass through the slits and only afterward observe which way it came through, the measurement will "retroactively force it to have passed through one or the other."
This retroactive effect spans only the tiniest fraction of a second. Wheeler suggested, however, that light from distant stars can be observed the same way, which would alter what happened up to 1 million years ago. And they say you can't change the past.
If this sounded too confusing, you can get a more detailed explanation here.
Illite is a non-expanding, clay-sized, micaceous mineral. Illite is a phyllosilicate, or layered alumino-silicate. Its structure is constituted by the repetition of Tetrahedron – Octahedron – Tetrahedron (TOT) layers. The interlayer space is mainly occupied by poorly hydrated potassium cations, which are responsible for the absence of swelling. Structurally, illite is quite similar to muscovite, with slightly more silicon and water and slightly less tetrahedral aluminium and interlayer potassium. The chemical formula is given as (K,H3O)(Al,Mg,Fe)2(Si,Al)4O10[(OH)2,(H2O)], but there is considerable ion substitution. It occurs as aggregates of small monoclinic grey to white crystals. Due to the small size, positive identification usually requires x-ray diffraction analysis. Illite occurs as an alteration product of muscovite and feldspar in weathering and hydrothermal environments. It is common in sediments, soils, and argillaceous sedimentary rocks, as well as in some low-grade metamorphic rocks. Glauconite in sediments can be differentiated by x-ray analysis.
The cation exchange capacity (CEC) of illite is smaller than that of smectite but higher than that of kaolinite, typically around 20 – 30 meq/100 g.
Illite was first described for occurrences in the Maquoketa shale in Calhoun County, Illinois, USA, in 1937. The name was derived from its type location in Illinois. Illite is also called hydromica or hydromuscovite. Brammallite is a sodium rich analogue.
Illite is also used in food supplements, with claimed benefits that range from bowel function to reduction of heavy metals in the blood. Apparently, a French company, Argiletz, provides a wide range of products which are offered for sale in the UK and elsewhere. "Green clay", a term used in several languages, often contains illite. In Scotland, internal uses of illite probably date back to Celtic times.
This week researchers announced that they've confirmed the presence of dark energy in the universe -- a mysterious force that is pushing the expansion of the universe ever outwards. The new finding, reported by astronomers using the Chandra X-Ray Observatory, helps explain how the universe got the way it is, and the way it might end.
Previous work trying to measure the effects of dark energy examined supernovae, tracking the overall rate of expansion of the universe. The new work looks at a separate measure, the formation of structure within galactic clusters. Data from the two approaches agree, both confirming the presence of the force, and helping astrophysicists refine their calculations of its strength. Data from the new research also indicate that the universe is not likely to end with a fatal 'Big Rip.' We'll talk with one of the researchers on the project.
Produced by Charles Bergquist, Director and Contributing Producer
CLOSER TO HOME: In this artist's depiction of COROT 9 b, the Jupiter-size planet's relatively distant host star is visible in the background. The star is very similar to our own sun in both size and temperature. Image: Instituto de Astrofísica de Canarias
A French spacecraft designed to discover new worlds beyond our solar system has made one of its most significant finds yet—a planet that looks like a cousin to those in our own celestial backyard. COROT 9 b, named by astronomical convention for the instrument that discovered it, the COROT (for COnvection, ROtation and planetary Transits) satellite, is less massive than Jupiter and orbits a star, called COROT 9, at about the same distance Mercury orbits the sun. The new world is of fairly average size, but it is the most temperate exoplanet yet whose properties are well known in orbit around a sunlike star.
A largely European research team reports the discovery in the March 18 issue of Nature. (Scientific American is part of Nature Publishing Group.)
Like NASA's Kepler spacecraft, launched in 2009, the three-year-old COROT tracks the brightness of stars with a photometer, looking for periodic dimming that might be attributable to the passage of a planet across the face of its host star. Actually confirming a planetary cause of that dimming takes painstaking follow-up work at telescopes on the ground. Most often the researchers look for Doppler shifts in the host star's light as the planet's gravity regularly tugs the star nearer to and then farther from Earth.
The degree of dimming starlight during the passage of a planet across its star, a type of partial eclipse known as a transit, indicates the body's diameter. The velocity at which the star wobbles under the planet's influence, on the other hand, reveals the object's mass. With both transit and stellar-wobble observations of a planet, astronomers can paint a fairly complete picture of a world they have only indirectly observed.
"With transits we can learn much more about the planets than with any other method to find planets," says lead study author Hans Deeg, an astronomer at Spain's Institute of Astrophysics of the Canary Islands. "It's the only method currently where we can measure the size of the planets fairly reliably." On its own, a measurement of the star's wobble can only reveal a lower limit to the planet's mass, and in some cases the true mass turns out to be many times greater than that lower bound.
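The size measurement Deeg describes comes down to a simple ratio: the fractional dimming during a transit is roughly the square of the planet-to-star radius ratio. A minimal sketch with textbook radii for Jupiter and the Sun (illustrative values, not figures from the study):

```python
# Fractional dimming during a transit is roughly (R_planet / R_star)^2.
# Illustrative textbook radii (not values from the study):
R_sun = 6.957e8   # solar radius, m
R_jup = 7.149e7   # Jupiter radius, m (COROT 9 b is roughly Jupiter-size)

depth = (R_jup / R_sun) ** 2
print(f"transit depth ~ {depth * 100:.1f}% of the starlight")  # about 1%
```

A roughly one percent dip is large enough for a space photometer to catch; an Earth-size planet, with a radius about a tenth of Jupiter's, would dim its star a hundred times less, which is why Earth analogues are so much harder to find.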
Residing 1,500 light-years away in the constellation Serpens Cauda, COROT 9 b has about the same diameter as Jupiter and is about 85 percent as massive. It keeps a much greater distance from its host star than the other transiting planets discovered to date, almost all of which reside in scalding hot orbits less than 10 million kilometers from their stars. The newfound world circles its star at about 60 million kilometers, leaving it with a relatively mild temperature that Deeg's group estimates to be between minus 20 degrees Celsius and 150 degrees C, depending on its atmospheric makeup. For comparison, many exoplanets are so close to their stars that their temperatures exceed 1,000 degrees C. The plentiful population of massive exoplanets in star-nuzzling orbits has been dubbed the "hot Jupiters"; COROT 9 b might be called a warm Jupiter—or even a cool one, if its true temperature turns out to be at the lower end of the estimated range.
"This is the first one that we can study in some detail that is relatively cool and which doesn't vary very much in its temperature," Deeg says. The only known transiting planet with a comparably long orbit, called HD 80606 b, has an extremely eccentric orbit; the distance between HD 80606 b and its star varies greatly throughout the planet's orbit, driving temperature changes of several hundred degrees in a matter of hours.
To confirm that the transits of COROT 9 b recorded in 2008 by the COROT satellite were indeed planetary, the researchers first took two follow-up readings of the star using a spectrograph in France, a rough sketch that was consistent with the presence of a planet. Next they ruled out a false positive, usually caused by an eclipsing binary-star system in the background, with two relatively small telescopes on the ground, which offer better spatial resolution than COROT does. But most importantly, the team detected the host star's telltale wobble using several nights of observations on the world-class HARPS spectrograph at La Silla Paranal Observatory in Chile.
The stepwise confirmation process highlights the directives behind follow-up work—to confirm exoplanet discoveries efficiently and inexpensively, explains Natalie Batalha, a professor of physics and astronomy at San Jose State University who did not contribute to the new research. "You employ cheaper resources to vet out the false positives before you send these candidates to the really expensive, big telescopes like Keck [in Hawaii] or the HARPS telescope," Batalha says.
Deeg notes that COROT 9 b, which takes 95 days to circle its star, is nearing the upper limit of the orbital period that COROT can detect—the spacecraft shifts its view every 150 days, and its detectors must record a minimum of two transits in an observation window to know the planet's orbital period. Kepler, which will keep a continuous watch on a patch of stars for more than three years, is better suited to finding planets like our own in terms of orbital periods as well as other parameters, although it will likely be a few years before it moves from the hot objects it has already discovered to cooler, potentially habitable worlds, whose transits are subtler and less frequent. A truly Earth-like planet will circle its star at a remove comparable to the Earth–sun distance, so it will pass in front of its star just once a year. (The laws of planetary motion derived by Johannes Kepler, the 17th-century German astronomer and namesake of the Kepler mission, show that the duration of an orbit depends on its size.)
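Kepler's third law makes the quoted 95-day period easy to sanity-check from the orbital distance given earlier. A minimal sketch, assuming the host star has exactly one solar mass (the article only describes it as sunlike):

```python
import math

# Kepler's third law: T = 2*pi*sqrt(a^3 / (G*M)).
# Assumption: the host star COROT 9 has exactly one solar mass; the article
# only describes it as sunlike, so this is an approximation.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_star = 1.989e30  # assumed stellar mass (one solar mass), kg
a = 60e9           # orbital distance from the article, ~60 million km, in m

T = 2 * math.pi * math.sqrt(a**3 / (G * M_star))
print(f"orbital period ~ {T / 86400:.0f} days")  # article quotes ~95 days
```

The result lands within a few percent of the reported 95 days, with the small gap attributable to the rounded orbital distance and the star's true mass.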
Batalha, a Kepler co-investigator, says her team can learn a lot from COROT. Just one year into its mission, Kepler has already discovered five confirmed exoplanets and, Batalha says, has identified hundreds of additional candidates that await further investigation. "As a Kepler scientist, COROT is kind of paving the way with regard to space-based transit detection," Batalha says. "So I'm always really interested in seeing how they're dealing with a large volume of data and how they follow up those candidates and lead themselves to confirmation."
The shuttle Columbia landing after mission STS-58. The drag chute is deployed to slow the shuttle down.
Courtesy of NASA
Shuttle Returns Safely
News story originally written on May 7, 1998
The shuttle Columbia returned after 16 days in space. Some interesting things happened after the shuttle landed!
There were many scientists waiting at the landing site. In fact, 200 researchers awaited Columbia's arrival so they could begin dissecting the animals that had traveled aboard the shuttle. Scientists would work with 2,000 fish, snails, crickets and rodents that flew as part of Columbia's Neurolab. A few dozen baby rats were also of interest. It was a race against gravity: the sooner the animals could be tested, the greater the chance of seeing microgravity's effect on the nervous system.
You may wonder why there was all of this fuss about some animals that were sent up on the shuttle. The main focus of the research done on the shuttle was the nervous system. The nervous system is made up of the brain, spinal cord, nerves and sensory organs. Scientists need to know about the nervous system because if that system of our bodies isn't working right, we could end up with motion sickness, insomnia or even muscular dystrophy.
Thermal emission spectra provide specific information about the composition of debris ejected during the Deep Impact experiment. Comparisons with previously studied dusty objects, including Comet Hale-Bopp and a dust disk around a forming star named HD 100546, help us understand and interpret the spectrum of Tempel 1 after impact, which is very different from the spectrum taken before impact.
Infrared spectra spanning 5- to 35-µm of Tempel 1 and related objects are shown. From bottom to top: (i) spectrum of the ambient coma, taken by Spitzer Space Telescopes Infrared Spectrometer (IRS) 23 hours before impact (I); (ii) spectrum of the impact debris from the Deep Impact collision taken at I + 0.75 hours after impact; (iii) Infrared Space Observatory (ISO) spectrum of Comet Hale-Bopp (Crovisier et al. 1997 Science, 275, 1904.; (iv) ISO spectrum of Young Stellar Object (YSO) HD100546 (Malfait et al. 1998, A&A, 332, L25-L28). Note the logarithmic scale.
Photo Credit: NASA/UM C. M. Lisse et al., Science 313, 635 (2006); published online 13 July 2006 (10.1126/science.1124694). Reprinted with permission from AAAS.
+ Larger image | <urn:uuid:7ad779ca-4fe8-4a9d-ba59-901a0e54fa62> | 3 | 284 | Knowledge Article | Science & Tech. | 62.346518 |
Issue Date: January 28, 2013
Light-Driven Flow Reaction Features Molecular Acrobatics
By shining ultraviolet light on solutions of alkene-substituted pyrroles, chemists have enticed the molecules to perform never-before-seen molecular acrobatics. Through a twisting and turning cycloaddition-rearrangement reaction, simple pyrroles yield complex tricyclic aziridines. The reaction could be a boon for natural product and small-molecule drug synthesis (Angew. Chem. Int. Ed., DOI: 10.1002/anie.201208892).

Photochemistry is usually simple, efficient, inexpensive, and catalyst-free. Plus, in a single step it often leads to products with molecular complexity that can’t be matched without many steps in a conventional synthesis. Katie G. Maskill, Kevin I. Booker-Milburn, and coworkers at the University of Bristol, in England, found that by irradiating various N-butenylpyrroles in acetonitrile with 254-nm light they can produce batches of tricyclic aziridines in yields of up to 60%.

Booker-Milburn’s group had previously developed a continuous-flow reactor for scaling up photochemical reactions. In the new work, the team used a modified version of the reactor to make the aziridines, such as the one shown, at a rate of about 1 g per hour. That scale is difficult to achieve with batch photochemical reactions.
- Chemical & Engineering News
- ISSN 0009-2347
- Copyright © American Chemical Society | <urn:uuid:1befdb48-2ed6-43bd-bbac-cba3b42993c0> | 3.046875 | 340 | Truncated | Science & Tech. | 38.69439 |
Atomic Number: 8
Atomic Weight: 15.9994
Discovered By: Joseph Priestley, Carl Wilhelm Scheele
Discovery Date: 1774 (England/Sweden)
Electron Configuration: [He]2s²2p⁴
Word Origin: Greek oxys (sharp, acid) and genes (born, forming): 'acid former'
Isotopes: Nine isotopes of oxygen are known. Natural oxygen is a mixture of three isotopes.
Properties: Oxygen gas is colorless, odorless, and tasteless. The liquid and solid forms are a pale blue color and are strongly paramagnetic. Oxygen supports combustion, combines with most elements, and is a component of hundreds of thousands of organic compounds. Ozone (O3), a highly active compound with a name derived from the Greek word for 'I smell', is formed by the action of an electrical discharge or ultraviolet light on oxygen.
Uses: Oxygen was the atomic weight standard of comparison for the other elements until 1961 when the International Union of Pure and Applied Chemistry adopted carbon 12 as the new basis. It is the third most abundant element found in the sun and the earth, and it plays a part in the carbon-nitrogen cycle. Excited oxygen yields the bright red and yellow-green colors of the Aurora. Oxygen enrichment of steel blast furnaces accounts for the greatest use of the gas. Large quantities are used in making synthesis gas for ammonia, methanol, and ethylene oxide. It is also used as a bleach, for oxidizing oils, for oxy-acetylene welding, and for determining the carbon content of steel and organic compounds. Plants and animals require oxygen for respiration. Hospitals frequently prescribe oxygen for patients. Approximately two thirds of the mass of the human body and nine tenths of the mass of water are oxygen.
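The water figure in the last sentence can be spot-checked from the atomic weight given above. A quick sketch (hydrogen's standard atomic weight, 1.008, is assumed here rather than taken from this table):

```python
# Mass fraction of oxygen in water (H2O), using the atomic weight above.
O = 15.9994   # from this table
H = 1.008     # standard atomic weight of hydrogen (assumed)

water_mass = 2 * H + O
oxygen_fraction = O / water_mass
print(round(oxygen_fraction, 3))   # 0.888 - roughly nine tenths, as stated
```

The result, about 0.888, matches the rough "nine tenths" figure quoted above.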
Element Classification: Non-Metal
Density (g/cc): 1.149 (@ -183°C)
Melting Point (K): 54.8
Boiling Point (K): 90.19
Appearance: Colorless, odorless, tasteless gas; pale blue liquid
Atomic Volume (cc/mol): 14.0
Covalent Radius (pm): 73
Ionic Radius: 132 (-2e)
Specific Heat (@20°C J/g mol): 0.916 (O-O)
Pauling Negativity Number: 3.44
First Ionizing Energy (kJ/mol): 1313.1
Oxidation States: -2, -1
Lattice Structure: Cubic
Lattice Constant (Å): 6.830
Magnetic Ordering: Paramagnetic
References: Los Alamos National Laboratory (2001), Crescent Chemical Company (2001), Lange's Handbook of Chemistry (1952) | <urn:uuid:f28cfd0b-a4af-4472-b260-350abe95d30e> | 3.40625 | 581 | Knowledge Article | Science & Tech. | 48.191146 |
A helix (pl: helixes or helices) is a type of smooth space curve, i.e. a curve in three-dimensional space. It has the property that the tangent line at any point makes a constant angle with a fixed line called the axis. Examples of helixes are coil springs and the handrails of spiral staircases. A "filled-in" helix – for example, a spiral ramp – is called a helicoid. Helices are important in biology, as the DNA molecule is formed as two intertwined helices, and many proteins have helical substructures, known as alpha helices. The word helix comes from the Greek word ἕλιξ, "twisted, curved".
Helices can be either right-handed or left-handed. With the line of sight along the helix's axis, if a clockwise screwing motion moves the helix away from the observer, then it is called a right-handed helix; if towards the observer then it is a left-handed helix. Thus a helix cannot be described as 'spinning clockwise or anti-clockwise'. Handedness (or chirality) is a property of the helix, not of the perspective: a right-handed helix cannot be turned or flipped to look like a left-handed one unless it is viewed in a mirror, and vice versa.
The pitch of a helix is the width of one complete helix turn, measured parallel to the axis of the helix.
A conic helix may be defined as a spiral on a conic surface, with the distance to the apex an exponential function of the angle indicating direction from the axis. An example is the Corkscrew roller coaster at Cedar Point amusement park.
A curve is called a general helix or cylindrical helix if its tangent makes a constant angle with a fixed line in space. A curve is a general helix if and only if the ratio of curvature to torsion is constant.
A curve is called a slant helix if its principal normal makes a constant angle with a fixed line in space. It can be constructed by applying a transformation to the moving frame of a general helix.
Mathematical description
A circular helix of radius a and pitch 2πb is described by the following parametrisation:
x(t) = a cos(t), y(t) = a sin(t), z(t) = bt.
In cylindrical coordinates (r, θ, h), the same helix is parametrised by:
r(t) = a, θ(t) = t, h(t) = bt.
Except for rotations, translations, and changes of scale, all right-handed helices are equivalent to the helix defined above. The equivalent left-handed helix can be constructed in a number of ways, the simplest being to negate any one of the x, y or z components.
Arc length, curvature and torsion
The length of a circular helix of radius a and pitch 2πb, expressed in rectangular coordinates as (a cos(t), a sin(t), bt) for t from 0 to T, is T√(a² + b²). Its curvature is a/(a² + b²) and its torsion is b/(a² + b²), so the ratio of curvature to torsion is the constant a/b.
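These closed-form quantities can be spot-checked numerically. The sketch below (assuming the standard parametrisation (a cos t, a sin t, bt)) compares a polyline estimate of the arc length with T√(a² + b²), and confirms that the curvature-to-torsion ratio is constant, as the general-helix criterion requires:

```python
import math

a, b = 3.0, 2.0                    # radius a, pitch 2*pi*b
T = 4 * math.pi                    # parameter range: two full turns

def point(t):
    # Standard right-handed circular helix parametrisation.
    return (a * math.cos(t), a * math.sin(t), b * t)

# Polyline approximation of the arc length over [0, T].
n = 100_000
pts = [point(i * T / n) for i in range(n + 1)]
length = sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

exact = T * math.sqrt(a * a + b * b)     # closed-form arc length
curvature = a / (a * a + b * b)
torsion = b / (a * a + b * b)

print(length, exact)                     # the two estimates agree closely
print(curvature / torsion, a / b)        # constant ratio: the general-helix criterion
```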
A natural right-handed helix, made by a climber plant
A charged particle in a uniform magnetic field following a helical path
See also
- Alpha helix
- Boerdijk–Coxeter helix
- Double helix
- Helical symmetry
- Helix angle
- Seashell surface
- Triple helix
- Weisstein, Eric W., "Helicoid", MathWorld.
- ἕλιξ, Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus
- "Double Helix" by Sándor Kabai, Wolfram Demonstrations Project.
- O'Neill, B. Elementary Differential Geometry, 1961 pg 72
- O'Neill, B. Elementary Differential Geometry, 1961 pg 74
- Izumiya, S. and Takeuchi, N. (2004) New special curves and developable surfaces. Turk J Math, 28:153–163.
- Menninger, T. (2013), An Explicit Parametrization of the Frenet Apparatus of the Slant Helix. arXiv:1302.3175.
- Weisstein, Eric W., "Helix", MathWorld. | <urn:uuid:5a8e0062-b03e-49d9-aeec-db77391cf0ab> | 4.1875 | 881 | Knowledge Article | Science & Tech. | 49.256289 |
Printf for Java is a pure Java implementation of the famous printf data formatting function. It supports precompiled format strings for faster processing and an object-oriented data model for maximum flexibility. Printf for Java has helped hundreds of programmers port legacy C applications to Java.
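For readers unfamiliar with the printf convention the library ports to Java, the idea can be sketched with Python's built-in %-style formatting (this illustrates the general printf conversion syntax only, not Sharkysoft's specific API):

```python
# Classic printf-style conversions: %-10s left-justified string,
# %5d width-5 integer, %08.3f zero-padded float with 3 decimals.
row = "%-10s %5d %08.3f" % ("oxygen", 8, 15.9994)
print(row)
```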
UniT is a general purpose text generator written in and with full access to Java. It can be used to generate any text of any format, including program code of any programming language (Java, C, C++, SQL, etc.), or documentation of any format (HTML, XML, RTF, LaTex, etc.). It is similar to JSP, but easier, more general, and more flexible. With an appropriate parser (XML-Parser, like DOM), UniT can also be used to convert text documents.
Lava's jDES is a highly optimized, highly configurable implementation of the Data Encryption Standard (DES) in Java. jDES was designed with four goals in mind: accuracy, speed, ease of use, and configurability. jDES utilizes Sharkysoft's proprietary reformulation of DES optimized for 32-bit machines. jDES was initially released as an integrated feature of Lava, a general-purpose class library for Java. By popular demand, it is now distributed in its own bundle.
Lava's jWave is a versatile set of packages for Java that allow you to easily create and process RIFF files, with a special emphasis on reading and writing Microsoft PCM .wav files. jWave was initially released as an integrated feature of Lava, a general-purpose class library for Java. By popular demand, it is distributed in its own bundle.
Lava is a class library for Java. Its packages evolved from several programmers' need for features in Java that were not available in the standard Java library. Although Lava includes several GUI-related classes, its main emphasis is in batch-oriented data processing and reporting applications. Thus, there is a heavy emphasis on stream parsing, data manipulation, and text formatting.
RealChat is an affordable chat software solution for your Web site. Its features include colored text, pictures, sounds, emotions, multiple rooms, private messages, user-created rooms, hidden rooms, administrative manager, console and file logging, and more. The package includes client and server software. An online demo available, and an evaluation version is available for free download.
DigitizeIt digitizes scanned graphs and charts. Graphs can be loaded in nearly all common image formats (including gif, tiff, jpeg, bmp, png, psd, pcx, xbm, xpm, tga, pct), pasted from the clipboard, or imported via a screenshot. Digitizing of line and scatter plots occurs automatically, and manual digitizing via mouse clicks is also possible. Data values are transformed to a specified axes system and can be saved in ASCII format, ready to use in many other applications such as Microcal Origin or Excel. Axes can be linear, logarithmic, or reciprocal scale. Multiple data sets can be defined and edited. Tilted and distorted graphs can be handled. Comprehensive online help is included. Java 1.4 is required. | <urn:uuid:d78e9eda-b587-494e-a0d8-865f533a2e9e> | 2.703125 | 660 | Content Listing | Software Dev. | 37.861414 |
RSA_generate_key — generate RSA key pair
RSA *RSA_generate_key(int num, unsigned long e,
void (*callback)(int, int, void *), void *cb_arg);
RSA_generate_key() generates a key pair and returns it in
a newly allocated RSA structure. The pseudo-random
number generator must be seeded prior to calling RSA_generate_key().
The modulus size will be num bits, and
the public exponent will be e. Key sizes with num < 1024
should be considered insecure. The exponent is an odd number, typically
3, 17 or 65537.
A callback function may be used to provide feedback about
the progress of the key generation. If callback is not NULL,
it will be called as follows:
While a random prime number is generated,
it is called as described in BN_generate_prime(3).
When the n-th randomly generated prime is rejected
as not suitable for the key, callback(2, n, cb_arg) is called.
When a random p has been found with p-1 relatively
prime to e, it is called as callback(3, 0, cb_arg).
The process is then repeated for prime q with callback(3, 1, cb_arg).
If key generation fails, RSA_generate_key() returns NULL;
the error codes can be obtained by ERR_get_error(3).
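The prime-search and callback flow described above can be made concrete with a toy sketch. This pure-Python version is purely illustrative (tiny key sizes, trial-division primality, no padding — absolutely not secure, and it mirrors the man page's callback shape, not OpenSSL's actual internals; the BN_generate_prime progress callbacks are omitted):

```python
import math
import random

def is_prime(n):
    # Trial division: fine for the toy bit sizes used here.
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

def gen_prime(bits, e, which, callback):
    # Search for a prime p with gcd(p - 1, e) == 1, firing the
    # progress callbacks the way the man page describes.
    rejected = 0
    while True:
        p = random.randrange(2 ** (bits - 1), 2 ** bits) | 1
        if not is_prime(p):
            continue
        if math.gcd(p - 1, e) != 1:
            rejected += 1
            callback(2, rejected, None)   # n-th prime rejected as unsuitable
            continue
        callback(3, which, None)          # suitable prime found (0 for p, 1 for q)
        return p

def toy_rsa_generate_key(num, e, callback):
    p = gen_prime(num // 2, e, 0, callback)
    q = gen_prime(num // 2, e, 1, callback)
    while q == p:                         # guard against an (unlikely) duplicate
        q = gen_prime(num // 2, e, 1, callback)
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)                   # private exponent: e^-1 mod phi(n)
    return n, e, d

n, e, d = toy_rsa_generate_key(32, 65537, lambda a, b, arg: None)
m = 42
assert pow(pow(m, e, n), d, n) == m       # encrypt, then decrypt, round-trips
```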
callback(2, x, cb_arg) is used with two different meanings.
RSA_generate_key() goes into an infinite loop for illegal input values.
ERR_get_error(3), rand(3), rsa(3), RSA_free(3)
The cb_arg argument was added in SSLeay | <urn:uuid:b40104a3-1759-42a3-ae0c-c0d8f14f4d7c> | 2.921875 | 373 | Documentation | Software Dev. | 46.931187 |
|Browse All Terms|
|Beginning With||By Language|
|A B C D E F G H I J K L M N O P Q R S T U V W X Y Z :: All||
Also called “molecular manufacturing” because it involves manipulating matter on an atom-by-atom or molecule-by-molecule basis to attain desired configurations. An example of “natural” nanotechnology is the development of a fertilized, single-cell ovum into a mature human being.
The ability to work at the atomic, molecular, and supramolecular levels, in a scale of about 1 to 100 nanometers, in order to create, manipulate and use materials, devices, and systems that have novel properties and functions because of the small scale of their structures. All materials and systems establish their foundation at nanoscale. A water molecule is about 1 nm in diameter, and the smallest transistors measure about 20nm. DNA molecules are about 2.5 nm wide, a typical protein between 1 and 20 nm in diameter.
Do you have a term that should be included in the glossary?
Submit a term for review | <urn:uuid:eee7af7b-a268-4065-b697-470caa8efc73> | 2.96875 | 234 | Structured Data | Science & Tech. | 46.590297 |
Black holes cannot be observed directly and therefore cannot
be "discovered". The indirect evidence for two kinds of black holes
is now overwhelming: those of a few solar masses produced by supernovae,
and much larger ones at the centers of some galaxies.
The existence of bodies with
gravitational fields strong enough to allow nothing to escape has
been a topic of speculation for hundreds of years. Einstein's general
theory of relativity (published in 1916) predicts just the kinds
of object we are now inferring.
Perhaps the first object to be
generally recognized as a black hole is the X-ray binary star Cygnus
X-1. Its effect on its companion star suggested as early as 1971
that it must be a compact object with a mass too high for it to
be a neutron star. (That was 2 years after the American astronomer
John Wheeler coined the term 'black hole').
F o r m a t i o n
How do they form? Perhaps if a star was large enough and it
collapsed, maybe nothing (not even light) could escape from it. All
the matter of a star (even it's energy) would be drawn into a denser
and denser single point. At first Einstein thought that this couldn't
be possible! He thought something in nature would prevent this. The
"Cosmological Constant" would have to prevent this. He later regretted
this and said it was a huge mistake! He found his equations pointing
toward difficult possibilities: the expansion of the universe and
the collapse of matter into an infinitely dense point. Einy buddy,
you're a GENIUS! Have a little more faith in your mathematics! In recent
years we've already proven that the universe is in fact expanding, and
in a few years I'm sure we'll prove the existence of black holes.
Some believe that if two black holes connected in some mysterious
way it will become in essence a worm hole.
D e t e c t i o n
If it emits no visible light how can we find them? Black holes
devour everything around them, no matter what. Planets, stars,
even whole galaxies! We can tell it's there because it is feeding
on other energy masses that we see being dragged into the center.
Some black holes that astronomers have detected are upwards of 10
solar masses (1 solar mass = the mass of the Sun). We can also see stars
revolving around invisible points of incredible mass: points that
are obviously there, but that we cannot see.
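To make "compact" concrete: the radius of a black hole's event horizon is the Schwarzschild radius, r = 2GM/c². A quick sketch for the 10-solar-mass figure mentioned above (the constants are standard reference values):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # one solar mass, kg

def schwarzschild_radius_km(solar_masses):
    """Event-horizon radius r = 2GM/c^2, returned in kilometres."""
    m = solar_masses * M_SUN
    return 2 * G * m / c**2 / 1000

print(round(schwarzschild_radius_km(1), 1))    # ~3.0 km: the Sun, if collapsed
print(round(schwarzschild_radius_km(10), 1))   # ~29.5 km: a 10-solar-mass black hole
```

So a 10-solar-mass black hole would fit comfortably inside a large city.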
I n s i d e
What would it be like to enter a black hole?
This is a place where all of Einstein's and Newton's theories and
equations fail. A singularity, or a worm hole? What ever happens there
are a few things we know...
As you approach the center you'll
begin to stretch. On Earth there is a slight difference between
the gravity pulling on your feet and on your
head. Your body is slightly being stretched out. Well, in a black
hole it is much greater. Nothing can survive this stretching. You'll
be stretched and crushed into a singularity. "Spaghettification," as
cosmologists like to say. That is what we know of, but what about
worm holes? Skipping around from universe to universe through these
worm holes is not too unreal. Could this be the fastest way to travel
through space and time?
Here are some of my theories again. All
the galaxies are spinning around a singular black hole waiting to
be sucked in. Slowly, but it is an inevitable fate
for all galaxies. Is there any way to stop them? Essentially no,
but I believe if there is no more matter near for it to suck up
it will have to die out then. Right? I guess that's my "cosmological
constant". If my theories are correct we have eons before this will
happen. Our sun will burn out and we will have to find a new blue
planet to live on. Don't worry, we'll be long dead and forgotten | <urn:uuid:9bff1a3c-99f3-46c6-a274-15a5cf310b5e> | 3.8125 | 868 | Personal Blog | Science & Tech. | 66.325256 |
A diagram consists of a rectangle ABCD and a triangle DXY so that X and Y are points on the line segments AB and BC respectively and angle DXY = 90 degrees. If all the line segments in the diagram have integer lengths, then we call it a Sophie Diagram.
a) show that DA/XB = AX/BY = XD/YX
b) A particular Sophie Diagram has DX = 729 and DY = 845. Find the length and width of rectangle ABCD.
This is urgent!!
Thanks for your help! | <urn:uuid:4776cb63-3c61-4081-aefa-34f814c37e55> | 2.921875 | 125 | Q&A Forum | Science & Tech. | 90.004118 |
The remnant of the supernova SN 1006 seen at many different wavelengths
This remarkable image was created from pictures taken by different telescopes in space and on the ground. It shows the thousand-year-old remnant of the brilliant SN 1006 supernova, as seen in radio (red), X-ray (blue) and visible light (yellow).
Radio: NRAO/AUI/NSF/GBT/VLA/Dyer, Maddalena & Cornwell, X-ray: Chandra X-ray Observatory; NASA/CXC/Rutgers/G. Cassam-Chenaï, J. Hughes et al., Visible light: 0.9-metre Curtis Schmidt optical telescope; NOAO/AURA/NSF/CTIO/Middlebury College/F. Winkler and Digitized Sky Survey.
About the Image
|Release date:||14 February 2013, 20:00|
|Size:||3311 x 3311 px|
About the Object
|Type:||• Milky Way : Nebula : Type : Supernova Remnant|
• X - Nebulae
|Distance:||7000 light years|
Colours & filters
|X-ray||Chandra X-ray Observatory|
|Optical||0.9-metre Curtis Schmidt|
|Radio||Green Bank Telescope|
|Radio||Very Large Array| | <urn:uuid:352591a0-6c6d-4ef0-b533-6c3a2865c71d> | 2.90625 | 293 | Truncated | Science & Tech. | 59.638824 |
Mass transfer between double white dwarfs
UNSPECIFIED. (2004) Mass transfer between double white dwarfs. MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, 350 (1). pp. 113-128. ISSN 0035-8711Full text not available from this repository.
Official URL: http://dx.doi.org/10.1111/j.1365-2966.2004.07564.x
Three periodically variable stars have recently been discovered (V407 Vul, P=9.5 min; ES Cet, P=10.3 min; RX J0806.3+1527, P=5.3 min) with properties that suggest that their photometric periods are also their orbital periods, making them the most compact binary stars known. If true, this might indicate that close, detached, double white dwarfs are able to survive the onset of mass transfer caused by gravitational wave radiation and emerge as the semi-detached, hydrogen-deficient stars known as the AM CVn stars. The accreting white dwarfs in such systems are large compared to the orbital separations. This has two effects. First, it makes it likely that the mass-transfer stream can hit the accretor directly. Secondly, it causes a loss of angular momentum from the orbit which can destabilize the mass transfer unless the angular momentum lost to the accretor can be transferred back to the orbit. The effect of the destabilization is to reduce the number of systems which survive mass transfer by as much as one hundredfold. In this paper we analyse this destabilization and the stabilizing effect of a dissipative torque between the accretor and the binary orbit. We obtain analytical criteria for the stability of both disc-fed and direct impact accretion, and we carry out numerical integrations to assess the importance of secondary effects, the chief one being that otherwise stable systems can exceed the Eddington accretion rate. We show that to have any effect upon survival rates, the synchronizing torque must act on a time-scale of the order of 1000 yr or less. If synchronization torques are this strong, then they will play a significant role in the spin rates of white dwarfs in cataclysmic variable stars as well.
|Item Type:||Journal Article|
|Subjects:||Q Science > QB Astronomy|
|Journal or Publication Title:||MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY|
|Publisher:||BLACKWELL PUBLISHING LTD|
|Date:||1 May 2004|
|Number of Pages:||16|
|Page Range:||pp. 113-128|
Actions (login required) | <urn:uuid:62f2d357-0aed-4c39-bbab-364c14fb429b> | 2.828125 | 572 | Academic Writing | Science & Tech. | 55.491702 |
Science subject and location tags
Articles, documents and multimedia from ABC Science
Wednesday, 4 April 2012
StarStuff Podcast Scientists say it's time to abandon existing theories that Earth is composed of the same material as chondritic meteoroids. Plus, pinpointing when dark energy became the dominant force in the universe, and North Korea prepares a controversial missile launch.
Monday, 2 April 2012
Dark patches visible across much of the northern Martian hemisphere are volcanic glass according to a new study
Friday, 30 March 2012
For the first time, huge solar tornadoes have been filmed swirling deep inside the solar corona.
Wednesday, 28 March 2012
StarStuff Podcast Chemical samples that show the Earth and the lunar surface are virtually identical challenge theories about how the Moon was born. Plus Mercury's unusual internal dynamics raise fresh questions, and an Australian submarine reaches lowest point on Earth.
Wednesday, 21 March 2012
StarStuff Podcast Ancient rocks indicate early Earth regularly flipped between an oxygen-rich atmosphere and a thick hydrocarbon haze like Saturn's moon Titan. Plus CERN confirms neutrinos don't travel faster than the speed of light; and a new theory about why the 'Man in the Moon' faces Earth.
Tuesday, 20 March 2012
Great Moments in Science Why daytime is bright and night-time dark is not as simple as black and white. Dr Karl explains why the vast canopy of stars we see can't light up the night sky.
Wednesday, 14 March 2012
StarStuff Podcast More powerful solar storms hit Earth as the Sun moves towards solar max. Plus astronomers discover oldest galaxy cluster ever detected; and physicists trap antihydrogen.
Tuesday, 13 March 2012
Brain and eye problems have surfaced in astronauts who spent more than a month in space.
Thursday, 8 March 2012
Huge explosions on the sun's surface are sparking the biggest radiation and geomagnetic storm that Earth has experienced in five years, according to space weather experts.
Wednesday, 7 March 2012
StarStuff Podcast Astronomers debate how to deflect potentially destructive asteroids. Plus atmosphere detected on Saturn moon; and light from the Moon used to find life on Earth.
Monday, 5 March 2012
NASA's Cassini spacecraft has detected faints wisps of oxygen in the atmosphere of Saturn's moon Dione.
Wednesday, 29 February 2012
StarStuff Podcast Faulty wire causes 'faster-than-light' particle error. Plus: Astronomers discover water world; and Japanese engineers eye space elevator by 2050.
Wednesday, 22 February 2012
StarStuff Podcast Black hole discovery opens way for galactic archaeology. Plus: scientists detect mysterious microwave haze near Milky Way's centre; and why is Venus slowing down?
Wednesday, 22 February 2012
Ask an Expert How can the universe be 95 billion light years across when it has only been in existence for approx 14.3 billion years?
Thursday, 16 February 2012
Scientists plan to develop a machine that acts almost like a vacuum cleaner to scoop up thousands of abandoned satellite and rocket parts. | <urn:uuid:1c074eeb-529a-4cb5-bd3c-3ee784d904bc> | 2.90625 | 622 | Content Listing | Science & Tech. | 46.358192 |
Archives of Ask A Scientist!
About "Ask A Scientist!"
On September 17th, 1998 the Ithaca Journal ran its first "Ask A Scientist!" article in which Professor Neil Ashcroft, who was then the director of CCMR, answered the question "What is Jupiter made of?" Since then, we have received over 1,000 questions from students and adults from all over the world. Select questions are answered weekly and published in the Ithaca Journal and on our web site. "Ask A Scientist!" reaches more than 21,000 Central New York residents through the Ithaca Journal and countless others around the world through the "Ask a Scientist!" web site.
Across disciplines and across the state, from Nobel Prize winning scientist David Lee to notable science education advocate Bill Nye, researchers and scientists have been called on to respond to these questions. For more than seven years, kids - and a few adults - have been submitting their queries to find out the answer to life's everyday questions.
Did you know that some special materials have four phases of matter? This fourth phase is called the "liquid crystalline" phase (LC). The LC phase has properties that make it both liquid-like and solid-like. A mood ring is an example of an object that takes advantage of the special properties of liquid crystals.
Here is a brief explanation of how mood rings work. The ring is a glass shell filled with molecules that are in the LC phase. When light shines on the ring, certain colors of light will be reflected depending on how the molecules are arranged. The arrangement of molecules depends on your body temperature. When your body temperature changes, the arrangement of molecules also changes, causing a different color of light to be reflected. To the extent that your body temperature indicates your "mood," you can then "see" your mood by the color of light that is reflected.
The liquid crystalline phase was first observed in 1888 by Austrian chemist Friedrich Reinitzer. Surprisingly, nearly 80 years passed before the first commercial products were realized. Today you will find LCs in many electronic display applications (LCD) like laptop computers, digital watches, calculators, and cell phones. These LC phases operate as shutters and can be switched on (light) and off (dark) by the action of an electric current.
- How do speakers produce more than one sound at a time (example: guitar and vocal)?
- How are dryer sheets manufactured?
- How come we have two eyes but see only one of everything?
- Why do scuba divers wear rubber?
- What is in batteries that causes electricity?
- How would gravity function if the earth were a torus?
- Can you explain the darkening of glass by irradiation? I am working with a high school chemistry teacher who would like to be able to use some old glass samples in discussions of atomic structure. Some of the glass has been turned purple through exposure to Cobalt-60.
- What about the atomic structure of a substance determines its color and/or luster?
- Does temperature effect the speed of light?
- It has been said that man cannot produce a perfect sphere. How can that be said if we have nothing perfectly spherical as a reference to begin with? | <urn:uuid:5f27ca98-bf4f-4b80-a8b1-96377cf8078c> | 2.90625 | 669 | Q&A Forum | Science & Tech. | 54.727486 |
Overriding vs Overloading
Method overriding and method overloading are two concepts/techniques/features found in some programming languages. Both concepts allow the programmer to provide different implementations for methods with the same name. Method overriding allows the programmer to provide an alternative implementation within a sub class to a method already defined inside its super class. Method overloading allows the programmer to provide different implementations to multiple methods with the same name (within the same class).
What is Overriding?
As mentioned above, in object-oriented programming languages a class can extend a superclass (parent class). A child class can have its own methods, or it can optionally provide its own implementations of methods already defined in its parent class (or in one of its grandparent classes). When the latter happens, it is called method overriding. In other words, if the child class provides an implementation for a method with the same signature and return type as a method already defined in one of its parent classes, that method is said to be overridden (replaced) by the child class's implementation. So, if there is an overridden method in a class, the runtime system has to decide which method implementation is used. This issue is resolved by looking at the exact type of the object used to invoke the method. If an object of the parent class is used to invoke the overridden method, then the implementation in the parent class is used. Similarly, if it is an object of the child class, then the child class's implementation is used. Modern programming languages like Java, Eiffel, C++ and Python allow method overriding.
What is Overloading?
Method overloading is a feature provided by some programming languages that allows creating more than one method with the same name, distinguished by their parameter lists. In modern programming languages like Java, C#, C++ and VB.NET, this feature is available. You can overload a method by creating another method with the same name but a different method signature (a difference in return type alone is not enough in these languages). For example, if you have method1(type1 t1) and method1(type2 t2) inside the same class, then they are overloaded. The system then has to decide which one is executed when the method is called. This differentiation is made by looking at the type of the parameter(s) passed in to the method. If the argument is of type1, then the first implementation is called, while if it is of type2, then the second implementation is called.
What is the difference between Overriding and Overloading?
Although method overriding and method overloading both give one method name different implementations, there are key differences between these two concepts/techniques. First of all, the methods involved in overriding always live in different classes, while overloaded methods live within the same class. That means overriding is only possible in object-oriented programming languages that allow inheritance, while overloading can be available in a non-object-oriented language as well. In other words, you override a method from a super class, but you overload a method within your own class.
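The two concepts can be seen side by side in a short sketch. Python resolves overriding exactly as described, by the runtime type of the object; classic compile-time overloading does not exist in Python, but `functools.singledispatchmethod` gives comparable by-parameter-type dispatch, resolved at run time instead:

```python
from functools import singledispatchmethod

class Animal:
    def speak(self):
        return "..."

class Dog(Animal):
    def speak(self):                      # overriding: same name and signature, subclass
        return "woof"

class Formatter:
    @singledispatchmethod
    def render(self, value):              # fallback "overload"
        return str(value)

    @render.register
    def _(self, value: int):              # chosen when the argument is an int
        return f"int:{value}"

    @render.register
    def _(self, value: list):             # chosen when the argument is a list
        return f"list of {len(value)}"

# Overriding: resolved by the runtime type of the *object*.
print([a.speak() for a in (Animal(), Dog())])        # ['...', 'woof']

# "Overloading": resolved by the type of the *parameter*.
f = Formatter()
print(f.render(3), f.render([1, 2]), f.render(2.5))  # int:3 list of 2 2.5
```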
Another difference is that overridden methods have the same method name, method signature and return type, but overloaded methods must differ in their parameter lists (the name stays the same; a difference in return type alone is not enough). To differentiate between two overridden methods, the exact type of the object used to invoke them is examined, whereas to differentiate between two overloaded methods the types of the parameters are used. Another key difference is that overloading is resolved at compile time, while overriding is resolved at runtime. | <urn:uuid:763e1be8-ee0c-48e5-a91f-57a776218f38> | 4.21875 | 750 | Knowledge Article | Software Dev. | 31.87907 |
How A Mosquito Works : Automated Mosquito Misting Systems : Homeland Defense Corp. : Valdosta GA
Facts: The Mechanics of a Mosquito
You're out in your backyard during the summer, enjoying your family and grilling your dinner. Ouch! You never heard or felt it coming, but you look down and see a painful, swelling mosquito bite. Later, you feel another one bite you. Do these insects carry disease? What can you do to protect yourself?
Mosquitoes are insects that have been around for millions of years. And it seems that, during this time, mosquitoes have been polishing their skills so that they are now experts at finding people to bite. A mosquito has a group of sensors designed to track its prey, including:
Chemical sensors - mosquitoes can sense carbon dioxide and lactic acid up to 100 feet (about 30 meters) away. Birds and mammals give off these gases as part of breathing. Certain chemicals in sweat can also attract mosquitoes (people who don't sweat much get far fewer mosquito bites).
Visual sensors - if you are wearing clothing that contrasts with the background, and especially if you move while wearing that clothing, mosquitoes can see you and zero in on you. It's a good bet that anything moving is "alive", and therefore full of blood, so this is a good strategy.
Heat sensors - Mosquitoes can detect heat, so they can find warm-blooded mammals and birds very easily once they get close enough.
The word "mosquito" is Spanish for "little fly," and its use dates back to about 1583 in North America (Europeans referred to mosquitoes as "gnats"). Mosquitoes belong to the order Diptera, true flies. Although mosquitoes are like flies in that they have two wings, they are quite unlike flies because their wings have scales, their legs are long and the females have a long mouth part for piercing skin (proboscis).
One of the only ways to stop mosquitoes from finding you is to confuse their chemical receptors with something like pyrethrum.
Adult mosquitoes have three basic body parts:
Head - This is where all the sensors are, along with the biting apparatus. The head has two compound eyes, antennae to sense chemicals and mouth parts called the palpus and the proboscis (only females have the long, skin-piercing proboscis used for biting).
Thorax - This segment is where the two wings and six legs attach. It contains the flight muscles, compound heart, some nerve cell ganglia and tracheoles.
Abdomen - This segment contains the excretory and digestive organs.
So you have a sensor package, a motor package and a fuel processing package -- a perfect design!
There are over 2,700 species of mosquitoes in the world, and there are 13 mosquito genera (plural for "genus") that live in the United States. Of these genera, most mosquitoes belong to three:
Aedes - These are sometimes called "floodwater" mosquitoes because flooding is important for their eggs to hatch. Aedes mosquitoes have abdomens with pointed tips. They include such species as the yellow-fever mosquito (Aedes aegypti) and the Asian tiger mosquito (Aedes albopictus). They are strong fliers, capable of travelling great distances (up to 75 miles/121 km) from their breeding sites. They persistently bite mammals (especially humans), mainly at dawn and in the early evening. Their bites are painful.
Anopheles - These tend to breed in bodies of permanent fresh water. Anopheles mosquitoes also have abdomens with pointed tips. They include several species, such as the common malaria mosquito (Anopheles quadrimaculatus), that can spread malaria to humans.
Culex - These tend to breed in quiet, standing water. Culex mosquitoes have abdomens with blunt tips. They include several species such as the northern house mosquito (Culex pipiens). They are weak fliers and tend to live for only a few weeks during the summer months. They persistently bite (preferring birds over humans) and attack at dawn or after dusk. Their bite is painful.
Some mosquitoes, such as the cattail mosquito (Coquilettidia perturbans), are becoming more prevalent pests as humans invade their habitats. | <urn:uuid:3cb8e045-a17e-419e-80b5-8113b2033da4> | 3.453125 | 897 | Knowledge Article | Science & Tech. | 49.712727 |
Anyone working with visible-light lasers can’t fail to notice the speckle patterns that coherent light produces upon reflection from a scattering surface. Moving either the point of view or the surface itself causes the speckle patterns to appear to slide one way or the other. By merely noticing this pattern, one is performing a simple type of speckle interferometry.
In the conventional instrumentation form of speckle interferometry, the speckle pattern from a diffusely transmitting or reflecting surface is monitored by an imaging detector; movements in the pattern provide information on in-plane (lateral) translation of the surface. However, because this technique (which has been around for almost as long as the laser) relies on imaging detectors, it is far slower than, for example, distance-measuring laser interferometry, which measures the longitudinal displacement of a mirror using a nonimaging photodetector (which can be a high-speed photodiode).
Now, Dutch researchers from the University of Twente (Enschede), the FOM Institute for Atomic and Molecular Physics (Amsterdam), and Philips Research Laboratories (Eindhoven) have created a nonimaging speckle interferometer that uses a photodiode to detect the lateral position of a scattering material at high speed over a small range to nanometer precision.1 The technique relies on a wavefront synthesizer to create a sharp focus from the scattered light reflected from the surface.
The synthesizer is a spatial light modulator (SLM) with 1024 × 768 pixels from Holoeye Photonics AG (Berlin-Adlershof, Germany) based on a reflective liquid-crystal-on-silicon (LCOS) microdisplay. The beam from a green (532 nm) continuous-wave laser passes through the wavefront synthesizer and is then transmitted through the diffuse surface; via a feedback algorithm using the output from a small photodiode, the laser beam—and thus the returning wavefront—is optimized by the SLM to produce a focus, resulting in a wavefront “fingerprint.” Using a second photodiode in a different position, another spatially separate substrate position is calibrated this way, producing a second wavefront fingerprint. (Other translational directions can be measured by creating additional fingerprints combined with additional detectors.)
If the sample is then placed in a position somewhere between the two calibrated positions, its position can be determined by illuminating the surface with a wavefront that is constructed by coherently summing complex amplitudes of the two wavefront fingerprints, measuring the intensities at the two detectors, and interpolating from the intensity difference. The sensitivity of the technique depends on the illumination optics, which determine the scale of the speckle pattern.
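As a toy illustration of that interpolation step, consider the following sketch. This is a simplified linear model, not the researchers' actual algorithm: the method name and the idealized assumption that the normalized intensity difference runs from -1 to +1 between the two calibrated positions are mine.

```java
// Simplified model of the two-fingerprint readout: the sample is calibrated at
// positions xA and xB; between them, the normalized difference of the two
// detector intensities is assumed to vary linearly with displacement.
public class SpeckleReadout {

    // Estimate the lateral position from the two detector intensities.
    public static double estimatePosition(double iA, double iB,
                                          double xA, double xB) {
        double contrast = (iB - iA) / (iA + iB); // -1 at xA, +1 at xB (idealized)
        double center = 0.5 * (xA + xB);
        double halfRange = 0.5 * (xB - xA);
        return center + contrast * halfRange;
    }

    public static void main(String[] args) {
        // Equal spot intensities put the sample midway between the
        // calibration points (x = 0 for the +/-126 nm fingerprints).
        System.out.println(estimatePosition(100, 100, -126, 126));
        // A brighter second spot shifts the estimate toward +126 nm.
        System.out.println(estimatePosition(80, 120, -126, 126));
    }
}
```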
A sample was constructed of zinc oxide powder on a glass cover slide. For experimental purposes, a CCD camera was placed at the far field of the transmitted and scattered output; however, the camera typically would be replaced by two photodiodes for high-bandwidth use.
For the particular experimental setup, the researchers created wavefront fingerprints for sample positions in the x-axis of -126 and +126 nm, producing a single sharp focus for each at different positions on the camera. Moving the sample to x = 0 nm produced two spots of similar intensity; a shift to +60 nm reduced the intensity of one spot and increased the intensity of the other without changing the positions of the spots. The two spot intensities were measured as a function of sample displacement (see figure).
Figure: Spot intensities at two photodetectors are measured as a function of lateral sample displacement along the x-axis. The solid curves depict the theoretical behavior (the difference between measurement and theory is probably due to a nonideal transfer function of the optics). Each detector count is equal to about 10 photoelectrons.
Near x = 0, the intensity difference between the two detectors is an approximately linear function of displacement. The experimental sensitivity at x = 0 was 0.66 counts/ms/nm. The noise level was measured to be 1.42 counts/ms, resulting in a displacement resolution of 2.1 nm (the noise divided by the sensitivity). Because part of the "noise" in the experiment could actually be fluctuations in sample position (which is signal, not noise), the true noise level may be lower.
The researchers note that their technique could be used for reflective samples, to detect displacement along more than one direction, and, if modified, to measure sample rotations.
1. E.G. van Putten et al., Opt. Lett. 37 (8), 1070 (Mar. 15, 2012).
Africa Can Adapt to Climate Change
A new study warns of the potential problems Africa faces from rising temperatures. The Nairobi-based International Livestock Research Institute (ILRI) says the continent must learn to adapt to shorter growing seasons. The report was released as the U.N. Climate Change Conference is held in Cancun, Mexico.
Most warnings about climate change are based on a possible rise in global temperatures by two degrees Celsius. But this report considers what might happen if temperatures increased by four degrees.
Source: VOA News | <urn:uuid:84e4c175-e7ea-4885-9994-5d1bf361cb42> | 2.96875 | 117 | Truncated | Science & Tech. | 34.410217 |
So, on the surface of Mars, inside Gale Crater on a plain called Aeolis Palus, our tenacious six-wheeled Mars Science Laboratory (MSL) is doing cutting-edge laboratory work on an alien world and mission scientists are itching to announce a "historic" discovery.
"This data is gonna be one for the history books. It's looking really good," John Grotzinger, lead scientist of the MSL mission, said in an interview with NPR.
But what is he referring to and why all the secrecy?
For the past few weeks, rover Curiosity has been busily scooping dirt from a sandy ridge in a geologically interesting location called "Rocknest." Using a little scooper attached to its instrument-laden robotic arm, Curiosity has been carefully digging, shaking and dumping the fine soil grains into its Sample Analysis at Mars (SAM) and Chemistry and Mineralogy (CheMin) instruments.
Recently, NASA announced some results from SAM after analyzing samples of Mars air. Interestingly, clues as to Martian atmospheric history were uncovered. Also, mission scientists announced an apparent dearth of methane in the air -- a result that undoubtedly frustrated many hoping for the detection of the gas that may, ultimately, reveal the presence of sub-surface microbial life. | <urn:uuid:6a00ad1c-4067-4b05-93f2-5068cd32f57c> | 2.9375 | 261 | Truncated | Science & Tech. | 32.422444 |
A Russian scientist is trying to convince people they can change the world simply by using their own energy. He claims that thinking in a certain way can have a positive or negative effect on the surrounding environment. "We are developing the idea that our consciousness is part of the material world and that with our consciousness we can directly influence our world," said Dr. Konstantin Korotkov, professor of physics at St. Petersburg State Technical University.
To bridge our understanding of the unseen world of energy, scientific experiments are being carried out using a technique called bioelectrophotography. The assumption is that we are constantly emitting energy. Bioelectrophotography aims to capture these energy fields seen as a light around the body — or what some people would call your aura. | <urn:uuid:2895f05f-914a-40dc-b260-9b75c5232b13> | 2.875 | 153 | Personal Blog | Science & Tech. | 37.918438 |
I use sound to study whales in the ocean. Understanding sound is absolutely essential to my research, and to understand sound, you've got to understand the wave equation.
We're not going to actually derive the wave equation, but it's important to know what goes into it. (if you want to see the real math, look here). In order to understand waves, you need to understand four other fundamental laws. The great thing about these laws is that they pretty much are telling you that waves obey the laws of physics. By the combined powers of these laws, Captain Wave equation emerges!
Conservation of Mass: This means that even though the number of molecules in any part of the volume that sound passes through may change, the total number of molecules in the volume stays the same.
In the sound wave, molecules get more compressed, but don't appear out of thin air.
Equation of Motion (Newton's Second Law): This means that we can calculate the force acting on particles by multiplying their density when a sound is passing through them by their acceleration.
Force = Acoustic Density * Acceleration
Equation of Force: With this, we can use the total density of the fluid (which is different from the acoustic density) and divide it by the x movement of the particles to find the force acting on the particles.
Force = Total Density differential * x differential
Equation of State: Acoustic Pressure = (1 / squishiness) * compression. This ties the pressure in the wave to how much the fluid is being squeezed; the "squishiness" here is the fluid's compressibility, so a stiffer (less squishy) fluid produces more pressure for the same squeeze.
When you put these four equations together, you get the wave equation, which uses all four to describe the movement of a wave. This equation factors in pressure, time, the speed of sound, and the movement of particles.
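For reference, combining those laws gives the standard one-dimensional acoustic wave equation (the post doesn't write it out, but this is what the four laws produce):

```latex
\frac{\partial^2 p}{\partial x^2} \;=\; \frac{1}{c^2}\,\frac{\partial^2 p}{\partial t^2}
```

where \(p\) is the acoustic pressure, \(x\) the position, \(t\) the time, and \(c\) the speed of sound in the medium.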
I warned you this was going to be dense. And that was only chapter 1.2.1.
Also, if I made a math mistake, and I messed up on understanding any of the equations, please comment! | <urn:uuid:81ea2935-49fa-4673-8f3c-0e762e1eef54> | 3.625 | 397 | Personal Blog | Science & Tech. | 52.101071 |
The most famous eclipsing binary star is visible to the naked eye and has a rich mythology associated with it.
Algol is called the Winking Demon Star because of its light variation and because Perseus is according to mythology holding the severed head of the Gorgon or demon.
The Algol System
A blue spectral class B8 star with a diameter of 3 solar diameters and a red-yellow spectral class K2 star of about 3.5 solar diameters are in very close orbit around each other (see the earlier discussion of spectral classes). The orientation of the orbits is such that a large percentage of each star is eclipsed during the primary and secondary eclipses. The blue star (because it is hotter) emits more light from each square centimeter of its surface than the yellow-red star, so the primary eclipses occur when the blue B8 star is occulted by the K2 star.
From the light curve, we see that the change in light output corresponds to a change of more than one astronomical magnitude (roughly a factor of three in brightness), easily visible to the naked eye.
In the lower right image we see that the two stars are so close together that tidal forces are distorting the shape of the K2 star, distorting it into a teardrop shape. In fact, there is evidence that some of the matter of the K2 star is being pulled onto the B8 star, as we shall discuss further later. The dotted line with the figure-8 shape in the lower right image corresponds to what are called the Roche lobes of the gravitational potential between the stars. We shall have more to say about that later.
The orange line between the two stars is a schematic indication that there appears to be matter streaming between the components of the binary. We shall return to this later when we discuss accreting binaries.
Here is a virtual reality simulation of the eclipses in the Algol system. | <urn:uuid:34945c72-c3d7-432f-9f39-1d7ed3942beb> | 3.578125 | 393 | Academic Writing | Science & Tech. | 48.433115 |
Rao, Madhusudana S (1997) Method for measurement of the angles of polygons. In: Optical Engineering, 36 (7). pp. 2062-2067.
A simple technique is devised to measure the angles of a pentagon and a hexagon without using expensive spectrometers, autocollimators, and angle gauges. A general technique for measuring the angles of polygons with sides ranging from 5 to 12 is proposed. The technique can be used for carrying out measurements with an autocollimator when a suitable angle-gauge combination is not available. The method can also be used for both glass and hardened steel polygons with moderate polish.
Item Type: Journal Article
Additional Information: Copyright of this article belongs to the Society of Photo-Optical Instrumentation Engineers.
Department/Centre: Division of Physical & Mathematical Sciences > Instrumentation and Applied Physics (Formerly ISU)
Date Deposited: 17 Jul 2007
Last Modified: 19 Sep 2010 04:36
The photic zone is the uppermost layer of the ocean, the part exposed to sufficient sunlight for photosynthesis to occur. The depth of the photic zone can be greatly affected by seasonal turbidity. Since the photic zone is the only zone of water where primary productivity occurs (an exception being the productivity connected with abyssal hydrothermal vents along mid-oceanic ridges), its depth is generally proportional to the level of primary productivity in that area of the ocean.
The aphotic zone is that portion of the ocean that is exposed to no direct sunlight.
The transparency of the water, which helps determine the depth of the photic zone, can be measured simply with a Secchi disk.
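A rough sketch of how a Secchi reading translates into a photic-zone depth. Note the assumptions: the Beer-Lambert decay of light with depth, the Poole-Atkins rule of thumb k ≈ 1.7 / Secchi depth, and the common 1%-of-surface-light cutoff are standard oceanographic approximations, not figures from this article.

```java
// Estimate the photic (euphotic) zone depth from a Secchi-disk reading.
// Assumptions: light decays as I(z) = I0 * exp(-k * z); the diffuse
// attenuation coefficient k is roughly 1.7 / Secchi depth; the photic
// zone bottoms out where about 1% of surface light remains.
public class PhoticZone {

    public static double attenuationCoefficient(double secchiDepthMeters) {
        return 1.7 / secchiDepthMeters; // per meter (Poole-Atkins approximation)
    }

    public static double photicDepthMeters(double secchiDepthMeters) {
        double k = attenuationCoefficient(secchiDepthMeters);
        return Math.log(100.0) / k;     // depth where I/I0 = 0.01
    }

    public static void main(String[] args) {
        // A clear-water Secchi depth of 10 m implies a photic zone of ~27 m;
        // turbid water with a 2 m reading shrinks it to ~5.4 m.
        System.out.printf("clear:  %.1f m%n", photicDepthMeters(10.0));
        System.out.printf("turbid: %.1f m%n", photicDepthMeters(2.0));
    }
}
```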
See also: Pelagic zone | <urn:uuid:82bed542-6e12-42ef-a9ae-3483c38308e8> | 3.734375 | 145 | Knowledge Article | Science & Tech. | 21.300714 |
From Math Images
Dragons 1
Created By: Jos Leys
- A tessellation created in the style of M.C. Escher.
Basic Description: This dragon tessellation was designed to emulate the style of M.C. Escher, who was famous for his lithographs depicting tessellations and endless loops. Tessellations are images that repeat and seamlessly mesh with one another. Each repeated figure alternates color, creating a beautiful and potentially endless work of art.
This image is an irregular tessellation, because its repeating shape (the dragon) is not a regular polygon.
For a more in-depth discussion, please see Tessellations.
About the Creator of this Image
Jos Leys creates images from mathematics using programs such as Ultrafractal and Povray. He has also created a two-hour mathematical animation film called Dimensions, in collaboration with the French mathematicians E. Ghys and A. Alvarez.
Mike Barnett, K. Rustan M. Leino, and Wolfram Schulte
The Spec# programming system is a new attempt at a more cost effective way to develop and maintain high-quality software. This paper describes the goals and architecture of the Spec# programming system, consisting of the object-oriented Spec# programming language, the Spec# compiler, and the Boogie static program verifier. The language includes constructs for writing specifications that capture programmer intentions about how methods and data are to be used, the compiler emits run-time checks to enforce these specifications, and the verifier can check the consistency between a program and its specifications.
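To make the abstract concrete, here is the flavor of those specification constructs, in Spec#-style syntax. This is a sketch reconstructed from the published language description, not code taken from the paper itself, so details of the grammar may differ:

```
class Account {
    int balance;
    invariant balance >= 0;   // object invariant

    public void Deposit(int amount)
        requires amount > 0;                       // precondition
        ensures balance == old(balance) + amount;  // postcondition
    {
        balance += amount;
    }
}
```

The compiler turns the `requires`/`ensures` clauses into run-time checks, and the Boogie verifier attempts to prove statically that no check can ever fail.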
In CASSIS 2004, Construction and Analysis of Safe, Secure and Interoperable Smart devices
Series: Lecture Notes in Computer Science
The barycenter (or barycentre; from the Greek βαρύκεντρον) is the point between two objects where they balance each other. For example, it is the center of mass where two or more planets orbit each other. When a moon orbits a planet, or a planet orbits a star, both bodies are actually orbiting around a point that lies outside the center of the larger body. For example, the moon does not orbit the exact center of the Earth. It actually orbits a point on a line between the center of the Earth and the Moon. This is about 1,710 km below the surface of the Earth. This is the point where the mass of the moon and the mass of the Earth balance. This is the point about which the Earth and Moon orbit as they travel around the Sun. | <urn:uuid:952d50a3-3413-4eb5-8a76-7272b16469da> | 3.75 | 172 | Knowledge Article | Science & Tech. | 71.841071 |
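That 1,710 km figure falls out of the balance-point formula r = d * m2 / (m1 + m2), the distance of the barycenter from the larger body's center. A quick Java check (the mass and distance constants below are standard round values, not from this article):

```java
// Barycenter distance from the primary's center: r = d * m2 / (m1 + m2).
public class Barycenter {

    public static double distanceFromPrimary(double m1, double m2, double separation) {
        return separation * m2 / (m1 + m2);
    }

    public static void main(String[] args) {
        double earthMass   = 5.972e24;  // kg
        double moonMass    = 7.342e22;  // kg
        double separation  = 384_400.0; // km, mean Earth-Moon distance
        double earthRadius = 6_371.0;   // km

        double r = distanceFromPrimary(earthMass, moonMass, separation);
        // ~4,670 km from Earth's center, i.e. ~1,700 km below the surface,
        // in line with the figure quoted above.
        System.out.printf("%.0f km from center, %.0f km below surface%n",
                          r, earthRadius - r);
    }
}
```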
Earth's Climate Follows The Sun's UV Groove
That large changes in solar radiation can affect Earth's climate is widely accepted. However, the hypothesis of solar-induced centennial to decadal climate changes, which suggests feedback mechanisms in the climate system amplifying even small solar variations, has not found acceptance among orthodox climate scientists. The climate change clique would rather place their money on greenhouse gases—human-generated CO2 in particular. It is true that satellite-based measurements of total solar irradiance show that mean variations during solar cycles do not exceed 0.2 W m⁻² (~0.1% of the Sun's energy output). It has also been noted that relatively large variations of 5–8% in the ultraviolet (UV) frequencies can occur, though how this could change global climate remained a puzzlement—but perhaps no longer. From studying a significant climate shift 2,800 years ago, a group of scientists has concluded that large changes in solar UV radiation can, indeed, affect climate by inducing atmospheric changes.
Ask a rational and scientifically literate person what might be the primary cause of climate change and they would be well justified in pointing out the large bright object that passes overhead daily. Humanity noticed that warmth came from the Sun long before it started keeping written records. A number of primitive cultures even worshiped the Sun as a deity. Fittingly, the Sun's possible influence on climate has not been ignored (see “Atmospheric Solar Heat Amplifier Discovered”). Climate scientists, however, have been loath to grant the local star primacy of place, at least when it comes to relatively short term climate variation.
It has been suggested by several scientists that centennial-scale climate variability during the Holocene epoch has been controlled by the Sun. While this sounds reasonable the problem has always been that the amplitude of solar forcing is small when compared with the climatic effects. Satellite measurements taken at the top of Earth's atmosphere indicate that observed solar fluctuations amount to less than 1/10th of a percent of total irradiance, though the data are limited and not reliable beyond the past 30 years or so. Without more extensive and reliable data, it is unclear which feedback mechanisms could have amplified the influence. That situation may have recently changed.
As reported in Nature Geoscience, in “Regional atmospheric circulation shifts induced by a grand solar minimum,” Celia Martin-Puertas et al. took a meticulous look at annual sediment deposits in a German lake spanning 3,300 to 2,000 years ago. They analyzed the sediment layers, called varves, carefully measuring proxies for solar irradiance. This is what they found, and their major conclusion:
Here we analyse annually laminated sediments of Lake Meerfelder Maar, Germany, to derive variations in wind strength and the rate of 10Be accumulation, a proxy for solar activity, from 3,300 to 2,000 years before present. We find a sharp increase in windiness and cosmogenic 10Be deposition 2,759 ± 39 varve years before present and a reduction in both entities 199 ± 9 annual layers later. We infer that the atmospheric circulation reacted abruptly and in phase with the solar minimum. A shift in atmospheric circulation in response to changes in solar activity is broadly consistent with atmospheric circulation patterns in long-term climate model simulations, and in reanalysis data that assimilate observations from recent solar minima into a climate model. We conclude that changes in atmospheric circulation amplified the solar signal and caused abrupt climate change about 2,800 years ago, coincident with a grand solar minimum.
The methodology employed a number of proxy sources aside from counting layers of sediment. 10Be is a so-called cosmogenic radionuclide, an isotope of beryllium whose abundance is regulated by incoming cosmic radiation. Since the level of solar activity regulates the amount of incoming cosmic radiation, 10Be can be used as a gauge of the Sun's activity in times gone by.
“A less active Sun implies high cosmogenic radionuclide production rates in the atmosphere related to weaker shielding against galactic cosmic ray fluxes,” the authors state, “thus, the steep increase in 14C content of the atmosphere from 2,800–2,650 cal yr BP and the rise in the cosmogenic radionuclide 10Be flux archived in Greenland ice cores both point to a long-term (centennial) solar minimum from about 2,750–2,550 cal yr BP known as the Homeric minimum.”
What they were looking for were indications of “top-down” mechanisms that could translate long-term solar fluctuations into changes in climate. To accomplish this required very accurate dating, along with high-resolution proxy readings for temperature and precipitation. Then they added in data for other climate parameters such as wind strength. An illustration of some of the measurements used in the study is shown below:
Lake Meerfelder Maar proxy data.
The methodology used here is not new or groundbreaking, but the conclusion the investigators reached is: a link between UV variability and atmospheric conditions. The report notes that shifts in the 200–300 nm (i.e., UV) part of the solar emission spectrum can have significant effects on heating and ozone chemistry in the middle atmosphere. This can induce indirect dynamical effects in the atmosphere down to the Earth's surface that could affect climate. Specifically:
This acts through disturbances of the stratospheric polar vortex that propagate by means of wave–mean flow interactions downwards, influence the tropospheric jet streams, which are connected to the Arctic Oscillation/North Atlantic Oscillation at Northern Hemisphere mid-latitudes and affect European winter variability. Other mechanisms for solar influence on climate concern the role of energetic particles from the Sun or galactic cosmic rays, but they are energetically smaller and their climate impact is much less understood than the mechanisms connected to electromagnetic radiation.
Anyone in the US who experienced this past winter's mild temperatures and is now suffering through the Midwest's sweltering summer can attest to the influence of upper-level air currents like the jet stream. The same can be said for the record cold in some parts of eastern Europe, a weather pattern blamed on the Arctic Oscillation and North Atlantic Oscillation being stuck in their plus phases. This winter’s AO/NAO pattern stands in stark contrast to what occurred the previous two winters, when we had the most extreme December jet stream patterns on record, caused by a strongly negative AO/NAO. The negative AO conditions suppressed westerly winds over the North Atlantic, allowing Arctic air to spill southwards into eastern North America and Western Europe, bringing unusually cold and snowy conditions.
“Climate models are generally too crude to make skillful predictions on how human-caused climate change may be affecting the AO, or what might happen to the AO in the future,” states Dr. Jeff Master on his WunderBlog. “There is research linking an increase in solar activity and sunspots with the positive phase of the AO. Solar activity has increased sharply this winter compared to the past two winters, so perhaps we have seen a strong solar influence on the winter AO the past three winters.”
Even NASA—at least the part not filled with climate change doomsayers like James Hansen—recognizes that UV radiation can have a big impact on the upper atmosphere. The National Center for Atmospheric Research (NCAR) in Boulder, Colorado, put out a NASA funded report that implicated solar UV radiation in expanding and contracting the upper atmosphere, possibly hastening the decay of satellite orbits.
As background, Martin-Puertas et al. cite an earlier paper by Sarah Ineson et al., “Solar forcing of winter climate variability in the Northern Hemisphere,” also published in Nature Geoscience. In that paper it was proposed that solar UV variation contributes a substantial fraction of typical year-to-year variations in near-surface circulation, with shifts of up to 50% of the interannual variability. That report concluded:
Our result has important implications for regional climate prediction in the northern extratropics. Fluctuations in the NAO often dominate the seasonal and decadal winter climate but its predictability on seasonal and decadal timescales is low. If the recent satellite data are typical of the variation in ultraviolet fluxes in other solar cycles then our results suggest shifts in the NAO of a sizeable fraction of the interannual variability. Given the quasiregularity of the 11-year solar cycle, our results therefore suggest significant decadal predictability in the NAO.
Here, then, is recent research by two independent groups of investigators suggesting plausible mechanisms by which variation in the Sun's ultraviolet radiation can affect climate. This should surprise no one who is aware of the Maunder Minimum and the corresponding Little Ice Age—a time of advancing mountain glaciers in the Alps, failed crops and cold weather around the world. It is generally agreed that there were three temperature minima, occurring around 1650, 1770, and 1850, each separated from the next by a slight warming interval. These periods coincide closely with times of solar inactivity, with some of the worst weather occurring squarely during the Maunder Minimum. Of course, this blog has reported on such findings before, but as usual, the warm-mongering climate alarmists have ignored these data.
A NASA video of the Sun in UV—this is what changes Earth's climate.
Science has linked UV radiation to both decadal and century long timescales, yet the climate science establishment continues to pursue GHG emissions as the primary cause for recent climate change. It should be noted that not knowing the precise mechanisms by which GHG emissions amplify the Sun's power to cause global warming has not silenced the climate change alarmists. Lack of linkage has in no way diminished the shrillness or fervor with which they trumpet their unsubstantiated claims. We now know that the Sun calls the tune for earthly climate change, so the eco-extremist crowd can no longer blame global warming primarily on humans.
Let me emphasize the research paper's central conclusion: “changes in atmospheric circulation amplified the solar signal and caused abrupt climate change.” In other words, small changes in the Sun's output can and have driven rapid climate change in the recent past. Too bad for the warmists, because science has shown that Earth's climate does groove to the Sun's UV tune. No CO2 emissions need apply.
Be safe, enjoy the interglacial and stay skeptical. | <urn:uuid:0b84f591-3e47-4b2a-a1c1-4523b0379c43> | 4.0625 | 2,174 | Personal Blog | Science & Tech. | 30.455032 |
(a) Sketch the phase diagram for O2, showing the four points given above and indicating the area in which each phase is stable.
Try sketching the phase diagram yourself; check your result here
(b) Will O2(s) float on O2(l)? Explain.
From the phase diagram in part (a), the solid-liquid line is normal (slopes to the right). Thus, O2(s) is more dense than O2(l), and as a result will not float.
(c)As it is heated, will solid O2 sublime or melt under a pressure of 1 atm?
If the solid is at constant pressure of 1 atm (760 torr), it will melt when heated.
Doug Chapman firstname.lastname@example.org 7/1/08
Approximately 30 kilometers around the Chernobyl plant is the designated Exclusion Zone, where the majority of the radioactive materials and equipment used in the containment of the disaster were abandoned. The Ukrainian Ministry of Emergencies governs the zone - however, the zone crosses Ukraine's border with Belarus, and a similar Belarusian agency governs the zone on its own soil. According to the International Atomic Energy Agency, 187 former settlements exist within the zone, ranging in size from villages to the city of Pripyat, which most plant workers called home. While nobody aside from those responsible for maintaining the zone and the decommissioned Chernobyl plant is allowed to enter the exclusion zone, many former residents have moved back into the area to be close to their ancestral homes or family cemeteries, or simply because they don't want to give up their homes. The government has begrudgingly allowed former residents to move back, and they live mostly unharassed. Some radiologists also reside within the zone, studying the long-term effects of radiation on the local flora and fauna.
Plants and Wildlife
Plants and wildlife within the zone seem to be flourishing, but not without difficulty. The reduced human impact on the zone has allowed the dense local forest to encroach into the villages and towns, and animals now roam close to former human settlements. However, a 2007 study conducted by Anders Moller and Tim Mousseau suggests that while life is returning to the exclusion zone, it is not unharmed by the lasting effects of radiation on the area. Moller and Mousseau's findings indicate that as one gets closer to Chernobyl, both the number of species and the population density of those species decline by nearly 66%. They also found that some species in the zone have a higher rate of genetic abnormalities than in areas with normal background radiation. James Morris, a biologist at USC, observed that some trees are also suffering from abnormalities; he said in an interview with National Geographic Magazine that the radiation has affected the ability of some trees to know which way to grow. "One of the great ironies of this particular tragedy is that many animals are doing considerably better than when humans were there," Mousseau said.
- ↑ http://www.outsideonline.com/outdoor-adventure/science/Chernobyl--My-Primeval--Teeming--Irradiated-Eden.html?page=all
- ↑ http://en.wikipedia.org/wiki/Zone_of_alienation
- ↑ http://www.iaea.org/newscenter/features/chernobyl-15/cherno-faq.shtml
- ↑ http://news.bbc.co.uk/2/hi/science/nature/6946210.stm
- ↑ http://news.nationalgeographic.com/news/2006/04/0426_060426_chernobyl_2.html
The right way to do binary file I/O
1) Define your building blocks
Binary files are, at their core, nothing more than a series of bytes. This means that anything larger than a byte (read: nearly everything) needs to be defined in terms of bytes. For most basic types this is simple.
C++ offers a few integral types that are commonly used: char, short, int, and long.
The problem with these types is that their size is not well defined. int might be 8 bytes on one machine, but only 4 bytes on another. The only one that's consistent is char... which is guaranteed to always be 1 byte.
For your files, you'll need to define your own integral types.
Here are some basics:
u8 = unsigned 8-bit (1 byte) (ie: unsigned char)
u16 = unsigned 16-bit (2 bytes) (ie: unsigned short -- usually)
u32 = unsigned 32-bit (4 bytes) (ie: unsigned int -- usually)
s8, s16, s32 = signed version of the above
u8 and s8 are both 1 byte, so they don't really need to be defined. They can just be stored "as is". But for larger types you need to pick an endianness.
Let's go with little endian for this example, which means a 2-byte variable (u16) is going to be stored low byte first, and high byte second. So (to pick an example value) the u16 value 0x1234 will be seen in the file as 34 12 when the file is examined in a hex editor.
An example way to safely read/write u16's with iostream:

u16 ReadU16(istream& file)
{
    u8 bytes[2];
    file.read( (char*)bytes, 2 );          // read 2 bytes from the file
    u16 val = bytes[0] | (bytes[1] << 8);  // construct the 16-bit value from those bytes
    return val;
}

void WriteU16(ostream& file, u16 val)
{
    u8 bytes[2];

    // extract the individual bytes from our value
    bytes[0] = val & 0xFF;         // low byte
    bytes[1] = (val >> 8) & 0xFF;  // high byte

    // write those bytes to the file
    file.write( (char*)bytes, 2 );
}
u32 would be the same way, but you would break it down and reconstruct it in 4 bytes rather than 2.
2) Define your complex types
Strings are the main one here, so that's what I'll go over.
There are a few ways to store strings.
1) You can say they are fixed width. IE: your strings will be stored with a width of 128 bytes. If the actual string is shorter, the file will be padded. If the actual string is longer, the data written to the file will be truncated (lost).
- advantages: easiest to implement
- cons: inefficient use of file space if you have lots of small strings, strings have a restrictive maximum length.
2) You can use the c-string 'null terminator' to mark the end of the string
- advantages: strings of any length.
- disadvantages: cannot have null characters embedded in your strings. If your strings contain a null character when written, it will cause the file to be loaded incorrectly. Probably the most difficult to implement
3) You can write a u32 specifying the length of the string, then write the string data after it.
- advantages: strings of any length, can contain any characters (even nulls).
- disadvantages: 4 extra bytes for each string makes it ever so slightly less space efficient than approach #2 (but not really).
I tend to prefer option #3. Here's an example of how to reliably read/write strings to a binary file:
string ReadString(istream& file)
{
    u32 len = ReadU32(file);
    char* buffer = new char[len];
    file.read( buffer, len );        // read the string data itself
    string str( buffer, len );
    delete[] buffer;
    return str;
}

void WriteString(ostream& file, const string& str)
{
    u32 len = str.length();
    WriteU32(file, len);             // write the length first...
    file.write( str.c_str(), len );  // ...then the string data
}
vectors/lists/etc could be handled same way. You start by writing the size as a u32, then you read/write that many individual elements to the file.
3) Define your file format
This is the meat. Now that you have your terms defined, you can construct how you want your file to look. I break out a text editor and outline it on a page that looks something like this:
char header[4] "MyFi" - identifies this file as my kind of file
u32 version 1 for this version of the spec
u32 foo some data
string bar some more data
vector<u16> baz some more data
This outlines how the file will look/behave. Say for example you look at this file in a hex editor and you see this:
4D 79 46 69 01 00 00 00 06 94 00 00 03 00 00 00
4D 6F 6F 02 00 00 00 EF BE 0D F0
Since the file format is so clearly defined, just examining this file will tell you exactly what the file contains.
First 4 bytes:
4D 79 46 69
- these are the ascii codes for the string "MyFi", which identifies this file as our kind of file (as opposed to a wav or mp3 file or something, which would have a different header)
Next 4 bytes:
01 00 00 00
- the literal value of 1, indicating this file is 'version 1'. Should you decide to revise this file format later, you can use this version number to support reading of older files.
Next 4 bytes are for our 'foo' data:
06 94 00 00
means that foo==0x9406
After that is a string ('bar'). The string starts with 4 bytes to indicate the length:
03 00 00 00
indicating a length of 3. So the next 3 bytes
4D 6F 6F
form the ascii data for the string (in this case: "Moo")
After that is our vector ('baz'). Same idea... start with 4 bytes to indicate length:
02 00 00 00
, indicating a length of 2
Then there are 2 u16's in the file. The first one is EF BE (0xBEEF), and the second one is 0D F0 (0xF00D).
You'll find that all common binary file formats like .zip, .rar, .mp3, .wav, .bmp, etc, etc are defined this way. It leaves absolutely nothing to chance.
Credits to Disch, who wrote all this, and I just copied it in here because:
(Disch wrote this in the post after the one with the above tutorial)
I really should just make these articles instead of forum posts. Gargle. Anyone want to transcribe this to an article for me? I'm too lazy to do it now.
Well Disch, I transcribed this to an article for you! Hope everyone likes it!
Learn about the root of all namespaces—System. Excerpt: The class Object is the root of the inheritance hierarchy in the .NET Framework. Every class in the .NET Framework ultimately derives from this class. If you define a class without specifying any other inheritance, Object is the implied base class. It provides the most basic methods and properties that all objects need to support, such as returning an identifying string, returning a Type object (think of it as a class descriptor) to use for runtime discovery of the object's contents, and providing a location for a garbage collection finalizer.
Can Nanotechnology Fix Global Warming?
Nanotechnology principles and materials are used in a number of scientific disciplines. Scientists in a field called geoengineering are investigating ways to counter the global warming attributed to high levels of carbon dioxide in our atmosphere.
Volcanoes have provided these scientists with one way to cool the earth. When a volcano erupts, it sends clouds of particles and gases into the atmosphere. These clouds contain sulfur dioxide, which can rise as high as the stratosphere. At that height, the sulfur dioxide combines with water vapor and produces sulfuric acid aerosols that reflect the sun’s energy, reducing the heat that gets through to our atmosphere.
When the sulfuric acid returns to earth in rain, the side effect is acid rain. There’s always something.
The resulting lowering of the atmosphere’s temperature can seem small, but it can be significant in terms of its effect on our environment. For example, the U.S. Geological Survey estimates that an eruption of Mount Pinatubo in the Philippines in 1991 sent about 20 million metric tons of sulfur dioxide into the atmosphere and caused about half a degree centigrade (about 1 degree Fahrenheit) cooling in the northern hemisphere.
Here’s where nanotechnology comes in. A researcher at the University of Calgary has designed particles composed of different nanofilms that could be released into the atmosphere to cool the earth without some of the negative effects caused by volcanoes.
The top layer of a nanofilm protects the middle layer from oxidizing; the middle layer reflects light; and the bottom layer interacts with the atmosphere’s electric field to orient the disk-shaped particle horizontally for optimum reflection. That reflection cuts down the amount of sunlight that reaches our atmosphere and helps cool our planet slightly to compensate for global warming.
Netgroups are a SunOS invention. A netgroup database is a list of string triples,

(hostname, username, domainname)

or other netgroup names. Any of the elements in a triple can be empty, which means that anything matches. The functions described here allow access to the netgroup databases. The file /etc/nsswitch.conf defines what database is searched.

The setnetgrent() call defines the netgroup that will be searched by subsequent getnetgrent() calls. The getnetgrent() function retrieves the next netgroup entry, and returns pointers to its host, user, and domain components. A NULL pointer means that the corresponding entry matches any string. The pointers are valid only as long as there is no call to other netgroup-related functions. To avoid this problem you can use the GNU function getnetgrent_r() that stores the strings in the supplied buffer. To free all allocated buffers use endnetgrent().

In most cases you only want to check if the triplet (hostname, username, domainname) is a member of a netgroup. The innetgr() function can be used for this without calling the above three functions. Again, a NULL pointer is a wildcard and matches any string. The function is thread-safe.

These functions return 1 on success and 0 for failure.

These functions are not in POSIX.1-2001, but setnetgrent(), getnetgrent(), endnetgrent(), and innetgr() are available on most Unix systems. getnetgrent_r() is not widely available on other systems. In the BSD implementation, setnetgrent() returns void.
March 17, 2010
A Matter of State
Marsh or mudflat? Clear water and underwater grasses or brown turbid water and blooms of harmful algae? Comb jellies or stinging sea nettles? Each ecological “state” has persisted in the Chesapeake Bay to varying degrees, in varying locations, at various points in history.
As humans, we have clear preferences for one state over another. The activities that we associate with the Bay –– fishing, boating, swimming –– all rely on our favorite spot being in a particular state at a particular time. Our efforts to restore the Bay hinge on a human desire to shape the state of the Bay’s ecology.
But ecological systems have a momentum of their own.
Picture a roller coaster. Once the cars crest the top of a hill, there’s little that can stop a rapid descent to the bottom. Without an engine, the cars will rock up and down the inclines, ultimately settling in a trough. Climbing the hill to recovery takes momentum in one direction –– a concerted push, sustained over time, until feedbacks kick in like an engine to give the uphill climb a boost.
Restoration is our human attempt to push an ecosystem into a preferred state of health. But this is no easy task. At Kingman Marsh in the Anacostia River, diverse groups, including scientists, government, and non-governmental organizations, have come together to return a freshwater marsh to a part of the river that had “flipped” into a mudflat state after decades of environmental insult (see Marsh in the City). This is a massive undertaking, a huge push more than a decade in the making. It required substantial dredging, followed by the planting of some 700,000 new plants –– at a cost of $6 million.
But in the case of Kingman Marsh, the uphill push of restoration efforts has faced a strong downhill counterforce. A hungry flock of resident Canada geese seems determined to eat every last palatable shoot from the marsh, pushing the area toward a persistent state of mudflat. Because of this, the fate of the whole restoration effort hangs in the balance.
In other parts of the Bay, the return to a desirable state has come more decisively. Take Gunston Cove for example. In this tidal embayment of the Potomac River, once dominated by noxious algal blooms, clear waters have returned. It took a while. For decades, the Blue Plains Sewage Treatment Plant had discharged large amounts of phosphorus into the river, causing eutrophication — the result of too many nutrients. But even after treatment upgrades reduced the discharge of phosphorus, the waters saw little change. Not until some two decades later, did Gunston Cove suddenly experience a rapid improvement –– presumably reflecting the time that phosphorus “lagged” behind in the system.
Late last fall, as I struggled to trudge, with borrowed hip waders, through the thick ooze of Kingman Marsh, I marveled at the uphill struggle underway to restore marshland to a small part of this ruined river. I was doing my best to keep pace with U.S. Geological Survey biologist Cairn Krafft as she surveyed the extensive damage done to the marsh by Canada geese. The devastation was pretty incredible. In the battle between the states of marsh and mudflat, the geese seem determined to make mud prevail.
How can we use ideas about ecological states to inform restoration efforts? How can we encourage ecological systems to work for us, not to fight our best intentions?
Here’s where it seems to me that lessons from history can provide valuable insights. By looking to the past, we can see how and where ecological states have flipped before, and where they would be likely to flip in the future. Such an effort would require the diverse expertise of different disciplines. We need scientists analyzing long-term data sets for evidence of changes in state –– data reflecting trends in fisheries, water quality, and more. We need modelers and statisticians to work on understanding transitions from one state to another, where thresholds for recovery might help set a regime change in motion. And we need synthetic thinkers to help translate from the academic realm to decisions on the ground.
It’s a steep hill to climb, but the potential gains for restoring the Bay are worth it. Aren’t they?
HAVE the first stars finally revealed themselves? Astronomers in the US are claiming they may have found clusters that contain the first generation of stars to have congealed out of the cooling debris of the big bang. "If we're right, it's an awesome discovery," says Chris Churchill of Pennsylvania State University in University Park.
After the blinding explosion of the big bang, the Universe was completely dark for a billion years. Then lights began to switch on. These first stars would have been the building blocks of today's galaxies, but no one knows what they looked like or how they formed.
Churchill and his colleagues Jane Rigby of the University of Arizona in Tucson and Jane Charlton of Penn State were measuring radiation from distant quasars, noticing which wavelengths of light had been absorbed along the way to Earth, when they found clouds of hydrogen, magnesium and iron floating in ...
ENVIRONMENTALISTS have long argued that we must preserve natural habitats, but they have done so on mainly aesthetic and moral grounds. Now though, there is growing evidence for a direct link between biodiversity and human health.
Last week, researchers met in Ireland for the first international conference on the subject. The emergence of HIV, SARS and bird flu have highlighted the ability of viruses to jump the species barrier when we come into close contact with their animal hosts. But as the global trade in wildlife grows and humans encroach on more virgin habitats, the chances of encountering new pathogens will increase, the Conference of Health and Biodiversity (COHAB) in Galway was told.
That could deliver a blow to the UN's Millennium Development Goals, which aim to significantly reduce child poverty, improve maternal health and combat diseases such as HIV and malaria by 2015. World leaders will discuss the MDGs ...
Name: Graham H.
Yesterday I was asked by a student, aged 14, a question that was, in essence, along these lines: sex is determined by the presence of sex chromosomes; although the determining factor varies (i.e., the X,Y system in mammals differs from that in birds), there is a factor which nevertheless determines whether an individual is male or female. So what happens:
(i) in hermaphroditic animals;
(ii) and how can temperature alter genetic instructions in certain reptiles during development in the egg?
You have asked very involved questions which are answerable, but maybe not at a level a 14-year-old can understand. Basically (as basic as I can get to reach a 14-year-old's understanding), I am taking most of the below from a number of recent Scientific American articles which may be available in your school's library: their special issue on Males and Females, and the article on the evolution of the Y chromosome. My discussion below would be heavily criticized for leaving information out if put to task by experts.
1) In hermaphroditic animals? Look up the earthworm exchange of sperm. The chromosomes related to sex are really no different than in mammals. These animals connect to ensure the proper exchange of sperm, and the sperm are stored in specialized receptacles. I'm not sure why you have included these organisms in your question. In simpler animals such as Hydra sp. and sponges, it is all a matter of timing: sperm are not released at the same time that the same organism's eggs are released. Maybe I can relate it to what you are seeking below. Simple animals have simple processes. No advanced organism uses hermaphroditic methods, except in genetic abnormalities or syndromes, which are mostly sterile.
2) Reptiles (and all classes other than Aves and Mammalia) are heterothermic; mammals and birds are homothermic -> this is the key to the differences:
a) Enzymes operate at specific temperatures including the ones that
determine sex. This explains clearly why sex differences such as found with
alligators take place in these heterothermic animals. Females have been
pretty much found to be the default (if you will) of most species. Females
are far more important to the survival of a species than the males are.
Humans included! Males develop due to a series of chemical changes that
once begun, result in the chemical sequences that result in the male
structures. Once this series of reactions start, there is no going back.
In reptiles, the enzymes that will bring about the initial reactions to
begin the male development will only operate if at the right timing, the
temperature is conducive to allow the enzyme to operate that begins the male
development. No correct temperature, the enzyme(s) responsible for male
determinants fail to start and you go to the default sex; a female. Mammals
and birds are homothermic, so this will never happen for all the enzymes
have evolved to operate at a specific temperature; in humans about 37C. Sex
has to come from specialized chromosomes instead. Even so, we find that
there are human females with XY chromosomes (very rare) that the male
"startup" gene (called the sry gene) apparently did not work. Default is
female. This is a whole other area!
b) The problem with the 14-year-old's understanding is that the human XX, XY process most discussed in middle schools makes it difficult to talk about most other animals outside of mammals, because many organisms work differently, such as discussed above, or at least appear to. The evolutionary process leading to the mammals required more detailed and precise instructions because of the complexity of the group and, probably, because mammals carry the greatest amount of genetic code in introns (useless genes; 164 of these have been identified in human chromosome 20 alone) in their genome; development could get confused if a very ordered arrangement of processes is not implemented.
It is clear that reptiles actually developed the Y chromosome, and it is clear that it arose from the X chromosome due to a series of mutations.
However, many reptiles and birds have different sex determinants that
actually do the very same thing as we observe in mammals. Regardless, of
the labels we give the chromosomes that determine sex in any animal, there
is one combination or "lack" of a combination that produces the chemistry
resulting in female reproductive structures and different chromosome
combination to allow for the chemistry resulting in a male. Some reptiles
have evolved the temperature method of determining the sex and the sex
chromosomes are not as important; well for the sex determination anyway.
There are always exceptions. Some reptiles are entirely female; no males
needed! Plays badly with genetic variation, but works OK in some species.
Some fish change sex with age; being female takes more energy and works best when you are young, so when they get old, they become males. Male operation is cheaper in energy and is more suitable for old age! Some species
do not use sex unless there is a stress factor in their habitat; why change
your genetic instructions if the individual is well suited to its
environment. I'm walking into ecology now and it is time to stop.
I apologize for I do not feel I have adequately answered your question. I
have left out details that are important. Entire courses in evolution
address your request and it is not easy to explain all the relationships
with out great details.
I did not have an answer to your question, so I asked a friend, Chris
Borland, who did her graduate work on a hermaphrodite nematode (worm)
called C. elegans. Here is what she had to say:
"Well, in my favorite hermaphrodite species, C. elegans, there are
actually two genders hermaphrodites and males. The hermaphrodites are
“XX”, produce both eggs and sperm (each has an X), and self-fertilize
internally (the eggs are already fertilized when they are laid). The
males have only one X chromosome we call this “XO”, and are produced by
nondisjunction. Males arise at a pretty low rate normally (about
1/1000). They can mate with the hermaphrodites, and when this happens,
the sperm from the male are used preferentially (more cross-progeny than
self-progeny) and the next generation is 50% male (only half of the male
sperm have an X)."
Chris Borland by way of Dr. Ticknor
I see this as a question which needs answering on two fronts: biological and philosophical.
Biologically, humans all begin as females, and early in gestation the Y-chromosome gene called SRY is turned on, which initiates a cascade of events that leads to male phenotypic (outward) characteristics. If this gene is
inactive for the period it needs to be (a matter of days) the fetus will
usually develop the outward signs of a female but the inner anatomy will
show that the gonads have no clear development to true female or male
forms...ovaries or testes. The uterus is typically blind. After birth,
many times, this is not detected and the child is raised as a female and
looks and acts as such. When the time comes for puberty...no menstruation
occurs which usually will lead to a visit to the physician's office and
hopefully the proper guidance. In many cases the girl develops into a woman
with even fewer pubertal problems (no skin blemishes etc.) and enters
womanhood with the only detriment being she will not be able to bear children.
Philosophically it, in part, concerns what constitutes a person as an
individual and also what imparts their identity... If all the outward appearances and behaviors of a person constituted those which were found in females, does the presence of a "y" sex chromosome in some way overrule their importance? If the y chromosome is non-functional, what then? There was an
interesting case of an Olympic female star who was stripped of her gold
medal(s?) after it was found she had a y chromosome...I believe in her case
she was "androgen insensitive" and the "sry" gene never made a difference.
Also she did not have the benefit of the male hormones that might have
given her a competitive advantage. Makes for a very interesting class
discussion. When it comes down to it, I and most scientists I speak with on
this issue, have a difficult time coming to terms with what constitutes an
individual. Heck I still can't even define what a living being is, and I
have been trying to find one for 40 years. I suggest also that you get the
student to read some of the short stories of Isaac Asimov in the book "I
Peter Faletra Ph.D.
Office of Science
Department of Energy
Update: June 2012
Good thing the cars in this video are all moving slowly. Add a little more speed, and the scene would be a driver’s worst nightmare. Imagine a car pileup in front of you on a snowy day, your own skidding wheels and, seconds later, the inevitable crash…
Consider—the reason people can control their cars is that it’s very hard to slide a tire across pavement. Technically speaking, this is because tires are built to have a high coefficient of friction when pressed on a paved road. The coefficient of friction is essentially a ratio of the force it takes to slide two surfaces across each other to the force they’re being pressed together with. A high coefficient of friction means the two surfaces don’t like to slide; a low coefficient of friction means it’s easy. For example, let’s say you’re speeding down the highway and you see a police officer, so you step on your brakes. The amount of force it would take for your car to skid is the weight of your car (the force pressing the car to the road) multiplied by the coefficient of friction. When the pavement is dry, the coefficient of friction is high, so you can apply a lot of braking force without skidding.
On the fateful snowy day in our video, things worked a little differently. When these people pressed the brakes, the heat generated by the tire-on-ice friction created a thin film of water over the frozen surface. The coefficient of friction for tires on ice with a thin film of water between them is pretty much zip, resulting in—you guessed it—auto Ice Capades. It took almost no braking force for the cars to skid and, once skidding, they continued in a uniform motion, on a decline, until they found something that could apply enough force to stop them. The most convenient thing, as it all too often is, was another car.
There’s not a whole lot you can do in a situation like this besides try to steer out of the line of other cars and gently brake in the hopes that your antilock system helps the wheels grip again. What didn’t seem to work was when one guy jumped out of his car, grabbed the door, and tried to stop it himself. Maybe he can bench-press a few, but it’s doubtful he could have competed against the villainous combination of ice, rubber and a low coefficient of friction. —Katherine Ryder
by Staff Writers
Paris (AFP) Feb 9, 2012
French scientists unveiling new estimates for global warming said on Thursday the 2 C (3.6 F) goal enshrined by the United Nations was "the most optimistic" scenario left for greenhouse-gas emissions.
The estimates, compiled by five scientific institutes, will be handed to the UN's Intergovernmental Panel on Climate Change (IPCC) for consideration in its next big overview on global warming and its impacts.
The report -- the fifth in the series -- will be published in three volumes, in September 2013, March 2014 and April 2014.
The French team said that by 2100, warming over pre-industrial times would range from two degrees Celsius (3.6 degrees Fahrenheit) to 5.0 C (9.0 F).
The most pessimistic scenarios foresee warming of 3.5-5.0 C (6.3-9.0 F), the scientists said in a press release.
Achieving 2C, "the most optimistic scenario," is possible but "only by applying climate policies to reduce greenhouse gases," they said.
In its Fourth Assessment Report published 2007, the IPCC said Earth had already warmed in the 20th century by 0.74 C (1.33 F).
It predicted additional warming in the 21st century of 1.1-6.4 C (1.98-11.52 F), of which the likeliest range was 1.8-4.0 C (3.24-7.2 F).
The French estimates are derived from two different computer models that crunch data for four scenarios based on atmospheric levels of carbon dioxide (CO2), the main greenhouse gas.
The work differs from previous calculations as it takes into account the reflectivity of clouds and uptake of CO2 by the oceans and other factors that can skew the equation, the authors said.
Meeting in Cancun, Mexico in December 2010, countries under the UN Framework Convention on Climate Change (UNFCCC) set 2 C (3.6 F) above pre-industrial times as the maximum limit for warming.
They vowed to consider lowering it to 1.5 C (2.7 F) if scientific evidence warranted this.
Small island states and other poor nations badly exposed to climate change are lobbying for the 1.5 C (2.7 F) limit.
Climate Science News - Modeling, Mitigation Adaptation
Comment on this article via your Facebook, Yahoo, AOL, Hotmail login.
Political Leaders Play Key Role In How Worried Americans Are By Climate Change
Columbus OH (SPX) Feb 08, 2012
More than extreme weather events and the work of scientists, it is national political leaders who influence how much Americans worry about the threat of climate change, new research finds. In a study of public opinion from 2002 to 2010, researchers found that public belief that climate change was a threat peaked in 2006-2007 when Democrats and Republicans in Congress showed the most agreement.
I’m sure you’ve all heard the news; the OPERA collaboration have taken measurements which seem to suggest that neutrinos emanating from the CERN Super Proton Synchrotron travelled the 455 miles through the Earth’s crust to the Gran Sasso Laboratory at very slightly more than the speed of light in vacuum.
For those not versed in the ways of the physics, a neutrino is a fundamental particle. It’s a lepton, the same family of particles as the electron. Unlike the electron, the neutrino has no electrical charge, and so can only interact via the weak nuclear force. That’s how they can travel hundreds of miles through the earth’s crust; they interact with the matter we’re more familiar with (atoms made of electrons, protons and neutrons) only very, very rarely.
That means to detect them you need huge, super-sensitive detectors, typically built deep underground to screen out the signal you would otherwise get from cosmic rays. One is the Super-Kamiokande detector in Japan, which contains 50,000 tons of water. When one of the rare interactions with a neutrino occurs, the interaction generates a very small amount of light, which is detected and used to infer the properties of the neutrino interaction which caused it.
The OPERA experiment was designed to measure a phenomenon called neutrino oscillation, or neutrino mixing.
There are three types of neutrino: the electron neutrino, muon neutrino, and tau neutrino. The Sun produces a vast number of electron neutrinos as a by-product of the fusion reaction which powers it. When detectors were used to measure the neutrinos being emitted by the Sun, it was discovered that the number was less than would be expected. This became known as the “Solar Neutrino Problem”.
Despite claims to the contrary in certain elements of society and the media, when new evidence is discovered, the theory has to give way. Either the models of what was going on inside the Sun were wrong, e.g. the fusion yield of the Sun was lower than expected, or some aspect of neutrino physics was not properly understood. Cross-checks with other measurements of the Sun supported the Solar models. So the problem was with the neutrino physics.
It had been assumed that the mass of the neutrino was zero; all measurements made had indicated that it was, at the least, very close to zero. However, if the neutrino had even a very small amount of mass, it would undergo a very peculiar phenomenon due to a quirk of quantum mechanics, called neutrino mixing. Essentially, in the flight from the Sun to the Earth, some of the neutrinos would change flavour, from electron to muon, or tau neutrinos. The “missing” neutrinos were there all along; they just weren’t in the form of electron neutrinos that the detectors were capable of detecting.
The OPERA experiment is designed to more closely measure this process by generating a neutrino beam on demand in an accelerator, and then measuring the mixing that occurred while the beam was in flight.
In doing this, they have apparently detected, to a good degree of statistical significance, that their neutrinos travelled superluminally from the source to their detector. This is well-known to be forbidden by relativity, so if this is a true result, then it will require brand-new physics to explain, and could mark the start of a new era of post-Standard Model physics. It would be one of those fantastic moments where something amazing is discovered by people looking for something else entirely.
That said, it could also be a mistake in their methodology. Relativity has stood unmolested for a century; every experiment concurs with it.
When a supernova occurs, as well as a blinding flash, there is also an extremely intense neutrino pulse. So intense that even with the rare interaction of a neutrino with the matter we're made from, the pulse could give you a fatal radiation dose. Knowing how far away the supernova is, the lag time between the observation of the light pulse and the neutrino pulse, and just a dash of astrophysics, you can work out how fast the neutrinos must have travelled, and it comes out subluminal.
So the OPERA guys have done the sensible thing: checked everything they could, and published. It's very probable that it will turn out to be some subtle effect they hadn't fully considered, or one that was unknown when the experiment was designed, which will explain the measurements.
The real trouble with these sorts of things is how to manage the media, and how to stop them getting over-excited at things that may well turn out to be nothing, cf. the hints of the Higgs that melted away in the late Tevatron data.
I actually don’t have much of a point to make about all this, except that the relationship between science, the media, and society, means that there’s really great misunderstanding out there about what’s actually going on. Reading the comments on BBC News, especially the worst-rated ones (thankfully!) does demonstrate the mistrust of science and scientists, and a misplaced belief that science is about arrogance and certainty, when it is really more about doubt, and trusting the weight of the evidence. There’s also a certain group of people who seem to be fully unaware of just how well the world actually is understood these days.
It will certainly be interesting to see what happens if/when these neutrinos are shown to be subluminal!
Then, there’s the wackos, who take any new development as an excuse to just make weird and wacky stuff up. But they’re another story, really. | <urn:uuid:c41cd91f-3bc1-4968-926b-339561eedc6e> | 3.75 | 1,247 | Personal Blog | Science & Tech. | 45.15349 |
It is now well established that different human populations may exhibit very different responses to therapeutic drugs. However, to what extent this may have been influenced by our evolutionary history is less well known. In this guest blog, Ripudaman K Bains from University College London outlines why understanding our past can help inform our future, and describes her recent work published in BMC Genetics with colleagues from Addis Ababa University, Henry Stewart Group and Uppsala University on molecular diversity and population structure at the Cytochrome P450 3A5 gene in Africa.
Posts tagged: genetics
A dog’s breed standard is the set of criteria used to define the archetype of each breed. My personal favourite belongs to the Beagle, which includes as part of the official definition:
“The man with the lead in his hand and no dog
in sight owns a Beagle”
Strict adherence to these breed standards defines each of the many different pedigree classes seen in conformation dog shows, where each breed competes to be crowned as its most perfect representation.
Such remarkable diversification is unprecedented in the animal kingdom given the level of genetic diversity seen in the species, and has largely come about through concerted efforts by human breeders to adapt them to …
In 1755 curators at the Ashmolean museum in Oxford (UK) threw the last remaining tissue specimens of the dodo onto a fire. Unfortunately for this most hapless of flightless creatures, this means that we are still not entirely certain what this giant tropical pigeon truly looked like.
The dodo represents perhaps one of the most extreme examples of a population crash ever witnessed, slipping from discovery to extinction in only around 80 years. Of course the blame for this lies firmly in the hands—and stomachs—of humans, as hungry sailors devoured their way through the entire species when they set anchor at Mauritius.
However, human activity can also have unintended consequences for vulnerable species.
Functional programming in C++. Why would you want to do that and is it possible? In this series of posts I’m going to introduce functional programming concepts and show how they can be implemented in C++. See below for an introduction to the series and the kick-off article on Algebraic Data Types.
Since the dawn of computing there have been two parallel approaches to understanding and using what we now call software. The first line of thought, from which C++ was born, focuses on immediate practicality and the capabilities of the machines that are available. The second line of thought, which produced the discipline of functional programming, is a search for simple and beautiful mathematical constructs that capture the underlying essence of software and the domains modeled within it.
The concern of this series is the latter. By lifting design thinking from the level of pointers, classes, and templates to mathematical concepts such as algebraic data types, the resulting programs become simpler, more general, concise, and composable.
Some of the material may seem difficult at first, but if you keep trudging along, the benefits will become clear soon enough. My hope is that once someone works through these articles in their entirety, they will be convinced that a) C++ can be used as an advanced functional language (more powerful than Haskell 2010) that has the added feature of being performance tunable and b) that functional programming is an approach that produces readable, correct, and composable code.
The concern of this article is the mathematical basis and notation for Algebraic Data Types. The sequel will explain how to apply ADTs directly in C++. As you learn these concepts, I encourage you to ponder how a data type you designed recently would be implemented abstractly with ADTs. When I do this I often come up with something simpler or more general than my first thought.
A type can be thought of as a set of possible values. We use a ∈ A to say that a is in the set A, or, in relation to types, that a has the type A. long in C++ is an example of a type. We'd say 64l ∈ long since the literal 64l has that type.
Algebraic data types deal with how we can create more complex types based on combining others. ADTs are described using 5 symbols: 1, ⊕, ⊗, ::=, and :=. We’ll be looking at each of them below.
Please consider this a guided tutorial. If you have a question or something needs clarification, please make a comment below.
Given two types, A and B, we call the type A ⊗ B the product of A and B. A value of type A ⊗ B can be thought of as containing an element of A and an element B. Mathematically, this is often referred to as a pair where the value (a,b) is of type A ⊗ B if a ∈ A and b ∈ B.
Pairs can be generalized to form n-tuples.
Exercise. If ℝ is the type of the real numbers, what would be the type of points in 3-space?
Solution. ℝ ⊗ ℝ ⊗ ℝ
Given two types, A and B, we call the type A ⊕ B the sum of A and B. A value of type A ⊕ B can be thought of as containing an element of type A or an element of type B.
Values of a sum like A ⊕ B can be represented denotationally as either (0,a) where a ∈ A or (1,b) where b ∈ B.
Given the sum int ⊕ char where int and char are the normal C++ types of the same name, some elements would be (0, 12), (1, ‘a’), and (0, 33). Note that the 0 or 1 discriminates which of the underlying types are being stored in the pair’s second element.
As with the product operation, sums can be extended to n elements.
Exercise. If n is the number of values of a type A. How many values are in type A ⊗ A? How about A ⊕ A?
Solution. n × n for the product and n + n for the sum. Because we define our sum operation to be a disjoint union, values of A ⊕ A are tagged with which side of ⊕ they fall upon. If sum were defined to be a non-disjoint union, A ⊕ A would have only n elements.
Exercise. Using what we’ve covered so far, come up with a type for representing the values of booleans.
Solution. 1 ⊕ 1. The only two values of this type are (0,1) and (1,1).
Type definitions allow us to use shorthand names for more complex types. The general form is a := e where a is the newly bound identifier and e is the type expression. We read := as “is the same as”.
Expanding on our bool exercise, Bool₁ := 1 ⊕ 1 allows us to conveniently use Bool₁ wherever we would otherwise use 1 ⊕ 1.
Bool₁ as defined suffers from a major drawback however. The type 1 ⊕ 1 can be used to represent any type with two elements. For example, a state of “on” or “off” could be represented as 1 ⊕ 1 just as easily.
To remedy this we include an alternative type definition ::= (note the two colons), called "is implemented with". a ::= e states that a is a distinct type from e (for all v ∈ a, it is not true that v ∈ e), yet there is an affinity between them (given vₐ ∈ a there is a corresponding vₑ ∈ e and vice versa).
To denote elements of type a where a ::= e, we surround an element of type e with parentheses and give it the subscript a. For example if vₑ ∈ e, (vₑ)ₐ is the value of type a that corresponds to vₑ of type e.
Exercise. Denote the values of type Bool₂ where Bool₂ ::= 1 ⊕ 1.
Solution. ((0,1))Bool₂ and ((1,1))Bool₂
Exercise. Consider the following definition of Bool₃:
T ::= 1
F ::= 1
Bool₃ := T ⊕ F
Denote the values of type Bool₃.
Solution. We start with elements of base types and work our way up.
T has exactly one element, (1)T.
Similarly the element of F is (1)F.
Therefore the elements of Bool₃ are (0, (1)T) and (1, (1)F).
Type functions allow us to create reusable type templates. The general forms for type functions are f a₁ a₂ … aₙ := e and f a₁ a₂ … aₙ ::= e where f is the name of the newly bound type function and the type expression e may use a₁…aₙ.
Consider an OpInt type that can be either an integer or nothing. A simple way to declare OpInt is as follows:
none ::= 1
OpInt := none ⊕ int
If we think optional values are useful in a more general context, we can declare a type function Op that creates various optional types based on its type argument:
none ::= 1
Op a := none ⊕ a
Exercise. Define OpInt using Op from above.
Solution. OpInt := Op int
Interesting types occur when type definitions are allowed to be recursive. That is, the name to the left of the ‘=’ symbol also appears on the right hand side. Here are a few examples:
The natural numbers N:
Z ::= 1
N := Z ⊕ N
If z is the single value of type Z, values of N take the form:
n₀ = (0, z)
n₁ = (1, n₀) = (1, (0, z))
n₂ = (1, n₁) = (1, (1, (0,z)))
n₃ = (1, n₂) = (1, (1, (1, (0, z))))
Exercise. Define a list type L of arbitrary types a. Show a few values of type L char.
Solution.
emptyList ::= 1
L a := emptyList ⊕ (a ⊗ L a)
And a couple values of type L char:
- “” = (0, emptyList)
- “Hi” = (1, (‘H’, (1, (‘i’, (0, emptyList)))))
Since the types in the sum are disjoint, we can also represent these values unambiguously as follows:
- “” = emptyList
- “Hi” = (‘H’, (‘i’, emptyList))
Exercise. Declare a type for binary trees BTree, with values of type a at leaf nodes. Each node is either a leaf or has exactly two subtrees.
Solution. BTree a ::= (BTree a ⊗ BTree a) ⊕ a
Exercise. Declare a type for binary trees BTree₂, with values of type a at leaf nodes and b at branch nodes. Again, each node is either a leaf or has exactly two subtrees.
Solution. BTree₂ a b ::= (b ⊗ BTree₂ a b ⊗ BTree₂ a b) ⊕ a
Exercise. How do the above trees differ from BTree₃ (Thanks to Peter Marendic for inspiring this exercise):
emptyTree ::= 1
BTree₃ a ::= emptyTree ⊕ (a ⊗ BTree₃ a ⊗ BTree₃ a)
Solution. BTree₃ allows for empty trees and, since either subtree may itself be empty, for branch nodes with effectively one subtree as well as two.
Next up, we’ll talk about how to realize algebraic datatypes in C++ using Boost libraries. See you soon!
Active Galactic Nuclei and Supermassive Black Holes
Until recently, the underlying physics of active galactic nuclei (AGN) was something of a mystery to cosmologists. They seemed to produce forces and energies so large, and in such a small volume, that current cosmological theories could not account for them. Recent discoveries made while studying inactive galactic nuclei, the centers of normal galaxies like our own Milky Way, have allowed theories to be put forward explaining AGN and the formation of all galaxies.
Active Galactic Nuclei come in several different types: Seyfert galaxies, quasars, and blazars. To understand how all of these seemingly different and disparate phenomena can be grouped together under a single category, we must first understand the history of each of these individual classes of object.
First investigated by Vanderbilt University astronomer Carl K. Seyfert (for whom they are named) in the 1940s, Seyfert galaxies are otherwise normal galaxies which have an uncharacteristically bright point source of electromagnetic radiation in the 100 keV gamma-ray band (1). This point source can be variable with a very small period, which indicates that the volume of the object emitting the radiation was on the order of one parsec (1). While these objects were found to be not uncommon in the universe and shown to be extragalactic due to their redshift, a viable theory explaining the object observed in the nucleus of these galaxies was absent for many years.
In the late 1950s and early 1960s, a new class of radio source objects was first catalogued. These sources of radio waves were soon matched to what appeared to be very dim stars in the visual band (2). When the redshift of one of these objects, 3C273, was calculated in 1963 by M. Schmidt and discovered to be 0.158, astronomers discovered that they were, in fact, very distant galaxies. This gave them the name quasar, which is short for QUAsi StellAr Radio source (2). In the 1960s, it was discovered that quasars have spectra very similar to the nuclei of Seyfert galaxies. This indicated that quasars were the same objects as Seyfert galaxies, only they were much further away (and consequently, much younger) and that the nucleus outshines the rest of the galaxy by a factor of 10-1000 (1). Radio observations of quasars show that they emit twin jets of particles and energy at relativistic speeds. We cannot detect these jets directly, as they are not pointed at us. These jets are ‘seen’ from earth as the energy given off as these extremely high-energy particles collide with the intergalactic gas and dust that permeates space (3).
Blazars are objects that emit very high amounts of electromagnetic radiation across all bands of the EM spectrum. Their redshift indicates that they, too, are extragalactic and occur toward the edge of the observable universe. Some characteristics of blazars are that their light exhibits high optical polarization and that they have high variability with periods of less than a few days. This very rapid variability indicates that the object emitting this polarized radiation is very small for an extragalactic object (1). The fact that blazars are extragalactic and emit extremely high amounts of energy from a small volume leads cosmologists to believe that they too are AGN. The crucial difference between blazars and Seyfert galaxies/quasars is that blazars are oriented in such a way that their relativistic jets of radiation and particles are pointed at the Earth. In the case of blazars, these jets need not be inferred from their interaction with the contents of intergalactic space, but can be viewed directly. This theory also accounts for the much higher luminosity and the much wider range over which this luminosity is observed (3). To date, more than 80 blazars have been catalogued by the EGRET experiment on board the Compton Gamma-Ray Observatory (1).
There are two subclasses of blazars that have been defined. Some blazars have spectra that exhibit emission lines similar to those of quasars. These are called Flat Spectrum Radio Quasars (FSRQ). The other classification encompasses blazars that have featureless optical spectra. These are known as BL Lac objects. Approximately 20% of blazars appear to be BL Lac objects, while the rest are FSRQs (1).
Bringing It All Together
When these phenomena were initially discovered no one had any idea that Seyfert galaxies, quasars, and blazars were all different manifestations of the same type of object. As they were all studied following their initial discovery, however, certain clues came to light that lent themselves to the modern theory of AGN. Firstly, the redshift of these objects, when applied to Hubble’s Law, indicated that they were all objects that resided outside our own galaxy. Thus far, the only objects luminous enough to be observed by us and reside outside the Milky Way were other galaxies. This idea fit well with Seyfert galaxies, but quasars and blazars appeared to be point sources, and the period of variability of all of these objects indicated that they occupied a very small volume. The connection between Seyfert galaxies and quasars was made when the spectra of these objects were compared. The spectra of these objects indicated that they were generated by a similar process. The redshift of the spectra indicated that the quasars were much further away from us than the Seyfert galaxies. Obviously, the quasars were Seyfert galaxies whose nuclei were much more active. So active, in fact, that they outshone the rest of the galaxy so that the nuclei were the only part that was visible from so far away. The blazars were connected to the Seyfert galaxies/quasars when it was observed that most blazars had emission lines similar to quasars, only with much more intensity. It was theorized that the bipolar jets emitting particles at relativistic speeds observed indirectly in Seyfert galaxies and quasars were pointed directly at Earth in the case of the blazars, accounting for their unmatched luminosity (1)(2)(3).
A Workable Theory
Once it was determined that Seyfert galaxies, quasars, and blazars were all in fact instances of the same phenomenon, henceforth referred to as AGN, the race to explain their peculiar behavior was on. As stated earlier, AGN exhibited both exceptionally intense luminosity and variability with a period on the order of a few days or weeks. Their luminosity coupled with their distance from Earth indicated that the process driving the AGN was extraordinarily intense, as they were radiating very large amounts of energy. The short variability period indicated that the mechanism of this process was housed in a very small volume, only a few light-weeks across. What could possibly emit so much energy in such a small volume? The only thing cosmologists had to offer were black holes containing several billion times the mass of our sun.
The theory of black holes and accretion disks has been around since 1971, when Cygnus X-1 was first proposed as the first empirical evidence for the singularity predicted by General Relativity. Since light cannot escape their pull, the theory goes, it is impossible to observe black holes directly. Cygnus X-1 made direct observation unnecessary by being a binary system. The gravitational effects of the black hole were indicated by the other member of the system, a rather unremarkable B0 type star that was orbiting around a seemingly invisible partner. Additionally, this partner was siphoning stellar material off of the star into an accretion disk. As the material in this disk moved toward the center, it gained velocity and energy from friction. This energy was given off as X-rays.
Explaining the massive energy production of AGN was just a matter of applying the idea of accretion disks on a much larger scale. Instead of consuming a star, the supermassive black hole at the center of an AGN was consuming a galaxy. The accretion disk surrounding this black hole would be of such a scale as to explain the energy output of AGN. Bipolar jets are another feature of accretion disks surrounding black holes, so this strange feature is also explained by the supermassive black hole theory.
A Startling Discovery
Cosmologists now had a working theory to explain the seemingly unique features of the galaxies located furthest from earth. Because they are the furthest, they are also the youngest galaxies we can see. It would seem that most young galaxies have a black hole billions of solar masses at their center. This line of reasoning begs a very important question: what happened to these black holes in more mature galaxies like our own? An answer to that question would come as quite a shock to Alan Dressler in 1983 (4).
Intending to solidify the existence of a supermassive black hole in AGN NGC1068, Dressler devised an experiment in which he would measure the Doppler shift of material on either side of the galactic core in order to determine the velocity at which that material was orbiting. If it was moving fast enough, that would be evidence for the existence of a supermassive black hole in that galaxy. In order to have a basis for comparison, he also measured the Doppler shift of material orbiting the core of our galactic neighbor, Andromeda. While he was not able to get a precise enough measurement of NGC1068 to support his hypothesis, the results from Andromeda were quite clear: there was a supermassive black hole at the center of the Andromeda galaxy! Since then, similar experiments on other nearby galaxies confirm that it is quite common for a galaxy to have such an object at its core and that they may actually be intrinsic to the formation of all galaxies (4).
(1): The Imagine Team/Dr. Jim Lochner. Active Galaxies. http://imagine.gsfc.nasa.gov/docs/science/know_l2/active_galaxies.html
(2): Christian, Eric and Masetti, Maggie. The Discovery of Quasars. http://imagine.gsfc.nasa.gov/docs/ask_astro/answers/980316b.html
(3): The Imagine Team/Dr. Jim Lochner. Active Galaxies and Quasars. http://imagine.gsfc.nasa.gov/docs/science/know_l1/active_galaxies.html
(4): British Broadcasting Corporation. Supermassive Black Holes (Transcript). http://www.bbc.co.uk/science/horizon/massivebholes_transcript.shtml
And this concludes another exciting episode of Node Your Homework.
The study of high temperature extreme environments continues to challenge our understanding of the upper tolerances of microbial life and how life may have originated on earth and possibly other planets. The Tramway Ridge geothermal site on Mt. Erebus, an active volcano in Antarctica, is the most geographically isolated geothermal site on earth, providing an excellent system for studies of microbial speciation, biogeography, and evolution of thermal adaptation. Recent advances in high throughput DNA sequencing and bioinformatics allow us to acquire and decipher the genetic capabilities and structure of entire microbial communities without the necessity of cultivation. Employing a combination of these advanced genetic methods coupled with culture-dependent approaches, a gene-centric analysis of the Tramway Ridge microflora and other Antarctic geothermal sites was undertaken to address questions focused on endemism, biogeography, evolution, and adaptation. Soil samples and bacterial isolates were collected from high temperature soils (maximum temperature sites) at Tramway Ridge and western crater locations at Mt Erebus. These temperatures averaged 65°C and the sites were dominated by steam emissions. Samples were obtained both for DNA/RNA genetic analysis and for cultivation. Cultivation efforts were undertaken on site and have been continued successfully in the lab. Ice core samples were also collected from the walls of several ice chimneys. Temperature probes were installed in the high temperature soils at Tramway Ridge. The temperature probes and data loggers were left in situ to obtain temperature data over a one year period at each site. Temperature, depth, water activity and oxygen saturation were measured in the field. Salinity, pH, moisture content, nutrients and elemental composition were determined post-field.
In the 2010-2011 field season, we expanded our set of study sites to include several ice caves, ice chimneys and ice hummocks found on the slopes of Mt. Erebus. In addition to the expansion of sites on Erebus, we also travelled to Mt. Melbourne and Mt. Rittman (Terra Nova Bay) to identify sampling sites within these features for comparison to Tramway Ridge.
On Mt. Erebus we deployed 19 temperature loggers, distributed around the main crater and flanking ice chimney fields to remain in the field overwinter, effectively constituting a volcano-wide temperature experiment. This will determine whether temperature fluctuations in different areas of the volcano are correlated to one another.
Soil samples from 21 different sites were successfully collected, 12 from Tramway and 9 from other locations around Erebus. All sites from Tramway had been depth profiled, with 9 new samples collected and 3 old sites re-sampled. Having collected a significant number of samples from wide-ranging locations allows a robust biogeographical study to be carried out and the identification of the most interesting sites for future sampling to focus on. Offsite soil collection included samples from within the western crater, ice chimneys, ice caves and Mt. Melbourne and Mt. Rittman. We now have samples collected from each of the three known geothermal features of Antarctica during the same field season. These samples will be utilised for biological and geochemical comparative analysis.
Ice cores were aseptically collected by drilling into the side of an ice chimney with an ethanol-sterilized corer. Cores were immediately placed into ethanol-sterilized plastic bags and placed in cold storage. The cores will be kept frozen until they are sampled aseptically. Portions of the core will be melted and used for DNA extraction. Other portions will be used for geochemical characterization. Together these data will provide a detailed history of the biology and chemistry of the ice chimney over time.
The primary objective for 2012 was to recover the temperature loggers deployed last season. We also collected hot soil samples at Tramway and from the temperature logger recovery sites for comparative analysis. In the field, we measured soil temperature, water content and oxygen concentration of subsurface gases. We also collected ice cores from the sides of ice chimneys to study bacteria associated with ice chimney formation.
Soil samples were collected from the temperature logger sites using sterile spatulas and containers. Temperature and water content data were collected using hand-held CheckTemp1 temperature and Hydrosense probes, respectively. Oxygen was measured by drawing subsurface gases into a chamber containing an oxygen sensor. The ice cores were collected using an ice drill. Temperature loggers deployed in 2010-2011 were collected.
This season we repeated oxygen measurements to establish whether the levels observed last year are constant or whether they fluctuate. Since oxygen concentrations are expected to have a profound impact on structuring microbial communities, our oxygen data provides a clue as to why the community structure differs at depth. | <urn:uuid:7bd24da0-5583-4e4e-8319-4a35937ee35b> | 3.1875 | 969 | Academic Writing | Science & Tech. | 20.778441 |
Even in ancient times, people began to suspect that matter, despite its appearance of being continuous, possesses a definite structure on a microscopic level beyond the direct reach of our senses. Democritus came up with the term "atom" to define these little structures.
While the scientists of the late nineteenth century accepted the idea that elements consisted of atoms, they knew almost nothing about the atoms themselves. The discovery of the electron in 1887 and the realization that all atoms contain electrons provided the first important insight into atomic structure. Since electrons carry a negative charge and the atom as a whole is neutral, positively charged matter of some kind must be present in atoms.
One suggestion, made by British physicist J. J. Thomson in 1898, was that atoms are simply positively charged lumps of matter with electrons embedded in them. His model was like that of raisins in a fruitcake. Because Thomson was well-respected scientist of his time, his idea was taken very seriously. But the actual atomic structure turned out to be quite different.
In 1911, at the suggestion of Ernest Rutherford, alpha particles (helium nuclei) were emitted behind a screen with a small hole in it, so that a narrow beam of alpha particles was produced. This beam was then directed at a thin gold foil. A zinc sulfide screen, which gives off a visible flash of light when struck by an alpha particle, was set on the side of the foil.
It was expected that the alpha particles would go right through the foil with hardly any deflection because in the Thomson model, an electric charge inside an atom is assumed to be uniformly spread through its volume. With only weak electric forces exerted on them, alpha particles that pass through a thin foil ought to be deflected only slightly, less than a degree.
What the scientists actually found was that although most of the alpha particles indeed were not deviated by much, a few were deflected in very large angles. Some were even deflected in the backward direction. As Rutherford remarked, "It was as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you." Our depiction of this statement is below.
Since alpha particles are relatively heavy (over 7000 electron masses) and those used in this experiment had high speeds, it was clear that strong forces had to be exerted upon them to cause such marked deflections. The only way to explain the results, Rutherford thought, was to picture an atom as being composed of a tiny nucleus, in which its positive charge and nearly all its mass are concentrated, with electrons some distance away. So, an atom would largely be empty space. With an atom having this characteristic, it is easy to see why most alpha particles go right through a thin foil. However, when an alpha particle comes near a nucleus, the intense electric field there causes it to be scattered through a large angle. So, everything made a little more sense.
In 1913, just two years after English physicist Ernest Rutherford's theory, the great Danish physicist Niels Bohr proposed a model for the hydrogen atom that not only accounted for the presence of the spectral lines but predicted their wavelengths to an accuracy of about 0.02%. Although Bohr's theory was successful for hydrogen, it proved less unseful for more complex atoms. We now regard Bohr's theory as an inspired first step toward the more comprehensive quantum theory that followed it.
Bohr, realizing that classical physics could not explain the structure of the hydrogen atom, put forward two bold postulates. Both turned out to be enduring features that carry over in full force to modern quantum physics. Moreover, both turned out to be quite general, applying not only to the hydrogen atom but to atomic, molecular, and nuclear systems of all kinds. These postulates are the following:
It was physically and mathematically determined that an electron does not simply orbit the nucleus due to stability considerations. The electron in this atom is obliged to whirl around the nucleus to keep from being pulled into it and yet must radiate electromagnetic energy continuously.
The standard model is the model of all known fundamental particles and particle interactions. It is basically a chart which encompasses all known particles and their characteristics (such as mass, spin, charge, etc.) as well as the interactions between them. It describes the quantum theory that includes the theory of strong interactions and the unified theory of weak and electromagnetic interactions (electroweak). Gravity is not part of the standard model.
An alternative to the standard model which employs the use of a new type of particle called the tachyon which can travel faster than the speed of light.
These theories interpret pointlike particles, such as electrons, as being unimaginable tiny, closed loops. Strangely, extra dimensions beyond the familiar four dimensions of spacetime appear to be required. | <urn:uuid:aac94758-f161-4415-9459-2754731f1873> | 4.375 | 979 | Knowledge Article | Science & Tech. | 41.586315 |
|Boiling Point: 5800K
Melting Point: 2400K
Electrons Energy Level: 2, 8, 18, 32, 32, 10, 2
Isotopes: 16 + None Stable
Heat of Vaporization: unknown
Heat of Fusion: unknown
Specific Heat: unknown
Atomic Radius: unknown
Ionic Radius: unknown
1s2 2s2p6 3s2p6d10 4s2p6d10f14 5s2p6d10f14 6s2p62 7s2
Rutherfordium (named in honour of noted New Zealand nuclear physicist Ernest Rutherford) was reportedly first synthesized in 1964. Workers of the Joint Nuclear Research Institute at Dubna (U.S.S.R.) bombarded plutonium, 242Pu, with accelerated 113 to 115 MeV 22Ne ions. By measuring fission tracks in a special glass with a microscope, they claimed detection of an isotope that decays by spontaneous fission. They reported this isotope to possibly be 260104 with a half-life of 0.3 +/- 0.1 seconds, produced by the following reaction:
22Ne + 242Pu 260104 + 4 1n
In 1969, Albert Ghiorso, Nurmia, Harris, K. A. Y. Eskola, and P. L. Eskola of the University of California at Berkeley reported they had positively identified two, and possibly three, isotopes of Element 104. The group also indicated that after repeated they had been unable to produce isotope 260104 reported by the Dubna groups in 1964. The discoveries at Berkeley were made by bombarding a target of 249Cf With 12C nuclei of 71 MeV, and 13C nuclei of 69 MeV. The combination of 12C with 249Cf followed by instant emission of four neutrons produced Element 257104. This isotope has a half-life of 4 to 5 s, decaying by emitting an alpha particle into 253No, with a half-life of 105 s. The same reaction, except with the emission of three neutrons, was thought to have produced 258104 with a half-life of about 1/100 s. Element 259104 is formed by the merging of a 13C nuclei with 249Cf, followed by emission of three neutrons. This isotope has a half-life of 3 to 4 s, and decays by emitting an alpha particle into 255No, which has a half-life of 185 seconds. Thousands of atoms of 257104 and 259104 have been detected. The Berkeley group believe their identification of 258104 was correct. As of January 1995 it was thought that eleven isotopes of Element 104 had been identified. The Berkeley group proposed for the new element the name Rutherfordium (symbol Rf), in honor of Ernest Rutherford, New Zealand physicist who is known as the "father" of nuclear physics.
This resulted in an element naming controversy; since the Soviets claimed that it was first detected in Dubna, dubnium (Db) was suggested, as was kurchatovium (symbol Ku) for element 104, in honor of Igor Vasilevich Kurchativ (1903 - 1960), late head of Soviet nuclear research. The International Union of Pure and Applied Chemistry (IUPAC) adopted unnilquadium (symbol Unq) as a temporary, systematic element name, derived from the Latin names for digits 1, 0, and 4. However in 1997 they resolved the dispute and adopted the current name. (Element 105 was named Dubnium, instead.)
Rutherfordium, the first transactinide element, is expected to have chemical properties similar to those of hafnium. It would, for example, form a relatively volatile compound with chlorine (a tetrachloride). The Soviet scientists have performed experiments aimed at chemical identification, and have attempted to show that the 0.3 seconds activity is more volatile than that of the relatively nonvolatile actinide trichlorides. This experiment does not fulfill the test of chemically separating the new element from all others, but it provides important evidence for evaluation. New data, reportedly issued by Soviet scientists; have reduced the half-life of the isotope they worked with from 0.3 to 0.15 seconds.
This is a highly radioactive synthetic element whose most stable isotope is 265Rf with a half-life of approximately 13 hours.
This element therefore has no applications and little is known about it. Rutherfordium is the first transactinide element and it is predicted to have chemical properties similar to hafnium.
Only very small amounts of of element 104, Rutherfordium, have ever been made. The first samples were made through nuclear reactions involving fusion of an isotope of plutonium, 242Pu, with one of neon, 22Ne. | <urn:uuid:6ce34e17-177f-47ef-a06c-ba5037d98765> | 3.5 | 999 | Knowledge Article | Science & Tech. | 50.878294 |
The issues have delayed the return trip to Earth that was due to begin within the first ten days of December, according to previous news reports. New information also suggests the last ditch effort to collect the world's first samples from an asteroid - first described as a success - may not have achieved its goal.
After a November 26 approach to gather samples from the potato-shaped space rock, the 1,000-pound probe encountered a series of difficulties with its chemical propulsion system that is responsible for controlling the craft's orientation.
First, as scientists rejoiced after an apparent successful end to a highly ambitious three-month stay in the vicinity of Itokawa, controllers noted a propellant leak in Subsystem B, one of two attitude control subsystems aboard Hayabusa. Commands were sent to shut latching valves aboard both subsystems, and the spacecraft was left in safe mode as conditions were slowly stabilized.
Hayabusa had earlier gone into safe mode last month, but officials worked several days to finally recover the probe to set up a pair of sampling attempts in the succeeding weeks.
However, this time the revival was not as trouble-free, and communications passes in the next few days were not able to bring back Hayabusa to normal operations. Control jets in Subsystem A were not producing enough thrust to point the craft's high gain antenna toward Earth. Investigations since then have concluded this was likely due to frozen propellant, while a "severe fuel leak continued," a status report said.
During this period, Hayabusa was also suffering a serious electricity shortage that almost fully drained the battery, engineers now believe. On November 28, no communications were received from the spacecraft at all, but a low bandwidth beacon signal was obtained the next day. By November 30, recovery operations began in earnest with the aid of the on-board computer than can work without help from the ground.
"It was unusually fortunate that the spacecraft recovered the attitude, power, and communications," a December 8 internal update said.
The mission has been relying more heavily than planned on chemical propellant quantities after two of three reaction wheels failed since this summer. Normally the assemblies would be responsible for attitude control, but thrusters must now take care of those activities.
Waning fuel numbers have drawn into doubt the chances for mission success since October, and the subsequent fuel leak certainly complicates the state of affairs.
Another setback struck the asteroid explorer on December 1 when controllers believe the spacecraft inexplicably pointed its solar panels away from the Sun, further depleting batter power reserves. This caused the instruments aboard Hayabusa to shut off, and controllers were then faced with the prospect of slowly restarting the systems one by one, lengthening the time it would take to downlink data that could explain that events that transpired to create the sticky situation, which officials continue to portray as not being optimistic.
Tests of the chemical propulsion system on December 2 showed it to still be sluggish, and engineers then set out to devise a new method of attitude control employing the xenon gas used by the ion engines that are needed for the trip back to Earth. The amount of xenon in pressurized tanks on Hayabusa is sufficient for both the required lengthy ion engine burns and attitude control, according to the project team. | <urn:uuid:48621fc8-b551-4cc5-98ac-e82afa325363> | 2.984375 | 664 | Knowledge Article | Science & Tech. | 31.24197 |
|It was the largest hurricane ever recorded in the Atlantic Ocean. The cost of its devastation is still unknown. Pictured above is a movie of Superstorm Sandy taken by the Earth-orbiting GOES-13 satellite over eight days in late October as the hurricane formed, gained strength, advanced across the Caribbean, moved up the Atlantic Ocean along the US east coast, made an unusual turn west, made landfall in New Jersey, turned back to the north over Pennsylvania, and then broke up moving north-east over the northern US and Canada. Although Sandy's winds were high and dangerous, perhaps even more damaging was the storm surge of water pushed onto land ahead of Sandy, a surge that flooded many coastal areas, streets, and parts of the New York City subway system. Spanning over 1500 kilometers, US states as far west as Wisconsin experienced parts of the storm. Although Hurricane Sandy might have formed at any almost time, concerns are being raised that large storms like Sandy might become more common if water in the Atlantic continues to edge higher in power-enhancing surface temperature.
Credit: NASA, GOES-13 Satellite | <urn:uuid:fc68613f-ebed-47bf-bed0-8a4503e7e57e> | 3.515625 | 225 | Knowledge Article | Science & Tech. | 29.537544 |
Newton’s law of cooling
Simply begin typing or use the editing tools above to add to this article.
Once you are finished and click submit, your modifications will be sent to our editors for review.
...- T 2), of course, and it is worthwhile noting that the manner in which it does so is not linear; the heat loss increases more rapidly than the temperature difference. Newton’s law of cooling, which postulates a linear relationship, is obeyed only in circumstances where convection is prevented or in circumstances where it is forced (when a radiator is fan-assisted,...
What made you want to look up "Newton's law of cooling"? Please share what surprised you most... | <urn:uuid:141dd161-5a02-4e37-a96d-a1daec2d521d> | 2.9375 | 146 | Truncated | Science & Tech. | 60.667732 |
determine the equation of the line through the points (8,2) and (-4,-1) Please explain and show work
Given that sin alpha = 15/17 and -360 <alpha<360 find all values of alpha to two decimal places. Help Please!
Good point. Thank You.
Necessary for all the countries that took part. Like America, Germany, Great Britain
Was imperialism necessary in the 19th century?
Factorise fully: 5pq + 10p
median and mode
What is the median of this set of numbers 85,48,75,36,55
A student throws two beanbags in the air, one straight up and the other one at a 30° angle from the vertical. Both beanbags are thrown with the same initial velocity and from the same height. In your own words, explain which one will come back and hit the ground first and ...
Read the following excerpt from and sometimes i hear this song in my head. "and our tongues keep remembering the rhythm of the words we forgot swaying on the backs of buses and in hot kitchens crooning in pool halls and shared bathrooms yeah/we carving a heart...
Physical Science Help!
Can someone please help me balance these equations? Thank you for your time!! 1. NO + O2 ¨ NO2 2. NH4Cl + Ca(OH)2 ¨ CaCl2 + NH3 + H2O
For Further Reading | <urn:uuid:b8591c58-61e5-4326-9a24-a04cd1bbcf97> | 3.09375 | 302 | Comment Section | Science & Tech. | 83.203333 |
© NASA, CXC, M. Weiss.
When cosmic microwave photons pass through a galaxy cluster, about 1 percent of them scatter off of hot electrons in the gas filling the space between galaxies in the cluster. This is called the "Sunyaev-Zel'dovich effect." When the photons scatter, they gain energy from the electrons. Thus, galaxy clusters appear in a map of the CMB as cold spots at the original photon frequency, and hot spots at a higher frequency. Microwave telescopes such as the South Pole Telescope can detect galaxy clusters using the Sunyaev-Zel'dovich effect, while x-ray telescopes such as Chandra can map the hot gas that scattered the CMB photons. These measurements together allow astronomers to measure the growth of structure in the universe. (Unit: 11) | <urn:uuid:5df3fe3c-a271-48cd-a433-4879d667d665> | 3.90625 | 166 | Knowledge Article | Science & Tech. | 53.12 |
||About Climate Change
Over the past 100 years global mean temperature has increased by 0.7 °C and in Europe by about 1.0 °C. Temperatures are projected to increase further by 1.4 to 5.8°C by 2100, with larger increases in Eastern and Southern Europe. Recent declarations of scientists say the year 2009 will be one of the top-five warmest on record. There is numerous evidence that most of this warming can be attributed to the emission of greenhouse gases (GHGs) and aerosols by human activities. Human activities such as the burning of fossil fuels and the destruction of forests to make farmland are increasing the levels of carbon dioxide and other heat-trapping gases in the atmosphere. These gases trap heat that is radiated from the earth surface and prevent it escaping to space, causing "Global warming". Read the Key Messages by the European Environmental Agency.
Many experts believe that global warming must be limited to no more than 2 °C above the industrial temperature if we are to prevent climate change from having irreversible impacts. But the scientific consensus is that the world's average temperature could rise by as much as 6 °C above today's levels in the course of this century, if no further action is taken. Scientists warm that even if we stop emitting carbon dioxide now, the climate would not go back to normal in 100 or 200 years. Read article
Antarctica, in particular, has undergone a significant warming over the past 50 years. Read article
Global warming has become a political issue since the middle of the nineteenth century when Dr. James Hansen, director of the NASA Goddard Institute of Space Studies and a leading climate modeler, testified before the U.S. Congress that the greenhouse effect was changing the climate. Ever since, studies have shown the impact of human activity on the environment and underlined the specific substances that jeopardize the ecosystem. The climatologist now urges the U.S president-elect Barack Obama to act quickly on climate change. See the interview
Human activities that contribute to climate change include in particular deforestation and the burning of fossil fuels (such as coal, oil and natural gas) and other fuels which leads to the emission of carbon dioxide (CO2), one of the most important greenhouse gases. Other important contributors to the recent climate change are methane, nitrous oxide and fluorocarbons. Most of them, and especially increased concentrations of CO2 related to the burning of fossil fuels in automobiles or power plants for example, have great consequences on the Earth’s ecosystem. It is also in our daily behaviour that we impact the environment, by using more water than needed, throwing out our wastes without recycling, using our cars without real need. Everyone has his share in the environmental pressure, you can calculate your carbon footprint.
(Source: IPCC AR4)
From the origins of climate change, one can identify many observed changes on the environment. The international forum responsible for assessing the scientific evidence of climate change is the Intergovernmental Panel on Climate Change (IPCC), set up in 1988. The IPCC is a joint initiative of the United Nations Environment Porgramme and the World Meteorological Organization. It identifies warming-increased floods and drought, rising sea levels, spread of deadly diseases such as malaria and dengue fever, increasing numbers of violent storms threaten to be more severe and imminent than previously believed. The impact of global warming is felt, in particular in extreme temperature areas like the Arctic, where the average annual temperature has increased approximately four times as much as average annual temperatures around the rest of the globe.
Download the IPCC Fourth Assessment Report. Find report in other languages here
(Source: IPCC 4AR)
Impacts and Vulnerabilities
Anthropogenic emission of greenhouse gases perturb the global climate system, resulting in an increase of global mean temperature, changes in weather and precipitation patterns and increased climate variability resulting in higher frequency of extreme weather. Increased CO2 concentration results in ocean acidification, which has significant negative consequences for marine biology. Particularly vulnerable ecosystems include coral reef, Artic ecosystems, Alpine ecosystems and tropical forests. A global mean temperature increase exceeding 2-3°C would increase the risk of extinction for about 20-30% of species and have widespread adverse effects on biodiversity and ecosystems.
(Source: IPCC AR4)
Scientific Facts in the EU
On Wednesday 21 May, the European Parliament's Temporary Committee on Climate Change (CLIM) presented the Interim Report reported by MEP Karl-Heinz Florenz (PPE-ED, DE) on the scientific facts of climate change. The plenary adopted the draft interim report 'The scientific facts of climate change: findings and recommendations for decision-making' with a broad majority: 566 votes in favour, 61 against and 24 abstentions. See the report in other European languages here.
Doing so, the European Parliament acknowledges the sentiment of a majority of Europeans who see climate change as the most pressing political issue of the moment. This has been made evident in a European Barometer entitled ' Attitudes of European citizens toward the environment' available here. See the EU Environmental Indicators 2008. The European Environment Agency assesses environmental progress in 53 countries on biodiversity, water, waste and climate change. See Europe's Environment - the Fourth Report (in all languages)
Cost of Inaction
Combating climate change is likely to mean significant adjustments to our lifestyles and consumption patterns. These changes, however, also need to be compatible with sustainable development and jobs. In any case, the cost of this change is limited compared to the cost of the damage climate change will cause if we take no action.
The Stern Review on the economics of climate change, commissioned by the UK government and published in October 2006, said that managing global warming and thus reducing GHG emissions would cost +/- 1% of global GDP every year, while inaction could reduce global GDP by at least 5% a year, and in the long term by possibly as much as 20% or more. Around 0.5% of total global GDP would be required to invest in a low-carbon economy for the period 2013-2030, leading to a 0.19% decrease in global GDP growth per year up to 2030 (only a fraction of the expected annual GDP growth rate of 2.8%).
Moreover, this does not take into account the value of other benefits such as reduced air pollution, associated health benefits, security of energy supply at predictable prices and improved competitiveness through innovation.
A Mc Kinsey report released on January 26, 2008 says it is possible to maintain global warming below 2°C at an overall cost of less than 1% of global GDP if swift action is taken across different sectors. The consulting firm estimates that €530 billion will need to be invested across the world by 2020 to reduce emissions to 70% below "business as usual" and avoid dangerous levels of global warming. Overall, €810 billion would need to be invested by 2030 to to avoid such a scenario, the report adds. Read the article
Read the report
Back to top
Source: Euractiv, Europa, Europarl, UNFCCC, EEA | <urn:uuid:5b560e00-e049-48ee-adde-40ea6906929d> | 4.09375 | 1,452 | Knowledge Article | Science & Tech. | 33.297205 |
2 Energy of the Spring-Mass System
We know enough to discuss the mechanical energy of the oscillating mass on a spring. Remember, kinetic energy is always K = ½mv², so here K = ½m(−ωA sin(ωt + φ))², and the potential energy of a spring is U = ½kx², so here U = ½k(A cos(ωt + φ))².
3 Energy of the Spring-Mass System
Add to get E = K + U = constant:
E = ½m(ωA)² sin²(ωt + φ) + ½k(A cos(ωt + φ))²
Recalling ω² = k/m,
E = ½kA² sin²(ωt + φ) + ½kA² cos²(ωt + φ) = ½kA² [sin²(ωt + φ) + cos²(ωt + φ)] = ½kA²
4 SHM So Far
The most general solution is x = A cos(ωt + φ)
where A = amplitude
For SHM without friction
The frequency does not depend on the amplitude !
We will see that this is true of all simple harmonic motion!
The oscillation occurs around the equilibrium point where the force is zero!
Energy is constant; it transfers between potential and kinetic.
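The energy bookkeeping above can be checked numerically. A minimal Python sketch (parameter values are arbitrary illustrations, not from the slides): evaluate K + U at several times and confirm it always equals ½kA².

```python
import math

# Check numerically that E = K + U stays at (1/2)*k*A**2 for
# x(t) = A*cos(w*t + phi). Parameter values are arbitrary illustrations.
m, k = 0.5, 8.0        # kg, N/m
A, phi = 0.1, 0.3      # m, rad
w = math.sqrt(k / m)   # rad/s

def energy(t):
    x = A * math.cos(w * t + phi)
    v = -A * w * math.sin(w * t + phi)
    return 0.5 * m * v**2 + 0.5 * k * x**2

E = 0.5 * k * A**2     # 0.04 J, independent of time
assert all(abs(energy(t) - E) < 1e-12 for t in (0.0, 0.1, 0.7, 2.3))
```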
5 The Simple Pendulum
A pendulum is made by suspending a mass m at the end of a string of length L. Find the frequency of oscillation for small displacements.
ΣF_y = ma_c: T − mg cos(θ) = mv²/L
ΣF_x = ma_x = −mg sin(θ)
If θ is small then x ≈ Lθ and sin(θ) ≈ θ
dx/dt = L dθ/dt
a_x = d²x/dt² = L d²θ/dt²
so a_x = −gθ = L d²θ/dt², i.e. L d²θ/dt² + gθ = 0
and θ = θ₀ cos(ωt + φ) or θ = θ₀ sin(ωt + φ)
with ω = (g/L)^½
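The result ω = (g/L)^½ is easy to turn into numbers. A small Python sketch (the g and L values are illustrative): the period T = 2π/ω = 2π(L/g)^½ of a 1 m pendulum comes out near 2 s, and the mass never enters.

```python
import math

g = 9.8  # m/s^2

# Small-angle pendulum: w = sqrt(g/L), so T = 2*pi*sqrt(L/g).
# Note that the mass m never appears.
def pendulum_period(L):
    return 2 * math.pi * math.sqrt(L / g)

T1 = pendulum_period(1.0)   # about 2.0 s for a 1 m pendulum
T2 = pendulum_period(0.25)  # quartering L halves the period
assert abs(T2 - T1 / 2) < 1e-12
```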
6 Lecture 20 Exercise 1: Simple Harmonic Motion
You are sitting on a swing. A friend gives you a small push and you start swinging back and forth with period T₁.
Suppose you were standing on the swing rather than sitting. When given a small push you start swinging back and forth with period T₂.
Which of the following is true, recalling that ω = (g/L)^½?
(A) T₁ = T₂ (B) T₁ > T₂ (C) T₁ < T₂
7 The Rod Pendulum
A pendulum is made by suspending a thin rod of length L and mass M at one end. Find the frequency of oscillation for small displacements (i.e. sin θ ≈ θ).
Στ_z = Iα = −(L/2) Mg sin(θ)
(no torque from the pivot)
[ML²/12 + M(L/2)²] d²θ/dt² = −(L/2) Mg θ
so −(1/3) L d²θ/dt² = ½ g θ
The rest is for homework
8 General Physical Pendulum
Suppose we have some arbitrarily shaped solid of mass M hung on a fixed axis, where we know the location of the CM (a distance R from the axis) and the moment of inertia I about the axis.
The torque about the rotation (z) axis for small θ (sin θ ≈ θ) is
τ_z = −MgR sin(θ) ≈ −MgR θ
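The slide stops at the torque; the standard closing step (not shown above) gives ω = (MgR/I)^½ for the physical pendulum. A Python sketch with illustrative numbers, cross-checked against the rod pendulum of slide 7 (I = ML²/3, R = L/2):

```python
import math

g = 9.8  # m/s^2

# General physical pendulum: tau = -M*g*R*theta gives w = sqrt(M*g*R/I).
# (This closing formula is the standard textbook result.)
def physical_pendulum_omega(M, R, I):
    return math.sqrt(M * g * R / I)

# Cross-check with the rod pendulum: I = M*L**2/3 about the end, R = L/2,
# which should reduce to w = sqrt(3*g/(2*L)).
M, L = 1.0, 1.0
w_rod = physical_pendulum_omega(M, L / 2, M * L**2 / 3)
assert abs(w_rod - math.sqrt(3 * g / (2 * L))) < 1e-9
```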
9 Torsion Pendulum
Consider an object suspended by a wire attached at its CM. The wire defines the rotation axis and the moment of inertia I about this axis is known.
The wire acts like a rotational spring.
When the object is rotated the wire is twisted. This produces a torque that opposes the rotation.
In analogy with a spring, the torque produced is proportional to the angular displacement: τ = −κθ, where κ is the torsional spring constant.
ω = (κ/I)^½
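A Python sketch of ω = (κ/I)^½ for a concrete bob — a solid disk twisting about its symmetry axis. The mass, radius, and κ values are illustrative, not from the slides.

```python
import math

# Torsion pendulum: w = sqrt(kappa / I). Values below are illustrative.
def torsion_omega(kappa, I):
    return math.sqrt(kappa / I)

m, r = 0.2, 0.05                   # kg, m
I_disk = 0.5 * m * r**2            # kg*m^2, disk about its symmetry axis
w = torsion_omega(1e-3, I_disk)    # rad/s for kappa = 1e-3 N*m/rad
T = 2 * math.pi / w                # period, s
assert abs(w - 2.0) < 1e-9         # sqrt(1e-3 / 2.5e-4) = 2 rad/s
```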
10 Torsional spring constant of DNA
Session Y15 Biosensors and Hybrid Biodevices
11:15 AM–2:03 PM, Friday, March 25, 2005, LACC - 405
Abstract Y15.00010 Optical measurement of DNA torsional modulus under various stretching forces
Jaehyuck Choi, Kai Zhao, Y.-H. Lo, Department of Electrical and Computer Engineering, Department of Physics, University of California at San Diego, La Jolla, California 92093-0407 — We have measured the torsional spring modulus of a double-stranded DNA by applying an external torque around the axis of a vertically stretched DNA molecule. We observed that the torsional modulus of the DNA increases with stretching force. This result supports the hypothesis that an applied stretching force may raise the intrinsic torsional modulus of ds-DNA via elastic coupling between twisting and stretching. This further verifies that the torsional modulus value (C = 46.5 ± 10 pN·nm) of a ds-DNA investigated under Brownian torque (no external force and torque) could be the pure intrinsic value, without contribution from other effects such as stretching, bending, or buckling of DNA chains.
11 Lecture 20 Exercise 2: Period
All of the following torsional pendulum bobs have the same mass, and ω = (κ/I)^½.
Which pendulum rotates the slowest, i.e., has the longest period? (The wires are identical, so κ is constant.)
For both the spring and the pendulum we can derive the SHM solution and examine U and K
The total energy (K + U) of a system undergoing SHM will always be constant!
This is not surprising, since there are only conservative forces present; hence mechanical energy ought to be conserved.
14 SHM and quadratic potentials
SHM will occur whenever the potential is quadratic.
For small oscillations this will be true
For example, the potential between H atoms in an H₂ molecule looks something like this:
15 SHM and quadratic potentials
Curvature reflects the spring constant or modulus (i.e. stress vs. strain, or force vs. displacement).
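This "curvature sets the spring constant" idea can be checked numerically: for any well U(x), the effective k is U″(x₀) at the minimum. The Python sketch below uses a Morse potential as a stand-in for an H₂-like curve; the parameters D, a, x₀ are made up for illustration.

```python
import math

# Effective spring constant of a potential well = curvature at the minimum,
# k = U''(x0). Morse parameters below are hypothetical illustrations.
D, a, x0 = 4.5, 1.9, 0.74   # well depth, inverse width, equilibrium position

def U(x):
    return D * (1 - math.exp(-a * (x - x0)))**2

def curvature(f, x, h=1e-5):
    # central finite difference for the second derivative
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

k_eff = curvature(U, x0)    # numerical U''(x0)
k_exact = 2 * D * a**2      # analytic curvature of a Morse well
assert abs(k_eff - k_exact) / k_exact < 1e-4
```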
Measuring modular proteins with an AFM
See http://hansmalab.physics.ucsb.edu
16 What about Friction
Friction causes the oscillations to get smaller over time
This is known as DAMPING.
As a model, we assume that the force due to friction is proportional to the velocity: F_friction = −bv.
17 What about Friction
We can guess at a new solution: x(t) = A e^(−bt/2m) cos(ω′t + φ), with ω′² = ω₀² − (b/2m)² and now ω₀² = k/m.
18 What about Friction
If b/2m << ω₀, this is an oscillation whose amplitude decays exponentially. What does this function look like?
19 Damped Simple Harmonic Motion
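The guessed solution can be checked by brute force. This Python sketch (illustrative parameters; simple semi-implicit Euler integration) integrates m x″ = −kx − bx′ and compares the result against the standard underdamped form x(t) = A e^(−bt/2m) cos(ω′t), with ω′² = ω₀² − (b/2m)².

```python
import math

# Integrate m*x'' = -k*x - b*x' and compare with the decaying-cosine guess.
# Parameters are illustrative and chosen underdamped (b/2m << w0).
m, k, b = 1.0, 25.0, 1.0
w0 = math.sqrt(k / m)
wp = math.sqrt(w0**2 - (b / (2 * m))**2)

def analytic(t, A=1.0):
    return A * math.exp(-b * t / (2 * m)) * math.cos(wp * t)

def simulate(t_end, dt=1e-5):
    # initial conditions matching the analytic solution: x(0)=1, x'(0)=-b/2m
    x, v = 1.0, -b / (2 * m)
    for _ in range(int(round(t_end / dt))):
        a = (-k * x - b * v) / m   # acceleration from spring + drag
        v += a * dt
        x += v * dt
    return x

assert abs(simulate(2.0) - analytic(2.0)) < 5e-3
```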
There are three mathematically distinct regimes
underdamped (b/2m < ω₀), critically damped (b/2m = ω₀), overdamped (b/2m > ω₀)
20 Physical properties of a globular protein (mass 100 kDa)
Mass: 166 × 10⁻²⁴ kg
Density: 1.38 × 10³ kg/m³
Volume: 120 nm³
Radius: 3 nm
Drag coefficient: 60 pN·s/m
Deformation of protein in a viscous fluid
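With the numbers above, one can estimate how strongly damped protein motion in water is. The velocity relaxation time m/b (an inference from the listed values, not stated on the slide) comes out to a few picoseconds, so inertia is negligible and the dynamics are heavily overdamped.

```python
# Inferred from the table above: velocity relaxation time of a 100 kDa
# protein in water. Not stated on the slide; a back-of-envelope estimate.
m = 166e-24    # kg, protein mass
b = 60e-12     # N*s/m, drag coefficient (60 pN*s/m)

tau = m / b    # velocity relaxation time, s
assert 1e-13 < tau < 1e-11   # ~3 ps: inertia dies out almost instantly
```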
21 Driven SHM with Resistance
Apply a sinusoidal force F₀ cos(ωt), and now consider what the amplitude A and the phase do as the drive frequency ω is varied.
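For the driven, damped oscillator, the standard steady-state amplitude (the slide only poses the question; this is the textbook result) is A(ω) = (F₀/m)/√((ω₀² − ω²)² + (bω/m)²). A Python sketch with illustrative values shows that the response peaks near ω₀:

```python
import math

# Steady-state amplitude of a driven, damped oscillator (standard result):
#   A(w) = (F0/m) / sqrt((w0^2 - w^2)^2 + (b*w/m)^2)
# Parameter values are illustrative.
m, k, b, F0 = 1.0, 100.0, 0.5, 1.0
w0 = math.sqrt(k / m)

def amplitude(w):
    return (F0 / m) / math.sqrt((w0**2 - w**2)**2 + (b * w / m)**2)

# The response is resonant: it peaks near w = w0
assert amplitude(w0) > amplitude(0.5 * w0)
assert amplitude(w0) > amplitude(2.0 * w0)
```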
(The response peaks near ω ≈ ω₀.)
22 Microcantilever resonance-based DNA detection with nanoparticle probes
Change the mass of the cantilever and you change the resonant frequency and the mechanical response. Su et al., Appl. Phys. Lett. 82, 3562 (2003).
23 Stick-Slip Friction
How can a constant motion produce resonant vibrations?
Singing / Whistling
Tacoma Narrows Bridge
24 Dramatic example of resonance
In 1940 a steady wind set up a torsional vibration in the Tacoma Narrows Bridge
25 A short clip
In 1940 a steady wind set up a torsional vibration in the Tacoma Narrows Bridge
26 Dramatic example of resonance
Large-scale torsion at the bridge's natural frequency
27 Dramatic example of resonance
Eventually it collapsed
28 Lecture 20 Exercise 3: Resonant Motion
Consider the following set of pendulums all attached to the same string
If I start bob D swinging, which of the others will have the largest swing amplitude? (A) (B) (C)
29 Waves (Chapter 16)
Movement around one equilibrium point
So far we have looked only at the oscillations of a single point.
A wave, by contrast, changes in both time and space (i.e. in 2 dimensions!).
30 What is a wave
A definition of a wave
A wave is a traveling disturbance that transports energy but not matter.
Transverse: the medium's displacement is perpendicular to the direction the wave is moving.
Water (more or less)
Longitudinal: the medium's displacement is in the same direction as the wave is moving.
32 Wave Properties
Wavelength λ: the distance between identical points on the wave.
Amplitude: the maximum displacement A of a point on the wave.
33 Wave Properties...
Period The time T for a point on the wave to undergo one complete oscillation.
Speed The wave moves one wavelength in one period T so its speed is v / T.
Animation 34 Lecture 20 Exercise 4Wave Motion
The speed of sound in air is a bit over 300 m/s and the speed of light in air is about 300000000 m/s.
Suppose we make a sound wave and a light wave that both have a wavelength of 3 meters.
What is the ratio of the frequency of the light wave to that of the sound wave? (Recall v = λ/T = λf.)
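Working the numbers with f = v/λ:

```python
v_sound = 300.0    # m/s, from the exercise
v_light = 3.0e8    # m/s, from the exercise
wavelength = 3.0   # m, the same for both waves

f_sound = v_sound / wavelength   # f = v / lambda
f_light = v_light / wavelength

ratio = f_light / f_sound
print(ratio)   # 1000000.0 -> about 1,000,000, answer (A)
```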
(A) About 1,000,000 (B) About 0.000001 (C) About 1,000
35 Wave Forms
So far we have examined continuous waves that go on forever in each direction!
36 Lecture 20, Exercise 5: Wave Motion
A harmonic wave moving in the positive x direction can be described by the equation
(The wave varies in space and time.)
v = λ/T = λf = (λ/2π)(2πf) = ω/k, and by definition ω > 0.
y(x,t) = A cos((2π/λ)x − ωt) = A cos(kx − ωt)
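To see the +x motion concretely, one can sample y(x,t) = A cos(kx − ωt) on a grid and track the crest as t advances; the values of A, k, and ω below are arbitrary illustrative choices.

```python
import math

A, k, w = 1.0, 2.0, 3.0   # arbitrary illustrative values

def crest_position(t, xs):
    """x at which y(x,t) = A cos(kx - wt) is largest over the sampled points."""
    return max(xs, key=lambda x: A * math.cos(k * x - w * t))

xs = [i * 0.001 for i in range(-4000, 4001)]   # sample x in [-4, 4]
x0 = crest_position(0.0, xs)   # crest at t = 0
x1 = crest_position(0.1, xs)   # crest a moment later
print(x0, x1)   # the crest has moved in the +x direction
```

Flipping the sign of the ωt term makes the crest drift the other way, which is the point of the exercise that follows.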
Which of the following equations describes a harmonic wave moving in the negative x direction?
(A) y(x,t) = A sin(kx − ωt) (B) y(x,t) = A cos(kx + ωt) (C) y(x,t) = A cos(−kx + ωt)
37 Lecture 20, Exercise 6: Wave Motion
A boat is moored in a fixed location and waves make it move up and down. If the spacing between wave crests is 20 meters and the speed of the waves is 5 m/s, how long Δt does it take the boat to go from the top of a crest to the bottom of a trough? (Recall v = λ/T = λf.)
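Working it out: crest to trough is half a cycle, so Δt = T/2 with T = λ/v.

```python
wavelength = 20.0   # m, spacing between crests
v = 5.0             # m/s, wave speed

T = wavelength / v   # period: one wavelength passes per period -> 4.0 s
dt = T / 2           # crest to trough is half a period
print(dt)            # 2.0 -> answer (A)
```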
(A) 2 sec (B) 4 sec (C) 8 sec
38 Waves on a string
What determines the speed of a wave?
Consider a pulse propagating along a string
Snap a rope to see such a pulse
How can you make it go faster?
39 Waves on a string... Suppose:
The tension in the string is F
The mass per unit length of the string is μ (kg/m)
The shape of the string at the pulse's maximum is circular and has radius R
40 Waves on a string...
So we find v = √(F/μ).
Making the tension bigger increases the speed.
Making the string heavier decreases the speed.
The speed depends only on the nature of the medium, not on the amplitude, frequency, etc. of the wave.
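A small sketch of the v = √(F/μ) relation; the numerical values are my own examples, not from the slides.

```python
import math

def wave_speed(tension, mu):
    """Speed (m/s) of a transverse wave for tension in N and mu in kg/m."""
    return math.sqrt(tension / mu)

v1 = wave_speed(100.0, 0.01)   # 100 N tension on a 10 g/m string
v2 = wave_speed(400.0, 0.01)   # quadruple the tension...
print(v1, v2)                  # ...and the speed only doubles: v scales as sqrt(F)
```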
41 Lecture 20 Recap
Agenda Chapter 15 Finish Chapter 16 Begin
Chapter 16 Traveling Waves
Problem Set 7 due Nov. 14 (Tuesday) 11:59 PM
Problem Set 8 due Nov. 21 (Tuesday) 11:59 PM
For Wednesday Finish Chapter 16 Start Chapter 17
Mar. 27, 2012 Herpetologists from the California Academy of Sciences and University of Texas at El Paso discovered a single specimen of the Bururi long-fingered frog (Cardioglossa cyaneospila) during a research expedition to Burundi in December 2011. The frog was last seen by scientists in 1949 and was feared to be extinct after decades of turmoil in the tiny East African nation.
For biologists studying the evolution and distribution of life in Africa, Burundi sits at an intriguing geographic crossroads since it borders the vast Congo River Basin, the Great Rift Valley, and the world's second largest freshwater lake, Lake Tanganyika. Many of the species in its high-elevation forests may be closely related to plants and animals found in Cameroon's mountains, suggesting that at some point in the past, a cooler climate may have allowed the forests to become contiguous.
Previous knowledge of Burundi's wildlife came from scientific surveys conducted in the mid-20th century, when the nation was under Belgian administration. But its history since then has been one of political unrest, population growth, and habitat loss. Today, approximately 10 million people occupy an area the size of Massachusetts, giving Burundi one of the highest population densities in Africa.
Academy curator David Blackburn joined his colleague Eli Greenbaum, professor at the University of Texas at El Paso, on the 2011 expedition with the goal of finding Cardioglossa cyaneospila, as well as other amphibians and reptiles first described 60 years ago. To their pleasant surprise, the habitats of the Bururi Forest Reserve in the southwest part of the country were still relatively intact, with populations of rare forest birds and chimpanzees present.
With little knowledge to go on except a hunch that C. cyaneospila would make a call like its possible close relatives in Cameroon, Blackburn finally found a single specimen on his fifth night in the forest.
"I thought I heard the call and walked toward it, then waited," said Blackburn. "In a tremendous stroke of luck, I casually moved aside some grass and the frog was just sitting there on a log. I heard multiple calls over the next few nights, indicating a healthy population of the species, but I was only able to find this one specimen."
The Bururi long-fingered frog is about 1.5 inches long, with a black and bluish-gray coloration. The males are notable for one extra-long finger on each foot, analogous to the "ring finger" in humans, whose purpose is unknown. Its closest relatives live in the mountains of Cameroon, more than 1,400 miles away.
The lone specimen collected, which now resides in the Academy's herpetology collection, can be used for DNA studies to determine how long the Cardioglossa species from Burundi and Cameroon have been genetically isolated from one another. The results will shed light on Africa's historical climate conditions, a topic that has far-reaching implications for understanding the evolution of life in the continent that gave rise to our own species.
In addition to locating the Bururi long-fingered frog, Blackburn and Greenbaum also documented dozens of other amphibians in Burundi, many of which had never before been recorded in the country. The team also discovered some species that may be new to science.
"Eventually, we will use the data from our expedition to update the IUCN conservation assessment for amphibians of Burundi," said Greenbaum. "Because Burundi is poorly explored, we've probably doubled the number of amphibian species known from the country. Once we demonstrate that Burundi contains rare and endemic species, we can work with the local community to make a strong case for preserving their remaining natural habitats."
This module generates temporary file names. It is not Unix specific, but it may require some help on non-Unix systems.
Note: the module does not create temporary files, nor does it automatically remove them when the current process exits or dies.
The module defines a single user-callable function, mktemp().
The module uses two global variables that tell it how to construct a temporary name. The caller may assign values to them; by default they are initialized at the first call to mktemp().
tempdir
When set to a value other than None, this variable defines the directory in which filenames returned by mktemp() reside. The default is taken from the environment variable $TMPDIR; if this is not set, either /usr/tmp is used (on Unix), or the current working directory (all other systems). No check is made to see whether its value is valid.
template
When set to a value other than None, this variable defines the prefix of the final component of the filenames returned by mktemp(). A string of decimal digits is added to generate unique filenames. The default is either @pid. where pid is the current process ID (on Unix), or tmp (all other systems).
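A minimal usage sketch; note that in modern Python mktemp() still exists but is deprecated in favour of safer helpers such as NamedTemporaryFile, precisely because of the race between generating a name and creating the file.

```python
import os
import tempfile

# mktemp() only computes a name; it does not create the file.
name = tempfile.mktemp()
print(name)                       # e.g. a path under the temp directory
assert not os.path.exists(name)   # nothing was created on disk
```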
Warning: if a Unix process uses mktemp(), then calls fork() and both parent and child continue to use mktemp(), the processes will generate conflicting temporary names. To resolve this, the child process should assign None to template, to force recomputing the default on the next call to mktemp().
Recently I identified a counterattack trying to defend the failed anthropogenic global warming (AGW) hypothesis. Claims about Arctic ice melt this summer (2012) are another example. Data and analysis are wrong, but they need to scare a disinterested public.
I know about arctic sea ice from flying ice and anti-submarine patrols on Canada's east coast for four years, then five years of search and rescue in the Arctic. I later worked with members of the Canadian Polar Shelf Project and researchers producing Hudson Bay ice reconstructions.
Claims of declining ice conditions use satellite records that produced results after 1980. Mark Serreze, Director of the National Snow and Ice Data Center (NSIDC) publicly attacked “anti-science misinformers” and used this data to claim sea ice is at a record low of 4.1 million sq. km. Anthony Watts shows this was belied by another NSIDC “new and improved” measure of 4.7 million sq. km.
“In order to extend diagnoses of recent sea-ice variations beyond the past few decades, a century-scale digital dataset of Arctic sea-ice coverage has been compiled. For recent decades, the compilation utilizes satellite-derived hemispheric datasets. Regional datasets based primarily on ship reports and aerial reconnaissance are the primary inputs for the earlier part of the 20th century.”
These reconstructions have no value. As the Arctic Climate Impact Assessment (ACIA) said,
“The observational database for the Arctic is quite limited, with few long-term stations and a paucity of observations in general.”
If you can’t measure accurately with satellites, it’s impossible from the historic record.
NSIDC's different results between models illustrate the problem. Other agencies get different estimates again. NOAA says the ice level is 5.1 million sq. km, while NATICE interactive maps show over 6.1 million sq. km (diagram).
The NATICE map illustrates the problem of determining ice extent. Notice there are no 100% ice areas. Prevailing Polar Easterlies drive the pack ice in a constant movement around the Pole creating wind driven open areas. Other large open areas include polynas. Satellites are fooled by meltwater on top of the ice and vast areas of broken and slush ice (yellow). How would you define ice and its limits in this Bering Straits satellite image?
For Arctic information the Intergovernmental Panel on Climate Change (IPCC) 2007 used the ACIA report. It said,
“Over the course of millions of years, the Arctic has experienced climatic conditions that have ranged from one extreme to another.”
Within the last 10,000 years several periods were warmer than today. Longest was the Holocene Optimum between 8000 and 5000 years ago (ya); the Minoan Warm Period 3400 ya; the Roman Warm Period 2400 ya; the Medieval Warm Period 1000 ya and most recently the 1930s warm period.
Temperature isn’t the main cause of current changes in Arctic ice. Wind pattern changes at the Polar Front (diagram) explain changing ice conditions that make ice extent more difficult to determine.
Rossby Waves along the Polar Front have become more meridional, with increased north/south winds. Many reports indicate the pattern.
“Arctic sea ice waxes and wanes throughout the year, and conditions fluctuate each season and year—including conditions in the Bering Sea. Although sea ice extent in mid-January 2012 was not at a record high, it was the highest ice extent in several years, according to the National Snow and Ice Data Center.”
A plot of Bering Sea ice in March shows the pattern and the record level in 2012. As one media outlet reported,
“The amount of floating ice in the Arctic’s Bering Sea – which had long been expected to retreat disastrously by climate-Cassandra organisations such as Greenpeace – reached all-time record high levels last month, according to US researchers monitoring the area using satellites.”
Here is the Meridional wind pattern in action.
“As winds from the north pushed Arctic ice southward through the Bering Strait, the ice locked together and formed a structurally continuous band known as an ice arch, which acts a bit like a keystone arch in a building.”
A 2011 Journal of Geophysical Research article explains,
“The perennial (September) Arctic sea ice cover exhibits large interannual variability, with changes of over a million square kilometers from one year to the next. Here we explore the role of changes in Arctic cyclone activity, and related factors, in driving these pronounced year-to-year changes in perennial sea ice cover.”
The same meridional pattern is occurring at the South Pole.
“It’s no secret that the South Pole in Antarctica is one of the coldest places on Earth. But this year it got really cold faster than ever, breaking a 30-year-old record for the earliest the temperature has dropped below minus 100 degrees Fahrenheit (minus 73.3 degrees Celsius).”
“The record comes less than four months after an altogether different mark was set at the South Pole during the austral summer. On Christmas Day, the temperature officially hit 9.9F (minus 12.3C) at about 3:50 p.m., to become the warmest day ever at the South Pole.”
“Just last September, another significant record fell when the peak wind speed was clocked at 58 miles per hour (mph), or 50 knots — the strongest ever at the South Pole.”
Ice conditions are changing; they always have and always will. Much warmer conditions occurred often in the recent past, but current changes are more due to changing wind patterns than temperature. Claims otherwise are political climate science trying to defend failed political climate science. | <urn:uuid:93069442-d9df-4806-ae05-c7d09ec33c35> | 2.71875 | 1,231 | Personal Blog | Science & Tech. | 48.030447 |
Units of measurement
A unit of measurement is a definite magnitude of a physical quantity, defined and adopted by convention or by law, that is used as a standard for measurement of the same physical quantity. Any other value of the physical quantity can be expressed as a simple multiple of the unit of measurement.
For example, length is a physical quantity. The metre is a unit of length that represents a definite predetermined length. When we say 10 metres (or 10 m), we actually mean 10 times the definite predetermined length called "metre".
The definition, agreement, and practical use of units of measurement have played a crucial role in human endeavour from early ages up to this day. Different systems of units used to be very common. Now there is a global standard, the International System of Units (SI), the modern form of the metric system.
In trade, weights and measures is often a subject of governmental regulation, to ensure fairness and transparency. The Bureau international des poids et mesures (BIPM) is tasked with ensuring worldwide uniformity of measurements and their traceability to the International System of Units (SI). Metrology is the science for developing nationally and internationally accepted units of weights and measures.
In physics and metrology, units are standards for measurement of physical quantities that need clear definitions to be useful. Reproducibility of experimental results is central to the scientific method. A standard system of units facilitates this. Scientific systems of units are a refinement of the concept of weights and measures developed long ago for commercial purposes.
Science, medicine, and engineering often use larger and smaller units of measurement than those used in everyday life and indicate them more precisely. The judicious selection of the units of measurement can aid researchers in problem solving (see, for example, dimensional analysis).
A unit of measurement is a standardised quantity of a physical property, used as a factor to express occurring quantities of that property. Units of measurement were among the earliest tools invented by humans. Primitive societies needed rudimentary measures for many tasks: constructing dwellings of an appropriate size and shape, fashioning clothing, or bartering food or raw materials.
The earliest known uniform systems of weights and measures seem to have all been created sometime in the 4th and 3rd millennia BC among the ancient peoples of Mesopotamia, Egypt and the Indus Valley, and perhaps also Elam in Persia as well.
In "The Magna Carta" of 1215 (The Great Charter) with the seal of King John, put before him by the Barons of England, King John agreed in Clause 35 "There shall be one measure of wine throughout our whole realm, and one measure of ale and one measure of corn--namely, the London quart;--and one width of dyed and russet and hauberk cloths--namely, two ells below the selvage..."
Many systems were based on the use of parts of the body and the natural surroundings as measuring instruments. Our present knowledge of early weights and measures comes from many sources.
Systems of units
Historically many of the systems of measurement which had been in use were to some extent based on the dimensions of the human body according to the proportions described by Marcus Vitruvius Pollio. As a result, units of measure could vary not only from location to location, but from person to person.
A number of metric systems of units have evolved since the adoption of the original metric system in France in 1791. The current international standard metric system is the International System of Units. An important feature of modern systems is standardization. Each unit has a universally recognized size.
Both the Imperial units and US customary units derive from earlier English units. Imperial units were mostly used in the British Commonwealth and the former British Empire. US customary units are still the main system of measurement used in the United States despite Congress having legally authorized metric measure on 28 July 1866. Some steps towards US metrication have been made, particularly the redefinition of basic US and imperial units to derive exactly from SI units. Since the international yard and pound agreement of 1959 the US and imperial inch is now defined as exactly 0.0254 m, and the US and imperial avoirdupois pound is now defined as exactly 453.59237 g.
While the above systems of units are based on arbitrary unit values, formalised as standards, some unit values occur naturally in science. Systems of units based on these are called natural units. Similar to natural units, atomic units (au) are a convenient system of units of measurement used in atomic physics.
Legal control of weights and measures
To reduce the incidence of retail fraud, many national statutes have standard definitions of weights and measures that may be used (hence "statute measure"), and these are verified by legal officers.
Base and derived units
Different systems of units are based on different choices of a set of fundamental units. The most widely used system of units is the International System of Units, or SI. There are seven SI base units. All other SI units can be derived from these base units.
For most quantities a unit is absolutely necessary to communicate values of that physical quantity. For example, conveying to someone a particular length without using some sort of unit is impossible, because a length cannot be described without a reference used to make sense of the value given.
But not all quantities require a unit of their own. Using physical laws, units of quantities can be expressed as combinations of units of other quantities. Thus only a small set of units is required. These units are taken as the base units. Other units are derived units. Derived units are a matter of convenience, as they can be expressed in terms of basic units. Which units are considered base units is a matter of choice.
The base units of SI are actually not the smallest set possible. Smaller sets have been defined. For example, there are unit sets in which the electric and magnetic field have the same unit. This is based on physical laws that show that electric and magnetic field are actually different manifestations of the same phenomenon.
Calculations with units of measurements
Units as dimensions
Any value of a physical quantity is expressed as a comparison to a unit of that quantity. For example, the value of a physical quantity Z is expressed as the product of a unit [Z] and a numerical factor:
- For example, "2 candlesticks" Z = 2 [candlestick].
The multiplication sign is usually left out, just as it is left out between variables in scientific notation of formulas. The conventions used to express quantities is referred to as quantity calculus. In formulas the unit [Z] can be treated as if it were a specific magnitude of a kind of physical dimension: see dimensional analysis for more on this treatment.
Units can only be added or subtracted if they are the same type; however units can always be multiplied or divided, as George Gamow used to explain:
- "2 candlesticks" times "3 cabdrivers" = 6 [candlestick][cabdriver].
A distinction should be made between units and standards. A unit is fixed by its definition, and is independent of physical conditions such as temperature. By contrast, a standard is a physical realization of a unit, and realizes that unit only under certain physical conditions. For example, the metre is a unit, while a metal bar is a standard. One metre is the same length regardless of temperature, but a metal bar will be one metre long only at a certain temperature.
- Treat units algebraically. Only add like terms. When a unit is divided by itself, the division yields a unitless one. When two different units are multiplied, the result is a new unit, referred to by the combination of the units. For instance, in SI, the unit of speed is metres per second (m/s). See dimensional analysis. A unit can be multiplied by itself, creating a unit with an exponent (e.g. m2/s2). Put simply, units obey the laws of indices. (See Exponentiation.)
- Some units have special names, however these should be treated like their equivalents. For example, one newton (N) is equivalent to one kg·m/s2. Thus a quantity may have several unit designations, for example: the unit for surface tension can be referred to as either N/m (newtons per metre) or kg/s2 (kilograms per second squared). Whether these designations are equivalent is disputed amongst metrologists.
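The rule that units obey the laws of indices can be sketched with a toy representation: a dict mapping base-unit names to exponents, so that multiplying units adds exponents and dividing subtracts them. The representation is my own illustration (real code would use a units library).

```python
from collections import Counter

def mul(u, v):
    """Multiply two units: add exponents, drop zeroed-out bases."""
    out = Counter(u)
    out.update(v)
    return {base: exp for base, exp in out.items() if exp}

def div(u, v):
    """Divide two units: subtract exponents, drop zeroed-out bases."""
    out = Counter(u)
    out.subtract(v)
    return {base: exp for base, exp in out.items() if exp}

metre, second, kilogram = {"m": 1}, {"s": 1}, {"kg": 1}
speed = div(metre, second)        # m/s
accel = div(speed, second)        # m/s^2
newton = mul(kilogram, accel)     # kg*m/s^2, i.e. the newton
print(newton)                     # {'kg': 1, 'm': 1, 's': -2}
assert div(metre, metre) == {}    # a unit divided by itself is unitless
```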
Expressing a physical value in terms of another unit
Conversion of units involves comparison of different standard physical values, either of a single physical quantity or of a physical quantity and a combination of other physical quantities.
Starting from Z = n × [Z], just replace the original unit [Z] with its meaning in terms of the desired unit [Y]; e.g. if [Z] = c × [Y], then:
Z = n × [Z] = n × (c × [Y]) = (n × c) × [Y]
Now n and c are both numerical values, so just calculate their product.
Or, which is just mathematically the same thing, multiply Z by unity; the product is still Z:
Z = n × [Z] × 1 = n × [Z] × (c × [Y] / [Z]) = (n × c) × [Y]
For example, you have an expression for a physical value Z involving the unit feet per second (ft/s) and you want it in terms of the unit miles per hour (mph):
- Find facts relating the original unit to the desired unit:
- 1 mile = 5280 feet and 1 hour = 3600 seconds
- Next, use the above equations to construct a fraction that has a value of unity and that contains units such that, when it is multiplied with the original physical value, it will cancel the original units:
- Last, multiply the original expression of the physical value by the fraction, called a conversion factor, to obtain the same physical value expressed in terms of a different unit. Note: since valid conversion factors are dimensionless and have a numerical value of one, multiplying any physical quantity by such a conversion factor (which is 1) does not change that physical quantity.
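A numeric version of this recipe; the 88 ft/s starting value is an arbitrary example.

```python
feet_per_mile = 5280      # 1 mile = 5280 feet
seconds_per_hour = 3600   # 1 hour = 3600 seconds

v_fps = 88.0   # example value in ft/s

# Multiply by conversion factors equal to one:
# (1 mile / 5280 ft) and (3600 s / 1 hour).
v_mph = v_fps * seconds_per_hour / feet_per_mile
print(v_mph)   # 60.0 mph
```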
Or as an example using the metric system, you have a value of fuel economy in the unit litres per 100 kilometres and you want it in terms of the unit microlitres per metre:
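The same procedure in code: since 1 L = 10⁶ µL and 100 km = 10⁵ m, litres per 100 km convert to microlitres per metre with a factor of 10.

```python
def l_per_100km_to_ul_per_m(x):
    """Convert fuel economy from L/100 km to microlitres per metre."""
    microlitres_per_litre = 1e6   # 1 L = 1e6 uL
    metres_per_100km = 1e5        # 100 km = 1e5 m
    return x * microlitres_per_litre / metres_per_100km

print(l_per_100km_to_ul_per_m(5.0))   # 50.0 uL/m
```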
One example of the importance of agreed units is the failure of the NASA Mars Climate Orbiter, which was accidentally destroyed on a mission to Mars in September 1999 instead of entering orbit due to miscommunications about the value of forces: different computer programs used different units of measurement (newton versus pound force). Considerable amounts of effort, time, and money were wasted.
On April 15, 1999 Korean Air cargo flight 6316 from Shanghai to Seoul was lost due to the crew confusing tower instructions (in metres) and altimeter readings (in feet). Three crew and five people on the ground were killed. Thirty seven were injured.
In 1983, a Boeing 767 (which came to be known as the Gimli Glider) ran out of fuel in mid-flight because of two mistakes in figuring the fuel supply of Air Canada's first aircraft to use metric measurements. This accident is apparently the result of confusion both due to the simultaneous use of metric & Imperial measures as well as mass & volume measures.
- "measurement unit", in International Vocabulary of Metrology – Basic and General Concepts and Associated Terms (VIM) (3rd ed.), Joint Committee for Guides in Metrology, 2008, pp. 6–7.
- "US Metric Act of 1866". as amended by Public Law 110–69 dated August 9, 2007
- "NIST Handbook 44 Appendix B". National Institute of Standards and Technology. 2002.
- Emerson, W.H. (2008). "On quantity calculus and units of measurement". Metrologia 45 (2): 134–138. Bibcode:2008Metro..45..134E. doi:10.1088/0026-1394/45/2/002.
- "Unit Mixups". US Metric Association.
- "Mars Climate Orbiter Mishap Investigation Board Phase I Report". NASA. 1999-11-10.
- "Korean Air Flight 6316" (Press release). NTSB.
- "Korean Air incident". Aviation Safety Net.
- Witkin, Richard (July 30, 1983). "Jet's Fuel Ran Out After Metric Conversion Errors". New York Times. Retrieved 2007-08-21. "Air Canada said yesterday that its Boeing 767 jet ran out of fuel in mid-flight last week because of two mistakes in figuring the fuel supply of the airline's first aircraft to use metric measurements. After both engines lost their power, the pilots made what is now thought to be the first successful emergency dead stick landing of a commercial jetliner."
- A Dictionary of Units of Measurement - Center for Mathematics and Science Education, University of North Carolina
- NIST Handbook 44, Specifications, Tolerances, and Other Technical Requirements for Weighing and Measuring Devices
- Official SI website
- Quantity System Framework - Quantity System Library and Calculator for Units Conversions and Quantities predictions
- "Arithmetic Conventions for Conversion Between Roman [i.e. Ottoman and Egyptian Measurement"] is a manuscript from 1642, in Arabic, which is about units of measurement.
- Ireland - Metrology Act 1996
- US - Authorized tables
- Official text of the Units of Measurement Regulations 1995 as in force today (including any amendments) within the United Kingdom, from the UK Statute Law Database
Metric information and associations
- Official SI website
- UK Metric Association
- US Metric Association
- The Unified Code for Units of Measure (UCUM)
Imperial measure information | <urn:uuid:e628ae76-5626-4652-95c2-7da5c897cf69> | 3.9375 | 2,818 | Knowledge Article | Science & Tech. | 39.57509 |
console - console output
Helper methods for printing ANSI formatted output to a terminal and redirecting output to log file(s).
The console(3) module is a built-in module available to all programs. It is responsible for printing messages with optional replacement parameters, which are highlighted when the message is printed.
If either stdout or stderr is not a tty then formatting of the output message is not performed which means that when redirecting to a file the ANSI sequence is omitted and the string is passed with parameter replacement performed but no formatting.
The general syntax for printing output is:
console.info "message to print with %s information" "important";
The %s substring in the main message will be highlighted and replaced with the string important, generating:
message to print with important information
To print to standard output use the following methods.
Prints an informational message.
Prints a log message. This method does not perform any formatting or print any program (or message type prefix) but does provide parameter replacement. This method is essentially just a wrapper for echo(1) that allows the use of parameter replacement and the log file redirection functionality.
To print to standard error use the following methods.
Prints a warning message.
Prints an error message.
Prints an error message followed by a stack trace of the current method call stack.
The console(3) module supports redirecting stdout/stderr to log file(s).
Redirect stdout to a log file and store stdout in file descriptor #3.
Close redirection of stdout, restoring stdout and closing file descriptor #3.
Redirect stderr to a log file and store stderr in file descriptor #4.
Close redirection of stderr, restoring stderr and closing file descriptor #4.
The console(3) module provides methods for exiting the program (optionally with a formatted message).
Marks the program as completed successfully (exiting with a zero exit code) and optionally prints a formatted message.
Exits the program with a non-zero exit code and optional error message. Note that the signature for this method differs from the general syntax for console(3) methods as it expects the first parameter to be the exit code.
The console(3) module parses the program options (but does not modify them) looking for the following options:
Do not perform any ANSI color or formatting modifications to the output.
See the globals-api(3) documentation.
You may also print a stack trace (without any preceding error message) using the console.trace method. Note that this method does not follow the general syntax for console(3) method invocations and accepts no parameters.
The number of replacement parameters must match exactly the number of %s occurrences in the message, otherwise unexpected behaviour will occur.
console is written in bash and depends upon bash >= 4.
console is copyright (c) 2012 muji http://xpm.io | <urn:uuid:c880f9bb-37e9-4eec-9387-42eabe265b75> | 3.3125 | 631 | Documentation | Software Dev. | 41.909398 |
There is a special relationship among the measures of the sides of a 45°-45°-90° triangle.
A 45°-45°-90° triangle is a commonly encountered right triangle whose sides are in the proportion 1 : 1 : √2. The measures of the sides are x, x, and x√2.
In a 45°-45°-90° triangle, the length of the hypotenuse is √2 times the length of a leg.
To see why this is so, note that by the Converse of the Pythagorean Theorem, these values make the triangle a right triangle.
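The check behind that statement is a one-line computation. Writing the legs as x and the hypotenuse as x√2:

```latex
x^2 + x^2 = 2x^2 = \left(x\sqrt{2}\right)^2
```

The three lengths satisfy the Pythagorean relation, so by the Converse of the Pythagorean Theorem the triangle is a right triangle.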
You might have noticed that we’re slowing down the posts here at Pop Physics HQ. That’s because summer is in full, sweaty swing, the students are busy forgetting everything they’ve learned sitting at their school desks, and I want to make sure this site’s got some momentum when we pick up the pace again in September.
But the public demands more. With that in mind, here’s a good article about the basic ways physicists think about Time, written by author Paul Davies. I’ll be talking about some of these ideas in greater detail in later chapters.
“Thinking of past and future brings us to another problem that has foxed scientists and philosophers: why time should have a direction at all. In everyday life it’s pretty apparent that it does. If you look at a movie that’s being played backwards, you know it immediately because most things have a distinct time direction attached to them: an arrow of time. For example, eggs can easily turn into omelettes but not the other way around, and milk and coffee mix in your cup but never separate out again.”
Read the full article here!
Explanation: What is that green thing? A volunteer sky enthusiast surfing through online Galaxy Zoo images has discovered something really strange. The mystery object is unusually green, not of any clear galaxy type, and situated below the relatively normal-looking spiral galaxy IC 2497. Dutch schoolteacher Hanny van Arkel discovered the strange green "voorwerp" (Dutch for "object") last year. The Galaxy Zoo project encourages sky enthusiasts to browse through SDSS images and classify galaxy types. Now known popularly as Hanny's Voorwerp, the mysterious green blob has been shown by subsequent observations to lie at the same distance as the neighboring galaxy IC 2497. Research is ongoing, but one leading hypothesis holds that Hanny's Voorwerp is a small galaxy that acts like a large reflection nebula, showing the reflected light of a bright quasar event that happened in the center of IC 2497 about 100,000 years ago. Pictured above, Hanny's Voorwerp was imaged recently by the 4.2-meter William Herschel Telescope in the Canary Islands by Matt Jarvis, Kevin Schawinski, and William Keel.
Short-Term Heating Helps Corals Survive Greater Long-Term Warming
Oliver, T.A. and Palumbi, S.R. 2011. Do fluctuating temperature environments elevate coral thermal tolerance? Coral Reefs 30: 429-440.
Taking advantage of back-reef pools in American Samoa that differ in diurnal thermal variation, Oliver and Palumbi experimentally heat-stressed Acropora hyacinthus corals from a thermally moderate lagoon pool and from a more thermally variable pool that naturally experienced 2- to 3-hour high-temperature events during summer low tides. They then compared coral mortality and photosystem II photochemical efficiency of colony fragments collected from each of these lagoons and exposed to either ambient (28.0°C) or elevated (31.5°C) water temperatures.
The two researchers report that in the heated treatment, "moderate pool corals showed nearly 50% mortality whether they hosted heat-sensitive or heat-resistant symbionts," while "variable pool corals, all of which hosted heat-resistant symbionts, survived well, showing low mortalities statistically indistinguishable from controls held at ambient temperatures." Also in the heated treatment, they say that "moderate pool corals hosting heat-sensitive algae showed rapid rates of decline in algal photosystem II photochemical efficiency," while those "hosting heat-resistant algae showed intermediate levels of decline." And, as by now might have been expected, they state that "variable pool corals hosting heat-resistant algae showed the least decline."
Oliver and Palumbi say their results suggest that "previous exposure to an environmentally variable microhabitat adds substantially to coral-algal thermal tolerance, beyond that provided by heat-resistant symbionts alone." This finding is indicative of the latent ability of earth's corals to adapt to warmer temperatures than was previously believed possible, as they (or if they) gradually begin to experience recurring daily episodes of greater warmth in a gradually warming world.
Other Popular Articles
Unusual Pulsar Or Alien Signals?
The pulse timing of this object is considered unusual.
What kind of phenomenon is related to this object?
It is the first time this kind of phenomenon has been observed by astronomers.
The "Cloaked" Star Was Difficult To Find
An object obscured by dust, and buried in a two-star system enshrouded by dense gas, is not easy to find.
A "cloaked" star was discovered after it ate a little of its neighbor. The meal must have given the star a bit of indigestion, because it
"burped" with a blast of high-energy radiation, which gave it away.
Astronomical Mystery: Giant Alien Planet Orbiting Three Suns
Binary stars are well known and even trinary systems may be common, but most of them are crowded together and thus difficult to find and study. Additionally, such systems have long been considered inhospitable to planets...
Radio Emission From Ultracool Dwarf Detected By Arecibo Telescope
The Arecibo Telescope in Puerto Rico has discovered sporadic bursts of polarized radio emission from the T6.5 brown dwarf J1047+21.
Because Arecibo is a single, fixed-dish telescope, it has a restricted practical sensitivity to weak, quiescent emission from radio sources...
Monster Star Thousand Times Bigger Than Our Sun Could Soon Explode - with video
Despite its enormous size, the star was not identified as a yellow hypergiant until recently...
Star Changes Into An Incredible Diamond Planet - with video
How can a star change into a diamond planet? Astronomical discoveries show that what sounds like science fiction is actually reality...
Invader From Another Galaxy
This alien intruder from another galaxy is in many ways different from other exoplanets observed by astronomers.
Located about 2000 light-years from Earth in the southern constellation of Fornax (the Furnace), the Jupiter-like planet orbits a dying star of
extragalactic origin and risks being engulfed by it.
Intimate Connection Between Black Holes And New-Born Stars
Astronomers have known for some time that, around black holes and supermassive black holes, accretion and star formation appear intimately connected.
However, this does not mean that the powerful gravitational forces of the black holes disrupt the surrounding material in their vicinity.
Power To See Most Distant Objects In The Universe
3C294 is one of the most distant galaxy clusters recorded by Chandra, the most sophisticated X-ray observatory ever built.
The cluster 3C294 is even 40 percent farther away than the next most distant X-ray galaxy cluster.
Chandra focuses on X-rays from high-energy regions of the Universe and sees the invisible.
It is so sensitive that it can capture images of particles as they disappear into a black hole deep in outer space.
"Pillars Of Creation" Are Gone
Every time you look at the beautiful and famous image of the Pillars of Creation taken by Hubble back in 1995,
you are actually admiring something that no longer exists.
In fact, the Pillars of Creation were already long gone by the time the image was captured!
Building mod_perl from source requires a machine with basic development tools. In particular, you will need an ANSI-compliant C compiler (such as gcc) and the make utility. All standard Unix-like distributions include these tools. If a required tool is not already installed, you can install it with the package manager that is provided with the system (rpm, apt, yast, etc.).
A recent version of Perl (5.004 or higher) is also required. Perl is available as an installable package, although most Unix-like distributions will have Perl installed by default. To check that the tools are available and to learn about their version numbers, try:
panic% make -v
panic% gcc -v
panic% perl -v
If any of these responds with Command not found, the utility will need to be installed.
Once all the tools are in place, the installation can begin. Experienced Unix users will need no explanation of the commands that follow and can simply type them into a terminal window.
Get the source code distributions of Apache and mod_perl using your favorite web browser or a command-line client such as wget or lwp-download. These two distributions are available from http://www.apache.org/dist/httpd/ and http://perl.apache.org/dist/, respectively.
The two packages are named apache_1.3.xx.tar.gz and mod_perl-1.xx.tar.gz, where 1.3.xx and 1.xx should be replaced with the real version numbers of Apache and mod_perl, respectively. Although 2.0 development versions of Apache and mod_perl are available, this book covers the mod_perl 1.0 and Apache 1.3 generation, which were the stable versions when this book was written. See Chapter 24 and Chapter 25 for more information on the Apache 2.0 and mod_perl 2.0 generation.
Move the downloaded packages into a directory of your choice (for example, /home/stas/src/), proceed with the following steps, and mod_perl will be installed:
panic% cd /home/stas/src
panic% tar -zvxf apache_1.3.xx.tar.gz
panic% tar -zvxf mod_perl-1.xx.tar.gz
panic% cd mod_perl-1.xx
panic% perl Makefile.PL APACHE_SRC=../apache_1.3.xx/src \
       APACHE_PREFIX=/home/httpd DO_HTTPD=1 USE_APACI=1 EVERYTHING=1
panic% make && make test
panic% su
panic# make install
All that remains is to add a few configuration lines to the Apache configuration file (/usr/local/apache/conf/httpd.conf), start the server, and enjoy mod_perl.
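Those few configuration lines might look like the following minimal sketch (the /home/httpd prefix follows the installation above; the /perl/ alias and directory name are illustrative):

```apache
# Run scripts under /perl/ with mod_perl 1.0 via Apache::Registry.
Alias /perl/ /home/httpd/perl/
<Location /perl/>
    SetHandler perl-script
    PerlHandler Apache::Registry
    Options +ExecCGI
    PerlSendHeader On
</Location>
```

After adding these lines, place your scripts under /home/httpd/perl/ and restart the server.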
Water companies in the south and east of England are taking unprecedented emergency measures to cope with drought this summer. Their plans include piping the cleaner effluent from sewage treatment plants back upstream to refill their shrinking rivers.
With ground water at an unusually low level, the water companies' hopes of avoiding shortages rest on an exceptionally wet spring - and these hopes are fading fast. A wet summer will not help because most summer rain evaporates or is lost through transpiration by plants.
The southeast of England is suffering its third year of drought. Rainfall at the Meteorological Office's station at Gatwick Airport has been averaging 90 per cent of normal since the start of 1988. So far this year there has been no noticeable let up in the drought. Although January was wet (about 16 per cent more rain than normal) rainfall in February was about two-thirds normal ...
1. Let y = 3x^2 + 4x - 5
Find the y-intercept, x-intercepts, the coordinates of the vertex, and sketch the graph.
Solve 3x^2 + 4x - 5 > 0 for x.
2. Solve for x
3. Graph y = 1.1^x
4. Change to logarithmic form e^x = a
5. Change to exponential form t = log x
7. Express as a single log 3logb x - logb y + 4 logb z
8. Solve for x log4 (x + 3) - log4 (x-1) = 1
9. Suppose that the population of a city is 10,000 in 1970 and 50,000 in 1990. Assume further that the population is given by the formula
Population = P e^(rt),
where P is the population in the year 1970, t is the number of years since 1970, and r is a suitable constant.
10. Write out Pascal's triangle to the 7th row
11. Remove parentheses and simplify (x + y)^6
12. Compute 23C7 (the binomial coefficient C(23, 7))
13. Compute the square root of 40.
14. Compute e^2
15. Compute ln 2
Test Case Reduction
A general guide to test case reduction
The basic idea behind bug reduction is to take a page that demonstrates a problem and remove as much content as possible while still reproducing the original problem.
Why is this needed?
A reduced test case can help identify the central problem on the page by eliminating irrelevant information, i.e., portions of the HTML page’s structure that have nothing to do with the problem. With a reduced test case, the development team will spend less time identifying the problem and more time determining the solution. Also, since a site can change its content or design, the problem may no longer occur on the real-world site. By constructing a test case you can capture the initial problem.
The first steps
Really the first step in reducing a page is to identify that main problem of the page. For example:
- Does the page have text overlapping an image?
- Is there a form button that fails to work?
- Is there a portion of the page missing or misaligned?
After you have made this determination, you need to create a local copy of the page from the page source window. After saving this source, it’s a good idea to put a
<BASE> element in the
HEAD so that any images, external style sheets, or scripts that use a relative path will get loaded. After the
BASE element has been added, load the local copy into the browser and verify that the problem is still occurring. In this case, let’s assume the problem is still present.
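As a sketch, a saved local copy with the BASE element added might start like this (the URL is a placeholder for the original page's address):

```html
<!-- Local copy of the problem page. BASE makes relative paths resolve
     against the original site, so images, CSS, and scripts still load. -->
<html>
  <head>
    <base href="http://www.example.com/original/page/">
    <title>Reduced test case</title>
    <!-- original head content continues here -->
  </head>
  <body>
    <!-- original body content continues here -->
  </body>
</html>
```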
Work from top to bottom
In general, it’s best to start from the top of the file and work down through the
HEAD to the
BODY element. Take a look at the HTML file in a text editor and view what types of elements are present in the
<head>. Typically, the
HEAD will include the
TITLE element, which is required, and elements such as
META, LINK, STYLE, and SCRIPT.
The reduction process is to remove one element at a time, save, and reload the
test case. If you have removed the element and the page is still
displaying the problem, continue with the next element. If removing an element in the
HEAD causes the problem to not occur, you may have found one piece of the problem. Re-add this element back into the
HEAD, reload the page, confirm the problem is still occurring, and move on to the next element in the
HEAD. Continue until the entire
HEAD has been reduced.
Once the
HEAD element has been reduced, you need to start reducing the number of required elements in the
BODY. This will tend to be the most time consuming since hundreds (or thousands) of elements will be present. The general practice is to start removing elements by both their opening and closing tags. This is especially true for tables, which are frequently nested. You can speed up this process by selecting groups of elements and removing them, but ideally you need to save and reload the test case each time to verify the problem is still occurring.
Another way to help you identify unnecessary elements is to temporarily turn JavaScript off. If JavaScript is turned off and loading your test case still reproduces the problem, then any script elements that are present can be removed since they are not a factor in this issue. Let’s say that you have reduced the page down to a nested table with an ordered list and a
<link> element that needs to be present. It’s good practice to identify the CSS rules that are being used in the external file and add them directly to the test case. Create a
<style></style> element in the head and copy/paste the contents of the .css file into this style element. Remove the
<link> element and save the changes. Load the test case and verify the problem is still occurring. Now manually delete or comment out each CSS rule until you have just the required set of rules to reproduce.
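The inlining step might look like this sketch (the file name and rules are illustrative):

```html
<head>
  <!-- Before: the external stylesheet was loaded via a link element:
       <link rel="stylesheet" type="text/css" href="styles.css">
       After: its contents are pasted into a style element, so individual
       rules can be commented out one at a time while retesting. -->
  <style type="text/css">
    table.layout td { padding: 0; }
    /* ol.menu li { float: left; }   commented out while testing */
  </style>
</head>
```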
Adding to the bug
When you’ve finished your reduction, you should add it to the bug. It’s quite likely
that in the process of reducing, you have found the root cause of the problem, so
you are able to set the right component. Don’t forget to add the
HasReduction keyword to the bug
(and remove the
NeedsReduction keyword, if present). If you do not have the rights to change
the component or the keywords, read about how to get them in this document.
Ready to begin?
In addition to providing reductions for bugs that you’ve found, you can help
by reducing any of the bugs in Bugzilla tagged with the NeedsReduction keyword.
Cosmic timeline 10
In the photon epoch, which began 10 seconds after the Big Bang, photons are still interacting frequently with charged protons, electrons and (eventually) nuclei, and continue to do so for the next 300,000 years. Then, between 3 minutes and 20 minutes after the Big Bang, the temperature of the universe falls to the point where atomic nuclei can begin to form. Protons (hydrogen ions) and neutrons begin to combine into atomic nuclei in the process of nuclear fusion. Nucleosynthesis only lasts for about seventeen minutes, after which time the temperature and density of the universe have fallen to the point where nuclear fusion cannot continue. At this time, there is about three times more hydrogen than helium-4 (by mass) and only trace quantities of other nuclei.
Matter domination and dark matter
Matter domination began about 70,000 years after the Big Bang. At this time, the densities of non-relativistic matter (atomic nuclei) and relativistic radiation (photons) are equal. At this stage, cold dark matter dominates, paving the way for gravitational collapse making dense regions denser and rarefied regions more rarefied. However, present theories as to the nature of dark matter are inconclusive, but the thinking is that it needs to exist in order for the Big Bang theory to work.
Strong gravitational lensing as observed by the Hubble Space Telescope in Abell 1689 indicates the presence of dark matter
What is dark matter?
Most of the mass of the observable universe probably! In astronomy and cosmology, dark matter is a hypothetical form of matter that is undetectable, but whose presence can be worked out from looking at gravitational effects on visible matter. According to present observations of parts of the universe larger than galaxies, as well as Big Bang cosmology, dark matter and dark energy could account for the vast majority of the mass in the observable universe.
Dark matter plays a central role in state-of-the-art modeling of structure formation and galaxy evolution, and has measurable effects on the differences that exist in all directions of our universe (called anisotropies) observed in the cosmic microwave background. All these lines of evidence suggest that galaxies, clusters of galaxies, and the universe as a whole contain far more matter than that which interacts with electromagnetic radiation: the remainder is frequently called the “dark matter component,” even though there is a small amount of baryonic dark matter. The largest part of dark matter, which does not interact with electromagnetic radiation, is not only “dark” but also, by definition, utterly transparent.
The vast majority of the dark matter in the universe is believed to be nonbaryonic, which means that it contains no atoms and that it does not interact with ordinary matter via electromagnetic forces. The nonbaryonic dark matter includes neutrinos, and possibly hypothetical entities such as axions, or supersymmetric particles.
The first known observation of a neutrino, on November 13, 1970. A neutrino hit a proton in a hydrogen bubble chamber. The collision occurred at the point where three tracks emanate on the right of the photograph.
Neutrinos, meaning “small neutral one”, are elementary particles that often travel close to the speed of light, are electrically neutral, and are able to pass through ordinary matter almost undisturbed and are thus extremely difficult to detect. Neutrinos have a minuscule, but nonzero mass.
Experiments at neutrino detectors like SNO and Super-Kamiokande (pictured here) have established that neutrinos oscillate among various flavors, each with a different tiny mass.
The Kamioka Observatory, Institute for Cosmic Ray Research is a neutrino physics laboratory located underground in the Mozumi Mine of the Kamioka Mining and Smelting Co. near the Kamioka section of the city of Hida in Gifu Prefecture, Japan.
A set of groundbreaking neutrino experiments have taken place at the observatory over the past two decades. All of the experiments have been very large and have contributed substantially to the advancement of particle physics, in particular to the study of neutrino astronomy and neutrino oscillation.
Recombination occurred about 377,000 years after the Big Bang. Hydrogen and helium atoms begin to form and the density of the universe falls. Hydrogen and helium are initially ionized, i.e., no electrons are bound to the nuclei, which are therefore electrically charged (+1 and +2 respectively). As the universe cools down, the electrons get captured by the ions, making them neutral. This process is relatively fast (actually faster for helium than for hydrogen) and is known as recombination. At the end of recombination, most of the atoms in the universe are neutral, so the photons can now travel freely.
The universe has become transparent.
The photons emitted right after the recombination can now travel undisturbed and are those that we see in the cosmic microwave background (CMB) radiation. Therefore the CMB is a picture of the universe at the end of this epoch.
As the Universe evolved from its early, hot, dense beginnings (the "Big Bang") to its present, cold, dilute state, it passed through a brief epoch when the temperature (average thermal energy) and density of its nucleon component were such that nuclear reactions building complex nuclei could occur. Because the nucleon content of the Universe is small (in a sense to be described below) and because the Universe evolved through this epoch very rapidly, only the lightest nuclides (D, 3He, 4He, and 7Li) could be synthesized in astrophysically interesting abundances. The relic abundances of these nuclides provide probes of conditions and contents of the Universe at a very early epoch in its evolution (the first few minutes) otherwise hidden from our view. The standard model of Cosmology subsumes the standard model of particle physics (e.g., three families of very light, left-handed neutrinos along with their right-handed antineutrinos) and uses General Relativity (e.g., the Friedman equation) to track the time-evolution of the universal expansion rate and its matter and radiation contents. While nuclear reactions among the nucleons are always occurring in the early Universe, Big Bang Nucleosynthesis (BBN) begins in earnest when the Universe is a few minutes old and it ends less than a half hour later when nuclear reactions are quenched by low temperatures and densities. The BBN abundances depend on the conditions (temperature, nucleon density, expansion rate, neutrino content and neutrino-antineutrino asymmetry, etc.) at those times and are largely independent of the detailed processes which established them. As a consequence, BBN can test and constrain the parameters of the standard model (SBBN), as well as probe any non-standard physics/cosmology which changes those conditions.
The relic abundances of the light nuclides synthesized in BBN depend on the competition between the nucleon density-dependent nuclear reaction rates and the universal expansion rate. In addition, while all primordial abundances depend to some degree on the initial (when BBN begins) ratio of neutrons to protons, the 4He abundance is largely fixed by this ratio, which is determined by the competition between the weak interaction rates and the universal expansion rate, along with the magnitude of any νe–ν̄e asymmetry. To summarize, in its simplest version BBN depends on three unknown parameters: the baryon asymmetry; the lepton asymmetry; the universal expansion rate. These parameters are quantified next.
1.1. Baryon Asymmetry - Nucleon Abundance
In the very early universe baryon-antibaryon pairs (quark-antiquark pairs) were as abundant as radiation (e.g., photons). As the Universe expanded and cooled, the pairs annihilated, leaving behind any baryon excess established during the earlier evolution of the Universe. Subsequently, the number of baryons in a comoving volume of the Universe is preserved. After e± pairs annihilate, when the temperature (in energy units) drops below the electron mass, the number of Cosmic Background Radiation (CBR) photons in a comoving volume is also preserved. As a result, it is useful (and conventional) to measure the universal baryon asymmetry by comparing the number of (excess) baryons to the number of photons in a comoving volume (post-e± annihilation). This ratio defines the baryon abundance parameter ηB,
ηB ≡ nB/nγ.
As will be seen from BBN, and as is confirmed by a variety of independent (non-BBN) astrophysical and cosmological data, ηB is very small. As a result, it is convenient to introduce η10 ≡ 10¹⁰ ηB and to use it as one of the adjustable parameters for BBN. An equivalent measure of the baryon density is provided by the baryon density parameter, ΩB, the ratio (at present) of the baryon mass density to the critical density. In terms of the present value of the Hubble parameter (see Section 1.2 below), H0 ≡ 100 h km s⁻¹ Mpc⁻¹, these two measures are related by
η10 = 273.9 ΩB h².
Note that the subscript 0 refers to the present epoch (redshift z = 0).
From a variety of non-BBN cosmological observations whose accuracy is dominated by the very precise CBR temperature fluctuation data from WMAP, the baryon abundance parameter is limited to a narrow range centered near η10 ≈ 6. As a result, while the behavior of the BBN-predicted relic abundances will be described qualitatively as functions of ηB, for quantitative comparisons the results presented here will focus on the limited interval 4 ≤ η10 ≤ 8. As will be seen below (Section 2.2), over this range there are very simple, yet accurate, analytic fits to the BBN-predicted primordial abundances.
1.2. The Expansion Rate At BBN
For the standard model of cosmology, the Friedman equation relates the expansion rate, quantified by the Hubble parameter (H), to the matter-radiation content of the Universe:
H² = (8π/3) GN ρTOT,
where GN is Newton's gravitational constant. During the early evolution of the Universe the total density, ρTOT, is dominated by "radiation" (i.e., by the contributions from massless and/or extremely relativistic particles). During radiation dominated epochs (RD), the age of the Universe (t) and the Hubble parameter are simply related by (Ht)RD = 1/2.
Prior to BBN, at a temperature of a few MeV, the standard model of particle physics determines that the relativistic particle content consists of photons, e± pairs and three flavors of left-handed (i.e., one helicity state) neutrinos (along with their right-handed antineutrinos; Nν = 3). With all chemical potentials set to zero (very small lepton asymmetry) the energy density of these constituents in thermal equilibrium is
ρR = ργ + ρe + 3ρν = (43/8) ργ,
where ργ is the energy density in the CBR photons (which have redshifted to become the CBR photons observed today at a temperature of 2.7 K). In this case (SBBN: Nν = 3), the time-temperature relation derived from the Friedman equation is
t = (3/(32π GN ρR))^(1/2) ∝ T⁻².
In SBBN it is usually assumed that the neutrinos are fully decoupled prior to e± annihilation; if so, they don't share in the energy transferred from the annihilating e± pairs to the CBR photons. In this very good approximation, the photons are hotter than the neutrinos in the post-e± annihilation universe by a factor Tγ/Tν = (11/4)^(1/3), and the total energy density is
ρR = ργ + 3ρν ≈ 1.68 ργ,
corresponding to a modified time-temperature relation, again of the form t = (3/(32π GN ρR))^(1/2) ∝ Tγ⁻², but now with the smaller, post-annihilation energy density.
Quite generally, new physics beyond the standard models of cosmology or particle physics could lead to a non-standard, early Universe expansion rate (H′), whose ratio to the standard rate (H) may be parameterized by an expansion rate factor S,
S ≡ H′/H.
A non-standard expansion rate might originate from modifications to the 3+1 dimensional Friedman equation as in some higher dimensional models , or from a change in the strength of gravity . Different gravitational couplings for fermions and bosons would have similar effects. Alternatively, changing the particle population in early Universe will modify the energy density - temperature relation, also leading, through eq. 3, to S 1. While these different mechanisms for implementing a non-standard expansion rate are not necessarily equivalent, specific models generally lead to specific predictions for S.
Consider, for example, the case of a non-standard energy density, for which
S ≡ H′/H = (ρ′R/ρR)^(1/2),
where ρ′R = ρR + ρX and ρX identifies the non-standard component. With the restriction that the X are relativistic, this extra component, non-interacting at e± annihilation, behaves as would an additional neutrino flavor. It must be emphasized that ρX is not restricted to additional flavors of active or sterile neutrinos. In this class of models S is constant prior to e± annihilation and it is convenient (and conventional) to account for the extra contribution to the standard-model energy density by normalizing it to that of an "equivalent" neutrino flavor, so that
ρX ≡ ΔNν ρν = (7/8) ΔNν ργ (pre-annihilation).
For this case,
S = (1 + 7ΔNν/43)^(1/2).
In another class of non-standard models the early Universe is heated by the decay of a massive particle, produced earlier in the evolution. If the Universe is heated to a temperature which is too low to (re)populate a thermal spectrum of the standard neutrinos (TRH ≲ 7 MeV), the effective number of neutrino flavors contributing to the total energy density is < 3, resulting in ΔNν < 0 and S < 1.
Since the expansion rate is more fundamental than is ΔN_ν, BBN for models with non-standard expansion rates will be parameterized using S (but the corresponding value of ΔN_ν from eq. 11 will often be given for comparison). The simple, analytic fits to BBN presented below (Section 2.2) are quite accurate for 0.85 ≲ S ≲ 1.15, corresponding to -1.7 ≲ ΔN_ν ≲ 2.0.
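The mapping between S and ΔN_ν can be checked numerically. The sketch below assumes the standard normalization S^2 = 1 + 7ΔN_ν/43 (inferred from the quoted ranges; presumably the content of eq. 11, not copied from the source), and also evaluates the photon-to-neutrino temperature ratio from Section 1:

```python
import math

# Sketch only: the normalization S^2 = 1 + 7*dN/43 is inferred from the
# quoted validity ranges, not copied from the source text.
def S_of_dN(dN):
    """Expansion rate factor S for an equivalent-neutrino parameter dN."""
    return math.sqrt(1.0 + 7.0 * dN / 43.0)

# Photon/neutrino temperature ratio after e+- annihilation.
T_ratio = (11.0 / 4.0) ** (1.0 / 3.0)

print(round(S_of_dN(2.0), 3))   # 1.151 -> matches the quoted S <~ 1.15
print(round(S_of_dN(-1.7), 3))  # 0.85  -> matches the quoted S >~ 0.85
print(round(T_ratio, 3))        # 1.401
```

Both endpoints of the quoted ΔN_ν range reproduce the quoted S range, which is why this form of the relation is assumed here.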
1.3. Neutrino Asymmetry
The baryon asymmetry of the Universe, quantified by η_B, is very small. If, as expected in the currently most popular particle physics models, the universal lepton and baryon numbers are comparable, then any asymmetry between neutrinos and antineutrinos ("neutrino degeneracy") will be far too small to have a noticeable effect on BBN. However, it is possible that the baryon and lepton asymmetries are disconnected and that the lepton (neutrino) asymmetry could be large enough to perturb the SBBN predictions. In analogy with η_B, which quantifies the baryon asymmetry, the lepton (neutrino) asymmetry, L ≡ L_e + L_µ + L_τ, may be quantified by the neutrino chemical potentials µ_ν (ν = ν_e, ν_µ, ν_τ) or by the degeneracy parameters, the ratios of the neutral lepton chemical potentials to the temperature (in energy units), ξ_ν ≡ µ_ν/kT_ν, where
Prior to e± annihilation, T_ν = T_γ, while post-e± annihilation (T_ν/T_γ)^3 = 4/11. Although in principle the asymmetry among the different neutrino flavors may be different, mixing among the three active neutrinos (ν_e, ν_µ, ν_τ) ensures that at BBN, L_e ≈ L_µ ≈ L_τ (ξ_e ≈ ξ_µ ≈ ξ_τ ≡ ξ). If L is measured post-e± annihilation, as is η_B, then for ξ << 1, L ≈ 3L_e and, for ξ_e << 1, L ≈ 0.75 ξ_e.
Although any neutrino degeneracy (ξ < 0 as well as ξ > 0) increases the energy density in the relativistic neutrinos, resulting in an effective ΔN_ν > 0 (see eq. 10), the range of |ξ| of interest to BBN is limited to sufficiently small values that the increase in S due to a non-zero ξ is negligible. However, a small asymmetry between electron type neutrinos and antineutrinos (ξ_e ~ 10^-2; L ~ 0.007), while large compared to the baryon asymmetry, can have a significant impact on BBN since the ν_e and anti-ν_e affect the interconversion of neutrons to protons. A non-zero ξ_e results in different (compared to SBBN) numbers of ν_e and anti-ν_e, altering the n/p ratio at BBN, thereby changing the yields (compared to SBBN) of the light nuclides.
Of the light, relic nuclei, the neutron-limited 4He abundance is most sensitive to a non-zero ξ_e; 4He is a good "leptometer". In concert with the abundances of D, 3He, and 7Li, which are good baryometers, the 4He abundance provides a test of the consistency of the standard model along with constraints on non-standard models. The analytic fits presented below (Section 2.2) are reasonably accurate for ξ_e in the range -0.1 ≲ ξ_e ≲ 0.1, corresponding to a total lepton number limited to |L| ≲ 0.07. While this may seem small, recall that a similar measure of the baryon asymmetry is orders of magnitude smaller: η_B ≈ 6 × 10^-10.
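A minimal sketch of the leptometer arithmetic, using the small-ξ_e approximation L ≈ 0.75 ξ_e (post e± annihilation) and comparing against η_B ≈ 6 × 10^-10; the numbers show how much larger the allowed lepton asymmetry is than the baryon asymmetry:

```python
# Sketch of the "leptometer" arithmetic, assuming the small-xi_e
# approximation L ≈ 0.75*xi_e (measured post e+- annihilation).
def lepton_number(xi_e):
    return 0.75 * xi_e

eta_B = 6e-10  # baryon asymmetry, for comparison

xi_e = 0.1                  # edge of the range where the analytic fits apply
L = lepton_number(xi_e)     # 0.075, i.e. |L| of order 0.07
print(L)
print(L / eta_B)            # the allowed lepton asymmetry is ~10^8 larger
```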
1 A lepton asymmetry much larger than the baryon asymmetry (which is very small; see Section 1.1 below) would have to reside in the neutrinos since charge neutrality ensures that the electron-positron asymmetry is comparable to the baryon asymmetry.
How/why can the cosmic background radiation measurements tell us anything about the curvature of the universe?
So I've read the Wikipedia articles on WMAP and CMB in an attempt to try to understand how scientists are able to deduce the curvature of the universe from the measurements of the CMB. The Wiki ...
I know that the Cosmic Microwave Background Radiation (CMBR) is the leftover radiation from the "surface of last scattering". However, at every instant the surface is changing (at the rate of flow ...
SOLUTION: Sketch the graph of the function y = 3 - 3x². Label all intercepts.
Quadratic Equations and Parabolas
y = 3 - 3x^2
The vertex is at x = -b/(2a); here a = -3 and b = 0, so x = 0 and y = f(0) = 3. The vertex is: (0,3)
Now, pick other points to plot; the graph is a downward-opening parabola through the intercepts (-1, 0), (1, 0), and (0, 3).
To find the x-intercepts, SOLVE the quadratic equation 3 - 3x² = 0 (in our case a = -3, b = 0, c = 3), which has the following solutions:
x = (-b ± √(b² - 4ac)) / (2a)
For these solutions to exist, the discriminant b² - 4ac should not be a negative number.
First, we need to compute the discriminant: d = b² - 4ac = 0² - 4(-3)(3) = 36.
Discriminant d=36 is greater than zero. That means that there are two solutions: x = (0 ± 6)/(-6), i.e. x = -1 and x = 1.
The expression can also be factored: 3 - 3x² = 3(1 - x)(1 + x), which gives the same roots.
Again, the answer is: -1, 1.
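The worked solution above can be checked with a few lines of Python, using the coefficients a = -3, b = 0, c = 3:

```python
import math

# Numerical check of the worked solution for y = 3 - 3x^2,
# written as y = a*x^2 + b*x + c with a = -3, b = 0, c = 3.
a, b, c = -3.0, 0.0, 3.0

# Vertex: x = -b/(2a), y = f(x).
xv = -b / (2 * a)
yv = a * xv**2 + b * xv + c

# Discriminant and roots from the quadratic formula.
d = b**2 - 4 * a * c
roots = sorted([(-b - math.sqrt(d)) / (2 * a),
                (-b + math.sqrt(d)) / (2 * a)])

print((xv, yv))  # (0.0, 3.0) -> the vertex, also the y-intercept
print(d)         # 36.0
print(roots)     # [-1.0, 1.0] -> the x-intercepts
```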
Science Fair Project Encyclopedia
Proteomics is the large-scale study of proteins, particularly their structures and functions. This term was coined to make an analogy with genomics, and while it is often viewed as the "next step", proteomics is much more complicated than genomics. Most importantly, while the genome is a rather constant entity, the proteome differs from cell to cell and is constantly changing through its biochemical interactions with the genome and the environment. One organism will have radically different protein expression in different parts of its body, in different stages of its life cycle and in different environmental conditions.
The entire set of proteins expressed by an organism throughout its life cycle (or, on a smaller scale, the set of proteins found in a particular cell type under a particular type of stimulation) is referred to as the proteome of the organism or cell type, respectively.
With completion of a rough draft of the human genome, many researchers are now looking at how genes and proteins interact to form other proteins. A surprising finding of the Human Genome Project is that there are far fewer protein-coding genes in the human genome than there are proteins in the human proteome (~22,000 genes vs. ~200,000 proteins). The large increase in protein diversity is thought to be due to alternative splicing and post-translational modification of proteins. This discrepancy implies that protein diversity cannot be fully characterized by gene expression analysis alone, making proteomics a useful tool for characterizing cells and tissues of interest.
To catalog all human proteins and ascertain their functions and interactions presents a daunting challenge for scientists. An international collaboration to achieve these goals is being co-ordinated by the Human Proteome Organisation (HUPO).
Branches of proteomics
- Protein separation. All proteomic technologies rely on the ability to separate a complex mixture so that individual proteins are more easily processed with other techniques.
- Protein identification. Well-known methods include low-throughput sequencing through Edman degradation. Higher-throughput proteomic techniques are based on mass spectrometry, commonly peptide mass fingerprinting on simpler instruments, or de novo sequencing on instruments capable of more than one round of mass spectrometry. Antibody-based assays can also be used, but are unique to one protein.
- Protein quantification. Gel-based methods are used, including differential staining of gels with fluorescent dyes (difference gel electrophoresis). Gel-free methods include various tagging or chemical modification methods, such as isotope-coded affinity tags (ICATs) or combined fractional diagonal chromatography (COFRADIC).
- Protein sequence analysis. This is more of a bioinformatic branch, dedicated to searching databases for possible protein or peptide matches, but also functional assignment of domains, prediction of function from sequence, and evolutionary relationships of proteins.
- Structural proteomics. This concerns the high-throughput determination of protein structures in three-dimensional space. Common methods are X-ray crystallography and NMR spectroscopy.
- Interaction proteomics. This concerns the investigation of protein interactions on the atomic, molecular and cellular levels.
- Protein modification. Almost all proteins are modified from their pure translated amino-acid sequence, so-called post-translational modification. Specialized methods have been developed to study phosphorylation (phosphoproteomics) and glycosylation (glycoproteomics).
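As a rough illustration of how peptide mass fingerprinting (mentioned above under protein identification) begins, the sketch below performs an in-silico tryptic digest and computes monoisotopic peptide masses. The cleavage rule (after K or R, except before P) and the residue masses are standard textbook values, not taken from this article:

```python
# Sketch: first step of peptide mass fingerprinting — an in-silico
# tryptic digest followed by monoisotopic mass calculation.
MONO = {  # monoisotopic residue masses (Da), standard textbook values
    'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
    'V': 99.06841, 'T': 101.04768, 'C': 103.00919, 'L': 113.08406,
    'I': 113.08406, 'N': 114.04293, 'D': 115.02694, 'Q': 128.05858,
    'K': 128.09496, 'E': 129.04259, 'M': 131.04049, 'H': 137.05891,
    'F': 147.06841, 'R': 156.10111, 'Y': 163.06333, 'W': 186.07931,
}
WATER = 18.01056  # one water per peptide (H at N-terminus, OH at C-terminus)

def tryptic_digest(seq):
    """Cleave after K or R, but not when the next residue is P."""
    peptides, start = [], 0
    for i, aa in enumerate(seq):
        if aa in 'KR' and (i + 1 == len(seq) or seq[i + 1] != 'P'):
            peptides.append(seq[start:i + 1])
            start = i + 1
    if start < len(seq):
        peptides.append(seq[start:])
    return peptides

def peptide_mass(pep):
    """Monoisotopic mass of a peptide in Da."""
    return WATER + sum(MONO[aa] for aa in pep)

for pep in tryptic_digest("AKPRC"):
    print(pep, round(peptide_mass(pep), 3))
```

In real peptide mass fingerprinting the computed masses would then be matched against a spectrum from the mass spectrometer; missed cleavages and modifications are ignored in this sketch.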
Key technologies used in proteomics
- One- and two-dimensional gel electrophoresis are used to identify the relative mass of a protein and its isoelectric point.
- X-ray crystallography and nuclear magnetic resonance are used to characterize the three-dimensional structure of peptides and proteins.
- Tandem mass spectrometry combined with reverse phase chromatography or 2-D electrophoresis is used to identify and quantify all the levels of proteins found in cells.
- Affinity chromatography, yeast two hybrid techniques, fluorescence resonance energy transfer (FRET), and Surface Plasmon Resonance (SPR) are used to identify protein-protein and protein-DNA binding reactions.
- MIT - Reduction in the number of human genes from previous estimates.
- Proteomic World - Resources for proteomics research.
- Yeast GFP Localization Database - Database of microscope images and quantitation for most of the yeast proteome.
How is passive current propagated?
G K GRAY
gord at homostudy.win-uk.net
Thu Oct 31 04:18:44 EST 1996
In article <leipzjn9-301096000943 at rts0107.ppp.wfu.edu>, Jeremy Leipzig (leipzjn9 at wfu.edu) writes:
>Does anyone know exactly how passive current spreads down a dendrite or
>myelinated axon. One of my professors says the depolarization moves in
>successive collisions of repelling cations, in a manner not unlike the
>propagation of sound waves. Another one says that the electric field
>created by incoming cations is enough to depolarize adjoining regions,
>implying that passive current spreads close to the speed of light. I have
>also heard in intro courses that simple diffusion of the cations is
>responsible. Which, if any, is the correct explanation?
This is a question that should be addressed in depth, yet seems to
be a blind spot in all the standard Intro texts that adhere to the
view of myelin as an insulator with very low capacitance, e.g.
"From Neuron to Brain". The observable facts are:
a) that the neural impulse propagates faster along myelinated
fibres than in bare fibres.
b) that in both types of fibre the impulse propagation velocity
is many orders of magnitude less than the velocity of light, c, as
in Einstein's well-remembered but little understood equation
E = mc².
A point to be taken into account is that there is a short
delay between the pulse that triggers depolarisation and the
actual depolarisation per se. Whatever the mechanism may be, it is
this delay which limits the impulse velocity in both types of fibre.
A charged particle, e.g. an ion of Ca, K or Na in relative
motion to its surroundings inevitably generates a magnetic field
that propagates at velocity c. (J. Clerk Maxwell) A burst of ions
at a Node of Ranvier in a myelinated fibre will send a magnetic
wave to the next node in line which may be strong enough to
generate a trigger pulse at that position. But the delay between
trigger and depolarisation limits the signal velocity.
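For reference, the standard cable-theory account of passive spread (the
textbook picture the posters above are debating) can be sketched
numerically. The resistance values below are purely illustrative, not
measurements:

```python
import math

# Cable-theory sketch: a steady-state voltage deflection decays
# exponentially with distance along a passive fibre,
#   V(x) = V0 * exp(-x / lam),
# with length constant lam = sqrt(rm / ri). The values of rm and ri
# below are made up for illustration.
def length_constant(rm, ri):
    """lam in cm, for membrane resistance rm (ohm*cm) and axial
    resistance ri (ohm/cm) per unit length of fibre."""
    return math.sqrt(rm / ri)

def passive_decay(V0, x, lam):
    """Steady-state voltage at distance x (cm) from the injection site."""
    return V0 * math.exp(-x / lam)

lam = length_constant(rm=1.0e5, ri=1.0e6)       # ~0.316 cm
print(round(lam, 3))
print(round(passive_decay(10.0, lam, lam), 3))  # 10 mV falls to 1/e
```

This decay describes amplitude, not signal velocity; the velocity-limiting
delay discussed in the post is a separate question.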