Peppered moth (Biston betularia)

Peppered moth description

In its typical form, the Peppered Moth has a pepper-and-salt camouflage pattern. In some areas it also has a sooty black or 'melanic' form known as carbonaria (2) (4). The long, stick-like caterpillar may be various shades of brown or green, with small warts and projections that resemble bark. The head is deeply notched (3).

Peppered moth biology

This species is single-brooded, with a protracted emergence. Adults are on the wing from May into August. Females lay their eggs in large batches, but the newly hatched caterpillars soon disperse. They spin silk threads and float off downwind until they land again by chance. Fortunately they can eat a very wide range of deciduous trees and shrubs. Caterpillars feed only at night, and are full-grown in September. The pupal stage overwinters in the soil and adults emerge the following spring (6). The Peppered Moth has been widely used as a textbook example of evolution by natural selection. During the industrial revolution, sooty deposits darkened much of its habitat. The melanic form of the moth was first recorded in Manchester in 1848. Within 50 years it had almost replaced the typical form both there and in other industrial areas (5). Classic experiments carried out by Kettlewell during the 1950s suggested that bird predation was the crucial factor. The melanic form was better camouflaged than the paler typical form when resting on sooty tree trunks and branches, and hence survived better. Conversely, the typical form was at an advantage in unpolluted areas where the tree bark was covered in lichens. In recent years, various doubts have been cast on this simplistic explanation and, less fairly, on the quality of Kettlewell's work (5) (7).
While his pioneering experiments were undoubtedly flawed when judged by modern standards, his basic premise is still accepted by most evolutionary biologists, although the full picture may well have been more complicated. Following the Clean Air Acts, introduced from 1964 onwards, smoke pollution and soot deposition have been greatly reduced. The melanic form of the Peppered Moth has now lost its advantage and undergone a dramatic decline in frequency. It is now scarce in areas where it previously dominated and may soon disappear completely. Whatever the finer details of its rise and fall, the carbonaria story is a fascinating one.

Peppered moth range

This species is widespread throughout most of the British Isles and is often fairly numerous (2). The melanic form is most frequent in the industrial areas of central Scotland, northern England, the Midlands and London, but is currently declining (5).

Peppered moth habitat

Found in woodland, hedgerows, parks and gardens, even in urban areas (6).

Peppered moth status

Common and widespread (2).

Peppered moth threats

Not currently threatened.

Peppered moth conservation

No conservation action has been targeted at this species.

Find out more

Enjoying Moths by Roy Leverton (Poyser). Hooper, J. (2002) Of Moths and Men: Intrigue, Tragedy and the Peppered Moth. Fourth Estate, London.

Information supplied and authenticated by Roy Leverton with the support of the British Ecological Society.

Glossary:
- Pupal stage: stage in an insect's development when huge changes occur that reorganise the larval form into the adult form. In butterflies the pupa is also called a chrysalis.
- Single-brooded (also known as 'univoltine'): insect life cycle that takes 12 months to complete and involves a single generation. The egg, larva, pupa or adult overwinters as a dormant stage.
References:
- National Biodiversity Network Species Dictionary (March 2003): http://www.nhm.ac.uk/nbn/
- Skinner, B. (1984) Colour Identification Guide to Moths of the British Isles. Viking, Middlesex.
- Carter, D.J. & Hargreaves, B. (1986) A Field Guide to Caterpillars of Butterflies and Moths in Britain and Europe. William Collins Sons and Co. Ltd., London.
- Leverton, R. (2001) Enjoying Moths. T & AD Poyser, Ltd., London.
- Majerus, M.E.N. (1998) Melanism: Evolution in Action. Oxford University Press, Oxford.
- Leverton, R. (2004) Pers. comm.
- Hooper, J. (2002) Of Moths and Men: Intrigue, Tragedy and the Peppered Moth. Fourth Estate, London.
Sometimes the shape is more complicated. In this case you need to calculate 'missing' lengths and be particularly careful to include all the sides. Perimeter = 2 + 2 + 3 + 3 + 5 + 5 = 20 cm. A plan of a play area is shown below: a) Calculate the lengths x and y. b) Calculate the perimeter of the play area. a) The length of the play area is 20 m, so x = 20 - 8 = 12 m. The width of the play area is 15 m, so y = 15 - 5 = 10 m. b) Perimeter = 8 + 5 + 12 + 10 + 20 + 15 = 70 m.
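The worked example above can be sketched in a few lines of Python (the variable names are just labels for the sides in the diagram):

```python
# The L-shaped play area from the example: overall 20 m by 15 m,
# with given partial sides of 8 m and 5 m.
length, width = 20, 15
given_a, given_b = 8, 5

x = length - given_a   # missing horizontal side: 20 - 8 = 12 m
y = width - given_b    # missing vertical side: 15 - 5 = 10 m

# The perimeter includes every side, including the 'missing' ones.
perimeter = given_a + given_b + x + y + length + width
print(x, y, perimeter)  # 12 10 70
```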
Tuples are the most basic infrastructure that the framework builds with. This sub-library provides a mechanism to bundle objects of arbitrary types in a single structure. Tuples hold heterogeneous types up to a predefined maximum. Only the most basic functionality needed is provided. This is a straightforward and extremely lean-and-mean library. Unlike other recursive, list-like tuple implementations, this tuple library uses simple structs similar to std::pair, with specializations for 0 to N tuple elements, where N is a predefined constant.

There are only 4 tuple operations to learn:

1) Construction

Here are examples of how to construct tuples:

    typedef tuple<int, char> t1_t;
    typedef tuple<int, std::string, double> t2_t;

    // this tuple has int and char members
    t1_t t1(3, 'c');

    // this tuple has int, std::string and double members
    t2_t t2(3, "hello", 3.14);

2) Member access

A member in a tuple can be accessed using the tuple's operator[] by specifying the Nth tuple_index. Here are some examples:

    tuple_index<0> ix0; // 0th index == 1st item
    tuple_index<1> ix1; // 1st index == 2nd item
    tuple_index<2> ix2; // 2nd index == 3rd item

    // Note zero-based indexing: 0 = 1st item, 1 = 2nd item
    t1[ix0] = 33;  // sets the int member of the tuple t1
    t2[ix2] = 6e6; // sets the double member of the tuple t2
    t1[ix1] = 'a'; // sets the char member of the tuple t1

Access to out-of-bound indexes returns a nil_t value.

3) Member type inquiry

The type of an individual member can be queried. For example, one can query the type of the second member (again, note zero-based indexing: 0 = 1st item, 1 = 2nd item) of the tuple. Access to out-of-bound indexes returns a nil_t type.

4) Tuple length

The number of elements in a tuple can be queried. Example:

    int n = t1.length;

gets the number of elements in tuple t1. length is a static constant, so TupleT::length also works. Example:

    int n = t1_t::length;

Copyright © 2001-2002 Joel de Guzman. Use, modification and distribution is subject to the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
Written for the KidsKnowIt Network by: Gemma Lavender, MPhys, FRAS

Ever wondered where stars are made? Well, now you are about to find out! Just where these hot balls of gas start their lives is in what astronomers call a nebula (plural: nebulae), and nebulae are basically the nurseries of the Universe. Do you remember when your parents first took you to nursery? Do you remember what it was like? Well, whatever you remember from the nursery that you went to, stellar nurseries are quite different! For one, a nebula is a gigantic cloud of dust and gas, mainly of hydrogen and helium, and it can be light years across - that’s trillions of miles - imagine how many nurseries you can fit into one of these ginormous star factories! Secondly, nebulae look quite fuzzy in appearance - pretty much like fluffy clouds or cotton wool in the sky. Imagine having to go to nursery in a massive splattering of cotton wool! Nebulae come not just in a variety of sizes; they also come in a range of shapes, with some of them looking like anything from horses (the Horsehead Nebula) to crabs (the Crab Nebula). The massive question, though, is how do they form, or have they always been there? What do you think?

[Image credits: JPL/NASA, NOAO, ESA; NASA/ESA/ASU/J. Hester & A. Loll; the Hubble Heritage Team (STScI/AURA)]

When it comes to making stars, astronomers believe that nebulae are made from the huge collapse of gas in what they refer to as the Interstellar Medium (the gas, dust and cosmic rays that can be found between planets and stars in galaxies). As the material falls in on itself under its own weight, large stars are made in the centre. When this happens, ultraviolet radiation shoots out like a laser beam and the nebula is lit up - just like a Christmas tree! Astronomers have a name for these types of nebulae - can you guess what it is?
Here’s a clue - it’s a word for “throwing out light.” If you are not too sure, then have a peek, but if you think you can have a guess, then write down your answer - remember, you can have as many goes as you like! The answer is emission nebulae. Did you get it right? If you did, then give yourself a pat on the back! There are many famous emission nebulae; one of them, probably one that you have heard of, is easily among the best known: astronomers call it the Orion Nebula because you can find it in the constellation of Orion. This type of nebula is very, very hot because of the newborn stars that zap their surroundings with sizzling rays of high-energy particles - much like how the Sun throws out hot beams on our planet, but so much hotter! Emission nebulae are usually found to glow red or pink in color - this is because they are filled with lots and lots of hydrogen gas. If pink or red is not your favorite color, though, then you might prefer the next type of nebula that we are going to look at.

[Image credit: NASA/STScI/Rice Univ./C.O'Dell et al.]

Scientists believe that new stars form inside of nebulae. Sometimes the dust and gas in these clouds begins to contract, or squash together. When things such as clouds contract they get hotter. The denser the cloud gets, the hotter it gets. Eventually it gets dense enough and hot enough to ignite its hydrogen fuel, beginning its new life as a star. Sometimes, stars do not have enough pent-up energy to zap their surroundings with high-energy particles. So they happily sit in clouds of dust, and these clouds reflect their light. You might be able to see little pieces of dust floating around in the air right now. Do you know why you can see them? That’s right! Because there’s light (either from a lamp or the Sun) illuminating them. This light is reflected, and that’s pretty much what happens in a reflection nebula.
If you asked an astronomer to give you an example of one of these nebulae, then they are more than likely going to answer with the Witch Head reflection nebula, which is also found in the constellation of Orion. Reflection nebulae appear blue in color due to what is known as scattering. You can learn more about scattering in the “Did You Know” box below. Where there’s a reflection nebula, you can usually guarantee that an emission nebula is not too far away. Astronomers call them diffuse nebulae when they are found together.

Does it look like the side view of a witch's head to you? [Image credit: NASA]

That’s not the end of our tour of nebulae - there’s more! Last but not least there are planetary nebulae - but do not be fooled - these shells of gas have absolutely nothing to do with planets! This type of nebula earns its name because some astronomers of the 18th century believed that they looked like giant worlds through the eyepieces of small telescopes. Here’s a tricky question: how do you think these nebulae are made? Here’s a clue - it’s not through the collapse of the Interstellar Medium, and it’s something to do with the fuel of a star. Have you had a guess? Read on to see if you’re right! Planetary nebulae are made when a star runs out of fuel to burn. What happens next is amazing. What do you expect to happen when a star runs out of fuel? Have a quick think and write down some answers. It’s not quite the same as when your car runs out of gas, where it stops moving - what happens to a star is quite a bit different; it blows off its outer layers of gas in the shape of a ring or bubble. When stars do this, astronomers say that a star is dying. But it’s not a sad ending for the star, it’s a beautiful, colorful one! What we mean by a dying star, at least when it comes to stars just like our Sun (called Sun-like stars), is that it’s changing into a red giant star. A red giant is a huge star that can swell to a size that swallows up everything in its path.
After spending millions of years as a heavyweight giant, it shrinks again, pushing off the outer layers that we mentioned earlier. Planetary nebulae are usually visible for around 50,000 years before starting to mix with the space that surrounds them - so there’s plenty of time to get out your telescope to have a look. Maybe you can decide for yourself if it looks like a planet! Planetary nebulae are an example of an emission nebula. The nebulae that we have been learning about are not the only places where star birth can be found - there are also Bok globules, which are really thick, dark clouds of cosmic dust and gas. Because they are so dense, they block out light behind them, so astronomers can find them quite easily! You can see the blue reflection nebula in the background. In front of the reflection nebula is a dark Bok globule blocking the view.

Reflection nebulae appear blue pretty much for the same reason the sky is blue, and this is due to scattering. But how does scattering work? The simple answer is that the light that is thrown out from a star, or our Sun, is reflected. Light that comes from the Sun and most newborn stars is called white light, and it is made of many different colors - very similar to a rainbow. When this light travels past particles of dust, the blue light (or, if you want to be scientific, the blue wavelengths) is scattered, bouncing off every dust particle that it encounters before reaching our eyes - very much like how balls on a pool table are thrown in all directions as the white ball hits them and they bounce off each other when they come into contact. The remaining light gets to travel through without being touched, and that’s why we don’t really get to see other colors in the sky above us or in reflection nebulae.
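The "blue scatters more" idea can be checked with one line of arithmetic. This sketch assumes the Rayleigh 1/wavelength⁴ scattering law and typical wavelength values for blue and red light, neither of which the article states:

```python
# Rayleigh scattering strength goes roughly as 1/wavelength^4
# (an assumed law; the wavelengths below are typical values).
blue_nm = 450   # blue light wavelength in nanometres
red_nm = 650    # red light wavelength in nanometres

ratio = (red_nm / blue_nm) ** 4
print(ratio)    # blue is scattered roughly 4-5x more strongly than red
```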
by Jeanette Cain

Conquering gravity - floating and levitation - is becoming more of a reality and less like good science fiction. Ning Li, at the University of Alabama, works on a device in the laboratory that she believes will one day change the world. Since Newton's famous apple story, the majority have believed that gravity cannot be conquered. Li's laboratory contains tanks of liquid nitrogen, a temperature chamber that is 390 degrees below zero, and a spinning ceramic disk inside the chamber. The disk is levitated by powerful magnets and, if you were to see it, you would see that it floats inside the chamber. Ning Li's hope is to invent a practical anti-gravity device allowing rockets to blast off without propellant and power plants to operate without fuel. She even hopes to design an anti-gravity car within a decade. Li does not consider herself a normal scientist, and neither does NASA. NASA helps by funding some of her research on the gravity-altering properties of superconductor materials. After work by a Finnish researcher, NASA set up its own program at the Marshall Space Flight Center in Huntsville, Alabama. Physicists are beginning to focus on a subject of which very little is known. They are beginning to question, and to study, what gravity can and cannot do. Hideo Hayasaka and Sakae Takeuchi of Tohoku University, Japanese researchers, were the first to have some success. It began when the two observed the behavior of high-speed gyroscopes with metal flywheels spinning several thousand times per minute. Hayasaka and Takeuchi noticed that as a gyro rotated clockwise, its weight seemed to drop by one part in 100,000. They considered the possibility of this being an anti-gravity effect, but their peers considered it an experimental error. A graduate student at Tampere University in Finland, Eugene Podkletnov, probably began the anti-gravity movement in the early 1990s.
Podkletnov was researching superconductor materials, which lose all resistance to electricity if chilled with liquid nitrogen. He placed ceramic disks, only a few inches wide, in the cold chamber and, to his surprise, as they passed through the magnetic field, they began to spin rapidly. The objects above the disks apparently lost two percent of their weight. If the effect was real, could he somehow discover a way to increase it? A little anti-gravity either is or isn't. Podkletnov's discovery began to circle the globe, but the skeptics remained stationary on its validity. The conditions needed in the laboratory to produce spinning superconductor disks can give many misleading effects, which would change the apparent weight of a test mass. In the 1980s and 1990s, Ning Li published theoretical papers on anti-gravity. Working with a team of NASA researchers, Li created superconductor flywheels up to a foot in diameter. They hoped to reproduce the results of Podkletnov's experiments. After this, NASA began focusing on validating the basic experiments, and Li began focusing more on application. Li has set aside publishing papers, as well as her techniques and experimental results. She believes that delays in her project would allow foreign researchers to forge ahead of her own work. David Noever, a NASA scientist at Marshall Space Flight Center, is working on the Delta G Experiment. Delta G is the term used to indicate a change in the pull of gravity. Noever is attempting to overcome any possible sources of error in the experiments. Afterwards, he plans to quantify the true nature of this gravity-modification phenomenon. Contrary to internet rumors, NASA's secret anti-gravity lab has yet to be built. Physicists are beginning to consider that there may be more than one way to overcome gravity. Nieto has suggested that anti-matter may not fall when dropped.
When matter and anti-matter meet, they annihilate one another, but any sign of anti-gravity would create real interest in the science world. Nieto is working on the ATHENA project with other physicists, hoping to use two powerful particle accelerators to create anti-protons and anti-electrons. After they are created, the plan is to capture them and combine them to form anti-hydrogen atoms. The anti-hydrogen atoms will then be cooled and observed to see if they fall under gravity's force. James Woodward of California State University at Fullerton believes that the answer may lie not in the atoms, but in the relationship between gravity and inertia. He is referring to the tendency of objects to resist changes in acceleration. Einstein said that inertia is related to the universal gravitational field. If an object is given a sudden knock, its mass will experience a moment of temporary fluctuation. Woodward works with twisting pendulums and electrical capacitors, hoping to view this in action. NASA has started listening to Woodward's thoughts on the possibility of modifying an object's mass. NASA created a Breakthrough Propulsion program that will investigate mass modification for space travel. The greatest anti-gravity effects have come not from the laboratory, but from studies of supernovae - exploding stars in distant galaxies. Flashes of light from supernovae have two teams of astronomers studying whether the combined pull of matter in the universe is slowing the expansion begun in the Big Bang. Initial results have shown that the universe is speeding up, not slowing down. Scientists believe this is evidence of a latent energy hidden inside the make-up of space, which has the opposite action of gravity. Hal Puthoff of the Institute for Advanced Studies in Austin, Texas, believes that this energy is responsible for the effect of inertia. If true, it would connect universal anti-gravity back to potential anti-gravity techniques here on Earth.
Noever believes it will become an advancing area of interest and study. Corey Powell says, "After all, Thomas Edison didn't need a quantum model of radiation to make a lightbulb." (Discover Magazine)

1. Powell, Corey S. "Zero Gravity". Discover: US, May 1999 issue.
2. Editors. The World Book Encyclopedia. World Book-Childcraft International, Inc: Chicago, 1990.

Nijmegen-Amsterdam High-Field Magnet Laboratory: experiments of magnetic levitation with a frog.
Topic review (newest first)

A maths teacher said I am wrong but never explained why. Confused...

Okay, thank you! Oh yes, I'm terrible for only half reading the question. It does ask 'how fast', so I guess they want the velocity.

I've studied moments of inertia before (had to do it for my exam) and deriving them via integration -- and I know the M.I. of the rod is definitely correct (and therefore the change in KE), since v = rω. But this question still confuses me... are they asking for the ANGULAR velocity ω, or just the instantaneous velocity of the end of the rod? In which case, with r = l: v = rω, using my ω from post #1...

...... stopped thinking. I would have to look at this in the morning. My brain is not functioning properly right now.

For a uniform rod of mass m and length 2l, the moment of inertia about an axis perpendicular to the rod through the centre is ml²/3 (you can prove this by integration, but it should be fine to quote it). For a rod of length l, you square (l/2) instead. Then apply the parallel axis theorem to get the M.I. of the rod about the axis through the end.

I sort of wasted time thinking about I_x, I_y and I_z, since it is a rod, not a lamina, and therefore I_y = I_z. Oops!

Me and my careful reading. Didn't see the "also touches the wall" part. Sorry.

I don't see how else this could be interpreted... clearly the starting position is the rod in its horizontal position. But the question never states the starting position of the rod.

If the rod is at first horizontal, then rotates 90 degrees (when first vertical), then the change in GPE is mgl/2, since the centre of mass of the rod has dropped by a distance of l/2.

Ok, step by step. How'd you get the loss in GPE formula?
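The thread's energy argument can be checked numerically. This is a sketch for a uniform rod of length l pivoted at one end and released from horizontal; the values of m, l, and g below are arbitrary assumptions for illustration:

```python
import math

# Uniform rod of mass m and length l, pivoted at one end,
# released from rest in the horizontal position.
m, l, g = 1.0, 2.0, 9.81   # arbitrary illustrative values

I_end = m * l**2 / 3             # parallel axis: ml^2/12 + m(l/2)^2
gpe_loss = m * g * l / 2         # centre of mass drops by l/2

# Energy conservation: mgl/2 = (1/2) I omega^2
omega = math.sqrt(2 * gpe_loss / I_end)   # equals sqrt(3g/l)
v_end = l * omega                # instantaneous speed of the free end

print(omega, v_end)
```

This distinguishes the two answers the poster was unsure about: ω is the angular velocity, and lω is the speed of the rod's tip.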
THE Universe is home to as many invisible galaxies as visible ones, according to astronomers in Scotland. This, they claim, explains two mysteries at once: why images of distant quasars can appear distorted despite the absence of a visible object in the light's path, and what became of the many blue galaxies that populated the Universe when it was young. On its way to Earth, light from quasars is often bent, or "lensed", by the gravitational pull of galaxies it passes. This causes a distorted picture of the quasar, which often appears as a multiple image. In many cases, searches with powerful telescopes have revealed the lensing galaxy. But sometimes no galaxy is visible, even though the distortion is great enough to suggest lensing by an object as large as our own Galaxy, the Milky Way. "For us not to see such a big galaxy, it must be more than 50 ...
Most people associate the name Charles Darwin with the theory of evolution by natural selection. But in 1842, 17 years before he published his groundbreaking Origin of Species, Darwin gained recognition for another theory, this one in the field of geology. The theory explained the formation of atolls, low-lying coral islands found mainly in the South Pacific. (Aldabra, the atoll featured in the NOVA program "Garden of Eden," lies in the Indian Ocean.) Over a century and a half later, Darwin's theory still remains the basis for the study of atoll evolution.
It's extremely unusual for an orbit to be circular - à la Kepler, orbits are normally elliptical! Nor am I bothering with the precession of the equinoxes every 27,000 years! Gravitational and tidal processes - [a land tide, estimated at about 8 inches (20 cm), is our present slow roller-coaster] - slowed down the Moon's SPIN until it locked facing Earth, and the Moon's ORBIT speed somehow also decreased significantly. But continual transfer of Earth's spin energy (angular momentum) to the Moon, through those same effects, now causes the Moon's ORBIT - circling the Earth - to grow. This drives the Moon out to greater orbital distances - at an estimated rate of 4 metres per century. Here are some more workings-out from Cornell.edu's Astronomy page:

The Moon's orbit (its circular path around the Earth) is indeed getting larger, at a rate of about 3.8 centimeters per year. (The Moon's orbit has a radius of 384,000 km.) I wouldn't say that the Moon is getting closer to the Sun, specifically, though--it is getting farther from the Earth, so, when it's in the part of its orbit closest to the Sun, it's closer, but when it's in the part of its orbit farthest from the Sun, it's farther away. The reason for the increase is that the Moon raises tides on the Earth. Because the side of the Earth that faces the Moon is closer, it feels a stronger pull of gravity than the center of the Earth. Similarly, the part of the Earth facing away from the Moon feels less gravity than the center of the Earth. This effect stretches the Earth a bit, making it a little bit oblong. We call the parts that stick out "tidal bulges." The actual solid body of the Earth is distorted a few centimeters, but the most noticeable effect is the tides raised on the ocean. Now, all mass exerts a gravitational force, and the tidal bulges on the Earth exert a gravitational pull on the Moon.
Because the Earth rotates faster (once every 24 hours) than the Moon orbits (once every 27.3 days), the bulge tries to "speed up" the Moon and pull it ahead in its orbit. The Moon is also pulling back on the tidal bulge of the Earth, slowing the Earth's rotation. Tidal friction, caused by the movement of the tidal bulge around the Earth, takes energy out of the Earth and puts it into the Moon's orbit, making the Moon's orbit bigger (but, a bit paradoxically, the Moon actually moves slower!). The Earth's rotation is slowing down because of this. One hundred years from now, the day will be 2 milliseconds longer than it is now. This same process took place billions of years ago--but the Moon was slowed down by the tides raised on it by the Earth. That's why the Moon always keeps the same face pointed toward the Earth. Because the Earth is so much larger than the Moon, this process, called tidal locking, took place very quickly, in a few tens of millions of years. Many physicists considered the effects of tides on the Earth-Moon system. However, George Howard Darwin (Charles Darwin's son) was the first person to work out, in a mathematical way, how the Moon's orbit would evolve due to tidal friction, in the late 19th century. He is usually credited with the invention of the modern theory of tidal evolution. So that's where the idea came from, but how was it first measured? The answer is quite complicated, but I've tried to give the best answer I can, based on a little research into the history of the question. There are three ways for us to actually measure the effects of tidal friction:

* Measure the change in the length of the lunar month over time. This can be accomplished by examining the thickness of tidal deposits preserved in rocks, called tidal rhythmites, which can be billions of years old, although measurements only exist for rhythmites that are 900 million years old. As far as I can find (I am not a geologist!)
these measurements have only been done since the early 90's.

* Measure the change in the distance between the Earth and the Moon. This is accomplished in modern times by bouncing lasers off reflectors left on the surface of the Moon by the Apollo astronauts. Less accurate measurements were obtained in the early 70's.

* Measure the change in the rotational period of the Earth over time. Nowadays, the rotation of the Earth is measured using Very Long Baseline Interferometry (VLBI), a technique using many radio telescopes a great distance apart. With VLBI, the positions of quasars (tiny, distant, radio-bright objects) can be measured very accurately. Since the rotating Earth carries the antennas along, these measurements can tell us the rotation speed of the Earth very accurately.

However, the change in the Earth's rotational period was first measured using eclipses, of all things. Astronomers who studied the timing of eclipses over many centuries found that the Moon seemed to be accelerating in its orbit, but what was actually happening was that the Earth's rotation was slowing down. The effect was first noticed by Edmund Halley in 1695, and first measured by Richard Dunthorne in 1748--though neither one really understood what he was seeing. I think this is the earliest discovery of the effect.
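The quoted rates can be turned into a rough back-of-envelope figure. This is only a sketch: it naively assumes today's rates stay constant, which the rhythmite record shows they do not:

```python
# Naive linear extrapolation of the present-day rates quoted above
# (assumption: constant rates, which does not hold over geologic time).
recession_m_per_yr = 0.038             # ~3.8 cm/yr lunar recession
day_lengthening_s_per_yr = 0.002 / 100 # ~2 ms per century

years = 100e6                          # look 100 million years ahead
extra_distance_km = recession_m_per_yr * years / 1000
extra_day_length_min = day_lengthening_s_per_yr * years / 60

print(extra_distance_km, extra_day_length_min)  # 3800.0 km, ~33 minutes
```

So even over 100 million years, the naive extrapolation moves the Moon out by only about 1% of its current 384,000 km orbital radius, while the day lengthens by roughly half an hour.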
Air is reasonably transparent, but over long distances, things such as humidity, turbulence, and the like can distort images. This is because the density of air isn't the same everywhere, and it changes in time and space. The situation becomes even more complicated when trying to look through fog, biological tissue, or other inhomogeneous materials. Similarly, reflection off most (non-polished) surfaces doesn't produce coherent images, no matter how shiny the surface looks. Despite the loss of information, researchers have developed a number of techniques to reconstruct the appearance of the original object. Ori Katz, Eran Small, and Yaron Silberberg have now shown they can produce a fully three-dimensional image even after light has gone through thin, inhomogeneous layers. Known as turbid materials, these layers contain microscopic particles or density fluctuations that scatter light, preventing focusing. To accomplish this, they used wavefront shaping, whereby they pass the scattered light through a special modulator. This modulator produces constructive interference between light from two different wavefronts, allowing a coherent image to be produced. As a bonus, the image can be produced in real time, as opposed to related methods that require computer reconstruction. When light passes through a turbid medium, the photons scatter off the inhomogeneities. If the source is incoherent, like an ordinary incandescent or fluorescent bulb, this results in a blurry image—if any image can be formed at all. If the light is coherent, such as a laser, scattering results in a speckled pattern. In either case, a clear view of the original object may not be possible. This spells doom for medical imaging, astronomy, and other applications. (The authors also suggested it gets in the way of peering through shower curtains. We at Ars condone such voyeuristic pursuits for consenting scientific partners only.)
The researchers illuminated a printed letter "A"—the object—using an ordinary tungsten halogen lamp (a light bulb), which produces undirected incoherent white light. They used a thin polycarbonate film as the turbid medium. While a static medium like that doesn't change in time as a fog or other fluids do, the authors showed it was enough to keep the image of the "A" from forming. By placing a spatial light modulator (SLM) immediately behind the polycarbonate film, the researchers manipulated the phase of the incoming light, changing it until light from the object interfered constructively. They also used a filter to eliminate some of the extra scattered light, leaving mostly light from the object. Finally, they photographed the image using an ordinary digital camera. Similarly, the researchers passed incoherent white light through a stenciled letter G. They bounced this light off a piece of white paper, using the paper like a mirror. While it's reflective, the paper certainly doesn't produce a clear image. Applying the SLM and filter as with the turbid medium, they were able to see the G with their eyes, as well as photograph it. While it isn't precisely the "seeing around corners" promised in the paper's title, it shows in principle that some non-polished surfaces can be used as mirrors, allowing coherent images to be formed using paths that might include things like walls. Finally, the authors showed they could track moving objects in a limited way, shining light through a pinhole onto the white paper. They imaged the dot of light produced over several points in space in real time. As they pointed out in the paper, there are two major limitations to this technique. First, the image produced isn't completely faithful to the color of the original object. After all, the method required using interference of light, which is wavelength-dependent.
Second, the contrast between the resultant image and the background is very low: even if the light source was bright, the image produced was faint. This means the SLM technique is less attractive for astronomical observations, where the method known as adaptive optics (used for example at the Keck telescopes in Hawaii) preserves contrast and color. Nevertheless, the method outlined in this paper has many potential uses in medicine, where forming real-time images through skin or thin bone is highly desirable. Since soft tissue imaging is often limited or impossible by ordinary means, wavefront reconstruction could possibly lead to significant advances in that area.
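The wavefront-shaping idea can be sketched numerically. Below is a toy model, not the authors' actual algorithm or apparatus: the turbid layer is reduced to one random complex transmission coefficient per SLM segment, and a simple sequential search picks each segment's phase so that the scattered contributions interfere constructively at a target point.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # number of SLM segments (hypothetical)

# Random complex transmission coefficients standing in for the turbid layer.
t = rng.normal(size=N) + 1j * rng.normal(size=N)

def intensity(phases):
    """Intensity at the target point for a given SLM phase pattern."""
    return abs(np.sum(t * np.exp(1j * phases))) ** 2

phases = np.zeros(N)
baseline = intensity(phases)  # uncorrected speckle

# Sequential optimization: for each segment in turn, keep the trial phase
# that maximizes the target intensity. A few passes suffice to converge.
trial = np.linspace(0, 2 * np.pi, 16, endpoint=False)
for _ in range(3):
    for k in range(N):
        scores = []
        for p in trial:
            phases[k] = p
            scores.append(intensity(phases))
        phases[k] = trial[int(np.argmax(scores))]

enhanced = intensity(phases)
ideal = np.sum(np.abs(t)) ** 2  # all contributions perfectly in phase
print(f"enhancement: {enhanced / baseline:.0f}x ({enhanced / ideal:.0%} of ideal)")
```

In the experiment, the same role is played by the SLM pixels and camera feedback; the real optimization additionally has to cope with noise, many output points at once, and, for moving media, time variation.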
The oyster population is in peril in Maryland with only 0.3% of its 19th century population remaining. This is due to a variety of factors including water pollution from numerous sources. The desire to save this species is so great that the State of Maryland is encouraging private dock owners to assist in the re-population of oysters by attaching juvenile oysters to their docks to increase survival. The researchers focused on the pollutant class polycyclic aromatic hydrocarbons, or PAHs. PAHs are the potent problem chemical family in coal tar sealants, which have been banned in numerous locations around the US. This particular study looked at roadways, not parking lots. Roadways are rarely treated with coal tar, but it is used very frequently on parking lots. It would have been an interesting line of research if they had looked at specific parking lot runoff, but perhaps that could be for a future study. It also would be good to know just how much of that PAH was transported there from coal-tar-sealed parking lots. The USGS has observed this phenomenon in several locations across the US. The concentrations of PAHs used in the study were based upon the researchers' previous work with local roadway runoff. The PAH concentration was within an order of magnitude of the probable biological effects concentration of 23 ppm. The study's concluding findings are worth quoting: This study's results provide evidence that PAHs entering an aquatic ecosystem from runoff from road surfaces have the potential to inhibit oyster reproduction by negatively impacting three critical processes in the early life cycle of the Eastern oyster. The mystery to me is that, with this knowledge and the value of this resource to the people of Maryland (and the Nation), the practice of coal tar sealant use on parking lots continues. The EPA led a Chesapeake Bay Toxics Task Force that took the topic up over 4 years ago and was ineffective in even recommending that the product's use be curtailed.
Let's hope that the coal tar sealer legislation pending in Montgomery County, Maryland, can begin this change for the State.
Jun 8, 2012 Parasitic Plants Steal Genes from Their Hosts Vertical gene transfer is the passage of genes from parents to their offspring, while horizontal gene transfer is the movement of genes between two different organisms. Bacteria use horizontal gene transfer to exchange resistance to antibiotics. Recent studies have shown that plants can also use horizontal gene transfer, especially parasitic plants and their hosts, thanks to their intimate physical connections. Rafflesia cantleyi is an obligate holoparasite (dependent on its host, and only that host, for sustenance), which grows on Tetrastigma rafflesiae, a member of the grape family. Researchers from Singapore, Malaysia and the USA collaborated to systematically investigate the possibility of horizontal gene transfer between these two plants. By looking at the transcriptome (the transcribed products of switched-on genes) they found 49 genes transcribed by the parasite, accounting for 2% of its total transcriptome, which originally belonged to the host. Three quarters of these transcripts appear to have replaced the parasite's own versions. Most of these genes had been integrated into the parasite's nucleus, allowing the researchers to perform genomic analysis. DNA randomly mutates over time, and investigation of the divergence between the parasite's and host's copies of the genes for these transcripts showed that some time has passed since the genes were acquired, and that they were acquired gradually. Prof Charles Davis, from the Harvard University Herbaria, who co-led this project with Prof Joshua Rest from Stony Brook University, explained, "The elevated rate of horizontal gene transfer between T. rafflesiae and its parasite R. cantleyi raises the possibility that there is a 'fitness' benefit to the parasite. For example they may improve the parasite's ability to extract nutrients from the host, or help it evade the host's defences, as has been seen for a bacterial pathogen of citrus trees." Read more at Science Daily
When Dan Rugar first heard about an idea for a new microscope that could peer beneath the surface of molecules and pick out individual atoms, he was skeptical. Scientists had dreamed of having such a device to help them unravel the complex structures of proteins, spot the defects in semiconductors, and solve a thousand other mysteries. But so far nobody had come up with a way of building a microscope powerful enough to produce a three-dimensional image showing the precise location of each and every atom--without destroying or changing the structure of the material. Rugar knew the problem as well as anybody. As a physicist at IBM, he had helped develop the atomic force microscope (AFM), which uses a tiny mechanical cantilever to feel individual atoms on the surface of a sample--or more precisely, to feel the electrostatic repulsion exerted by the atoms’ electrons. Indeed, the proposal Rugar was listening to that day in 1991 sounded remarkably familiar. In the new microscope, as in the AFM, a cantilever would be moved across a sample like a microscopic phonograph needle. But instead of responding only to surface atoms, this cantilever would pick up the far feebler magnetic force exerted by resonating atoms below the surface. That technique is known as magnetic resonance imaging, or MRI; hospitals already use it every day, in a much cruder form, to image internal organs. After Rugar ran a few calculations of his own, his skepticism melted away. It might indeed be possible, he realized, to build a magnetic resonance force microscope, or MRFM--a device that would combine the atom-scale resolution of an atomic force microscope with the three-dimensional imaging capability of an MRI scanner. Rugar and his colleagues at IBM’s Almaden Research Center in San Jose, California, began working on the idea right away. Today a prototype sits in their lab--a very early prototype. It can measure a force that is one-millionth the strength of that measured by an AFM.
In so doing it can see beneath the surface of a sample and detect features far smaller than an MRI scanner can. But it can’t see nearly as small as Rugar wants. “We’re not at atoms yet,” he says. “That’s our goal. And if you go through the physics of how it works, it looks as though it may be possible.” The inspiration for the MRFM, and the man sitting in Rugar’s office that day four years ago, was John Sidles, a medical physicist in the department of orthopedics at the University of Washington in Seattle. The payoff that Sidles was seeking was an understanding of the molecular mechanisms at work in such intractable diseases as bone cancer. But the desire to see atoms is pretty widespread in science and engineering today. Biologists would like to plumb the intricate folded structure of proteins, whose shapes are crucial to their myriad functions in our bodies. Semiconductor manufacturers would love to have a way to pinpoint not only atom-size defects but also the atoms of boron or phosphorus that are used to dope a silicon chip with the desired electrical properties. The list of possible applications for a three-dimensional atom-seeing microscope is long. Conventional microscopes aren’t up to any of these jobs, for obvious reasons. Light microscopes can see bacteria, and electron microscopes can see viruses, but neither can see atoms; the diameter of a hydrogen atom is just one angstrom, or one ten-billionth of a meter. And the various scanning probe microscopes, of which the AFM is one, don’t see in three dimensions; they see only surface atoms. Besides, they can’t distinguish one type of atom from another, which you have to do if you want to figure out the structure of a molecule. To be sure, researchers can work out the structure of some materials, such as proteins, by bombarding them with X-rays and observing the scattered rays, a technique known as X-ray crystallography. But that technique works only on crystals, and not all proteins can be crystallized.
Another technique, called nuclear magnetic resonance spectroscopy, can only see the structure of small molecules. Says Sidles: “There are no truly general-purpose techniques for studying molecular structure at the angstrom level.” Sidles thought that the principle of magnetic resonance imaging held the key to penetrating below surface atoms. “I published a couple of theory articles, and then I trotted around looking for experimentalists to do experiments,” he says. Perhaps because of his experience on the AFM, Rugar was the only one to take Sidles up on the idea. He enlisted his colleagues Nino Yannoni, an expert in magnetic resonance spectroscopy, and physicist Othmar Züger to develop a prototype. Sidles’s scheme starts with the basic principle of MRI, which exploits the magnetic moment of protons and neutrons--that is, their tendency to act like tiny bar magnets. When protons and neutrons occur in pairs, each tends to cancel the other’s magnetic moment. But when a nucleus has an odd number of protons or neutrons, the leftover particle imparts an overall magnetic moment to the nucleus. Since hydrogen, for instance, has only one proton in its nucleus, hydrogen atoms placed in a strong magnetic field will tend to align themselves, by virtue of their magnetic moments, with the field. If you then bombard the atoms with a radio wave at a certain frequency, the spinning hydrogen nuclei resonate: they tip over and begin to wobble like tiny tops. “If you point your finger up, that’s the way the nuclei are lined up to begin with,” explains Rugar. “Then if you tilt your finger over, say 30 degrees, and spin it around in a little circle, keeping it at 30 degrees but changing the direction that it’s pointed, that’s the wobbling motion.” When the radio wave is turned off, the nuclei return to their upright state, and as they do so, they emit a weak but detectable radio signal. By measuring the intensity of this signal, scientists can deduce the concentration of hydrogen in a sample.
If the sample happens to be a hospital patient lying in the huge magnetic coil of an MRI scanner, the varying concentrations of hydrogen throughout the body produce signals that form an image of the internal organs. Sidles wanted to see atoms, though, not organs, and to do that he proposed taking advantage of a little bit of MRI physics that the medical scanners don’t exploit. The frequency of the radio wave that will cause hydrogen nuclei in a sample to resonate isn’t fixed; it depends on the strength of the magnetic field that’s bathing the sample. In a medical MRI scanner, that doesn’t matter. The powerful magnetic coils create a uniform field throughout the patient’s body, and when the frequency of the radio waves is tuned to the appropriate level, hydrogen nuclei throughout the body start resonating. Sidles’s idea for an atom-seeing MRFM, though, was to use a magnetic field that wasn’t spatially uniform. That way only those few nuclei would resonate that happened to be in the part of the sample where the magnetic field and the radio waves were in harmony. In principle you could detect a signal from a single nucleus. In the design that Rugar has since developed, the magnetic field comes from a tiny magnet on the tip of a silicon cantilever much like the one in an atomic force microscope. The field grows weaker the farther away a sample gets from the tip. At any given frequency of the radio wave only those nuclei in a thin slice of the sample, at a precise distance from the tip, will resonate. The trick then is to get the nuclei to emit signals the microscope can detect. Whereas MRI scanners do this by turning the field on and off and letting the nuclei return to their upright positions, Rugar hit upon an easier method. By subtly varying the frequency of the radio wave, he found, the nuclei in the resonating slice can be made to do flip-flops, reversing their magnetic moments by 180 degrees. 
When this happens, the atoms go from exerting an attractive force on the cantilever’s tip to exerting a repulsive one and back again. “It’s like taking two bar magnets,” says Rugar. “If you put them together, they might attract each other, but then if you flip them around, they repel. If you flip them back, they attract.” As the nuclei continue to flip back and forth, the cantilever vibrates. To measure this movement precisely, Rugar’s team uses a laser beam that travels through an optical fiber, bounces off the back of the cantilever, and goes back up the fiber. By looking at the amount of laser light that is reflected, the researchers can detect movements of less than an angstrom. As the cantilever scans horizontally over the sample, its vibrations reveal the presence of resonating hydrogen nuclei hidden beneath the surface. And as the cantilever is moved toward and away from the sample, different slices of the sample come into resonance. A computer program then adds all the slices together, building a composite, three-dimensional picture of all the hydrogen nuclei in the sample. The final step is to locate the atoms of other elements. In principle that’s straightforward. Since each element resonates at a distinct radio frequency, Rugar’s idea is simply to repeat the process at different frequencies. By making a composite of all these measurements, researchers would be able to piece together a three-dimensional image of a sample. Rugar estimates that the microscope would be able to look as deep as about a millimeter below the surface. All this, however, lies in the future. So far Rugar and his colleagues have developed only a rough prototype that demonstrates the instrument’s basic principles. It consists of a cantilever, a magnet, and a radio-frequency coil sealed in a vacuum chamber, which is in turn immersed in a bath of liquid helium to keep the apparatus at close to absolute zero.
“The fact is that everything that is at a temperature above absolute zero is vibrating,” explains Rugar. “Even when there is no magnetic resonance signal, the cantilever will be continuously vibrating in a noisy sort of way. Lowering the temperature makes the noise smaller, so that we can see smaller signals.” The researchers have tested their apparatus on, among other things, a tiny blob of ammonium nitrate--a material chosen because the abundance of hydrogen atoms makes it convenient to work with. At present, the microscope can’t detect the resonance of a single atom; it can detect only the force exerted on the cantilever by a trillion or so atoms at once. Making the microscope able to detect a single atom would be equivalent to improving its resolution by a factor of 10,000, from one-millionth or two-millionths of a meter down to one angstrom. Right now the resolution of the MRFM does not even equal that of the best light microscope. One problem is that Rugar and his colleagues haven’t yet succeeded in building a crucial part of their design. The tiny magnet attached to the tip of the cantilever is still under development. Instead the prototype MRFM uses an awkward arrangement in which a separate magnet sets up the magnetic field and the sample is attached to the cantilever itself. Once the magnet is completed and put in place on the cantilever’s tip, says Rugar, and the sample is mounted on a fixed slide, resolution should improve. In addition, Rugar and his colleagues are developing thinner, more flexible cantilevers that bend and vibrate more readily in response to minuscule atomic forces. The instrument’s cantilever is now a mere 900 angstroms thick, but Rugar’s lab has already made one that is 200 angstroms thick. “Such a cantilever is so soft that, were it a spring, a paper clip would cause it to deflect one kilometer,” says Rugar. Even so, he is now planning to make another one that is half that thick.
Finally, he and his colleagues plan to improve resolution by finding a way to place the magnetic tip much closer to the sample. The strength of the magnetic field falls off most steeply right around the tip, so by bringing the tip as close as possible to the sample, the researchers will be able to generate resonance and detect nuclei in as thin a slice of the sample as possible. The researchers hope ultimately to position a 300-angstrom-wide magnetic tip within 50 angstroms of the sample. In short, many things need to happen before detecting single buried atoms becomes possible. Rugar isn’t the only one trying to build an MRFM; Sidles, whose idea it was in the first place, has now joined him in a friendly rivalry. “Doing MRFM experiments is a little bit like sailing a ship along a rocky and uncharted coastline in a dense fog,” Sidles muses. “If you are the first ship sailing in these waters, all too often you learn about a rock by going aground on it.” Rugar issues the same caution with less metaphor and more optimism. “We’re in a very early stage of developing this technique, and it is possible that we will not be able to achieve our ultimate goal,” he says. “Yet there seems to be no law of physics that says we can’t do it.”
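The slice-selection arithmetic the design relies on is easy to sketch. The numbers below are illustrative, not IBM's: a made-up dipole moment for the tip magnet, plus the textbook proton gyromagnetic ratio. The point is that because a dipole field falls off as 1/r³, a narrow RF bandwidth picks out an extremely thin resonant slice.

```python
import numpy as np

GAMMA_H = 42.577e6   # proton gyromagnetic ratio, Hz per tesla (known constant)
MU0 = 4e-7 * np.pi   # vacuum permeability
M_TIP = 1e-18        # magnetic moment of the tip magnet, A*m^2 (made-up value)

def tip_field(r):
    """On-axis field (tesla) of a small dipole magnet at distance r (meters)."""
    return MU0 * M_TIP / (2 * np.pi * r ** 3)

def larmor(B):
    """Proton resonance frequency (Hz) in a field of B tesla."""
    return GAMMA_H * B

r = 5e-9                   # 50 angstroms from the tip
f = larmor(tip_field(r))   # RF frequency that excites protons in this slice

# Slice thickness for an RF bandwidth df: dr = df / (gamma * |dB/dr|),
# and for a dipole field |dB/dr| = 3 * B / r.
df = 1e3                   # 1 kHz bandwidth (made-up value)
dr = df / (GAMMA_H * 3 * tip_field(r) / r)

print(f"field at 50 angstroms: {tip_field(r):.2f} T")
print(f"resonance: {f / 1e6:.1f} MHz, slice thickness: {dr * 1e10:.1e} angstrom")
```

With these arbitrary numbers the resonant slice comes out far thinner than an angstrom, which is why a steep field gradient, rather than any lens, is what sets the resolution.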
DUST and Dust-storms. Poussière, Poudre, FR. Gird, Staub, GER. Polvere, IT. Khak, Pesbash, TURK. Dust is carried along with winds to great distances. Sirocco or African dust has been found by the microscope to consist of infusoria and organisms whose habitat is not Africa but S. America, and carried in the track of the S.E. trade wind of S. America. In the dust of the Cape Verdes, Malta, Genoa, Lyons, and the Tyrol, Ehrenberg discovered separate forms. Dust is blown from Arabia and Africa far to seaward, causing a great haze. During four months of the year, a large quantity of dust is blown from the N.W. shores of Africa, and falls on the Atlantic over a space of 1600 miles in latitude, and for a distance of from 300 to 600 miles from the coast. But dust has been seen to fall at a distance of 1030 miles from the shores of Africa. Darwin mentions that in some of the dust 330 and 380 miles from Africa, falling in the sea near the Cape de Verde Islands, particles of stone occur 1000th of an inch square. Dust-storms are very frequent in India, and usually have a north to south course. One commenced at Allahabad about seven A.M., and continued till one P.M., retaining the same fury as when it began. On the evening of the 17th, Secunderabad had been visited with an unusually severe dust-storm. It came from the N.W. and was accompanied by lightning and thunder. The air to a considerable height was rendered almost opaque by dense clouds of red dust. The wind raged with great fury for upwards of half an hour, and on its abating was followed by a heavy shower of rain. A dust-storm passed over Madras on Sunday the 19th, beginning at one P.M. It had passed over Kristnapatam, seventeen miles S.E. of Nellore, at half-past ten o'clock in the forenoon of that day, accompanied by a slight fall of rain.
In the north of the district between Ongole and Ramapatam, there was a heavy fall of rain in the forenoon of Sunday, averaging from two to four inches. At Chingleput, thirty-six miles south of Madras, the storm was experienced in full force at that station at two P.M. the same day. It came from the N.W., and the wind was laden with vast quantities of reddish dust; no refreshing shower succeeded the storm. A dust-storm occurred over 3800 square miles, from Ningpo to Shang-hai, on the 15th March 1846. It consisted of a congeries of light downy fibres or hairs, with silex adhering to them, and an admixture of an alkaline salt. In China, according to Richthofen, beds appearing like fine sediment, several hundred feet in thickness, and extending over an enormous area, owe their origin to dust blown from the high lands of Central Asia. Whirling dust-storms are caused by spiral columns of the electric fluid passing from the atmosphere to the earth; they have an onward motion, a revolving motion, like revolving storms at sea, and a peculiar spiral motion from above downwards, like a corkscrew. It seems probable that in an extensive dust-storm there are many of these columns moving on together in the same direction, and during the continuance of the storm many sudden gusts take place at intervals, during which time the electric tension is at its maximum. These storms, in the Panjab, mostly commence from N.W. or W., and in the course of an hour, more or less, they have nearly completed the circle, and have passed onwards. Precisely the same phenomena, in kind, are observable in all cases of dust-storms; from the one of a few inches in diameter, to those that extend for fifty miles and upwards, the phenomena are identical.
It is a curious fact that some of the smaller dust-storms occasionally seen in extensive and arid plains, both in the Panjab and in Afghanistan above the Bolan pass, called in familiar language devils, are either stationary for a long time, that is, upwards of an hour, or nearly so, and during the whole of this time the dust and minute bodies on the ground are kept whirling above into the air; in other cases, these small dust-storms are seen slowly advancing, and when numerous, usually proceed in the same direction. Birds, kites, and vultures are usually seen soaring high up just above these spots, apparently following the direction of the column. They may be looking for prey, or involved in, and unable to fly out of, the invisible part of the electrified aërial column, of which the lower part only is visible to us by the dust raised. The phenomena connected with dust-storms seem to be identical with those present in waterspouts and white squalls at sea, and revolving storms and tornadoes of all kinds; and they apparently originate from the same cause, viz. moving columns of electricity. In 1847, at Lahore, an observer, being desirous of ascertaining the nature of dust-storms, projected into the air an insulated copper wire on a bamboo on the top of his house, and brought the wire into his room, and connected it with a gold-leaf electrometer and a detached wire communicating with the earth. A day or two after, during the passage of a small dust-storm, he had the pleasure of observing the electric fluid passing in vivid sparks from one wire to another, and of course strongly affecting the electrometer. Afterwards, by the same means, he observed at least sixty dust-storms of various sizes, all presenting the same phenomena in kind.
Commonly, towards the close of a storm of this kind, a fall of rain suddenly takes place, and instantly the stream of electricity ceases, or is much diminished; and when it continues, it seems only on occasions when the storm is severe and continues for some time after. The barometer steadily rises throughout. In the Punjab plains, the fluctuation of the barometric column is very slight, seldom more than two or three tenths of an inch at a time. The average height at Lahore is 1180, corrected for temperature, indicating, it is supposed, above 1150 feet above the level of the sea, taking 30 inches as the standard. A large dust-storm is usually preceded by certain peculiarities in the dew-point, and the manner in which the particles of dew are deposited on the bulb of a thermometer. The mode of taking the dew-point is to plunge a common thermometer in a little ice, and let it run down 20° or 30°. The manner in which the electricity acts upon the dust and light bodies it meets with in its passage is simple enough. The particles are similarly electrified and mutually repulsive, and then, together with the whirling motion communicated to them, are whisked into the air. The same takes place when the electricity moves over water. The surface of the water becomes exposed to the electric agency, and its particles, rendered mutually repulsive, are in the same way whirled into the air. At sea the waterspout is thus formed. First of all is seen the cloud descending, and beneath may be observed the water in a cone.—Bengal Asiatic Soc. Journal, No. v. of 1850, p. 790; Darwin, p. 239.
Receding Moon and Lunar Origin If the moon is moving away from the earth at approximately 1.5 inches a year, and the earth is approximately 4.5 billion years old, then how far was the moon from the earth then? Would 4.5 billion years times 1.5 inches give the number of miles from earth? The moon being 221,463 miles away, how accurate would that be? The number of miles I come up with is not even close (a mile being 63,360 inches). That would make the moon part of the earth. When did the moon separate from the earth? Hi, you have just nailed an important theory right on the head. There is a very well supported theory that the current Earth and Moon were produced from the collision of two planets in the early development of the solar system. These two planets, let us call them proto-Earth and proto-Moon, collided, mixed, and then coalesced into the current Earth and Moon, with the Moon steadily getting farther away. Many different data support this theory: the fact, as you have pointed out, that they are moving apart (unusual, since gravity should be bringing them together, as is the case in other planet/moon systems), the unusual similarity in composition of the two bodies, the age of the rocks on both bodies, etc. You have to imagine that the separation is not linear though. We can expect that the Moon is receding, but at a decreasing rate. So, your calculations are not quite accurate, but it gets the point across. Greg (Roberto Gregorius) The assumption that the present "drift" in the Earth-to-Moon distance has been constant is likely flawed. There are a number of phenomena (e.g. tidal drift, solar effects, to name just two of a much longer list) that can alter that separation spacing. There is at least one theory that proposes that the Moon was part of the Earth. So it is important to be cautious in using present-day data extrapolated back millions of years. Update: June 2012
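The questioner's arithmetic can be checked directly. This is a deliberately naive linear extrapolation; as the answers point out, the recession rate has not been constant, so it is a check of the multiplication, not a physical history:

```python
# Linear (and physically naive) extrapolation of lunar recession,
# just to check the arithmetic in the question.
INCHES_PER_MILE = 63_360
rate_in_per_yr = 1.5       # quoted recession rate, inches per year
age_yr = 4.5e9             # quoted age of the Earth, years

total_miles = rate_in_per_yr * age_yr / INCHES_PER_MILE
distance_miles = 221_463   # Earth-Moon distance quoted in the question

print(f"recession over 4.5 billion years: {total_miles:,.0f} miles")
print(f"fraction of quoted distance: {total_miles / distance_miles:.2f}")
```

The product comes to roughly 106,500 miles, about half the quoted Earth-Moon distance, so even the naive extrapolation does not by itself place the Moon at the Earth's surface; the interesting physics is in why the rate has changed over time.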
(This is also featured as a guest post on Skeptical Science) The greatest source of uncertainty in understanding climate change is arguably due to the role of aerosols and clouds. This uncertainty offers fertile ground for contrarians to imply that future global warming will be much less than commonly thought. However, some (e.g. Lindzen) do so by claiming that aerosol forcing is overestimated, while others (e.g. the NIPCC) by claiming that aerosol forcing is underestimated. Even so, they still arrive at the same conclusion… Let’s have a look at their respective arguments. Below is a figure showing the radiative forcing from greenhouse gases and from aerosols, as compared to pre-industrial times. The solid red curve gives the net forcing from these two factors. The wide range of possible values is primarily due to the uncertainty in aerosol forcing (blue dotted line). The greenhouse gas forcing (dashed red curve) is relatively well known, but the aerosol forcing (dashed blue curve) is not. The resulting net anthropogenic forcing (red solid curve) is not well constrained. The height of the curve gives the relative probability of the associated value, i.e. the net climate forcing is probably between 1 and 2 W/m2, but could be anywhere between 0 and 3 W/m2. (From IPCC, 2007, Fig 2.20) The NIPCC report (a skeptical document, edited by Craig Idso and Fred Singer, made to resemble the IPCC report) says: “The IPCC dramatically underestimates the total cooling effect of aerosols.” They hypothesize that natural emissions of aerosol precursors will increase in a warming climate, causing a negative feedback so as to dampen the warming. Examples of such gaseous aerosol precursors are dimethyl sulfide (DMS) emitted by plankton or iodocompounds created by marine algae. 
They use these putative negative feedbacks to claim that “model-derived sensitivity is too large and feedbacks in the climate system reduce it to values that are an order of magnitude smaller.” These are intriguing processes (the “CLAW hypothesis” first got me interested in aerosols, when I assisted with DMS measurements on some remote Scottish islands), but their significance on a global scale is ambiguous and highly uncertain. As a review article about the DMS-climate link says: “Determining the strength and even the direction, positive or negative, of the feedbacks in the CLAW hypothesis has proved one of the most challenging aspects of research into the role of the sulfur cycle on climate modification.” The NIPCC report exaggerates the uncertainty in climate science, but seems to put a lot of faith in elusive and hardly quantified processes such as natural aerosol feedbacks coming to our rescue. On to Lindzen: “The greenhouse forcing from man made greenhouse gases is already about 86% of what one expects from a doubling of CO2 (…) which implies that we should already have seen much more warming than we have seen thus far (…).” Lindzen again (E&E 2007): “How then, can it be claimed that models are replicating the observed warming? Two matters are invoked.” The “two matters” he refers to are aerosol cooling and thermal inertia from the oceans (a.k.a. “warming in the pipeline”). He then proceeds to argue that both of these factors are much smaller than generally thought, perhaps even zero. E.g. on aerosols, Lindzen writes: “a recent paper by Ramanathan et al (2007) suggests that the warming effect of aerosols may dominate – implying that the sign of the aerosol effect is in question.” By downplaying the importance of these two factors, Lindzen argues that the observed warming implies a small climate sensitivity. 
The same line of argument is also used by Dutch journalist Marcel Crok, who writes in his recent book (in Dutch, my translation): “aerosols probably cool much less than commonly thought.”

While it is true that aerosols can warm and cool the climate (by absorption and reflection of solar radiation, respectively, besides influencing cloud properties), most evidence suggests that globally, cooling is dominant. Whereas Ramanathan et al (2007) don’t quantify the net aerosol effect (in contrast to Lindzen’s implicit claim), Ramanathan and Carmichael (2008) (quoted by Crok) do. They estimate both the warming and cooling effects to be stronger than most other estimates, but the net forcing (-1.4 W/m2) is right in line with (even a little stronger than) the IPCC estimate (see the above figure).

Taking into account realistic estimates of aerosol forcing and ocean thermal inertia, the earth has warmed as much as expected, within the admittedly rather large uncertainties. Ironically, it is exactly because aerosol forcing is so uncertain and because the climate hasn’t equilibrated yet that the observed warming since pre-industrial times is only a very weak constraint on climate sensitivity. Lindzen seems very certain of something that most scientists would readily admit is very uncertain.

So we have the peculiar situation that both of these approaches try to claim that climate sensitivity is small, but the NIPCC approach is to claim that aerosol forcing is very large (thus providing a negative feedback to warming), whereas the Lindzen approach is to claim that aerosol forcing is very small (thus necessitating a small sensitivity to explain the observed warming so far). Of course they can’t both be right, and probably neither of them is. Looking back at the figure above, both approaches assume that aerosol forcing is at the edge of the probability spectrum (as if it were some fudge factor), whereas the most likely value is somewhere in the mid range.
Both approaches also ignore the other lines of evidence that point to climate sensitivity likely being in the range of 2 to 4.5 degrees. E.g. a value as small as suggested by the NIPCC (0.3 degrees) is entirely inconsistent with the paleo-climate record of substantial climate changes in the earth’s history. And finally, both approaches implicitly assign high confidence to some of the most uncertain aspects of climate science, even though they routinely mock climate science as if nothing is known at all.

Of course it is not mandatory for all those who dismiss mainstream climate science to agree, but to see two important “spokespeople” for climate contrarians take such mutually inconsistent approaches is peculiar. Even more so when you realize that Lindzen signed the recent “prudent path” letter to US Congress, in which the NIPCC report was approvingly cited… Most people can’t have it both ways, but apparently climate contrarians can.
Java has built-in support for playing sound files. This example shows how to play an audio clip in the applet viewer or in a browser, using an applet called PlaySoundApplet.java. There are two buttons: one to play the sound in a loop and one to stop it. The play() method of an AudioClip object plays the sound, while the stop() method halts a playing clip immediately. Here is the code of the program:

In this example we use AudioClip, which is an interface, so it cannot be instantiated directly. Instead, the getAudioClip() method of the Applet class returns an AudioClip object. There are two versions of getAudioClip():

public AudioClip getAudioClip(URL url) - loads the clip at the given absolute URL
public AudioClip getAudioClip(URL url, String name) - loads the clip named by name, relative to the given base URL

In this example we are using the second version:

audioClip = getAudioClip(getCodeBase(), "TestSnd.wav");

AudioClip provides the following methods:

public abstract void play() - to play the sound only once
public abstract void loop() - to play the sound in a loop
public abstract void stop() - to stop the playing sound

Here is the HTML code:

<APPLET CODE="PlaySoundApplet" WIDTH="200" HEIGHT="300"></APPLET>
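The program listing announced above ("Here is the code of the program") is missing from the page. Below is a minimal reconstruction, assuming only what the text describes: a two-button layout (Loop and Stop) and the file TestSnd.wav loaded relative to the code base. Note that this targets the legacy java.applet API, which was deprecated in Java 9 and removed in Java 17, so it compiles and runs only on older JDKs with the applet viewer.

```
// PlaySoundApplet.java -- reconstruction of the missing listing (legacy java.applet API).
import java.applet.Applet;
import java.applet.AudioClip;
import java.awt.Button;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

public class PlaySoundApplet extends Applet implements ActionListener {
    private AudioClip audioClip;
    private Button loopButton = new Button("Loop");
    private Button stopButton = new Button("Stop");

    public void init() {
        // Load TestSnd.wav from the directory the applet was loaded from.
        audioClip = getAudioClip(getCodeBase(), "TestSnd.wav");
        loopButton.addActionListener(this);
        stopButton.addActionListener(this);
        add(loopButton);
        add(stopButton);
    }

    public void actionPerformed(ActionEvent e) {
        if (e.getSource() == loopButton) {
            audioClip.loop();  // play the sound repeatedly
        } else {
            audioClip.stop();  // stop playback immediately
        }
    }

    public void stop() {
        // Called when the browser leaves the page: make sure the sound stops too.
        if (audioClip != null) {
            audioClip.stop();
        }
    }
}
```

To try it on a JDK 8 toolchain, compile with javac PlaySoundApplet.java and open the HTML page above with appletviewer.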
Theorists were talking about them as early as the 1960s, but the first confirmed observation of a brown dwarf, an object that’s more than a planet but not quite a star, didn’t come until 1994. Since then, astronomers have been making up for lost time, trying their best to understand these cosmic oddballs. Now a team based at the University of Arizona has used not one but two space telescopes to reveal something in a brown dwarf lying 30 light years from Earth that we usually think of as very much a terrestrial phenomenon: weather patterns — including clouds most likely made from particles of iron and sand. The dwarf in question is named (believe it or not) 2MASS J22282889-431026, and the observations, reported in Astrophysical Journal Letters, show an atmosphere in no small amount of turmoil. That’s not surprising: the planet Jupiter, the closest thing in our own Solar System to a brown dwarf, has clouds of ammonia, water vapor and various hydrocarbons, driven around the planet by powerful winds and sucked into gigantic storms like the Great Red Spot that can last for centuries. But 2MASS J22282889-431026 is more than just a Jupiter — it’s probably 30 times more massive, which is close to half as big as it would have to be to burst into the nuclear reactions that would qualify it as a star. Still, at perhaps 1,300°F (700°C) on its gaseous “surface,” and even toastier deep inside thanks to heat left over from its formation billions of years ago, it’s got enough energy to get its atmosphere roiling nicely — and by using the powerful tag team of the Hubble and Spitzer space telescopes, lead author Esther Buenzli and her colleagues managed to get an unprecedentedly good look at the brown dwarf’s infrared glow. That, in turn, provided them with a glimpse inside.
“This is the first time,” she says, “that we’re starting to get a three-dimensional look at its atmosphere.” What they see there is a world that rotates at high speed (once every 90 minutes, compared with ten hours for Jupiter), with bright spots that represent glowing clouds made of water and methane — and, according to theoretical models, vaporized iron and silicates as well. “For some other brown dwarfs,” Buenzli says, “we’ve seen the spectral signature of silicates, which vindicates the theory.” The new observations strengthen that thinking. What the scientists didn’t expect was to see the brightness and concentration of various clouds differ as they used longer infrared wavelengths to peer deeper into 2MASS J22282889-431026. Cloud formations seem to show up at higher altitudes depending on longitude, says Buenzli. “We aren’t quite sure how to interpret it. It might have to do with circulation. There are definitely huge winds, so maybe the winds smear out the clouds at different levels. That’s one possibility,” she says, “but the theorists are still trying to understand it.” At the very least, said one of those theorists, Arizona’s Adam Showman, in a press release, “The data suggest regions on the brown dwarf where the weather is cloudy and rich in silicate vapor … with drier conditions at higher altitudes — and vice versa.” Showman and the rest of the team will soon have more data to work with: the “Extrasolar Storms” project, led by University of Arizona astronomer Daniel Apai, is looking at dozens of brown dwarfs to try to understand how their outer atmospheres change as the bodies themselves age. It’s still too early to conclude much, but, says Buenzli, “we have observations from two more brown dwarfs that are earlier in their evolution. They seem to have thicker clouds, and not so much lag at different altitudes. There really seems to be a difference.”
Looking at the shape of its outside shell, you can easily see why a horseshoe crab (or Limulus polyphemus) got the "horseshoe" part of its name, but what about the "crab"? People once thought the horseshoe crab was a crab, but the animal is, in fact, more closely related to the common garden spider than to a blue crab. Horseshoe crabs, which feed upon worms, small mollusks, and algae, are a living fossil; they have remained essentially unchanged for the past 360 million years!

Picture: Young naturalist inspecting a horseshoe crab shell. The carapace was empty. If this were a live animal, picking it up by the tail could injure the crab. Picture courtesy of Mary Hollinger, NODC biologist, NOAA.

When it is time to mate, sometime in the early summer during high spring tides, which are influenced by the full moon, the female horseshoe crabs will come ashore. The males follow, often even hanging on to the females' backs with special hooks in their first pair of legs. The female horseshoe crab digs a hole in the sand and deposits her strands of dark greenish eggs. The male horseshoe crab then releases his sperm to fertilize the eggs.

Picture: Mating pair of horseshoe crabs. Picture courtesy of Mary Hollinger, NODC biologist, NOAA.

The importance of the horseshoe crab

Many animals depend on the horseshoe crab. For example, they are an important part of the diet of juvenile loggerhead turtles, which summer in and around the Chesapeake Bay. The crab's eggs and larvae are also a significant food source for commercially important finfish and shellfish. As mentioned in the Watershed Radio program, many migrating shore birds time their migrations so that after flying from South America, they arrive at Delaware Bay at the same time as the horseshoe crabs are laying their eggs. The birds depend on the eggs and larvae to complete the last part of their journey to their northern nesting areas.

Picture: Red Knot, a migratory bird.
Migratory birds, like the Red Knot in this photo, depend on the eggs and larvae of the horseshoe crab. After eating the eggs and larvae, the birds are ready for the last part of their long trip to their northern nesting grounds. Picture courtesy of Peter S. Weber. Humans also rely on the horseshoe crab and have found many commercial uses for the animal. Lysate, an extract from their blood, is used worldwide in cancer research and to test for bacterial contamination. Horseshoe crabs are also commercially used as bait in eel and conch fisheries. The high demand for conch and eel, however, has led to an overharvest of the horseshoe crabs in recent years. References and further reading The horseshoe crab. All you ever wanted to know about the horseshoe crab. Excellent drawings and explanations. The National Audubon Society has been working to stop the overfishing of the horseshoe crab and achieve sustainable management practices to protect the horseshoe crabs and the migratory shorebirds. Read more about their horseshoe crab campaign. "Horseshoes anyone?" (a .pdf document) by Tom Barnard and Lyle Varnell in The Virginia Wetlands Report, Winter/Spring 1999, Volume 14, Number 1. The Virginia Wetlands Report is a quarterly publication of the Wetlands Program at the Virginia Institute of Marine Science of the College of William and Mary. Visit the Assateague Naturalist for more pictures and information on the horseshoe crab. Life in the Chesapeake Bay, by Alice Jane Lippson & Robert L. Lippson. The Johns Hopkins University Press, Baltimore and London. 1984. ISBN: 0-8018-3012-5. This illustrated guide to fishes, invertebrates, and plants is an excellent resource for anyone who wants to learn more about life in the Chesapeake Bay. The Seaside Naturalist, A Guide to Study at the Seashore, by Deborah A. Coulombe. Prentice Hall Press, New York. 1984. ISBN: 0-13-797101-X. Another great guide to learn more about wonderful seaside creatures.
The physical realities of global warming
Posted on 26 November 2009 by John Cook

Global warming is happening before our very eyes. All over the world, from the Arctic to Antarctica, scientists are observing the impacts of climate change. In the three years since the IPCC Fourth Assessment Report (AR4) was drafted, hundreds of peer reviewed papers studying climate change have been published. A summary of the latest research has been compiled in The Copenhagen Diagnosis, released by the University of NSW and authored by 26 climate scientists. It is a resource-heavy report, referencing hundreds of papers. Here are some of the highlights:

At a time when we need to be lowering our carbon footprint, global CO2 emissions have been sharply rising. In fact, the acceleration in fossil fuel CO2 emissions is tracking the worst case scenarios used by the IPCC AR4. Consequently, atmospheric CO2 is increasing ten times faster than any rate detected in ice core data over the last 22,000 years.

Figure 1: Observed global CO2 emissions from fossil fuel burning and cement production compared with IPCC emissions scenarios. The coloured area covers all scenarios used to project climate change by the IPCC.

Over the past 25 years, global temperature has warmed at a rate of ~0.2°C per decade. Superimposed over this long-term trend is short-term variability. Most of these short-term variations are due to internal oscillations like the El Niño Southern Oscillation, and to external factors like the 11-year solar cycle and volcanic eruptions. Over periods less than a decade, such short-term variations can outweigh the anthropogenic global warming trend. For example, El Niño events can change global temperature by up to 0.2°C over a few years. The solar cycle imposes warming or cooling of 0.1°C over five years. However, neither El Niño, solar activity nor volcanic eruptions make a significant contribution to long-term climate trends. Consequently, over the past decade (1999-2008), the warming trend is 0.19°C per decade.
This is consistent with the long-term trend.

Figure 2: Global temperature according to NASA GISS data since 1980. The red line shows annual data, the red square shows the preliminary value for 2009, based on January-August. The green line shows the 25-year linear trend (0.19°C per decade). The blue lines show the two most recent ten-year trends (0.18°C per decade for 1998-2007, 0.19°C per decade for 1999-2008).

Satellite and tide-gauge measurements show that sea level rise is accelerating faster than expected. The average rate of rise for 1993-2008 as measured from satellite is 3.4 millimeters per year while the IPCC Third Assessment Report (TAR) projected a best estimate of 1.9 millimeters per year for the same period. Actual sea level rise is 80% higher than the median projection. Sea level is likely to rise much more by 2100 than the often-cited range of 18-59 centimeters from the IPCC AR4.

Figure 3: Sea level change. Tide gauge data are indicated in red and satellite data in blue. The grey band shows the projections of the IPCC Third Assessment report.

Summer-time melting of Arctic sea-ice has accelerated far beyond the expectations of climate models. The area of sea-ice melt during 2007-2009 was about 40% greater than the average prediction from IPCC AR4 climate models. The thickness of Arctic sea ice has also been on a steady decline over the last several decades. September sea ice thickness has been decreasing at a rate of 57 centimeters per decade since 1987.

Figure 4: Observed (red line) and modeled September Arctic sea ice extent in millions of square kilometers. Solid black line gives the average of 13 IPCC AR4 models while dashed black lines represent their range. The 2009 minimum has recently been calculated at 5.10 million km2, the third lowest year on record and still well below the IPCC worst case scenario.

Some more observations from the latest research:
- Recent studies have confirmed the observed trends of more hot extremes and fewer cold extremes.
- Rains have become more intense in already-rainy areas as atmospheric water vapor content increases. Recent changes have occurred faster than predicted by some climate models, raising the possibility that future changes will be more severe than predicted.
- Conversely, there have been observed increases in drought in some latitude bands. The intensification of the global hydrological cycle is expected to lead to further increases in very heavy precipitation in wet areas and increased drought in dry areas.
- Several studies since the IPCC AR4 have found more evidence for an increase in hurricane intensity over the past decades.
- There have been recent increases in the frequency and intensity of wildfires in regions with Mediterranean climates (e.g. Spain, Greece, southern California, south-east Australia) and further marked increases are expected.
- Rapid degradation and upward movement of the permafrost lower limit has continued on the Tibetan plateau. Observations in Europe have noted permafrost thawing and a substantial increase in the depth of the overlying layer.
- The contribution from shrinking glaciers to sea level rise in 2000 was about 0.8 millimeters per year. New estimates show that glacier mass loss has increased 50% and now contributes about 1.2 millimeters per year to global sea levels.
- A number of recent studies reinforce the conclusion that the rates of ice mass loss from the Greenland and Antarctic ice sheets are increasing. Recent observations have shown that changes in the rate of ice discharge can occur far more rapidly than previously suspected. Dynamic ice sheet uncertainties are largely one-sided: they can lead to a faster rate of sea-level rise but are unlikely to significantly slow the rate of rise.
- Observations also show deep-ocean warming is much more widespread in the Atlantic and Southern Oceans than previously appreciated.

There is a common theme emerging from the most recent peer reviewed research.
When uncertainties expressed in the IPCC AR4 report are subsequently resolved, they point to a more rapidly changing and more sensitive climate than previously believed. Skeptics tend to characterise the IPCC as imposing an alarmist bias on its conclusions. The latest empirical data indicates the opposite is the case.
The uncommon wombat The underground lifestyle and nocturnal habits of wombats mean they can be hard to catch sight of, but this time of year provides a great opportunity to spot the world's largest burrowing herbivore. Wombats have a reputation for being muddle-headed, plodding and dozy. But this doziness actually masks a consummate skill for energy conservation — they have exceptionally slow metabolisms and very low water needs. Their metabolism has been described as almost reptilian, allowing them to convert 'grass to wombat' three times more efficiently than kangaroos. The stable temperature in their underground burrows is particularly important, helping them to manage their energy usage efficiently throughout the year. This means that the cooler months, from May to October, are the best time to spot wombats in the daytime. "During the winter, they'll often come up in the afternoons or mornings and have a feed, then head back down and stay in their burrows through the night, because it's just too cold and they can conserve energy," explains Barbara Triggs, a wombat expert since 1972. "In summer they often won't emerge until midnight." By letting body temperature drop while they are snoozing, wombats don't need to consume as much food as other mammals. Their diet is almost exclusively grass, with a nibble of moss, mushrooms or fungus as an occasional treat. They tend to only dig up roots and tubers in dry times. And their efficiency extends to water usage. Wombats are amongst the lowest water consumers of any mammal on earth, needing only 20 per cent of what a sheep requires. They conserve water in all aspects of their lives, including excreting droppings four times drier than a camel's. Like their closest relatives, koalas, they can get almost all the water they need from vegetation. Their dozy reputation hides another surprising wombat characteristic — wombats can turn on the power if they need to. 
The average wombat could win a gold medal in a 100-metre race against a human, as they can maintain 40 kilometres an hour for 150 metres. They've also been seen to leap over metre-high fences and squeeze through gaps only 10 centimetres high. There are three species of wombats, all found exclusively in Australia. The common wombat (Vombatus ursinus) lives in the coastal and highland areas of south-east Australia. The southern hairy-nosed wombat (Lasiorhinus latrifrons) can be found in the arid areas of southern Australia, and the endangered northern hairy-nosed wombat (Lasiorhinus krefftii) is found in central Queensland, although you'll be very lucky to see it, at this or any other time of year. Other info: Wombats are nocturnal animals who spend a lot of time in burrows. But it's not all hiding and snoozing underground! The work in digging a 10 m burrow is equivalent to walking 120 km, and they can maintain a speed of 40 km/hr for 150 m. There are three wombat species in Australia - the common wombat, the southern hairy-nosed wombat and the northern hairy-nosed wombat (which is very rare). The common wombat is solitary and sometimes aggressive, with coarse, dense bristle-like hair. The northern and southern hairy-nosed wombats are more social and more docile. They have angular, blocky heads with softer, silky fur. The uncommon wombat The northern hairy-nosed wombat is only found within three square kilometres of Epping Forest National Park in central Queensland. Dr Alan Horsup, Senior Conservation Officer in the park, calls them the 'uncommon wombat'. "Northern hairy-noses are amongst the top ten most endangered mammals in the world," he says. "To put it into perspective, take the giant panda, the most famous endangered animal in the world. There are 1800 giant pandas left. There are only about 100 of these creatures." "It's the only tropical wombat. We nearly lost it. They were down to perhaps thirty individuals in the 1980s.
If they become extinct, it will be the first large animal extinction since the thylacine. We don't want that to happen." Weighing in at up to 40 kg and 1.3 m long, the 'northerns' are the largest species of wombat. Their fine, silky fur, squared-off heads, pig-like noses and large ears are just part of what differentiates them from common wombats. Geneticists calculate that the commons and the hairy-nosed wombats are less closely related to each other than chimps are to humans. Dr Horsup is developing techniques to establish new populations within the park, and is looking for suitable land to establish a colony outside Epping Forest National Park. "It's really risky having all the animals in just one spot," he says, "A disease outbreak or a bushfire could be a disaster." The Northern Hairy-nosed Wombat Recovery Program is using innovative techniques such as DNA hair analysis and remote 'burrow-cams' to monitor these shy, scarce creatures with minimal disturbance. Limited funding means that the remote park is staffed by volunteers, but the list of those wanting the chance to see one of these exceptional beasts is long. "We've started to see lots of young wombats' signs around tracks and diggings and in videos now," he says. "There are quite a few new burrows. It's pretty encouraging." Over the next few months, if you're in South Australia, you might see a joey (a baby wombat) poke its head out of a burrow, or a pouch, for the first time. Unlike the common wombat and the northern hairy-nosed wombat, the southern hairy-nosed wombat has a seasonal, and therefore predictable, breeding cycle. Most southern hairy-noses are born from late August to October. In a never-witnessed journey, the one-gram neonate makes its way to the pouch and attaches to a teat. There may still be an elongated teat in the pouch which is accessed by the previous year's joey, now living outside the pouch.
"Like some other marsupials they can produce the appropriate milk for each offspring from different teats," says Peter Temple-Smith, from Monash University. "Often they'll wean the older one off pretty quickly." The maternal bond remains, however, and the older juvenile wombat will stay around for many months to come. "Sometimes, if they're still in the pouch, you might even catch sight of a two-headed wombat," says Temple-Smith. "While mum's grazing, there'll be a head poking out the other end chewing on a bit of grass as well. They learn from the pouch what's food and what's not." With time, the females leave the joey in the burrow while they go out foraging. And after a while they just come out with her. Wombats don't make good pets! Many people might remember the lovable Fatso, who lived with the local policeman in the old TV series A Country Practice. However, anyone thinking about 'adopting' one needs to think twice. First of all, it's against the law to keep them without a licence. "They're a protected animal," says Gaylene Parker, an animal carer. "They have to be released and it has to be done the right way." Parker has looked after over 500 orphaned wombats and now trains carers. "We've had older confiscated wombats brought to us and 95 per cent of them die because they've been kept with a person, and they've run around the house with them during the day. When it comes to the age where it feels that it should be doing certain things, it doesn't have the confidence and skills to go out into the wild and fend for itself." If someone finds a baby with a mother killed by the side of the road, they should contact WIRES, the RSPCA, National Parks or a vet, she says. In any case, wombats are not the most domestic of animals. "We once had a wombat come through a sliding door at full pace, just leaving a wombat-sized hole in it," says Triggs. "They're not destructive really.
It's just that to get somewhere, it doesn't matter what's in the way, they'll just push it aside rather than go around." And if you're still having second thoughts about having a lovable wombat for a pet, try tapping your knuckles on a common wombat's rump. The shiny patch over their rump marks an area of tank-like toughness formed by matted hair, centimetre-thick skin, cartilage and bone. It's used as a shield to block a tunnel against predators or even to crush the skull of an attacking dingo or fox against the burrow roof. It's like knocking on a door-mat! Wombats — NSW National Parks and Wildlife Factsheet Northern Hairy-nosed Wombat — Queensland Conservation Council fact sheet Common and Southern Hairy-nosed Wombat — Queensland Conservation Council fact sheet With thanks to all the people mentioned above, and especially James Woodford, Patsy Davies and Nick Mooney. Published 01 June 2006
Originally posted by ImaFungi reply to post by ButtUglyToad how do numbers occupy space and time? numbers as in, there exists 50 stars... the number of 50 exists? The Space-Time of numbers was assigned back during the days of Aristotle, Plato, Socrates. What numbering system did they use? Roman Numerals! Roman Numerals have NO Zero! Without Zero, they couldn't see the trUth about the Space-Time of numbers so they saw the numbers as objects, like planets, and the space in-between them as the space in-between planets, and they didn't realize that those planets occupy Space & Time and there is a VACCUM in-between them, a "nothingness" of sorts. Sew they assigned the Space-Time of numbers backwards and no one has caught it. Law of Matter - all Matter occupies Space & Time, and Numbers matter and tht which occupies Space, must also occupy Time. Every object in the Universe occupy Space & Time and numbers are nothing more than numerical objects used to mathematically define that which occupies Space & Time, thus, they must match that which they describe or it is like defining water using fire and that toad don't pee! Law of Time - Time is a linear constant, always moving forward in increments of finite and that which occupies Time, must also occupy Space. Past - Present - Future What's the minimum amount of Time the THREE can occupy? Answer: 3 finite Past = -f Present = 0f Future = + f -f + 0f + +f = 3f Zero occupies a finite amount of Space & Time and because the Math Werld has assigned the Space-Time of numbers backwards, that's why they think finite doesn't exist. Once they correct their stoopidity, finite is obvious and then, they will finally realize that .9 to infinity does kNot equal One, because they never accounted for finite in the finite infinity equations they used to prove their hypothesis. edit on 3-2-2012 by ButtUglyToad because: (no reason given)
Science Fair Project Encyclopedia

Otters are aquatic or marine carnivorous mammals, members of the large and diverse family Mustelidae, which also includes weasels, polecats, badgers and others. There are 13 species of otter in 7 genera, with a distribution that is almost worldwide. Otters have a dense layer of very soft underfur (about 1,000 hairs/mm², or ~650,000 hairs/in²) which, protected by their outer layer of long guard hairs, keeps them dry under water and traps a layer of air to keep them warm. Unlike most marine mammals (seals, for example, or whales), otters do not have a layer of insulating blubber, and even the marine sea otter must come ashore regularly to wash its coat in fresh water.

All otters have long, slim, streamlined bodies of extraordinary grace and flexibility, and short limbs; in most cases the paws are webbed. Most have sharp claws to grasp prey, but the short-clawed otter of southern Asia has just vestigial claws, and two closely related species of African otter have no claws at all: these species live in the often muddy rivers of Africa and Asia and locate their prey by touch.

Fish is the primary item in the diet of most otters, supplemented by frogs, crayfish, and crabs; some have become expert at opening shellfish, and others will take any small mammals or birds that happen to be available. To survive in the cold waters where many otters live, the specialised fur is not enough: otters have very high metabolic rates and burn up energy at a profligate pace. Eurasian otters, for example, must eat 15% of their body weight a day; sea otters, 20 to 25%, depending on the temperature. In consequence, otters are very vulnerable to prey depletion: in water as warm as 10°C an otter needs to catch 100 g of fish per hour; less than that and it cannot survive. Most species hunt for 3 to 5 hours a day; nursing mothers up to 8 hours a day.
Northern River Otter

The northern river otter (Lontra canadensis) was one of the major animals hunted and trapped for fur in North America after contact with Europeans. They are among the most playful and active otters, making them a popular exhibit in zoos and aquaria, but unwelcome on agricultural land because they alter river banks for access, sliding, and defense. River otters eat a variety of fish and shellfish, as well as small land mammals and birds. They are 3 to 4 feet (1 m) in length and weigh from 10 to 30 pounds (5 to 15 kg). They were once found all over North America, but are rare or extinct in most places, although flourishing in some locations. Otters are a protected species in some areas, and some places have otter sanctuaries. These sanctuaries help sick and hurt otters to recover.

The sea otter Enhydra lutris is found along the Pacific coast of North America. Their historic range included shallow waters of the Bering Strait and Kamchatka, and as far south as Japan. Sea otters have 1 million hairs per square inch of skin, a rich fur for which they were hunted almost to extinction. By the time they were protected under the 1911 Fur Seal Treaty, there were so few sea otters left that the fur trade had become unprofitable. They eat shellfish and other invertebrates, and are frequently observed using rocks as crude tools to smash open shells. They are 2.5 to 6 feet (1 to 2 m) in length and weigh 25 to 60 pounds (30 kg). Although once near extinction, they have begun to spread again, starting from the California coast.

A subspecies, Lutrogale perspicillata maxwelli (Maxwell's otter), is thought to have lived in the Tigris-Euphrates alluvial salt marsh of Iraq. It has been suggested that this may have become extinct as a result of the large-scale drainage that has taken place since the 1960s.

Otters are also found in Europe.
In the United Kingdom they were common as recently as the 1950s, but are now rare due to the former use of chlorinated hydrocarbon pesticides and as a result of habitat loss. Numbers reached a low point in the 1980s, but with the aid of a number of initiatives, by 1999 numbers were estimated to have recovered to just below 1,000 animals. Under the UK Biodiversity Action Plan it is hoped that by 2010 the otter will have been reintroduced to all the UK rivers and coastal areas that it inhabited in 1960. Roadkill deaths are now one of the significant threats to their reintroduction.

List of species
- European Otter (Lutra lutra)
- Hairy-nosed otter (Lutra sumatrana)
- Speckle-throated otter (Hydrictis maculicollis)
- Smooth-coated otter (Lutrogale perspicillata)
- Northern River otter (Lontra canadensis)
- Southern River otter (Lontra provocax)
- Long-tailed otter (Lontra longicaudis)
- Marine otter (Lontra felina)
- Giant otter (Pteronura brasiliensis)
- African clawless otter (Aonyx capensis)
- Congo clawless otter (Aonyx congicus)
- Oriental small-clawed otter (Amblonyx cinereus)
- Sea otter (Enhydra lutris)

The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Featured Science Paper

Rapid deglaciation of Marguerite Bay, western Antarctic Peninsula in the Early Holocene

This paper reports new glacial geological data that provide evidence for the timing of ice-sheet retreat and thinning at the end of the last glaciation (~10,000 years ago) in Marguerite Bay, on the west coast of the Antarctic Peninsula. We have dated both the length of time rock outcrops have been exposed, which allows us to date the thinning of the ice sheet, and the record from seabed sediments, which allows us to determine how the ice sheet retreated across the continental shelf. The dating shows a surprising pattern. About 9,600 years ago, the ice in Marguerite Bay appears to have thinned very quickly indeed, an observation that turns out to be consistent with several other datasets from the same area (ice-shelf collapse histories, raised beaches and lake sediment cores). Further north, around Vernadsky Station, the ice-sheet retreat was much more gradual. Thus the rapid retreat of ice in Marguerite Bay was rather localised, and probably driven by specific local conditions. Indeed, there is some evidence that the retreat occurred at a time when warmer water was present on the continental shelf, leading us to suggest that the Marguerite Bay ice stream may have been destabilised, at least in part, by changes in the ocean. This finding supports recent observations that suggest a strong oceanic influence on ice-sheet change, and has implications for future stability of other parts of the Antarctic ice sheet.

M.J. Bentley, J.S. Johnson, D.A. Hodgson, T. Dunai, S.P.H.T. Freeman, C. Ó Cofaigh, Quaternary Science Reviews 30 (2011) 3338-3349
|Serial Line IP Implementation for Linux Kernel TCP/IP Stack|
|<<< Previous||Implementation Details||Next >>>|

In the C programming language the I/O space addresses can be accessed using the functions inb() and outb(). The declarations of inb() and outb() are given below.

char inb(char address);
outb(char pattern, char address);

The function inb() reads a byte from the address specified. The function outb() writes the pattern to the address specified. Only the super user can access these functions. The user space needs a higher privilege level to gain access to I/O space. This can be achieved using the function iopl(). The declaration is given below.

int iopl(int level);

The call to iopl() changes the input/output privilege level to the level specified. The program should be compiled with the -O option; O stands for the optimization option.

Before using the UART it must be properly initialized. UART programming can be divided into a number of steps. The first step in initializing the UART is setting the baud rate. The data format register is used for this purpose.

Baud rate = main reference frequency / (16 * divisor)

In the PC a main reference frequency of 1.8432 MHz, generated by an external oscillator, is used. In this case the baud rate equals 115200/divisor. The divisor is stored in the divisor latch register. The divisor latch register is accessed by setting the DLAB bit of the data format register. After writing the divisor latch register the DLAB bit is cleared.

The next step is to define the data format. The data format register can be used for this purpose. Here a format of 8 data bits, one stop bit and no parity is used.

With the interrupt enable register we can control the interrupt requests. Two types of interrupts are possible. In the first case an interrupt occurs when the received data is ready. In the second case an interrupt occurs when the transmitter buffer is empty. In the interrupt enable register, the high-order nibble is always zero and cannot be altered.
The first bit is used to enable interrupts of the first type and the second bit is used to enable interrupts of the second type. If the interrupts are enabled, the corresponding IRQ line will go high whenever an interrupt occurs. COM1 is configured with IRQ4 and COM2 is configured with IRQ3. Using the interrupt identification register we can identify the interrupt; two types of interrupt are possible in this case, as mentioned earlier. The second and third bits of this register are used for interrupt identification. With the modem control register we can set the master interrupt bit to enable the individual interrupts. The serialization status register is used to find out whether a data byte is available in the receiver buffer register. If a data byte is available the corresponding data is retrieved. To transmit data, the data byte is loaded into the transmit hold register once the transmit buffer empty flag of the serialization status register is set. These tests are used to avoid character overrun errors.
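The register arithmetic just described can be sketched numerically. This is a Python stand-in rather than C, since no port I/O is involved: the 1.8432 MHz reference and the DLAB low/high-byte split follow the text, the status-bit positions are those of the standard 8250/16550 line status register (called the serialization status register above) and are an assumption, and all function names are my own.

```python
# Illustrative UART register arithmetic (real access uses inb()/outb()).
MAIN_REFERENCE_HZ = 1_843_200  # 1.8432 MHz external oscillator

def divisor_for(baud):
    """Divisor latch value: baud = reference / (16 * divisor)."""
    return MAIN_REFERENCE_HZ // (16 * baud)

def latch_bytes(divisor):
    """Low and high bytes written to the divisor latch while DLAB is set."""
    return divisor & 0xFF, (divisor >> 8) & 0xFF

# Assumed 8250/16550 status-bit layout for the two tests described above.
DATA_READY = 0x01  # bit 0: a received byte is waiting in the receiver buffer
THR_EMPTY  = 0x20  # bit 5: the transmit hold register is empty

def can_receive(status):
    """True when a data byte is available to read."""
    return bool(status & DATA_READY)

def can_transmit(status):
    """True when it is safe to load the next byte (avoids overrun)."""
    return bool(status & THR_EMPTY)

print(divisor_for(9600))              # 12
print(latch_bytes(divisor_for(300)))  # (128, 1), i.e. divisor 384
print(can_receive(0x61), can_transmit(0x61))  # True True
```

At 115200 baud the divisor is 1, the fastest rate the reference clock allows.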
The top 10 monkeys

There are 264 known monkey species. Here are the best ones.

8. Spider monkey

It goes without saying that if Spider-Man were a monkey, he'd be a spider monkey. Living 80 to 100 feet above the ground in the tropical forests of Central and South America, this monkey has a long prehensile tail with a tactile pad at the tip. Combine that with spindly arms and legs, and you've got a monkey that can exuberantly swing, swoop, and hurtle itself through the air with abandon. The spider monkey genus contains seven species, six of which are endangered and one of which is likely to become endangered.
Natural CO2 seepage sites Natural CO2 seepage will be studied in ECO2 as analogues for CO2 leakage. A number of key processes controlling leakage pathways and impacts on biota will be studied in situ by including natural analogue sites as additional study sites. Moreover, natural analogues will serve as test beds for the development of high-end monitoring techniques. Volcanic CO2 seeps have been studied in detail by ECO2 partners in recent years. These include the Mediterranean Panarea gas seeps located in shallow waters off Panarea Island, the Jan Mayen gas vents located at ~700 m water depth in the North Atlantic, and the CO2 droplet seeps in the Okinawa Trough at ~2000 m water depth. The first sedimentary CO2 seep was discovered recently in the German sector of the North Sea. It is located above the Juist Salt Dome at ~30 m water depth. Water column imaging reveals a large number of gas flares emanating from the seabed while chemical sensors have detected strong acidification of ambient bottom waters probably caused by the injection and dissolution of CO2 gas bubbles. The emitted CO2 seems to originate from a deep sedimentary reservoir, apparently ascending to the surface through fractures formed during salt dome emplacement. These four natural analogues cover a wide range of water depths, geological settings, and biological communities. They will be used to study the ascent of CO2 through sedimentary strata, to decipher the dynamics of gas bubble and droplet plumes, to understand the impact of CO2 on benthic organisms and marine ecosystems, and to test and improve CO2 monitoring techniques.
Fact: our natural resources are limited, and we're using them at an unsustainable rate. So, we need to find new energy sources. As wind energy and solar power become more common, scientists are testing potential new sources of energy to power our homes, workplaces, and industries. Recently, we came across an entertaining (if not mystifying and at times gross) article by How Stuff Works cataloging 10 of the strangest alternative energy sources. Here's a summary of the 10 weirdest energy sources. Read the full post at How Stuff Works.

- Muscle Power: Energy created by human movement.
- Piezoelectricity: Electricity generated by touching metal, such as cell phone buttons while texting.
- Hot Air: Heat from the sun trapped by tall solar updraft towers.
- Methane Emissions: Methane extracted from cow excrement and converted into a high-quality biogas fuel.
- Crude Oil: Derived from industrial yeast and benign strains of E. coli microorganisms.
- Marine Wind Farms: Wind energy, created by wind turbines tethered to the ocean floor.
- Wind-Powered Ships: Cargo ships powered by 13,000-square-foot kites.
- Small Nuclear Reactors: The size of a hot tub, one of these devices could power 20,000 homes.
- Coffee Oil: Depending on the bean, coffee grounds contain enough oil to create biodiesel fuel.
- Mirrored Balloons: Released into the atmosphere, these balloons would transmit solar energy to receiving stations on Earth.

Intrigued? Read about the students, scientists, and research teams pioneering these creative energy sources in the How Stuff Works piece, "10 Wacky Forms of Alternative Energy." And for help conserving the regular old energy we use daily, read the EnerChange blog for tips or contact EnerChange to schedule a free energy assessment.
Look up monthly U.S., Statewide, Divisional, and Regional Temperature, Precipitation, Degree Days, and Palmer (Drought) rankings for 1-12, 18, 24, 36, 48, 60-month, and Year-to-Date time periods. Data and statistics are as of January 1895. Please note, Degree Days are not available for Agricultural Belts.

Colorado Temperature Rankings, October 1962
More information on Climatological Rankings (out of 119 years)

|May - Oct 1962|80th Coldest|1912|Coldest since: 1961|
|May - Oct 1962|37th Warmest|2012|Warmest since: 1960|
Regarding plasma: I understand that ionized hydrogen would separate the proton from the electron. What about heavier gases? Do all or only part of the electrons ionize from the nucleus depending upon the amount of ionizing energy? Thanks in advance.

Doug Broyles

Generally, only some of the electrons will depart. As you guessed, the more protons there are in an atomic nucleus, the more energy is required to strip away all the electrons.

Richard E. Barrans Jr., Ph.D.
PG Research Foundation, Darien, Illinois

Update: June 2012
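The scaling Dr. Barrans describes can be made quantitative for the simplest case. For a hydrogen-like ion (a single remaining electron), the Bohr formula E = 13.6 Z² eV gives the energy needed to strip that last electron; this sketch is an illustration added here, not part of the original answer.

```python
RYDBERG_EV = 13.6  # ionization energy of hydrogen, in eV

def last_electron_ionization_ev(z):
    """Energy to remove the final electron from a bare nucleus of charge Z
    (Bohr model for hydrogen-like ions: E = 13.6 * Z**2 eV)."""
    return RYDBERG_EV * z * z

print(last_electron_ionization_ev(1))  # 13.6  (hydrogen)
print(last_electron_ionization_ev(2))  # 54.4  (He+ -> He2+)
```

So removing helium's second electron takes four times the energy of ionizing hydrogen, and the cost keeps growing quadratically for heavier nuclei; that is why a partially ionized plasma of a heavy gas is so common.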
Many injurious invasives come by water. Asian carp, including the common, black and bighead varieties, were brought to the U.S. in the 1970s as live vacuum cleaners meant to remove algae and suspended matter from ponds. These fish can grow to 100 pounds and will eat just about anything, adapting shockingly well to new environments. They have taken over and now represent 90 percent of the biomass in the Illinois River. Researchers worry that these ravenous, opportunistic fish will reach the Great Lakes and cause real problems for the fragile ecosystem, virtually eliminating biodiversity.
Sep. 12, 2008 We use our hands to play flamenco guitar, crochet a sweater and grip a baseball bat...but how did we get such great dexterity? In this segment of Science Friday, guest host Joe Palca takes a look at how humans came up with such a great hand. AVAILABLE IN ITUNES Michael Pollan Talks Plants and Food We are fascinated by them, frightened by them, and can't live without them -- so... - Surprise: Cockroaches Are Fastidious Groomers! Why do cockroaches spend so much time cleaning themselves? - Insects May Be the Taste of the Next Generation, Report Says Can entomophagy, the eating of insects, help improve the world’s food resources?... - The Superorganism Ira talks with eminent biologist Edward O. Wilson and Bert Holldobler about ants... - Tracking an Amphibious Caterpillar Several newly-discovered species of caterpillar in Hawaii function equally well ... BOOKS BY OUR GUESTS From the Vault Age of Wonder Ira talks with Richard Holmes, author of 'The Age of Wonder: How the Romantic Generation Discovered the Beauty and Terror of Science.'
- Unix, Linux Command

enc - symmetric cipher routines

openssl enc -ciphername

The symmetric cipher commands allow data to be encrypted or decrypted using various block and stream ciphers using keys based on passwords or explicitly provided. Base64 encoding or decoding can also be performed either by itself or in addition to the encryption or decryption.

- -in filename: the input filename, standard input by default.
- -out filename: the output filename, standard output by default.
- -pass arg: the password source. For more information about the format of arg see the PASS PHRASE ARGUMENTS section in openssl(1).
- -salt: use a salt in the key derivation routines. This option should ALWAYS be used unless compatibility with previous versions of OpenSSL or SSLeay is required. This option is only present on OpenSSL versions 0.9.5 or above.
- -nosalt: don't use a salt in the key derivation routines. This is the default for compatibility with previous versions of OpenSSL and SSLeay.
- -e: encrypt the input data: this is the default.
- -d: decrypt the input data.
- -a: base64 process the data. This means that if encryption is taking place the data is base64 encoded after encryption. If decryption is set then the input data is base64 decoded before being decrypted.
- -A: if the -a option is set then base64 process the data on one line.
- -k password: the password to derive the key from. This is for compatibility with previous versions of OpenSSL. Superseded by the -pass argument.
- -kfile filename: read the password to derive the key from the first line of filename. This is for compatibility with previous versions of OpenSSL. Superseded by the -pass argument.
- -S salt: the actual salt to use: this must be represented as a string comprised only of hex digits.
- -K key: the actual key to use: this must be represented as a string comprised only of hex digits. If only the key is specified, the IV must additionally be specified using the -iv option. When both a key and a password are specified, the key given with the -K option will be used and the IV generated from the password will be taken. It probably does not make much sense to specify both key and password.
- -iv IV: the actual IV to use: this must be represented as a string comprised only of hex digits. When only the key is specified using the -K option, the IV must explicitly be defined. When a password is being specified using one of the other options, the IV is generated from this password.
- -p: print out the key and IV used.
- -P: print out the key and IV used then immediately exit: don't do any encryption.
- -bufsize number: set the buffer size for I/O.
- -nopad: disable standard block padding.
- -debug: debug the BIOs used for I/O.

The program can be called either as openssl ciphername or openssl enc -ciphername. A password will be prompted for to derive the key and IV if necessary.

The -salt option should ALWAYS be used if the key is being derived from a password unless you want compatibility with previous versions of OpenSSL and SSLeay. Without the -salt option it is possible to perform efficient dictionary attacks on the password and to attack stream cipher encrypted data. The reason for this is that without the salt the same password always generates the same encryption key. When the salt is being used the first eight bytes of the encrypted data are reserved for the salt: it is generated at random when encrypting a file and read from the encrypted file when it is decrypted.

Some of the ciphers do not have large keys and others have security implications if not used correctly. A beginner is advised to just use a strong block cipher in CBC mode such as bf or des3.

All the block ciphers normally use PKCS#5 padding, also known as standard block padding: this allows a rudimentary integrity or password check to be performed. However, since the chance of random data passing the test is better than 1 in 256 it isn't a very good test. If padding is disabled then the input data must be a multiple of the cipher block length.

All RC2 ciphers have the same key and effective key length. Blowfish and RC5 algorithms use a 128 bit key.
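The salt-and-password key derivation described above can be sketched directly. This mirrors OpenSSL's legacy EVP_BytesToKey with MD5 and a single iteration, which is the scheme the enc command uses for -k/-pass in the versions this page covers; treat the exact digest and iteration count as an assumption for other builds.

```python
import hashlib

def evp_bytes_to_key(password, salt, key_len, iv_len):
    """Derive a key and IV from a password and 8-byte salt
    (legacy OpenSSL EVP_BytesToKey: chained MD5 digests, one round)."""
    derived = b""
    block = b""
    while len(derived) < key_len + iv_len:
        block = hashlib.md5(block + password + salt).digest()
        derived += block
    return derived[:key_len], derived[key_len:key_len + iv_len]

# des3 needs a 24-byte key and an 8-byte IV.
key1, iv1 = evp_bytes_to_key(b"mypassword", b"\x01" * 8, 24, 8)
key2, _   = evp_bytes_to_key(b"mypassword", b"\x02" * 8, 24, 8)
# Same password, different salt: the keys differ, which is what
# defeats precomputed dictionary attacks on the password.
```

Without the salt the derivation depends on the password alone, so identical passwords always yield identical keys; that is exactly the weakness the -salt option exists to prevent.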
bf-cbc           Blowfish in CBC mode
bf               Alias for bf-cbc
bf-cfb           Blowfish in CFB mode
bf-ecb           Blowfish in ECB mode
bf-ofb           Blowfish in OFB mode
cast-cbc         CAST in CBC mode
cast             Alias for cast-cbc
cast5-cbc        CAST5 in CBC mode
cast5-cfb        CAST5 in CFB mode
cast5-ecb        CAST5 in ECB mode
cast5-ofb        CAST5 in OFB mode
des-cbc          DES in CBC mode
des              Alias for des-cbc
des-cfb          DES in CFB mode
des-ofb          DES in OFB mode
des-ecb          DES in ECB mode
des-ede-cbc      Two key triple DES EDE in CBC mode
des-ede          Two key triple DES EDE in ECB mode
des-ede-cfb      Two key triple DES EDE in CFB mode
des-ede-ofb      Two key triple DES EDE in OFB mode
des-ede3-cbc     Three key triple DES EDE in CBC mode
des-ede3         Three key triple DES EDE in ECB mode
des3             Alias for des-ede3-cbc
des-ede3-cfb     Three key triple DES EDE in CFB mode
des-ede3-ofb     Three key triple DES EDE in OFB mode
idea-cbc         IDEA algorithm in CBC mode
idea             Same as idea-cbc
idea-cfb         IDEA in CFB mode
idea-ecb         IDEA in ECB mode
idea-ofb         IDEA in OFB mode
rc2-cbc          128 bit RC2 in CBC mode
rc2              Alias for rc2-cbc
rc2-cfb          128 bit RC2 in CFB mode
rc2-ecb          128 bit RC2 in ECB mode
rc2-ofb          128 bit RC2 in OFB mode
rc2-64-cbc       64 bit RC2 in CBC mode
rc2-40-cbc       40 bit RC2 in CBC mode
rc4              128 bit RC4
rc4-64           64 bit RC4
rc4-40           40 bit RC4
rc5-cbc          RC5 cipher in CBC mode
rc5              Alias for rc5-cbc
rc5-cfb          RC5 cipher in CFB mode
rc5-ecb          RC5 cipher in ECB mode
rc5-ofb          RC5 cipher in OFB mode
aes-[128|192|256]-cbc   128/192/256 bit AES in CBC mode
aes-[128|192|256]       Alias for aes-[128|192|256]-cbc
aes-[128|192|256]-cfb   128/192/256 bit AES in 128 bit CFB mode
aes-[128|192|256]-cfb1  128/192/256 bit AES in 1 bit CFB mode
aes-[128|192|256]-cfb8  128/192/256 bit AES in 8 bit CFB mode
aes-[128|192|256]-ecb   128/192/256 bit AES in ECB mode
aes-[128|192|256]-ofb   128/192/256 bit AES in OFB mode

Just base64 encode a binary file:
openssl base64 -in file.bin -out file.b64

Decode the same file:
openssl base64 -d -in file.b64 -out file.bin

Encrypt a file using triple DES in CBC mode using a prompted password:
openssl des3 -salt -in file.txt -out file.des3

Decrypt a file using a supplied password:
openssl des3 -d -salt -in file.des3 -out file.txt -k mypassword

Encrypt a file then base64 encode it (so it can be sent via mail for example) using Blowfish in CBC mode:
openssl bf -a -salt -in file.txt -out file.bf

Base64 decode a file then decrypt it:
openssl bf -d -salt -a -in file.bf -out file.txt

Decrypt some data using a supplied 40 bit RC4 key:
openssl rc4-40 -in file.rc4 -out file.txt -K 0102030405

The -A option when used with large files doesn't work properly. There should be an option to allow an iteration count to be included. The enc program only supports a fixed number of algorithms with certain parameters. So if, for example, you want to use RC2 with a 76 bit key or RC4 with an 84 bit key you can't use this program.
A "wavepacket" is simply a single composite wave made up of two or more individual waves combining with each other in a consistent fashion over time. Wavepackets are an important concept in quantum mechanics. For an electron, atom, molecule, or any other quantum system, an individual quantum state can be represented as a stationary or "standing" wave whose points of maximum disturbance (peaks and valleys) and points of minimum disturbance (known as nodes) stay fixed. When a group of individual quantum states adds together, the states combine to form a wavepacket. Under certain conditions, the resulting wavepacket can consist of a single peak, causing the quantum system to act as a particle in a well-defined state, instead of the more customary quantum wave spread out in space.
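The superposition just described is easy to demonstrate numerically: summing many waves whose wavenumbers span a narrow band gives a disturbance that is large only where the components are in phase. The particular band and grid below are arbitrary choices for illustration, not values from the original.

```python
import numpy as np

x = np.linspace(-50.0, 50.0, 2001)
wavenumbers = np.linspace(0.8, 1.2, 41)           # narrow band around k = 1
packet = sum(np.cos(k * x) for k in wavenumbers)  # every component peaks at x = 0

# The components reinforce near x = 0 and dephase elsewhere, so the sum
# behaves like a single localized "packet" rather than a spread-out wave.
print(np.abs(packet).argmax())  # 1000, the grid point at x = 0
```

Widening the band of wavenumbers narrows the packet, the same trade-off that underlies the position-momentum uncertainty relation.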
Science Fair Project Encyclopedia

Curvature

Curvature is the amount by which a geometric object deviates from being flat. The word flat might have very different meanings depending on the objects considered (for curves it is a straight line and for surfaces it is a Euclidean plane). In this article we consider the most basic examples: the curvature of a plane curve and the curvature of a surface in Euclidean space. See the links below for further reading.

Curvature of plane curves

For a plane curve C, the curvature at a given point P has a magnitude equal to the reciprocal of the radius of an osculating circle (a circle that "kisses" or closely touches the curve at the given point), and is a vector pointing in the direction of that circle's center. The magnitude of curvature at points on physical curves can be measured in diopters (also spelled dioptre); a diopter has the dimension one per meter. The smaller the radius r of the osculating circle, the larger the magnitude of the curvature (1/r) will be; so that where a curve is "nearly straight", the curvature will be close to zero, and where the curve undergoes a tight turn, the curvature will be large in magnitude. A straight line has curvature 0 everywhere; a circle of radius r has curvature 1/r everywhere.

For a plane curve given parametrically as c(t) = (x(t), y(t)) the curvature is

k = (ẋÿ − ẏẍ) / (ẋ² + ẏ²)^(3/2)

where the dots denote differentiation with respect to t. For a plane curve given implicitly as f(x,y) = 0 the curvature is (up to sign, depending on orientation)

k = (f_y² f_xx − 2 f_x f_y f_xy + f_x² f_yy) / (f_x² + f_y²)^(3/2)

Curvature of space curves

A full treatment of curves embedded in a Euclidean space of arbitrary dimension (a space curve) is given in the article on parametric curves.

Curvature of surfaces in 3-space

For two-dimensional surfaces embedded in R3, there are two kinds of curvature: Gaussian curvature and mean curvature. To compute these at a given point of the surface, consider the intersection of the surface with a plane containing a fixed normal vector at the point.
This intersection is a plane curve and has a curvature; if we vary the plane, this curvature will change, and there are two extremal values - the maximal and the minimal curvature, called the principal curvatures, k1 and k2; the extremal directions are called principal directions. Here we adopt the convention that a curvature is taken to be positive if the curve turns in the same direction as the surface's chosen normal, otherwise negative.

The Gaussian curvature, named after Carl Friedrich Gauss, is equal to the product of the principal curvatures, k1k2. It has the dimension of 1/length² and is positive for spheres, negative for one-sheet hyperboloids and zero for planes. It determines whether a surface is locally convex (when it is positive) or locally saddle (when it is negative).

The above definition of Gaussian curvature is extrinsic in that it uses the surface's embedding in R3, normal vectors, external planes etc. Gaussian curvature is however in fact an intrinsic property of the surface, meaning it does not depend on the particular embedding of the surface; intuitively, this means that ants living on the surface could determine the Gaussian curvature. Formally, Gaussian curvature only depends on the Riemannian metric of the surface. This is Gauss' celebrated Theorema Egregium, which he found while concerned with geographic surveys and mapmaking.

An intrinsic definition of the Gaussian curvature at a point P is the following: imagine an ant which is tied to P with a short thread of length r. She runs around P while the thread is completely stretched and measures the length C(r) of one complete trip around P. If the surface were flat, she would find C(r) = 2πr. On curved surfaces, the formula for C(r) will be different, and the Gaussian curvature K at the point P can be computed as

K = lim(r→0) 3 · (2πr − C(r)) / (πr³)

The mean curvature is equal to the sum of the principal curvatures, k1+k2, over 2. It has the dimension of 1/length.
Mean curvature is closely related to the first variation of surface area; in particular, a minimal surface like a soap film has mean curvature zero and a soap bubble has constant mean curvature. Unlike Gaussian curvature, the mean curvature depends on the embedding: for instance, a cylinder and a plane are locally isometric but the mean curvature of a plane is zero while that of a cylinder is nonzero.

Curvature of space

- Curvature form for the appropriate notion of curvature for vector bundles and principal bundles with connection.
- Curvature of Riemannian manifolds for generalizations of Gauss curvature to higher-dimensional Riemannian manifolds.
- Curvature vector and geodesic curvature for appropriate notions of curvature of curves in Riemannian manifolds, of any dimension.
- Gauss map for more geometric properties of Gauss curvature.
- Gauss-Bonnet theorem for an elementary application of curvature.

The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
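The plane-curve case above can be checked numerically. The sketch below uses the standard parametric formula κ = (ẋÿ − ẏẍ)/(ẋ² + ẏ²)^(3/2) for c(t) = (x(t), y(t)); the helper function and the sample point are mine, not from the article. A circle of radius r should come out with curvature 1/r everywhere, and a straight line with curvature 0.

```python
import math

def parametric_curvature(xd, yd, xdd, ydd):
    """kappa from first (xd, yd) and second (xdd, ydd) derivatives
    of a parametric plane curve c(t) = (x(t), y(t))."""
    return (xd * ydd - yd * xdd) / (xd**2 + yd**2) ** 1.5

# Circle of radius r: c(t) = (r cos t, r sin t), so
# c'(t) = (-r sin t, r cos t) and c''(t) = (-r cos t, -r sin t).
r, t = 2.0, 0.7
k = parametric_curvature(-r * math.sin(t), r * math.cos(t),
                         -r * math.cos(t), -r * math.sin(t))
print(k)  # ≈ 0.5, i.e. 1/r

# Straight line x = t, y = 0: second derivatives vanish, curvature is 0.
print(parametric_curvature(1.0, 0.0, 0.0, 0.0))  # 0.0
```

The result is independent of t, matching the statement that a circle of radius r has curvature 1/r everywhere.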
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.

August 20, 1997

Explanation: Has Orion the Hunter acquired a new weapon? If you turn your head sideways (counterclockwise) you might notice the familiar constellation of Orion, particularly the three consecutive bright stars that make up Orion's belt. But in addition to the stars that compose his sword, Orion appears to have added some sort of futuristic light-saber, possibly in an attempt to finally track down Taurus the Bull. Actually, the bright streak is a meteor from the Perseid Meteor Shower, a shower that put on an impressive display last Tuesday morning, when this photograph was taken. This meteor was likely a small icy pebble shed years ago from Comet Swift-Tuttle that evaporated as it entered Earth's atmosphere.

Authors & editors: NASA Technical Rep.: Jay Norris. Specific rights apply. A service of: LHEA at NASA/GSFC & Michigan Tech. U.
The Element Meitnerium

Atomic Number: 109
Atomic Weight: 278
Melting Point: Unknown
Boiling Point: Unknown
Phase at Room Temperature: Solid
Element Classification: Metal
Period Number: 7
Group Number: 9
Group Name: none
Radioactive and Artificially Produced

What's in a name? Named after the scientist Lise Meitner.

Say what? Meitnerium is pronounced as met-NEAR-ee-um.

History and Uses: Meitnerium was first produced by Peter Armbruster, Gottfried Münzenberg and their team working at the Gesellschaft für Schwerionenforschung in Darmstadt, Germany in 1982. They bombarded atoms of bismuth-209 with ions of iron-58 with a device known as a linear accelerator. This produced atoms of meitnerium-266, an isotope with a half-life of about 3.8 milliseconds (0.0038 seconds), and a free neutron. Meitnerium's most stable isotope, meitnerium-278, has a half-life of about 8 seconds. It decays into bohrium-274 through alpha decay. Since only small amounts of meitnerium have ever been produced, it currently has no uses outside of basic scientific research.

Estimated Crustal Abundance: Not Applicable
Estimated Oceanic Abundance: Not Applicable
Number of Stable Isotopes: 0
Ionization Energy: Unknown
Oxidation States: Unknown
Electron Shell Configuration: 1s2 2s2 2p6 3s2 3p6 3d10 4s2 4p6 4d10 4f14 5s2 5p6 5d10 5f14 6s2 6p6 6d7 7s2
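Half-life arithmetic makes clear why so little meitnerium ever survives to be studied. This is the standard exponential-decay relation applied to the half-lives quoted above; the sketch itself is an illustration, not from the source page.

```python
def fraction_remaining(elapsed_s, half_life_s):
    """Fraction of a radioactive sample left after elapsed_s seconds:
    N(t)/N0 = (1/2) ** (t / half_life)."""
    return 0.5 ** (elapsed_s / half_life_s)

# meitnerium-278 (half-life ~8 s): after three half-lives only 1/8 remains
print(fraction_remaining(24, 8))  # 0.125

# meitnerium-266 (half-life ~3.8 ms): after ten half-lives, under 0.1% is left
print(fraction_remaining(0.038, 0.0038) < 0.001)  # True
```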
Global Climate & Energy Project: Hydrogen
Hydrogen Effects on Climate, Stratospheric Ozone, and Air Pollution

Start Date: January 2004
Mark Z. Jacobson, Civil and Environmental Engineering, David M. Golden, Mechanical Engineering, Stanford University

This project studies the potential effects on climate, stratospheric ozone, and air pollution of converting vehicle fuel and electric power sources, in the U.S. and worldwide, from fossil fuels to hydrogen fuel cells. Changes in technology have environmental implications that must be studied and examined prior to wide-scale adoption. Previous studies and models have not examined the climate response of a transition to hydrogen, the effect of hydrogen on atmospheric aerosols, nor the effect of using wind, coal, and/or natural gas to generate hydrogen. As such, a significant gap in our understanding of the effects of switching to hydrogen still exists. The purpose of this project is to try to fill some of this void with a numerical model that replaces current and future fossil fuel emissions with hydrogen-related emissions in a high-resolution emission inventory. The model then treats gases, aerosols, meteorology, and radiation simultaneously over a three-dimensional global grid that nests down to the urban scale.

Background

About 90% of current H2 emissions originate from oxidation of methane, oxidation of nonmethane hydrocarbons, photolysis of formaldehyde (which originates from methane and isoprene), fossil-fuel combustion (particularly automobiles), and biomass burning. The remaining 10% originates from natural sources. The major losses of hydrogen are dry deposition to soils and oceans, and the chemical reaction H2 + OH -> H2O + H (e.g., Schmidt, 1974). One effect of hydrogen in the stratosphere is that it increases water vapor in the ozone layer. H2O emitted near the surface does not readily penetrate to the stratosphere, but H2 can penetrate readily into the stratosphere, where it can form H2O by the reaction H2 + OH.
This is one of the few sources of water in the stratosphere (e.g., Khalil and Rasmussen, 1990; Dessler et al., 1994; Hurst et al., 1999). Increased water in the stratosphere may increase the occurrence and size of Polar Stratospheric Clouds and stratospheric aerosols, both of which enhance stratospheric ozone reduction in the presence of chlorinated and brominated compounds. This issue will be examined as part of this project.

One mechanism by which increases in H2 may enhance global warming is through a series of reactions that would produce O3. In the troposphere, the loss of OH from H2 + OH would appear beneficial at first since OH is the chemical primarily responsible for breaking down organic gases, which generate ozone in photochemical smog. However, the H created from the same reaction instantaneously converts to HO2 by H + O2 + M -> HO2 + M. HO2 forms ozone in the troposphere by NO + HO2 -> NO2 + OH, followed by NO2 + hv -> NO + O, followed by O + O2 + M -> O3 + M. Since O3 is a greenhouse gas, the increase in H2 may slightly increase near-surface global warming. This mechanism of O3 formation is less important in the stratosphere due to the lesser quantity of NO in the stratosphere than in the troposphere.

Another chemical effect of H2 is that its reaction, H2 + OH -> H2O + H, reduces the rate of the reaction CH4 + OH -> CH3 + H2O because both reactions compete for a limited amount of OH. As a result, the lifetime of methane, CH4, a greenhouse gas, increases.

Activities

(A) Identify the scenarios to consider and all possible changes in emissions associated with each.
(B) To simulate the scenarios defined under Task A, design computer model experiments, run test simulations, and compare results against a large measurement database. Some model improvement will be undertaken.
(C) Run pairs of simulations for each scenario described under Task A.
For each pair, run both a baseline simulation representing current fuel use and a sensitivity simulation representing hydrogen fuel use, where hydrogen is generated from different sources. Approach: For the study, data from emission inventories of vehicles and electric power plants will be replaced with those resulting from hydrogen generation and hydrogen fuel cell use. Base case model predictions will be evaluated against an array of gas, aerosol, and meteorological measurements. Sensitivity studies, in which vehicles and electric power plants are switched to hydrogen, will be analyzed in terms of their resulting effects on climate, stratospheric ozone, and air pollution. The outcome of this study will be a comprehensive assessment of the potential effects on the atmosphere of converting vehicle and electric power sources in the U.S. and worldwide to hydrogen. Figure 1: Schematic of Model Approach 1. Dessler, A.E., E.M. Weinstock, E.J. Hintsa, J.G. Anderson, C.R. Webster, R.D. May, J.W. Elkins, and G.S. Dutton, An examination of the total hydrogen budget of the lower stratosphere, Geophys. Res. Lett., 21, 2563-2566, 1994.
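The OH competition described above can be illustrated with a toy steady-state calculation. This is a sketch only, not the project's model: the rate constants are rough room-temperature literature values, and the concentrations are merely representative.

```python
# Toy steady-state sketch of the H2/CH4 competition for OH described above.
# Rate constants are approximate ~298 K values (cm^3 molecule^-1 s^-1);
# concentrations (molecules cm^-3) are illustrative only.
K_H2_OH = 6.7e-15    # H2  + OH -> H2O + H
K_CH4_OH = 6.4e-15   # CH4 + OH -> CH3 + H2O

def methane_lifetime_s(h2, ch4, p_oh=1.0e6):
    """CH4 lifetime (seconds) when a fixed OH production rate p_oh
    (molecules cm^-3 s^-1) is balanced by losses to H2 and CH4."""
    oh = p_oh / (K_H2_OH * h2 + K_CH4_OH * ch4)   # steady-state [OH]
    return 1.0 / (K_CH4_OH * oh)

CH4 = 4.5e13                                       # ~1.8 ppm at the surface
tau_base = methane_lifetime_s(h2=1.3e13, ch4=CH4)  # ~0.5 ppm H2
tau_doubled = methane_lifetime_s(h2=2.6e13, ch4=CH4)
# More H2 consumes more OH, so the CH4 lifetime lengthens:
assert tau_doubled > tau_base
```

In this simplified picture, doubling H2 lowers the steady-state OH concentration and therefore lengthens the methane lifetime, which is the effect the project aims to quantify with a full three-dimensional model.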
I came across an interesting article today from ScienceBlog on a couple of new research tools aimed at helping developers find what they're looking for in APIs. The tools have been constructed as part of the Natural Programming Project at Carnegie Mellon University. Apatite (which stands for Associative Perusal of APIs That Identifies Targets Easily) is a way of exploring the Java API through association. The idea is to be able to browse the Java API on the strength of classes that commonly go together in programs. Jadeite (which stands for Java Documentation with Extra Information Tacked-on for Emphasis) works on the assumption that developers expect classes to have certain methods. One example provided is for the File class. It might be expected to have a read() method that could be used to read a file. Jadeite allows its users to fill in these expectations through the creation of "placeholders". A placeholder can be used to point developers in the direction of the class they are looking for. In the case of both tools their entries link back to the official Sun Java API documentation. If you want to find out more about the thinking behind the two tools have a look at the research papers Apatite: Associative Browsing of APIs and Improving API Documentation Using API Usage Information.
"A whole number x is odd only if x^2 is an odd whole number." a) Write it as a conditional statement. b) Write its converse. c) Write its contrapositive. "If A and B are sets, and A is a subset of B, then A U B = B." a) Write its converse. b) Write its contrapositive.
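For reference, the standard forms for a conditional statement are:

```latex
\underbrace{P \Rightarrow Q}_{\text{conditional}}, \qquad
\underbrace{Q \Rightarrow P}_{\text{converse}}, \qquad
\underbrace{\neg Q \Rightarrow \neg P}_{\text{contrapositive}}
```

For the first exercise: the conditional is "x odd ⇒ x^2 odd"; the converse is "x^2 odd ⇒ x odd"; the contrapositive is "x^2 even ⇒ x even."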
One of the reasons jellies are invading the oceans is that they can survive environmental changes that have negatively affected other forms of sea life. Did you know jellies have survived for over 500 million years?! They were here even before dinosaurs. The key to this survival is their ability to adapt to changes in the environment and thrive. Jellies appear to be better able to survive in polluted water than other forms of aquatic life. Runoff may be a cause for increases in jelly populations. Excess fertilizer from our yards runs into our waterways, fueling algae blooms and the creation of low oxygen “dead zones” in the Bay and in the ocean. Jellies are able to survive and thrive in these degraded water conditions.
Saving Trees and Ozone How does saving trees affect the ozone layer? Could I please get the answer as soon as possible? There are two different types of ozone in your question. One is bad ozone: trees help lower ozone and pollutant levels in cities. The other is the good ozone layer (15-40 km above the earth's surface) that protects life on earth from harmful UV radiation; trees don't have any big direct impact on this, although oxygen produced by early algae/phytoplankton probably initially helped create it millions of years ago. Anthony R. Brach, Ph.D. Update: June 2012
Stunning news! Not really surprising though (nor necessarily encouraging), but here it is. In the most recent bulletin by the World Meteorological Organization (WMO), issued at the time of writing (Monday, November 21, 2011), data show a steady increase in the amount of greenhouse gases (GHGs) in the atmosphere (PDF). At present, we are at 389 parts per million (ppm) of atmospheric CO2, the highest recorded in the past 10,000 years. This is substantially higher than the 350ppm hoped for by 350.org, a transnational science and advocacy network, but still within the range deemed acceptable by the Fourth Assessment Report (AR-4) of the Intergovernmental Panel on Climate Change – namely 350 – 400ppm, as described in their Summary for Policymakers (PDF). For clarity’s sake, this is the range in which most scientists feel we have the best chance of stabilizing the projected increase in global temperatures at 2 degrees above the current average. Further, as indicated in the bulletin, it is assumed that the 350 – 400 CO2 level would correspond with a similar leveling of other GHGs, such as methane, sulfur hexafluoride, and nitrous oxide, such that we would have a total accumulation of 445 – 490ppm of CO2 equivalent. Hot Like Fire It goes without saying that, even if we took the slightly-less-alarmist AR-4 as the better predictor, we would still be in dire straits. To get to that level, we would need to cut global emissions by 50% – 85% relative to 2000 levels. Further, this assumes that we would not dramatically raise the amount of other GHGs in the atmosphere. While CO2 is usually held up as the bad boy of the atmosphere (and is currently responsible for about 70 – 80% of current anthropogenic warming), data held by the UN Framework Convention on Climate Change (UNFCCC) show that the other GHGs are more powerful warmers, based on ppm, than CO2. (This is due to their longevity, as well as chemical makeup). 
Unfortunately, these show no sign of decreasing and some, like HFCs, are increasing at alarming rates. Theoretically, we still have time to act. Politically, engendering the will to do so is another question. Part of this depends on our disposition to alarming data – will these trends cause apathy, alarm, or disdain? We don’t know to a certainty what all this means, but the problem is too many people take this to mean we don’t know anything. One way or the other, we’re going to find out. We are, after all, to borrow from Roger Revelle, “…carrying out a large-scale geophysical experiment” with the atmosphere. I suppose that, for those skeptical about climate science theories, the logical thing to do is to take the experiment to its conclusion.
Geoscientists sail to Antarctica to study plate tectonics, glaciology and climate change Dr. Richard Alley is a professor of glaciology from Pennsylvania State. “Carbon dioxide is the biggest control knob on the global warming dial,” said Alley, who described how geoscientists use satellites and other geophysical tools to measure the changing thickness of Antarctica’s ice sheets, with an accuracy of “one-third of a potato chip.” [Hey Richard: How do you manage such astounding accuracy when you don't always know where the ice ends and the land begins? Are you just assuming that the land down there somewhere below the ice isn't rising or falling?] July 2012: 'Grand Canyon' Discovered Beneath Antarctic Ice: Discovery News During a surveying trip across Antarctica, scientists discovered a rift six miles across and a mile deep. So what is the size of the carbon dioxide control knob, and what are the relative sizes of the next 100 biggest control knobs?
Hayden Planetarium Programs Frontiers Lecture: Other Earths and Life In the Universe March 11, 2013 Science fiction portrays our Milky Way Galaxy as filled with habitable planets populated by advanced civilizations engaged in interstellar trade, conflict, super-technology, and romance. Back in our real universe, Earth-like planets and extraterrestrial life have proved elusive; not a microbe has been found. Join Geoff Marcy as he discusses the latest on NASA’s new space-borne Kepler telescope which is finding the first Earth-like worlds around other stars. But what properties make a planet suitable for technological life? Could advanced life be more rare than we imagine? More in this Series: June 25, 2013 Join astronomers in the Planetarium as they provide details on how to observe the night sky before heading outside to observe celestial objects. July 11, 2013 On July 11, the sunset will be aligned with the streets of Manhattan. Join astrophysicist Jackie Faherty for a viewing of this special event. July 30, 2013 Explore planets, extrasolar planets, nearby stars, and the myriad galaxies that populate the universe. August 1, 2013 Using documentary aesthetics and based in real scientific data, this sci-fi thriller follows a contemporary mission to Jupiter’s moon Europa to investigate the possible existence of alien life within our solar system. August 27, 2013 Learn to use the familiar zodiac constellations—such as Taurus and Gemini— to locate other planets as they move through their orbits.
(Science: botany) Pertaining to, or resembling, a natural order (Orchidaceae) of endogenous plants of which the genus Orchis is the type. They are mostly perennial herbs having the stamens and pistils united in a single column, and normally three petals and three sepals, all adherent to the ovary. The flowers are curiously shaped, often resembling insects, the odd or lower petal (called the lip) being unlike the others, and sometimes of a strange and unexpected appearance. About one hundred species occur in the United States, but several thousand in the tropics. Over three hundred genera are recognised.
List Parsers are generated by the special predefined parser generator object list_p, which generates parsers recognizing list structures of the type item >> *(delimiter >> item) >> !end where item is an expression, delimiter is a delimiter and end is an optional closing expression. As you can see, the list_p generated parser does not recognize empty lists, i.e. the parser must find at least one item in the input stream to return a successful match. If you wish to also match an empty list, you can make your list_p optional with operator! An example where this utility parser is helpful is parsing comma separated C/C++ strings, which can be easily formulated as: rule<> list_of_c_strings_rule = list_p(confix_p('\"', *c_escape_char_p, '\"'), ',') ; The confix_p and c_escape_char_p parser generators are described here and here. The list_p parser generator object can be used to generate the following different types of List Parsers:
- list_p used by itself parses comma separated lists without special item formatting, i.e. everything in between two commas is matched as an item; no end of list token is matched.
- list_p(delimiter) generates a list parser which recognizes lists with the given delimiter and matches everything in between delimiters as an item; no end of list token is matched.
- list_p(item, delimiter) generates a list parser which recognizes lists with the given delimiter and matches items based on the given item parser; no end of list token is matched.
- list_p(item, delimiter, end) generates a list parser which recognizes lists with the given delimiter, matches items based on the given item parser, and additionally recognizes an optional end expression.
All of the parameters to list_p can be single characters, strings or, if more complex parsing logic is required, auxiliary parsers, each of which is automatically converted to the corresponding parser type needed for successful parsing. 
If the item parser is an action_parser_category type (parser with an attached semantic action) we have to do something special. This happens if the user wrote something like: where item is the parser matching one item of the list sequence and func is a functor to be called after matching one item. If we did nothing, the resulting code would parse the sequence as follows: (item[func] - delim) >> *(delim >> (item[func] - delim)) which in most cases is not what the user expects. (If this is what you expected, then please use one of the list_p generator functions direct(), which will inhibit refactoring of the item parser.) To make the list parser behave as expected: (item - delim)[func] >> *(delim >> (item - delim)[func]) the actor attached to the item parser has to be re-attached to the (item - delim) parser construct, which will make the resulting list parser 'do the right thing'. This refactoring is done with the help of the Refactoring Parsers. Additionally, special care must be taken if the item parser is a unary_parser_category type parser as for instance: which without any refactoring would result in (*anychar_p - ch_p(',')) >> *( ch_p(',') >> (*anychar_p - ch_p(',')) ) and will not give the expected result (the first *anychar_p will eat up all the input up to the end of the input stream). So we have to refactor this into: *(anychar_p - ch_p(',')) >> *( ch_p(',') >> *(anychar_p - ch_p(',')) ) which gives the correct result. The case where the item parser is a combination of the two mentioned problems (i.e. the item parser is a unary parser with an attached action) is handled accordingly too: will be parsed as expected: (*(anychar_p - ch_p(',')))[func] >> *( ch_p(',') >> (*(anychar_p - ch_p(',')))[func] ) The required refactoring is implemented with the help of the Refactoring Parsers. The list_parser.cpp sample shows the usage of the list_p utility parser; it is part of the Spirit distribution. 
Copyright © 2001-2003 Hartmut Kaiser Use, modification and distribution is subject to the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
October 23, 2005 A credit card number must be from 13 to 16 digits long. The last digit of the number is the check digit. That number is calculated from an algorithm (called the Luhn formula or MOD 10) on the other numbers. This is to spot typos when a user enters a number, and, I assume, was to allow detecting an error reading the magnetic stripe when a card is swiped. The MOD 10 check does not offer security, it offers error detection. Think of it as fulfilling the same role as a CRC in software. To calculate the check digit:
1. Drop the last digit from the card number (because that's what we are trying to calculate).
2. Reverse the number.
3. Multiply all the digits in odd positions (the first digit, the third digit, etc.) by 2. If any one is greater than 9, subtract 9 from it.
4. Sum those numbers up.
5. Add the even-numbered digits (the second, fourth, etc.) to the number you got in the previous step.
6. The check digit is the amount you need to add to that number to make a multiple of 10. So if you got 68 in the previous step, the check digit would be 2.
You can calculate the digit in code using checkdigit = ((sum / 10 + 1) * 10 - sum) % 10 (with integer division). For an example of this in practice download the code to the credit card number generator. Credit card numbers are a special type of ISO 7812 numbers.
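The steps above can be sketched in a few lines of Python (a sketch of the standard Luhn algorithm, not the downloadable generator code mentioned in the post):

```python
def luhn_check_digit(partial: str) -> int:
    """Check digit for a card number supplied WITHOUT its last digit,
    following the steps above: reverse, double odd positions, sum."""
    total = 0
    for i, c in enumerate(reversed(partial)):
        d = int(c)
        if i % 2 == 0:          # 1st, 3rd, ... digit of the reversed number
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10   # amount needed to reach a multiple of 10

def luhn_valid(number: str) -> bool:
    """True if the full number's last digit matches its Luhn check digit."""
    return luhn_check_digit(number[:-1]) == int(number[-1])
```

For the worked example in the text, a running sum of 68 gives `(10 - 68 % 10) % 10`, i.e. a check digit of 2, matching the post's formula.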
Understanding Transport Models SOAP is not a completely transparent solution, and Web service developers must look under the covers of SOAP, at the lower-level transport mechanisms and models to inspect how they are implemented. In the simple case, the platform will automatically generate the code and SOAP messaging constructs for you, making it easy to develop Web services. Generally, this works when the developer doesn't have to do any customization of the service. When customization is required, often the developer works directly with the SOAP messages and possibly the XML content underneath. Thus, developers need to understand the SOAP and XML layers. When designing your Web services interface, remember that the use of XML by itself does not guarantee interoperability. XML is not the "silver bullet" for solving integration problems. There is still a need for businesses to communicate and agree on the vocabulary that will be used in Web service interactions. Just introducing XML into your architecture won't guarantee this. XML is simply a language or syntax for describing data. XML by itself does not provide the semantics for understanding the data. Spoken languages share many of the same characters. However, an English speaker would not be able to understand text written in German. They share an alphabet (i.e., the syntax), but how the alphabet is interpreted (semantics) is different. Even if you come to an agreement on meaning, it does not necessarily follow that you will share the same syntax, or language, with your partners. For example, you might decide that <order> represents a customer order while your partner decides to use <purchase_order>. In the real world, you usually cannot force a particular XML schema on your partners. So, what can you do to minimize this coupling? You can consider adopting a common internal format that will shield you from any external dependencies. 
In the case of the customer order, you actually might have more than one partner, each having its own XML representation. You probably cannot force a single XML format, so it's better to build your own internal format that can translate or transform to the partner formats. A technology such as XSLT could be used to perform this transformation. Some Web service platforms provide mechanisms, either at the gateway, Web server, or application server, to set up XML mapping rules between XML documents that are received and the internal representations used. DOM versus SAX Many Web services runtime environments shield the developer from having to worry about lower-level XML parsing and processing. However, in situations where additional flexibility is required or where performance is critical, it might be important for the developer to write custom code or handlers that involve the processing of XML documents. In these instances, choosing between the two XML parsing models, DOM and SAX, will be a critical design decision (see Figure 2). Figure 2: DOM versus SAX. The graphic shows key differences between the two XML-parsing document models. DOM uses a tree-like approach for accessing an XML document while SAX uses more of an event model. With a DOM parser, the XML document is converted into a tree containing the XML content. Subsequently, developers can traverse the DOM tree. There are a number of benefits to this approach. Generally, it is easier to program. A developer simply has to make one call to create the tree, and the navigation APIs are fairly easy to use. It's also fairly easy to add and modify elements in the tree. A common use for DOM is when the XML document is being changed frequently by the service. However, there are a number of disadvantages to using DOM. For one, when you parse an XML document using DOM, you have to parse the entire document. So, there is an up-front performance and memory hit in this approach, especially for very large XML documents. 
SAX uses an event-based model for parsing XML. It works through a set of events that get fired as the XML is parsed. When a given tag is found, a callback method is invoked to indicate that this tag was found. Generally, the SAX approach reduces the overall memory overhead because it is left up to the developer to determine which events they want to handle. SAX can typically increase scalability if you are only processing subsets of the data. But, SAX is more difficult to program than DOM, and the SAX approach doesn't work well if you are required to make multiple passes over the same document. Generally, DOM is appropriate when you are dealing with more document-oriented XML files. SAX fits better when what you locate maps directly to other objects in your system (e.g., mapping from an XML structure to a Java class) or when you want to extract specific tags from your document.
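The two models can be contrasted concretely with Python's standard library (a minimal sketch; the <order> vocabulary is the hypothetical example from the text, not a real service schema):

```python
# DOM vs. SAX on the same tiny document, using Python's standard library.
import xml.dom.minidom
import xml.sax

XML = "<orders><order id='1'>widget</order><order id='2'>gear</order></orders>"

# DOM: the whole document is parsed into an in-memory tree up front,
# then navigated freely -- easy to program, heavier for large documents.
dom = xml.dom.minidom.parseString(XML)
dom_items = [node.firstChild.data
             for node in dom.getElementsByTagName("order")]

# SAX: callbacks fire as tags are encountered; only the events we care
# about are handled, so memory use stays low even for huge documents.
class OrderHandler(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.in_order = False
        self.sax_items = []
    def startElement(self, name, attrs):
        if name == "order":
            self.in_order = True
    def endElement(self, name):
        if name == "order":
            self.in_order = False
    def characters(self, content):
        if self.in_order:
            self.sax_items.append(content)

handler = OrderHandler()
xml.sax.parseString(XML.encode(), handler)
assert dom_items == handler.sax_items == ["widget", "gear"]
```

Both approaches extract the same data; the difference is that the DOM version could now revisit or modify any node of the tree, while the SAX version has already discarded everything except what its callbacks chose to keep.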
It eats insects and it looks like a snake! Kind of like herping, right? On Thursday I drove up to the Sierras in search of one of the coolest plants in the US, the Cobra Lily or California Pitcher Plant (Darlingtonia californica). They are native to the northern Sierras and coastal Oregon, and tend to be found only on cold seeps in serpentine rock formations. Their closest relatives are the pitcher plants in the genus Sarracenia, which are only found in the eastern US and Canada. Their tongues produce lots of nectar in the growing season to attract insect prey - the insects crawl up the tongue into the puffed head of the leaf, where escape is almost impossible due to a rim around the entrance. The insects then wander around until they lose their footing and fall down the back of the pitcher and drown. Unlike many carnivorous plants, Cobra Lilies apparently don't produce digestive enzymes in their fluid pools. Instead, the dead prey is broken down by bacteria and specialized endosymbiotic midge larvae, and the nutrients are absorbed by the plant. They're pretty awesome.
Understanding nature and transferring its traits to technology is not only the objective of bionics, but also of marine biology and microbiology. Bionics, marine biology or microbiology. Here you can find scientific reports and articles about achievements and developments in the fields of bionics, marine biology and microbiology. Technical research departments at many universities and institutes are examining and learning from nature and then collaborating with the fields of bionics, marine biology and microbiology. Although Arnold Gehlen once labeled humanity as a "flawed being" that had to create its own culture to survive nature's environment, we can be certain he had not yet considered the opportunities presented by bionics, marine biology and microbiology. Science is meanwhile using the traits of the flawed being to contemplate how to utilize bionics, marine biology and microbiology to copy animals, plants and the rest of the environment. Because nature features attributes such as the hardest and most durable materials and efficient energy production and conversion, it has become a treasure trove of knowledge for bionics, marine biology and microbiology. As a stand-alone branch of research, science can use bionics to demonstrate that nature is superior to humans in many aspects and that we still have a lot to learn from it, whether in macro or microbiology. The "Bionic Six" comic and animated television series revolved around a family who collaborated with a researcher to utilize the attributes of nature to combat those intent on destroying it. The "Bionic Six" acquired their power and speed through bionics. They knew how to take advantage of the physical forces of nature and were already advancing into the fields of marine biology and microbiology research. Today, bionics is a well-respected field of research that has little to do with children's entertainment. 
Bionics occupies itself with nature's "inventions" and works closely with the fields of marine biology and microbiology to transfer their attributes to the human culture. Bionics has already proved its worth in the fields of materials research and nano technology. Bionics and microbiology have also made progress in areas such as energy production and storage. Marine biology has enjoyed new impetus over the past several years. Although researchers have long been occupied with both fields, marine biology and microbiology were thrust into the public spotlight no later than with the publication of "The Swarm", a novel by German author Frank Schätzing. Over the last year, marine biology and microbiology reports revealed that although scientists have unearthed a wealth of new discoveries in marine biology and microbiology, there remain thousands of undiscovered animal species in both areas. Microbiology is actually a vital part of marine biology since the ocean depths contain not only large animals, but also organisms that cannot be seen with the naked eye. And this is where microbiology comes into play. Marine biology and microbiology are engaged in examining the effects of currents, depths and temperatures on the development and propagation of organisms and animals. For this reason, marine biology and microbiology researchers are working to discover new animal species and organisms, all the while further expanding the depths of geography and science. When marine biology and microbiology come together with bionics, this can result in unimagined discoveries and thus the development of new methods that humans can implement for their own benefit and for the protection of the environment. The latest achievements in the fields of bionics, marine biology and microbiology can be found in innovations-report. Articles and reports from the Life Sciences area deal with applied and basic research into modern biology, chemistry and human medicine. 
Valuable information can be found on a range of life sciences fields including bacteriology, biochemistry, bionics, bioinformatics, biophysics, biotechnology, genetics, geobotany, human biology, marine biology, microbiology, molecular biology, cellular biology, zoology, bioinorganic chemistry, microchemistry and environmental chemistry.
Irrational Functions with Higher Indices The square root function is the inverse of the polynomial y = x2. All of the polynomials whose equation is y = xn, where n is a positive integer, have inverses. Take a moment to use your graphing calculator to look at the graphs of y = x4, y = x6, and y = x8. Do you see that they are very similar to the graph of y = x2? They are symmetric to the y-axis, they pass through the origin, and they never drop below the x-axis. Because they fail the horizontal-line test for 1-1 functions, you must restrict the domain to be x ≥ 0 in order to have an inverse function. Take a look at the graphs of y = x3, y = x5, and y = x7. These graphs are not symmetric to the y-axis, they pass through the origin, they do have output values that are negative, and they do pass the horizontal-line test. These functions do have inverses as they exist, so the domain does not have to be restricted. The domain of a radical function is x ≥ 0 when the index is even and is all real numbers when the index is odd. The inverse of the function f(x) = x^n is f^(-1)(x) = x^(1/n), the nth root of x.
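A quick numerical sanity check of this inverse relationship (a sketch using fractional powers; the real root of a negative number under an odd index has to be handled explicitly because fractional powers of negatives are not real-valued in most languages):

```python
# Check that y = x**n and the nth root undo each other on the stated domains.
def f(x, n):
    return x ** n

def root(x, n):
    """Real nth root: valid for x >= 0, and for x < 0 when n is odd."""
    if x < 0:
        return -((-x) ** (1.0 / n))   # real root for odd n
    return x ** (1.0 / n)

# Even index: the domain is restricted to x >= 0
assert abs(root(f(3.0, 4), 4) - 3.0) < 1e-9
# Odd index: all real numbers are allowed, including negatives
assert abs(root(f(-2.0, 3), 3) + 2.0) < 1e-9
```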
Click on any phrase to play the video at that point.Close I'm Dr. David Hanson, and I build robots with character. And by that, I mean that I develop robots that are characters, but also robots that will eventually come to empathize with you. So we're starting with a variety of technologies that have converged into these conversational character robots that can see faces, make eye contact with you, make a full range of facial expressions, understand speech and begin to model how you're feeling and who you are, and build a relationship with you. I developed a series of technologies that allowed the robots to make more realistic facial expressions than previously achieved, on lower power, which enabled the walking biped robots, the first androids. So, it's a full range of facial expressions simulating all the major muscles in the human face, running on very small batteries, extremely lightweight. The materials that allowed the battery-operated facial expressions is a material that we call Frubber, and it actually has three major innovations in the material that allow this to happen. One is hierarchical pores, and the other is a macro-molecular nanoscale porosity in the material. There he's starting to walk. This is at the Korean Advanced Institute of Science and Technology. I built the head. They built the body. So the goal here is to achieve sentience in machines, and not just sentience, but empathy. We're working with the Machine Perception Laboratory at the U.C. San Diego. They have this really remarkable facial expression technology that recognizes facial expressions, what facial expressions you're making. It also recognizes where you're looking, your head orientation. We're emulating all the major facial expressions, and then controlling it with the software that we call the Character Engine. And here is a little bit of the technology that's involved in that. 
In fact, right now -- plug it from here, and then plug it in here, and now let's see if it gets my facial expressions. Okay. So I'm smiling. (Laughter) Now I'm frowning. And this is really heavily backlit. Okay, here we go. Oh, it's so sad. Okay, so you smile, frowning. So his perception of your emotional states is very important for machines to effectively become empathetic. Machines are becoming devastatingly capable of things like killing. Right? Those machines have no place for empathy. And there is billions of dollars being spent on that. Character robotics could plant the seed for robots that actually have empathy. So, if they achieve human level intelligence or, quite possibly, greater than human levels of intelligence, this could be the seeds of hope for our future. So, we've made 20 robots in the last eight years, during the course of getting my Ph.D. And then I started Hanson Robotics, which has been developing these things for mass manufacturing. This is one of our robots that we showed at Wired NextFest a couple of years ago. And it sees multiple people in a scene, remembers where individual people are, and looks from person to person, remembering people. So, we're involving two things. One, the perception of people, and two, the natural interface, the natural form of the interface, so that it's more intuitive for you to interact with the robot. You start to believe that it's alive and aware. So one of my favorite projects was bringing all this stuff together in an artistic display of an android portrait of science-fiction writer Philip K. Dick, who wrote great works like, "Do Androids Dream of Electric Sheep?" which was the basis of the movie "Bladerunner." In these stories, robots often think that they're human, and they sort of come to life. So we put his writings, letters, his interviews, correspondences, into a huge database of thousands of pages, and then used some natural language processing to allow you to actually have a conversation with him. 
And it was kind of spooky, because he would say these things that just sounded like they really understood you. And this is one of the most exciting projects that we're developing, which is a little character that's a spokesbot for friendly artificial intelligence, friendly machine intelligence. And we're getting this mass-manufactured. We specced it out to actually be doable with a very, very low-cost bill of materials, so that it can become a childhood companion for kids. Interfacing with the Internet, it gets smarter over the years. As artificial intelligence evolves, so does his intelligence. David Hanson's robot faces look and act like yours: They recognize and respond to emotion, and make expressions of their own. Here, an "emotional" live demo of the Einstein robot offers a peek at a future where robots truly mimic humans. David Hanson merges robotics and art to design life-like, social robots that can mimic human expression and emotion.
09.08.10 - A new video shows how NASA maps pine beetle outbreaks, and what impact the beetle damage might have on forest fires. 09.07.10 - In the midst of a difficult fire season in many parts of the world, the United Nations' (UN) Food and Agriculture Organization has launched a new online fire detection system that will help firefighters and natural hazards managers improve response time and resource management. 09.02.10 - Though obscured by clouds, NASA's Terra satellite was able to see smoke coming from an oil platform that caught fire Sept. 2, 2010. 08.13.10 - The MODIS instrument on NASA’s Terra satellite detected two clusters of intense fires when it acquired a photo-like image. 10.01.09 - NASA and the U.S. Forest Service again partnered to obtain visible light, infrared and thermal imagery of California wildfires in response to July 2008 requests from the California Department of Forestry and Fire Protection, the California Governor's Office of Emergency Services and the National Interagency Fire Center. 11.24.09 - NASA's remotely piloted Predator B aircraft, the Ikhana, equipped with an infrared imaging sensor, recently conducted post-burn assessments of two Southern California wildfire sites, the Piute Fire in Kern County and the Station Fire in the Angeles National Forest. 05.08.09 - Researchers from NASA Langley's Science Directorate jumped into action recently, studying wildfire smoke with EPA partners. 04.30.09 - Fires in equatorial Asia are growing more frequent and having a serious impact on the air as well as the land. 04.22.09 - New research shows that the warming effect of aerosols, or small particles in the air, increases with the amount of cloud cover below the aerosols. 03.05.09 - CALIPSO traced vertically through the layers of the atmosphere to study the smoke from Australian bushfires in February. 09.26.08 - MOFFETT FIELD, Calif. 
– Three teams of NASA Ames computer programmers, heat shield engineers and Earth scientists were honored earlier this month by the Federal Laboratory Consortium Far West Region for developing innovative technologies and partnerships. 07.16.08 - A remotely piloted aircraft carrying a NASA sensor is helping to fight more than 300 California wildfires. 07.14.08 - MOFFETT FIELD, Calif. – Gov. Arnold Schwarzenegger visited NASA’s Ames Research Center today to see first-hand how the agency is helping firefighters battle the widespread wildfires raging throughout the state. 07.11.08 - A remotely piloted aircraft carrying a NASA sensor flew over much of California earlier this week, gathering information that will be used to help fight more than 300 wildfires burning within the state. View images of data collected by NASA's unmanned Ikhana aircraft, superimposed over Google Earth terrain data. 07.03.08 - An international team of fire trackers, weather forecasters and various atmospheric scientists puzzles over computer models, satellite tracks and flight charts to determine how fires age. In an effort to better understand the chemical nature of smog and greenhouse gases, scientists from the California Air Resources Board (CARB) are collaborating with NASA scientists who are flying specially configured aircraft -- the DC-8 and the P-3 -- up and down the California coast this month and over the Central Valley at varying altitudes. 06.12.08 - NASA aircraft will follow the trails of smoke plumes from some of Earth's northernmost forest fires, examining their contribution to arctic pollution. 05.27.08 - Clouds serve a valuable role in Earth's climate, and thanks to the A-Train, a closer look at them is possible. 03.17.08 - NASA satellites find that vast quantities of industrial aerosols and smoke from East Asia and Russia travel from one side of the globe to the other.
Country distribution from AmphibiaWeb's database: Brazil IUCN (Red List) status: Least Concern (LC). For Red List information on this species, see the IUCN species account. From the IUCN Red List Species Account: This species is currently known only from southern Bahia, southeastern Brazil. It is also expected to occur in northeastern Minas Gerais and northern Espírito Santo States, due to the proximity and similarity of vegetation types between southern Bahia and these areas. Habitat and Ecology This species is known from temporary ponds in cow pastures at the edges of Atlantic Rain Forest fragments, natural forest clearings, and cacao plantations. Males were found calling from the edges of ponds, or floating in shallow water. Females were found near ponds or on forest leaf-litter. The species is presumed to be a larval developer. No information is currently available. The Atlantic Forest has been subject to substantial deforestation and fragmentation due to historical logging and ongoing large-scale clearance for cattle pasture, and crops such as sugar cane, coffee, and exotic trees, as well as for smallholder agriculture. Complete loss of forest habitat is likely to adversely affect this species, but some degree of degradation and opening of the forest canopy appears likely to actually benefit it. This species is not known to occur in any protected areas. Simon Stuart 2006. Physalaemus erikae. In: IUCN 2012
Climate Change and Utah: The Scientific Consensus As directed by Governor Jon Huntsman’s Blue Ribbon Advisory Council on Climate Change (BRAC), this report summarizes present scientific understanding of climate change and its potential impacts on Utah and the western United States. Prepared by scientists from the University of Utah, Utah State University, Brigham Young University, and the United States Department of Agriculture, the report emphasizes the consensus view of the national and international scientific community, with discussion of confidence and uncertainty as defined by the BRAC. There is no longer any scientific doubt that the Earth’s average surface temperature is increasing and that changes in ocean temperature, ice and snow cover, and sea level are consistent with this global warming. In the past 100 years, the Earth’s average surface temperature has increased by about 1.3°F, with the rate of warming accelerating in recent decades. Eleven of the last 12 years have been the warmest since 1850 (the start of reliable weather records). Cold days, cold nights, and frost have become less frequent, while heat waves have become more common. Mountain glaciers, seasonal snow cover, and the Greenland and Antarctic ice sheets are decreasing in size, global ocean temperatures have increased, and sea level has risen about 7 inches since 1900 and about 1 inch in the past decade. Based on extensive scientific research, there is very high confidence that human-generated increases in greenhouse gas concentrations are responsible for most of the global warming observed during the past 50 years. It is very unlikely that natural climate variations alone, such as changes in the brightness of the sun or carbon dioxide emissions from volcanoes, have produced this recent warming. Carbon dioxide concentrations are now more than 35% higher than pre-industrial levels and exceed the highest natural concentrations over at least the last several hundred thousand years. 
It is likely that increases in greenhouse gas concentrations are contributing to several significant climate trends that have been observed over most of the western United States during the past 50 years. These trends are: (1) a several day increase in the frost-free growing season, (2) an earlier and warmer spring, (3) earlier flower blooms and tree leaf out for many plant species, (4) an earlier spring snowmelt and runoff, and (5) a greater fraction of spring precipitation falling as rain instead of snow. In Utah, the average temperature during the past decade was higher than observed during any comparable period of the past century and roughly 2°F higher than the 100-year average. Precipitation in our state during the 20th century was unusually high; droughts during other centuries have been more severe, prolonged, and widespread. Declines in low-elevation mountain snowpack have been observed over the past several decades in the Pacific Northwest and California. However, clear and robust long-term snowpack trends have yet to emerge in Utah’s mountains. Climate models estimate an increase in the Earth’s average surface temperature of about 0.8°F over the next 20 years. For the next 100 years, the projected increase is between 3° and 7°F, depending on a range of credible estimates of future greenhouse gas emissions. These projections, combined with extensive scientific research on the climate system, indicate that continued warming will take place over the next several decades as a result of prior greenhouse gas emissions. Ongoing greenhouse gas emissions at or above current levels will further alter the Earth’s climate and very likely produce global temperature, sea level, and snow and ice changes greater than those observed during the 20th century. What does this mean for Utah? Utah is projected to warm more than the average for the entire globe and the expected consequences of this warming are fewer frost days, longer growing seasons, and more heat waves.
Studies of precipitation and runoff over the past several centuries and climate model projections for the next century indicate that ongoing greenhouse gas emissions at or above current levels will likely result in a decline in Utah’s mountain snowpack, and the threat of severe and prolonged episodic drought in Utah is real. Preparation for the future impacts of climate variability and change on Utah requires enhanced monitoring and knowledge of Utah’s climate, as well as better understanding of the impacts of weather and climate on the state’s water availability, agriculture, industry, and natural resources.
Climate change impacts have been identified as one of the greatest global threats to coral reef ecosystems. As temperatures rise, mass bleaching and infectious disease outbreaks are likely to become more frequent. Additionally, carbon dioxide (CO2) absorbed into the ocean from the atmosphere has already begun to reduce calcification rates in reef-building and reef-associated organisms by altering sea water chemistry through decreases in pH (ocean acidification). In the long term, failure to address carbon emissions and the resultant impacts of rising temperatures and ocean acidification could make many other coral ecosystem management efforts futile. Climate change and ocean acidification have been identified by many groups as the most important threat to coral reefs on a global basis. In 2007, the Intergovernmental Panel on Climate Change (IPCC) noted that the evidence is now "unequivocal" that the earth's atmosphere and oceans are warming. They concluded that these changes are primarily due to anthropogenic greenhouse gases (i.e., those derived from human activities), especially the accelerating increase in emissions of CO2. While reducing CO2 and other greenhouse gas emissions is vital to stabilize the climate in the long term, excess CO2 already in the atmosphere has changed and will continue to change global climate throughout the next century. Global ocean temperature has risen by 0.74°C (1.3°F) since the late 19th century, causing more frequent and severe bleaching of corals around the world. At the current increasing rate of greenhouse gas emissions, a temperature rise of up to 4.0°C (7.2°F) this century is a distinct possibility. These changes have already had harmful impacts on coral reef ecosystems and will continue to affect coral reef ecosystems globally over the coming century. At the same time, the ocean absorbs approximately one-third of the additional CO2 generated every year by human activities, making the ocean more acidic.
The resulting change to ocean chemistry has important consequences for corals and other marine life, especially other important reef builders. Warming seas and ocean acidification are already affecting reefs by causing mass coral bleaching events and slowing the growth of coral skeletons. Bleaching and infectious disease outbreaks are likely to be more frequent and severe as temperatures rise, increasing coral mortality. Climate changes will have other impacts on marine systems such as sea level rise; altered frequency, intensity, and distribution of tropical storms; altered ocean circulation; and others. All of these impacts will combine, often synergistically, to eliminate important ecosystem function and reduce global biodiversity. For more information: NOAA Coral Reef Conservation Program Goals & Objectives 2010-2015
Picture Scramble #56 | 30 November 2000 The "Spirograph" Nebula Glowing like a multi-faceted jewel, the planetary nebula IC 418 lies about 2,000 light-years from Earth. A planetary nebula represents the final stage in the evolution of a star similar to our Sun. The star at the center of IC 418 was a red giant a few thousand years ago, but then ejected its outer layers into space to form the nebula, which has now expanded to a diameter of about 0.1 light-year. The stellar remnant at the center is the hot core of the red giant, from which ultraviolet radiation floods out into the surrounding gas, causing it to fluoresce. Over the next several thousand years, the nebula will gradually disperse into space, and then the star will cool and fade away for billions of years as a white dwarf. Our own Sun is expected to undergo a similar fate, but fortunately this will not occur until some 5 billion years from now. (Courtesy of NASA and The Hubble Heritage Team, STScI/AURA) CRpuzzles.com. Copyright © 2000-2007 by Calvin J. Hamilton & Randall L. Whipkey. All rights reserved.
Elated by the finding, researchers are looking to mimic nature’s quantum ability to build solar energy collectors that work with near-photosynthetic efficiency. Alán Aspuru-Guzik, an assistant professor of chemistry and chemical biology at Harvard University, heads a team that is researching ways to incorporate the quantum lessons of photosynthesis into organic photovoltaic solar cells. This research is in only the earliest stages, but Aspuru-Guzik believes that Fleming’s work will be applicable in the race to manufacture cheap, efficient solar power cells out of organic molecules. TUNNELING FOR SMELL Quantum physics may explain the mysterious biological process of smell, too, says biophysicist Luca Turin, who first published his controversial hypothesis in 1996 while teaching at University College London. Then, as now, the prevailing notion was that the sensation of different smells is triggered when molecules called odorants fit into receptors in our nostrils like three-dimensional puzzle pieces snapping into place. The glitch here, for Turin, was that molecules with similar shapes do not necessarily smell anything like one another. Pinanethiol [C10H18S] has a strong grapefruit odor, for instance, while its near-twin pinanol [C10H18O] smells of pine needles. Smell must be triggered, he concluded, by some criteria other than an odorant’s shape alone. What is really happening, Turin posited, is that the approximately 350 types of human smell receptors perform an act of quantum tunneling when a new odorant enters the nostril and reaches the olfactory nerve. After the odorant attaches to one of the nerve’s receptors, electrons from that receptor tunnel through the odorant, jiggling it back and forth. In this view, the odorant’s unique pattern of vibration is what makes a rose smell rosy and a wet dog smell wet-doggy. In the quantum world, an electron from one biomolecule might hop to another, though classical laws of physics forbid it. 
In 2007 Turin (who is now chief technical officer of the odorant-designing company Flexitral in Chantilly, Virginia) and his hypothesis received support from a paper by four physicists at University College London. That work, published in the journal Physical Review Letters, showed how the smell-tunneling process may operate. As an odorant approaches, electrons released from one side of a receptor quantum-mechanically tunnel through the odorant to the opposite side of the receptor. Exposed to this electric current, the heavier pinanethiol would vibrate differently from the lighter but similarly shaped pinanol. “I call it the ‘swipe-card model,’” says coauthor A. Marshall Stoneham, an emeritus professor of physics. “The card’s got to be a good enough shape to swipe through one of the receptors.” But it is the frequency of vibration, not the shape, that determines the scent of a molecule. THE GREEN TEA PARTY Even green tea may tie into subtle subatomic processes. In 2007 four biochemists from the Autonomous University of Barcelona announced that the secret to green tea’s effectiveness as an antioxidant—a substance that neutralizes the harmful free radicals that can damage cells—may also be quantum mechanical. Publishing their findings in the Journal of the American Chemical Society, the group reported that antioxidants called catechins act like fishing trollers in the human body. (Catechins are among the chief organic compounds found in tea, wine, and some fruits and vegetables.) Free radical molecules, by-products of the body’s breakdown of food or environmental toxins, have a spare electron. That extra electron makes free radicals reactive, and hence dangerous as they travel through the bloodstream. But an electron from the catechin can make use of quantum mechanics to tunnel across the gap to the free radical. Suddenly the catechin has chemically bound up the free radical, preventing it from interacting with and damaging cells in the body.
Quantum tunneling has also been observed in enzymes, the proteins that facilitate molecular reactions within cells. Two studies, one published in Science in 2006 and the other in Biophysical Journal in 2007, have found that some enzymes appear to lack the energy to complete the reactions they ultimately propel; the enzyme’s success, it now seems, could be explained only through quantum means. QUANTUM TO THE CORE Stuart Hameroff, an anesthesiologist and director of the Center for Consciousness Studies at the University of Arizona, argues that the highest function of life—consciousness—is likely a quantum phenomenon too. This is illustrated, he says, through anesthetics. The brain of a patient under anesthesia continues to operate actively, but without a conscious mind at work. What enables anesthetics such as xenon or isoflurane gas to switch off the conscious mind? Hameroff speculates that anesthetics “interrupt a delicate quantum process” within the neurons of the brain. Each neuron contains hundreds of long, cylindrical protein structures, called microtubules, that serve as scaffolding. Anesthetics, Hameroff says, dissolve inside tiny oily regions of the microtubules, affecting how some electrons inside these regions behave. He speculates that the action unfolds like this: When certain key electrons are in one “place,” call it to the “left,” part of the microtubule is squashed; when the electrons fall to the “right,” the section is elongated. But the laws of quantum mechanics allow for electrons to be both “left” and “right” at the same time, and thus for the microtubules to be both elongated and squashed at once. Each section of the constantly shifting system has an impact on other sections, potentially via quantum entanglement, leading to a dynamic quantum-mechanical dance. It is in this faster-than-light subatomic communication, Hameroff says, that consciousness is born. 
Anesthetics get in the way of the dancing electrons and stop the gyration at its quantum-mechanical core; that is how they are able to switch consciousness off. It is still a long way from Hameroff’s hypothetical (and experimentally unproven) quantum neurons to a sentient, conscious human brain. But many human experiences, Hameroff says, from dreams to subconscious emotions to fuzzy memory, seem closer to the Alice in Wonderland rules governing the quantum world than to the cut-and-dried reality that classical physics suggests. Discovering a quantum portal within every neuron in your head might be the ultimate trip through the looking glass.
Large-scale alteration of natural landscapes has had profound implications for biological diversity. The single biggest contributor to the current extinction crisis is the wholesale destruction of habitats. As habitats are destroyed, formerly contiguous landscapes become fragmented into smaller patches. But what exactly the effects of fragmentation are, independent of habitat destruction, is not always so clear (e.g., Simberloff 2000. What do we really know about fragmentation? Texas Journal of Science 52: S5-S22). The Biological Dynamics of Forest Fragments Project (BDFFP) in the Amazon was started in 1979 and created 11 tropical forest patches ranging from 1 to 100 ha in size. The dynamics of these fragments have been consistently monitored and compared to plots in intact forest. This experiment represents the world's largest, longest-running fragmentation experiment and has told us more about fragmentation than any other study system. In a recent publication by William Laurance and many colleagues involved in this project, they summarize 30 years of data and show how fragmentation affects ecological patterns and processes. Fragments turn out to be very dynamic and defined by change, compared to interior plots. They have higher tree mortality and are much more susceptible to weather events such as storms or droughts. The effects are especially pronounced at the edges of these fragments. The edge community faces high mortality but also has higher tree density. Faunal communities in fragments, and especially near edges, are depauperate. One interesting aspect highlighted by this 30 years of research is that the edge effects are strongly influenced by what is happening around the fragments. The fragment edge effects are sensitive to the composition of the inter-patch matrix, giving managers the opportunity to influence fragment diversity and health by managing the matrix in ways that support fragments.
Thanks to more than 30 years of perseverance by the researchers involved, this experiment gives scientists, managers and policy-makers information to help manage an increasingly fragmented world and to find ways to reduce the negative impacts of habitat destruction. Laurance, W., Camargo, J., Luizão, R., Laurance, S., Pimm, S., Bruna, E., Stouffer, P., Bruce Williamson, G., Benítez-Malvido, J., & Vasconcelos, H. (2010). The fate of Amazonian forest fragments: A 32-year investigation. Biological Conservation. DOI: 10.1016/j.biocon.2010.09.021
Galileo was the first person to attempt to measure the speed of light. In the early 1600s, he and an assistant each stood on a different hilltop with a known distance between them. The plan was for Galileo to open the shutter of a lamp, and then for his assistant to open the shutter of a lamp as soon as he saw the light from Galileo's. Using the distance between the hilltops and his pulse as a timer, Galileo planned to measure the speed of light. He and his assistant tried this with different distances between them, but no matter how far apart they were, Galileo could measure no difference in the amount of time it took the light to travel and concluded that the speed of light was too fast to be measured by this method. He was correct. We now know the speed of light very precisely, and if Galileo and his assistant were on hilltops one mile apart, light would take 0.0000054 seconds to travel from one person to the other. It is understandable that Galileo was unable to measure this with his pulse! In 1676 a Danish astronomer named Ole Rømer was studying the orbits of the moons of Jupiter and making tables to predict when eclipses of the moons would occur. He noticed that when Jupiter and Earth are far apart (near conjunction), the eclipses of the moons occurred several minutes later than when Jupiter and the Earth are closer (near opposition). He reasoned that this could be because of the time light takes to travel from Jupiter to Earth. Rømer found the maximum variation in timing of these eclipses to be 16.6 minutes. He interpreted this to be the amount of time it takes light to travel across the diameter of Earth's orbit. He didn't actually calculate the speed of light, and the diameter of Earth's orbit was not well known in his day. Using his method to calculate the speed of light using the modern value of 300 million kilometers for the distance across Earth's orbit (2 A.U.)
gives a value of approximately 301,204.8 km/s for the speed of light. This is only about 0.5% off the modern known value of the speed of light. In the 1850s, French physicist Léon Foucault measured the speed of light in a laboratory using a light source, a rapidly rotating mirror and a stationary mirror. This method was based on a similar apparatus built by Armand-Hippolyte Fizeau. For the first time the speed of light could be measured on Earth, and the speed of light was measured to very great accuracy. In the 1970s, interferometry was used to get the most accurate value for the speed of light that had been measured yet: 299,792.4562±0.0011 km/s. Then, in 1983, the meter was redefined in the International System of Units (SI) as the distance traveled by light in vacuum in 1/299,792,458 of a second. As a result, the numerical value of the speed of light (c) in meters per second is now fixed exactly by the definition of the meter. Note that this is the speed of light in a vacuum; light always travels more slowly through other materials such as water or glass. For most calculations the value 3.00 × 10⁵ km/s is used.
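Rømer's estimate can be reproduced directly from the two numbers quoted above, the 300-million-kilometer diameter of Earth's orbit and the 16.6-minute timing variation. A quick sketch in Python:

```python
# Roemer's method: light crosses the diameter of Earth's orbit (2 A.U.)
# during the maximum eclipse-timing variation he measured.
diameter_km = 300e6           # modern value for 2 A.U., as used in the text
delay_s = 16.6 * 60           # Roemer's 16.6-minute variation, in seconds
c_estimate = diameter_km / delay_s

c_modern = 299_792.458        # km/s, fixed by the 1983 definition of the meter
error = (c_estimate - c_modern) / c_modern
print(f"{c_estimate:.1f} km/s, {error:.1%} high")  # 301204.8 km/s, 0.5% high
```

The same arithmetic confirms the Galileo figure: one mile (1.609 km) divided by c gives about 0.0000054 seconds, far too short to time with a pulse.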
The Valley of Ten Thousand Smokes is within Katmai National Park, Alaska, and is filled with ash flows from the 1912 eruption of Novarupta. The 1912 eruption was the largest eruption by volume in the 20th century, erupting about 13 cubic kilometers of material. The summit of Mt. Katmai stratovolcano collapsed, forming a lake-filled caldera. The 100 km² valley is filled as deep as 210 m with ash. The Alaska Volcano Observatory continually monitors the area, where there are 5 active volcanoes within 15 km. Image data for the 3-D perspective view were acquired in July 2004, and are located near 58.4 degrees north latitude, 155.5 degrees west longitude. With its 14 spectral bands from the visible to the thermal infrared wavelength region and its high spatial resolution of 15 to 90 meters (about 50 to 300 feet), ASTER images Earth to map and monitor the changing surface of our planet. ASTER is one of five Earth-observing instruments launched Dec. 18, 1999, on Terra. The instrument was built by Japan's Ministry of Economy, Trade and Industry. A joint U.S./Japan science team is responsible for validation and calibration of the instrument and data products. The broad spectral coverage and high spectral resolution of ASTER provides scientists in numerous disciplines with critical information for surface mapping and monitoring of dynamic conditions and temporal change. Example applications are: monitoring glacial advances and retreats; monitoring potentially active volcanoes; identifying crop stress; determining cloud morphology and physical properties; wetlands evaluation; thermal pollution monitoring; coral reef degradation; surface temperature mapping of soils and geology; and measuring surface heat balance. The U.S. science team is located at NASA's Jet Propulsion Laboratory, Pasadena, Calif. The Terra mission is part of NASA's Science Mission Directorate, Washington, D.C. More information about ASTER is available at http://asterweb.jpl.nasa.gov/.
<urn:uuid:10ae739a-97c0-4d97-b721-9a6888e7d347>
3.703125
402
Knowledge Article
Science & Tech.
41.904188
A photograph by Kathy Keatley Garvey captures a honeybee's sting, with its abdominal tissue trailing behind. UC Davis communications specialist Kathy Keatley Garvey in the Department of Entomology said she has taken at least 1 million photos of honeybees in her lifetime, but this snapshot won the first-place gold feature photo award in an Association for Communication Excellence competition...Source (wait for it)... The Sacramento... Bee. Relevant information and additional photos in the sequence at the photographer's post at Bug Squad. The images showed the progression of the sting, but the most interesting part was that the bee's abdominal tissue lingered behind, she said. "As far as I know, nobody's been able to record anything like this," Garvey said. She said the only time she's seen it illustrated was in a textbook. Via BoingBoing, where I found this interesting observation in a comment: Their stingers developed for defending their hives by stinging the rigid bodies of other bees and insects, not for stinging the stretchy skin of mammals. One bee can sting another bee/insect several times. I didn't know that. You learn something every day.
<urn:uuid:257cef91-a942-432a-9328-34003475a2a5>
2.953125
245
Personal Blog
Science & Tech.
53.625255
Science Fair Project Encyclopedia A lake is a body of water, surrounded by land, which is not completely covered by vegetation. The majority of lakes are fresh water, and most lie in the northern hemisphere at higher latitudes. Large lakes are sometimes referred to as "inland seas" and small seas are sometimes referred to as lakes. The term lake is also used to describe a feature such as Lake Eyre, which is dry most of the time but becomes filled under seasonal conditions of heavy rainfall. Many lakes are artificial and are constructed for hydro-electric power supply, recreation (swimming, wind surfing,...), water supply, etc. Finland is known as The Land of the Thousand Lakes and Minnesota is known as The Land of Ten Thousand Lakes. The Great Lakes of North America also have ice age origins. Over 60% of the world's lakes are in Canada; this is because of the deranged drainage system that dominates the country. There are dark basaltic plains on the Moon, similar to lunar maria but smaller, that are called lacus (Latin for "lake"). They were once thought by early astronomers to be literal lakes. - The largest lake in the world is the Caspian Sea. With a surface area of 394,299 sq. km., it has a surface area greater than the next six largest lakes combined. - The largest freshwater lake, and second largest lake altogether, is Lake Superior with a surface area of 82,414 sq. km. - The deepest lake is Lake Baikal in Siberia, with a bottom at 1,741 m (5,712 ft.). - The highest navigable lake is Lake Titicaca, at 3821 m above sea level. It is also the second largest lake in South America. - The world's lowest lake is the Dead Sea, at 396 m (1,302 ft.) below sea level. It is also the lake with the highest salt concentration. - The largest freshwater-lake island is Manitoulin Island on Lake Huron, with a surface area of 2,766 square km. - The largest lake located on an island is Nettilling Lake on Baffin Island.
- Lake Toba on the island of Sumatra is located in what is probably the largest resurgent caldera on Earth. Origin of natural lakes Most lakes are young, as the natural results of erosion will tend to wear away one of the basin sides containing the lake. There are a number of natural processes that can form lakes. A recent tectonic uplift of a mountain range can create bowl-shaped depressions that accumulate water and form lakes. The advance and retreat of glaciers can scrape depressions in the surface where lakes accumulate. Such lakes are common in Scandinavia, Siberia and Canada. Lakes can also form by means of landslides or by glacial blockages. An example of the latter occurred during the last ice age in the state of Washington, when a huge lake formed behind a glacial blockage. When the ice retreated, the result was an immense flood that created the Dry Falls monument at Sun Lakes, Washington. Saline lakes can form where there is no natural outlet or the water evaporates rapidly, and the drainage surface of the water table has a higher than normal salt content. Examples of salt lakes include the Great Salt Lake, the Caspian Sea and the Dead Sea. Small, crescent-shaped lakes called oxbow lakes can form in river valleys as the result of meandering. The slow-moving river forms a sinuous shape as the outer side of each bend is worn away more rapidly than the inner side. Eventually a horseshoe bend is formed and the river cuts through the narrow neck. This gap now forms the main passage for the river and the ends of the bend become silted up. Lake Vostok is an under-ice lake in Antarctica, possibly the largest in the world. The pressure from ice and the internal chemical composition means that if the lake were drilled into, it may result in a fissure and spraying in the same manner as a shaken can of soda. A reservoir (French: réservoir) is an artificial lake created by flooding land behind a dam. Some of the world's largest lakes are reservoirs.
Artificial lakes can also be made deliberately by digging one or by flooding an open-pit mine. To build dams, surveyors have to find river valleys which are deep and narrow; the valley sides can then act as natural walls. The best place for building a dam has to be determined. If necessary, humans have to be rehoused and/or historic sites have to be moved, e.g. the temples of Abu Simbel before the construction of the Aswan Dam, creating Lake Nasser. Lake Mead is North America's largest artificial lake. Lokka is Northern Europe's largest artificial lake, 417 km2 in size. See also: List of reservoirs and dams The change in level of a lake is controlled by the difference between the sources of inflow and outflow, compared to the total volume of the lake. The significant input sources are precipitation onto the lake; runoff carried by streams and channels from the lake's catchment area; groundwater channels and aquifers, and man-made sources from outside the catchment area. Output sources are evaporation from the lake; surface and groundwater flows, and any extraction of lakewater by humans. As climate conditions and human water requirements vary, these will create fluctuations in the lake level. Lakes can be categorized on the basis of their richness of nutrients, which typically affects plant growth. Nutrient-poor lakes are called oligotrophic; they are generally clear and have a low concentration of plant life. Mesotrophic lakes have good clarity and an average level of nutrients. Eutrophic lakes are enriched with nutrients, resulting in good plant growth and possible algal blooms. A hypertrophic lake is a water body that has been highly enriched with nutrients. These lakes typically have poor clarity and are subject to algal blooms. Lakes typically reach this condition due to human activities, such as heavy use of fertilizers in the lake catchment area. Such lakes are of little use, and have a poor ecosystem.
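The lake-level balance described above is simple bookkeeping: the change in volume is total inflow minus total losses. A minimal sketch, with entirely illustrative numbers and variable names of my own choosing:

```python
# Illustrative lake water-balance bookkeeping: net volume change equals
# total inflow minus total losses over a time step (same units for all terms).

def lake_volume_change(precip, runoff, groundwater_in,
                       evaporation, outflow, extraction):
    """Return net volume change for one time step."""
    inflow = precip + runoff + groundwater_in
    losses = evaporation + outflow + extraction
    return inflow - losses

# Hypothetical lake: 12 units of inflow against 10 units of losses.
change = lake_volume_change(precip=4, runoff=6, groundwater_in=2,
                            evaporation=5, outflow=4, extraction=1)
print(change)  # → 2 (the lake level rises)
```

A negative result would correspond to a falling lake level, as in the seasonal drying of a lake like Lake Eyre.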
Abiotic and biotic limnology Limnology divides lakes into three zones: the littoral zone, a sloped area close to land; the open-water zone, where sunlight is abundant; and the deep-water zone, where little sunlight can reach. The depth that light can reach in lakes depends on the density and motion of particles. These particles can be sedimentary or biological in origin and are responsible for the color of the water. Decaying plant matter, for instance, is responsible for a yellow or brown color, while algae result in greenish water. In very shallow water bodies, iron oxides make water reddish brown. Biological particles are algae and detritus. A sediment particle is in suspension if its weight is less than the random turbidity forces acting upon it. The turbidity is a decisive factor in the transparency of the water. Bottom-dwelling detritivorous fish are responsible for turbid waters, because they stir the mud in search of food. Piscivorous fish eat plant-eating (planktonivorous) fish, thus increasing the number of algae (see aquatic trophic cascade). The light depth or transparency is measured by using a Secchi disk, a 20 cm disk with alternating white and black quadrants. The depth at which the disk is no longer visible is the Secchi depth, a measure of transparency. It is commonly used to test for eutrophication. How lakes disappear A lake may gradually fill with sediment until it becomes a wetland, such as a swamp or marsh. An important difference exists between lowland and highland lakes: lowland lakes are more placid, are less rocky/more sedimentary, have a less sloping bottom, and generally contain more plant life. Large waterplants (typically reeds) accelerate this closing process significantly because they trap sediment. Turbid lakes, and lakes with many plant-eating fish, tend to disappear more slowly. A "disappearing" lake (barely noticeable on a human timescale) typically has a water's edge with extensive plant mats.
They become a new habitat for other plants (like peat moss, when conditions are right) and animals, many of which are very rare. Gradually, the lake closes, and young peat may form, forming a fen. In lowland river valleys (allowing the river to meander), the presence of peat is explained by the closing of historical oxbow lakes. In the very last stages of succession, more trees would grow in, eventually turning the wetland into a forest. The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
<urn:uuid:dabb9c81-4422-49a4-b071-cdb1adb3d0cc>
3.65625
1,854
Knowledge Article
Science & Tech.
46.283999
Open Directory - Regional: North America: United States: Science and Environment: Water Resources
Science: Environment: Water Resources
Ground Water Atlas of the United States - Online USGS publication describes U.S. ground-water resources by region. Covers the 50 states, Puerto Rico, and the U.S. Virgin Islands.
RiverWeb's American Bottom Landing Site - Explore information about the environment, history, and cultures of this important part of the Mississippi River valley.
USGS - Arsenic in ground water of the United States - Publications, maps, and data on arsenic in U.S. groundwater.
USGS -- Water Resources of the United States - Information about water resources from the U.S. Geological Survey. Topics include ground water, surface water, water use, water quality, acid rain, toxic substances hydrology.
Copyright © 2013 Netscape
Last update: Wednesday, August 27, 2008 5:31:54 PM EDT
<urn:uuid:92fe5c42-d266-42e7-a251-df610cd31526>
2.90625
241
Content Listing
Science & Tech.
29.166747
“The answer to the oft-asked question of whether a particular extreme weather event is caused by climate change is that it is the wrong question. “All weather events are affected by climate change because the environment in which they occur is warmer and moister than it used to be.” – Kevin Trenberth, head of the Climate Analysis Section at the National Center for Atmospheric Research As I write this on a pleasant Tuesday morning in Oregon, hurricane Sandy has just passed through the North Atlantic coastal states and is ravaging inland areas from New York state to West Virginia on its way to Ontario, Canada. Moisture carried inland by the storm is meeting advancing cold fronts, and massive snowfalls are expected as far south as North Carolina. Inundated with sea- and rainwater, coastal cities from Delaware to Rhode Island have been brought to a standstill, and now face an enormous cleanup job. Sandy is the largest hurricane ever to hit the U.S. North Atlantic coast. The cost of repairing the infrastructure and property damage will likely be three times higher than the $15.6 billion spent cleaning up after the monster storm Irene struck the region in 2011. And, of course, the human toll of the storm – the deaths, disappearances and injuries; the loss of homes, communities and livelihoods; the traumatization of children – is incalculable. Significantly, a few days before Sandy struck the U.S. another storm – a storm of controversy about whether the hurricane’s unusual characteristics could be attributed to global warming – erupted in the blogosphere. That’s because Sandy’s enormous size and power, and the fact that an extremely anomalous high-pressure system over Greenland pushed it westward toward landfall, depended on a confluence of underlying climatic conditions and previously rare weather events that could become more common in our rapidly warming world. 
When this combination of factors was described by meteorological agencies to explain why Sandy was such a dangerous storm (staff members at National Oceanic and Atmospheric Administration called it a “Frankenstorm,” because it was a rare “hybrid” of tropical and North Atlantic cyclones arriving at Halloween), supporters and opponents of the idea of human-induced climate change jumped to make their cases, and a heated, sometimes vehement brouhaha ensued. My take? Along with 98 percent of the world’s climatologists, I consider the fact that humans are heating the planet by emitting greenhouse gases into the atmosphere to be settled science. But exactly what effects can we attribute to this global warming? Does it “cause” outsized hurricanes? Or, as some “global warming skeptics” maintain, are they merely natural phenomena that have and always will occur on this planet, regardless of human activity? Surprisingly, the answer is – quick, grab your cognitive-dissonance shield! – “all of the above.” Of course there have always been hurricanes, some quite powerful and some influenced by distant, unusual weather patterns. Global warming doesn’t necessarily create the conditions under which hurricanes (or other extreme weather events) arise. But it does exacerbate those conditions – and increase the likelihood that an “average-sized” future hurricane will be larger and more powerful than “average-sized” hurricanes of the past. Hurricanes generally become stronger as they pass over warm water and then they weaken over cooler water and land. Globally, September 2012 had the second-highest ocean surface temperatures on record, and the North Atlantic, over which Sandy traveled, is currently about 5 degrees F warmer than average, which helped the hurricane pick up strength and speed at its core as it lumbered northward. Also, global warming means the atmosphere now holds 5 percent or more additional moisture compared with a few decades ago.
A September 2012 National Geographic magazine article reported, “In theory, extra water vapor in the atmosphere should pump heat into big storms such as hurricanes and typhoons, adding buoyancy that causes them to grow in size and power. ... But the jury’s still out on whether any increase has occurred yet.” Sandy generated storm-force winds across its entire 1,100-mile girth as it hit the coast. Is it the first juror to cast a vote in our new era of destabilizing climate – the first superstorm in a century of superstorms? More important, do we want to find out? No one knows exactly what the future will bring, but many climatologists have predicted an age of superstorms, rising tides and unending droughts. Perhaps Sandy just told us that that future has arrived – and we’d better make some changes at our ecological house. Philip S. Wenz, who grew up in Durango and Boulder, now lives in Corvallis, Ore., where he teaches and writes. Reach him by email through his website, www.your-ecological-house.com.
<urn:uuid:6fd989b9-16c3-40b8-bc70-d3984af6876a>
3.375
1,036
Nonfiction Writing
Science & Tech.
38.817374
Posted by jasmine20 on Tuesday, December 26, 2006 at 2:18pm. How do you graph these types of problems? Graph each of the following inequalities: 4x + y (greater than or equal to) 4. Get y by itself on one side and then graph and shade the region depending on the sign (< or >) and make the line dotted if it is not equal to. "Anonymous" is correct. The solid line above which the graph should be shaded is y = -4x + 4. It goes through y = 4 on the y axis and has a slope of -4. ? I MAY SOUND DUMB BUT I DON'T GET IT. i still don't get it 4x + y >= 4 You have to draw a figure that shows which coordinates (x,y) satisfy the inequality and which don't. It's easiest to consider the border line. At the border 4x + y = 4 ---> y = 4 - 4x. You draw this line first. The line itself belongs to the region, as do all points that are above this line, as you can easily see from the inequality. The points below the line don't satisfy the inequality. Do you mean that I only have to have one line on the graph, which the points (0,4) and (1,0) go through? Am I correct or not? Yes. There is only the one line officially on the graph. It will be a solid line because the inequality says greater than or equal to. (It would be dotted if it just said greater than.) However, you need to shade the area above the line that you graph because there are many solutions to the inequality. Does that help? why do you need to shade to graph an inequality
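The shading rule discussed in this thread can be checked numerically: a point lies in the shaded region exactly when it satisfies the inequality, and the boundary points belong to it because the sign is "greater than or equal to". A small sketch (the helper name is my own):

```python
# A point (x, y) is in the shaded region of 4x + y >= 4 exactly when
# it lies on or above the boundary line y = 4 - 4x.

def in_region(x, y):
    return 4 * x + y >= 4

# Points on the boundary line itself satisfy it (hence the solid line):
print(in_region(0, 4), in_region(1, 0))  # → True True
# A point above the line is shaded; one below is not:
print(in_region(2, 0), in_region(0, 0))  # → True False
```

Testing a point like (0, 0) this way is the usual shortcut for deciding which side of the line to shade.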
<urn:uuid:f4c5a7b3-4889-43ba-bdda-685cb99a07bd>
3.53125
630
Q&A Forum
Science & Tech.
81.815294
Freezing genetic samples from plant and animal species is all the rage these days, with projects ranging from San Diego's Frozen Zoo to the UK's Frozen Ark. But New York's American Museum of Natural History recently scored a scientific coup when the U.S. National Park Service signed an agreement to store endangered species samples in the museum's underground lab, which will be one of the largest such repositories in the country. Remember that scary prediction from the Intergovernmental Panel on Climate Change a few months back about sea levels rising by as much as 23 inches in the next 100 years and flooding coastal regions and displacing billions of people? Well, that forecast just got a little bit scarier. Ever wonder what a museum's nebulous "permanent collection" looks like when it's not hanging on the gallery walls? Especially a permanent collection as intensively taxidermied as the American Museum of Natural History's? Photographer Justine Cooper's "Saved by Science" series shows us—one drawer full of dead Yellow Honeyeaters (Lichenostomus flavus) at a time. —John Mahoney Five amazing, clean technologies that will set us free, in this month's energy-focused issue. Also: how to build a better bomb detector, the robotic toys that are raising your children, a human catapult, the world's smallest arcade, and much more.
<urn:uuid:07d7c341-4da8-4d18-addc-8cc9ea65a9ee>
3.0625
280
Content Listing
Science & Tech.
45.230921
Time is not the fourth dimension of spacetime, nor is it an absolute quantity that flows on its own, Slovenian researchers say. Instead, they propose that time is simply a measure denoting the numerical order of change. This new theory is based on the fact that not even famed physicist Albert Einstein believed that time (t) was the fourth dimension. In other words, when we look at space, we shouldn't see three dimensions (3D) plus time, but rather four dimensions (4D). According to the new proposal, time can only be used to measure the numerical order of material change, and not to explain other phenomena that go on in the material world. The new study was conducted by investigators at the Scientific Research Center Bistra (SRCB), in Ptuj. By looking at the Universe from this perspective, the team argues, explaining quantum information transfers becomes a lot easier. A 4D space provides the best possible medium for such transfers. One of the primary arguments in the new proposal is that time has absolutely no primary physical existence. It is, in fact, simply a mathematical value that we use to measure the frequency and speed of an object. In this respect, using t as the value of an X-axis on a graph is incorrect, since we cannot measure time itself. The idea is expanded on in two research papers, one of which was published in the journal Physics Essays. The other will appear in an upcoming issue of the same journal. The proposal basically calls for a paradigm shift in this area of research. The studies argue experts should regard spacetime as having four dimensions of space. The main implication of this is that the Universe is truly timeless, Daily Galaxy reports. “Minkowski space is not 3D + T, it is 4D. ... “This view corresponds better to the physical world and has more explanatory power in describing immediate physical phenomena: gravity, electrostatic interaction, information transfer,” they add.
“The idea of time being the fourth dimension of space did not bring much progress in physics and is in contradiction with the formalism of special relativity,” Sorli goes on to say. “We are now developing a formalism of 3D quantum space based on Planck work. It seems that the Universe is 3D from the macro to the micro level to the Planck volume, which per formalism is 3D,” the investigator adds. “In this 3D space there is no ‘length contraction,’ there is no ‘time dilation.’ What really exists is that the velocity of material change is ‘relative’ in the Einstein sense,” he concludes. via Time Was Never the 4th Dimension - Softpedia.
<urn:uuid:99f0f14d-15ae-4171-8e38-1e5070b8f971>
3.171875
571
Personal Blog
Science & Tech.
48.310012
What happens when an animal that normally gets around quadrupedally - that is, on four limbs - is videotaped walking around on just two of them? It becomes a YouTube star, of course. Ambam, a male western lowland gorilla living at the Port Lympne Wild Animal Park in England, has become a major internet celebrity (to which I'm contributing with this very post) after videos of him getting about bipedally were posted online. Gorillas, like most primates, have dexterous hands with an opposable thumb on their forelimbs. They normally get around on all fours by "knuckle-walking" on their bent fingers. They do have the ability to walk on two legs of course, which is something they do when they are carrying food or other objects in their front arms, or, in the case of males, when they rise up to intimidate other males or potential threats in dominance displays. But normally they don't make a habit of it because gorilla bodies are designed primarily for quadrupedal locomotion. Why Ambam decided that he likes getting around on two legs is anyone's guess, but he seems otherwise normal and healthy. His father and sisters sometimes walked on two legs, so there is some speculation that it might be a genetic predisposition. Others theorize that the bipedal behavior allows Ambam to see the human zoo visitors, who sometimes toss food to him, over his enclosure wall. No matter which way you look at it, it's definitely odd behavior. The videos were recorded as part of a research project on great ape locomotion and have sparked questions in the ongoing debate among anthropologists and primatologists on the origins of bipedalism in humans. Western lowland gorillas are a critically endangered species. (Photo at right is a screenshot taken from this video.)
<urn:uuid:750ea048-cd2b-4b5e-a5c8-6900a05f82c9>
2.703125
375
Personal Blog
Science & Tech.
38.457517
Big Bang Nucleosynthesis
None of the "counting" arguments described above are capable of telling us much about the nature of the dark matter. In particular, these arguments don't help us figure out whether the dark matter is baryonic matter (like gas or dust) or something more exotic. To decide that question we need more information, and one of the strongest pieces of evidence that the dark matter is exotic is Big Bang nucleosynthesis (BBN). Some of the lightest chemical elements in the universe - in particular, deuterium (a heavy isotope of hydrogen), helium-3, helium-4, and lithium-7 - are created in the early moments of the universe, when the whole universe was hotter than the interior of a star. The amounts of each of these nuclei that were formed depend critically on the conditions in the early universe - in particular, the balance between baryonic matter (protons and neutrons) and non-baryonic matter (neutrinos and exotic particles). Based on these ratios, astronomers have concluded that, in the universe as a whole, dark matter outmasses baryonic matter by a factor of almost 10. The basis for this line of argument comes down to a question: how did the various chemical elements of the periodic table form? It turns out that these elements are made in several different ways. Helium is made from hydrogen by nuclear fusion in the core of stars. In the most massive stars, heavier elements such as carbon, oxygen, and even iron are formed in later stages of the star's lifetime. Elements heavier than iron are formed by the heat of exploding stars (supernovae). These processes do not, however, account for the very lightest elements - helium (not all of it can be accounted for by the stars), deuterium (a heavy isotope of hydrogen), lithium, and beryllium. The last three on this list are particularly troublesome because they are actually destroyed within stars, not formed. Where did these elements come from?
According to the Big Bang model, the universe began in an extremely hot and dense state and has spent the last 13 billion years expanding and cooling. For the first second or so of its history, the universe was so hot that atomic nuclei could not form - space was filled with a hot soup of protons, neutrons, electrons, and photons (as well as other, short-lived particles). Occasionally a proton and a neutron may collide and stick together to form a nucleus of deuterium (a heavy isotope of hydrogen), but at such high temperatures these clusters will be broken immediately by high-energy photons. When the universe cools off a bit more, these high-energy photons become rare enough that it becomes possible for deuterium to survive. At this point, a race begins. These deuterium nuclei can keep sticking to more and more protons and neutrons, forming nuclei of helium-3, helium-4, lithium, and beryllium. This process of element-formation is called "nucleosynthesis". The denser protons and neutrons are at this time, the more of these light elements will be formed. As the universe expands, however, the density of protons and neutrons decreases and the process slows down. It turns out, however, that neutrons are unstable (with a lifetime of about 15 minutes) unless they are bound up inside a nucleus. After a few minutes, therefore, the free neutrons will be gone and nucleosynthesis will grind to a halt. That's the race - there is only a small window of time in which nucleosynthesis can take place, and the relationship between the expansion rate of the universe (related to the total matter density) and the density of protons and neutrons (the baryonic matter density) determines how much of each of these light elements is formed in the early universe. Astronomers can use various techniques to study the amount of these light elements that are present in various distant parts of the universe.
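The "race" against free-neutron decay can be illustrated with a back-of-the-envelope calculation. Treating the quoted lifetime of about 15 minutes as an exponential mean lifetime (~880 seconds, a figure I'm supplying, not from the text):

```python
import math

NEUTRON_LIFETIME_S = 880.0  # assumed mean lifetime of a free neutron (~15 min)

def surviving_fraction(t_seconds):
    """Fraction of free neutrons not yet decayed after t seconds."""
    return math.exp(-t_seconds / NEUTRON_LIFETIME_S)

# A few minutes into the nucleosynthesis window, most free neutrons
# are still available to be bound into deuterium and helium:
print(round(surviving_fraction(180), 2))   # → 0.81
# An hour later, essentially none remain free:
print(round(surviving_fraction(3600), 3))  # → 0.017
```

This is why the window matters: the light-element yields are frozen in within minutes, long before any star exists.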
The abundances of these isotopes have led cosmologists to believe that in the universe as a whole, baryonic matter is far outmassed by some kind of exotic, non-baryonic matter. Last updated April 28, 2007
<urn:uuid:5fd8ef04-d1a7-4bc3-a5d2-76c3c798431a>
3.390625
890
Knowledge Article
Science & Tech.
34.496664
Date of this Version Hinkelman, T.M. Foraging challenges: unsuitable prey and limited information. Ph.D. Dissertation. University of Nebraska, Lincoln, Nebraska. 83 pp. Food acquisition is a complicated task. The profitability of potential food items depends on numerous factors, including the spatial distribution, probability of detection and capture, and suitability of the food. Animals faced with such challenges can use relatively simple mechanisms to maximize foraging efficiency. However, mechanisms that maximize foraging efficiency under some ecological conditions (e.g., prey scarcity) may produce ostensibly suboptimal behavior under different ecological conditions (e.g., prey abundance). In the work presented here, we explore two facets of foraging: (1) consuming unsuitable prey, and (2) searching for resources with limited information about resource location. To explore the consequences of consuming unsuitable prey on predator behavior, we first measured the suitability of two aphid species, black bean aphids and pea aphids, for a native predatory insect, the convergent ladybird beetle. Ladybird larvae had lower larval survival, longer developmental times, and lower adult weights on a diet of bean than pea aphids. We found that ladybird larvae killed bean aphids even if pea aphids were abundant, presumably because bean aphids were easier to capture than the pea aphids. Consumption of even a single bean aphid had pronounced short-term (< 1 day) effects on predator behavior. Ladybird larvae had longer handling times, longer bouts of inactivity, shorter bouts of intensive search, and lower patch-leaving tendencies after eating a bean aphid than after eating a pea aphid. The general lethargy from eating bean aphids may reduce the foraging efficiency of ladybird larvae. We built a simulation model to explore the performance of composite search strategies on landscapes where resource distributions ranged from dispersed to clumped. 
The search strategies involved switching between intensive and extensive modes based on either resource encounters or sensory cues. We found that the search strategy based on sensory cues outperformed the search strategy based on resource encounters across all resource distributions and was more robust to changes in the resource distribution. Adviser: Brigitte Tenhumberg
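The encounter-triggered switching rule can be sketched abstractly. This is my own minimal reconstruction of such a composite strategy, not the authors' simulation model: the forager drops into intensive (area-restricted) search after a resource encounter and reverts to extensive search once a fixed give-up time passes without another encounter.

```python
# Minimal encounter-based composite search rule: intensive mode after a
# resource encounter, extensive mode once the give-up time has expired.

GIVE_UP_STEPS = 5  # hypothetical give-up time, in time steps

def next_mode(encountered, steps_since_encounter):
    """Return 'intensive' or 'extensive' for the next time step."""
    if encountered:
        return "intensive"   # resource found: restrict search to this area
    if steps_since_encounter < GIVE_UP_STEPS:
        return "intensive"   # keep working the local patch a while longer
    return "extensive"       # give up and move on in large steps

print(next_mode(True, 0))    # → intensive
print(next_mode(False, 3))   # → intensive
print(next_mode(False, 9))   # → extensive
```

A cue-based strategy, by contrast, would switch on sensory information about nearby resources rather than on the encounter history alone.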
<urn:uuid:a22db75c-ca87-4a63-9314-9443a52c3884>
3.15625
455
Academic Writing
Science & Tech.
22.05753
This module implements the interface to NIST's secure hash algorithm, known as SHA-1. SHA-1 is an improved version of the original SHA hash algorithm. It is used in the same way as the md5 module: use new() to create an sha object, then feed this object with arbitrary strings using the update() method, and at any point you can ask it for the digest of the concatenation of the strings fed to it so far. SHA-1 digests are 160 bits instead of MD5's 128 bits. The following values are provided as constants in the module and as attributes of the sha objects returned by new(): blocksize - the size of the blocks fed into the hash function; this is always 1, a size that allows an arbitrary string to be hashed; digest_size - the size of the resulting digest in bytes, which is always 20. An sha object has the same methods as md5 objects: for example, m.update(a); m.update(b) is equivalent to m.update(a+b).
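The deprecated sha module described here survives in modern Python as hashlib.sha1, which exposes the same update()/digest() interface. A quick sketch of the incremental-hashing behaviour the text describes:

```python
import hashlib

# Feeding data in pieces is equivalent to feeding the concatenation:
m = hashlib.sha1()
m.update(b"hello ")
m.update(b"world")
print(m.hexdigest() == hashlib.sha1(b"hello world").hexdigest())  # → True

# SHA-1 digests are 160 bits = 20 bytes:
print(hashlib.sha1().digest_size)  # → 20
```

As with the old module, the digest can be requested at any point; calling update() afterwards continues hashing from where it left off.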
<urn:uuid:e36f890a-35dd-4c8a-9fd9-0fb8cbfa4c07>
2.75
192
Documentation
Software Dev.
69.684109
What is GPS? GPS stands for Global Positioning System. The Global Positioning System is a group of satellites orbiting the Earth twice a day at an altitude of about 20,000 kilometers (12,000 miles). GPS was designed by the military to locate tanks, planes, and ships. The system has been adopted by the public for navigation and scientific applications. With a GPS unit you can find your latitude, longitude, and elevation any place on Earth. How does it work? GPS satellites continuously broadcast messages on 2 radio frequencies. These messages contain a very accurate time signal, a rough estimate of the satellite's position in space, and a set of coded information that a GPS receiver can decipher. We want to know our latitude, longitude, and elevation. The receiver uses its internal clock and the coded information from each GPS satellite to determine the time it took the signals to reach the receiver. Since the signals travel at the speed of light, the receiver can calculate the distance to each satellite. Once the receiver knows the distances to at least 4 satellites, and their positions, it can determine its clock correction and position on the Earth. All you need is a clear view of the sky (this could be a problem in the woods or city), and a GPS receiver. Simply turn on the receiver and within minutes the receiver calculates your position. Even in the worst weather conditions you can know your location to within 100 meters (about 300 feet). That accuracy is fine for most navigation purposes. But since the motion across faults, such as the San Andreas, is usually less than 5 centimeters (2 inches) per year, the USGS has to use special techniques to get much better accuracy. How does the USGS use GPS to measure fault motion? We want to know how stations near active faults move relative to each other.
When we occupy several stations at the same time, and all stations observe the same satellites, the relative positions of all the stations can be determined very precisely. Often we are able to determine the distances between stations, even over distances of up to several hundred miles, to better than 5 millimeters (about a quarter of an inch). Months or years later we occupy the same stations again. By determining how the stations have moved, we calculate how much strain is accumulating and which faults are slipping.
Where do we work? The USGS uses GPS to measure crustal deformation all over the United States. However, most of the work is concentrated in the western states, where most earthquakes occur and where rates of crustal deformation are high. These web pages contain maps and data for individual "campaigns", or sets of stations that we monitor.
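The distance step described above can be sketched in a couple of lines; the travel time used here is a made-up illustrative value, and a real receiver must also solve for its own clock error:

```python
C = 299_792_458.0  # speed of light in m/s

def pseudorange(travel_time_s: float) -> float:
    """Distance implied by a signal's travel time, ignoring receiver clock error."""
    return C * travel_time_s

# A signal from a GPS satellite ~20,000 km away takes roughly 0.067 s to arrive:
print(round(pseudorange(0.067) / 1000))  # 20086 (km)
```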
Override equals and hashCode in Java
equals() and hashCode() are two fundamental methods declared in the Object class and part of the core Java library. equals() is used to compare objects for equality, while hashCode() generates an integer code corresponding to an object. Both are used extensively in the Java core library, e.g. when inserting and retrieving objects in a HashMap (see how HashMap works in Java for the full story); equals() is also used to avoid duplicates in HashSet and other Set implementations, and everywhere else you need to compare objects. The default implementation of equals() provided by java.lang.Object compares memory locations, returning true only if two reference variables point to the same memory location, i.e. they are essentially the same object. Java recommends overriding equals() and hashCode() if equality is going to be defined logically or via some business logic, and many classes in the Java standard library do override them: String overrides equals(), whose implementation returns true if the contents of two String objects are exactly the same; the Integer wrapper class overrides equals() to perform numerical comparison, etc. Since HashMap and Hashtable in Java rely on equals() and hashCode() for comparing keys and values, Java provides the following rules for overriding equals(). The equals() method in Java must be:
1) Reflexive: an object must be equal to itself.
2) Symmetric: if a.equals(b) is true, then b.equals(a) must be true.
3) Transitive: if a.equals(b) is true and b.equals(c) is true, then a.equals(c) must be true.
4) Consistent: multiple invocations of equals() must return the same value until any of the properties used in the comparison are modified. So if two objects are equal in Java, they remain equal until one of their properties is modified.
5) Null comparison: comparing any object to null must return false and must not result in a NullPointerException.
For example, a.equals(null) must be false. Passing an unknown object, which could be null, to equals() is actually a Java coding best practice to avoid NullPointerException.
Equals and hashCode contract in Java
The equals() method must also follow its contract with hashCode(), as stated below:
1) If two objects are equal by the equals() method, their hashCodes must be the same.
2) If two objects are not equal by the equals() method, their hashCodes may be the same or different.
So that was the basic theory about the equals() method; now we are going to discuss how to override it. Yes, I know you all know this stuff :) but I have seen equals() code that can be improved by following the correct approach. For illustration we will use a Person class and discuss how to write its equals() method.
Steps to override equals method in Java
Here is my approach for overriding equals() in Java, based on the standard approach most Java programmers follow:
1) Do the "this" check -- if it passes, return true.
2) Do the null check -- if the argument is null, return false.
3) Do the instanceof check; if it fails, return false. After some research I found that instead of instanceof we can use the getClass() method for type identification, because the instanceof check also returns true for subclasses, so it is not a strict equality comparison unless that is required by the business logic. The instanceof check is fine, though, if your class is immutable and no one is going to subclass it.
4) Type cast the object; note the sequence: the type check must come before the cast.
5) Compare individual attributes, starting with the numeric attributes, because comparing numeric attributes is fast, and use the short-circuit operator for combining checks.
If the first field does not match, don't try to match the rest of the attributes; just return false. It's also worth remembering to do a null check on each individual attribute before calling equals() on it recursively, to avoid a NullPointerException during the equals check.
Code example of overriding equals method in Java
Let's see a code example based on the approach discussed in the paragraph above (the hashCode() method can be generated by the Eclipse IDE; see my post 5 tips to override hashCode in Java for a detailed example and explanation of overriding hashCode). In such a method we first do the "this" check, which is the fastest check available for equals(), then verify whether the object is null and whether it is of the same type. Only after verifying the type of the object do we cast it to the desired type, to avoid any ClassCastException. While comparing individual attributes we compare numeric attributes first, using the short-circuit operator to avoid further work if the objects are already unequal, and we do null checks on member attributes to avoid NullPointerException.
Common errors while overriding equals in Java
Though equals() and hashCode() are defined in the Object class along with wait(), notify() and notifyAll(), and are a fundamental part of Java programming, I have seen many programmers make mistakes while writing equals(). I recommend that every Java programmer who has just started programming write a couple of equals() and hashCode() methods for their domain or value objects to get a feel for it. Here I am listing some common mistakes I have observed in various equals() methods; if you would like to learn more about common mistakes in Java programming, see my posts Don't use float and double for monetary calculation and Mixing static and non static synchronized method.
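A minimal sketch of such an equals()/hashCode() pair following the steps above; the Person fields (id, name) are assumptions for illustration:

```java
import java.util.Objects;

// Illustrative Person value object; the fields are assumed for the example.
public class Person {
    private final int id;        // numeric field, compared first
    private final String name;

    public Person(int id, String name) {
        this.id = id;
        this.name = name;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;                   // 1) "this" check
        if (obj == null) return false;                  // 2) null check
        if (getClass() != obj.getClass()) return false; // 3) strict type check
        Person other = (Person) obj;                    // 4) cast only after the type check
        // 5) numeric field first, short-circuiting; Objects.equals is null-safe
        return id == other.id && Objects.equals(name, other.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, name);                  // same fields as equals()
    }
}
```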
Now let's see the common mistakes Java programmers make while overriding equals():
1) Instead of overriding the equals() method, the programmer overloaded it. This is the most common error I have seen. The signature of equals() defined in the Object class is public boolean equals(Object obj), but many people unintentionally overload equals() by writing public boolean equals(Person obj), using their own class name instead of Object as the argument type. This error is very hard to detect because of static binding: if you call this method on your class's object it will not only compile but also execute correctly, but if you put your object in a collection, e.g. an ArrayList, and call the contains() method, which is based on equals(Object), it will not be able to find your object. So beware of it. This is also a frequently asked question in Java interviews, as part of overloading vs overriding in Java: how do you prevent this from happening? Thankfully, along with generics, enums, autoboxing and varargs, Java 5 also introduced the @Override annotation, which tells the compiler you are overriding a method so that it can detect this error at compile time. Consistently using the @Override annotation is also a Java best practice.
2) The second mistake I have seen while overriding equals() is not doing null checks on member variables, which ultimately results in a NullPointerException during the equals() invocation. The correct way to call equals() on a member variable is after a null check.
3) The third common mistake is overriding only the equals() method and not hashCode(). You must override both equals() and hashCode(); otherwise your value object will not work as a key in a HashMap, because the working of HashMap is based on both equals() and hashCode() (to read more, see How HashMap works in Java).
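The overloading pitfall from mistake 1 can be demonstrated in a few lines; BrokenPerson is a hypothetical class for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of mistake 1: equals(BrokenPerson) OVERLOADS rather than
// overrides equals(Object), so collections never call it.
public class OverloadPitfall {
    static class BrokenPerson {
        final String name;
        BrokenPerson(String name) { this.name = name; }

        // Compiles fine, but this is an overload; adding @Override here
        // would turn the mistake into a compile-time error.
        public boolean equals(BrokenPerson other) {
            return other != null && name.equals(other.name);
        }
    }

    public static void main(String[] args) {
        List<BrokenPerson> list = new ArrayList<>();
        list.add(new BrokenPerson("Ann"));
        // contains() calls equals(Object), which BrokenPerson never overrode,
        // so an equal-looking object falls back to reference comparison:
        System.out.println(list.contains(new BrokenPerson("Ann"))); // prints false
    }
}
```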
4) The last common mistake programmers make while overriding equals() is not keeping equals() and compareTo() consistent, which is an informal requirement for obeying the Set contract of avoiding duplicates. SortedSet implementations like TreeSet use compareTo() to compare two objects, such as Strings, and if compareTo() and equals() are not consistent, TreeSet will allow duplicates, breaking the Set contract of not having duplicates. To learn more about this issue, see my post Things to remember while overriding compareTo in Java.
Writing JUnit tests for equals method in Java
It is good coding practice to write JUnit test cases to test your equals() and hashCode() methods. Here is my approach for writing JUnit test cases for the equals() method: I write test cases to check the behaviour of equals(), the contract between equals() and hashCode(), and the properties of equals() under different circumstances. You can also use JUnit 4 annotations to write test cases; then you don't need the test prefix on test methods, just the @Test annotation.
testReflexive() tests the reflexive nature of equals().
testSymmetric() verifies the symmetric nature of equals().
testNull() verifies the null comparison and passes if equals() returns false.
testConsistent() verifies the consistent nature of equals().
testNotEquals() verifies that two objects which are not supposed to be equal are actually not equal; having negative test cases in a test suite is mandatory.
testHashCode() verifies that if two objects are equal by equals(), their hashCodes are the same.
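The checks above, written as plain assertions (JUnit 4 would put each one in its own @Test method); the nested Person class is a hypothetical value object for illustration:

```java
import java.util.Objects;

// Plain-assertion sketch of the equals/hashCode contract tests described above.
public class EqualsContractCheck {
    static final class Person {
        final int id;
        final String name;
        Person(int id, String name) { this.id = id; this.name = name; }

        @Override
        public boolean equals(Object obj) {
            if (this == obj) return true;
            if (obj == null || getClass() != obj.getClass()) return false;
            Person other = (Person) obj;
            return id == other.id && Objects.equals(name, other.name);
        }

        @Override
        public int hashCode() { return Objects.hash(id, name); }
    }

    public static void main(String[] args) {
        Person a = new Person(1, "Ann");
        Person b = new Person(1, "Ann");
        Person c = new Person(2, "Bob");

        assert a.equals(a) : "testReflexive";
        assert a.equals(b) && b.equals(a) : "testSymmetric";
        assert !a.equals(null) : "testNull";
        assert a.equals(b) && a.equals(b) : "testConsistent";
        assert !a.equals(c) : "testNotEquals";
        assert a.hashCode() == b.hashCode() : "testHashCode";
        System.out.println("all contract checks passed");
    }
}
```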
This is an important test if you are thinking of using the object as a key in a HashMap or Hashtable.
5 tips on writing equals method in Java
Here are some tips for implementing equals() and hashCode() in Java correctly and with ease:
1) Most IDEs, like NetBeans, Eclipse and IntelliJ IDEA, provide support for generating equals() and hashCode(). In Eclipse, right-click -> Source -> Generate hashCode() and equals().
2) If your domain class has a unique business key, comparing just that field in equals() is enough instead of comparing all fields; e.g. in our example, if "id" is unique for every Person, then comparing ids alone identifies whether two Person objects are equal.
3) While overriding hashCode(), make sure you use all the fields that are used in equals().
4) String and the wrapper classes like Integer, Float and Double override equals(), but StringBuffer does not.
5) Whenever possible, make your fields immutable by using final variables; an equals() method based on immutable fields is much safer than one based on mutable fields.
That's about equals() and hashCode() in Java. I am reiterating this, but it is imperative for a Java programmer to be able to write equals(), hashCode() and compareTo() methods by hand. It is useful not just for learning, but for clearing coding exercises during Java interviews; writing code for equals() and hashCode() is a very popular programming interview question nowadays.
Most of the plots in this section are based on data from the NOAA repository, or from sites linked to from the repository.
- Greenland - Dye 3: Absolute proof that melting ice in Greenland will NOT cause sea levels to rise.
- Law Dome: The temperatures and CO2 mixing ratios from Law Dome. This site is the center of several controversies.
- Vostok: More proof that δ18O, deuterium concentration, and temperature are not correlated.
- Temperatures: How are the temperatures of past climates determined? This explains why you should not trust proxies.
- Non-ice boreholes: The global existence of the mid-Holocene warm episode, the Medieval Warm Period (MWP), and the Little Ice Age (LIA) is easily proved with proxies from other sources.
Please use this identifier to cite or link to this item: http://hdl.handle.net/1959.13/25049
Title: Quantification of in situ nutrient and heavy metal remediation by a small pearl oyster (Pinctada imbricata) farm at Port Stephens, Australia
Authors: Dunstan, Richard Hugh; Macfarlane, Geoffrey R.
Abstract: The use of pearl oysters has recently been proposed as an environmental remediation tool in coastal ecosystems. This study quantified the nitrogen, phosphorus and heavy metal content of the tissue and shell of pearl oysters harvested from a small pearl oyster farm at Port Stephens, Australia. Each tonne of pearl oyster material harvested resulted in approximately 703 g metals, 7452 g nitrogen, and 545 g phosphorus being removed from the waters of Port Stephens. Increasing current farm production of 9.8 t yr(-1) to 499 t yr(-1) would balance current nitrogen loads entering Port Stephens from a small Sewage Treatment Plant (STP) located on its southern shores. Furthermore, manipulation of harvest dates to coincide with oyster condition would likely remove substantially greater quantities of nutrients. This study demonstrates that pearl aquaculture may be used to assist in the removal of pollutants from coastal waters while producing a commercially profitable commodity. © 2004 Elsevier Ltd. All rights reserved.
Journal: Marine Pollution Bulletin Vol. 50, no. 4, p. 417-422
Publisher: Elsevier Science
Resource Type: journal article
Hydrocarbon measurements in the southeastern United States: The Rural Oxidants in the Southern Environment (ROSE) Program 1990
Article first published online: 21 SEP 2012. Copyright 1995 by the American Geophysical Union.
Journal of Geophysical Research: Atmospheres (1984–2012), Volume 100, Issue D12, pages 25945–25963, 20 December 1995
How to Cite: (1995), Hydrocarbon measurements in the southeastern United States: The Rural Oxidants in the Southern Environment (ROSE) Program 1990, J. Geophys. Res., 100(D12), 25945–25963, doi:10.1029/95JD02607.
- Manuscript Accepted: 21 AUG 1995
- Manuscript Received: 1 NOV 1992
An automated gas chromatographic system was employed at a rural site in west-central Alabama to measure atmospheric hydrocarbons and oxygenated hydrocarbons (oxy-hydrocarbons) on an hourly basis from June 8 to July 19, 1990. The location, which was a designated site for the Southern Oxidant Study (SOS), was instrumented for a wide variety of measurements, allowing the hydrocarbon and oxy-hydrocarbon measurements to be interpreted both in terms of meteorological data and as part of a large suite of gas phase measurements. Although the site is situated in a loblolly pine plantation, isoprene was observed to be the dominant hydrocarbon during the daytime, with afternoon maxima of about 7 parts per billion by volume (ppbv). The decrease of isoprene after sunset was too rapid to be accounted for solely on the basis of gas phase chemistry. During the nighttime, α-pinene and β-pinene were the dominant hydrocarbons of natural origin. The ratio of α-pinene to β-pinene showed a well-defined diurnal pattern, decreasing by more than 30% during the night, a decrease that could be understood on the basis of local gas phase chemistry. Oxy-hydrocarbons, dominated by methanol and acetone, were the most abundant compounds observed.
On a carbon atom basis, the oxy-hydrocarbons contributed about 46% of the measured atmospheric burden during the daytime and about 40% at night. The similarity of the observed diurnal methanol variation to that of isoprene, together with subsequent measurements [McDonald and Fall, 1993], indicates that much of the observed methanol was of local biogenic origin. Correlation of acetone with methanol suggests that it, too, has a significant biogenic source. In spite of the site's rural location, anthropogenic hydrocarbons constituted, on a carbon atom basis, about 21% of the hydrocarbon burden measured during the daytime and about 55% at night. Significant diurnal variations of the anthropogenic hydrocarbons, with increases at night, appeared to be driven by the frequent formation of a shallow nocturnal boundary layer.
Aug. 16, 2000 -- Southern Africa offers a unique climate sub-system where scientists can study the effects of industrial activity, biomass burning and changing patterns of land usage on the environment. Last weekend an international team of scientists launched an intensive campaign -- part of the SAFARI 2000 project -- to study this complex region from the ground, the air and from space.
Sept. 7, 2000 -- Science-fiction writer Arthur C. Clarke was once asked when the "space elevator," a notion he helped to popularize, would become a reality. Clarke answered, "Probably about 50 years after everybody quits laughing." Nowadays NASA scientists are taking the idea seriously.
Nov. 13, 2000 -- Life support systems on the International Space Station provide oxygen, absorb carbon dioxide, and manage vaporous emissions from the astronauts themselves. It's all part of breathing easy in our new home in space.
Dec. 18, 2000 -- The normally meek Ursid meteor shower could surprise sky watchers with a powerful outburst on Dec. 22nd, when Earth passes through a dust stream from periodic comet Tuttle.
Jan. 4, 2000 -- A surprising pattern emerges from satellite observations of lightning. Storms over the Great Plains states have significantly more lightning that never reaches the ground, an indicator of violent activity that can spawn hail and tornadoes.
May 8, 2000 -- NASA astronomers have collected the first-ever radar images of a "main belt" asteroid. It's a metallic, dog bone-shaped rock the size of New Jersey, apparently sculpted during an ancient, violent cosmic collision. The asteroid, named 216 Kleopatra, was discovered in 1880, but until now its shape was unknown.
July 11, 2000 -- NASA's experimental Deep Space 1 probe -- left for dead after a guidance system failure in late 1999 -- was revived last month in a thrilling cross-the-solar-system rescue conducted by JPL engineers. The craft set sail again on June 28, 2000, just in time for a planned rendezvous with periodic comet Borrelly in 2001.
Aug.
2, 2000 -- Scientists at a recent media forum said they are eager to begin using the International Space Station as an innovative orbiting research laboratory. "The Hubble Space Telescope is to astrophysicists as the International Space Station will be to other researchers -- a working science laboratory in space," noted one participant.
July 10, 2000 -- A series of unmanned balloon flights will measure the subtle ultraviolet glow of the night sky and help unravel one of the most perplexing mysteries of astrophysics -- the origin of ultra-high-energy cosmic rays.
July 20, 2000 -- Your home computer can become a portal to a wonderland of stars, thanks to a massive release of images from an infrared sky survey sponsored by NASA and the National Science Foundation. The current release is based on a volume of data several hundred times larger than that contained in the human genome!
Example as you expected in this link. The Timer will run forever in our application until the application is closed or there is no more work to schedule. A TimerTask is the task with the functionality to be run based on a time or duration; with a Timer we schedule a TimerTask to run after a particular delay or repeatedly at a particular interval. Please understand how it works first, then apply it with an applet or anything else.
1) The GCTask class extends the TimerTask class and implements the run() method.
2) Within the TimerDemo program, a Timer object and a GCTask object are instantiated.
3) Using the Timer object, the task object is scheduled using the schedule() method of the Timer class to execute after a 5-second delay and then continue to execute every 5 seconds.
4) The infinite while loop within main() instantiates objects of type SimpleObject (whose definition follows) that are immediately available for garbage collection.

import java.util.TimerTask;

public class GCTask extends TimerTask {
    public void run() {
        System.out.println("Running the scheduled task...");
    }
}

import java.util.Timer;

public class TimerDemo {
    public static void main(String[] args) {
        Timer timer = new Timer();
        GCTask task = new GCTask();
        timer.schedule(task, 5000, 5000);
        int counter = 1;
        while (true) {
            new SimpleObject("Object" + counter++);
        }
    }
}

public class SimpleObject {
    private String name;

    public SimpleObject(String n) {
        System.out.println("Instantiating " + n);
        name = n;
    }

    public void finalize() {
        System.out.println("*** " + name + " is getting garbage collected");
    }
}
I have been studying the central regions of galaxies in order to understand their evolutionary history. The first step involves measuring the mass of any central black hole. Together with the Nuker Team, we have found that nearly all galaxies contain a central supermassive black hole. Furthermore, the mass of the black hole strongly correlates with various galaxy properties. I have been working on a Black Hole Webpage that describes the data and results. In order to measure the mass of the black hole, we use a sophisticated orbit-based model to represent the galaxy. These models provide one of the most general solutions for how stars can orbit in a galaxy. This modeling code allows us not only to measure the central black hole accurately, but also to determine how the stars orbit throughout the galaxy. Both of these relate to how the galaxy formed and evolved. With the velocity measurements from the FP, I have been developing new techniques which can determine the mass density profile non-parametrically. The technique takes estimates of the velocity dispersion and the surface brightness profile and, after inversion through the Abel integrals, uses the Jeans equation, assuming isotropy, to provide a non-parametric estimate of the mass density and mass-to-light (M/L) ratio as a function of radius. Applying this technique to many clusters, we have found increases in the M/L in the central and outer regions. The central increase is explained through mass segregation and, for the first time, we are able to directly estimate the heavy-remnant population. The increase in the outer parts can be explained by a population of low-mass stars at those radii. The advantage of using the non-parametric techniques is that we are now able to put strong constraints on models, either N-body or Fokker-Planck, and we can directly estimate the present-day mass function.
The mass functions for all of the clusters studied show a significant number of objects with a mass around 0.7 solar masses. If we assume these are primarily white dwarfs, we can use their numbers to place constraints on the initial mass function. From the two-dimensional FP data, I am able to measure a velocity map, pixel by pixel, using the integrated light of the cluster, providing an accurate measure of any rotation. For most of the clusters, there is a clear indication of rotation in the inner 0.5 parsecs. We also have a measure of the rotation at 3 parsecs, and the amplitude of the rotation is generally the same. Although not dynamically significant, the flat rotation curve has not been seen in standard models and N-body codes, since solid-body rotation is expected. For M15, we have measured a significant increase in the rotation amplitude at small radii. This increase is not expected from evolutionary models and may be the result of a central mass concentration. A 1000-solar-mass black hole is consistent with both the rotation and velocity dispersion profiles. I analyzed velocity measurements from clusters of galaxies to determine both the significance of cD galaxy velocity offsets and the existence of bound populations around cD galaxies. Using robust statistical techniques, we showed that a much smaller percentage of cD galaxies exhibited significant velocity offsets or bound populations than had previously been reported. This difference was due to the more robust statistical analysis, and we encouraged a better understanding in the astronomical community of current statistical techniques. To study positional data from galaxy clusters, I implemented an adaptive kernel technique (Silverman, B.W. 1986, Density Estimation for Statistics and Data Analysis, Chapman and Hall, London) for density estimation, which allowed us to better determine the significance of substructure.
In addition, we used clustering algorithms, incorporating both velocity and positional data, to provide an estimate of sub-clumps within a cluster. When applied to Abell 400, these techniques showed that the cluster is best modeled as two groups which are presently merging. This result has an important effect on estimates of the velocity dispersion of the cluster since the dispersion was significantly higher when using all the velocities from the whole system as compared to calculating the dispersions separately for the two groups. The inferred M/L was a factor of four lower if the subclumps were considered separately. We have looked at other galaxy clusters and, after applying the adaptive kernel to determine the membership of particular sub-groups, we have calculated robust velocity dispersions and locations in order to better understand the present-day kinematical states of the clusters. We found that if substructure is present, but ignored, the derived velocity dispersion may be overestimated by a factor of two.
IW14 Directive inland water habitats
Current conservation status
Ten inland water habitat types are included in the EU Habitats Directive. These consist of five lake and pond habitat types, three water course types and two spring types. Seven inland water types occur in both the boreal and alpine regions; the other three are naturally restricted to the boreal region. The conservation status of directive inland water habitat types in the alpine region is favourable: water bodies are mainly in their natural state, and the pressure from land use is low. In the boreal region, however, the status has been evaluated as unfavourable for most habitat types. Only the state of alpine rivers found in the northernmost part of the boreal region is favourable according to the assessment. Natural eutrophic lakes with Magnopotamion or Hydrocharition type vegetation represent the unfavourable-bad status class. This is mainly due to their distribution in southern Finland, where nutrient loading from agricultural sources and municipalities began earlier and has been heavier than elsewhere in the country. Fennoscandian springs and springfens, as well as water courses of plain to montane levels with Ranunculion fluitantis and Callitricho-Batrachion vegetation, have been evaluated as unfavourable-bad but improving. The distribution and total area of directive inland water habitat types have not changed significantly; instead, their conservation status is weakened by the current structure and function of the habitat types as well as by future prospects. Some small inland water habitat types, for example streams and streamlets, springs and springfens, and petrifying springs with tufa formation, have also decreased in total area, since some occurrences have been lost, especially in southern and central Finland. For other threats encountered by the inland water habitats, see IW13.
- Updated (14.05.2013)
The Rosetta mission, approved by the European Space Agency (ESA) in 1993 and launched in March 2004, is one of the most ambitious endeavors of European spaceflight. On its way to comet 67 P/Churyumov-Gerasimenko the space probe has already (Sept. 2008) flown by the main-belt asteroid Steins and is due to fly by another main-belt asteroid, Lutetia, in July 2010. In both cases the Rosetta observations provide information on size, shape and surface chemical composition. After more than 10 years travel time with 4 swing-by maneuvers Rosetta will reach comet 67 P/Churyumov-Gerasimenko in 2014. Rosetta will rendezvous with the comet at a distance of about 3 astronomical units from the Sun. The first measurements will help to identify a place suitable for landing; then Philae, the lander module, will be released from the orbiter and descend to the comet’s surface. Both modules will stay with the comet on its several month-long journey to perihelion, the comet’s closest point to the Sun. The instruments will precisely monitor how the formerly cold and inactive chunk of dust and ice “awakes” under the influence of the increasing solar heat flux. The ESA mission takes its name from the Egyptian town of Rashid, or Rosetta, where archeologists in 1799 found a stone with ancient hieroglyphic inscriptions in three different languages. With the help of inscriptions found on an obelisk from the town of Philae, archeologists were able to decipher the enigmatic hieroglyphs. Cometary scientists are hoping for similarly fundamental insights from the investigation of Churyumov-Gerasimenko by means of the Rosetta and Philae space probes. In contrast to planets, moons and asteroids, comets spend most of their existence in the cold outer parts of the planetary system. Therefore they may still contain original matter preserved in a frozen state from the time of formation of the planets. 
Rosetta, with 10 experiments onboard the orbiter and 10 experiments onboard the lander, will investigate the comet’s nucleus in detail. The main objectives of the mission are: - The global characterization of the comet’s nucleus and the surface topography. - The characterization of the chemical, mineralogical, and isotopic compositions. - To derive the physical properties of the comet’s nucleus, such as internal structure, and investigate thermal, electrical and magnetic properties. - To monitor the development of cometary activity as the comet approaches perihelion. With the help of the data collected scientists will gain insight into the formation of the solar system and, possibly, the development of life. The German Aerospace Center is heavily involved in the landing module Philae. Scientists in the Asteroids and Comets Department are responsible for the ROLIS, MUPUS, SESAME experiments and are involved in four further experiments.
Growth of sessile plants depends on accurate and timely cellular responses to environmental changes. In the prospect of an anthropogenic global climate change, with expected regional extremes in temperature and precipitation, it is critical to understand how plants achieve stress tolerance.
The Ubiquitin Proteasome Pathway
We are interested in resolving aspects of abiotic stress tolerance (salt, drought, heat, etc.) that are facilitated by the ubiquitin proteasome pathway (or 'kiss of death', as some people say). The pathway is highly conserved among eukaryotes and affects many processes in plant development and physiology. It requires the concerted activities of several proteins (E1, E2, E3) to activate ubiquitin, a small protein of 76 amino acids, and to build up ubiquitin chains on substrates (Hellmann and Estelle, 2002). This process is referred to as the 'kiss of death' since substrates carrying ubiquitin chains are often marked for degradation via a large protein complex called the 26S proteasome (Fig. 1).
Fig. 1: Scheme of the ubiquitin-proteasome pathway. Ubiquitin is transferred by a ubiquitin-activating protein (E1) in an ATP-dependent manner to a ubiquitin-conjugating protein (E2). The E2 can physically assemble with a ubiquitin E3 ligase, which facilitates the transfer of ubiquitin moieties to a substrate protein. Substrates carrying a ubiquitin chain are potential targets for degradation via the 26S proteasome.
The pathway is a central response mediator in eukaryotic cells, degrading proteins or affecting their subcellular location, and is potentially also involved in protein quality control. Note: the pathway is very fast and may degrade proteins within minutes of an initiating signal. One of the current research topics in the lab is the initiation of DNA repair processes after UV-light exposure. UV light causes DNA lesions that affect transcriptional processes and may lead to point mutations if not efficiently repaired in time.
In the last decade it became evident that E3 ubiquitin ligases containing the cullin CUL4, the RING-finger protein RBX1, and the Damaged DNA Binding protein 1 (DDB1) (Bernhardt et al. 2006; Fig. 3A) are key players in the recognition of damaged DNA and the initiation of repair processes. DDB1 interacts with a multitude of substrate receptors (potentially more than 90 in the plant Arabidopsis thaliana), and thus facilitates ubiquitylation of a broad range of substrate proteins. Not surprisingly, loss of DDB1 is embryo-lethal (Fig. 2) and affects many developmental processes such as root and leaf development or flowering (Bernhardt et al. 2010). In addition, some substrate receptors are critical for UV tolerance and, if mutated, cause severe UV hypersensitivity in affected plants (Fig. 3B; Biedermann and Hellmann, 2010).

Fig. 2: Embryo-lethal phenotype of an Arabidopsis thaliana ddb1 mutant. Left-hand side: siliques with seeds for wild type (upper one) and mutant. Arrows point out aborted seeds. Right-hand side: microscopic analysis of seeds at different developmental stages reveals that ddb1 mutants do not progress beyond the heart stage of embryo development. Triangles point out embryos.

Fig. 3: Critical Pathways for Repair of UV-Induced DNA Lesions in Plants. (A) Plants can repair UV-induced DNA lesions either by the light-dependent activity of photolyases or by the light-independent pathway of Nucleotide Excision Repair (NER). Our group focuses on mechanisms that activate NER. Critical proteins in this process form a CUL4-based E3 ligase with the core subunits CUL4, RBX1, and DDB1. DDB1 can interact with either CSA (Cockayne Syndrome Factor A; in Arabidopsis thaliana the protein is called ATCSA-1 (Biedermann and Hellmann, 2010)) or DDB2. Both CSA and DDB2 are key players in recognizing damaged DNA and initiating NER.
While DDB2 binds directly to damaged DNA and is involved in genome-wide surveillance of damaged DNA, CSA acts in concert with CSB (Cockayne Syndrome Factor B; in Arabidopsis called CHR8) and RNA polymerase II, and is only involved in the repair of actively transcribed genes. (B) Loss of ATCSA-1 (atcsa-1,1) or DDB2 (ddb2-3) leads to UV-B hypersensitivity of affected plants. Col0, wild-type plants; P35S:myc:ATCSA-1, plants with increased ATCSA-1 levels (note: increased ATCSA-1 does not lead to increased UV-B tolerance).

We also work on E3 ubiquitin ligases that contain the cullin CUL3. CUL3 proteins are able to interact with a group of proteins called BTB/POZ-MATH (BPM) proteins (Weber et al. 2005) (Fig. 4). BPM proteins are thought to function as substrate receptors in plants, but functional proof is still largely lacking. We have identified transcription factors that play a role in abiotic stress tolerance (e.g. RAP2.4; Fig. 4) as potential substrate proteins for CUL3-based E3 ligases (Weber and Hellmann, 2009), and are currently resolving the impact of the CUL3 complex on transcriptional processes and stress tolerance in plants.

Fig. 4: Hypothetical roles of CUL3 and BTB-POZ/MATH (BPM) proteins: they are likely either to target the transcription factor RAP2.4 for proteolytic degradation via the 26S proteasome, or BPM proteins bind to RAP2.4 without initiating its proteolysis. Here, BPM-RAP2.4 assembly might alter, for example, the DNA-binding ability of the transcription factor.

Independently of the ubiquitin proteasome pathway, we are also interested in vitamin B6 metabolism. Besides being a critical cofactor for more than 140 biochemical reactions, the vitamin is also considered a potent antioxidant with many beneficial impacts on human health (for an overview see Hellmann and Mooney, 2010). Vitamin B6 comprises a group of six derivatives: pyridoxine (PN), pyridoxamine (PM), and pyridoxal (PL), and their phosphorylated forms, which function as the actual cofactors (Fig. 5).

Fig. 5: Chemical structures of the six common B6 vitamers. (1) Pyridoxine and its phosphorylated form pyridoxine 5'-phosphate (PNP), (2) pyridoxamine and pyridoxamine 5'-phosphate, (3) pyridoxal and pyridoxal 5'-phosphate. Arrows indicate the variable group at the 4' position.

De novo biosynthesis of vitamin B6 (or, more precisely, of pyridoxal 5'-phosphate (PLP) in plants and fungi) is catalyzed by two proteins, PDX1 and PDX2, which together form a PLP synthase complex. Plants lacking either PDX1 or PDX2 die if not rescued by an external supply of the vitamin.

Fig. 6: The three known pathways for PLP biosynthesis: one salvage pathway and two de novo pathways, a deoxyxylulose 5'-phosphate (DXP)-dependent one and a DXP-independent one (for more information see Mooney and Hellmann, 2010). Chemical structures: (1) PLP, (2) deoxyxylulose 5'-phosphate, (3) 4-(phosphohydroxy)-L-threonine, (4) glyceraldehyde 3'-phosphate, (5) dihydroxyacetone phosphate, (6) ribose 5'-phosphate, (7) ribulose 5'-phosphate, (8) glutamine, (9) PM, (10) PMP, (11) PN, (12) PNP, (13) PL.

In addition, the vitamin has been brought into context with different developmental processes (root and leaf development, flowering (Wagner et al. 2006; Fig. 7)) and with abiotic stress tolerance (plants with reduced vitamin B6 are light-sensitive and show increased sensitivity towards UV-B, salt, and osmotic stress (Chen and Xiong, 2005; Denslow et al. 2007; Titiz et al. 2006)). Our current research focuses on regulatory aspects of vitamin B6 biosynthesis, its impact on plant development, and why the vitamin is required for stress tolerance.

Fig. 7: Normalization of root growth in the vitamin B6 biosynthesis mutant rsr4-1. rsr4-1 is affected in a PDX1 gene, which causes a nearly 70% reduction in vitamin B6 content. Most likely as a consequence of reduced vitamin B6, root development is strongly disturbed (left-hand side). Note: external supply of pyridoxal can rescue this phenotype and generates plants indistinguishable from wild-type C24 (right-hand side).
June 24, 1998: Observations obtained by the Hubble telescope and ground-based instruments reveal that Neptune's largest moon, Triton, seems to have heated up significantly since the Voyager spacecraft visited it in 1989. Even with the warming, no one is likely to plan a summer vacation on Triton, which is a bit smaller than Earth's moon. Since 1989, Triton's temperature has risen from about 37 on the absolute (Kelvin) temperature scale (-392 degrees Fahrenheit) to about 39 kelvins (-389 degrees Fahrenheit). The scientists base the rise in Triton's surface temperature on the Hubble telescope's detection of an increase in the moon's atmospheric pressure, which has at least doubled since the time of the Voyager encounter. When Triton passed in front of a star known as "Tr180" in the constellation Sagittarius, Hubble measured the star's gradual decrease in brightness. The starlight became fainter as it traveled through Triton's thicker atmosphere, alerting astronomers to changes in the moon's air pressure.
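The kelvin-to-Fahrenheit figures above can be checked with the standard conversion (a quick sketch, not part of the original article; note that 37 K converts to roughly -393 °F, so the -392 °F quoted in the text is a loose rounding):

```python
def kelvin_to_fahrenheit(k):
    """Convert a temperature in kelvins to degrees Fahrenheit."""
    return k * 9.0 / 5.0 - 459.67

# Triton's surface temperature before and after the observed warming
for k in (37.0, 39.0):
    print(f"{k} K = {kelvin_to_fahrenheit(k):.0f} degrees F")
```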
The common name "flies" encompasses a very diverse group of insects belonging to the order Diptera, which means "two wings." They number over 120,000 species worldwide; only the coldest parts of the earth (the Arctic and Antarctic) are devoid of flies. Some members of this order, such as the mosquito and the tsetse fly, are well known for their disease-carrying capacity, while others are pollen gatherers that mimic bees. Fly larvae (maggots) play an important role in the clean-up of carrion.
|Mar21-11, 03:28 PM||#1|
1. The problem statement, all variables and given/known data
"The point C lies between A and B and is 5 units from A.
OA = 2i + 3j
OB = -4i - 5j"
2. Relevant equations
3. The attempt at a solution
This is not exactly a homework question, since this is independent study, but I was wondering why I couldn't get the same answer as my textbook. Either the textbook is wrong, or I haven't grasped this topic yet.
First, find the vector AB:
AB = OB - OA = (-4i - 5j) - (2i + 3j) = -6i - 8j
The magnitude of AB is therefore 10, by Pythagoras' Theorem.
Since C is 5 units from A and lies between A and B:
AC = (AB)/2 = 0.5(-6i - 8j) = -3i - 4j
To find OC, we simply substitute values into the equation OA + AC = OC:
OC = OA + AC = (2i + 3j) + (-3i - 4j) = -i - j
So OC should be -i - j, but I don't see the error I have made... my textbook says it should be 0.8i + 1.4j. This problem has been bugging me because I thought I understood this; perhaps there is something deeper I haven't noticed or considered?
|Mar21-11, 03:58 PM||#2|
Your method looks correct to me. Perhaps the question should say 2 units instead of 5?
|Mar21-11, 04:09 PM||#3|
Thank you for the quick reply! I did the problem again assuming C was 2 units from A, not 5, and I did indeed get the textbook's answer. But the problem does say 5 units, so I suppose it is just a typo. Thanks! Now just one more topic and I've finished this chapter.
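For what it's worth, a few lines of Python (my own sketch, not from the thread; the tuple representation and helper name are mine) confirm both the poster's arithmetic for 5 units and the guess in post #2 that the textbook's 0.8i + 1.4j corresponds to C being 2 units from A:

```python
# A quick numeric check of the thread's working, using (x, y) tuples
# for the position vectors.
OA = (2.0, 3.0)
OB = (-4.0, -5.0)

AB = (OB[0] - OA[0], OB[1] - OA[1])           # -6i - 8j
length_AB = (AB[0] ** 2 + AB[1] ** 2) ** 0.5  # 10.0, by Pythagoras

def point_c(distance_from_a):
    """OC = OA + (d / |AB|) * AB, for C on segment AB at distance d from A."""
    t = distance_from_a / length_AB
    return (OA[0] + t * AB[0], OA[1] + t * AB[1])

print(point_c(5))                        # (-1.0, -1.0): the poster's -i - j
oc = point_c(2)
print(round(oc[0], 3), round(oc[1], 3))  # 0.8 1.4: the textbook's answer
```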
^ was able to identify the process that takes place when water changes to a gas as evaporation.
^ could identify a range of contexts in which water evaporates.
^ has described some contexts in which other liquids evaporate.
^ is able to explain that ^ can smell things when liquids evaporate and the gas reaches ~ nose.
^ is able to identify factors that could affect how fast water evaporates.
^ was able to make a reasonable prediction and, with some help, suggest a fair test to test the prediction.
^ could, with help in choosing what to do, present ~ results in a graph.
^ was able to compare ~ results and draw ~ own conclusions.
^ is able to explain how to make things ‘dry’ more quickly using ~ own ideas about the factors affecting evaporation.
^ was able to identify the process which takes place when water vapour turns to a liquid as condensation.
^ is able to explain why condensation occurs in a number of situations, such as on kitchen windows on a cold day or on cold taps in the bathroom.
^ was able to explain where condensation wasn’t so frequently seen.
^ knows that air contains water vapour which cannot be seen but which may condense when it hits a cold surface.
^ recognises condensation in the bathroom as droplets of water forming on a cold surface.
^ was able to identify the pattern in ~ data and use this to make predictions.
^ has recognised that simply heating water at its boiling point will not result in it getting hotter.
^ can state that the boiling temperature of water is 100°C.
^ is able to state that the freezing temperature of water is 0°C.
^ recognises that the temperature in the classroom is usually around 18°C – 22°C.
^ was able to recognise that melting, freezing, evaporation and condensing are all changes which can be reversed and which all involve a change of state.
# has correctly identified examples of melting, freezing, evaporation and condensing.
# is able to describe the water cycle, naming the processes correctly, e.g. by telling the story of a drop of water from when it left the sea until it returned to the sea.
^ has recognised that evaporation and condensation are processes that can be reversed.
I hear you, it’s not always easy to create an application for many platforms. You want to build an iOS app, use Objective-C. You want to build a Windows Phone app, use C#. You want to build an Android app, use Java. It’s not an easy process, but there are tools and ways of doing this that may save you some time.

Since you want to target multiple platforms, as you know, you’ll have to code with the specific languages, and even the specific tools, for each platform. One of the first things I would suggest is to think about the cloud. Why think about cloud computing? Because you want to limit to a minimum what will be specific to each platform. There is a way for you to put more logic in a common place and have your application call a service to get, set or create data. The more you can put outside of your application, the less you have to recreate for each OS.

Architecture is key when it comes to code reuse. As an example, if you want to build a Windows 8 and a Windows Phone 8 application, even if they use the same technology, the core is not the same. In that case, a suggestion would be to use the MVVM pattern to maximize the code reuse between the two platforms, and C# and XAML would be the technology of choice since they are available on both OS. For more information on the topic, you can read Alnur Ismail's post.

HTML5 for native apps

Last but not least, there is always the Web path. If you don’t need a native application to do what you need to do, or to access any specific features of the devices, the Web is one solution. Of course, many people will tell you that they want their native apps, but in some cases, a Web application is more than enough.
Wahlström L.K. & Liberg O. 1995: Contrasting dispersal patterns in two Scandinavian roe deer Capreolus capreolus populations. - Wildl. Biol. 1: 159-164. Yearling natal dispersal frequencies and distances in roe deer Capreolus capreolus were compared between two regions in Scandinavia, Västerbotten, on the northern edge of the expanding population, and Mälardalen, in the central continuous part. Data were collected using telemetry during 1987-1994. In Västerbotten 91% (n = 11) of the males and 100% (n = 9) of the females left their natal areas, and in Mälardalen 43% (n = 42) of the males and 48% (n = 50) of the females dispersed. No intra-regional difference in distances dispersed was found between sexes. Average dispersal distance in Västerbotten was ca 120 km (n = 17), with only one disperser settling less than 39 km from its natal area. In Mälardalen, the average dispersal distance was around 4 km (n = 42), and only two animals moved further away than 15 km. One hypothesis accounting both for the almost complete dispersal of deer in Västerbotten, and for the existence of a few long-distance dispersers in Mälardalen, is that two genotypically distinct morphs of roe deer exist, one 'dispersive' and one 'stationary'. The predominance of the 'dispersive' type in Västerbotten could be explained by 'stationaries' not having had enough time to colonise this region since the last population bottleneck in the mid 19th century, when the Scandinavian population was restricted to the southernmost part of Sweden. Key words: Cervidae, dimorphism, dispersal, roe deer, Capreolus capreolus L. Kjell Wahlström & Olof Liberg, Stockholm University, Department of Zoology, S-106 91 Stockholm, Sweden Received 6 March 1995, accepted 12 June 1995 Associate Editor: Bernt-Erik Sæther
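The dispersal percentages in the abstract can be reproduced from the stated sample sizes. In the sketch below, the disperser counts (first number in each pair) are inferred from the reported percentages for illustration only; they are not taken from the paper:

```python
# Disperser counts are inferred from the reported percentages and the
# stated sample sizes (n); illustrative only, not data from the paper.
samples = {
    ("Västerbotten", "males"):   (10, 11),  # reported as 91%
    ("Västerbotten", "females"): (9, 9),    # reported as 100%
    ("Mälardalen", "males"):     (18, 42),  # reported as 43%
    ("Mälardalen", "females"):   (24, 50),  # reported as 48%
}

for (region, sex), (dispersed, n) in samples.items():
    print(f"{region} {sex}: {100 * dispersed / n:.0f}% (n = {n})")
```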
Venera 4 left Baikonur Cosmodrome in the central Soviet Union early in the morning of 12 June 1967. The first two stages of its three-stage Molniya-M launch vehicle placed the 1106-kilogram automated spacecraft into a 173-by-212-kilometer parking orbit about the Earth, then the launcher’s third stage boosted Venera 4 out of orbit onto a fast path Sunward toward the cloudy planet Venus. Two days later, after launch on an Atlas-Agena D rocket from the Eastern Test Range-12 launch pad at Cape Kennedy, Florida, the 244.8-kilogram Mariner 5 followed Venera 4 toward Venus. Mariner 5 had been built as the backup for Mariner IV, which flew successfully past Mars in July 1965. Hardware modifications for its new mission included a reflective solar shield, smaller solar panels, and deletion of the visual-spectrum TV system in favor of instruments better suited to exploring Venus’s hidden surface. When Mariner 5 and Venera 4 left Earth, the nature of Venus’s surface was only beginning to be understood. Though the Mariner II Venus flyby (14 December 1962) had measured a surface temperature of at least 800° Fahrenheit (F) over the entire planet, some planetary scientists still held out hope for surface water. They believed that Venus’s atmosphere was made up mostly of nitrogen, with traces of oxygen and water vapor. They supposed that, even if Venus was in general hotter than Earth, its polar regions had to be cooler than its equator and mid-latitudes; perhaps cool enough for Venusian life. They also suggested that life might float high above Venus’s surface in cool, moist cloud layers. Venera 4 reached Venus on a collision course on 18 October 1967. Shortly before entering the atmosphere at a blazing speed of 10.7 kilometers per second, it split into a bus spacecraft and a one-meter-wide, cauldron-shaped atmosphere-entry capsule. Both parts had been sterilized to prevent contamination of Venus with Earth microbes, and the capsule was designed to float if it splashed down in water.
Radio signals from Venus ceased suddenly as the bus was destroyed as planned high in the Venusian atmosphere; then, after a brief pause, signals from the capsule reached Earth-based antennas in the Soviet Union. After its steep atmosphere entry, during which it experienced a deceleration of 350 Earth gravities, the capsule lowered on a single parachute for 94 minutes. It transmitted data on atmospheric composition, pressure, and temperature as it fell toward the surface. Twenty-five kilometers above Venus, at a pressure 20 times greater than Earth sea-level pressure and a temperature of more than 500° F, transmission abruptly ceased. Venera 4 confirmed that Venus’s atmosphere is more than 90% carbon dioxide. Mariner 5 flew by Venus the next day at a distance of 4100 kilometers. For nearly 16 hours it performed an automatic encounter sequence and stored data it collected on its tape recorder. On 20 Oct, it began to play back data to Earth. The U.S. spacecraft found no radiation belts; this was hardly surprising, since it also found a magnetic field only 1% as strong as Earth’s. As it flew behind Venus, Mariner 5 sent and received a steady stream of radio signals. The signals faded rapidly as they passed through the dense Venusian atmosphere, yielding temperature and pressure profiles before they were cut off by the solid body of the planet. Mariner 5 revealed that Venus’s atmosphere at its surface has a temperature of almost 1000° F and a pressure 75 to 100 times greater than Earth’s. As Venera 4 and Mariner 5 explored Venus, D. Cassidy, C. Davis, and M. Skeer, engineers at Bellcomm, NASA’s Washington, DC-based planning contractor, put the finishing touches on a report for NASA’s Office of Manned Space Flight. In it, they described automated Venus probes meant to be released from piloted Venus/Mars flyby spacecraft. 
They based their plans on a sequence of piloted Mars/Venus flyby missions outlined in the October 1966 report of NASA’s Planetary Joint Action Group (JAG). Continue Reading “2012 Venus Transit Special #3: Robot Probes for Piloted Venus Flybys (1967)” »
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.

2004 December 14

Explanation: Spiral galaxy M33 is a mid-sized member of our Local Group of Galaxies. M33 is also called the Triangulum Galaxy for the constellation in which it resides. About four times smaller (in radius) than our Milky Way Galaxy and the Andromeda Galaxy (M31), it is much larger than many of the local dwarf spheroidal galaxies. M33's proximity to M31 has led some to consider it a satellite galaxy of that more massive galaxy. M33's proximity to our Milky Way Galaxy causes it to appear more than twice the angular size of the Full Moon, and to be visible with a good pair of binoculars.

Authors & editors: NASA Web Site Statements, Warnings, and Disclaimers. NASA Official: Jay Norris. Specific rights apply. A service of: LHEA at NASA / GSFC & Michigan Tech. U.
I’m sitting here at the AAAS meeting and I just heard that there are reports of a meteorite strike in Russia. Early reports are saying that there are anywhere from 500 to 950 people injured in the area. I’ll be updating this post for the next few hours as details continue to come in. What we know so far: There were bright streaks filmed and photographed passing through the air in Russia. The New York Times is reporting no deaths yet, but that nearly 750 people have sought medical treatment so far in hospitals in the city of Chelyabinsk, many for treatment from broken glass. There was some sort of explosion that occurred while the meteorite was in transit. The NYT is also reporting that Russian authorities currently believe that the strike was a bolide, a meteorite that broke up in the earth’s atmosphere. The USGS definition for bolide is slightly different, however. They say “There is no consensus on its definition, but we use it to mean an extraterrestrial body in the 1-10-km size range, which impacts the earth at velocities of literally faster than a speeding bullet (20-70 km/sec = Mach 75), explodes upon impact, and creates a large crater. “Bolide” is a generic term, used to imply that we do not know the precise nature of the impacting body . . . whether it is a rocky or metallic asteroid, or an icy comet, for example.” The Russians say that there was a crater found, but at present there are differing accounts of its size. Russia is no stranger to weird events like this. In 1908 it was the site of a huge explosion, known as the Tunguska Event which flattened over 80 million trees in the Siberian forest. This meteorite strike comes near the same time that asteroid 2012 DA14 was set to pass by the earth at very close range–below the orbits of some of our satellites. Early speculation is that the meteorite might have been a traveling companion of the asteroid. 
Update 10:55 am: The Washington Post has a good Q&A in which an ESA spokesman does say that the meteor and asteroid 2012 DA14 are not related. He apparently used the term ‘cosmic coincidence’. Excellent pun, sir. I salute you. He later poured cold water on the ‘Armageddon’ idea of using a nuclear weapon to divert a falling meteor from a large population center. Another tidbit from the Washington Post article says that the last comparable strike took place in 2008 over Sudan, with no known injuries.

Update 11:06: The blogs are starting to come in. Jeff Masters at Weather Underground has a post with some great pictures, including a picture that purports to be of the crater that was created. Apparently it landed in an ice-covered lake? Caleb Scharf at SciAm has a good blog post detailing the ins and outs of a meteor vs. a meteorite and other interesting background. Over at Nature, they have some absolutely mind-blowing footage of the meteorite traveling through the sky and the explosion. (H/T Doug Main for finding the video)

Update 11:32: The Guardian is doing a great live page with updates on the meteor, including a really neat interactive map of every recorded meteorite strike. And in other perilously-close-to-earth-objects news, NASA is doing a live stream of the asteroid 2012 DA14 now: http://www.ustream.tv/nasajpl2

Update 11:40: The Nature article mentioned above makes an interesting point that the meteor (estimated at 15 m) was not seen by any of the observatory posts looking for such objects. From the article: “Despite its massive size, the object went undetected until it hit the atmosphere. “I’m not aware of anyone who saw this coming,” says Heiner Klinkrad, head of the European Space Agency’s space debris office at the European Space Operations Centre in Darmstadt, Germany.
Although a network of telescopes watches for asteroids that might strike Earth, it is geared towards spotting larger objects — between 100 metres and a kilometre in size.”

Update 11:47: Discover Magazine’s Corey Powell has an interesting post on the meteorite, and on another meteorite-related injury in the US in 1954.

Update 11:56: The NC Museum has another local take on the meteorite, describing the 1934 Farmville Meteorite (no relation to the Facebook game).

Update 12:06: The ever-awesome AMNH is doing a live chat now with AMNH geologist Denton.

Update 12:10: Fun facts about Chelyabinsk, courtesy of Lonely Planet. Apparently it was initially a booming tea-trading city that was renamed Tank City after the Soviet armaments that were built there.

Update 3:32: NASA is holding a press conference about the asteroid and meteor at 4 pm EST today, and you can listen in here. Phil Plait has a great follow-up blog on Slate. Looks like it did fall on a lake!

That’s all for now folks. I don’t expect that new information will be coming in now.
Dynamics is a branch of physics (specifically classical mechanics) concerned with the study of forces and torques and their effect on motion, as opposed to kinematics, which studies the motion of objects without reference to its causes. Generally speaking, researchers involved in dynamics study how a physical system might develop or alter over time, and study the causes of those changes. Isaac Newton established the physical laws that undergird dynamics; by studying his system of mechanics, dynamics can be understood. In particular, dynamics is mostly related to Newton's second law of motion, though all three laws of motion are taken into consideration, because they are interrelated in any given observation or experiment.

The study of dynamics falls under two categories: linear and rotational. Linear dynamics pertains to objects moving in a line and involves such quantities as force, mass/inertia, displacement (in units of distance), velocity (distance per unit time), acceleration (distance per unit of time squared) and momentum (mass times unit of velocity). Rotational dynamics pertains to objects that are rotating or moving in a curved path and involves such quantities as torque, moment of inertia/rotational inertia, angular displacement (in radians or, less often, degrees), angular velocity (radians per unit time), angular acceleration (radians per unit of time squared) and angular momentum (moment of inertia times unit of angular velocity). Very often, objects exhibit both linear and rotational motion.

For classical electromagnetism, it is Maxwell's equations that describe the dynamics, and the dynamics of classical systems involving both mechanics and electromagnetism are described by the combination of Newton's laws, Maxwell's equations, and the Lorentz force.
From Newton, force can be defined as an exertion or pressure which can cause an object to move. The concept of force is used to describe an influence which causes a free body (object) to accelerate. It can be a push or a pull, which causes an object to change direction, acquire a new velocity, or deform temporarily or permanently. Generally speaking, force causes an object's state of motion to change.

Newton's laws

Newton described force as the ability to cause a mass to accelerate. His three laws can be summarized as follows:
- First law: If there is no net force on an object, then its velocity is constant. The object is either at rest (if its velocity is equal to zero), or it moves with constant speed in a single direction.
- Second law: The acceleration a of a body is parallel and directly proportional to the net force F acting on the body, is in the direction of the net force, and is inversely proportional to the mass m of the body, i.e., F = ma.
- Third law: When a first body exerts a force F1 on a second body, the second body simultaneously exerts a force F2 = −F1 on the first body. This means that F1 and F2 are equal in magnitude and opposite in direction.
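The second and third laws described in the article can be illustrated numerically in a few lines (the values below are invented for the example):

```python
def acceleration(net_force_n, mass_kg):
    """Newton's second law rearranged: a = F / m (1-D, SI units)."""
    return net_force_n / mass_kg

# A 10 N net force acting on a 2 kg body produces 5 m/s^2 of acceleration.
a = acceleration(10.0, 2.0)
print(a)  # 5.0

# Third law: the reaction force is equal in magnitude, opposite in direction,
# so the pair sums to zero.
f1_on_2 = 10.0
f2_on_1 = -f1_on_2
print(f1_on_2 + f2_on_1)  # 0.0
```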
Field measurements and rock samples were taken at Victoria Valley (02-03) to determine the role of moisture in rock weathering processes along the Victoria Land coast. Surface and sub-surface temperatures were taken from rocks in the area. Holes were drilled into the rock at 45 mm, 90 mm and 400 mm depths, with thermocouples inserted and attached. Measurements were made at one-minute intervals during each physical visit in order to determine the number of times the temperature gradient at either the surface or the individual depths exceeded 2°C per minute, a recognised threshold for thermal weathering to occur. The data loggers were then set to record the temperature hourly for the October to January period.

Surface and sub-surface moisture content was also measured. Surface moisture sensors were attached to the rock surface, recording the date, time and magnitude of each surface moisture event. The timing of events was analysed to see how it correlated with changes in moisture at depth within the rock. Manual measurements were recorded at 45 mm and 90 mm depths within the rock over a 48-hour period.

In addition to these measurements: hardness testing of the rock at the site was conducted using a Schmidt hammer and a sonometer (to measure the velocity of sound waves) as proxies for rock strength; digital images of rock surfaces were taken; the availability of moisture was estimated using a rainfall sensor; a general description of the area was made, including GPS co-ordinates, weather characteristics and other relevant features; solar radiation, rock albedo and the position of shadows were measured; and a series of weather observations was undertaken, including air temperature, air pressure, relative humidity, wind speed and direction, cloud cover and type, visibility and precipitation conditions. Rock samples were collected for later use in laboratory simulations and analysed for their mineral and chemical composition.

An empirically based mathematical model will be developed to predict changes in weathering rate depending on changes in moisture in the rock. In the 04/05 season, rock surface and sub-surface temperatures were recorded at one-minute intervals, moisture at 45 mm and 90 mm depths within the rock was recorded manually every 4 hours for approximately 4 days for each of two aspects, and the rock strength measurements, sampling and other observations were repeated.
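The 2°C-per-minute thermal-weathering check described above amounts to a simple scan over successive one-minute readings. A sketch (the temperature series and function name below are invented for illustration):

```python
# Count how often the temperature gradient between successive one-minute
# readings exceeds the recognised thermal-weathering threshold.
THRESHOLD_C_PER_MIN = 2.0

def count_threshold_events(temps_c, interval_min=1.0):
    """Count intervals whose |change in T| / interval exceeds the threshold."""
    events = 0
    for earlier, later in zip(temps_c, temps_c[1:]):
        if abs(later - earlier) / interval_min > THRESHOLD_C_PER_MIN:
            events += 1
    return events

surface = [1.0, 1.5, 4.0, 3.5, 0.9, 1.1]  # one invented reading per minute
print(count_threshold_events(surface))     # 2 exceedances (1.5->4.0, 3.5->0.9)
```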
A rare commodity

Water is amazing. It is truly the universal solvent. While there are solvents that can dissolve some things far better, none dissolves as many different substances as water does. It's all to do with the structure of water's molecule – the particle scientists refer to when they talk of the tiny bits that some things are made of. Water molecules can cluster together and share parts of each other to form other discrete particles that have unique affinities for different things. Sugar is made of molecules. Common salt consists of two very different bits, called ions, that are positively and negatively charged. Though sugar and salt look very similar, their fundamental differences at the sub-microscopic level make these substances behave very differently. Very few solvents can dissolve both sugar AND salt for this reason. But water can. This strange property of water – the ability to dissolve molecular substances as well as those made up of ions – is one reason why it is extremely difficult to obtain pure water. When it rains, droplets of almost 100% pure water form high in the atmosphere. Yet it's not long before this water has dissolved all sorts of substances, often before it reaches the ground. Pure water is actually an extremely rare substance.

Repositories for everything

Near-pure droplets of rainwater that drench the land eventually find their way into the oceans. There is a little bit of everything to be found in the world's oceans, because water dissolves just about everything. Gold is one of the world's rarest and most precious metals, yet we are told that the world's oceans contain enough dissolved gold to give every person on Earth a small share. Gold is just one of the countless substances that water washes into the world's oceans every day. The seas and oceans throughout the world are repositories for all that is washed off the land.
End to fresh water

All over the world, beautiful freshwater lakes represent a half-way house for water on its way to the sea. These wonderful reservoirs are topped up by rivers and streams fed by water that takes many paths, from slow percolation of ground water to direct runoff. Freshwater reservoirs contain water that has spent only a relatively short time in contact with the earth; most contain water that is near pure. They have delighted the eye of the beholder for thousands, perhaps millions of years, and lakes, together with the streams and rivers that feed them, have supplied living creatures with necessary fresh water throughout that time. But this service is literally drying up.

Something in the water

Supplies of drinkable water depend on readily available fresh water sources, and the demand for water of this purity increases every day. I'm told that about 2,400 litres of fresh water can be used during the entire production of just one hamburger. Yet the world's fresh water supplies are replenished only at nature's own rate. That rate is huge, and hitherto has been unvarying. But it is no longer sufficient to meet the world's demand for fresh water, and much that is now happening on the surface of the earth is actually decreasing the rate of this provision. Wastes from farming and industry, and the effluent from the people who rely on these present-day processes, are fast diminishing the usefulness of water sources. Many freshwater reservoirs are becoming polluted, and both the level and the extent of that pollution are always increasing.

Now running to ground

About 20% of the world's fresh water is drawn directly from the ground. Ground water is seen as a safeguard against periods of drought, providing an almost unvarying year-round supply of fresh water – water that slowly percolated its way through the ground.
Such water sources are still vulnerable to pollution, however, as surface pollutants can percolate through the ground and contaminate the ground water. Moreover, removing ground water faster than it is naturally recharged means that these sources diminish over time, which can also affect the replenishment of nearby reservoirs.

A global contribution

Further to this, some supporters of global-warming theory believe that the use of ground water may itself contribute to global warming, a proposed climatic effect that may in turn diminish the supply of available ground water. The world's use of fresh water now outstrips the rate of its supply, and factors brought about through industrialisation and land use are reducing the suitability of otherwise usable fresh water. Clearly the world's populations cannot continue to consume water the way they have up till now – and it will take more than a token act of water conservation.
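The core arithmetic of the ground-water problem is a simple mass balance: when extraction exceeds natural recharge, storage declines year on year. A toy sketch, with entirely hypothetical figures in millions of cubic metres:

```python
# Minimal water-balance sketch: extraction above recharge drains storage.
# All figures are hypothetical, in millions of cubic metres per year.

storage = 1000.0
recharge_per_year = 50.0
extraction_per_year = 65.0  # exceeds recharge, so storage must fall

for year in range(1, 6):
    storage += recharge_per_year - extraction_per_year
    print(year, round(storage, 1))  # storage drops by 15 each year
```

However the numbers are varied, the sign of `recharge - extraction` alone decides whether the aquifer is sustainable.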
The big news in the scientific community this week was the announcement that scientists have decoded the genetic sequence of chimpanzees, the closest living relative of mankind. In articles published in the journal Nature and online by the journal Science, an international team of researchers identified virtually all of the roughly 3 billion building blocks of chimp DNA. The scientists found a very small difference between human DNA and chimp DNA – between 1 percent and 4 percent. But the number of genetic differences between a chimp and a human is about 10 times higher than that between two humans. Exploring those differences could help scientists figure out what exactly makes us human.

Blog community response:

"There are no firm answers yet about how humans picked up key traits such as walking upright and developing complex language. But the work has produced a long list of DNA differences with the chimp and some hints about which ones might be crucial." --Just Hangin' on a Cross

"It's not just a few scientists who have studied and reported on the similarities between chimpanzees and humans. The results, which are being published in Nature, are the result of 67 different studies from scientists in five countries--a lot of heavy hitters. It will be interesting to hear what the creationists have to say about this." --the occasional pundit

"Our cousin the chimpanzee has a genetic code 99% similar to our own genetic code. What make human beings special compared to other animals? If the answer seems obvious to us, this is not an easy question for geneticians." --Sailom's Philosophy

"This is fantastic news, and it's difficult to overstate the importance of this. We want many different organisms sequenced to sample diversity, but having the sequence of two closely related species is going to be incredibly useful. Aren't you just itching to see what the differences are?" --Pharyngula
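The "percent difference" figures quoted above come from comparing aligned sequences position by position. A back-of-the-envelope sketch of that comparison (the sequences here are toy strings, not real chimp or human DNA):

```python
# Sketch of computing percent difference between two pre-aligned,
# equal-length DNA sequences. The sequences are invented examples.

def percent_difference(seq_a, seq_b):
    """Percentage of positions at which two aligned sequences differ."""
    assert len(seq_a) == len(seq_b), "sequences must be pre-aligned"
    mismatches = sum(a != b for a, b in zip(seq_a, seq_b))
    return 100.0 * mismatches / len(seq_a)

human_like = "ACGTACGTACGTACGTACGT"
chimp_like = "ACGTACCTACGTACGTACGA"
print(percent_difference(human_like, chimp_like))  # 2 of 20 positions differ
```

Real genome comparisons are far more involved (insertions, deletions and rearrangements must be aligned first), which is partly why the reported difference ranges from 1 to 4 percent depending on how those are counted.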
Chromosome Recognition in Meiosis I
Location: Outside U.S.
Date: January 2009

Question: How do the homologous chromosomes recognize each other in Meiosis I pairings?

That is a great question. The research is showing that each chromosome unzips a small region of its DNA, and the two chromosomes bind together there through complementary base pairing. There seem to be enzymes that coordinate this process. My guess would be that recognition occurs by base pairing between the sense strand of one chromosome and the anti-sense strand of the homologous chromosome. This assumes that the strands of each of the homologous chromosomes unwind and are available for hydrogen bonding to a complementary strand using the Watson-Crick rules for base pairing (A-T & G-C). This base pairing between complementary strands of homologous chromosomes is probably the physical basis for crossing-over between homologous chromosomes.

Ron Baker, Ph.D.

Update: June 2012
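The Watson-Crick rule invoked in the answer (A-T & G-C) can be sketched as a simple check: the sense strand of one chromosome can hydrogen-bond to the anti-sense strand of its homologue only where every base is complementary. The sequences below are invented for illustration, and both strands are read in the same direction for simplicity.

```python
# Sketch of the Watson-Crick complementarity check mentioned above.
# Hypothetical sequences; strands compared position by position.

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def is_complementary(sense, antisense):
    """True if every base in `sense` pairs (A-T, G-C) with the base
    at the same position in `antisense`."""
    return len(sense) == len(antisense) and all(
        PAIR[s] == a for s, a in zip(sense, antisense)
    )

print(is_complementary("ATGC", "TACG"))  # True: all four bases pair
print(is_complementary("ATGC", "TACC"))  # False: C cannot pair with C
```

A single mismatched base is enough to break the pairing, which is what makes this mechanism a plausible basis for homologue recognition.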
The area of a regular pentagon looks about twice as big as the pentangle star drawn within it. Is it?

Prove that the internal angle bisectors of a triangle will never be perpendicular to each other.

Triangle ABC has a right angle at C. ACRS and CBPQ are squares. ST and PU are perpendicular to AB produced. Show that ST + PU = AB.
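A quick numerical check of the first problem, assuming the "pentangle star" is the pentagram whose points are the pentagon's vertices: the shoelace formula gives both areas, and the exact ratio works out to φ³/2 ≈ 2.118, so the pentagon is indeed roughly twice the star.

```python
# Numerical check: regular pentagon area vs. inscribed pentagram area.
import math

def shoelace(points):
    """Area of a simple polygon from its vertices in order."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

phi = (1 + math.sqrt(5)) / 2
# Pentagon vertices on the unit circle.
outer = [(math.cos(math.radians(90 + 72 * k)),
          math.sin(math.radians(90 + 72 * k))) for k in range(5)]
# Diagonal intersections form an inner pentagon scaled by 1/phi^2, rotated 36 deg.
inner = [(math.cos(math.radians(126 + 72 * k)) / phi**2,
          math.sin(math.radians(126 + 72 * k)) / phi**2) for k in range(5)]
# The star outline alternates outer and inner vertices (a 10-gon).
star = [p for pair in zip(outer, inner) for p in pair]

pentagon_area = shoelace(outer)
star_area = shoelace(star)
print(pentagon_area / star_area)  # ~2.118, i.e. phi**3 / 2
```

So the visual impression is right, and the ratio is not 2 exactly but the golden-ratio quantity φ³/2.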
Dear Sev, first of all, a simple thing. Light in the vacuum always moves at the speed of 299,792,458 meters per second: this fact is exactly true because of the modern definition of one meter. The speed of light in the air is just 0.03% smaller than the speed of light in the vacuum. Nothing ever travels faster than light in the vacuum.

Second, mirrors typically reflect less than 100% of incoming light - something like 70%. But that's not the main problem here.

Third, energy is conserved, so you can't produce much more "light" by putting up mirrors. You can't illuminate 100 households by having 1 light bulb and "copying" it with mirrors. Why not? While you may increase - and almost double - the "amount of light" hitting a particular area, this is (more than) balanced by the fact that the light that would have been absorbed by the area now occupied by the mirror is not absorbed. So when it comes to the energy budget, ideal mirrors (that reflect 100% of light, just for the sake of simplicity) only rearrange the distribution of light - which areas finally absorb it and which areas just reflect it. The total amount of light that is absorbed is given by the total amount of light emitted by the light bulb - and it only depends on the light bulb (and its power).

If you consider light as a "practical thing allowing something to be seen", then you want the light to be reflected, e.g. by a book. But the accounting for a book that reflects some light - so that we can read it - is similar to the accounting for an object that absorbs the light. Mirrors may increase the amount of light reflected by a particular book, but they may not increase the amount of light reflected by the whole room, assuming it has a uniform albedo.

If you look at a light bulb and a nearby mirror, you may see "two light bulbs" and a doubled amount of light, so to say. But this is only true from certain directions. From other directions, the outcome is different and often opposite.
For example, if you place your eyes behind the mirror, so that the light bulb is on the opposite side of the mirror from you, then you see no light bulb directly - and no unreflected light from the bulb (and no bulb light reflected only by mirrors). This lesson is much more general. Mirrors - and any other gadgets - may move energy from one place to another, or transform it from one form to another. But they never change the total amount of energy.
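The energy-budget argument above is just bookkeeping, and a toy sketch makes it explicit: with an ideal mirror, every photon the bulb emits is eventually absorbed somewhere, so the mirror only redistributes the light. The photon count and the geometric fraction hitting the mirror are arbitrary assumptions.

```python
# Toy photon bookkeeping: an ideal mirror rearranges light, never adds to it.
# All numbers below are arbitrary illustrative assumptions.

emitted = 1_000_000            # photons leaving the bulb
fraction_hitting_mirror = 0.3  # depends on geometry; assumed here
mirror_reflectivity = 1.0      # ideal mirror, for simplicity

direct = emitted * (1 - fraction_hitting_mirror)                   # absorbed elsewhere directly
redirected = emitted * fraction_hitting_mirror * mirror_reflectivity   # absorbed after reflection
absorbed_by_mirror = emitted * fraction_hitting_mirror * (1 - mirror_reflectivity)

total_absorbed = direct + redirected + absorbed_by_mirror
print(total_absorbed == emitted)  # True: the budget always balances
```

Lowering `mirror_reflectivity` to a realistic 0.7 shifts some absorption onto the mirror itself, but the total absorbed still equals the total emitted.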