Dec 10, 2008 | 2

A poorly kept secret is now official: the Hubble Space Telescope has discovered carbon dioxide (CO2) in the atmosphere of a planet outside our solar system. That's a first in the study of extrasolar planets, or exoplanets, which have been quite the hot topic this year. The exoplanet HD 189733 b, roughly the mass of Jupiter, orbits a star 63 light-years away in extremely close company. Although the planet can't be seen directly, scientists used Hubble data to analyze its atmospheric composition and turned up CO2 as well as carbon monoxide (CO). They did this by comparing the light spectrum from the star alone with that from the star and planet combined, as the planet passes in front of its star. Although HD 189733 b is way too steamy for life as we know it—roughly 1,950 degrees Fahrenheit (1,065 degrees Celsius) by one estimate—the finding, leaked to media outlets two weeks ago, shows that techniques exist to find markers of life on other planets. (The paper has been submitted to Astrophysical Journal Letters.)

Nov 25, 2008 | 6

On the heels of the first photographs of planets orbiting other stars comes another first for so-called extrasolar planets: an atmosphere containing carbon dioxide (CO2). Nature News and Science News report that a forthcoming journal article will detail the discovery of CO2 around HD 189733 b, a planet roughly equivalent to Jupiter in mass that orbits a star some 63 light-years away. HD 189733 b, discovered in 2005, has already yielded other exoplanet milestones: it was the first found to host an atmosphere containing methane and was also among the first found to harbor water vapor. All of these discoveries have been made without seeing the planet in the conventional sense: to ascertain a planet's traits, the light spectrum of the parent star is compared with the star's emission as the planet passes in front of it. In the latest finding, the data came from the Hubble Space Telescope.

Nov 24, 2008 | 34

Increased carbon dioxide (CO2) in the atmosphere is making the Pacific coast acidic far more rapidly than previously believed, potentially wreaking havoc for creatures living in it that are unable to tolerate the swiftly changing environment. Ecologists at the University of Chicago tracked the acidity of the Pacific off an island close to Washington state over the course of eight years. Their results, published today in Proceedings of the National Academy of Sciences: the waters here are becoming acidic 10 times more quickly than had been predicted by other models. Their data also show that populations of mussels—key animals in that ecosystem—are declining rapidly as the ocean becomes less alkaline.

Nov 19, 2008 | 16

You may recall that President George W. Bush pledged to do something about climate change when campaigning for the presidency back in 2000—but reneged on that promise once in office. It appears that President-elect Barack Obama will not follow suit, telling a gathering of governors yesterday that "few issues facing America—and the world—are more urgent than combating climate change."
<urn:uuid:66e5ea29-4cc3-4ac7-a939-1103f17db1f4>
3.515625
726
Content Listing
Science & Tech.
37.651164
How does energy flow in an ecosystem?

Energy flow in an ecosystem is unidirectional, and the sun is its main source. The amount of energy received varies with slope, cloud cover, latitude and the pollutants present in the atmosphere; Varanasi in India, for example, receives roughly three times the solar energy that Britain does. Producers use part of this energy, and the rest is dissipated. The efficiency of energy capture is around 1 percent in grasslands and savannas, and similar in mixed forests; it is higher in modern crops and sugarcane fields, ranging from 5 to 10 percent.

Autotrophs are also known as producers. Through photosynthesis they make food from inorganic materials, not only for themselves but for other organisms as well. They absorb energy from the sun, convert it into chemical energy, and release oxygen. The organic compounds formed play an important role in building bodies; during respiration they release energy, which helps the organism overcome entropy, and the remainder is dissipated as heat.

Herbivores feed on plants but cannot eat the whole plant, so the unused food energy passes to the decomposers. In aquatic food chains, phytoplankton are the main food of the herbivores. A herbivore assimilates part of the ingested food, releasing its energy later through respiration; a fraction of the assimilated food is used for body building. Primary carnivores feed on the herbivores, and are in turn fed upon by secondary carnivores. At each link of the food chain, energy is released as food is broken down; only a small part is utilized, the rest is dissipated, and so the amount of energy transferred decreases from one trophic level to the next.
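That last point is often summarized by the classic "ten percent" rule of thumb, a textbook figure for the fraction of energy passed on at each trophic transfer (an illustrative number, not one taken from the survey figures above):

    1,000 J (producers) -> 100 J (herbivores) -> 10 J (primary carnivores) -> 1 J (secondary carnivores)

with roughly 90 percent lost at each step to respiration and heat.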
<urn:uuid:89c1b1ed-fa94-404b-a525-66d788c022b3>
3.4375
430
Knowledge Article
Science & Tech.
49.604289
XQuery is a standardized language for combining documents, databases, Web pages and almost anything else. It is very widely implemented. It is powerful and easy to learn. XQuery is replacing proprietary middleware languages and Web application development languages, and it is replacing complex Java or C++ programs with a few lines of code. XQuery is simpler to work with and easier to maintain than many other alternatives. It is used as a back end for implementing Web sites, for integrating corporate data stores in the enterprise, in the XRX architecture (XForms, REST and XQuery), as well as for large publishing projects, for data mining, and for academic research. It can run on large servers and on mobile devices, as part of commercial software and as open source.
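To make "a few lines of code" concrete, here is a small FLWOR query (a sketch; the file name and element names are invented for illustration) that filters, sorts and reshapes data from an XML document:

    for $b in doc("books.xml")//book
    where $b/price > 30
    order by $b/title
    return <expensive>{ data($b/title) }</expensive>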
<urn:uuid:a5ab6375-0c14-413b-81a6-234db2aedaf4>
2.921875
181
Knowledge Article
Software Dev.
31.7025
About 68.27% of the values lie within 1 standard deviation of the mean. Similarly, about 95.45% of the values lie within 2 standard deviations of the mean. Nearly all (99.73%) of the values lie within 3 standard deviations of the mean. In mathematical notation, these facts can be expressed as follows, where x is an observation from a normally distributed random variable, μ is the mean of the distribution, and σ is its standard deviation:

Pr(μ − σ ≤ x ≤ μ + σ) ≈ 0.6827
Pr(μ − 2σ ≤ x ≤ μ + 2σ) ≈ 0.9545
Pr(μ − 3σ ≤ x ≤ μ + 3σ) ≈ 0.9973

These numerical values come from the cumulative distribution function of the normal distribution. For example, Φ(2) ≈ 0.9772, or Pr(x ≤ μ + 2σ) ≈ 0.9772. Note that this is not a symmetrical interval – this is merely the probability that an observation is less than μ + 2σ. To compute the probability that an observation is within 2 standard deviations of the mean (small differences due to rounding):

Pr(μ − 2σ ≤ x ≤ μ + 2σ) = Φ(2) − Φ(−2) = 2Φ(2) − 1 ≈ 0.9545

Statisticians might express these intervals as confidence intervals: for a sample of size n with mean x̄, the interval x̄ ± 2σ/√n is approximately a 95% confidence interval for the population mean.

This rule is often used to quickly get a rough probability estimate of something, given its standard deviation, if the population is assumed normal, thus also as a simple test for outliers (if the population is assumed normal), and as a normality test (if the population is potentially not normal). Recall that to pass from a sample to a number of standard deviations, one computes the deviation, either the error or residual (accordingly as one knows the population mean or only estimates it), and then either uses standardizing (dividing by the population standard deviation), if the population parameters are known, or studentizing (dividing by an estimate of the standard deviation), if the parameters are unknown and only estimated.

To use this as a test for outliers or a normality test, one computes the size of deviations in terms of standard deviations and compares this to the expected frequency. Given a sample set, compute the studentized residuals and compare these to the expected frequency: points that fall more than 3 standard deviations from the norm are likely outliers (unless the sample size is significantly large, by which point one expects a sample this extreme), and if there are many points more than 3 standard deviations from the norm, one likely has reason to question the assumed normality of the distribution. This holds ever more strongly for moves of 4 or more standard deviations. One can compute more precisely, approximating the number of extreme moves of a given magnitude or greater by a Poisson distribution, but simply put, if one has multiple 4 standard deviation moves in a sample of size 1,000, one has strong reason to consider these outliers or to question the assumed normality of the distribution.

Higher deviations

Because of the exponential tails of the normal distribution, odds of higher deviations decrease very quickly. From the rules for normally distributed data:

|Range||Population in range||Expected frequency outside range||Approx. frequency for daily event|
|μ ± 1σ||0.682689492137086||1 in 3||Twice a week|
|μ ± 1.5σ||0.866385597462284||1 in 7||Weekly|
|μ ± 2σ||0.954499736103642||1 in 22||Every three weeks|
|μ ± 2.5σ||0.987580669348448||1 in 81||Quarterly|
|μ ± 3σ||0.997300203936740||1 in 370||Yearly|
|μ ± 3.5σ||0.999534741841929||1 in 2,149||Every six years|
|μ ± 4σ||0.999936657516334||1 in 15,787||Every 43 years (twice in a lifetime)|
|μ ± 4.5σ||0.999993204653751||1 in 147,160||Every 403 years|
|μ ± 5σ||0.999999426696856||1 in 1,744,278||Every 4,776 years (once in recorded history)|
|μ ± 5.5σ||0.999999962020875||1 in 26,330,254||Every 72,090 years|
|μ ± 6σ||0.999999998026825||1 in 506,797,346||Every 1.38 million years (history of humankind)|
|μ ± 6.5σ||0.999999999919680||1 in 12,450,197,393||Every 34 million years|
|μ ± 7σ||0.999999999997440||1 in 390,682,215,445||Every billion years|
|μ ± xσ||erf(x/√2)||1 in 1/(1 − erf(x/√2))||Every 1/(1 − erf(x/√2)) days|

Thus for a daily process, a 6σ event is expected to happen less than once in a million years. This gives a simple normality test: if one witnesses a 6σ in daily data and significantly fewer than 1 million years have passed, then a normal distribution most likely does not provide a good model for the magnitude or frequency of large deviations in this respect. In The Black Swan, Nassim Nicholas Taleb gives the example of risk models for which the Black Monday crash was a 36-sigma event: the occurrence of such an event should instantly suggest a catastrophic flaw in a model.

See also
- "The Normal Distribution" by Balasubramanian Narasimhan
- "Calculate percentage proportion within x sigmas" at WolframAlpha
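The "population in range" column can be checked numerically; a minimal Python sketch using only the standard library (the fraction of a normal population within x standard deviations of the mean is erf(x/√2)):

    import math

    def within(x):
        # Pr(mu - x*sigma <= X <= mu + x*sigma) for a normal distribution
        return math.erf(x / math.sqrt(2))

    for x in (1, 2, 3, 4, 5, 6):
        p = within(x)
        print(f"mu +/- {x} sigma: {p:.15f}  (1 in {1 / (1 - p):,.0f} outside)")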
<urn:uuid:5cb1709f-f2ec-4fc8-a317-520a6bc2ee2d>
3.890625
1,156
Knowledge Article
Science & Tech.
60.672084
I'm often surprised by the fact that many people see monads as the be all and end all of Haskell's abstraction techniques. Beginners often struggle over them, thinking that they're something incredibly important and complex. In reality though, monads are neither important, nor complex, and that's what I aim to show with this little tutorial about what other abstraction techniques you can use, and when they're appropriate.

First things first – IO

The reason a lot of people come across monads is that they want to get on with interacting with the outside world – they want to do I/O. I'm going to get this out of the way quickly. In Haskell, IO is kept in a carefully constrained box, because unconstrained it violates referential transparency (that is, you can get multiple answers from the same call to the same function depending on what the user happens to type/how they wibble the mouse/etc). You can recognise functions in this box by their type signature – they involve a type that looks like this:

  IO a

Here's a few examples:

  getLine :: IO String
  putStrLn :: String -> IO ()
  readFile :: FilePath -> IO String

These can be stuck together with the handy do notation:

  main :: IO ()
  main = do
    name <- getLine
    putStrLn ("Hello, " ++ name)

There, that's that out of the way – we can do IO now, and we didn't need to understand anything nasty or complex.

Some Abstraction techniques – Dealing with values in boxes

We often deal with values that are hidden inside other types. For example, we use lists to hide values in, we use Maybes, etc. What would be nice is if we could apply a function to the values hiding in those boxes. Early in learning Haskell I'm sure you will have met a function that can do this for lists: map. This can be generalised though: enter the Functor. Functors can do one thing, and one thing only: they can apply a function to a value inside an outer construction. They do this with the function fmap (or Functor map). Let's look at some examples.

For lists, fmap is simply map:

  > fmap (+1) [1,2,3,4]
  [2,3,4,5]

For Maybe values, fmap lets us apply a function to the value inside a Just:

  > fmap (+1) (Just 1)
  Just 2
  > fmap (+1) Nothing
  Nothing

For tuples, fmap lets us apply a function to the second half of the tuple (if we import an extra module that defines it):

  > import Control.Applicative
  > fmap (+1) (1,2)
  (1,3)

We can use fmap to target a function into several layers of boxes by composing applications:

  > (fmap . fmap) (+1) [(1,2), (3,4)]
  [(1,3),(3,5)]

Here, the first fmap pushes (fmap (+1)) inside the list, and the second fmap pushes (+1) inside the tuples.

Putting things in boxes

All that isn't very useful if we can't actually put something in a box in the first place. This is where the pure function comes in handy. This function lets us wrap anything we like in a box:

  > pure 1 :: [Int]
  [1]
  > pure 1 :: Maybe Int
  Just 1
  > pure 1 :: Either String Int
  Right 1

In Haskell, the pure function is in the Applicative class (you'll need to import Control.Applicative). The Applicative class does some other interesting things, as we'll see in a minute. Because of this it would be nice if pure were separated into its own little class all on its own, but unfortunately that isn't the way it is in Haskell (at the moment at least).

So, we've seen how to put something in a box, and we've seen how to apply a function to a value in a box, but what if our function is in a box too? At this point, the Applicative class really comes into its own. The (<*>) function from Applicative lets us apply boxes to each other as long as they have the right types of values inside.
  > (Just (+1)) <*> (Just 1)
  Just 2
  > [(+1), (*2), (^3)] <*> [1,2,3]
  [2,3,4,2,4,6,1,8,27]

That second result is not entirely clear – what's going on? Well, the (<*>) function has applied each function to each argument in turn, and bundled up all the results in one list. (+1) gets applied to each argument, generating the results 2, 3 and 4; (*2) gets applied to each argument, generating the results 2, 4 and 6; and finally (^3) gets applied to each argument, generating the results 1, 8 and 27.

An important note: all Applicatives are also Functors. You can implement fmap for any Applicative like this:

  fmap f x = pure f <*> x

Applicative in fact does this for you, but calls the function (<$>).

Functions that produce boxes

When we have functions that produce values that are hidden inside boxes, we have a problem. Each time we apply the function we get an additional layer of boxes; this isn't particularly pretty, nor composable. This is where monads come in. Monads add a single function called join, which is used to flatten out the layers of boxes:

  > join (Just (Just 2))
  Just 2
  > join [[1,2],[3,4,5]]
  [1,2,3,4,5]

The join function lets us compose our box-producing functions more easily. We now fmap our box-producing function over values in a box. This results in a 2-layer set of boxes. We can then use join to squash that back down again. This pattern is so useful that we call it "bind" or (=<<):

  f =<< x = join (fmap f x)

Some people like to define this the other way round:

  x >>= f = join (fmap f x)

This allows a very imperative style of programming where we ask the language to take the result of one computation, push it through a function, take the results, push them through another function, etc. As with Applicatives and Functors, all Monads are Applicatives. We can define the (<*>) function using only bits of a Monad and the pure function:

  f <*> a = f >>= \g -> a >>= \x -> pure (g x)

We can see here a common pattern with monadic programming: we bind a function returning a monadic value into a lambda. Haskell provides a syntactic sugar for doing this called do notation:

  f <*> a = do g <- f
               x <- a
               pure (g x)

We can now see clearly that IO in Haskell is not using any magic at all to introduce an imperative concept to a functional language; instead the IO type is simply a monad. Remember, this means that it's a functor and an applicative too, so we can use <$> and <*> wherever we please in IO code to apply functions to IO values.

Why you don't always want to go for Monads

As we've seen, Monad sits proudly atop a set of classes as the most powerful of all, but that doesn't mean we want to use it all the time. As we've seen, Monad gives us a very imperative feel to our code – it reveals an order that isn't necessarily there. Do notation particularly seems to suggest (in our example above) that we should first take the value out of f, then take the value out of a, and then apply the two. In reality, this order is not there; the Haskell runtime is free to evaluate these in any order it likes. This can make such language constructs dangerous. Firstly, we're functional programmers because we like describing what things "are", not what steps you should take to produce them. Secondly, the steps we seem to give here are not the ones that the runtime will really take in the end. Let's look at an example of when we really shouldn't use monads. We have excellent parser combinators in the Parsec library. These let us define small parsers, and stick them together using the monadic interface.
Let's define a small parser to parse an Int, and another to parse a value that may or may not be there:

  parseInt :: Parser Int
  parseInt = do ds <- many1 digit
                return (read ds)

  parseMaybe :: Parser a -> Parser (Maybe a)
  parseMaybe p = do n <- p
                    return (Just n)

We are expressing an ordering that we don't intend to – first we accept at least one digit into ds, then we read them and rewrap them in a parser. In parseMaybe first we parse something and take the value out into n, then we wrap it in Just, and give it back. This isn't clear. Why couldn't we just describe the grammar? Why do we have to specify an order? Let's patch Parsec to provide an Applicative instance:

  instance Applicative (GenParser tok st) where
    pure  = return
    (<*>) = ap

Note that I'm using a shorter version of the definition of (<*>) in a monad, using the ap function. Now we may use the applicative functor interface:

  parseInt :: Parser Int
  parseInt = read <$> many1 digit

  parseMaybe :: Parser a -> Parser (Maybe a)
  parseMaybe p = Just <$> p

Not only are these definitions shorter, but we can quickly and easily see their meanings – an integer is many digits, with read applied to get them into a form we can use in Haskell. Let's look at a more complex example:

  data Record = Record (Maybe Int) (Maybe Int)

  parseRecord :: Parser Record
  parseRecord = do a <- parseMaybe parseInt
                   char ','
                   b <- parseMaybe parseInt
                   return (Record a b)

Again, we're specifying an order we don't want to see. Let's look at this in applicative style:

  parseRecord :: Parser Record
  parseRecord = Record <$> (parseMaybe parseInt <* char ',') <*> parseMaybe parseInt

Note the use of the (<*) function – this simply takes the value from the left-hand parser and passes it up, ignoring the value returned by the right-hand parser. We can now see that parseRecord constructs a Record from two Maybe Ints, separated by a comma. We haven't introduced any orderings that we don't need to, and we've even condensed our code a little.

We've seen the hierarchy of classes in Haskell in all its glory. Rather than focusing unduly on the Monad, we've seen that the Monad interface, while powerful, is not always desirable. Hopefully we've seen that a lot of our monadic code can be cleaned up to use the Applicative (or maybe even Functor) interface instead.
<urn:uuid:ce6aa644-962a-48c8-8e68-abdc3dd81e08>
2.9375
2,134
Personal Blog
Software Dev.
60.368019
Sensitive indicators of environmental quality

Throughout the Chesapeake Bay watershed, you can find salamanders along the watershed's rivers and streams. The Maryland Biological Stream Survey, however, has found that streamside salamanders are not doing as well as they used to. Statewide, the number of salamander species decreased as urban and agricultural land use increased.

One reason to look at streamside salamanders is that these amphibians are sensitive bioindicators of degraded stream habitats. These salamanders are extremely sensitive to changes in their environment, and their health is often directly linked to the health of their habitat. What makes salamanders so susceptible to changing environmental conditions? A close look at their skin and eggs can give us an answer.

Because salamanders, like all amphibians, lay their eggs in the water, their eggs don't have a protective shell like, for example, chicken eggs. This makes the salamander eggs vulnerable to chemical pollutants, ultraviolet radiation, and other factors that disrupt cell division in the early stages of the embryo. As a result, the embryo may not be able to develop properly, and it will die.

|Picture: Eggs of the northern two-lined salamander (Eurycea bislineata). Salamander eggs don't have a hard shell to protect them from chemical pollutants or ultraviolet radiation. Picture courtesy of Robin Jung.|

Even when the eggs hatch, the salamanders stay sensitive to their environment. Salamanders, like all amphibians, have permeable skin; gases and water simply enter or leave the body through this skin. This permeable skin, however, provides little or no protection from toxins in the soil or in the water. With the water and gases that pass through the skin, the salamander also takes in toxins. In this way, amphibians are environmental sponges, soaking up the chemicals and toxins from the water or air around them.

|Picture: Larva of the northern two-lined salamander (Eurycea bislineata). Through their permeable skin, salamanders soak up chemicals that are in the water. Picture courtesy of Robin Jung.|

Threats to salamanders

Amphibians have been around for 350 million years, but recent developments are causing a worldwide decline. Habitat loss, acid rain and environmental contaminants, ultraviolet radiation, and invasive species directly affect salamander and other amphibian populations. And, most mysterious to researchers, salamander populations are decreasing even in pristine, protected areas like nature preserves and national parks.

The loss of wetlands and forests equals the loss of amphibians. An alarming 54 percent of wetlands in the United States have been lost in the past 100 years. Temporary habitats, such as the vernal pools that form only in the spring, are also very important to many amphibians. Many amphibian species use these pools to come together and mate, so when these habitats disappear, it directly affects the amphibian populations.

Acid rain and environmental contaminants

Acid rain can lower the pH and thus increase the acidity of a stream to a level that can kill amphibian embryos. Acidic water can also increase the toxicity of other contaminants, such as metals, pesticides, and petroleum products. Even what you put on your lawn can affect salamanders; some widely used, commercially available herbicides contain a chemical that is directly toxic to amphibians.
Because wetlands, the prime habitat of salamanders, are low-lying areas where contaminants in stormwater runoff can accumulate, the salamander is not even safe in its own habitat. The Natural History of Amphibians describes environmental contaminants as "closest to being a 'single cause' behind widespread amphibian declines."

Ultraviolet radiation (UV) disrupts the development of salamander eggs, and a low pH as a result of acid rain can even increase this effect. Vegetation along a stream or pool can provide shade and block radiation from entering the water where the salamander laid its eggs; when the vegetation is gone, the eggs may be exposed to lethal amounts of UV.

Last but not least, salamanders can fall prey to various introduced species, such as game fish and predacious bullfrogs. Because these species are not native to the places where they now live, the salamanders don't have a natural defense against them. These non-native species can also introduce disease-causing pathogens; viruses, bacteria, and fungi appear to affect salamanders more than other kinds of animals.

|Picture: Northern two-lined salamander (Eurycea bislineata). This is the most common streamside salamander in the Chesapeake Bay watershed. The two-lined salamander appears to be more tolerant of degraded habitats and stream conditions than other salamander species. Picture courtesy of Robin Jung.|

Why should we care?

After having been around for 350 million years, amphibians now face worldwide decline. What does that tell us about the world we live in? Ron Heyer, a scientist at the Smithsonian Institution and chair of the Declining Amphibian Populations Task Force, states, "All amphibian biologists are now convinced that something unusual and catastrophic is happening to amphibians." Dr. Heyer speculates that amphibian populations are telling us something about the habitat we share with them.

References and further reading

There are many sites providing detailed information on salamanders, amphibians in general, and their global decline. The Maryland Biological Stream Survey attempts to define the problem of acid rain and deposition in Maryland; bioindicators, such as salamander populations, are used to assess stream quality, and you can volunteer to participate in the stream-sampling program. The Patuxent Wildlife Research Center and the North American Amphibian Monitoring Program of the U.S. Geological Survey also conduct amphibian monitoring; both have extensive links to other related pages. The Declining Amphibian Populations Task Force seeks to "determine the nature, extent and causes of declines of amphibians throughout the world, and to promote means by which declines can be halted or reversed."
<urn:uuid:78b8f6bb-0978-4261-820b-b7b95768c9b1>
3.875
1,264
Knowledge Article
Science & Tech.
29.475872
Mentor: Dr. Frank Mallory

[n]Phenacenes are compounds that contain n benzene rings fused together in a zigzag pattern. An example of a phenacene derivative is shown below. The synthesis of [n]phenacenes is of particular importance for investigating whether these pseudo one-dimensional versions of the pseudo two-dimensional graphite sheets possess similar patterns of conductivity to the graphite sheets. Previously, an [n]phenacene having 11 fused rings was synthesized by the Mallory group. [n]Phenacenes with n > 6 are extremely insoluble, so to produce larger [n]phenacenes by chemical synthesis, solubilizing groups (R) must be attached. Syntheses of [n]phenacenes with n > 11 have been attempted but have proved unsuccessful due to issues of solubility arising from the various R-groups used. The most recently concluded study tested a branched 12-carbon chain and a straight 12-carbon chain as solubilizing groups, both of which proved to make the phenacene product too soluble. Currently, solubilizing groups with shorter carbon chains are the focus of our investigation, in hopes of obtaining the correct solubility. Two candidate groups contain eight carbons each: one a straight chain and one a branched chain. Using synthesis schemes beginning from 1-bromobutane and 1-bromooctane, a sequence of reactions will be explored in the attempted syntheses of phenacene and phenacene derivatives.
<urn:uuid:04aa68c6-6353-4b34-89aa-210a0ae1886f>
2.765625
324
Academic Writing
Science & Tech.
29.748449
Weather satellites frequently document dust palls blowing westward from Africa's Sahara Desert across the tropical Atlantic Ocean. Astronauts see these Saharan dust masses as widespread atmospheric haze. The dust can be transported right across the Atlantic Ocean, taking about a week to reach North America (in northern hemisphere summer) or South America (in northern hemisphere winter). This puts the Caribbean Sea on the receiving end of many of these events.

In the top image, the margin of hazy air reaches the island of Hispaniola (Haiti and Dominican Republic) and the Turks and Caicos Islands, though the eastern tip of Cuba (foreground) remains clear. This image—taken by astronauts on the International Space Station (ISS) in July 2012—attracted the interest of scientists at NASA's Johnson Space Center because the margin between dust haze and clear atmosphere lies in almost the same location as it appeared in another astronaut image in July 1994.

When astronauts aboard the Space Shuttle Columbia captured the lower image (rotated from the 2012 view), few scientists had considered the possibility of trans-Atlantic dust transport. The Columbia image also shows the brilliant blues of the shallow banks surrounding the Caicos Islands. The mountainous spine of Haiti lies further away, partly obscured by dust. Closer to the foreground—about 26 degrees north latitude—the skies are clear.

The dust in the images is almost 8,000 kilometers from its likely source in northern Mali, although data from sensors such as the Total Ozone Mapping Spectrometer and Ozone Monitoring Instrument have suggested that some dust traveling across the Atlantic may originate even further east in Chad or Sudan. Once airborne, Saharan dust has been known to travel west all the way into the Pacific Ocean, crossing Mexico at the narrow Isthmus of Tehuantepec.

We now know that African dust reaches the western hemisphere every month of the year, though not necessarily in as visible a form as in these images. Researchers have linked Saharan dust to coral disease, allergies in humans, and harmful algal blooms ("red tides"). There is also evidence that some of this African dust serves as a source of airborne nutrients for Amazon rainforest vegetation.
<urn:uuid:5159edd0-4f58-4297-b021-c62cfa0492d5>
4.1875
452
Knowledge Article
Science & Tech.
30.088447
How Do They Reproduce?

Worms have a very interesting method of reproduction. There are no male and female worms – all worms are the same, and each produces both sperm cells and egg cells. When a worm is mature and ready to reproduce, it finds a partner worm. They line up head to head, attaching themselves together at the clitella (the thick, light-colored band around each mature worm), and mucus is secreted. The sperm cells are exchanged by the worms through this mucus.

After the worms separate, the clitellum on each worm secretes albumin (similar to egg white), which hardens to form a cocoon. The worm backs out of the cocoon – sort of like pulling a shirt off over your head. As the cocoon slides over the worm's head, sperm and eggs that have been stored in that area of the worm's body are squeezed out from small pores into the cocoon. The cocoon detaches from the worm, and its ends close up, sealing the eggs and sperm inside. Fertilization takes place in the cocoon, and usually 2-4 baby worms develop in each cocoon. The cocoon may hatch in a month if conditions are good. If it is too dry, or cold, or hot, or whatever, the cocoon will become dormant like a seed and will not hatch until conditions are better.

Sometimes animals swallow worm cocoons, and the cocoons pass through the animal's digestive system. They are then "pooped out" in the animal's manure some distance away from where they were swallowed. This is one way worm populations spread to new areas.

Here are the reproductive rates of Eisenia foetida (Red Wiggler Worm) from an excellent reference book, "Biology and Ecology of Earthworms" by C.A. Edwards and P.J. Bohlen:
- Each worm produces 3-4 cocoons per week.
- Approx. 83% hatch.
- Approx. 3 worms emerge from each cocoon.
- It takes 32-73 days for a cocoon to hatch.
- It takes 53-76 days for the baby worm to mature and be ready to reproduce.

Photo of 2 worms linked to exchange sperm.

Here is a photo of 2 worm cocoons. Cocoons are round, yellowish-gold colored, and about 2 mm in diameter. In a well-established worm bin cocoons are easy to find, but you must look closely.
<urn:uuid:9d8dec00-f1c6-4f99-8553-c2e9502cbcd5>
3.8125
537
Knowledge Article
Science & Tech.
65.682455
Science Fair Project Encyclopedia

Protein structure prediction

Protein structure prediction is one of the most significant tasks tackled in computational structural biology. Its aim is to determine the three-dimensional structure of proteins from their amino acid sequences. In more formal terms, this is the prediction of protein tertiary structure from primary structure. Given the usefulness of known protein structures in such valuable tasks as rational drug design, this is a highly active field of research. Every two years, the performance of current methods is assessed in the CASP experiment.

The practical role of protein structure prediction is now more important than ever. Massive amounts of protein sequence data may be derived from modern large-scale DNA sequencing efforts such as the Human Genome Project. The output of experimentally determined protein structures, typically obtained by time-consuming and relatively expensive X-ray crystallography or NMR spectroscopy, is lagging far behind the output of protein sequences.

A number of factors make protein structure prediction a very difficult task, including:
- The number of possible structures that proteins may possess is extremely large, as highlighted by the Levinthal paradox.
- The physical basis of protein structural stability is not fully understood.
- The primary sequence may not fully specify the tertiary structure. For example, proteins known as chaperonins have the ability to induce proteins to fold in specific ways.
- Direct simulation of protein folding via methods such as molecular dynamics is not generally tractable, for both practical and theoretical reasons. However, the distributed computing project Folding@home is tackling such simulation difficulties.

Despite the above hindrances, much progress is being made by the many research groups interested in the task. Prediction of structures for small proteins is now a perfectly realistic goal. A wide range of approaches are routinely applied for such predictions. These approaches may be classified into two broad classes: de novo modelling and comparative modelling.

De novo protein modelling

De novo (or ab initio) protein modelling methods seek to build three-dimensional protein models "from scratch". There are many possible procedures that either attempt to mimic protein folding or apply some stochastic method to search possible solutions (i.e., global optimization of a suitable energy function). These procedures tend to require vast computational resources, and have thus only been carried out for tiny proteins. To predict protein structure de novo for larger proteins, we will need better algorithms and larger computational resources, like those afforded by powerful supercomputers (such as Blue Gene) or distributed computing (see the Human Proteome Folding Project). Although these computational barriers are vast, the potential benefits of structural genomics (by predicted or experimental methods) make de novo structure prediction an active research field.

Comparative protein modelling

Comparative protein modelling uses previously solved structures as starting points, or templates. This is effective because it appears that although the number of actual proteins is vast, there is a limited set of tertiary structural motifs to which most proteins belong. It has been suggested that there are only around 2000 distinct protein folds in nature, though there are many millions of different proteins.
These methods may also be split into two groups:
- Homology modelling is based on the reasonable assumption that two homologous proteins will share very similar structures. Given the amino acid sequence of an unknown structure and the solved structure of a homologous protein, each amino acid in the solved structure is mutated, computationally, into the corresponding amino acid from the unknown structure.
- Protein threading scans the amino acid sequence of an unknown structure against a database of solved structures. In each case, a scoring function is used to assess the compatibility of the sequence to the structure, thus yielding possible three-dimensional models.

The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
<urn:uuid:575db5a5-3fcc-4119-abb1-58050e046c17>
3
797
Knowledge Article
Science & Tech.
21.030025
Image credit: Mikhail Matz

Detecting Fluorescence: Green, Yellow, Orange, and Red in the Deep Sea

To attract prey, this jellyfish fluoresces in the blue light of the ocean. In fluorescence, atoms or molecules absorb light at one wavelength and re-emit light at a longer wavelength, which corresponds to a lower energy.

-- To find out how fluorescence is functional for ocean organisms, visit Fluorescence: the Secret Color of the Deep.
-- To learn how fluorescence in the ocean is investigated, see Detecting Fluorescence: Green, Yellow, Orange, and Red in the Deep Sea.

Try this Visual Quantum Mechanics simulation of a fluorescent light to explore the relationships among energy levels, excitation energy, and emitted light. On another subject, watch for the conjunction of Jupiter and Venus on November 30-December 1, 2008.
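The wavelength-energy relation behind the fluorescence description above can be stated compactly (standard physics, not from the original page): a photon's energy is

    E = h·c / λ

so re-emission at a longer wavelength λ necessarily means a lower photon energy; the difference between the absorbed and emitted photon energies is left behind in the molecule (the Stokes shift).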
<urn:uuid:42d82731-9d08-4747-b7e8-92d81f7b32e5>
3.4375
192
Knowledge Article
Science & Tech.
31.475952
Five major hazards threaten sea turtles, all of which are the direct results of human behavior. As these pressures increase, they trigger population declines and local extinctions.

It's estimated that the fishing industry contributes to the deaths of thousands to tens of thousands of sea turtles each year. Turtles that become trapped in longlines, gill nets, and trawls are thrown away as bycatch. And those that manage to avoid fishing nets are affected by the disruption to their food supply and habitat.

Throughout the world, turtles are killed and traded on the global market as exotic food, oil, leather, and jewelry. Over the past 100 years, millions of hawksbill turtles alone have been killed just for the price of their shells. And even though the global trade in luxury and craft items has been reduced thanks to conservation efforts, it remains an ongoing threat to turtles in parts of Africa, Asia, and the Americas.

Every year, sea turtle habitats are destroyed along shrinking coastlines. Wherever there is boat traffic, whenever a new hotel or high-rise is built up along the shore, and wherever there is sea-floor dredging and beach erosion, sea turtle food supplies and nesting areas take a major hit.

Pollution and Pathogens

Marine pollution can harm sea turtles in many ways. Discarded fishing gear, petroleum by-products, and other plastic debris injure sea turtles through ingestion and entanglement. This sea garbage weakens the turtles' immune systems and disrupts nesting behavior and hatchling orientation for future generations.

We are just now learning the extent to which climate change can affect sea turtles. Climate change can alter the natural sex ratios of hatchlings, increase the likelihood of disease outbreaks, and escalate the frequency of extreme weather events, which destroy nesting beaches and coral reefs.
<urn:uuid:40bbfac4-5db5-4d34-b43f-c5b3ab167018>
3.828125
378
Knowledge Article
Science & Tech.
39.610263
The Variation of Inductance and Capacitance With Respect to Time

We have heretofore established a new pair of dimensional relationships: the magnetic inductance, L, in Henry, and the electro-static capacity, C, in Farad. Derived from these dimensional relations is a pair of electrical laws.

(I) The Law of Dielectric Proportion: the capacitance, C, is the ratio of the quantity of dielectric induction, Psi, in Coulomb, to the magnitude of the electro-static potential, e, in Volt,

(1) C = Psi / e, in Coulomb per Volt, or Farad.

(II) The Law of Magnetic Proportion: the inductance, L, is the ratio of the quantity of magnetic induction, Phi, in Weber, to the magnitude of the M.M.F., i, in Ampere,

(2) L = Phi / i, in Weber per Ampere, or Henry.

Through algebraic re-arrangement, a pair of secondary dimensional relations alternately define, in a new form, the total dielectric induction, Psi, in Coulomb, and the total magnetic induction, Phi, in Weber. For the dielectric induction,

(3) Psi = e·C, in Coulomb, or Volt-Farad.

And for the magnetic induction,

(4) Phi = i·L, in Weber, or Ampere-Henry.

Hence, the total dielectric induction, Psi, in Coulomb, is the product of the potential, e, in Volt, and the capacitance, C, in Farad. Likewise, the total magnetic induction, Phi, in Weber, is the product of the M.M.F., i, in Ampere, and the inductance, L, in Henry:

Psi equals e times C
Phi equals i times L

In the expression of the variation of the parameters which constitute the dimensional relations involving capacitance and inductance, two distinct conditions can exist. In the first, the capacitance and the inductance are time invariant, and the variation with respect to time resides in the relations of the potential, e, and of the M.M.F., i. From this condition are derived the susceptance and the reactance. In the alternate form of expression, it is the potential, e, and the M.M.F., i, that are time invariant, and the variation with respect to time resides in the relations of capacitance and inductance as geometric co-efficients: geometry in time variation.

In general, time invariance of L and C, or time invariance of e and i, each can be considered a limiting case. Each can be in variation with respect to time at its own individual time rate. That is, for the dielectric both C and e can be in variation, and for the magnetic both L and i can be in variation.

Consider the A.C. induction motor. Here is a form of magnetic inductance in which both the inductance, L, and the M.M.F., i, are in time variation – L with the rotational geometric variation, and i with the rotational variation of the M.M.F. The difference between the two rotational frequencies is called the slip frequency. The rotor continuously falls behind the rotation of the magnetic field, dragging energy out of this field and delivering it to the output shaft of the motor.

Considering the pair of primary dimensional relations, it is, for the dielectric induction,

(5) Farad per second, or Siemens.

And for the magnetic,

(6) Henry per second, or Ohm.

It is established that a distinct pair of conditions exists with regard to the variation with respect to time: either the capacitance or inductance is in variation, or the potential or M.M.F. is in variation, with respect to time. For the condition of time-invariant L and C it is given,

(7) Farad per second, or Siemens: the Susceptance, B;

(8) Henry per second, or Ohm: the Reactance, X.

In the second case the L and C are in variation with respect to time, and the forces, i and e, are held constant, or time invariant.
Here the variation with respect to time exists within the metallic-dielectric geometry itself. This produces a variation in the geometric co-efficients of capacitance or inductance. These relations are given as,

(9) Farad per second, or Siemens: the Conductance, G;

(10) Henry per second, or Ohm: the Resistance, R.

This CONDUCTANCE, G, and this RESISTANCE, R, represent the relations derived from the time variation of capacitance and from the time variation of inductance, respectively. It is through this form of parameter variation that the energy stored in the electrical field bounded by the geometric structure is given to an external form. This is to say, energy is taken out of the electric field and delivered elsewhere. For a closed system, the energy stored within the electric field is lost, or dissipated, from this system. It is then ENERGY LEAKAGE from the closed system.

Considering the condition of a time-invariant, or stationary, geometric structure that exhibits dissipation of the energy stored within the electric field bound by the structure, the conductance, G, and the resistance, R, are the representations of energy leakage from the dielectric and magnetic fields, respectively. For example, consider one span of a "J carrier" open-wire transmission pair. Here the conductance, G, is the "leakage conductance" of the glass telephone insulator, and the resistance, R, is the "electronic resistance" of the copperweld telephone wire. These represent the energy dissipation of one span of line. This conductance, G, represents a "molecular loss" WITHIN the glass of the insulator. This resistance, R, represents a "molecular loss" WITHIN the metal of the wire.

Hence it is the molecular losses of the metallic-dielectric geometry itself that give rise to an energy leakage from a closed system. The molecular agitation and cyclic hysteresis exist within the molecular dimensions of the physical mass of the bounding geometric structure. These consist of a multitude of minute variations of the capacitance and inductance of the geometric form. On a microscopic level the material substance of this form is indefinite, a kind of blur in space, due to the multitude of minute variations of position in space. These tiny motions, through parameter variation, convert the energy stored in the electric field into random patterns of radiation.

By experiment it can be shown that this energy leakage exists in proportion to the temperature of the material form storing energy within its bound electric field. In general, the electro-static potential, e, in Volt, renders the insulators hot; the magneto-motive force, i, in Ampere, renders the wires hot. Also, it is found that this heating increases with increasing frequency of the potential, e, or the M.M.F., i. It is here where the prevailing concept of the "electron" is to be found. Hence it is the motions of the electrons that give rise to the energy loss in an electrical system. Electrons represent energy dissipation. However, the pedant, the mystic, and the dis-informer all tell us that the electron is what conveys energy – the complete opposite!

Break – more to follow
<urn:uuid:bbc4c04f-df4d-4fa8-b913-5a8a4f1c066c>
2.734375
1,578
Comment Section
Science & Tech.
43.909219
CALIFORNIA has gained a dubious honour. The Golden State has lost 91 per cent of its wetlands over the past 200 years, a greater proportion than any other area on Earth, according to the conservation group Wetlands International. The statistics were released this week in Brisbane at the sixth gathering of the signatories to the Ramsar Convention on the world's wetlands. The convention, drawn up in 1971, has been signed by 93 countries which between them have nominated more than 800 sites as wetlands of international importance requiring protection.

"The extent of wetland losses in the US makes depressing reading," says Michael Moser, director of Wetlands International. When the country was first settled, it had about 90 million hectares of wetlands. Today, just 40 million hectares remain unravaged by dams, drainage, and the diversion of water to farms and cities. Ohio, with 90 per cent of its wetlands gone, follows ...
<urn:uuid:fba9a00b-a187-4597-9ad4-c6c2a31454d5>
3.453125
213
Truncated
Science & Tech.
51.37125
C++ vs. C

The C++ programming language brings improvements over its predecessor C, but fans of other languages point out that C++ is still not perfect. Over a decade after the 1998 standardization of the C++ programming language, the C++ vs. C debate continues.

Compared to C, C++ has a bunch of new language features with little or no runtime overhead, because they are translated to code equally as efficient as the equivalent C code:
- function overloading
- STL, the part of the C++ standard library with containers and algorithms
- references, needed by operator overloading

A few features are as efficient as C yet still rawther deceptive and easy to misuse because the "simple" syntax hides how much code is actually being generated:
- operator overloading, when compared to ordinary function-call syntax
- virtual (polymorphic) methods, when compared to C function pointer tables, especially the pure virtual method's undefined behavior that some compilers may implement as an exception; a polymorphic class is a struct whose first member is a pointer to a table of function pointers, and that's about the only difference
- templates instantiated several times, when compared to using the preprocessor to instantiate multiple copies

C++ also has some features requiring possibly expensive runtime library support, chiefly exceptions and the <iostream> library, discussed below.

Templates have a couple drawbacks:
- Type names in error messages become far more difficult to interpret. Common implementations may expand a template type name fully in diagnostic messages even if the source code accesses the type through a typedef. For example, an error message involving the std::string type is likely to provoke this reaction: "basic_string? I'm using C++, not BASIC! And what the fsck is char_traits?" (True, implementations are not the language, but a language is only as good as its best free implementation.)
- Programmers can lose track of how many different type combinations they have instantiated a template for, causing code size to balloon. There is a common extension called extern template allowing for explicit instantiation, but it's not in C++98, and not all compilers support it.

Exceptions (throw) also have a couple drawbacks:
- Though exceptions have little to no runtime speed penalty in a modern C++ compiler, the size of the required library support might cause a problem on embedded or handheld devices with less than about a megabyte of RAM.
- C++98 has no counterpart to the finally keyword of Java and Python. True, there isn't as much need for finally in C++ as in languages that rely on a garbage collector, given the idiom of allocating resources in constructors that C++'s deterministic destruction allows. But a method often still needs to restore the object's fields to a consistent state before eating or rethrowing the exception. C++0x addresses this by letting the programmer build a scope-guard with std::shared_ptr and lambda expressions.

On platforms without virtual memory, a program must be aware of possible out-of-memory conditions. The STL allows passing an allocator to all containers, but most implementations appear to exhibit undefined behavior when allocate() returns 0 the way new(std::nothrow) does. I've been told one STL implementation can be built with nothrow in mind: STLPort. EA Games created an out-of-memory-aware allocator.

The <iostream> library is another divisive issue. It was envisioned as a type-safe alternative to <cstdio>, but implementations are hairy, bloated, and inefficient.
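To make the size comparisons below concrete, they contrast minimal programs along these lines (a sketch, not the blog's actual test code; the two snippets are separate programs):

    // hello_cstdio.cpp - the C-style I/O flavor
    #include <cstdio>

    int main() {
        std::printf("Hello, world!\n");
        return 0;
    }

    // hello_iostream.cpp - the iostream flavor
    #include <iostream>

    int main() {
        std::cout << "Hello, world!\n";
        return 0;
    }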
This goes double if you have to use a statically linked implementation of the C++ standard library, either because the operating system provides no C++ standard library (e.g. handheld video game systems) or because your compiler's C++ ABI differs from that of the operating system publisher's own development tools (e.g. MinGW). Hello World programs compiled with -Os and statically linked libstdc++ in one version of MinGW resulted in 5,632 bytes for <cstdio> but 266,240 bytes for <iostream>. A devkitARM project targeting Game Boy Advance had similar results: 5,156 bytes for C-style I/O and 253,652 bytes for <iostream>; even removing some unreachable code with -Wl,--gc-sections couldn't get it below 180,032 bytes. (For comparison, the GBA's main RAM is 262,144 bytes.)

These tests are with GNU libstdc++, which initializes the date, time, and money aspects of a locale for each stream even if the program never shifts a date, time, or money object into the stream. This is because it was conceived before templates and instead uses virtual inheritance and "facets", which C++ implementations can't optimize well. Some third-party C++ standard library implementations such as uClibc++ are designed for space efficiency and leave out features such as locale support that aren't as useful in small-memory systems.

Yet some C++ fanboys claim that anything using good old <cstdio> instead of new-fangled <iostream> isn't in the spirit of C++, whatever that means. They cling to item 2 in the second edition of Scott Meyers' Effective C++, which promotes <iostream> over <cstdio>, and ignore item 23 of his sequel ("consider alternative libraries"). It appears that Meyers eventually recognized that <iostream> is imperfect and removed item 2 from the third edition.

"As an embedded programmer, I shun C++." -- Arlet, 2011-11-01
<urn:uuid:bbd4a4d6-da13-48ac-9de4-cb95148b5998>
3.734375
1,173
Personal Blog
Software Dev.
39.893954
PHOTOCHEMICAL REACTION: AMMONIUM OXALATE AND IODINE

The promotion of a chemical reaction by light is investigated. This experiment is suitable for first- or second-year courses. The effect of light on the reaction between ammonium oxalate and iodine is studied by subjecting samples of the reaction mixture to light from various sources. These are then compared to a sample kept in the dark.

Fifteen to twenty minutes to set up; two hours for the reaction to occur.

Tincture of iodine should be used with care; iodine stains and irritates skin. Oxalic acid is poisonous. Ammonia vapors are irritating. Goggles must be worn throughout the experiment.

- household ammonia
- wood bleach, oxalic acid (3/4 teaspoon in 25 mL H2O)*
- tincture of iodine*
- small beaker
- medicine dropper
- aluminum foil
- test tubes

- Pure oxalic acid can also be obtained from hardware stores and radiator or motor shops. Zud cleanser is another source of oxalic acid.
- A solution of 0.15 g iodine in 30 mL ethanol can be used in place of tincture of iodine.

Solutions should be flushed down the drain with plenty of water.

A photochemical reaction is one in which light energy is required for the reaction to occur. In the reaction under investigation, the ammonium oxalate reacts with iodine, forming ammonium iodide and releasing carbon dioxide.

- Add 25 mL of household ammonia to 25 mL of oxalic acid solution to produce an ammonium oxalate solution.
- In each of four test tubes place 2 mL of the prepared ammonium oxalate solution.
- Wrap one test tube with a piece of aluminum foil large enough to allow part to cover the top of the test tube.
- Using a medicine dropper, add 10 drops of the tincture of iodine to each of the test tubes.
- As quickly as possible, cover the top of the wrapped test tube and put it in a drawer or some dark place.
- Expose one unwrapped test tube to strong sunlight, another to fluorescent light, and the third to incandescent light.
- After a couple of hours, compare the colors of the four solutions.

(NH4)2C2O4(aq) + I2(aq) → 2 NH4I(aq) + 2 CO2(g)

The oxalate ion is oxidized to CO2 and the I2 is reduced to iodide.

Trials in which the type of light, iodine source, and oxalic acid source varied were conducted, and observations of the color changes were made. After the addition of iodine (step 4) the color was orange. Results were as follows:
- Sunlight lightened the color to a faint yellow (reagent-grade oxalic acid) or colorless (Zud cleanser) after two hours.
- Fluorescent light caused no change (reagent-grade oxalic acid) or a light yellow color (Zud cleanser) after six hours.
- Incandescent light did not cause any change after six hours.
- Samples kept in the dark remained orange.
- Tincture of iodine and an iodine-alcohol solution gave identical results.

Alyea, H.N. and Dutton, F.B., Tested Demonstrations in Chemistry, 1962, p. 82.

- Reagent-grade oxalic acid, C2H2O4·2H2O, does not dissolve easily in water.
- Some difficulties obtaining consistent color changes occurred in tests on a cloudy day.
- When two consecutive lab periods are not available, one class can read the results from the solutions prepared by the preceding class.
- For additional study, students might investigate the effect of the concentration of iodine upon the time required for the reaction to occur, or the effect of the concentration of iodine upon the appearance of the products. It was found that when excess amounts of iodine were added to the saturated solution of oxalic acid, colorless crystals eventually precipitated from the solution.
Pickles, A., J. Chem. Ed. 23, 347 (1935). Submitted by Robert Kemnitz, S. Margaret Suerth, Irene Walsh, and Doug Wilbur Woodrow Wilson Leadership Program in Chemistry The Woodrow Wilson National Fellowship Foundation CN 5281, Princeton NJ 08543-5281
<urn:uuid:0296cbf4-aa71-4313-becc-3fbea09f2b2d>
3.5
941
Tutorial
Science & Tech.
48.901213
3-D Fly-Through of Cassiopeia A

Narrator (April Hobart, CXC): For the first time, a multiwavelength three-dimensional (3-D) reconstruction of a supernova remnant has been created. This stunning visualization of Cassiopeia A (Cas A), the result of an explosion approximately 330 years ago, uses X-ray data from Chandra, infrared data from Spitzer and pre-existing ground-based optical data. It begins with an artist's rendition of the neutron star previously detected by Chandra. The green region is mostly iron observed in X-rays. The yellow region is a combination of argon and silicon seen in X-rays, optical, and infrared - including jets of silicon - plus outer debris seen in the optical. The red region is cold debris seen in the infrared. Finally, the blue reveals the outer blast wave, most prominently detected in X-rays.
<urn:uuid:198b8c8e-3594-49c5-8ff9-2dd0cf63bd82>
3.015625
188
Truncated
Science & Tech.
47.303022
Big Picture Science - That's Containment! We all crave power: to run laptops, charge cell phones, and play Angry Birds. But if generating energy is easy, storing it is not. Remember when your computer conked out during that cross-country flight? Why can’t someone build a better battery? Discover why battery design is stuck in the 1800s, and why updating it is key to future green transportation (not to mention more juice for your smartphone). Also, how to build a new type of solar cell that can turn sunlight directly into fuel at the pump. Plus, force fields, fat cells and other storage systems. And: Shock lobster! Energy from crustaceans? - Dan Lankford – Former CEO of three battery technology companies, and a managing director at Wavepoint Ventures - Jackie Stephens – Biochemist at Louisiana State University - Kevin MacVittie – Graduate student of chemistry, Clarkson University, New York - Nate Lewis – Chemist, California Institute of Technology - Alex Filippenko – Astronomer, University of California, Berkeley You can listen to this and other episodes at http://radio.seti.org/, and be sure to check out Blog Picture Science, the companion blog to the radio show.
<urn:uuid:7ba715a2-d2d1-43b1-be1b-c37dfb58eb4b>
2.796875
259
Truncated
Science & Tech.
42.940498
To understand the drivers and consequences of climate change on timescales important to humans, ice-core paleoclimate records, the glacial system and local climate patterns were analysed at Skinner Saddle. A GPR/GPS survey (8, 35, 200 and 500 MHz) was carried out to provide an image of the internal layering of the glacier and the topography of the ice-rock interface beneath. A 17 m firn core was ... recovered and analysed for oxygen and hydrogen isotope ratios, major cations, anions and methylsulfonates, trace elements, dust concentration and mineralogy. An automatic weather station was installed on 1 November 2007, measuring air temperature, snow accumulation, dew point temperature, incoming solar radiation and snow temperature. It is planned to recover an intermediate-depth ice core (~400 m) from this location.
<urn:uuid:47b1c370-2ad9-46eb-90de-ba0d18b115ac>
2.984375
183
Knowledge Article
Science & Tech.
26.826589
Mobile testing has come a long way since the days when testing mobile web applications was mostly manual and took days to complete. Selenium WebDriver is a browser automation tool that provides an elegant way of testing web applications. WebDriver makes it easy to write automated tests that ensure your site works correctly when viewed from an Android or iOS browser.

For those of you new to WebDriver, here are a few basics about how it helps you test your web application. WebDriver tests are end-to-end tests that exercise a web application just like a real user would. There is a comprehensive user guide on the Selenium site that covers the core APIs.

Now let's talk about mobile! WebDriver provides a touch API that allows the test to interact with the web page through finger taps, flicks, finger scrolls, and long presses. It can rotate the display and provides a friendly API to interact with HTML5 features such as local storage, session storage and application cache.

Mobile WebDrivers use the remote WebDriver server, following a client/server architecture. The client piece consists of the test code, while the server piece is the application that is installed on the device. WebDriver for Android and iPhone can be installed following these instructions. Once you've done that, you will be ready to write tests.

Let's start with a basic example using www.google.com to give you a taste of what's possible. The test below opens www.google.com on Android and issues a query for "weather in san francisco". The test will verify that Google returns search results and that the first result returned is the weather widget for San Francisco.

// Create a WebDriver instance with the activity in which we want the test to run.
WebDriver driver = new AndroidDriver(getActivity());
// Open the web page
driver.get("http://www.google.com");
// Look up the search box by its name
WebElement searchBox = driver.findElement(By.name("q"));
// Enter a search query and submit
searchBox.sendKeys("weather in san francisco");
searchBox.submit();
// Make sure that Google returned search results
WebElement resultSection = driver.findElement(By.id("ires"));
List<WebElement> searchResults = resultSection.findElements(By.tagName("li"));
assertFalse(searchResults.isEmpty());
// Ensure that the first result shown is the weather widget
WebElement weatherWidget = searchResults.get(0);
assertTrue(weatherWidget.getText().contains("Weather for San Francisco, CA"));

Now let's see our test in action! When you launch your test through your favorite IDE or using the command line, WebDriver will bring up a WebView in the foreground, allowing you to see your web application as the test code is executing. You will see www.google.com loading, and the search query being typed in the search box.

Next, the touch API. The snippet below flicks the page sideways (the element IDs here are illustrative, and getBuilder is a small helper that returns a touch-actions builder):

// Flick 400 pixels left at normal speed
WebElement toFlick = driver.findElement(By.id("firstImage")); // illustrative ID
Action flick = getBuilder(driver).flick(toFlick, 0, -400, FlickAction.SPEED_NORMAL).build();
flick.perform();
WebElement secondImage = driver.findElement(By.id("secondImage"));

Next, let's rotate the screen and ensure that the image displayed on screen is resized. Let's also take a look at the local storage on the device, and ensure that the web application has set some key/value pairs:

LocalStorage local = ((WebStorage) driver).getLocalStorage();
// Ensure that the key "name" is mapped to a value
assertNotNull(local.getItem("name"));

What if your test reveals a bug? You can easily take a screenshot for help in future debugging; a minimal sketch appears at the end of this article.

WebDriver has two main components: the server and the tests themselves. The server is an application that runs on the phone, tablet, emulator, or simulator and listens for incoming requests.
It runs the tests against a WebView (the rendering component of mobile Android and iOS) configured like the built-in browsers. Your tests run on the client side, and can be written in any language supported by WebDriver, including Java and Python. The WebDriver tests communicate with the server by sending RESTful JSON requests over HTTP. The tests and server pieces don't have to be on the same physical machine, although they can be. For Android you can also run the tests using the Android test framework instead of the remote WebDriver server.
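To make the client/server split concrete, here is a minimal sketch of a test that connects to the server over the network and captures the promised screenshot. The hub URL is a placeholder, and the Augmenter step is the usual way remote drivers expose screenshot support; adjust both to your setup.

import java.io.File;
import java.net.URL;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.Augmenter;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

// Connect to the WebDriver server running on the device or emulator.
// The URL below is a placeholder; use the address your server reports.
WebDriver driver = new RemoteWebDriver(
    new URL("http://localhost:8080/wd/hub"), DesiredCapabilities.android());
driver.get("http://www.google.com");

// Remote drivers expose screenshots through the Augmenter wrapper.
WebDriver augmented = new Augmenter().augment(driver);
File shot = ((TakesScreenshot) augmented).getScreenshotAs(OutputType.FILE);
System.out.println("Screenshot saved to " + shot.getAbsolutePath());

driver.quit();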
<urn:uuid:cabdf1f5-3a61-47d5-a688-c52417102a1e>
2.890625
928
Documentation
Software Dev.
51.986587
From Math Images

The image to the right is of the character "Flubber" from the 1997 Disney movie of the same title.

Basic Description
The Flubber image rendered is an implicit surface. Implicit equations are useful in computer graphics for representing these smooth shapes, which we call implicit surfaces. The character is defined by implicit equations, which a rendering method uses to produce the output of this image.

A More Mathematical Explanation
An implicit function is a function in which the dependent variable has not been given explicitly in terms of the independent variable. An implicit equation is different from an explicit equation in that one variable may depend on the value of another, but one is not given explicitly (e.g., an equation such as x^2 + y^2 = 1). An equation such as x^2 + y^2 + z^2 = 1 produces a sphere: all of the points are an equal distance from the center of the object. Similarly, the same equation with a smaller radius produces a smaller spherical object. An equation blending the two produces a model of the two objects blended together. A further equation, involving the golden ratio φ, produces an even more complicated form. In this way, complex implicit functions can be made to describe a variety of complex objects, which has proved useful in computer-aided design.

Applications for Implicit Surfaces
Implicit surfaces are useful in the modeling of medical data, fluid flow (for engineering), interactive characters, and numerous other applications. These surfaces, which have been developing since the early 1970s, can be rendered using ray tracing or various other algorithms, and can range in complexity from simple geometric objects to complex objects which create full scenes. Implicit methods simplify some modeling operations like blending, warping, collision detection, et al. By using implicit methods to define 3-D objects, it has become easier to represent smooth objects and curves. Originally, trying to model smooth 3-D objects required using geometric primitives (lines, points, pyramids, etc.), which often were unable to create the smoothness which was desired. Implicit methods have eased the process of modeling smooth 3-D objects.

- Metaballs Applet - At the top half of the applet the metaballs are visible. The graph at the bottom corresponds to a cross section of the metaball field at the red line in the top half.
- Do It Yourself - See this applet if you would like to try graphing your own implicit surfaces.
- There are currently no teaching materials for this page. Add teaching materials.

Shirley, Peter, et al. Fundamentals of Computer Graphics. 3rd ed. Natick, Massachusetts: A K Peters, 2009. Print.

Leave a message on the discussion page by clicking the 'discussion' tab at the top of this image page.
<urn:uuid:fc76cbcf-8629-406c-899d-74bf15ba65fa>
3.703125
571
Knowledge Article
Science & Tech.
41.771709
The following species (possibly) occur in the North Sea, but only the polyp stage is known; the exact medusal stage of each of these species is unknown (Hansson, 1998a).
- Tubularia bellis Allman, 1863. Medusa unknown; the polyp occurs in the lower eulittoral and is common in the whole North Sea.
- Tubularia indivisa Linnaeus, 1758. Medusa unknown; the polyp occurs from 0 to at least 280 m depth and is common in the whole North Sea.
- Tubularia larynx Ellis and Solander, 1786. Medusa unknown; the polyp occurs from the lower shore to at least 100 m depth and is common in the whole North Sea. Detached hydranths were occasionally found in the neritic plankton of the Dutch Waddenzee.
- Tubularia regalis Boeck, 1860. Medusa unknown; depth range unknown. Concerning the North Sea, the polyp possibly occurs in the Oslofjorden; in the Atlantic Ocean it is found in the NE Atlantic, NE Greenland, the Shetlands, along the Norwegian coast, Spitzbergen, and Nova Zembla.
<urn:uuid:a83b8386-a359-4eea-9ee4-db8b7ad2c4e5>
2.6875
263
Structured Data
Science & Tech.
53.189139
Web Programming Tutorials This is a relatively new section of the site, and I'm in the process of refining and expanding these tutorials. Feel free to browse, but be on the lookout for new tutorials, as well as revisions to existing ones very soon! Covers a basic introduction to the structure and syntax of this popular data structure notation, including how to convert strings into objects and vice-versa. Very useful in conjunction with the Ajax tutorial. Covers the use of an ajax() function to extract output from an httpRequest, as well as some basic examples of how the object can be handled upon retrieval. Covers the development of a mapping application using the Google Maps API. Introduction to displaying a map, placing markers, and using info windows. Leads into the use of object-oriented strategies for managing maps, markers and info windows in a more organized way. A basic overview of what PHP is, how to set up an environment in which to learn and use it, and how to create your first "hello world" page. Covers basic PHP syntax, including declaring and setting variables, working with strings and arrays, looping and conditionals, and other fundamental concepts necessary for further work in more specific areas of PHP. Explains how to connect to a MySQL database, run queries, and handle results. Also includes an introduction to the basic SQL syntax needed to perform simple selects, inserts, updates and deletions. (For more in-depth discussion of SQL syntax, watch for SQL tutorials coming soon.) Covers how to handle URL variables, catch form values, and manage state using session and environment variables. Provides a brief introduction to using MySQL to develop database-driven web applications. Provides an overview of object-oriented programming (OOP) and an introduction to creating custom classes of objects in PHP with their own properties and methods.
<urn:uuid:391603b8-b4ee-4726-b237-e7b9c2d09d0b>
2.8125
383
Content Listing
Software Dev.
41.85223
Derivatives in Coordinates

Let's take the derivative and see what it looks like in terms of coordinates. Say we have a smooth manifold M and a smooth map f from an open subset of M to another smooth manifold N. If p is any point, we define the derivative f_{*p} as before. Now, if (U, x) is a coordinate patch — even if there isn't a single coordinate patch on the whole domain of f we can restrict down to a coordinate patch containing p — we get a basis of coordinate vectors at p. Similarly, if (V, y) is a coordinate patch around f(p) we get a basis of coordinate vectors at f(p). We want to write down the matrix of f_{*p} in terms of these two bases.

So, the obvious path is to take one of the coordinate vectors ∂/∂x^i at p, hit it with f_{*p}, and write the result out in terms of the coordinate vectors ∂/∂y^j at f(p). The generic problem, then, is to calculate the j-th component — the one corresponding to ∂/∂y^j — of f_{*p}(∂/∂x^i). But we know that this coefficient comes from sticking y^j into this vector and seeing what pops out! We're taking the i-th partial derivative of the j-th component of the function y ∘ f ∘ x^{-1}, which goes from the open set x(U) ⊆ ℝ^m into ℝ^n, where m and n are the dimensions of M and N, respectively.

Like we saw for coordinate transforms in place, this is just the Jacobian again. So if we want to write out the derivative in terms of local coordinates, we first write out our local coordinate version of f as a function from one Euclidean space to another, and then we take the Jacobian of that function at the appropriate point.
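As a concrete illustration (the example map here is an arbitrary choice, not one fixed by the discussion above), consider the map f: \mathbb{R}^2 \to \mathbb{R}^2 given in coordinates by

f(x, y) = \left( x^2 y,\ \sin(xy) \right)

Its matrix in the coordinate bases is the Jacobian

J_f(x, y) = \begin{pmatrix} 2xy & x^2 \\ y\cos(xy) & x\cos(xy) \end{pmatrix}

and the derivative at a particular point is obtained by evaluating there; at (1, \pi), for instance,

J_f(1, \pi) = \begin{pmatrix} 2\pi & 1 \\ -\pi & -1 \end{pmatrix}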
<urn:uuid:69b9db8f-e380-4498-b6ae-b5b23548457c>
2.890625
309
Tutorial
Science & Tech.
49.917371
NASA's Mark III Spacesuit

Although heavier than earlier suit designs (59 kilograms / 130 pounds for the suit and an additional 15 kilograms / 33 pounds for the Portable Life Support System), the Mark III's selling point is its superior mobility. By combining soft suit joints, hard joints, and bearings, expected lunar or Martian surface mobility tasks can be performed within acceptable levels of effort. For instance, the task of kneeling and picking up an object would not be possible with the Apollo A7L or Shuttle EMU suits. With the Mark III (in lower gravity, of course) astronauts could perform handstands or somersaults in the suit. Credit: NASA

High in the Arctic, just below Earth's north polar ice cap, a collaboration of nearly two dozen biologists, geologists, and engineers has embarked on an expedition to Mars. The scientists and researchers are spending two and a half weeks at the Svalbard archipelago in the Arctic Ocean north of Norway. The goal of this annual expedition, called the Arctic Mars Analogue Svalbard Expedition (AMASE), is to characterize the geology, geophysical features, biosignatures, and possible life forms of volcanic centers, warm springs, and perennial rivers -- settings thought to be analogous to sites on ancient Mars. During AMASE 2006, researchers will test a modified Mark III spacesuit replica. Although the test suit is lighter than the original Mark III, it is still quite heavy on Earth -- about 35 kilograms (75 pounds). Before taking it out in the field, astrobiologists will sterilize and test the suit and then test it again afterward to see what contaminants, including from the wearer, were brought back with the suit. Scientists also will observe how the sterilization process affects the joints of the suit, as well as watch for any damage or deterioration that may happen during fieldwork. They also will test the ability to manipulate sterile sample containers without contaminating them -- a necessary procedure that needs to be worked out if humans are to search for life on Mars. Engineers from the Jet Propulsion Laboratory also plan to test life detection instruments on a rover designed to safely maneuver on cliff faces. The "cliff bot" may one day be used in concert with astronauts, so the human explorers at the Mars analog site will practice how best to coordinate human and robotic activities. While testing the spacesuit, researchers also will be trying out new tools for communication and data logging, such as a wearable computer, throat microphone, and digital display. Testing the spacesuit and the cliff bot are just two of the many objectives that the 2006 expedition team wants to achieve. Two instruments headed for Mars on the planned Mars Science Laboratory mission -- the CheMin X-ray Diffraction/X-ray Fluorescence (XRD/XRF) instrument used for mineralogical analysis, and the gas chromatograph-mass spectrometer used for atmospheric analysis -- will be tested to see how they perform in the frigid environment. The two instruments also will be used to develop protocols to search for organics. Team members also will test the Mars Microbeam Raman Spectrometer and an ultraviolet spectrometer being developed for future missions. Additionally, some team members are developing new astrobiological instruments using modern microbiological forensic techniques. One of these instruments, the Lab-on-a-Chip, is set to fly on an upcoming shuttle mission to test for molds and pathogens on the International Space Station.
One of this year's expedition sites, located in the Bockfjorden area of Svalbard at 80°N, is an intriguing place where hot meets cold. About 1 million years ago, the Sverrefjell volcano erupted through an ice sheet. Today, the ice sheet is gone and the volcano is quiet. In this dry and cold environment, hot springs still simmer, exhaling argon and helium gases from Earth's mantle. Shaped by volcanism, ice, and liquid water, Svalbard reminds us of how Mars might have once been. Volcanic activity like this could still percolate beneath the surface of Mars and may be a potential habitat for microbial life.

Satellite Image of Svalbard
This image from space shows many of the sites of the Arctic Mars Analogue Svalbard Expedition. Credit: NASA

The AMASE site has another important connection to Mars. Here, scientists found carbonate spherules that are nearly identical to those we see in the martian meteorite ALH84001 -- the meteorite which, although still controversial, houses possible evidence of simple life. The spherules found in Bockfjorden typically have iron-rich cores and magnesium-rich rims, sometimes embedded in a calcite matrix -- a texture and composition that is identical to ALH84001. Expedition scientists are looking at both abiological and biological agents that may produce the Earthly carbonate spherules to better understand what may have formed similar structures on Mars.

AMASE consists of an international crew of scientists, engineers and filmmakers. Participating members hail from the University of Oslo, Electromagnetic Geoservices (Norway), Carnegie Institute of Washington, NASA Goddard Space Flight Center, NASA Ames Research Center, NASA's Jet Propulsion Lab, Penn State, the Lunar and Planetary Institute, Indiana University, Smithsonian Institution, University of Leeds (UK), International Space Science Institute and Optic Verve. AMASE began four years ago, led by Hans Amundsen of the University of Oslo. The 2006 expedition lasts from August 8 to 22. Notes from the field are available at the NASA and Planetary Society web sites.

Related Web Pages
- New Martian Meteorite
- Life in Tiny Tunnels?
- Arctic, Antarctic, Mars
- Diving for Life Under Antarctic Ice
- New Signs of Polar Life
- Tricorder Going to Mars
- Life in Ice
- AMASE expedition TV feature
<urn:uuid:c0d8b1ce-928e-4fc5-bae7-c450eec2435d>
2.734375
1,204
Knowledge Article
Science & Tech.
29.057874
Take an amusing quiz to learn about unexpected effects of Climate Change. After each multiple choice question, you see if you were right (and the right answer if you weren't).

Add the possibility of incestuous suicidal clownfish to the list of odd results of Climate Change. Now this is where we are getting into hypothesis territory, because we know that other pollutants can influence a sea animal's behavior and sensory ..., but CO2 is only starting to be investigated in this role (4). Clownfish in particular appear to be sensitive to elevated levels of carbon dioxide when they are "settlement-stage larvae" – little fish floating around looking for a good sea anemone to call home. In a normal environment, these larvae select a home based on odors in the water, much like I did when I first moved to the city. Big fail on my part. Normally, settlement-stage clownfish larvae avoid places where their parents live, and seek out places where there are few predators. Makes sense, right? This strategy helps avoid incest and getting eaten – rules to live by. But, when reared in an elevated CO2 environment, their ability to sense these things becomes all screwy. Rather than avoiding their parents' scent, they are attracted to it (5) – boomerang kids, or is it just Chinatown? Furthermore, they are also attracted to predator scent cues rather than avoiding them (6) – oops! So, incestuous, suicidal clownfish? Maybe. There isn't any evidence yet that these clownfish would actually be so confused as to mate with their parents or get swallowed up by a hungry predator. Just because their sensory systems are impacted doesn't mean they wouldn't figure out the mistake and take off ASAP. It's the early research stage – stay tuned as the story evolves.

During the recent heat wave in Australia, ... that people can't even pump gas. Nikki Staskiewicz and Angela Blomeley were stranded in Oodnadatta — which bills itself as "the driest town [in] the driest state of the driest country" in the world — when they tried to fill up their tank, only to find the fuel vaporizing in the triple-digit heat.

Climate Destabilization creates wines that resemble disinfectant, smoked meat and dirty ashtray. ... Australia and other areas of the world are experiencing an increase in bush and wildfires, which may continue and intensify with global climate change. Smoke from those fires can travel long distances and poses a special threat for wine grapes. Grapes exposed to smoke yield wines with unpalatable aromas and tastes, sometimes described as resembling "smoked meat," "disinfectant" or a "dirty ashtray." My goodness! Strange world we live in. Great articles!

In Historic town walls crumbling 'because of climate change', we see another odd result of climate change. Defensive walls built around Ludlow in 1233 are suddenly crumbling. "It's amazing that they have stood for 800 years and the climate change that has affected them over the last couple of years has wreaked so much damage." ... last year was the second wettest year on record.

Floodwater mosquitoes the size of quarters, whose bite feels like a stab? Climate Destabilization means more floods, and apparently that brings more infestations of floodwater mosquitoes. Giant mosquitoes, the size of quarters, are likely to infest Florida this summer, according to experts from the University of Florida. These huge, biting insects are called Psorophora ciliata, more commonly known as gallinippers...
Female gallinippers lay their eggs in soil at the borders of water bodies that overflow after heavy rain, such as ponds and streams. Therefore, they are referred to as floodwater mosquitoes. The eggs can stay dry and inactive for a long time, even years, until waters are high enough to help them hatch. Florida was hit by Tropical Storm Debby last June, which resulted in flooding in several areas and the release of great numbers of gallinippers and other floodwater mosquitoes. The mosquito is native to the whole Eastern side of North America and its body is approximately half an inch long with a black-and-white color pattern, making it look like a super-sized form of the invasive Asian tiger mosquito. The species is well-known for being aggressive and having a painful bite. "The bite really hurts, I can attest to that," Kaufman said. The pain has been described as similar to being stabbed.

Another odd result of climate change: wild figs in the tropics are likely to be among the first plants there to go extinct. There are more than 700 species of wild fig in the tropics. Most can be pollinated only by a unique species of fig wasp. In turn, the wasps rely on fig plants as hosts for their eggs. Neither species can survive without the other. Now a new study from equatorial Singapore, in the journal Biology Letters, finds that the wasps are vulnerable to climate change, meaning that the wild fig plants are, too. The scientists found that temperature increases of a few degrees could cut the adult life spans of pollinating fig wasps to just a few hours, from one or two days. Around 80 degrees Fahrenheit, the mean daily temperature in Singapore, the life span of the studied wasps ranged from 11 to 24 hours. At 87.8 degrees F, the life span dropped to 6 to 18 hours, and at 93.2 degrees F, it was six hours or less.

Another Climate Destabilization bonus: increased skin cancer. More powerful storms inject water vapor into the stratosphere, which ends up destroying the ozone shield protecting us from UV. ... water vapor injected into the stratosphere by powerful thunderstorms converts stable forms of chlorine and bromine into free radicals capable of transforming ozone molecules into oxygen. Recent studies have suggested that the number and intensity of such storms are linked to climate change. "What this research does is connect, for the first time, climate change with ozone depletion, and ozone loss is directly tied to increases in skin cancer incidence, because more ultraviolet radiation is penetrating the atmosphere." Many crops, particularly staple crops grown for human consumption—including wheat, soybeans, and corn—could suffer damage to their DNA, Anderson said. Ironically, Anderson said, the discovery that climate change might be driving ozone loss happened virtually by accident.

Sewer balloon demand spikes. Our sewer systems, poorly prepared to deal with the effects of even moderate rainstorms, dump billions of gallons of raw sewage into lakes, rivers and oceans each year. Superstorm Sandy alone caused more than 10.9 billion gallons of sewage to flow untreated into the East Coast ecosystem. At the heart of this problem is a piece of antiquated infrastructure called the combined sewer, which funnels both waste-water and storm-water towards treatment plants in the same stream. During heavy rainfall, storm drains and sewage treatment plants can't handle the increased (if diluted) volume of sewage, and so into the river it goes. It's called "combined sewer overflow" (CSO). Solutions are expensive.
The back-end fix is an expansion of waste-water treatment plants, though this additional capacity lies unused during all but the most intense storms. Alternately, and at even higher cost, a city can take preventative measures by segregating disposal systems for sewage and storm-water. It no longer suffices for sewers to be mere conduits. They must be storage tanks themselves. Enter the inflatable dam. When activated, these giant sewer balloons turn roiling trunk lines into holding pens, rivers into lakes. They save raw sewage for a sunny day. U.S. cities have installed dozens of these devices over the past decade. Made of industrial-grade rubber and anchored to a cement base, the dam inflates on command to restrict sewer flow, like a plug in a giant drain. When the rain has passed, it deflates and allows CSO to flow slowly to the treatment plant. Inflatable dams have been around for more than half a century, initially developed to stall flash floods in Southern California and allow rainwater to seep into aquifers. ... as U.S. cities begin to deal with more frequent flooding from climate change, and search for handy technology to divert water. The next frontier will be rail tunnels: New York's Metropolitan Transportation Authority sustained nearly $5 billion in damage due to flooding from Superstorm Sandy. Parts of New York City were isolated for weeks. On Thursday, the MTA inflated a plug in a tunnel at the South Ferry station. Fourteen feet in diameter and 30 feet long, this giant contraption and others like it may be expected to protect the network's longest tunnels from water damage. Plugs, like inflatable dams, would use a simple air barrier to hold back millions of gallons of fluid. Reckoned to cost $400,000 apiece, they are something of a bargain.
<urn:uuid:eeed48ee-15d5-422e-97f0-2d85f1f259c3>
2.984375
1,890
Comment Section
Science & Tech.
50.578563
shaking a towel up and down would create what type of wave in the towel

An 80,000-kg airliner is flying at 900 km/h at a height of 10.0 km. What is its total energy (kinetic + potential) if the total was 0 when the airliner was at rest on the ground? (A worked sketch of this one appears at the end of this page.)

Jane and John, with masses of 53 kg and 62 kg, respectively, stand on a frictionless surface 15 m apart. John pulls on a rope that connects him to Jane, giving Jane an acceleration of 0.90 m/s² toward him. If the pulling force is applied constantly, where will Jane and John meet?

Unless otherwise stated, all objects are located near the Earth's surface, where g = 9.80 m/s².

A girl pushes a 21-kg lawn mower as shown in the figure. If F = 33 N and N = 42 N, what is the acceleration of the mower? Ignore friction.

Four identical masses of 700 kg each are placed at the corners of a square whose side lengths are 11.0 m. What is the magnitude of the net gravitational force on one of the masses, due to the other three?

A person hums into the top of a well and finds that standing waves are established at frequencies of 42, 70.0, and 98 Hz. The frequency of 42 Hz is not necessarily the fundamental frequency. The speed of sound is 343 m/s. How deep is the well? Topic: The Principles of Linear Superposition and...

The minimum distance required to stop a car moving at 38.0 mi/h is 49.0 ft. What is the minimum stopping distance for the same car moving at 66.0 mi/h, assuming the same rate of acceleration?

A boxer can hit a heavy bag with great force. Why can't he hit a piece of tissue paper in midair with the same amount of force?

A piece of glass has a temperature of 78.0 °C. Liquid that has a temperature of 42.0 °C is poured over the glass, completely covering it, and the temperature at equilibrium is 55.0 °C. The mass of the glass and the liquid is the same. Ignoring the container that holds the glass and liquid and...

Ask a new Physics Question

Tips for asking Questions
- Provide any and all relevant background materials. Attach files if necessary to ensure your tutor has all necessary information to answer your question as completely as possible.
- Set a compelling price: While our Tutors are eager to answer your questions, giving them a compelling price incentive speeds up the process by avoiding any unnecessary price negotiations.

1. Can you show me step by step how to solve this problem?
- The Earth's rate of rotation is constantly decreasing, causing the day to increase in duration. In the year 2000 the Earth takes about 0.548 s longer to complete 365 revolutions than it did in the year 1900.
- (a) What is the average angular acceleration of the Earth?
- (b) If this average acceleration remains constant, in what year will the Earth's rotation come to rest?

2. I do not understand this Ideal Gas Law and Kinetic Theory question. Can you help me set it up?
- A spherical balloon is made from a material whose mass is 3.00 kg. The thickness of the material is negligible compared to the 1.5-m radius of the balloon. The balloon is filled with helium (He) at a temperature of 305 K and just floats in the air, neither rising nor falling. The density of the surrounding air is 1.19 kg/m³. Find the absolute pressure of the helium gas.
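As flagged above, a worked sketch for the airliner question, taking g = 9.80 m/s²:

v = 900 km/h = 250 m/s
KE = ½mv² = ½ × 80,000 kg × (250 m/s)² = 2.5 × 10⁹ J
PE = mgh = 80,000 kg × 9.80 m/s² × 10,000 m = 7.84 × 10⁹ J
Total E = KE + PE ≈ 1.03 × 10¹⁰ J

The same pattern (convert the units first, then add the two energy terms) applies to most of the energy questions listed here.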
<urn:uuid:18a26721-9243-43f3-bd69-da2102b5d689>
3.203125
810
Q&A Forum
Science & Tech.
75.718962
Contact: Diana Yates, University of Illinois at Urbana-Champaign

Caption: Ambrose analyzed the teeth of two dozen mammal species found in the same ancient soil layer as Ardipithecus in order to help reconstruct its environment. A modern hippopotamus tooth is pictured.

Credit: Photo by L. Brian Stauffer, U. of I. News Bureau

Usage Restrictions: Photo may be used only with stories about the research described in the news release. Please credit: Photo by L. Brian Stauffer, U. of I. News Bureau.

Related news release: Early hominid first walked on 2 legs in the woods
<urn:uuid:91c2bb82-052b-4e5f-8911-01527f3a0b0b>
2.96875
134
Truncated
Science & Tech.
51.494
As heroic workers and soldiers strive to save stricken Japan from a new horror—radioactive fallout—some truths known for 40 years bear repeating. An earthquake-and-tsunami zone crowded with 127 million people is an unwise place for 54 reactors. The 1960s design of five Fukushima-I reactors has the smallest safety margin and probably can't contain 90 percent of meltdowns. The U.S. has six identical and 17 very similar plants. Every currently operating light-water reactor, if deprived of power and cooling water, can melt down. Fukushima had eight-hour battery reserves, but fuel has melted in three reactors. Most U.S. reactors get in trouble after four hours. Some have had shorter blackouts. Much longer ones could happen. Overheated fuel risks hydrogen or steam explosions that damage equipment and contaminate the whole site--so clustering many reactors together (to save money) can make failure at one reactor cascade to the rest. Nuclear power is uniquely unforgiving: as Swedish Nobel physicist Hannes Alfvén said, "No acts of God can be permitted." Fallible people have created its half-century history of a few calamities, a steady stream of worrying incidents, and many near-misses. America has been lucky so far. Had Three Mile Island's containment dome not been built double-strength because it was under an airport landing path, it might not have withstood the 1979 accident's hydrogen explosion. In 2002, Ohio's Davis-Besse reactor was luckily caught just before its massive pressure-vessel lid rusted through. Regulators haven't resolved these or other key safety issues, such as terrorist threats to reactors, lest they disrupt a powerful industry. U.S. regulation is not clearly better than Japanese regulation, nor more transparent: industry-friendly rules bar the American public from meaningful participation. Many presidents' nuclear boosterism also discourages inquiry and dissent. Nuclear-promoting regulators inspire even less confidence. The International Atomic Energy Agency's 2005 estimate of about 4,000 Chernobyl deaths contrasts with a rigorous 2009 review of 5,000 mainly Slavic-language scientific papers the IAEA overlooked. It found deaths approaching a million through 2004, nearly 170,000 of them in North America. The total toll now exceeds a million, plus a half-trillion dollars' economic damage. The fallout reached four continents, just as the jet stream could swiftly carry Fukushima fallout. Fukushima I-4's spent fuel alone, while in the reactor, had produced (over years, not in an instant) more than a hundred times more fission energy and hence radioactivity than both 1945 atomic bombs. If that already-damaged fuel keeps overheating, it may melt or burn, releasing into the air things like cesium-137 and strontium-90, which take several centuries to decay a millionfold. Unit 3's fuel is spiked with plutonium, which takes 482,000 years. Nuclear power is the only energy source where mishap or malice can kill so many people so far away; the only one whose ingredients can help make and hide nuclear bombs; the only climate solution that substitutes proliferation, accident, and high-level radioactive waste dangers. Indeed, nuclear plants are so slow and costly to build that they reduce and retard climate protection. Here's how.
Each dollar spent on a new reactor buys about two to ten times less carbon savings and is 20 to 40 times slower, than spending that dollar on the cheaper, faster, safer solutions that make nuclear power unnecessary and uneconomic: efficient use of electricity, making heat and power together in factories or buildings ("cogeneration"), and renewable energy. The last two made 18 percent of the world's 2009 electricity (while nuclear made 13 percent, reversing their 2000 shares)—and made over 90 percent of the 2007 to 2008 increase in global electricity production. Those smarter choices are sweeping the global energy market. Half the world's new generating capacity in 2008 and 2009 was renewable. In 2010, renewables, excluding big hydro dams, won $151 billion of private investment and added over 50 billion watts (70 percent of the total capacity of all 23 Fukushima-style U.S. reactors) while nuclear got zero private investment and kept losing capacity. Supposedly unreliable windpower made 43 percent to 52 percent of four German states' total 2010 electricity. Non-nuclear Denmark, 21 percent windpowered, plans to get entirely off fossil fuels. Hawai'i plans 70 percent renewables by 2025. In contrast, of the 66 nuclear units worldwide officially listed as "under construction" at the end of 2010, 12 had been so listed for over 20 years, 45 had no official startup date, half were late, all 66 were in centrally planned power systems—50 of those in just four (China, India, Russia, South Korea)—and zero were free-market purchases. Since 2007, nuclear growth has added less annual output than just the costliest renewable—solar power—and will probably never catch up. While inherently safe renewable competitors are walloping both nuclear and coal plants in the marketplace and keep getting dramatically cheaper, nuclear costs keep soaring, and with greater safety precautions would go even higher. Tokyo Electric Co., just recovering from $10-20 billion in 2007 earthquake costs at its other big nuclear complex, now faces an even more ruinous Fukushima bill. Since 2005, new U.S. reactors (if any) have been 100 percent-plus subsidized—yet they couldn't raise a cent of private capital, because they have no business case. They cost 2-3 times as much as new windpower, and by the time you could build a reactor, it couldn't even beat solar power. Competitive renewables, cogeneration, and efficient use can displace all U.S. coal power more than 23 times over—leaving ample room to replace nuclear power's half-as-big-as-coal contribution too—but we need to do it just once. Yet the nuclear industry demands ever more lavish subsidies, and its lobbyists hold all other energy efforts hostage for tens of billions in added ransom, with no limit. Japan, for its size, is even richer than America in benign, ample, but long-neglected energy choices. Perhaps this tragedy will call Japan to global leadership into a post-nuclear world. And before America suffers its own Fukushima, it too should ask, not whether unfinanceably costly new reactors are safe, but why build any more, and why keep running unsafe ones. China has suspended reactor approvals. Germany just shut down the oldest 41 percent of its nuclear capacity for study. America's nuclear lobby says it can't happen here, so pile on lavish new subsidies. A durable myth claims Three Mile Island halted U.S. nuclear orders. Actually they stopped over a year before—dead of an incurable attack of market forces.
No doubt when nuclear power's collapse in the global marketplace, already years old, is finally acknowledged, it will be blamed on Fukushima. While we pray for the best in Japan today, let us hope its people's sacrifice will help speed the world to a safer, more competitive energy future.

Amory B. Lovins, a 63-year-old American consultant, experimental physicist and 1993 MacArthur Fellow, has been active at the nexus of energy, resources, environment, development, and security in more than 50 countries for 35 years, including 14 years based in England. He is widely considered among the world's leading authorities on energy—especially its efficient use and sustainable supply—and a fertile innovator in integrative design. After two years at Harvard, Mr. Lovins transferred to Oxford, and two years later became a don at 21, receiving in consequence an Oxford MA by Special Resolution (1971) and, later, 11 honorary doctorates of various U.S. and U.K. universities. He has been Regents' Lecturer at the U. of California both in Energy and Resources and in Economics; Grauer Lecturer at UBC; Luce Visiting Professor at Dartmouth; Distinguished Visiting Professor at the University of Colorado; Oikos Visiting Professor of Business, U. of St. Gallen; an engineering visiting professor at Peking U.; and 2007 MAP/Ming Professor at Stanford's School of Engineering.

Tags: amory lovins, japanese nuclear disaster, nuclear, nuclear costs, nuclear energy, nuclear plants, nuclear power, nuclear power plant construction, nuclear proliferation, nuclear reactor, nuclear reactors, nuclear regulatory agency, nuclear regulatory commission, nuclear renaissance, nuclear waste
<urn:uuid:cb521cec-8acc-47d7-8dbf-4dfaa6e3dd3f>
2.75
1,759
Nonfiction Writing
Science & Tech.
45.221106
Applications and libraries/Interfacing other languages/Erlang (simple tutorial on using the Erlang FFI)

== Overview ==

The [http://hackage.haskell.org/cgi-bin/hackage-scripts/package/erlang Haskell/Erlang-FFI] enables full bi-directional communication between programs written in Haskell and Erlang. Message sends from Haskell to Erlang just look like function calls (of course), and messages from Erlang to Haskell are delivered to MVars.

== Theory of Operation ==

Because everything interesting that happens in Erlang happens as a result of sending a message, all that is required to fully interoperate with Erlang is to be able to send and receive messages using its native wire protocol. There are similar packages that allow Erlang to interoperate with programs written in C, Java, Clojure, Scheme, Emacs Lisp, Python, and Ruby. This Haskell library is distantly derived from the Emacs Lisp package Distel.

Erlang types are represented in Haskell with an algebraic data type:

 ghci> toErlang [("a", 1), ("b", 2)]
 ErlList [ErlTuple [ErlString "a",ErlBigInt 1],ErlTuple [ErlString "b",ErlBigInt 2]]
 ghci> fromErlang $ ErlList [ErlTuple [ErlString "a",ErlBigInt 1],ErlTuple [ErlString "b",ErlBigInt 2]] :: [(String, Int)]
 [("a",1),("b",2)]

== Getting Started ==

Before you can do anything with the Erlang FFI, you minimally need to start up the Erlang Port Mapper Daemon (epmd). The simplest way to do that is to start an Erlang node on the local machine. Having started Erlang, we can now create a Haskell node:

 self <- createSelf "haskell@localhost"
 mbox <- createMBox self

You are now ready to talk to Erlang.

== Low-Level Communication ==

Erlang's fundamental abstraction is an asynchronous message send. In Haskell that's:

 mboxSend mbox node pid msg

For example:

 mboxSend mbox "erlang" (Right "echo") (mboxSelf mbox, "Hello, Erlang!")

Receive messages addressed to your "process" with:

 msg <- mboxRecv mbox

You will generally initiate communication to a registered name, at which time you may receive a Pid for later use.

== High-Level Communication ==

In a real Erlang program, low-level message sends are not used for the bulk of the work. Most of the interesting things in Erlang are part of the OTP (Open Telecom Platform) libraries, and these implement higher-level protocols on top of message sends. The most important of these protocols is gen_server. When talking to a process that implements the gen_server protocol, you can either "call" or "cast" to it (in addition to still being able to do low-level message sends). A call is a two-way faux-synchronous request/response:

 reply <- genCall mbox node pid msg

A cast is a one-way notification:

 genCast mbox node pid msg

One instance of gen_server in particular is very useful: the RPC server "rex". Send rex a message containing a module name, function name, and a list of arguments, and it will (synchronously or asynchronously) call the named function in the named module, passing it the arguments supplied, and optionally returning the results to you.
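Concretely, an RPC round-trip through rex can look like this (the node name and the choice of erlang:now/0 are illustrative assumptions, and the reply comes back as a value that fromErlang can convert):

 reply <- rpcCall mbox "erlang@localhost" "erlang" "now" []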
The RPC server gives you access to nearly all of Erlang:

 reply <- rpcCall mbox node module function arguments
 rpcCast mbox node module function arguments

The library also provides a set of wrappers for making calls to Mnesia.

== Not Implemented ==

The Erlang FFI is a work in progress, and it has some shortcomings. As of this writing, it does not yet register itself with epmd. This means that, even though Erlang can call into Haskell, Haskell must initiate first contact with the Erlang node. (Otherwise Erlang simply doesn't know where the Haskell node is on the network.)

A larger issue is that the FFI does not yet implement process linking. Two processes are "linked" in Erlang if one is notified when the other terminates, and linking is the primary mechanism for handling and/or propagating errors in an Erlang system. This is in progress and should be completed soon. Until it is done, this library is best suited for situations where Haskell is consuming Erlang services. Erlang can't yet reliably consume Haskell services because there is no error notification.

== Future Plans ==

To complete the items listed in the previous section. :)
<urn:uuid:02819692-cc0e-4186-be62-70e806fa2e8f>
3.21875
1,139
Documentation
Software Dev.
41.948429
- From: Duff, A. Wilmer, A Text-Book of Physics, 5th ed., P. Blakiston's Son & Co., Philadelphia (1921) pp. 418-9.
- © Copyright 1998 R. Paselk
- 473. Standard Cells for E.M.F. Determinations. - In calibrations with the potentiometer it is necessary to have a "normal" or "standard" cell of known and constant e.m.f. The two cells used universally for this purpose are the cells devised by Latimer Clark and by Edward Weston. A form of the Clark cell is shown in Fig. 339. The positive pole is mercury (Hg), in contact with a paste of mercurous sulphate (Hg2SO4), and the negative pole is zinc in contact with a solution of zinc sulphate. When this cell is made strictly according to the specifications fixed by the national physical laboratories, it has an e.m.f. of 1.434 volts at 15°C and, for a temperature t, an e.m.f. of [1.434 - 0.0012 (t - 15)] volts.
- The Weston cell is exactly like the Clark cell except that the zinc is replaced by cadmium, and the zinc sulphate by cadmium sulphate. Its e.m.f. in the standard form is 1.0190 volts, and it has the great advantage of having practically no change of e.m.f. with temperature. No appreciable current should be taken from a standard cell, as the accompanying chemical actions cause more or less permanent changes in the cell and its e.m.f.
- © R. Paselk - Last modified 22 July 2000
<urn:uuid:97ffca7c-4e35-406f-9316-fd0c78e16ba4>
3.234375
386
Knowledge Article
Science & Tech.
85.679434
Understanding how much global warming pollution is being emitted and where those emissions are coming from is an important step to reducing emissions in Oregon. It also provides a baseline for measuring progress as the state strives to reduce its emissions.

What is Oregon's carbon footprint?
If you account for the greenhouse gases emitted in Oregon, as well as emissions associated with electricity used by Oregonians, then the total amount of greenhouse gases that Oregon is responsible for (i.e., its "carbon footprint") is currently fluctuating around 65-70 million metric tons per year. The six greenhouse gases which dominate global warming pollution are included in this total, normalized so that the relative warming "strength" of each of the gases is expressed in terms of carbon dioxide (carbon dioxide equivalent, or CO2e); under the commonly used 100-year weighting, for example, a ton of methane counts as roughly 25 tons of CO2e. As can be seen below, carbon dioxide emissions dominate in Oregon relative to the other gases and have been growing since 1990.

This graph shows how Oregon's greenhouse gas emissions have changed since 1990. It also shows that most of our emissions come from carbon dioxide.

What key sectors are responsible for Oregon's greenhouse gas emissions?
By grouping greenhouse gas emission sources into four major categories of economic activity in Oregon, and looking at the contribution of those categories over time, it is clear that no one sector dominates Oregon's carbon footprint. The transportation of goods and people accounts for the largest share, at about 37 to 38 percent of emissions in recent years. Residential and commercial activity in homes, offices, stores and the like is a close second, at around 33 to 35 percent. The industrial sector has been stable in recent years at around 20 percent. Agricultural activities have hovered at around 8 percent and represent the smallest share of emissions in Oregon.

This graph shows Oregon's greenhouse gas emissions by major sectors of our economy.

Oregon does not yet have comprehensive information on which individual emitters or facilities are the largest "point sources" of greenhouse gas emissions. In 2009 Oregon started requiring large emitters of greenhouse gases to report those emissions to the state (mandatory greenhouse gas reporting), so that information will be available in the future.

Where can I get detailed information on Oregon's greenhouse gas emissions?
You can find the greenhouse gas emission estimates used for the information above at this location. If you want a more detailed explanation of the state's greenhouse gas inventory, as well as emission forecasts and other analyses, please refer to Appendix 1 of this 2008 report to the Governor, A Framework for Addressing Rapid Climate Change.
<urn:uuid:75246645-0ab0-4bfc-a334-0cf3b332204a>
3.28125
529
Knowledge Article
Science & Tech.
30.715581
How's this for impressive: a genome pieced together from a 30,000-year-old finger bone contains fewer errors than genomes generated using samples from living people. The genome, published online today, is from an extinct group of hominins called the Denisovans. Fossils of the Denisovans, close relatives of the Neanderthals, were discovered in Siberia in 2008. A draft genome was released in 2010 by Svante Pääbo of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, which revealed that Denisovans interbred with modern humans. However, each position in the genome was read only twice, so the fine detail was unreliable. The new genome covers each position 30 times over. Pääbo plans to use it to estimate how much genetic variation was present among the Denisovans, revealing whether they suffered population crashes.
<urn:uuid:d1abe710-ed86-4f07-8d82-38818659a01a>
3.6875
317
Truncated
Science & Tech.
32.69606
Chicken Egg Shape

Can a chicken egg come out round with no point?

The shape of a chicken's egg can vary a lot, depending on the size, age and health of the chicken. My very small and VERY old (nearly 12 years) silky bantam lays eggs very occasionally - only about one a month. Her eggs are very small, about the size of a walnut, and they are almost round. They are slightly oval shaped, but the two ends are so close to the same shape that it is very difficult, or even sometimes impossible, to decide which is the 'pointy' end.

Generally the shape of the egg is determined by the pressures exerted by the walls of the oviduct (egg passage) within the hen. A good healthy hen has strong muscles, which push the egg through fairly quickly and mold the egg's shape as the shell is being formed. The 'point' is produced by the muscles pushing on the back half of the egg to move it through the passage. My old chicken has very weak muscles and her eggs move very slowly. This allows time for the egg to take the shape which is easiest for the egg - the same shape as a bubble - round.

The shape of the egg has an added advantage for the hen: it stops the egg from rolling away from the nest. In the case of some seabirds that nest on cliff ledges, the shape is so stretched that the eggs are actually tapered like a cone.
<urn:uuid:8be1cd84-6487-4b78-93ba-c49f152a6d5b>
3.28125
335
Knowledge Article
Science & Tech.
61.191737
Linux is a Unix-like computer operating system assembled under the model of free and open source software development and distribution; its development is one of the most prominent examples of free and open source software collaboration, which allows its underlying source code to be used, modified, and distributed - commercially or non-commercially - by anyone.

iOS was previously known as iPhone OS; it is an operating system for mobile devices that was developed and distributed by Apple.

Here are very basic Unix commands you must know. These Unix commands relate to file and directory management in Unix, and some are general. You will also come to know how communication takes place between Unix computers, as well as various security commands.

Practical Code Editors: Web development has become a very important topic on the web these days, and the internet has become the best source for developers to learn more about web development. Every month we bring new and recently released applications and tools for our audience. Today, we have collected 10 useful security tools and applications for developers that will help you simplify your website and development related tasks and keep your website a step ahead of the competition.

I was searching the other day for a really good code editor to use, because I found myself for the first time in the situation of having to use one. I discovered that if you search for one of these, a lot of the ones you find are outdated, deprecated and no use for modern web design. However, if you search the depths of the internet well enough you find out about some online code editors that you haven't heard about before.

Using Django 1.4? Check out the new, updated version of this post with Django 1.4-specific changes and updates. One of the things I wish I had known when starting my Django project for IllestRhyme was "How do I start a real Django project".

Benchmark (per Dictionary.com) - "an established point of reference against which computers or programs can be measured in tests comparing their performance, reliability, etc."

Seamless web search - lose less focus: 90% of the time I switch context away from Vim, it's to load Chrome to search for something. In order to make this as seamless as possible, I've added a function to my .vimrc which brings up a prompt where I can enter my search terms, hit enter and it launches Chrome and searches the search engine of my choice with the entered text. I should note - this is for Chrome on OS X; however, it could be adapted to work on other platforms and with other browsers.

Have you ever peeked into the source code of any of the applications you run every day? Ever used make install to install some application? It is always interesting to read a book about a technical topic from its creator.

This guide is aimed at computer programmers who want to master the GNU Emacs text editor. It has been said that the Emacs learning curve is not so much steep as long.
<urn:uuid:3c8325ef-3452-4df9-92fa-dd0dd0f7e812>
2.859375
636
Content Listing
Software Dev.
44.127564
In 2006, an article appeared in Science magazine reconstructing the temperature of the Northern Hemisphere back to 800 AD based on 14 smoothed and normalized temperature proxies (e.g., tree ring records). Osborn and Briffa proclaimed at the time that "the 20th century is the most anomalous interval in the entire analysis period, with highly significant occurrences of positive anomalies and positive extremes in the proxy records." Obviously, concluding that the Northern Hemisphere has entered a period of unprecedented warmth is sure to make the news, and indeed, Osborn and Briffa's work was carried in papers throughout the world and was loudly trumpeted by the American Association for the Advancement of Science (AAAS), which publishes the journal Science. A recent issue of Science contains an article not likely to receive any press coverage at all. Gerd Bürger of Berlin's Institut für Meteorologie decided to revisit the work of Osborn and Briffa, and his results raise serious questions about the claim that the 20th century has been unusually warm. Bürger argues that Osborn and Briffa did not apply the appropriate statistical tests that link the proxy records to observational data, and as such, did not properly quantify the statistical uncertainties in their analyses. Bürger repeated all analyses with the appropriate adjustments and concluded "As a result, the 'highly significant' occurrences of positive anomalies during the 20th century disappear." Further, he reports that "The 95th percentile is exceeded mostly in the early 20th century, but also about the year 1000." Needless to say, Gerd Bürger is not going to win any awards from the champions of global warming – nothing is more sacred than 20th century warming! The reconstruction of past temperatures is a science unto itself, and the library contains many journals dedicated to the field. We could easily locate an article a week presenting a temperature reconstruction from some part of the planet that would call into question the notion that the 20th century was a period of unusual warmth. You may recall many essays we presented over the past five years examining the "hockey stick" depiction of planetary temperature (little change for 900 years, and suddenly 100 years ago, the temperature shot up) so merrily adopted by Gore and many others. A large and important article appeared recently in Earth-Science Reviews regarding a long-term reconstruction of temperatures from Russia's Lake Baikal. In case you have forgotten your geography lessons, Lake Baikal is the world's deepest lake, it contains the world's largest volume of freshwater (20 percent of the global supply), and the lake has over 300 rivers flowing into it. Anson Mackay of University College London is the author of the article, and he notes that "the bottom sediments of the lake itself have never been directly glaciated. Lake Baikal, therefore, contains a potential uninterrupted paleoclimate archive consisting of over 7500 m of sedimentary deposits, extending back more than 20 million years." If that is not perfect enough, the Lake "is perhaps best known for its high degree of biodiversity; over 2500 plant and animal species have been documented in Baikal, most of which are believed to be endemic." The Lake is a long way from the moderating effects of any ocean, and therefore it should experience large climatic fluctuations over long and short periods of time.
The trick to reconstructing temperatures here involves the shell remains of planktonic diatoms that have lived in the Lake for eons. During warm periods, some species of diatom phytoplankton flourish; during cold periods, other species flourish while most reduce production. Cores from the bottom of the Lake therefore contain a high-resolution temperature record for hundreds of thousands of years, interpreted from the biogenic silica left by the plankton. Figure 1 below tells us an interesting story about the climate of the past 800,000 years, including (a) the most recent 10,000 years have generally witnessed a warming (warmer conditions are towards the right, cooler towards the left), (b) cold periods dominate the last 800,000 years, (c) climate can change rapidly, and (d) there are many periods in the past much warmer than what we have there today. Figure 1. Lake Baikal paleoclimate record from the past 800,000 years. Warmer conditions are towards the right, cooler ones towards the left (from Mackay, 2007). Of greater interest to us is what Lake Baikal can tell us about the most recent thousand years, and in particular, we are interested in the warming of the last 1000 years. Mackay notes that "between c. A.D. 850 and 1200, S. acus dominated the assemblage, most likely due to prevailing warmer and wetter climate that occurred in Siberia at this time." Well now, it certainly looks as if the Medieval Warm Period was noticed at the Lake. Next we learn that "Between c. A.D. 1200 and 1400, spring diatom crops growing under the ice decline in abundance, due in part to increased winter severity and snow cover on the lake, which is reflected in cooler early Siberian summers." The Little Ice Age then hit hard, as Mackay finds "The diatom-inferred snow model suggests significantly increased snow cover on the lake between A.D. 1200 and 1775, which mirrors for the large part increases in snow cover in China during AD 1400–1900." But here comes our favorite set of conclusions. Mackay writes "Diatom census data and reconstructions of snow accumulation suggest that changes in the influence of the Siberian High in the Lake Baikal region started as early as c. 1750 AD, with a shift from taxa that bloom during autumn overturn to assemblages that exhibit net growth in spring (after ice break-up). The data here mirror instrumental climate records from Fennoscandia for example, which also show over the last 250 years positive temperature trends and increasing early summer Siberian temperature reconstructions. Warming in the Lake Baikal region commenced before rapid increases in greenhouse gases, and at least initially, is therefore a response to other forcing factors such as insolation changes during this period of the most recent millennial cycle." The Lake Baikal study shows that warming has occurred in the most recent century, but it is certainly nothing out of the ordinary and possibly to some degree explained by non-greenhouse forcing. The Osborn and Briffa proclamation that the 20th century was somehow out of the ordinary is certainly not confirmed by the incredible reconstruction from Lake Baikal.
Bürger, G., 2007. Comment on "The Spatial Extent of 20th-Century Warmth in the Context of the Past 1200 Years." Science, 316, 1844a.
Mackay, A.W., 2007. The Paleoclimatology of Lake Baikal: A Diatom Synthesis and Prospectus. Earth-Science Reviews, 82, 181–215.
Osborn, T.J., and Briffa, K.R., 2006. The spatial extent of 20th-century warmth in the context of the past 1200 years. Science, 311, 841-844.
<urn:uuid:1c7cfbac-ecc1-4acf-8552-380ab5f3387a>
3.40625
1,529
Nonfiction Writing
Science & Tech.
45.585936
by Doris Schattschneider, Moravian College, Bethlehem, Pennsylvania (from the Discovering Geometry NEWSLETTER, vol. 7, no. 1, Spring 1996)
It is easy to show that every triangle tessellates and every quadrilateral tessellates. One way is to simply rotate the polygon by 180 degrees about the midpoint of each of the figure's sides and repeat this procedure again and again. But the regular pentagon does not tessellate, so not every pentagon tessellates. Is there at least one pentagon that will tessellate? Yes, there are many. But how many? Prior to 1968, it was thought that all tessellating pentagons could be classified into five types. But in that year R. Kershner found three more types and thought that the problem had been solved. No further discoveries were made until 1975, when Martin Gardner wrote a column in Scientific American based on Kershner's article. Soon Gardner reported the discovery of another type found by one of his readers, Richard James III. James had cleverly taken the familiar tiling by octagons and squares, separated the rows of octagons and discarded the squares, divided the octagons into four pentagons, and filled the remaining space with copies of these same pentagons. This new discovery sparked the curiosity of another reader, Marjorie Rice, who quickly began her own investigations. With no formal training in mathematics beyond high school, she soon uncovered a tenth type of pentagon that tessellates. Her method of search was completely methodical, beginning with an analysis of what was already known. She drew little pentagons to represent each of the nine types known to tessellate and, for each, wrote down the equations on angles and constraints on the sides that had to be satisfied. Then she devised a way to mark the angle relationships on each pentagon. This notation was the key to all her investigations, allowing her to easily consider all the possible cases of markings and then decide (by experimental construction) which markings might lead to a pentagon that tessellated. By 1977, Marjorie Rice had discovered three more new types of tessellating pentagons and more than 60 distinct tessellations by pentagons. Mathematics professor Doris Schattschneider of Moravian College brought Rice's research to the attention of the mathematics community and confirmed that Rice had indeed discovered what professional mathematicians had overlooked. In her article in Mathematics Magazine, Schattschneider also reported that a high school class in New South Wales, Australia, had made a project to discover equilateral pentagons that tessellate and had discovered many different types. In 1985, a fourteenth type of tessellating pentagon was discovered by Rolf Stein, a German graduate student. Are all the types of convex pentagons that tessellate now known? The tessellating pentagon problem remains unsolved. Further information on the pentagon problem can be found in the following references:
Doris Schattschneider, "Tiling the plane with congruent pentagons," Mathematics Magazine, 51 (1978).
"A new pentagon tiler," Mathematics Magazine, 58 (1985).
Doris Schattschneider, "In Praise of Amateurs," The Mathematical Gardner, D. Klarner (ed.). Belmont, CA: Wadsworth, 1981.
B. Grünbaum and G.C. Shephard, Tilings and Patterns. New York: W.H. Freeman, 1987.
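The midpoint-rotation argument in the opening paragraph is easy to verify numerically. Below is a short Python sketch (my own illustration, not part of the original newsletter): rotating a vertex p by 180 degrees about a midpoint m sends it to 2m - p, so each rotated copy of the triangle shares a complete side with the original.

# Sketch of the 180-degree midpoint rotation used to tile the plane
# with any triangle. Vertex p rotates about midpoint m to 2*m - p.

def rotate_180(polygon, m):
    """Rotate every vertex of polygon by 180 degrees about point m."""
    mx, my = m
    return [(2 * mx - x, 2 * my - y) for (x, y) in polygon]

def side_midpoints(polygon):
    """Midpoint of each side of the polygon."""
    n = len(polygon)
    return [((polygon[i][0] + polygon[(i + 1) % n][0]) / 2,
             (polygon[i][1] + polygon[(i + 1) % n][1]) / 2)
            for i in range(n)]

triangle = [(0.0, 0.0), (3.0, 0.0), (1.0, 2.0)]
for m in side_midpoints(triangle):
    print(rotate_180(triangle, m))
# Each copy shares a full side with the original triangle, and the
# triangle's three angles meet twice around each midpoint (2 x 180 = 360
# degrees), so the copies fill the plane with no gaps or overlaps.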
<urn:uuid:90b37872-cde7-4874-a97f-2aa09149f9d6>
3.984375
758
Knowledge Article
Science & Tech.
38.255958
History and Production
The name is derived from radium. Radon was discovered in 1900 by F. E. Dorn, who called it radium emanation. It was first isolated and studied in 1902 by E. Rutherford and F. Soddy. In 1908, W. Ramsay also isolated the element and named it niton; he determined its density and found it to be the heaviest known gas. It has also gone by other names, depending on which radioactive series it originates in. Since 1923, it has been called radon. It is used in the treatment of cancer and as a radioactive source in testing metal castings. It is a colorless gas. When frozen, it exhibits a phosphorescence which becomes yellow as the temperature is lowered. Radon is present in minute amounts (1 part of radon in 1 x 10^21 parts of air) in the atmosphere due to the decay of natural radium. Recently, radon has become a health concern, since radon released from soils can easily become trapped indoors. Inhaling radon has resulted in deaths from lung cancer due to its radioactive alpha emission.
Interatomic distance: -
Melting point: -71°C
Boiling point: -61.7°C
Thermal conductivity / W m^-1 K^-1: 0.00364 (27°C)
Density / kg m^-3: 4400 (m.p.), 9.73 (0°C)
Standard Thermodynamic Data (atomic gas)
Enthalpy of formation: -
Gibbs free energy of formation: -
Entropy: 176.2 J/mol K
Heat capacity: 20.8 J/mol K
Electronic configuration: [Xe] 4f^14 5d^10 6s^2 6p^6 = [Rn]
Term symbol: ^1S_0
Electron affinity: (not stable)
Electronegativity (Pauling): -
Ionization energy (first, second, third): 1037.07, -, - kJ/mol
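To get a feel for how minute "1 part in 1 x 10^21" really is, here is a back-of-envelope calculation (my own, not from the original article), using the standard figure of roughly 2.7 x 10^22 molecules per liter of air at 0°C and 1 atm:

# Back-of-envelope estimate of atmospheric radon abundance.
molecules_per_liter = 2.7e22   # air at about 0 C and 1 atm (Loschmidt constant)
radon_fraction = 1e-21         # mixing ratio quoted in the text

radon_atoms_per_liter = molecules_per_liter * radon_fraction
print(f"~{radon_atoms_per_liter:.0f} radon atoms per liter of air")  # ~27

A few dozen atoms in every liter of air: detectable only because each decay announces itself with an energetic alpha particle.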
<urn:uuid:45538e92-0f38-4111-949c-dfd5b43ef980>
3.625
425
Knowledge Article
Science & Tech.
68.808846
For those of you still looking for costume ideas, this is from the May 6th issue of Nature:
Astronomy: Dust-filled doughnuts in space
Julian Krolik, of Johns Hopkins University, is currently at the Institute of Astronomy, Madingley Road, Cambridge CB3 0HA, UK.
The first images of an extragalactic object to have been captured using infrared interferometry reveal the doughnut-shaped cloud of dust that obscures the heart of a nearby active galaxy. Active galactic nuclei are among the most exotic objects in the Universe. They radiate as much light as an entire galaxy from a region the size of the Solar System and, unlike stars, spread that light over the entire electromagnetic spectrum, from radio waves to gamma-rays. Thanks to observational advances in the past decade, we now have good evidence that the 'engine' powering each of these objects is a black hole, weighing anywhere from millions to billions of times as much as the Sun. Fortunately for the galaxies that house them (but unfortunately for distant observers like us), active galactic nuclei are often shrouded in opaque gas and dust that block a view of them from most directions. The light from them, intercepted by these dust clouds, is degraded to infrared light that tells us little about the fascinating activity deep inside. To the further frustration of astronomers, the dust clouds are so close to the active nucleus that their structure cannot easily be made out: rough estimates put their typical size on the sky at 0.01 arcseconds (a few hundred-millionths of a degree), and even the Hubble Space Telescope can't resolve anything smaller than about 0.1 arcseconds. But this barrier is at last being breached. On page 47 of this issue, Jaffe et al. [1] present infrared images of an active nucleus that have a resolution of 0.01 arcseconds, achieved through interferometry — the careful combination of images from different telescopes placed some distance apart. The obscuring dust clouds, a scant several light years from a supermassive black hole, have now come into view. Light can be thought of either as a collection of individual energy packets called photons or as an electromagnetic wave. Interferometry exploits the wave aspect of light in that it hinges on comparing the phases of electromagnetic waves from a single source when they strike different telescopes. Measuring and retaining this phase information is a formidable technical challenge, so interferometry is not a discipline for the faint-hearted. Although the technique has been in regular use for decades at radio wavelengths, the difficulties increase as the wavelength of the light shrinks. The advance into the infrared region of the spectrum has been accomplished only recently [2] and, as is usually the case, the first observations were only of nearby, comparatively bright stars [3-5]. Jaffe et al. [1] have used the interferometer formed by the European Southern Observatory's Very Large Telescope and several smaller telescopes on the same Chilean mountain [6]. With this apparatus, high-resolution infrared images of faint, distant objects have been obtained for the first time. Indeed, the data shown in Jaffe and colleagues' paper [1] are the first to be derived from infrared interferometry on anything outside the Milky Way. And what Jaffe et al. see is as interesting as the gear they built to see it with. Their images show a geometrically thick ring of dust that is extremely close to the central black hole in the nearest powerful active galaxy, NGC 1068 (Fig. 1).
The warmest dust is no more than two or three light years from the very centre of the beast.
Figure 1. The active galaxy NGC 1068.
This result is a dramatic confirmation of inferences made many years ago, but which were not entirely accepted because they were so hard to understand. Since the late 1980s, many have believed that the dust clouds surrounding active galactic nuclei are not higgledy-piggledy, but are instead arranged more or less like a thick doughnut. As a result, observers whose sight-lines fall near to the equatorial plane of any particular active galactic nucleus (most of the Universe, in fact) are relegated to seats with an obscured view of its centre; the favoured minority close to the axis of the doughnut get to see the full show unobstructed. But this picture was difficult to accept because, in a gas cool enough to permit dust to survive, the thermal motions would be too slow to keep the doughnut puffed up and thick; the gravity of the central black hole would ensure that the doughnut collapsed to look more like a CD, if the only resistance to it were random thermal motions [7]. Many speculative suggestions have been made to account for the geometrical thickness of the dust doughnut [7-10] but none has gained general credence. Partly to avoid this dynamical conundrum, alternative geometries for the dust (for example, a thin but warped disk [11, 12]) have been suggested, but these are problematic for other reasons [7]. Thanks to these new observations [1], we now know that, at least in the nearest and brightest example of NGC 1068, a thick doughnut is in fact the right shape. The fact that most active galactic nuclei cannot be seen clearly has many implications. Most obviously, it means that it is hard to find them, so censuses of massive black holes probably underestimate their numbers by as much as a factor of five. If most active nuclei are visible only in difficult-to-observe infrared light, their total power may actually be so large as to rival the total output of all the stars in the Universe [13]. After languishing for a decade largely through lack of data, this field should now see a revival, as it is refreshed by detailed infrared imaging. The dynamical problems guessed at years ago can be brought into clearer focus; with any luck, direct images of these structures may yield clues to their solution.
I often heard the sorrel nag (who always loved me) crying out, ..."Take care of thyself, gentle Yahoo."
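The 0.01-arcsecond figure follows from a simple diffraction estimate: an interferometer with baseline B resolves angles of order lambda/(2B). The numbers below are illustrative assumptions of mine (a 10-micrometre mid-infrared wavelength and a baseline of order 100 m, typical of the VLT interferometer), not values taken from the paper:

import math

# Rough angular resolution of a two-telescope interferometer.
wavelength = 10e-6   # meters; mid-infrared observing wavelength (assumed)
baseline = 100.0     # meters; telescope separation (assumed)

theta_rad = wavelength / (2 * baseline)
theta_arcsec = theta_rad * (180 / math.pi) * 3600
print(f"resolution ~{theta_arcsec:.3f} arcseconds")  # ~0.010 arcsec,
# a tenfold gain over the ~0.1 arcsec limit quoted for Hubble above.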
<urn:uuid:3c3e6e76-8ef5-4453-b804-70e238b57bf6>
2.765625
1,253
Comment Section
Science & Tech.
38.551309
Instructions: First print copies of Exercise 4, Lag-time questions, and Real-time questions. Read the text and examine the images. Write answers to the questions on your copies. When you are satisfied with your answers, type them into the computer and submit. Keep the copies as a study guide. The ocean is not a homogeneous body of water. Rather, there are layers with distinct temperatures, salinities, and dissolved gas and nutrient content, and currents with distinct characteristics that circulate water up and down within the ocean and along the ocean surface. Because these currents are large-scale features, global views such as those obtained from satellite measurements are needed to study them. In this voyage we will examine some of the techniques used to investigate ocean currents, and will begin to look at the interactions of ocean and atmosphere that create climate. Large-scale currents operate in the world ocean to redistribute heat. Warm-water currents from the equator travel toward the poles, and cold-water currents travel from the poles toward the equator. Two examples of this large-scale circulation are the North Pacific and North Atlantic gyres. Click here to obtain a diagram of the major oceanic currents. Clicking will bring up a separate window that you can keep open while you are working on the exercise. (You can also print the diagram, if you like.) If the current names are not clearly visible, you can view a similar diagram in the textbook on page 149 (Figure 8.8). How can we observe these currents? Here are three techniques. For centuries, people have thrown objects into the ocean to see where the currents will send them. Today, the technique is still used, but with a high level of technological sophistication. Buoys are dropped by ships and airplanes into the ocean, where they drift with the currents and directly measure water characteristics with built-in instruments. They are tracked by satellites in orbits far above Earth and transmit data several times a day. Figure 1. This map of the North Pacific Ocean contains data from three buoys. The data were recorded during a 10-month period between February and December 1995. The arrow in the middle of each plotted line shows the direction of each buoy's motion. The beginning of the line shows where the buoy was located in February and the end of the line shows where the buoy was located in December. I obtained the data from a web site that provides buoy latitude and longitude information and used them to plot positions of the buoys through time. NOTE: Longitudes 80-180 degrees on the left (west) side of the diagram are East Longitudes. Longitudes 180 to 80 on the right (east) side of the diagram are West Longitudes. Latitudes north and south of the equator are also shown. Because currents often have different temperatures than the surrounding water, measurements of sea-surface temperatures (SST) can be used to map currents. SSTs are easily measured from satellites. Figure 2. This image shows sea-surface temperatures (SSTs) measured by instruments on satellites. Different temperatures were assigned different colors (see color bar along right side of image; numbers are degrees Celsius). The SSTs show two currents --- the Gulf Stream (yellow to red colors) and the Labrador Current (blue colors) --- that flow parallel to the western edge of the North Atlantic Ocean, along the East Coast of the United States. The gray color is land, extending from Cuba (just south of Florida) to Cape Cod (just north of New York).
SSTs range from a cold 4 degrees Celsius (40 degrees F; dark blue color) to a moderate 17 degrees Celsius (63 degrees F; green color) to a warm 29 degrees Celsius (84 degrees F; red color). Major currents such as these can be thought of as rivers in the ocean that can transport incredibly huge amounts of water from place to place. For example, the Gulf Stream transports more than 150 million cubic meters of water per second, compared to a flow of 0.6 million cubic meters per second for all of the rivers that flow into the Atlantic Ocean. Figure 3. This image shows SSTs for the entire world ocean in August 1995, also measured from a satellite. Temperatures are in degrees Celsius. The temperature range and colors are similar to those in Figure 2. Different currents contain different amounts of life-supporting nutrients such as nitrogen and phosphorus. Because the water in some currents is more biologically productive than in other currents, we can use productivity measurements to investigate the location and shape of the currents. Figure 4. This image shows worldwide variations in primary productivity (plant growth) based on satellite measurements of the concentration of chlorophyll pigment in the water near the oceans' surface. Red, orange, and yellow colors indicate high values; purple and blue colors indicate low values (see color bar scale). Black areas are parts of the ocean where insufficient data were collected. The images incorporate data that were gathered from July through September (Northern Hemisphere summer months). Two factors affect the amount of primary productivity in the ocean: light and nutrients. Light changes seasonally, particularly at high latitudes. High nutrient levels are often observed along the edges of oceans, where upwelling currents bring nutrient-rich water to the surface. Low nutrient levels are often observed in the centers of the large oceanic gyres. Currents are just one component of the ocean that influences our climate. They are part of the system of ocean and atmosphere circulation that acts to redistribute heat on earth and to keep our planet a habitable environment for living things. Many people are concerned that human actions are upsetting the heat balance of earth. Of particular concern is our burning of fossil fuels (oil, gas, coal, etc.), which inputs large amounts of carbon dioxide (CO2) into the atmosphere. Pages 113-114 in the textbook (see Figure 6.12) explain the heat budget on earth. Pages 313-315 in the textbook (see especially Figure 15.22) explain how the addition of greenhouse gases (including CO2) could upset the balance and cause overall warming on the planet. Article 1. Click here to read an article ("Solid evidence 'greenhouse gas' heating up earth") from the San Francisco Chronicle earlier this year that describes some evidence that the planet is warming and also some reactions to this evidence. Note that this link will bring up a new window that will connect you to the SF Gate web site. Article 2. Click here to read an article ("Greenland ice cap is melting, raising sea level") about some potential impacts of global warming.
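For readers who want to reproduce the track-plotting technique behind Figure 1, here is a minimal sketch (the positions are synthetic stand-ins, not the 1995 buoy data): plot each buoy's longitude/latitude fixes in time order, one line per buoy.

import matplotlib.pyplot as plt

# Synthetic drifting-buoy fixes: (longitude in degrees east 0-360, latitude).
buoy_tracks = {
    "buoy_a": [(150.0, 35.0), (155.2, 36.1), (161.0, 37.0), (168.4, 36.5)],
    "buoy_b": [(210.0, 40.0), (215.5, 41.2), (222.0, 41.8)],
}

for name, fixes in buoy_tracks.items():
    lons = [lon for lon, lat in fixes]
    lats = [lat for lon, lat in fixes]
    plt.plot(lons, lats, marker=".", label=name)  # the line traces the drift

plt.xlabel("Longitude (degrees east)")
plt.ylabel("Latitude (degrees north)")
plt.legend()
plt.title("Drifting-buoy tracks (synthetic example)")
plt.show()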
<urn:uuid:b084f1c7-f8b9-4f8b-9f41-13e3abe00562>
4.375
1,344
Tutorial
Science & Tech.
47.212029
Science Community at the DRC. A Digital Resource Commons (DRC) community containing three collections:
Bird Sounds from the Borror Laboratory of Bioacoustics. The Borror Laboratory houses one of the largest collections of recorded animal sounds in the world. Founded by the late Dr. Donald Borror, Professor of Entomology and Zoology at The Ohio State University, the collection contains more than 30,000 recordings of over 1000 species of animals.
Dolphin Embryo Microscopic Slides. Dolphin Embryo Digitized Slides are scanned images of histological sections, captured using a digital camera connected to a microscope. Embryos were made available by the Los Angeles County Museum. The goal behind these sections is to allow researchers and others to see each section individually to study histological embryology, as well as to use a group of images to see changes within tissues.
Ohio Agricultural Experiment Station Forestry Image Collection. This collection includes 5000 forestry images from Ohio State University: photos of forestry dating back to the early 20th century. Images portray trees, cultivation, care, and the effects of human activity. Many pictures were taken in Ohio, but other states and countries are also represented.
<urn:uuid:04a4bece-0d44-45e8-b520-abcb98f37a1a>
3.046875
252
Content Listing
Science & Tech.
21.042338
This document was created by man2html using the manual pages.
Section: Linux Programmer's Manual (2)
accept - accept a connection on a socket
int accept(int s, struct sockaddr *addr, socklen_t *addrlen);
The accept function is used with connection-based socket types (such as SOCK_STREAM). It extracts the first connection request on the queue of pending connections, creates a new connected socket with mostly the same properties as s, and allocates a new file descriptor for the socket, which is returned. The newly created socket is no longer in the listening state. The original socket s is unaffected by this call. Note that any per file descriptor flags (everything that can be set with fcntl, like non-blocking or async state) are not inherited across an accept.
The argument s is a socket that has been created with socket(2), bound to a local address with bind(2), and is listening for connections after a listen(2).
The argument addr is a pointer to a sockaddr structure. This structure is filled in with the address of the connecting entity, as known to the communications layer. The exact format of the address passed in the addr parameter is determined by the socket's family (see socket(2) and the respective protocol man pages). The addrlen argument is a value-result parameter: it should initially contain the size of the structure pointed to by addr; on return it will contain the actual length (in bytes) of the address returned. If addr is NULL, nothing is filled in.
If no pending connections are present on the queue, and the socket is not marked as non-blocking, accept blocks the caller until a connection is present. If the socket is marked non-blocking and no pending connections are present on the queue, accept fails with the error EAGAIN or EWOULDBLOCK.
In order to be notified of incoming connections on a socket, you can use select(2) or poll(2). A readable event will be delivered when a new connection is attempted and you may then call accept to get a socket for that connection. Alternatively, you can set the socket to deliver SIGIO when activity occurs on a socket; see socket(7) for details.
For certain protocols which require an explicit confirmation, accept can be thought of as merely dequeuing the next connection request and not implying confirmation. Confirmation can be implied by a normal read or write on the new file descriptor, and rejection can be implied by closing the new socket. Currently only DECnet has these semantics on Linux.
There may not always be a connection waiting after a SIGIO is delivered or after select(2) or poll(2) return a readability event, because the connection might have been removed by an asynchronous network error or another thread before accept is called. If this happens then the call will block waiting for the next connection to arrive. To ensure that accept never blocks, the passed socket s needs to have the O_NONBLOCK flag set (see socket(7)).
The accept call returns -1 on error. If it succeeds, it returns a non-negative integer that is a descriptor for the accepted socket.
Linux accept passes already-pending network errors on the new socket as an error code from accept. This behaviour differs from other BSD socket implementations. For reliable operation the application should detect the network errors defined for the protocol after accept and treat them like EAGAIN by retrying. In case of TCP/IP these are ENETDOWN, EPROTO, ENOPROTOOPT, EHOSTDOWN, ENONET, EHOSTUNREACH, EOPNOTSUPP, and ENETUNREACH.
accept shall fail if:
- EAGAIN or EWOULDBLOCK: The socket is marked non-blocking and no connections are present to be accepted.
- EBADF: The descriptor is invalid.
- ENOTSOCK: The descriptor references a file, not a socket.
- EOPNOTSUPP: The referenced socket is not of type SOCK_STREAM.
- EINTR: The system call was interrupted by a signal that was caught before a valid connection arrived.
- ECONNABORTED: A connection has been aborted.
- EINVAL: Socket is not listening for connections.
- EMFILE: The per-process limit of open file descriptors has been reached.
- ENFILE: The system maximum for file descriptors has been reached.
accept may fail if:
- EFAULT: The addr parameter is not in a writable part of the user address space.
- ENOBUFS, ENOMEM: Not enough free memory. This often means that the memory allocation is limited by the socket buffer limits, not by the system memory.
- EPERM: Firewall rules forbid connection.
In addition, network errors for the new socket, as defined for the protocol, may be returned. Various Linux kernels can return other errors, and the value ERESTARTSYS may be seen during a trace.
SVr4, 4.4BSD (the accept function first appeared in BSD 4.2). The BSD man page documents five possible error returns (EBADF, ENOTSOCK, EOPNOTSUPP, EWOULDBLOCK, EFAULT). SUSv3 documents errors EAGAIN, EBADF, ECONNABORTED, EINTR, EINVAL, EMFILE, ENFILE, ENOBUFS, ENOMEM, ENOTSOCK, EOPNOTSUPP, EPROTO, EWOULDBLOCK. In addition, SUSv2 documents EFAULT and ENOSR.
Linux accept does _not_ inherit socket flags like O_NONBLOCK. This behaviour differs from other BSD socket implementations. Portable programs should not rely on this behaviour and should always set all required flags on the socket returned from accept.
The third argument of accept was originally declared as an `int *' (and is that under libc4 and libc5 and on many other systems like BSD 4.*, SunOS 4, SGI); a POSIX 1003.1g draft standard wanted to change it into a `size_t *', and that is what it is for SunOS 5. Later POSIX drafts have `socklen_t *', and so does the Single Unix Specification. Quoting Linus Torvalds:
_Any_ sane library _must_ have "socklen_t" be the same size as int. Anything else breaks any BSD socket layer stuff. POSIX initially _did_ make it a size_t, and I (and hopefully others, but obviously not too many) complained to them very loudly indeed. Making it a size_t is completely broken, exactly because size_t very seldom is the same size as "int" on 64-bit architectures, for example. And it _has_ to be the same size as "int" because that's what the BSD socket interface is. Anyway, the POSIX people eventually got a clue, and created "socklen_t". They shouldn't have touched it in the first place, but once they did they felt it had to have a named type for some unfathomable reason (probably somebody didn't like losing face over having done the original stupid thing, so they silently just renamed their blunder).
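To experiment with these semantics without writing C, the same socket/bind/listen/accept sequence can be driven from Python's socket module, which is a thin wrapper over the system calls described above. A minimal sketch (the loopback address and port are arbitrary choices of mine):

import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 5000))   # bind(2): attach a local address
server.listen(5)                   # listen(2): allow up to 5 pending connections

conn, addr = server.accept()       # accept(2): blocks until a connection arrives
print("accepted connection from", addr)   # addr plays the role of *addr above
conn.close()                       # close the new connected socket...
server.close()                     # ...the listening socket is independent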
<urn:uuid:d7e798cd-b5dc-4a43-94e4-07e17c344c40>
2.890625
1,424
Documentation
Software Dev.
45.378264
Photon wave functions
The fundamental building block of the theory of photons is the wave function for a single photon. A detailed study of the properties of a photon wave equation and its solutions has been made in a recent work that considers both classical and quantum solutions of the Maxwell equations. Properties that are expected to be satisfied by a photon wave function are enumerated and shown to be met by the formalism provided in the study: Solutions of the Maxwell equations and photon wave functions, P. J. Mohr, Annals of Physics 325, 607-663 (2010). (Preprint PDF)
Excited states of atoms have attractive properties as a potential source of information on fundamental constants and tests of theory. It has been shown that the largest sources of uncertainty in atomic levels, namely the charge radius of the nucleus and higher-order quantum electrodynamic effects, are greatly diminished in states with angular momentum 2ħ or greater. These points are discussed and results of detailed calculations, including QED effects for excited levels in hydrogen-like atoms, are given in: Fundamental Constants and Tests of Theory in Rydberg States of Hydrogenlike Ions, U. D. Jentschura, P. J. Mohr, J. N. Tan, and B. J. Wundt, Phys. Rev. Lett. 100, 160404 (2008). (Reprint PDF)
The transition frequencies in hydrogen and deuterium are among, and may actually be, the most precise quantities in physics that can be both measured and calculated. A searchable database of the theoretical predictions for these frequencies and the corresponding energy levels is available on the NIST Physics Laboratory Website. The calculated values are based on a least-squares analysis of the best available theory and experiments. This approach ensures that the covariances of the levels are taken into account, with the result that the predictions in some cases have smaller relative uncertainties than the uncertainty in the Rydberg constant. The database is available at http://physics.nist.gov/hdel. Precise Calculation of Transition Frequencies of Hydrogen and Deuterium Based on a Least-Squares Analysis, U.D. Jentschura, S. Kotochigova, E.-O. Le Bigot, P.J. Mohr, and B.N. Taylor, Phys. Rev. Lett. 95, 163003 (2005). (PDF)
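For orientation, the leading-order hydrogen frequencies that the database refines can be estimated from the textbook Rydberg formula. The sketch below is my own and comes nowhere near the QED-corrected precision of the actual database; it uses the approximate Rydberg frequency cR∞ ≈ 3.29 x 10^15 Hz:

# Bohr-model estimate of hydrogen transition frequencies.
R_FREQ_HZ = 3.2898419602e15   # Rydberg constant times c, in Hz (approximate)

def transition_frequency(n_low, n_high):
    """Leading-order frequency of the n_high -> n_low transition."""
    return R_FREQ_HZ * (1 / n_low**2 - 1 / n_high**2)

print(f"1S-2S: {transition_frequency(1, 2):.6e} Hz")  # ~2.47e15 Hz; the
# measured value agrees at this crude level, while the database carries
# many more digits together with rigorously propagated uncertainties.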
<urn:uuid:79de30bc-cd7e-4415-99bf-84b3da26c5dd>
3.171875
501
Knowledge Article
Science & Tech.
52.421731
Some students have been working out the number of strands needed for different sizes of cable. Can you make sense of their solutions?
The triangle OMN has vertices on the axes with whole number co-ordinates. How many points with whole number coordinates are there on the hypotenuse MN?
A game for 2 players. The aim of the game is to slide the green square from the top right hand corner to the bottom left hand corner in the least number of moves.
Problem solving is at the heart of the NRICH site. All the problems give learners opportunities to learn, develop or use mathematical concepts and skills. Read here for more information.
This article for teachers describes several games, found on the site, all of which have a related structure that can be used to develop the skills of strategic planning.
Jo made a cube from some smaller cubes, painted some of the faces of the large cube, and then took it apart again. 45 small cubes had no paint on them at all. How many small cubes did Jo use?
A game for 2 players with similarities to NIM. Place one counter on each spot on the game board. Players take it in turns to remove 1 or 2 adjacent counters. The winner picks up the last counter.
A game for 2 players. Set out 16 counters in rows of 1, 3, 5 and 7. Players take turns to remove any number of counters from a row. The player left with the last counter loses.
Can you describe this route to infinity? Where will the arrows take you next?
Do you notice anything about the solutions when you add and/or subtract consecutive negative numbers?
Build gnomons that are related to the Fibonacci sequence and try to explain why this is possible.
To avoid losing, think of another very well known game where the patterns of play are similar.
An article for teachers and pupils that encourages you to look at the mathematical properties of similar games.
Try entering different sets of numbers in the number pyramids. How does the total at the top change?
Pick a square within a multiplication square and add the numbers on each diagonal. What do you notice?
Imagine starting with one yellow cube and covering it all over with a single layer of red cubes, and then covering that cube with a layer of blue cubes. How many red and blue cubes would you need?
The NRICH team are always looking for new ways to engage teachers and pupils in problem solving. Here we explain the thinking behind
How many moves does it take to swap over some red and blue frogs? Do you have a method?
First of all, pick the number of times a week that you would like to eat chocolate. Multiply this number by 2...
We can show that (x + 1)² = x² + 2x + 1 by considering the area of an (x + 1) by (x + 1) square. Show in a similar way that (x + 2)² = x² + 4x + 4
Start with any number of counters in any number of piles. 2 players take it in turns to remove any number of counters from a single pile. The loser is the player who takes the last counter.
Charlie and Lynne put a counter on 42. They wondered if they could visit all the other numbers on their 1-100 board, moving the counter using just these two operations: x2 and -5. What do you
Start with any number of counters in any number of piles. 2 players take it in turns to remove any number of counters from a single pile. The winner is the player to take the last counter.
Euler discussed whether or not it was possible to stroll around Koenigsberg crossing each of its seven bridges exactly once. Experiment with different numbers of islands and bridges.
Many numbers can be expressed as the sum of two or more consecutive integers. For example, 15=7+8 and 10=1+2+3+4. Can you say which numbers can be expressed in this way?
Choose any two numbers. Call them a and b. Work out the arithmetic mean and the geometric mean. Which is bigger? Repeat for other pairs of numbers. What do you notice?
The opposite vertices of a square have coordinates (a,b) and (c,d). What are the coordinates of the other vertices?
How could Penny, Tom and Matthew work out how many chocolates there are in different sized boxes?
Triangle numbers can be represented by a triangular array of squares. What do you notice about the sum of identical triangle numbers?
The number of plants in Mr McGregor's magic potting shed increases overnight. He'd like to put the same number of plants in each of his gardens, planting one garden each day. How can he do it?
A collection of games on the NIM theme.
A package contains a set of resources designed to develop pupils' mathematical thinking. This package places a particular emphasis on "generalising" and is designed to meet the. . . .
Imagine a large cube made from small red cubes being dropped into a pot of yellow paint. How many of the small cubes will have yellow paint on their faces?
What are the areas of these triangles? What do you notice? Can you generalise to other "families" of triangles?
Choose four consecutive whole numbers. Multiply the first and last numbers together. Multiply the middle pair together. What do you notice?
Square numbers can be represented as the sum of consecutive odd numbers. What is the sum of 1 + 3 + ..... + 149 + 151 + 153?
A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target.
Four bags contain a large number of 1s, 3s, 5s and 7s. Pick any ten numbers from the bags above so that their total is 37.
If you can copy a network without lifting your pen off the paper and without drawing any line twice, then it is traversable. Decide which of these diagrams are traversable.
List any 3 numbers. It is always possible to find a subset of adjacent numbers that add up to a multiple of 3. Can you explain why and prove it?
Imagine you have a large supply of 3kg and 8kg weights. How many of each weight would you need for the average (mean) of the weights to be 6kg? What other averages could you have?
Can you find sets of sloping lines that enclose a square?
Can you find the values at the vertices when you know the values on the edges?
When number pyramids have a sequence on the bottom layer, some interesting patterns emerge...
The sum of the numbers 4 and 1⅓ is the same as the product of 4 and 1⅓; that is to say 4 + 1⅓ = 4 × 1⅓. What other numbers have the sum equal to the product and can this be so for. . . .
Jo has three numbers which she adds together in pairs. When she does this she has three different totals: 11, 17 and 22. What are the three numbers Jo had to start with?
Charlie likes tablecloths that use as many colours as possible, but insists that his tablecloths have some symmetry. Can you work out how many colours he needs for different tablecloth designs?
Think of a number, add one, double it, take away 3, add the number you first thought of, add 7, divide by 3 and take away the number you first thought of. You should now be left with 2. How do I. . . .
A little bit of algebra explains this 'magic'. Ask a friend to pick 3 consecutive numbers and to tell you a multiple of 3.
Then ask them to add the four numbers and multiply by 67, and to tell you. . . .
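As a taste of how these puzzles reward experiment, here is a quick sketch (my own, not an NRICH solution) for the consecutive-integers question above: it searches for runs of two or more consecutive positive integers summing to each n, and reveals which n have none.

def consecutive_sums(n):
    """All runs of two or more consecutive positive integers summing to n."""
    runs = []
    for start in range(1, n):
        total, end = 0, start
        while total < n:
            total += end
            end += 1
        if total == n and end - start >= 2:
            runs.append(list(range(start, end)))
    return runs

for n in range(1, 33):
    if not consecutive_sums(n):
        print(n, "cannot be written this way")  # prints 1, 2, 4, 8, 16, 32

The numbers with no representation are exactly the powers of 2, which is the pattern the puzzle is nudging you toward.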
<urn:uuid:4b6940dc-2c92-4735-8e54-5d5fc8c36196>
3.640625
1,719
Content Listing
Science & Tech.
74.1028
Jan 19, 2001, 11:34 AM Post #1 of 1
(From the Perl FAQ)
What machines support Perl? Where do I get it?
The standard release of Perl (the one maintained by the perl development team) is distributed only in source code form. You can find this at http://www.perl.com/CPAN/src/latest.tar.gz, which is in standard Internet format (a gzipped archive in POSIX tar format). Perl builds and runs on a bewildering number of platforms. Virtually all known and current Unix derivatives are supported (Perl's native platform), as are proprietary systems like VMS, DOS, OS/2, Windows, QNX, BeOS, and the Amiga. There are also the beginnings of support for MPE/iX. Binary distributions for some proprietary platforms, including Apple systems, can be found in the http://www.perl.com/CPAN/ports/ directory. Because these are not part of the standard distribution, they may and in fact do differ from the base Perl port in a variety of ways. You'll have to check their respective release notes to see just what the differences are. These differences can be either positive (e.g. extensions for the features of the particular platform that are not supported in the source release of perl) or negative (e.g. might be based upon a less current source release of perl). A useful FAQ for Win32 Perl users is http://www.endcontsw.com/people/evangelo/Perl_for_Win32_FAQ.html
<urn:uuid:82dbe679-cbe1-4bfa-8052-5768fba6ff53>
2.6875
334
Comment Section
Software Dev.
68.016458
Thursday, March 5, 2009
The generation of giant dunes explained
Ever climbed up a 200+ ft dune (I know some of you have from your memes ...) and wondered how it formed? Well, a new paper in Nature explains the controls on giant dune size. Andreotti et al. (2009) use a combination of field measurements and aerodynamic calculations to provide evidence that giant dunes are built of smaller dunes and that their terminal height is controlled by an atmospheric boundary layer. They show that large dunes grow by the amalgamation of superimposed dunes. Dune growth stops (assuming it's not the result of climatic change) through interaction with an inversion layer. Once the dune has grown large enough, it begins interacting with the inversion layer, which confines airflow to move around the dunes rather than over them. The height of this inversion layer correlates with the variation in annual temperature: the greater the variation in annual temperature, the higher the altitude of the inversion layer and the taller the giant dunes can grow. Continental dunes, with larger annual temperature changes, grow larger than coastal dunes, which have smaller annual temperature changes moderated by their proximity to the ocean. There is also a short piece on Ralph Bagnold and his seminal book "Physics of Blown Sand and Desert Dunes." And in case you missed it in the Carnival of the Arid #1, Through the Sandglass' Michael Welland describes the fascinating life of Ralph Bagnold.
Posted by Mel at 9:56 PM
<urn:uuid:c8293268-f903-4da2-8d81-bf45a69f5a13>
3.0625
316
Personal Blog
Science & Tech.
47.461731
Simple explanation and facts about asteroids and meteoroids. The space between Mars and Jupiter is so great that early astronomers thought there might be a planet there. They did not find one, but instead they found small planet-like bodies called asteroids. Asteroids are solid, rocklike masses with irregular shapes, so their brightness changes as they rotate. The two largest asteroids discovered, Ceres and Pallas, are roughly spherical in shape; Ceres is about 950 kilometers in diameter and Pallas about 510 kilometers. Most other asteroids are far smaller, many less than one kilometer in diameter. Asteroids move around the sun in the same direction as the planets (west to east). Some asteroids travel in elongated orbits. Some pass near Earth. Scientists believe that asteroids are leftover materials from a planet in its early stage of formation. Also traveling in space are rock fragments called meteoroids. Some meteoroids are as large as boulders, while others are as tiny as particles of sand. Some meteoroids travel in groups, others travel alone. Meteoroids enter Earth's atmosphere at speeds ranging from 16 to 71 kilometers per second. Friction between the meteoroids and particles in the atmosphere causes some meteoroids to light up. A meteor is a meteoroid that passes through Earth's atmosphere and produces a bright flash in the sky. On a dark night, when the sky is clear, one can see around 5 to 15 meteors in an hour. Meteors are called "shooting stars" because they look like stars falling from the sky. As a comet travels near the sun, some parts of the comet break off, producing small particles called micrometeoroids. When the path of these micrometeoroids crosses Earth's path and the particles enter Earth's atmosphere, large numbers of meteors are seen. This phenomenon is called a meteor shower. Meteor showers happen several times a year. They are named after the constellation from which they appear to come. Among the best known meteor showers and their peak dates are the following: the Lyrids (April 21/22), the Eta Aquarids (May 5/6), the Perseids (August 11/12), the Orionids (October 21/22), the Taurids (southern, November 5/6; northern, November 11/12), the Leonids (November 17/19) and the Geminids (December 13/14). All meteor showers originate from comets except the Geminids, which originate from an asteroid. Some meteors do not burn out as they travel through Earth's atmosphere. They are able to reach Earth's surface, or the ground. These objects are called meteorites. There are three kinds of meteorites. The first kind, and the most plentiful, are stones, which look like dark igneous rocks. These are made of silicates and iron. The largest of the known stone meteorites weighs about a metric ton. The second kind of meteorites are called irons because they consist mostly of iron and nickel. Irons are denser than stones. One of the largest known iron meteorites weighs 34 metric tons. The third kind of meteorites are stony-irons, which are quite rare. They are a mixture of stone and iron. The Antarctic ice cap is the most abundant source of meteorites.
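A worked example (mine, not from the article) shows why even sand-grain meteoroids flare so brightly at the entry speeds quoted above: kinetic energy grows with the square of speed.

# Kinetic energy of a 1-gram meteoroid at the quoted entry speeds.
mass_kg = 1e-3
for speed_km_s in (16, 71):
    v = speed_km_s * 1000.0          # convert to meters per second
    energy_j = 0.5 * mass_kg * v**2  # E = (1/2) m v^2
    print(f"{speed_km_s} km/s -> {energy_j:,.0f} J")
# 16 km/s -> 128,000 J; 71 km/s -> ~2,500,000 J, roughly the chemical
# energy in 50 g of gasoline, released as heat and light within seconds.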
<urn:uuid:6145c011-1e2c-4641-818c-c56cccc47a09>
4.1875
675
Knowledge Article
Science & Tech.
53.3085
MAKING MINERAL DEPOSITS
The following activities demonstrate the formation of mineral deposits on the surface and in the interior of the earth.
a. Salt deposits can be found in various locations on the earth's surface. To show how they may be formed, put a small amount of salt into a jar of water and shake it. What happens to the salt? (It seems to disappear as it dissolves.) Continue adding spoonfuls of salt until some salt remains in the bottom of the jar even after a thorough shaking. Pour the water into a shallow pan, and set the pan in a sunny place to let the water evaporate naturally. Describe the results. Where on the earth's surface have similar salt deposits been found? The Great Salt Lake in Utah is not the only salt deposit on the surface of the earth. Salts in the earth are continually being dissolved by water. Where does the water usually take the salts? Describe the ocean's taste and relate your description to this activity.
b. Stalactites and stalagmites build up in caves from the dissolving and depositing of minerals by water. To illustrate the formation of stalactites, which hang down from the ceilings of some caves, and stalagmites, which stand upright on the floors of some caves (usually under stalactites), dissolve as much Epsom salts as you can in a container of water. Fill two smaller containers with this solution. Set the containers on paper toweling, then put one end of a thick string in each container, and suspend the string between them. Candle-wick is good material for this. Let the string remain in place for several days. You will see that water soaks the entire string and drips off of the string at the low point between the containers. You will also see that deposits form where the water drips -- both from the string and on the toweling. The dissolved minerals are carried in solution to the drip point, and when the water in the solution evaporates, the mineral -- in this case, Epsom salts -- is left behind. You will realize that this formation is similar to the slow formation of stalactites and stalagmites in caves.
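For planning activity (a), a rough saturation estimate helps decide how much salt to have on hand. The solubility figure below is a standard textbook value (about 36 g of table salt per 100 mL of water at room temperature), not something from the original activity sheet:

# Approximate salt needed to saturate a jar of water at room temperature.
SOLUBILITY_G_PER_ML = 36.0 / 100.0   # ~36 g NaCl per 100 mL water

def grams_to_saturate(volume_ml):
    """Salt (grams) at which crystals start persisting in the jar."""
    return volume_ml * SOLUBILITY_G_PER_ML

for jar_ml in (250, 500, 1000):
    print(f"{jar_ml} mL jar: ~{grams_to_saturate(jar_ml):.0f} g of salt")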
<urn:uuid:67781a79-d0ee-4d1a-9c8c-6af4f5a9d372>
3.625
552
Tutorial
Science & Tech.
58.847029
This week researchers announced that a solar storm is coming: the most intense solar maximum in fifty years. The prediction comes from a team led by Mausumi Dikpati of the National Center for Atmospheric Research (NCAR). "The next sunspot cycle will be 30% to 50% stronger than the previous one," she says. If correct, the years ahead could produce a burst of solar activity second only to the historic Solar Max of 1958. Solar Cycle 24 will be the biggest in modern history. Dikpati's forecast puts Solar Max at 2012. Solar physicist David Hathaway believes it will arrive sooner, in 2010 or 2011. Translation: it's going to get hotter on earth during this period. At the same time, researchers are showing evidence that the next solar cycle, 25, is likely to be very weak. Translation: it will get cooler on earth starting around 2017-2018 and beyond.
<urn:uuid:e0a2d68a-895f-472d-ab69-7e6754341297>
3.015625
187
Personal Blog
Science & Tech.
63.758879
CAMEL Climate Change Education
A free, comprehensive, interdisciplinary, multimedia resource for educators.
EDUCATOR TOOLS > Continuing Conversations (180 Blog sites)
American Indian & Indigenous People
Climate & Agriculture
Climate & Food Security
Climate Change & Disasters
Climate Change & Security
Sea Level Rise/Coastal Adaptation
TED Talks Climate Series
Misconceptions & Skeptics
Climate Change FAQ's
How Do We Know?
CONTENT BY PARTNERS >
Livermore National Laboratory
Public Broadcasting System PBS
UCAR – COMET
Will Steger Foundation
Consensus is an agreement, opinion, or position reached by a group as a whole. Teaching materials on climate change and consensus. Vetted articles on climate change and consensus.
About the book: This website serves as an accompanying resource to the book published by Island Press in November 2009 titled Climate Solutions Consensus edited by...
Polls show that between one-third and one-half of Americans still believe that there is "no solid" evidence of global warming, or that if warming is...
The producers' primary goal was to make a fun, challenging game. At times it was necessary to strike a compromise between strict...
<urn:uuid:17daf35c-7411-4205-ad88-5778c88f7de5>
2.703125
366
Content Listing
Science & Tech.
41.202201
This is a PowerPoint presentation for class lecture and discussion lasting one hour to one and one-half hours, depending upon the instructor and depth of coverage. As an introduction to crude oil and the environmental impacts of crude oil spills, students will learn about the chemical composition of crude oil, basic terminology used in the oil industry, the mechanics of obtaining, transporting, and refining crude oil, and oil spills, with a focus on the Deepwater Horizon Disaster. Coverage of marine environmental impacts is addressed through the Deepwater Horizon Disaster, focusing on the effects of oil on wildlife, fisheries, and estuarine and habitat disruption. The lecture is completed with a discussion of oil spill containment, recovery and remediation. Instructors present this lecture in a format that encourages critical thinking. By providing students with a basic understanding of the chemical components of crude oil, the students are then challenged to use this information to suggest the impact of oil on the environment and habitat disruption, on oil weathering, and on the separation of crude oil components in the water column and on the bottom, i.e. tar balls.
CONTEXT FOR USE
Students should already be familiar with the principles of wildlife habitat, food chains, trophic pyramids, and marine and estuarine environments. Students may have a familiarity with the principles of energy sources and use, or this lecture can be tailored as a lead-in for future material on energy and society. This lecture was designed for freshman/sophomore college courses in environmental sciences, but may be adapted for use at other levels as well.
ACTIVITY DESCRIPTION AND TEACHING MATERIALS
The PowerPoint presentation is designed for lecturing and triggering discussion among the students. The attached homework assignment is an example, which can easily be adapted for the setting and level of the students. This homework assignment was given to the students the week before this lecture material was covered. The PowerPoint should be provided to the students in advance of the lecture covering this material. Assessment can be accomplished by quizzes, by exam questions, or both.
REFERENCES AND RESOURCES
[Notes compiled from over 15 years of teaching environmental toxicology and environmental science courses.]
A PowerPoint presentation for class lecture and discussion that introduces the properties and chemical composition of crude oil and the environmental impacts of crude oil spills, with a focus on the Deepwater Horizon Disaster. PowerPoint is designed for freshman/sophomore college level introduction to environmental sciences, but is adaptable to other educational levels.
<urn:uuid:6e3f5dfd-715f-4974-9c3c-f49f70a22df8>
3.875
506
Tutorial
Science & Tech.
20.619093
Understanding Past Weather to Improve Future Predictions
New assimilation method using old weather data is feasible technique for improving forecast models
Climate change on Earth and its possible causes have become major "hot topics" of scientific research over the past few decades. Recently, climate researchers have come to realize that in order to better understand changes in our global climate, we must improve our understanding of day-to-day weather variations. Many important things could be learned from analyzing our past climate over a period of at least 100 years. Unfortunately, there is not a lot of accurate global climate data beyond the past 60 years. So what's a researcher to do? One solution is to compile the scant observations that are available and combine them using physical relationships to produce a more extensive and improved dataset. By analyzing these data, it might be possible to figure out the patterns in the atmosphere that created past weather conditions. Although this idea initially had skeptics, researchers Gilbert Compo, Jeffrey Whitaker, and Prashant Sardeshmukh of ESRL and CIRES set out to prove that it might work. Why? With the availability of more accurate data, better computer simulations of past climate and weather can be made. This information could be used to study how and why our climate has changed. Or it could be used to examine "climate catastrophes," such as the Dust Bowl of the 1930s, figure out why they happened, and determine whether they are likely to happen more frequently as the climate changes. The idea of studying the past weather from the historical observations seems like a logical approach, so why hasn't it been done before? First of all, the needed technology didn't exist. Luckily, our modern weather prediction tools have advanced enough to make this a possibility. Second, we did not have the daily observational data in a digital format.
The Quest for Data
There exists an enormous amount of weather observations describing the atmospheric circulation for the past 100 years, collected by a variety of sources, from meteorologists and military personnel to volunteer observers and ships' crews. Until recently, however, these data were only available in paper manuscripts. Researchers had to rely on hand-drawn weather maps to study the past weather. Although these maps contain many errors, the value of this information was recognized, and during the 1970s extensive efforts put these maps into a digital format. In recent years, organizations such as NOAA's Climate Database Modernization Program and the National Center for Atmospheric Research have formed partnerships with private industry and scientific organizations around the world to digitize the original manuscript weather observations and make them available on the Web. Millions of images are available, but believe it or not, there is still a long way to go. Since the weather we experience occurs in the troposphere — the area of the atmosphere from the Earth's surface to about 11 km (6.8 miles) up in the air — Compo, Whitaker, and Sardeshmukh decided to focus on surface air pressure and sea level pressure observations over the past 100 years. By studying recent pressure data, as shown in their article in the February 2006 issue of the Bulletin of the American Meteorological Society, it is possible to get a snapshot of the other variables, such as the winds and temperatures, throughout the troposphere.
"This was a bit unexpected," said Compo, "but it means that we can use the surface pressure measurement to get a very good picture of the weather back to the 19th century. Climate change may alter a region's weather and its dominant weather patterns. We need to know if we can understand and simulate the variations in weather and weather patterns over the past 100 years to have confidence in our projections of changes in the future." What Compo, Whitaker, and Sardeshmukh have shown is that the international community has digitized enough pressure data over the Northern and Southern Hemispheres that they can recreate the weather maps over the past 100 years. Filtering Out Errors Raw observations in hand, the next step is to combine the historical observations based on physical principles while taking into account that the data have errors and discrepancies, only some of which can be corrected. Correctable errors in observed weather data can come from a variety of sources, such as recording the wrong elevation where the observations were taken, or errors in transcription, such as an observation of 1001 millibars of atmospheric pressure being recorded as 1010. Other discrepancies include the difference two barometers might have when separated by several kilometers. The researchers apply a numerical weather prediction model and a Kalman filter to the data to combine the imperfect pressure observations in a process called data assimilation. The filter step, named after Rudolph E. Kalman, provides a mathematical way to create the weather map for a particular time by blending all of the observations with a numerical weather model. This blending takes into account the meteorological situation and the error in the observations. In the case of creating the weather maps for a 100 years ago, very few observations are available and those observations are only at the Earth's surface, but the data assimilation procedure creates a complete map for the entire troposphere. "What we have shown is that the map for the entire troposphere is very good even though we have only used the surface pressure observations," says Compo. The filter can change continuously based on the location on the globe, the number of observations, or the meteorology. The filtering procedure currently takes one day to process one month worth of global data. The resulting weather maps are then given to the numerical weather prediction model to make a forecast of the past weather. The quality of these forecasts of the past will help us know if our weather maps are of a sufficient quality to be used for scientific study. An Interesting Data Subset — The Admiral Byrd Expedition During the summer of 2006, PSD provided internship opportunities to several NOAA Hollings scholarship recipients. Two of these college-age students worked with Compo focusing on a subset of historical data from the Antarctic. In 1928 Admiral Richard E. Byrd, a great explorer, scientist and aviation pioneer, set out on an expedition of Antarctica. Byrd and his crew took hourly surface measurements throughout 1929 at their base in Antarctica; "Little America." This is an important set of observations since data in the Southern Hemisphere during this time is very sparse. The only weather observations recorded near this area were taken from passing ships, and even then only taken every 6 hours. The students digitized the hourly measurements using optical character recognition. The data were then reviewed. 
The students blended the Byrd observations with any other available observations using the Kalman filter; the results are yet to be examined. During their study of Byrd's adventures, the students discovered that some of the revamped data may help explain the atmospheric conditions that caused difficulties in some of Byrd's explorations. This additional data will also add to the completeness of the global data set. Proving the skeptics wrong, the new and advanced data assimilation method studied by Compo, Whitaker, and Sardeshmukh has demonstrated that the idea of compiling old weather data and blending them is a feasible technique worthy of future investment. After creating the weather maps of the past 100 years, the researchers hope that better computer models can be developed based on this new historical data. The challenge will then be to understand why the weather of the past varied and whether we can predict how it may change in the 21st Century.
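To make the blending step concrete, here is a minimal one-dimensional sketch of the Kalman update at the heart of data assimilation. It is illustrative only: the numbers are invented, and the real system works with full model state vectors rather than a single pressure value.

    // Minimal 1-D Kalman update: blend a model forecast of surface
    // pressure with one imperfect barometer reading.
    public class KalmanBlend {
        public static void main(String[] args) {
            double forecast = 1008.0;  // model forecast (hPa), hypothetical
            double forecastVar = 4.0;  // forecast error variance
            double obs = 1004.5;       // barometer reading (hPa), hypothetical
            double obsVar = 1.0;       // observation error variance

            // The Kalman gain weights the observation by relative confidence.
            double gain = forecastVar / (forecastVar + obsVar);
            double analysis = forecast + gain * (obs - forecast);
            double analysisVar = (1.0 - gain) * forecastVar;

            System.out.printf("analysis = %.2f hPa (variance %.2f)%n",
                              analysis, analysisVar);
        }
    }

Because the observation here is four times more trustworthy than the forecast, the gain is 0.8 and the "analysis" lands much closer to the barometer reading than to the model's guess.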
<urn:uuid:88f8e2da-84c1-47cd-8bfc-35313401c2bb>
3.625
1,501
Knowledge Article
Science & Tech.
30.578166
java program simulating darts

I need to write a program that does this: You are required to write a simple program that simulates a game of darts. In this, two (or more) players take turns to "throw" three darts to reduce their score from 501 down to zero. Each dart thrown will be simulated by a pair of numbers: the first is the number scored (1-20, 25, 50), the second indicates a single, double or treble if one is possible (note that 25 or 50 are always singles, so the first number will dictate whether the second value is necessary). After three throws, the full score will be deducted from 501. The first player to zero wins. However, as with real darts, the final dart must be a double or bullseye (50), and must exactly clear the remaining score. If too much is scored, the player is bust for that turn, and must try again after the other players have had a turn.

On-screen, each turn should generate:
1. Each of the three throws' scores, with doubles and trebles included
2. The total score achieved in a turn, by summing all three
3. The remaining score, having deducted the total at each turn
4. An indication of 'bust' if the player's darts total more than the remaining score, or 'win' if the player has got out with a double or bullseye (note that if the player gets out on the first or second dart, this is a win; the remaining dart(s) do not need to be thrown).

Can anyone advise me on what structure to use? Or what algorithms would need to be included? Has anyone already written a program like this?
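Not a full solution, but one possible shape for it: a small Dart value type, a method that simulates one throw, and a turn method that applies the 501 bust/win rules. Treat this as a hedged sketch; the names and the random-throw model are my own choices, and it ignores finer rules of real darts (such as never leaving a score of 1).

    import java.util.Random;

    public class Darts501 {
        private static final Random RNG = new Random();

        // One dart: a number (1-20, 25, 50) and a multiplier (1, 2, 3).
        static final class Dart {
            final int number, multiplier;
            Dart(int number, int multiplier) {
                this.number = number;
                this.multiplier = multiplier;
            }
            int score() { return number * multiplier; }
            boolean isDoubleOrBull() { return multiplier == 2 || number == 50; }
            @Override public String toString() {
                String prefix = multiplier == 3 ? "treble "
                              : multiplier == 2 ? "double " : "";
                return prefix + number;
            }
        }

        // Simulate one random dart; 25 and 50 are always singles.
        static Dart throwDart() {
            int[] numbers = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
                             13, 14, 15, 16, 17, 18, 19, 20, 25, 50};
            int n = numbers[RNG.nextInt(numbers.length)];
            int multiplier = (n == 25 || n == 50) ? 1 : 1 + RNG.nextInt(3);
            return new Dart(n, multiplier);
        }

        // Play one turn (up to three darts); returns the new remaining score.
        static int playTurn(String name, int remaining) {
            int before = remaining;
            for (int i = 0; i < 3; i++) {
                Dart d = throwDart();
                System.out.println(name + " throws " + d + " scoring " + d.score());
                if (d.score() == remaining && d.isDoubleOrBull()) {
                    System.out.println(name + " wins!");
                    return 0;        // got out: remaining darts are not thrown
                }
                if (d.score() >= remaining) {
                    System.out.println(name + " is bust; score stays at " + before);
                    return before;   // overshoot, or exact but not a double: turn is void
                }
                remaining -= d.score();
            }
            System.out.println(name + ": turn total " + (before - remaining)
                               + ", remaining " + remaining);
            return remaining;
        }

        public static void main(String[] args) {
            String[] players = {"Player 1", "Player 2"};
            int[] scores = {501, 501};
            while (true) {
                for (int p = 0; p < players.length; p++) {
                    scores[p] = playTurn(players[p], scores[p]);
                    if (scores[p] == 0) return;   // first to zero wins
                }
            }
        }
    }

The main design choice is keeping the bust rule inside playTurn: the method remembers the score at the start of the turn and restores it on a bust, which matches the "try again after the other players have had a turn" requirement.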
<urn:uuid:0625578a-4597-49f0-973d-3887b4309293>
2.765625
377
Q&A Forum
Software Dev.
70.684353
From George H. Darwin's article "Tides," Encyclopaedia Britannica, IX Ed., Chicago, 1890. Note that in this graph, time increases to the left.

Tides. This figure shows the tidal record for two weeks (January 1-14, 1884) at Bombay. The tide was recorded on a cylindrical sheet that turned once every 24 hours. Each daily curve is labelled with its date. Some obvious features:

The yearly travel of the Earth around the Sun, the monthly travel of the Moon around the Earth, and the daily rotation of the Earth on its axis are the three main rhythms that govern the tides. (It is impossible to draw the Sun-Earth-Moon system to scale. To scale, the Sun is a basketball in the center of a football field, the Earth is the eraser on a pencil held on the sidelines, and the Moon is the tip of a straightened-out paper clip stuck in the eraser.)

In fact the "tidal force" depends on the gradient of the gravitational field, because (in the case of the lunar tide) what creates the bulge in the Earth's oceans on the side facing the Moon is the fact that the surface of the Earth is closer to the Moon than the center, and is therefore attracted more strongly. This also explains the bulge on the opposite side: there the center is closer than the surface. In general a variable gravitational field will stretch bodies along its gradient; our ocean tides are a special case of this effect.

Why the gradient? The gravitational force exerted by a body of mass m on a piece of matter at a distance of L centimeters gives it an acceleration Gm/L² cm/sec², where G = 6.67×10⁻⁸ dyne·cm²/g² is Newton's gravitational constant. Suppose another piece of matter is k centimeters farther away in the same direction. Then it undergoes a smaller acceleration: Gm/(L+k)² cm/sec². The difference between these two accelerations can be ascribed to a "tidal force" pulling the two pieces apart. The estimation of this difference is a classic example of the linear approximation f(x+k) − f(x) ≈ k f′(x). It yields −2Gmk/L³ cm/sec².

© Copyright 2001, American Mathematical Society
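In LaTeX form, the whole derivation is two lines (writing a(L) for the acceleration at distance L, following the article's notation):

    \[
      a(L) = \frac{Gm}{L^{2}}, \qquad
      a(L+k) - a(L) \;\approx\; k\,\frac{da}{dL}
        = k \left(-\frac{2Gm}{L^{3}}\right)
        = -\frac{2Gmk}{L^{3}} \;\text{cm/sec}^{2}.
    \]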
<urn:uuid:64461ae8-04e2-449b-8ade-79c6429f91f7>
4.4375
493
Academic Writing
Science & Tech.
62.686543
Look up monthly U.S., Statewide, Divisional, and Regional Temperature, Precipitation, Degree Days, and Palmer (Drought) rankings for 1-12, 18, 24, 36, 48, 60-month, and Year-to-Date time periods. Data and statistics begin in January 1895. Please note, Degree Days are not available for Agricultural Belts.

Contiguous U.S. Temperature Rankings, October 1936 (out of 119 years)
More information on Climatological Rankings

Aug - Oct 1936:
100th Coldest (1976) | Coldest since: 1935
19th Warmest (1947) | Warmest since: 1933
<urn:uuid:57a6abd1-886b-4547-8d6f-70be897570c4>
2.84375
144
Structured Data
Science & Tech.
60.488056
There is an interesting anecdote which claims that the amphipod crustacean genus Phronima served as the inspiration for the alien queen first seen in James Cameron’s “Aliens.” The story seems to originate from David Attenborough’s narration in the “Blue Planet” documentary (skip to 3:25 in this video for the scene in question). Some people around the web rebut this, stating that the original alien design was based on a painting by artist H. R. Giger. This seems to be the case as far as the original “soldier” alien morph seen in “Alien” (1979) is concerned. It is much more likely that Phronima actually influenced the design of the queen alien morph, seen in “Aliens” (1986). I’ve tried to contact someone at the special effects company, Stan Winston Studios, but they seem to be hard to get a hold of if you are not the producer of a multi-hundred-million-dollar blockbuster.

Instead, let’s talk a little about Phronima, which is an awesome animal regardless of whether or not it was the inspiration for the alien queen. Phronima is a planktonic amphipod crustacean that lives in the pelagic zone (existing in the water column, not on the bottom) of the deep sea. There are two very distinct characteristics of these animals that may have influenced the design of the alien queen. I will talk about each of them in some detail.

The most immediately recognizable feature of Phronima is the broad crest atop its head. This morphological feature is actually one set of the animal’s eyes, and represents a classic deep-sea visual adaptation. These tubular eyes point upwards, towards the limited down-welling irradiance. The huge facets of these eyes collect whatever light makes it down to that depth, silhouetting potential prey above in the water column. The light travels down long wave-guides to the retina, which is near the bottom of the head (the retina appears black in the photos). This tubular eye structure maximizes photon capture and is seen in a variety of other deep-sea crustaceans, squid, and fish. The second set of eyes protrude from the sides of Phronima’s head, and are more akin to other crustacean eyes. They probably look for bioluminescence to the sides of and below the animal. It is thought that these eyes are used to spot the animals that Phronima likes to make its homes and nurseries out of…

A grisly feature of Phronima that is reflected in the alien queen is what I would call necro-parasitism. Females hunt down free-floating tunicates. They devour all the living tissue from the unlucky animal, leaving a barrel-shaped house to live inside and drive around the deep sea in search of more prey (see below). Eventually, the female Phronima lays its eggs along the inside walls of the tunicate barrel, where the offspring grow and hatch.

It is possible that the alien queen was derived completely independently of Phronima; however, I find the comparison shot below, taken from a 1981 paper and from “Aliens” respectively, to be strongly suggestive of a connection.

- Land, M.F., 1981. Optics of the eyes of Phronima and other deep-sea amphipods. Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology, 145(2), 209-226.
- Young, R.E., Function of the Dimorphic Eyes in the Midwater Squid Histioteuthis dofleini. Available at: http://scholarspace.manoa.hawaii.edu/handle/10125/953 [Accessed December 11, 2009].
<urn:uuid:fc479b79-072d-432f-91e3-c82641000b07>
2.71875
809
Personal Blog
Science & Tech.
50.915169
This week we are in a deep freeze in the Montreal area, so it seems somewhat fitting to discuss Arctic spiders. I’ve discussed the life-history of Arctic wolf spiders (Lycosidae) before, specifically in the context of high densities of wolf spiders on the tundra. Much of this work was done with my former PhD student Joseph Bowden. The latest paper from his work was published last autumn, and was titled ‘Egg sac parasitism of Arctic wolf spiders (Araneae: Lycosidae) from northwestern North America‘. In this work we document the rates of egg sac parasitism by Ichneumonidae wasps in the genus Gelis. These wasps are fascinating, and we have found them to be very common on the tundra. There are often multiple wasps in a single egg sac, and as is typical with Gelis, they leave nothing behind: all eggs within an egg sac are consumed. Once fully developed, the adult wasps pop out of the egg sac; the Gelis adults we encountered had both winged forms and wingless females, the latter superficially resembling ants.

The rates of parasitism of Pardosa egg sacs (by Gelis) were, at some sites, extremely high. In some cases over 50% of the wolf spider egg sacs were parasitized. Stated another way, half of all the females encountered with egg sacs had zero fecundity because the female was carrying around wasps within the egg sac instead of spider eggs. It’s quite interesting to think about these wingless Gelis females: after emerging from egg sacs, they end up wandering around the tundra in search of hosts. Spiders with egg sacs must be encountered frequently enough for the wasps to grab on to a passing wolf spider in order to parasitize the egg sac. Recall, densities of wolf spiders can be very high in the Arctic (4,000 per hectare, at least). Hmmm…. this is all starting to fit… high densities of wolf spiders support high rates of egg parasitism, and these wasps can ‘afford’ to be wingless since their hosts are frequently encountered: an interesting feedback loop! We can also speculate about large-scale gradients in diversity – many Ichneumonidae show high diversity in northern regions. Within Gelis, it’s a good bet that they will find many suitable spider hosts in these environments.

So, how extreme are these rates of egg parasitism? Looking at some of the literature, there are certainly a number of papers about wasps that parasitize spider egg sacs. Cobb & Cobb (2004) studied two Pardosa species in Idaho, and recorded an egg parasitism rate of about 15% (by Gelis wasps and wasps in the genus Baeus [Sceleonidae]). Van Baarlen et al. (1994) studied egg parasitism in European Linyphiidae spiders, and their maximum rates of parasitism were about 30%. Finch (2005) did a detailed study of four spider species (non-Lycosidae), and rates of egg parasitism varied between 5% up to as high as 60% in an Agroeca species. Our documented parasitism rates for Arctic wolf spiders are certainly quite high (for Lycosidae), but not out of the range of other published studies for non-Lycosidae. I do wonder whether we will continue to find high egg parasitism rates if more species were examined in detail – certainly a fertile area of study. Related to this, what are the population-level consequences of this interaction? What is the relationship between spider densities and parasitism rates? Although Joe and I did try to speculate on this, our data are preliminary – again, a key area for future research. In the Arctic context, we will continue to uncover fascinating food-web dynamics.
Our research group has already been thinking seriously about this – Crystal Ernst has written a nice post about the idea of an ‘inverse trophic web’ (i.e., predator-dominated) in the Arctic, and a fair amount of my future research will pursue this avenue of research. Piqued your interest…? Why not think about graduate school in my lab, and study Arctic arthropod biodiversity?

Bowden, J., & Buddle, C. (2012). Egg sac parasitism of Arctic wolf spiders (Araneae: Lycosidae) from northwestern North America. Journal of Arachnology, 40(3), 348-350. DOI: 10.1636/P11-50.1

Cobb, L.M., & Cobb, V.A. (2004). Occurrence of parasitoid wasps, Baeus sp. and Gelis sp., in the egg sacs of the wolf spiders Pardosa moesta and Pardosa sternalis (Araneae: Lycosidae) in southeastern Idaho. Canadian Field Naturalist, 118(1): 122-123.

Van Baarlen, P., Sunderland, K., & Topping, C. (1994). Eggsac parasitism of money spiders (Araneae, Linyphiidae) in cereals, with a simple method for estimating percentage parasitism of spp. eggsacs by Hymenoptera. Journal of Applied Entomology, 118(1-5), 217-223. DOI: 10.1111/j.1439-0418.1994.tb00797.x

Finch, O. (2005). The parasitoid complex and parasitoid-induced mortality of spiders (Araneae) in a Central European woodland. Journal of Natural History, 39(25), 2339-2354. DOI: 10.1080/00222930500101720
<urn:uuid:24e7aa92-fcea-4362-96e5-036e51366690>
3.015625
1,187
Personal Blog
Science & Tech.
52.730717
© 1996, Astronomical Society of the Pacific, 390 Ashton Avenue, San Francisco, CA 94112.

The awesome spectacle of a brilliant comet, with its long ghostly tail streaming behind it, is a rare event. Usually one appears about once a decade. This past March, many people thrilled to Comet Hyakutake, which passed within a mere 0.1 astronomical units of Earth (15 million kilometers, 9.5 million miles) and became a stunning sight for several nights. But nature is favoring us again: another potentially great comet, Comet Hale-Bopp, is poised to enter our night sky in early 1997.

Creature of the night. Late in the evening of Jan. 30, 1996, in Japan's Kyushu region, Yuji Hyakutake left his home in Kagoshima and drove to an observing site far from city lights. There, in the sky above him, was a new comet soon to bear his name. Comet Hyakutake became the most active comet in the past 400 years to come so close to Earth. Photo courtesy of Carter Roberts of Berkeley, Calif. © 1996 Carter Roberts.

Comet Hale-Bopp stands to be among the most intensely studied comets in history. Only one other comet, Comet Halley, was seen so far from the Sun, giving scientists an opportunity to watch it as it slowly warmed up and sprouted a tail. Hale-Bopp appears to be much larger than Halley: about 40 kilometers (25 miles) in diameter, four times the width of Halley. And since Halley's appearance in 1986, scientists have developed new techniques and instruments, not least the Hubble Space Telescope, to scrutinize comets.

Comets are the Rip Van Winkles of the solar system -- mini-worlds that have changed little during their 4.5-billion-year nap far from the Sun. By reading their tales of the distant past, scientists learn about how our solar system came to be. At the same time, comets provide science educators a visual treat to entice their students and the general public.

Contents:
- The Discovery of Hale-Bopp
- Bright Becomes Brighter
- New Age of Science?
- Activity: A Viewgraph
- Comet Astrophotography for Teachers and Students
- A Comets Bibliography

Ancient History

By studying objects such as comets, astronomers have pieced together the story of the creation of the solar system. A little over 4.5 billion years ago, deep in interstellar space, a cloud of gas and dust began to collapse upon itself. As it contracted under its own weight, it started to spin. Some of the material in the cloud settled onto a large central object, and some of it landed on the rotating disc that surrounded that central object. As time went on, so much material landed on the central object that it became hot enough to ignite nuclear fusion. A star, our Sun, was born.

Out in the disc, the material was creating not a star, but things a bit more prosaic: giant dust balls. The dust grains and other materials from the cloud collided and clumped together. Over a few tens to hundreds of millions of years, the clumps began to form small objects, a few miles across, called planetesimals. Over further tens to hundreds of millions of years, these planetesimals collided with each other and stuck together to form still-larger objects. Eventually, the planets arose. Most of the planetesimals were used up in building the planets, but a few were left over. They are still around today in more or less their original form. Some of these leftover planetesimals are composed of rock and metal: the asteroids. Others consist of easily vaporized materials such as water, carbon monoxide, and other gases in a frozen state: the comets. Once, comets inhabited the entire solar system.
But those near the Sun quickly evaporated into nothingness. Only those that orbited the Sun in the cold, distant reaches of the solar system remained intact. Many of these continue to orbit the Sun in a huge disc -- a remnant of the original planet-forming disc -- beyond Neptune and Pluto: the Kuiper Belt. At its far end, this disc fans out into the Oort Cloud, an enormous sphere of comets which enshrouds the solar system 10,000 astronomical units from the Sun -- a significant fraction of the distance to the nearest star.

Most of those comets remain happily exiled in frigid seclusion. But some are thrust into the realm of the planets. The Sun and the other bodies of the solar system travel through the Galaxy as a unit. Occasionally the solar system passes so close to other stars that it feels their gravity. These gravitational interactions kick some of the comets from the Oort Cloud or Kuiper Belt into the inner solar system. As these comets make their first swing past the Sun, various planets may yank them into new orbits. Some comets are unceremoniously ejected from the solar system altogether, while others are pulled into short-period orbits of a few thousand, or a few hundred, years. Eventually, some of these are pulled -- usually by Jupiter, the most massive planet -- into very short-period orbits, usually 6 to 8 years long. As these comets slowly evaporate away, others from the outer solar system come in to take their place.

All hail Hale-Bopp. Conrad Jung of Oakland, Calif. took these photos (June 17, July 15, and August 12, 1996) from Fremont Peak in northern California. He used an 800 mm telephoto lens, Fujicolor Super G 800 film, and a 30-minute exposure at f/5.6. In the July photo, the bunch of stars to the right of the comet is the star cluster NGC 6649. Photos courtesy of Conrad Jung.
<urn:uuid:ffcb3fef-71e1-4bf3-aff0-fb7bb7837ee0>
3.890625
1,239
Knowledge Article
Science & Tech.
55.01101
A dynamically loadable library is a dynamic-link library (DLL) on Win32, and an assembly (also a DLL) on the .NET platform. It is a collection of routines that can be called by applications and by other DLLs or shared objects. Like units, dynamically loadable libraries contain sharable code or resources. But this type of library is a separately compiled executable that is linked at runtime to the programs that use it.

Delphi programs can call DLLs and assemblies written in other languages, and applications written in other languages can call DLLs or assemblies written in Delphi. You can call operating system routines directly, but they are not linked to your application until runtime. This means that the library need not be present when you compile your program. It also means that there is no compile-time validation of attempts to import a routine.

Before you can call routines defined in a DLL or assembly, you must import them. This can be done in two ways: by declaring an external procedure or function, or by direct calls to the operating system. Whichever method you use, the routines are not linked to your application until runtime. The Delphi language does not support importing of variables from DLLs or assemblies.

The simplest way to import a procedure or function is to declare it using the external directive. For example,

  procedure DoSomething; external 'MYLIB.DLL';

If you include this declaration in a program, MYLIB.DLL is loaded once, when the program starts. Throughout execution of the program, the identifier DoSomething always refers to the same entry point in the same shared library.

Declarations of imported routines can be placed directly in the program or unit where they are called. To simplify maintenance, however, you can collect external declarations into a separate "import unit" that also contains any constants and types required for interfacing with the library. Other modules that use the import unit can call any routines declared in it.

You can access routines in a library through direct calls to Win32 APIs, including LoadLibrary, FreeLibrary, and GetProcAddress. These functions are declared in Windows.pas. On Linux, they are implemented for compatibility in SysUtils.pas; the actual Linux OS routines are dlopen, dlclose, and dlsym (all declared in libc; see the man pages for more information). In this case, use procedural-type variables to reference the imported routines:

  uses Windows, ...;

  type
    TTimeRec = record
      Second: Integer;
      Minute: Integer;
      Hour: Integer;
    end;
    TGetTime = procedure(var Time: TTimeRec);
    THandle = Integer;

  var
    Time: TTimeRec;
    Handle: THandle;
    GetTime: TGetTime;
    ...

  begin
    Handle := LoadLibrary('libraryname');
    if Handle <> 0 then
    begin
      @GetTime := GetProcAddress(Handle, 'GetTime');
      if @GetTime <> nil then
      begin
        GetTime(Time);
        with Time do
          WriteLn('The time is ', Hour, ':', Minute, ':', Second);
      end;
      FreeLibrary(Handle);
    end;
  end;

When you import routines this way, the library is not loaded until the code containing the call to LoadLibrary executes. The library is later unloaded by the call to FreeLibrary. This allows you to conserve memory and to run your program even when some of the libraries it uses are not present.

Copyright(C) 2008 CodeGear(TM). All Rights Reserved.
<urn:uuid:b555df5c-6a1c-4e36-8666-4822b2556d87>
2.96875
744
Documentation
Software Dev.
37.900332
At least 65 species of native freshwater mussels are found in Missouri waters. These humble shellfish once paved the bottom of rivers in incredible numbers, filtering the water and providing habitat and food for other animals. Mussel populations, however, have crashed in a relatively short time, raising alarms about the health of Missouri's streams and rivers. Of nearly 300 North American species, 38 are presumed to be extinct, and 77 others are critically imperiled. In Missouri, these unique creatures are among our most endangered freshwater wildlife. It's not surprising that mussels are in trouble. Mussels can be harmed by just about any of the many problems that affect our rivers. Adults live for decades on the bottom of the river. They can't move far to escape. They become victims if their part of the stream dries up in a drought, gets drowned by a reservoir, or is cut off by a channelization project. Mussels can also be uprooted by streambed erosion or buried by silt. They are at least as sensitive to water pollution as fish are, and they may be even more affected by pollutants in the bottom sediments in which they live. As vulnerable as adult mussels are, their reproduction seems to be most at risk. Our native mussels cannot reproduce without the help of native fish. Female mussels produce thousands of tiny larvae called glochidia. Each is smaller than the head of a pin. These larval mussels must attach to the gills or skin of particular kinds of fish, where they remain attached for a few days or weeks. They grow very little during this time, but they complete their development and get a free ride to new habitat before leaving the fish. Each species of mussel needs particular species of native fish to reproduce. Non-native fish, such as trout and carp, won't do. Biologists suspect that reduced populations of native fish are one of the reasons mussel populations have declined. Concern for native mussels is not new. A century ago, mussels were harvested in huge numbers in Missouri and other parts of the Midwest. Their shells were the basis of a multimillion-dollar, button-manufacturing industry. Two prominent University of Missouri biologists, George Lefevre and Winterton Curtis, studied the biology of mussels from about 1906 to 1914 and concluded that overharvest was decimating mussel populations. Recognizing that natural reproduction was insufficient to maintain populations, Lefevre and Curtis devised methods
<urn:uuid:ea10f5a8-7405-4577-9dcf-04c50b0c0038>
4.0625
505
Knowledge Article
Science & Tech.
42.2202
S1) Describe, with the aid of a drawing, the main parts of a spiral galaxy like the Milky Way. Where are gas and dust found? Where are globular clusters found? What kind of orbits do halo stars and globular clusters have? What kind of orbits do disk stars have?

S2) In what kinds of gas clouds do you find ionized atoms? In what kinds of gas clouds do you find atoms that are not ionized? In what kinds of gas clouds do you find molecules? Which of these is hottest, and which is coolest?

Davison E. Soper, Institute of Theoretical Science, University of Oregon, Eugene OR 97403 USA firstname.lastname@example.org
<urn:uuid:708dab61-5534-441b-977e-00c412220720>
3.03125
153
Q&A Forum
Science & Tech.
69.403007
This recent recipient of a “Best of What’s New” award from “Popular Science” magazine was not only an innovation built to answer queries about the red planet Mars, but also a hope for many ignited minds to find life in outer space. Phoenix did its job beautifully and completed its programmed tasks before dying on the Martian sands. The only setback to this mission was that the craft could not come back to Earth with its collected soil samples.

The mission’s principal investigator was Peter Smith of the University of Arizona, with project manager Barry Goldstein at NASA’s Jet Propulsion Laboratory in Pasadena, Calif., a development partnership at Lockheed Martin Space Systems, Denver, and many international contributions. The entire project was worth 520 million dollars.

Launched on August 4, 2007, Phoenix landed on May 25, 2008, farther north than any previous spacecraft to land on the Martian surface. It then scooped up the soil, dug up the land, studied its characteristics, found Martian ice (which confirmed the earlier detection by Mars Odyssey in 2002), and returned more than 25,000 pictures, from sweeping vistas down to the near-atomic scale, using the first atomic force microscope ever used outside Earth.

One of its tasks was to study whether the Martian arctic environment has ever been favourable for microbes. Its findings support the hope that Mars was once habitable and possibly supported life, as noted by Doug McCuistion, director of NASA’s Mars Exploration Programme. These findings include excavating soil above the ice table, revealing at least two distinct types of ice deposits, and providing data on temperature, pressure, humidity and wind. Findings also include small concentrations of salts that could be nutrients for life, the discovery of perchlorate salt, which has implications for ice and soil properties, and the finding of calcium carbonate, a marker of the effects of liquid water. The Mars rovers Spirit and Opportunity (two earlier craft on Mars) found sulphates around their landing sites in the planet’s equatorial region, but Phoenix could not find sulphate rocks. Scientists have concluded, though, that today the surface is much too cold for water to exist in liquid form.

Phoenix’s abrupt end came due to worsening weather conditions on the Martian surface. Mission engineers received a last signal from the lander on November 2. In addition to shorter daylight, Phoenix had encountered a dustier sky, more clouds and colder temperatures as the northern Martian summer approached autumn. As anticipated, the seasonal decline in sunshine at the robot’s arctic landing site left the solar arrays unable to collect the power necessary to charge the batteries that operate the lander’s instruments. As a result, Phoenix could not be programmed to come back to Earth with the collected samples.

Even with these drawbacks, Phoenix provided a lot of valuable information. One thing is certain, though: it will not be possible to send a manned mission in the coming years. But Phoenix has given us hope and a reason to believe in life in outer space. It is a remarkable achievement for mankind. Maybe a next mission to Mars can be to find life which never existed on Earth! Let us keep our imagination up, for it is imagination and questions that lead to innovation and hence to more understanding.

[Image Source: http://flickr.com/photos/marctonysmith/2920694052/]
<urn:uuid:71ae5c46-c73f-425d-8c5d-270b0ab36254>
3.71875
694
Nonfiction Writing
Science & Tech.
39.63026
The Dimension of a Manifold

One thing that may seem clear at first blush about manifolds actually takes a little thought to be sure about. Specifically, our definition says that each point p is contained in some coordinate patch U, which comes with a coordinate map φ: U → ℝⁿ. But does the n here have to be the same for all points p?

Well, let's start by considering what happens if two coordinate patches intersect each other. Let U₁ and U₂ be open subsets with U₁ ∩ U₂ not empty, and with coordinate maps φ₁: U₁ → ℝ^{n₁} and φ₂: U₂ → ℝ^{n₂}. Now we can restrict our coordinate patches to the intersection, getting V = U₁ ∩ U₂ together with the restrictions of φ₁ and φ₂. These are two different coordinate patches with two different coordinate maps on the same open subset V.

So what? So now we can use φ₁ in reverse to lift up from ℝ^{n₁} into V, and then use φ₂ forward to drop down into ℝ^{n₂}. This composition φ₂ ∘ φ₁⁻¹ is thus a homeomorphism from an open region in ℝ^{n₁} to an open region in ℝ^{n₂}. And this is absolutely impossible unless n₁ = n₂. So now we know that any two coordinate patches that intersect must use the same value of n.

Does this mean that we always have to use the same value of n? Well, not quite. Take two distinct natural numbers n₁ and n₂. For each one, we can come up with all the coordinate patches with that dimension, and take their unions V₁ and V₂. Since the union of any collection of open sets is open, each of these sets must be open. But they can't intersect, or else some coordinate patch with dimension n₁ and some other with dimension n₂ would have to intersect, and we just saw that they can't. The only way this is possible is for V₁ and V₂ to live in different connected components of the manifold. And, indeed, our definition so far doesn't rule out this possibility. We could have a two-dimensional sphere and a one-dimensional circle floating next to each other, never touching, and they would count as a manifold according to what we've said so far.

There are two ways around this. One is to only ever talk about connected manifolds, which automatically have a unique dimension since they only have one connected component. However, this imposes restrictions, like making it difficult to take intersections of manifolds and have the result still be a manifold, since it may suddenly be disconnected. The other alternative, which we will use, is simply to assert that all connected components of a manifold must have the same dimension. Either way, we can specify this dimension by saying that M is an n-dimensional manifold, or an n-manifold for short.
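In symbols (using the patch labels reconstructed above), the key step is that the transition map

    \[
      \varphi_2 \circ \varphi_1^{-1} :\;
      \varphi_1(U_1 \cap U_2) \subseteq \mathbb{R}^{n_1}
      \longrightarrow
      \varphi_2(U_1 \cap U_2) \subseteq \mathbb{R}^{n_2}
    \]

is a homeomorphism between nonempty open sets, and (by invariance of domain) no such homeomorphism can exist unless n₁ = n₂.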
<urn:uuid:f0c7f620-10bc-4d00-85a4-2125d39a9647>
3.421875
503
Personal Blog
Science & Tech.
50.538525
Up to this point, the detectors have told us only about charged particles. But what about neutral particles? Instead of a track that shows you where the particle went and how it responded to the magnetic field, the calorimeter gives you a measurement of the total energy of the particles hitting it in any direction, regardless of whether they are neutral or charged.

The calorimeter is so large that it didn't make sense to try to make a seamless detector. So when we talk about the "calorimeter," we are referring to a series of wedges that are arranged into circles along both sides of the detector, fitting snugly around the COT and magnet. Each wedge is about eight feet long and is actually two detectors in one. First is the electromagnetic calorimeter, which measures the energy of lighter particles such as electrons and photons. This detector is made of sheets of a plastic scintillator that absorb energy and emit light, sandwiched between 3/4-inch layers of lead. Behind it is the much larger hadronic calorimeter, which measures the energy of hadrons, more massive particles made of quarks. This piece of the detector uses steel instead of lead in its scintillator sandwich.

A close-up of a calorimeter wedge, with the electromagnetic calorimeter at the bottom and the hadronic calorimeter at the top. The green plastic at the bottom is scintillator; the top of the wedge (seen in the upper right image) is a forest of photomultiplier tubes.

How do particles get "caught" by the calorimeter? A high-energy particle collides with a layer of lead or steel and "showers," creating a cascade of lower-energy, charged particles. These in turn hit other particles that make up the steel or lead and create their own showers of even lower-energy particles. In this way, one 100-GeV electron could become one hundred 1-GeV electrons by the end of the ride. Every time the remaining particles pass through a layer of scintillator, they deposit energy in it, which is converted to light and carried away to be measured by fast electronics.

Why is the hadronic calorimeter so much larger than the EM calorimeter? It isn't only direct collisions with atoms in the steel and lead that cause incoming particles to "shower." The electromagnetic attraction of protons and electrons in the metals can also cause charged incoming particles to shower merely by changing their direction. Since electrons are much lighter than hadrons, they yield to the electromagnetic force more easily, and shower more often. But hadrons have to wait until they have a head-on collision with the nucleus of an atom in the lead or steel. This means hadrons usually penetrate deeper into the machine before they are converted into lower-energy particles, so a hadronic calorimeter needs more space to contain the energy from its cascading particles.

Uses of the calorimeter

Calorimeters are especially good for telling you about the neutral particles that are coursing through your detector. Remember that neutral particles (such as neutrons and high-energy photons) were invisible up to this point, since the inner detectors rely on ionization to see particles and neutral particles do not ionize atoms. But charged particles also deposit energy in the calorimeters. So how do you tell the difference between the two? If you see a track leading straight into a hit in the calorimeter, you're seeing a charged particle. If you see nothing and then suddenly you see a hit in the calorimeter, chances are it was a neutral particle.
The ability to measure the energy of each particle helps physicists determine the masses of particles such as the top quark or the W and Z bosons. But the energy that is "missing" can be just as important as the energy they "catch." Physicists know that, according to the law of conservation of energy, what they put into the system should equal what they get out of it. Missing energy means the collision created some particles that cannot be seen by the detector. These could be particles such as neutrinos that don't interact with matter very often. Or they could be particles that haven't been discovered yet. Many searches for new physics postulate the existence of particles that the detector wouldn't be able to "see" directly. Looking for missing energy is the only way to validate the existence of these particles.

Look at a live event display produced by the calorimeter.
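The "one 100-GeV electron becomes one hundred 1-GeV electrons" picture can be captured in a toy model: assume each layer of material doubles the particle count and halves each particle's energy until a threshold is reached. This is only an illustration of the cascade idea, not how CDF actually simulates showers.

    // Toy shower model: each layer doubles the particle count and
    // halves each particle's energy, until the energy per particle
    // drops to about 1 GeV.
    public class ToyShower {
        public static void main(String[] args) {
            double energy = 100.0; // GeV carried by one incoming electron
            int particles = 1;
            int layer = 0;
            while (energy > 1.0) {
                particles *= 2;  // each particle radiates or pair-produces
                energy /= 2.0;   // energy is shared among the offspring
                layer++;
                System.out.printf("layer %d: %d particles of %.2f GeV%n",
                                  layer, particles, energy);
            }
        }
    }

Seven doublings turn the single electron into 128 particles of about 0.8 GeV each, which is the right order of magnitude for the "one hundred 1-GeV electrons" in the text.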
<urn:uuid:2421a334-f63a-421e-919b-4ff8e741953d>
3.984375
999
Knowledge Article
Science & Tech.
41.186562
The C-Band All Sky Survey (C-BASS) is a project to image the whole sky at a wavelength of six centimetres (a frequency of 5 GHz), measuring both the brightness and the polarization of the sky. C-BASS employs very sensitive microwave amplifiers, cooled to within a few degrees of absolute zero, and configured to measure tiny differences in temperature and polarization. They are mounted on two separate telescopes — one at the Owens Valley Radio Observatory (OVRO) in California, the other in South Africa. This allows C-BASS to observe in both the northern and southern hemispheres and hence map the whole sky. C-BASS is a collaborative project between the Universities of Oxford and Manchester in the UK, the California Institute of Technology (supported by the National Science Foundation) in the USA, the Hartebeesthoek Radio Astronomy Observatory (supported by the Square Kilometre Array project) in South Africa, and the King Abdulaziz City for Science and Technology (KACST) in Saudi Arabia. The southern telescope is a 7.6-m dish donated to the project by Telkom. The northern telescope is a 6.1-m dish donated by the Jet Propulsion Laboratory.

Status (April 2013): The northern instrument is conducting survey observations at OVRO. The southern instrument is being commissioned at Hartebeesthoek Radio Astronomy Observatory prior to deployment in the Karoo.

The main uses of this survey will be to help us make better images of the cosmic microwave background (CMB) and to study diffuse radiation from our Galaxy. The CMB is the very faint afterglow of the Big Bang. It has a temperature of less than three degrees Celsius above absolute zero. By making images of this radiation, scientists are able to see the universe as it was just after the Big Bang. This radiation is also polarized — in a similar way to sunlight reflected off water — and this can be measured in a way similar to wearing polarized sunglasses. The polarization is predicted to have two distinct patterns; one due to variations in density, the other due to the presence of gravitational waves. The polarization patterns from gravitational waves are expected to be extremely faint (less than 1 millionth of a degree Celsius) but if they can be measured, they will provide information about the state of the universe when it was less than a billion billion billionth of a second old. Many telescopes are now being designed and built to detect these very faint polarization patterns in the CMB but, in order to clearly see the signals, we need to accurately remove all the other contaminating signals from the sky which obscure our view of the early universe. Most of this contamination comes from our own Galaxy, the Milky Way. This is where C-BASS will play an essential role in this quest for understanding the origin and evolution of our universe. Operating at a wavelength of 6 cm, C-BASS will make an extremely accurate map of the contaminating signal from our Galaxy. This will allow the contamination to be subtracted with great accuracy from high-frequency measurements such as those being made by Planck. The all-sky maps produced by this survey will be key to enabling accurate subtraction of contaminating signals from the data collected by specialized CMB telescopes in order that they can reveal the true fluctuations in the microwave background.
In addition to playing an essential part in the quest for revealing the tiny fluctuations in the CMB and understanding the origin and evolution of our universe, C-BASS will vastly increase our understanding of the physics of the gas between the stars in our own Galaxy, for example by mapping out the magnetic field in the Galaxy.
<urn:uuid:184dc6c2-0163-4a01-8ba6-c9db35164018>
3.90625
748
Knowledge Article
Science & Tech.
33.374352
The SeeAlso Markup Language

Fredrik Lundh | May 2005

The SeeAlso Markup Language is a minimal XML dialect that can be used to generate links between different web-based documentation sets. A documentation generator can use one or more SeeAlso files to add outbound links to related documents at other sites. A documentation generator can also generate SeeAlso files, for use by other sites. (To view it another way, a SeeAlso file is a collection of trackbacks, stored in a single XML file.)

Here's a small SeeAlso file, which contains a single link:

  <seealso src='http://effbot.org/python-seealso.xml'>
    <item href='http://www.effbot.org/librarybook/atexit.htm'>
      <title>Using the <em>atexit</em> module</title>
      <target>atexit</target>
      <target>atexit.register</target>
    </item>
  </seealso>

A larger example can be found here. Also see Generating SeeAlso Indexes for the Python Library Reference.

FIXME: Use a namespace?

The seealso Element

This is the document root element. It can contain zero or one info element, followed by zero or more item elements. The src attribute contains the source URI for this document. An application may use this URI to look for a newer version of the file. The src attribute is optional; if omitted, the application will have to keep track of the source location in some other way.

The info Element

This element contains information about the SeeAlso origin site, such as site name, contact addresses, etc. There are no standard subelements defined at this time, but you can use custom elements as long as they live in a distinct namespace. Providing relevant Dublin Core properties as info subelements is encouraged.

The item Element

This element contains information for a single SeeAlso item. It should contain an href attribute that points to the associated resource. This element can have zero or more title, excerpt, and target elements. For maximum interoperability, you should include a title and a target element for each SeeAlso item.

The title Element

The title of the SeeAlso item. An item should have at least one title. Multiple titles are allowed, but should be avoided (if used, applications are free to use any of them). The title should contain plain text, but the em element can be used to emphasize words or phrases. No other HTML markup can be used.

The excerpt Element

An excerpt from the SeeAlso item, or any other description that is relevant. As for titles, multiple excerpts are allowed, but should be avoided (if multiple excerpts are present, applications are free to use any of them). The excerpt should contain plain text, but the em element can be used to emphasize words or phrases. No other HTML markup can be used.

The target Element

The target resource for this SeeAlso item (that is, the resource which will link back to this item). The target can be a URI, or use any other form relevant for the application. For example, objects in the Python standard library can be identified by a dotted path (e.g. Tkinter.Canvas.move or atexit.register). An item should have at least one target. Multiple targets can be used, where appropriate.

FIXME: Add some mechanism to specify "target domains"? Current effbot.org samples use a target-domain attribute in the seealso element.

The em Element

This element can be used to emphasize portions of a title or excerpt. Applications are free to render emphasized portions in a suitable way (e.g. use italics), or to treat them as ordinary text.

FIXME: Put this tag in the XHTML namespace?
Using Custom Elements Other elements can be used freely, as long as they live in a distinct namespace. You can for example use Dublin Core elements, RSS elements, or more extensive XHTML markup. For interoperability, custom elements should be placed after the standard elements, and the document should be designed to make sense also if all custom elements are ignored.
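To illustrate how a documentation generator might consume a SeeAlso file, here is a minimal sketch using the JDK's built-in DOM parser. The element and attribute names come from the spec above; the file name is hypothetical and error handling is omitted:

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class SeeAlsoReader {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new File("python-seealso.xml"));
            // Each item maps one or more target identifiers to an outbound link.
            NodeList items = doc.getElementsByTagName("item");
            for (int i = 0; i < items.getLength(); i++) {
                Element item = (Element) items.item(i);
                String href = item.getAttribute("href");
                NodeList targets = item.getElementsByTagName("target");
                for (int t = 0; t < targets.getLength(); t++) {
                    System.out.println(
                        targets.item(t).getTextContent() + " -> " + href);
                }
            }
        }
    }

A real generator would collect these pairs into a lookup table and, while rendering the page for (say) atexit.register, emit a "see also" link to every href registered for that target.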
<urn:uuid:a207b739-1eda-45d7-8544-667b6a00e265>
2.890625
877
Documentation
Software Dev.
45.42293
Feb 17, 2012

Gene flow in wheat: Cultivation trial

Outcrossing potential of GM wheat through pollen dispersal

Execution: Institute of Evolutionary Biology and Environmental Studies, Zurich

Type of experiment: Planting trial, measurement of GMO presence

Experimental design and realization: The outcrossing rate between adjacent small plots was measured as a function of distance in the years 2008-2010. Various GM wheat lines with increased fungal resistance were used as the GM donor plants. To determine the outcrossing rate, five samples each containing 100 grains were taken from each plot. The grains were ground to flour and examined for the presence of the foreign genes from the GM lines using PCR analysis.

Over a distance of 0 to 0.5 metres, an average outcrossing rate of 0.7 per cent was observed. This fell to 0.03 per cent when the distance increased to 2.5 metres. A relatively small separation between wheat fields can, according to these results, almost entirely prevent undesirable gene transfer. The observed cross-fertilisation rates were within similar ranges for the various GM and non-GM wheat plants studied, but differed slightly. The study authors therefore recommend investigating the outcrossing rates of GM wheat lines on a case-by-case basis.
<urn:uuid:f581a62a-1b6a-4f03-87c8-e9e430ddd905>
2.75
268
Knowledge Article
Science & Tech.
44.959892
Cold Weather Fun: Boiling Water Turns to Ice Crystals

In this video, a mug of boiling water is tossed in the air on a very, very cold day. Much of the water immediately freezes into ice. This happens because it takes time for the energy contained in a hot object to be transferred to a cold object. However, the rate of heat transfer is proportional to the temperature difference between the two objects, so hot water will lose heat faster than cold water.
<urn:uuid:f9ce9e2c-04b9-4d68-ab91-15d1e1bf0072>
2.890625
102
Truncated
Science & Tech.
50.118828
Further to your interview with Brian Hare on the importance of dog domestication to our own evolution (2 March, p 30). We know that human lifestyles changed from hunter-gatherer to pastoralism. This was a crucial step in human cultural evolution. The practical possibility, and hence probability, of the advent of pastoralism must have increased when dogs were able to follow human hand and voice signals to help control livestock. In the past, as now, such working dogs would have been valued and bred, producing selective advantages.
<urn:uuid:45ee9f79-f5c8-477f-83c4-f67b630a0733>
3.25
133
Truncated
Science & Tech.
41.095701
Some 30,000 light-years away, near the chaotic, gaseous region at our galactic center, a team of Japanese researchers has found the strange cosmic feature you see above: a helical molecular cloud twisting across some 60 light-years.

The European Southern Observatory's VISTA survey telescope has turned its eyes inward to the center of our galaxy, and for the first time has looked straight through it. VISTA's latest batch of infrared images has discovered two new globular clusters here in the Milky Way that had never been seen before, but more importantly they are the first star clusters that we've been able to image beyond the dusty and gaseous core of our galaxy.

Behold, your galactic center. This Hubble image, captured with the space telescope's Near Infrared Camera and Multi-Object Spectrometer (NICMOS), is the highest-resolution picture of the Milky Way's galactic center taken to date, taking in a newly discovered group of massive stars, lots of super-hot gas, and roughly 35,000 square light-years of space in one sweeping mosaic.

The center of the Milky Way is hard to see in visible light, because interstellar dust blocks our view. But the Spitzer Space Telescope's infrared vision can penetrate the dust and see through to our galaxy's jam-packed core. This is a newly updated version of the plane of the Milky Way captured by the Spitzer telescope. NASA says the area shown here is immense: horizontally, it spans 2,400 light-years, or 5.3 degrees of the sky, and vertically it covers 1,360 light-years, or 3 degrees.

Spitzer's Milky Way Panorama: I can almost see my ... wait, never mind. (NASA/JPL-Caltech/Univ. of Wisconsin)

Size doesn't always matter when it comes to NASA's pretty pictures, but it may certainly make an impression upon visitors at the Adler Planetarium in Chicago. The planetarium has revealed a gigantic Milky Way panorama that stretches 120 feet long and 3 feet wide at the sides. The center of the picture bulges out to 6 feet wide to accommodate the center of the galaxy.
<urn:uuid:d0dd0c60-4eb4-4f42-9f08-6945e33a264d>
3.34375
509
Content Listing
Science & Tech.
51.023632
Kelvin temperature scale

Submitted by Jur on Sun, 06/17/2012 - 01:00

The Kelvin temperature scale is linked to the concept of absolute zero.
<urn:uuid:37febee9-829d-44c6-8cc3-2a96bab0b44f>
2.84375
195
Content Listing
Science & Tech.
35.686731
With all the storms in the weekend forecast, why not talk about lightning and thunder? Local WCCO-TV news ran a story on this yesterday, but in my humble opinion, it didn't tell us much. Some of it even missed the mark. So here we go!

The question was asked: Why does thunder crack and sometimes roll? Many factors determine the sound of thunder. Distance is a key component. As the sound waves from thunder travel toward you, they pass through air of different densities and temperatures, and they fade. Lower tones travel the farthest, so you tend to hear distant thunder as a low rumble.

The length of a lightning bolt also affects the sound you hear. All thunder is the result of lightning, and some lightning can travel over 60 miles. You literally hear different parts of the lightning bolt at different times, and different parts of the sound travel through different densities of air. This, too, can give thunder a rumbling, rolling sound.

Where you are in the storm matters as well. Generally we hear the first strikes of thunder the loudest and clearest, especially if the thunder precedes any rainfall. Without rain impeding the sound waves, the thunder tends to be sharper and clearer. And if you happen to be right next to a strike, you get the loud crack that sounds instantaneous because the sound has very little distance over which to travel and distort. If you still have your wits about you, you can often hear the thunder roll away into the distance, too.

Two errors in the WCCO story: one minor and the other a bit more significant. First, lightning and thunder rarely occur in the winter not because there isn't as much moisture in the air, but because there isn't as much static electricity in the air. The turbulent rising and falling of rain, and especially ice, in thunderstorms creates a charge (usually a negative charge) in the cloud that is discharged against a positive charge elsewhere, even in another cloud. It is a matter of turbulent storm dynamics, not moisture, that causes lightning and thus thunder. It does happen in the winter, but rarely, because the storm dynamics are not right. (I heard someone once say that lightning is rare in hurricanes, too. Is that true? I was told hurricanes expend energy in circulating winds that inhibit tall thunderstorm development. I don't know if this is true. Does anyone from Florida or Louisiana read this post? My dear friend Sandy can answer this for me. I'll get back to you.)

The second and more silly error is how the difference in time between a lightning strike and the sound of thunder was explained. They explained that every five seconds counted between seeing a lightning flash and hearing the thunder accounts for a mile of distance, which is essentially accurate. If you can count thirty seconds, for example, the storm is about 6 miles away. The meteorologist said this was true because light travels five times faster than sound. This is incorrect. (Sorry, Mike Augustyniak…but it is ok.) Light travels at a speed of 186,000 miles per second, or almost 700 million miles an hour. (We learned this in school, right?) Sound travels at about 768 miles per hour. I'm not a math whiz, but that makes light nearly a million times faster than sound. And if you think about it, the "five times faster" claim wouldn't explain the rule anyway. If light traveled five times faster than sound, how would counting to five measure that? Where is the constant? If I counted really slowly and only got to three, would that mean light traveled three times faster than sound? Nope.
So there you have it, my little brush-up on lightning and thunder. Please reply kindly with any corrections.

- Why are we getting thunder and lightning? (metofficenews.wordpress.com)
- Flash Facts About Lightning (nationalgeographic.com)
- Frequently Asked Questions About Lightning (nssl.noaa.gov)
<urn:uuid:c587a52a-fd6c-46c6-b531-3c9217108aa1>
2.734375
849
Personal Blog
Science & Tech.
64.600858
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. December 31, 1997 Explanation: Some stellar nebulae are strangely symmetric. For example, every major blob of gas visible on the upper left of NGC 5307 appears to have a counterpart on the lower right. This picture taken by the Hubble Space Telescope was released last week. NGC 5307 is an example of a planetary nebula with a spiral shape. Spiral planetary nebulae are thought to be caused by a bright central white dwarf star expelling a symmetric wobbling jet of rapidly moving gas. It takes light about 10,000 years to reach us from NGC 5307, and about 6 months just to go from one side to the other. In contrast, light takes only about 8 minutes to reach Earth from the Sun. Authors & editors: NASA Technical Rep.: Jay Norris. Specific rights apply. A service of: LHEA at NASA/ GSFC &: Michigan Tech. U.
<urn:uuid:bfe4c093-d71f-425b-b7bc-2428afb69766>
3.578125
223
Knowledge Article
Science & Tech.
56.919778
Researchers in the last several years have realized that they had overlooked whole classes of genes that do not make proteins. Normally, a short stretch of DNA is transcribed into an RNA molecule, which is then moved to another part of the cell and decoded to make a protein. While the non-protein-coding genes also produce RNA molecules, these products are not used to make protein. About 15 years ago, scientists started to see that some RNAs did things other than make proteins. For example, the Xist RNA controls deactivation of one of the X-chromosomes in females, a necessary process to prevent biological problems. No one yet understands how the Xist RNA actually works. These non-coding RNA genes often regulate the levels of protein-coding RNAs, but some have other functions too. It turns out there are many genes that do not make proteins--many more than realized until even just a few years ago. The new article, Genes Don't Only Make Proteins: How Non-Coding RNA is Revolutionizing Genomics, discusses how these genes were discovered and their impact on genomics research.
<urn:uuid:8826d170-d0b0-44c0-a575-d637398a0ea6>
3.75
226
Knowledge Article
Science & Tech.
42.129301
JDK and JRE File Structure

This page provides an introductory overview of the JDK directories and the files they contain. Note that the file structure of the JRE is identical to that of the JDK's jre/ directory. This section describes the most important files and directories required to develop applications for the Java platform. (Some directories that are not required -- demos, Java source code, and C header files -- are mentioned in the Additional Files and Directories section below.) Assuming the JDK software is installed at /jdk1.6.0, here are some of the most important directories:

- /jdk1.6.0 -- Root directory of the JDK software installation. Contains copyright, license, and README files. Also contains src.zip, the archive of source code for the Java platform.
- /jdk1.6.0/bin -- Executables for all the development tools contained in the JDK. The PATH environment variable should contain an entry for this directory. For more information on the tools, see JDK Tools.
- /jdk1.6.0/lib -- Files used by the development tools. Includes tools.jar, which contains non-core classes for support of the tools and utilities in the JDK. Also includes dt.jar, the DesignTime archive of BeanInfo files that tell interactive development environments (IDEs) how to display the Java components and how to let the developer customize them for an application.
- /jdk1.6.0/jre -- Root directory of the Java runtime environment used by the JDK development tools. The runtime environment is an implementation of the Java platform. This is the directory referred to by the java.home system property (see the snippet after this list).
- /jdk1.6.0/jre/bin -- Executable files for tools and libraries used by the Java platform. The executable files are identical to files in /jdk1.6.0/bin. The java launcher tool serves as an application launcher (and replaced the old jre tool that shipped with 1.1 versions of the JDK). This directory does not need to be in the PATH environment variable.
- /jdk1.6.0/jre/lib -- Code libraries, property settings, and resource files used by the Java runtime environment. Examples: rt.jar, the bootstrap classes (the RunTime classes that comprise the Java platform's core API), and charsets.jar, character-conversion classes. Aside from the ext subdirectory (described below) there are several additional resource subdirectories not described here.
- /jdk1.6.0/jre/lib/ext -- Default installation directory for Extensions to the Java platform. This is where the JavaHelp jar file goes when it is installed, for example. Also holds localedata.jar, locale data for the java.text and java.util packages.
- /jdk1.6.0/jre/lib/security -- Contains files used for security management, including the security policy (java.policy) and security properties (java.security) files.
- /jdk1.6.0/jre/lib/sparc -- Contains the .so (shared object) files used by the Solaris version of the Java platform.
- /jdk1.6.0/jre/lib/sparc/client -- Contains the .so file used by the Java HotSpot Client Virtual Machine, which is implemented with Java HotSpot technology. This is the default VM.
- /jdk1.6.0/jre/lib/sparc/server -- Contains the .so file used by the Java HotSpot Server Virtual Machine.
- /jdk1.6.0/jre/lib/applet -- Jar files containing support classes for applets can be placed here. This reduces startup time for large applets by allowing applet classes to be pre-loaded from the local file system by the applet class loader, providing the same protections as if they had been downloaded over the net.
- /jdk1.6.0/jre/lib/fonts -- Font files for use by the platform.
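Since the jre directory is the runtime's root, a running VM will report it through the java.home system property. Here is a minimal sketch that prints it (the class name is mine; the path printed will vary by installation):

    public class ShowJavaHome {
        public static void main(String[] args) {
            // java.home points at the root of the runtime in use,
            // e.g. /jdk1.6.0/jre when running the JDK's bundled JRE.
            System.out.println(System.getProperty("java.home"));
        }
    }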
This section describes the directory structure for the demos, Java source code, and C header files. The additional files and directories are:

- /jdk1.6.0/src.zip -- Archive containing source code for the Java platform.
- /jdk1.6.0/demo -- Examples, with source code, that show you how to program for the Java platform.
- /jdk1.6.0/demo/applets -- Applets that can be used on a web page.
- /jdk1.6.0/demo/jfc -- Examples that use Java 2D and JFC/Swing components.
- /jdk1.6.0/demo/jpda -- Examples of using the Java Platform Debugging Architecture. Includes source code for the javadt and jdb utilities.
- /jdk1.6.0/demo/plugin -- Demos for use with the Java Plug-in product.
- Example classes and C code that demonstrate access to poll(2) functionality from the Java platform.
- /jdk1.6.0/include -- C-language header files that support native-code programming using the Java Native Interface and the Java Virtual Machine Debugger Interface.
- /jdk1.6.0/man -- Contains man pages for the JDK tools.

Copyright © 1993, 2011, Oracle and/or its affiliates. All rights reserved.
<urn:uuid:4f830473-0df6-4ac0-98af-27dfba1e7f6c>
3.046875
897
Documentation
Software Dev.
43.784
Help write the 12 days of aquatic invasive species Christmas (2012 edition) Enjoy the song and the commentary: On the twelfth day of Christmas, a canal brought to me: Canals have been a significant source of invasive species into the Great Lakes region. Canals can be an invasion pathway by opening previously unavailable habitat to a species (e.g. alewives, lamprey) or by allowing ships with AIS in their ballast to enter the Great Lakes (e.g. quagga mussels, spiny water flea). In fact, the pathway into the Great Lakes region for nearly all of the invasive species in this jingle can be easily tied to a canal. The only exception is the red swamp crayfish; it likely made its way into Wisconsin via an aquarium release.

Twelve quaggas clogging – Quagga mussels are now the dominant invasive mussel in Lake Michigan. A congener (a member of the same genus) of zebra mussels, the quagga mussel can tolerate colder water and colonize soft substrates. These abilities have helped it colonize most of the benthic habitat in Lake Michigan. Just like zebra mussels, quagga mussels are quite effective at clogging water intake pipes and other infrastructure. Mitigating these impacts has cost Great Lakes residents millions of dollars.

‘Leven gobies gobbling – The round goby is very fond of fish eggs. It is a known predator of lake sturgeon and lake trout eggs, and a swarm of round gobies can decimate a smallmouth bass nest. They can also consume zebra and quagga mussels, which can cause toxins to move up the food chain, or bioaccumulate. The best ways to prevent future inland round goby invasions are to not use round goby as bait and to not move live fish. This will keep the goby's gobbling ways from spreading to interior lakes.

Ten alewives dying – Alewives are one of the few invasive species that foul Great Lakes beaches throughout the summer. Until the introduction of Pacific salmon, alewives died off in such great numbers that tractors were required to remove them from beaches. Salmon now do a great job controlling alewife numbers, but there are still alewife die-offs due to spawning and temperature related stresses.

Nine eggs in resting – The spiny waterflea and the fishhook waterflea produce tiny resting eggs that can survive as much as 12 hours after the mature waterflea has perished. The resting eggs can also survive extreme environmental conditions, so it is imperative to make sure that recreational equipment is cleaned to prevent spreading these invasive crustaceans. Luckily, their Wisconsin distribution is currently limited to Lake Michigan, Lake Superior, the Madison Lakes, and a few other inland lakes.

Eight shrimp ‘a swarming – The bloody red shrimp is one of the Great Lakes' most recently discovered ballast invaders. Bloody red shrimp swarms can be incredibly abundant, with swarms of up to 1,500 individuals per square meter documented. Their effects on the Great Lakes are largely unknown, but they may compete for food with young fish, and have been found in the diet of some fish in the Great Lakes. Regardless of the impacts, eight shrimp ‘a swarming is a huge underestimate.

Seven carp and counting – There are seven species of non-native carp in the United States. There are the four collectively known as Asian carp (black, grass, silver, and bighead), the common carp, the crucian carp, and last but not least, the Prussian carp (a wild version of the goldfish). While the current focus is on the silver and bighead carp, all of these carp cause problems one way or another.
Hopefully we won’t actually be counting any other carp species soon.

Six lamprey leaping – This is some bad lamprey biology humor. Lampreys are actually poor jumpers, especially when compared to trout and salmon, so a small low-head obstacle or ledge can prevent lampreys from moving further upstream while other fish leap over the obstacle. Thus, physical barriers are one way managers are preventing lampreys from invading more streams in the Great Lakes basin.

FIVE ??? What? Readers, here's your chance to test your songwriting chops. In the comments below, indicate what you think this verse should say.

Four white perch on ice – Icing your catch is another way fishermen can help prevent the spread of invasive species. Many invasive species aren't readily visible to the naked eye, including zebra and quagga mussel veligers, spiny and fishhook waterfleas, and viral hemorrhagic septicemia (VHS). Icing the day's catch means anglers don't need to transport water and the organisms in it, while also improving table fare. The invasive white perch would go well on ice, as would the yellow perch and any of the other delicious fish species found in Wisconsin. A fish on ice is twice as nice; that's a win-win if I've ever heard one.

Three clean boat steps – Clean. Drain. Dry. Follow those three simple steps to stop aquatic hitchhikers. Not already familiar with the three clean boat steps? Here they are in a little more detail: Clean any weeds, mud, and debris from your boat, motor, and trailer. Drain any water from the boat, motor, live wells, bait wells, and bilge. Dry any equipment that came in contact with the water for at least five days, especially if you will be using it on a different waterbody.

Two red swamp crayfish – Two is the number of documented red swamp crayfish populations in Wisconsin. Both populations were detected early and contained. Time will tell if eradication efforts were successful in eliminating this common classroom dissection subject. Similar introductions can be prevented in the future by not releasing unwanted pets and classroom specimens. Not using crayfish as bait can also help prevent future crayfish invasions.

And a carp barrier in the city! – There are actually three electric barrier arrays in the Chicago Sanitary and Ship Canal. Two of the barriers are always on, while the other is on standby to provide emergency backup or to be functional during periods of maintenance. This configuration has prevented radio-tagged fish in an Army Corps of Engineers study from moving upstream of the barriers. As long as the electricity remains on, these barriers should prove to be effective at preventing additional silver and bighead carp from entering the Great Lakes until a more permanent solution can be found.
<urn:uuid:c94df3ea-4106-4a73-b44c-2c46766a7d8b>
3.34375
1,367
Personal Blog
Science & Tech.
47.086869
Remember the first time someone handed you a kaleidoscope and invited you to look inside? You may have heard a rattling in the far end of the brightly-colored cardboard tube as you lifted it to your eye like a spyglass. Perhaps you were skeptical, but when you peered in, you were amazed by the burst of color and intricate design at the other end. No matter how long you played with that fascinating device, or how many times you turned or shook the end, you never saw the exact same pattern twice. Generations of people over the past two centuries have shared this experience, but none have ever viewed identical images. Perhaps it's part of the appeal of the kaleidoscope that such a low-tech device can create a never-ending array of beautiful -- sometimes breathtaking -- art. But the art lasts only a few moments before it's replaced by the next amazing image. The word kaleidoscope comes from Greek words meaning "beautiful form to see." Some are so beautiful and rare that they've become prized as collectable objects, bringing big money in the marketplace: One sold at an auction house in 2000 for over $75,000 [source: Kohler]. Despite what you might have once thought, it's not magic that creates the kaleidoscope's beautiful forms, but rather an assembly of mirrors, angles and ordinary objects working in a very scientific way. On the next page, we'll explore the mystery behind those mirrors and beautiful forms, and we'll see why there's really no mystery at all. In fact, before long, you could be creating a kaleidoscope yourself, to amaze and delight your friends.
<urn:uuid:f9bd9ec7-6300-49bc-9f67-c397123e69a5>
2.78125
338
Knowledge Article
Science & Tech.
46.280952
The Effect of Light on Rates of Cloning of the Symbiont-Bearing Acoel Convolutriloba longifissura

The acoel turbellarian Convolutriloba longifissura reproduces primarily by asexual fission and engages in an obligate symbiotic relationship with unicellular algae belonging to the genus Tetraselmis. The obligate nature of the symbiosis between these species suggests that algal photosynthesis may influence rates of flatworm asexual reproduction. To test this hypothesis we explored the effect of light on C. longifissura's ability to clone. Worms (n = 18/treatment) were incubated at light regimes of total darkness, 8L:16D, and 12L:12D for 30 days at 26 ± 1°C. Worms held in complete darkness experienced 100% mortality after 14 days; mortality in all other light treatments was <4%. Cloning increased with greater exposure to light (p = 0.03). The average rate of cloning for worms exposed to the 8L:16D light cycle was 0.022/day; the rate of cloning by individuals held at 12L:12D was nearly 3X greater (0.061/day). Flatworm length and cloning rate were positively correlated (r = 0.77, p = 0.01, n = 36); there were no significant differences in worm length among light treatments (p > 0.05). Differences were detected in the cloning rates of worms exposed to blue, red, green, or white light for 20 days at 26 ± 1°C. Cloning rates were highest for worms exposed to white light, lowest for worms exposed to green light, and intermediate for those exposed to red or blue light. These data provide evidence supporting the hypothesis that algal photosynthetic activity directly affects cloning by C. longifissura. We suggest that light conditions which promote photosynthesis in algae result in release of photosynthate to the flatworm, and these materials fuel asexual reproduction. In turn, flatworms provide shelter and/or nutrients to the algae. This allows for the possibility of a mutualistic symbiotic relationship between C. longifissura and Tetraselmis sp.

Originally presented in the John Wesley Powell Student Research Conference - April 17, 2004 and used with permission. Sara Rahim, '04. "The Effect of Light on Rates of Cloning of the Symbiont-Bearing Acoel Convolutriloba longifissura" 2004
<urn:uuid:768547eb-96e1-4532-8bec-e9458eede7fa>
2.9375
522
Academic Writing
Science & Tech.
45.142407
One Cubic Foot
How humans' choice to grow just one crop can affect nature's balance.

A typical terrestrial ecosystem is a living mosaic of hundreds or even thousands of species, balanced on one another's existence like a biological house of cards. From plants and bugs down to microscopic fungi and bacteria, there's a world of life in just a cubic foot. That's what David Liittschwager's new book One Cubic Foot set out to capture. Anything that came through a plastic cube one foot on each side was photographed and catalogued. It's stunning just how much life there is right under our feet, or above our heads, at any moment. Move the cube just a few feet away? You may see a completely different slice of the biodiversity pie. However, there are tales of caution within those pages. See those two photos at top? The top photo shows the biodiversity present in a typical slice of shrub land: cooperative populations of over 100 plants and insects. The bottom? It's from an Iowa cornfield, home to less than an actual handful. That cornfield is the victim of the modern agricultural practice of monoculture. Where there were once hundreds of species, living together on the richest soil in the Midwest, there remain a sparse few. In manipulating nature to grow only one crop on a piece of land, we have created an almost alien world. It's beyond a debate between organic vs. conventional (neither of which is perfect). It's a question of simple biology, and I don't like the answer. Be sure to read Robert Krulwich's review of One Cubic Foot. And then check out Michael Pollan talking about the danger of monocultures to nature and our diets.
<urn:uuid:4869ea7d-fb87-4734-b848-42cb6cb45903>
2.859375
380
Personal Blog
Science & Tech.
56.275258
Apollo 14 Mission Science Experiments - Charged Particle Lunar Environment Experiment The Charged Particle Lunar Environment Experiment was deployed on Apollo 14. It measured electrons and both positively and negatively charged ions near the Moon's surface with energies between 50 and 50,000 electron volts. Less energetic ions were studied by the Suprathermal Ion Detector Experiment and more energetic particles were studied by the Cosmic Ray Detector experiment. One interesting result of this experiment is that it recorded the impact of the Apollo 14 lunar module ascent stage, which was crashed onto the Moon after the crew had returned to lunar orbit. The ascent stage impact occurred at a distance of about 66 kilometers from the Apollo 14 landing site and there were about 180 kilograms of unused rocket propellant on board. Two distinct clouds of material from this impact were recorded by the Charged Particle Experiment. The two clouds were about 14 and 7 kilometers across and had expanded away from the impact site at velocities of about 1 kilometer per second. This expansion velocity is similar to that measured by the Suprathermal Ion Detector for other impacts on the Moon.
<urn:uuid:d9a086c7-db86-47bf-85a1-5b03b7001830>
4.09375
224
Knowledge Article
Science & Tech.
26.49375
ROYAL ASTRONOMICAL SOCIETY OF CANADA
Standing Committee on Observational Activities
Programme for Solar Eclipse of July 20, 1963
Bulletin No. 5
Basic Observation Programme
May 4, 1963

Section C. Visual Observations

This section of the basic observation programme covers visual observations made during the total phase, either with the naked eye or with telescopic aid, with the emphasis on drawings or sketches, supplemented by verbal descriptions of detail seen.

Dark Adaptation. It is common knowledge that our ability to see dim light is proportional to the length of time that the "rod" cells of the eye's retina remain unstimulated. To see fine detail during the brief period of totality, the eye must first become dark adapted, a process that takes at least twenty minutes. The gradual darkening of the sky during the partial phase is not sufficient to produce the desired effect, and the following methods of dark adaptation are suggested. (1) The observer is blindfolded during the approach of totality. This rather drastic measure deprives him of the thrill of seeing the sudden transition from partial to total phase, when the corona shines forth in all its beauty. True, he can watch the end of totality and the subsequent partial phase, but it is not as spectacular as the approach. (2) The observer can wear dark red goggles during the approach of totality, for it has been found that the rods are practically unstimulated by the red end of the spectrum. This method allows activity during the period of adaptation. Care must still be taken to provide adequate protection for the eyes when looking directly at the sun. (3) The observer can wear a black patch over his "observing" eye, watching the approach of totality with the other and removing the patch only after the beginning of totality. This method is particularly suitable for those making telescopic observations, for most observers invariably use the same eye at the eyepiece.

1. CORONA. Regardless of the number of photographs taken, visual observations of the solar corona will be useful, for the human eye, if properly dark adapted, can see fine detail that is difficult to photograph without over-exposing other areas. The problem, of course, in making visual observations is to record faithfully the detail that one sees - the shape and extent of the corona, the variations in intensity - and to do this in a very limited period of time. The observer should not trust to memory but should complete his drawing from actual observation during the period of totality. This is a tall order and it is suggested that the observer practise beforehand, making drawings from projected slides of the solar corona. For the sake of uniformity it is suggested that a two-inch circle represent the sun's disk. A verbal description, recorded immediately after totality, should supplement the drawing. As mentioned in Bulletin No. 4, at sun-spot minimum it is expected that the corona will have long equatorial streamers and short polar plumes. Observations can be made with the naked eye or with a telescope. The report form should give details of equipment used. The telescopic observer has the disadvantage that, since no filters are used during totality, he must be ready to stop a few seconds before the end of totality to avoid the possibility of injury to his eyes.
<urn:uuid:592082f1-bcb4-4d47-928c-116d57fae49a>
2.6875
730
Truncated
Science & Tech.
39.915918
You may have heard about nanotechnology-enhanced pants that keep wine stains away, or even a nanotech tennis racket. But if nanotechnology is truly set to revolutionize the world we live in, what benefits can the poorest people of the world expect to see? According to a 2005 study, these are the areas we should focus on first: Not surprisingly, energy tops the list. According to Singer, easy access to cheap energy will lead to a great deal of economic growth in the developing world. Here at the Buzz we have covered several nanotechnology energy advances that might come to market in the future: super cheap solar cells, nano ultracapacitors from MIT, nano products now. Look for more info on some exhibits we will be rolling out soon on nanotech's impact on energy and the environment.

The BB-sized spherules are thought to be composed of hematite, iron oxide that leaches out of the soil as ground water rises up through it. Three years ago, when Opportunity first landed on Mars, millions of the tiny concretions were seen covering the Martian surface around its landing site at Eagle crater. A depression containing a concentration of the berries allowed Opportunity's Mössbauer spectrometer (an instrument designed to identify iron-bearing minerals) to analyze the spheres' composition. The test results displayed typical outcrop characteristics, but showed an intense hematite signature. Scientists think the concretions are similar to those found on Earth in the Utah desert, and elsewhere. Commonly called "Moqui Marbles", these larger concretions litter the ground in many areas of the Utah desert, and are cherished among New-Age devotees for their supposed metaphysical powers. I found similar concretions at Como Bluff in Wyoming last fall. Como is an historic dinosaur bone yard carved out of an anticline just north of Laramie. The well-known Morrison Formation, from which the dinosaur fossils weather out, is composed of river and floodplain deposits laid down during the Late Jurassic Period. The Morrison outcrops in a number of western states. Concretions on Earth form when ground water rises up through strata of compacted soil, seeping into joints and between layers, where minerals in the water precipitate out slowly over long periods of time. Scientists believe the same process has taken place on Mars. As Opportunity moved across Meridiani Planum and upwards toward the rim of Victoria, the number and size of blueberries decreased. But as the rover neared Victoria's rim, the trend reversed. The terrain there was "full of great big juicy blueberries again," said rover chief scientist Steven Squyres last month at the American Geophysical Union's fall meeting in San Francisco. "That was a surprise to us." The impact that created Victoria Crater smashed deep below the surface and into the blueberry layer, throwing thousands of the concretions around the crater's rim. The concretions add to the growing evidence of water on the Red Planet. During its trek toward Victoria, Opportunity spotted ripple marks in Endurance crater, leading scientists to speculate that water, which is mostly present underground, sometimes flows out on the surface. Recent gully activity was also noticed just a few weeks ago in photos taken by the orbiting Mars Global Surveyor. Just how deep the water level is won't be known until Opportunity descends into Victoria Crater in a few months and studies the outcrops there. "As we go down, we'll cross a bathtub ring," marking the highest level the water reached, Squyres said.
One of the greatest threats to public health these days, especially in developing countries, is poor water quality. Everyone needs water to live, right? But if that water isn't clean, it can lead to a ton of health problems. Some estimates figure that 6,000 people die each day due to health complications from drinking poor water. Solving big problems usually takes big solutions. But a Swiss weaving company has developed an easy, low-cost way to get around the problems of drinking impure water: a portable water purifying device called LifeStraw. People can wear a LifeStraw around their neck and use it to safely slurp up surface water from just about any natural location. The ten-inch-long tube contains a series of fabric filters inside. Those filters can screen out nearly all micro-organisms that carry water-borne diseases, including diarrhea, dysentery, typhoid and cholera. The filters are fine enough to screen out particles down to 15 microns in size. The makers of LifeStraw say their product can last for about a year until it needs to be replaced, processing about 700 liters of water in its lifetime. That averages out to about two liters a day, the size of a large soda pop bottle. There is some minimal maintenance required with a LifeStraw. Users occasionally need to blow out their last gulp of water plus some air through the straw to clean out the filters and any silt or mud that may get drawn into the straw. What's really remarkable about this is the price tag for LifeStraw. Each device costs $3. But you're not going to find them on the shelves of Wal-Mart, Target or a grocery store. LifeStraw's parent company, Vestergaard Frandsen, sells LifeStraws in bulk quantities to charitable groups who then get them to needy areas of the world through service projects. Rotary Clubs in Great Britain are among the biggest participants in the LifeStraw distribution effort. More information on how to get involved in distributing LifeStraws is available at the organization's website: www.lifestraw.com

There's growing photo evidence that water occasionally flows on the surface of Mars. Photos from a NASA Mars orbiter taken over the span of several years show that erosion patterns have changed on portions of the Red Planet. Scientists have known that ice exists on Mars for quite a while, but these latest photographs help point to signs that liquid water occasionally can be found on the planet as well. That's especially important in the search for any forms of life on the planet. While past research has concluded that life was possible in the planet's long past when it was warmer, these new photos help boost the odds that liquid water may exist somewhere on the planet today to help feed life forms. Satellite photos have long shown gullies on the surface of Mars where water was believed to have flowed millions of years ago. Comparing photos of portions of Mars first photographed in 1999 and 2000 and then reshot in 2004 and 2005, researchers have found gullies in two spots that are part of the second series of photos, but not the first. "Water seems to have flowed on the surface of today's Mars," says Michael Meyer, lead scientist for NASA's Mars Exploration Program. "The big question is how does this happen, and does it point to a habitat for life." There are no visible channels or pools of water on Mars. That leads researchers to think that there may be liquid water in underground aquifers, which occasionally release water to Mars' surface.
Underground temperatures of Mars might be warm enough to keep water in its liquid state. The new gullies display evidence of water flow similar to what we see on Earth. They are about one-quarter of a mile long and have delta-shaped patterns at their ends, much like what we find at the end of our rivers and streams. Also, flow patterns in the areas around obstacles in the paths of the gullies are similar to those we see here on Earth when mud and sediment wash around an obstacle. By the way, if you want to see more about the surface of Mars, the Science Museum of Minnesota's 3-D cinema currently is showing the film "Mars," which has footage taken from the Mars rovers currently scurrying around the planet. Maybe you'll be able to see some signs of water in the background.

Astronomers at Cornell have determined that craters on the moon, once thought to hold ice, actually just have highly reflective dirt. This is a setback for space exploration plans, which had hoped to use the ice as a source of water and/or hydrogen for a future moon base.

Researchers for the European Journal of Clinical Nutrition stated "drinking three or more cups of tea a day is as good for you as drinking plenty of water and may even have extra health benefits."

A long time ago, far, far away, there might have been life on Mars. Those are the conclusions researchers are coming to as they pull together data gathered from several space probes to the Red Planet over the past decade. It all adds up to the possibility that Mars could have supported life during its first 1 billion years of existence. For the past 3.5 billion years, its conditions have been too harsh to sustain life as we know it. It became too cold and too dry for even the most basic forms of life, microbes, to exist. The findings of the research team were recently published in the journal Science. A team of international space experts has been studying the data gathered from various space missions. In its first 600 million years, Mars likely had plenty of water, temperate weather and low acid levels. The research team has been able to figure that out by examining the oldest rocks they've found from the missions. Those rocks have been exposed on Mars' surface due to erosion, cratering and large temblors. Exactly where the water may have been on Mars is still up for debate. The research team keeps open the possibility that the planet's surface never had large amounts of water covering it. Clay deposits, a key link to the presence of water on Mars, have been found beneath the planet's surface. And the few exposed sections of clay may have been formed below the surface and later pushed up or exposed. The tame first segment of the planet's life was followed by 500 million years of great volcanic activity that filled the atmosphere with sulfur. Those particles fell back down in the form of sulfuric acid, while at the same time Mars began to lose its atmosphere. Then over the course of the next 300 million years, Mars got the icy-cold, rusty-red look that it still has today. All of this information is helping scientists plan where they want to send future Mars probes to get even more answers to these questions on Mars' origins.

This morning at 7:43 AM EDT NASA successfully launched the new Mars Reconnaissance Orbiter into space to begin its long journey to the red planet. This new mission to Mars will put the satellite into a low orbit to examine the planet in the highest detail ever captured.
The orbiter will travel for 8 months before it goes into orbit around Mars. One of the main goals of this mission is to scout out information that will help us in future missions that will actually land on Mars. So once it gets there it will deploy six new instruments to analyze the atmosphere, scour the surface, and even image deep below the surface of the planet. Learn more about the Mars Reconnaissance Orbiter and other NASA Mars programs.

What do you think is the most important reason to travel to Mars? To mine its resources? For human colonization? To find out if there was/is life on Mars? Something else?

News reports last week indicated that scientists had found methane on Mars—a chemical that usually indicates life. However, NASA says it ain't so. "News reports on February 16, 2005, that NASA scientists from Ames Research Center, Moffett Field, Calif., have found strong evidence that life may exist on Mars are incorrect. NASA does not have any observational data from any current Mars missions that supports this claim. The work by the scientists mentioned in the reports cannot be used to directly infer anything about life on Mars, but may help formulate the strategy for how to search for martian life. Their research concerns extreme environments on Earth as analogs of possible environments on Mars. No research paper has been submitted by them to any scientific journal asserting martian life."
<urn:uuid:118ba6cc-32a8-41c0-b368-f79547513ac1>
2.890625
2,558
Content Listing
Science & Tech.
50.031866
[Previous] | [Session 15] | [Next]

T. H. McConnochie, B. J. Conrath, D. Banfield, P. J. Gierasch (Cornell U.), M. D. Smith (NASA/GSFC)

We have used Mars Global Surveyor Thermal Emission Spectrometer (MGS TES) data to generate a time series of maps that illustrate the properties of the polar vortices on Mars. This represents the first detailed study of the structure and evolution of the Martian circumpolar jets. These jets are an important component of the general circulation, and control the critical wintertime transports of water, dust, and momentum into the polar regions. The polar vortices on Mars are analogous to the stratospheric polar vortices on Earth. We have mapped column-integrated aerosol abundances and vertically resolved temperatures below the 0.1 mb pressure level. Data from each orbit are smoothed and sampled at 1 degree intervals in latitude. The smoothed data are then interpolated onto a uniform longitude-time grid. We calculate the wind field implied by the temperature data using the "balance winds" methodology suggested by Randel (1987, J. Atmos. Sci., 44). From the wind field, we generate maps of potential vorticity for use as a dynamical tracer. In addition to resolving numerous transient weather events along the vortex boundary, our maps provide graphical illustration of the stationary and traveling planetary waves that were reported by Banfield et al. (2002, Icarus, in press; 2003, in prep.). The largest transient displacements in the polar vortex boundary have amplitudes approaching 10 degrees in latitude. However, the vortex boundary remains intact at all times; no sudden warming events comparable to those which occur occasionally in the terrestrial arctic stratosphere have been observed. Funding for this research was provided by NASA through the Mars Data Analysis Program.

Bulletin of the American Astronomical Society, 34, #3. © 2002. The American Astronomical Society.
<urn:uuid:8a25651e-97b1-4a17-9671-0bc8c423a4e5>
2.8125
450
Academic Writing
Science & Tech.
41.290951
If you thought yesterday's post on the SETMAR gene made for tough reading, you should have tried writing it. I wound up starting it over twice, and still wasn't all that happy with it. I found myself thinking that it would have been much easier if I could assume that everyone knew what a Ka/Ks ratio was, or I could link to an explanation of one. Why it didn't occur to me to write my own until after I posted the story, I may never know. It's especially ironic given that I taught it in a class last semester.

The Ka/Ks ratio is a way of measuring the rate of sequence change in a gene that tells us something about the selective evolutionary pressures acting on that gene. It tells us whether the sequence of the gene is under pressure to stay the same, change, or drift randomly. It takes advantage of the fact that not all mutations are equal. For example, the DNA sequence GTT codes for the amino acid Valine, but so does anything that starts with GT: GTA, GTG, and GTC. A mutation that changes either of the first two bases will cause the sequence to code for something else, while any change in the third base will still leave it coding for Valine. The result is that many mutations do not cause any changes in a protein—these are called "synonymous mutations." Those mutations that do result in changes are called non-synonymous.

Simply measuring the number of these changes, however, is not statistically informative, since a given gene will have a different number of places where changes can be synonymous or non-synonymous. To control for that, the number of sites where one type of change can take place is also calculated. In this example, the first two bases (GT) would be considered potential non-synonymous sites. The Ka value is simply the total number of non-synonymous changes divided by the number of potential non-synonymous sites, making it a measure of how often these potential changes happen. Ks is the same for synonymous sites. The Ka/Ks ratio, as a result, measures how often the average mutation in a gene results in a change in the protein it produces.

If mutations in a gene are random, or equally likely to cause changes or not, this ratio should be 1. Since mutations appear to be largely random, it probably starts out that way, but it's worth looking at what happens once a protein is produced and the organism has to deal with the consequences. In the case of basic metabolic enzymes, change is bad; these proteins have existed for billions of years, and are already highly optimized for their roles. As a result, most mutations in these genes will result in a decrease in metabolic efficiency or death. The net result is that most of the mutations in surviving individuals will be synonymous, and the Ka/Ks ratio will be very low. This situation is called negative or purifying selection, and indicates a conservative pressure that gets rid of changes. In contrast, if a gene has been under pressure to change, mutations that alter its amino acids will be retained by selection, while some of the synonymous ones will be lost at random. This can result in a Ka/Ks ratio that's greater than one, a situation indicative of what's called positive selection, or a pressure for change.

So, going back to yesterday's post, the question was whether there was pressure to keep any of the transposon-derived portions of the SETMAR protein around. If you looked at the transposon sequence as a whole, Ka/Ks was 0.3, which indicates a pressure towards conservation with high statistical confidence (p < 0.000001).
But most of the conservation is in the portion containing the DNA binding region, where Ka/Ks is a very low 0.1, again with high statistical confidence. Changes in DNA binding are definitely being selected against. In the DNA cutting region, by contrast, Ka/Ks = 0.7, which is not statistically different from 1 (p = 0.2), meaning changes in this region appear random. That's how we know that evolutionary pressures were keeping the DNA binding function around and didn't care about the DNA cutting activity.
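To make the bookkeeping concrete, here is a toy Ka/Ks calculation (a sketch with made-up counts; real analyses use corrected estimators such as Nei-Gojobori rather than this raw ratio):

    public class KaKsExample {
        static double kaKs(double nonSynChanges, double nonSynSites,
                           double synChanges, double synSites) {
            double ka = nonSynChanges / nonSynSites; // rate of protein-changing substitutions
            double ks = synChanges / synSites;       // rate of silent substitutions
            return ka / ks;
        }

        public static void main(String[] args) {
            // Hypothetical gene: 6 non-synonymous changes across 400 potential
            // non-synonymous sites, 10 synonymous changes across 130 synonymous sites.
            System.out.printf("Ka/Ks = %.2f%n", kaKs(6, 400, 10, 130));
            // Prints about 0.20: well below 1, the signature of purifying selection,
            // like the DNA binding region discussed above.
        }
    }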
<urn:uuid:23ee0c9b-9bc7-4cdc-af7a-bbebcac1a6fb>
3.046875
867
Personal Blog
Science & Tech.
50.31886
This week, Astronomy.com reported some intriguing comet research at Lowell Observatory in Arizona. A Lowell scientist, Dave Schleicher, studies the chemistry of comets. He and his colleagues recently found that Comet 96P/Machholz 1 has a weird chemistry. Machholz is extremely low in a chemical called cyanogen compared to other comets. The Lowell researchers think it may be a totally new class of comets, possibly cooked up billions of years ago in a very different way than their comet comrades in the early solar system. Or — and this is pretty speculative — Machholz could be a refugee from another star system, kicked out by a gravitational glitch onto an unfathomably long journey to our corner of space, whereupon the Sun’s gravity snatched it from the void. The press officer for Lowell, Steele Wotkyns, called me to see if I wanted to talk to the scientist, Dave Schleicher, about the work. This is a routine interaction. Scientific institutions want coverage of their work and we want to pass along interesting new findings to readers. “Interesting” is the key word. What’s interesting to a reader? Making that call is part experience, part gut instinct. I ask myself what is inherently cool about the discovery. In this case, it came down to two issues: chemistry and origin. Chemistry can be a tough sell to a broad audience. So I thought I’d ask Schleicher what this oddball cyanogen-poor comet would smell like. Most people relate to the universal basics of smell, touch, taste, sound, and sight. Schleicher seemed a little dubious about my motives for asking this silly question, but he was a good sport about it. He says if you thawed out a chunk of comet in the lab and sniffed it, it might smell of ammonia. It also contains lots of pure carbon compounds, but these might not smell like anything in particular. As for cyanogen, it was a poison gas used in World War I. “Presumably it had an identifiable odor,” Schleicher told me in an e-mail, “such that soldiers knew when to put on their gas masks.” Indeed, cyanogen chloride is a toxic, foul-smelling nerve gas that causes rapid paralysis and death. So don’t play with a comet, should you ever come across one. OK, we got the smell thing out of the way. Now for the serious stuff: Could this comet be, like Clark Kent (a.k.a. Superman), a strange visitor from another world? If so, the travel time to our solar system would be stupendous. For the comet to be captured by our solar system, it can’t come into the neighborhood too fast. Otherwise it might just fling itself back out into interstellar space. Assuming a relatively slow inbound speed — something roughly the velocity of a commuter jet — Machholz would take something like 10 million years to reach us from the nearest star, Alpha Centauri. “More likely,” Schleicher said, “if Machholz 1 did come from another stellar system, it has been on its journey for hundreds of millions or even more than a billion years.” It would be tough to prove extrasolar origin for a comet, Schleicher notes, because such an interstellar refugee would enter the inner solar system at about the same speed as any other comet originating from the Oort Cloud — a cloud of comets extending from 1,000 to 50,000 times the Sun-Earth distance. However, if a comet came into the solar system at a relatively high velocity and with a hyperbolic orbit, astronomers might argue that it came from another star. But no one has seen such a comet — yet. Interestingly, Schleicher said, about one comet probably escapes our solar system each year. 
“After a billion years or so, that's a lot of comets that we have lost, and there is no reason to believe that the same thing isn't happening to comets in other stellar systems.” Schleicher's project has studied about 150 comets so far, probing them for spectroscopic clues to their chemical composition. Chemistry tells us how and where they formed and could reveal important new details about the early solar system. If astronomers ever do discover a comet from another star, Schleicher said, statistical analysis of lots of comets will probably provide the key evidence. Another approach, Schleicher says, is to journey to the comet in question and bring a sample of it home. Detailed analysis of its chemistry could potentially identify it as from another star system.

As the Comet Observer Award coordinator for the Astronomical League, I found this article very intriguing. Not only does it have information about a comet I and many others photographed, but the speculation that it could be a totally different type of comet is most interesting and curious. That would be something if it was found to come from another solar system's Oort belt!

Thanks for writing, kcstarguy. Yeah, an extrasolar comet -- too cool. Especially considering the idea that material like this could potentially carry some trace of extraterrestrial life. Hmmmmm.. was our solar system "seeded" billions of years ago by primitive cells hitchhiking on a lost comet from Alpha Centauri???
<urn:uuid:6282341d-a7d0-4315-9163-b8233130ad7a>
3.328125
1,129
Comment Section
Science & Tech.
53.187394
Yes. An ice age is a period, lasting tens of millions of years, during which the Earth is cold enough to sustain permanent ice sheets. Since permanent ice sheets currently exist in Greenland and Antarctica, the current age qualifies as an ice age. This current ice age began 30 million years ago. Within an ice age there are warm periods referred to as "interglacial" and cold periods referred to as "glacial." We are in an interglacial period right now. Factors such as changes in the Earth's axis and orbit, tectonic plate shifting, volcanic eruptions or meteoric impacts, and an increase in carbon dioxide emission from industrial pollution all contribute to changes in the Earth's climate.
<urn:uuid:c255f141-5571-4b57-92f7-e9f178d5a261>
3.703125
193
Q&A Forum
Science & Tech.
44.035615
A collection of starlings is called a “murmuration.” A couple of kayakers in Ireland came across a murmuration. Nature never ceases to amaze and remind us of the importance of curiosity and discovery. Here’s an excerpt from a study of the phenomenon: “Bird flocking is a striking example of collective animal behaviour. A vivid illustration of this phenomenon is provided by the aerial display of vast flocks of starlings gathering at dusk over the roost and swirling with extraordinary spatial coherence. Both the evolutionary justification and the mechanistic laws of flocking are poorly understood, arguably because of a lack of data on large flocks… We investigated the main features of the flock as a whole (shape, movement, density and structure) and we discuss these as emergent attributes of the grouping phenomenon. Flocks were relatively thin, of various sizes, but constant proportions. They tended to slide parallel to the ground and, during turns, their orientation changed with respect to the direction of motion. Individual birds kept a minimum distance from each other that was comparable to their wing span. The density within the aggregations was nonhomogeneous, as birds were packed more tightly at the border than the centre of the flock.”
<urn:uuid:a2ce472c-a6ea-4a62-8a56-664536ce07f8>
3.46875
254
Knowledge Article
Science & Tech.
30.281261
Prasad, CR and Yoganarasimha, A and Venkateshan, SP and Nagaraju, TG (1979) Technique for measuring the beam shape of pulsed lasers. In: Review of Scientific Instruments, 50 (9). pp. 1161-1162.

A simple technique for the measurement of the beam shape parameters of pulsed lasers, with just a single pulse of the laser, is described. It involves the use of several beam dividers inclined at very small angles to the beam axis, reflecting the beam back to a screen or a phosphor placed near the exit of the laser. The reflected images are then photographed by a camera to yield the beam parameters.

Item Type: Editorials/Short Communications
Additional Information: Copyright of this article belongs to American Institute of Physics.
Department/Centre: Division of Mechanical Sciences > Mechanical Engineering
Date Deposited: 27 Dec 2010 13:38
Last Modified: 27 Dec 2010 13:38
<urn:uuid:7fb727fc-cd09-4c0d-8339-843159bdd62e>
2.90625
250
Truncated
Science & Tech.
41.518428
Structs (C# Programming Guide) Structs are defined using the struct keyword, for example: Structs share almost all the same syntax as classes, although structs are more limited than classes: Within a struct declaration, fields cannot be initialized unless they are declared as const or static. A struct may not declare a default constructor —a constructor with no parameters — or a destructor. Copies of structs are created and destroyed automatically by the compiler, so a default constructor and destructor are unnecessary. In effect, the compiler implements the default constructor by assigning all the fields of their default values (see Default Values Table). Structs cannot inherit from classes or other structs. Structs are value types — when an object is created from a struct and assigned to a variable, the variable contains the entire value of the struct. When a variable containing a struct is copied, all of the data is copied, and any modification to the new copy does not change the data for the old copy. Because structs do not use references, they do not have identity — there is no way to distinguish between two instances of a value type with the same data. All value types in C# inherently derive from, which inherits from . Value types can be converted to reference types by the compiler in a process known as boxing. For more information, see Boxing and Unboxing. Structs have the following properties: Structs are value types while classes are reference types. Unlike classes, structs can be instantiated without using a new operator. Structs can declare constructors, but they must take parameters. A struct cannot inherit from another struct or class, and it cannot be the base of a class. All structs inherit directly from System.ValueType, which inherits from System.Object. A struct can implement interfaces. For more information:
<urn:uuid:be98057e-e601-4a73-bf04-c30b2c2ae018>
3.65625
384
Documentation
Software Dev.
42.194684
Measurements of any kind, in any experiment, are always subject to uncertainties or errors, as they are more often called. We will argue in this section that the measurement process is, in fact, a random process described by an abstract probability distribution whose parameters contain the information desired. The results of a measurement are then samples from this distribution which allow an estimate of the theoretical parameters. In this view, measurement errors can be seen then as sampling errors. Before going into this argument, however, it is first necessary to distinguish between two types of errors: systematic and random.
<urn:uuid:78d1e573-df41-40df-8c2e-1a0f0b2ea8db>
2.796875
115
Academic Writing
Science & Tech.
24.374364
Why does this fold create an angle of sixty degrees?
Are these estimates of physical quantities accurate?
How many generations would link an evolutionist to a very distant
Is there a relationship between the coordinates of the endpoints of a line and the number of grid squares it crosses?
Take any prime number greater than 3, square it and subtract one. Working on the building blocks will help you to explain what is special about your results.
A napkin is folded so that a corner coincides with the midpoint of an opposite edge. Investigate the three triangles formed.
What is the same and what is different about these circle questions? What connections can you make?
Four rods are hinged at their ends to form a convex quadrilateral. Investigate the different shapes that the quadrilateral can take. Be patient: this problem may be slow to load.
Make a conjecture about the sum of the squares of the odd positive integers. Can you prove it?
Sort these mathematical propositions into a series of 8 correct
How would you massage the data in this Chi-squared test to both accept and reject the hypothesis?
One side of a triangle is divided into segments of length a and b by the inscribed circle, with radius r. Prove that the area is:
Predict future weather using the probability that tomorrow is wet given today is wet and the probability that tomorrow is wet given that today is dry.
See how graphical methods can be used to solve this rates of change problem and how this simple configuration leads to an equation which needs a numerical solution.
Go to last month's problems to see more solutions.
In this Sudoku, there are three coloured "islands" in the 9x9 grid. Within each "island" EVERY group of nine cells that form a 3x3 square must contain the numbers 1 through 9.
<urn:uuid:c0623a6f-71b1-49fc-80b0-bf722afaa2a4>
3.640625
396
Content Listing
Science & Tech.
54.130309
Looking for Signs of Past Water on Mars The big science question for the Mars Exploration Rovers is how past water activity on Mars has influenced the red planet's environment over time. While there is no liquid water on the surface of Mars today, the record of past water activity on Mars can be found in the rocks, minerals, and geologic landforms, particularly in those that can only form in the presence of water. That's why the rovers are specially equipped with tools to study a diverse collection of rocks and soils that may hold clues to past water activity on Mars. The rovers offer unique contributions in pursuit of the overall Mars science strategy to "Follow the Water." Understanding the history of water on Mars is important to meeting the four science goals of NASA's long-term Mars Exploration Program: - Determine whether Life ever arose on Mars - Characterize the Climate of Mars - Characterize the Geology of Mars - Prepare for Human Exploration Learn about the rovers' unique contributions to these science goals through the pursuit of seven primary science objectives. How scientists will rely on the rovers to look for signs of past water Because scientists cannot go to Mars themselves at this point in time, they have to rely on robot geologists--the rovers--to look for signs of past water activity on Mars for them. To do their job, the rovers carry a number of science instruments that will analyze rocks and soils on the Martian surface and perform other important tasks and studies. Science results are being published in scientific journals.
<urn:uuid:a25655d0-215d-4ddc-9fab-b9e72614c05a>
4.03125
316
Knowledge Article
Science & Tech.
32.276229
In this example, we discuss appending a text value to a node in an XML document. The createTextNode() method of Document creates a new Text node containing the specified string. The DocumentBuilderFactory class is used to create new DOM parsers, and DocumentBuilder is used in the code given below to create a blank DOM Document. The following code creates a text node:

Text childElement = doc.createTextNode("string");

Here is the full code of createTextNodeExample.java (a reconstructed sketch follows below); when run, its output begins with:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>

If you are facing any programming issue, such as compilation errors, or you are not able to find the code you are looking for, ask your questions and our development team will try to answer them.
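Only the XML declaration of the program's output survives here, so the following is a minimal sketch of what createTextNodeExample.java could look like, consistent with the description above. The root element name ("root") and the serialization step are assumptions, not the original listing:

// Sketch of createTextNodeExample.java: build a blank DOM Document,
// append a text node, and serialize it to standard output.
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Text;

public class createTextNodeExample {
    public static void main(String[] args) throws Exception {
        // DocumentBuilderFactory creates the parser; DocumentBuilder
        // gives us a blank DOM Document.
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        DocumentBuilder builder = factory.newDocumentBuilder();
        Document doc = builder.newDocument();

        // Create a root element (name assumed for illustration) and
        // append a text node holding the value "string".
        Element root = doc.createElement("root");
        doc.appendChild(root);
        Text childElement = doc.createTextNode("string");
        root.appendChild(childElement);

        // Serialize the DOM tree; the first line printed is the
        // <?xml ... standalone="no"?> declaration shown above.
        Transformer transformer = TransformerFactory.newInstance().newTransformer();
        transformer.transform(new DOMSource(doc), new StreamResult(System.out));
    }
}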
<urn:uuid:e6fe1463-61d0-4d76-9a32-f2e51283388b>
3.375
172
Documentation
Software Dev.
49.112109
Lessons from Past Climate Predictions: Arctic Sea Ice Extent 2012 Update
Posted on 10 October 2012 by dana1981

Last year we examined a few predictions of the September annual Arctic sea ice extent in 2010 and 2011. We now have another year of predictions in the books, and a new record sea ice minimum. The Study of Environmental Arctic Change (SEARCH) has collected sea ice extent predictions since 2008. In this post we will examine how these predictions have fared year by year. Figure 1 below shows the average accuracy of individuals or groups who have made at least two annual Arctic sea ice extent predictions since 2008.

In all of the graphics in this post, orange bars represent model-based predictions, blue bars represent statistics-based predictions, purple bars represent heuristic predictions (experience-based, which essentially means any predictions not based on statistics or models), and green bars represent a combination of methodologies. Observational data from the National Snow and Ice Data Center (NSIDC) are used to evaluate the accuracy of the predictions. Sea ice extent is defined as the portion of the Arctic covered by at least 15 percent ice.

Figure 1: Percent difference between predictions and actual annual Arctic sea ice minima for individuals and groups which have made a minimum of two predictions.

Bear in mind that Figure 1 is not an entirely fair comparison because, as we will see below, the sea ice minimum in some years has been easier to predict than in others; years in which the minimum extent falls near the long-term trend are easier, while extreme years like 2012 are much more of a challenge. Thus a group or individual which made two predictions in easier years has an advantage over those who made two predictions in more anomalous years. Note that in this post we are only considering initial predictions, usually made in May or June each year. SEARCH also allows for revised submissions in July, August, and sometimes September, but those revised predictions are much less interesting.

2008 was the first year that SEARCH solicited Arctic sea ice predictions. It was a challenging year because the previous year had shattered the minimum sea ice extent record, and it was therefore difficult to know whether ice conditions had changed to allow even further declines, or whether the ice would "rebound" and regress toward less extreme values. As the Arctic Sea Ice Escalator in Figure 2 shows, the latter happened.

Figure 2: NSIDC September Arctic sea ice extent (blue diamonds) with "recovery" years highlighted in red, vs. the long-term sea ice decline fit with a second-order polynomial, also in red.

Fourteen groups submitted their predictions to SEARCH in 2008, with most being heuristic (no model-based or statistical methodology noted). Interestingly, the two model-based predictions of Lindsay and Zhang performed the best, predicting the ensuing sea ice extent almost perfectly (Figure 3). Note that in Figure 1 above, Lindsay (who has made predictions every year from 2008 through 2012 using varying methodologies) has had the most overall predictive success, and Zhang's model has also performed well through the years. The statistically-based estimates were the least successful in 2008, on average.

Figure 3: 2008 SEARCH September Arctic sea ice extent predictions vs. observation (4.52 million square kilometers).

In 2009 there was a second consecutive regression after the 2007 record minimum.
In fact, there was a larger increase from 2008 to 2009 than there was from 2007 to 2008 (as is evident in Figure 2). As a result, all 15 SEARCH submissions under-predicted the 2009 extent (Figure 4). The statistical and combined approaches performed the best in 2009, with the models having less success.

Figure 4: 2009 SEARCH September Arctic sea ice extent predictions vs. observation (5.36 million square kilometers).

The climate contrarians were emboldened by the sea ice "recoveries" in 2008 and 2009, and made a few unofficial predictions of their own. Anthony Watts and Steve Goddard predicted that there would be another 500,000 square kilometer rebound from 2009, thus predicting a 5.75 million square kilometer extent in 2010. There were also 18 submissions to SEARCH in 2010. Amongst the climate realist bloggers, tamino weighed in with his own prediction by fitting a quadratic trend line to the annual September sea ice extent data, as did Skeptical Science's own Gavin Cawley (Dikran Marsupial), using a Gaussian process model, and Robert Grumbine, who had also submitted a prediction to SEARCH in 2009.

There was no continued "rebound" in 2010 - instead Arctic sea ice extent declined to a level closer to the long-term downward trend (Figure 2). Overall, the statistical and model-based predictions fared equally well in 2010, and the heuristic predictions were not far behind, with Watts and Goddard dragging the heuristic average down. There was also a rather strange combined heuristic and statistical prediction by Wilson of just 1 million square kilometers, which was very far off (Figure 5).

Watts and Goddard may have been deterred by their inaccurate 2010 prediction, and did not record any predictions for 2011. However, in 2011 Watts submitted a prediction from his WattsUpWithThat (WUWT) blog readers to SEARCH. Another climate contrarian, Joe Bastardi, weighed in with his own prediction. Coincidentally, the WUWT and Bastardi predictions were almost identical, predicting another large "rebound" from 2010. Cawley and tamino also weighed in with their predictions once again (Figure 6), and there were 18 total SEARCH submissions.

Figure 6: JAXA Arctic sea ice extent data, with an approximate re-creation of Joe Bastardi's 2011 prediction (red), WUWT readers' prediction (blue dot), tamino's prediction (green dot), Dikran Marsupial's prediction (orange dot), and the actual 2011 data (black). Source.

2011 saw yet another sea ice minimum decline, again close to the long-term downward trend (Figures 2 and 6). As a result, the statistical approaches performed the best, with the models performing fairly well and the heuristic predictions doing very poorly (Figure 7).

Perhaps discouraged by their poor prediction performances in 2010 and 2011, among the climate contrarians only WUWT made a prediction in 2012, with a submission to SEARCH. Cawley also submitted his prediction to SEARCH, as did 18 other groups, and tamino posted his own prediction on his blog. As we now know, 2012 demolished the previous minimum Arctic sea ice extent record by approximately 760,000 square kilometers. As a result, all submissions over-predicted this year's extent. Once again the statistical predictions performed the best, with the heuristic predictions doing reasonably well thanks to Morison and Lukovich (despite WUWT dragging their average accuracy down), while the models were not very accurate (Figure 8).
Between 2008 and 2012, the model-based and statistical predictions have been the most accurate, with an average difference from the observational data of 13%. In recent years, however, the statistically-based predictions have been the most accurate of all. The heuristic predictions have been slightly less accurate, on average 15% off from the observations between 2008 and 2012. The lower accuracy of the heuristic submissions is primarily due to the inaccurate predictions of the climate contrarians, which one Skeptical Science contributor has described as "hubristic," since they are based more on irrational optimism, unwise focus on short-term noise, and hubris than on logic or experience. Ignatius Rigor's heuristic predictions, which are based on the belief that the Arctic Oscillation plays a major role in sea ice extent - a hypothesis which appears not to be borne out by the data thus far - have also not fared well (Figure 1).

Climate contrarian bloggers and blog readers (Watts, Goddard, Bastardi, and WUWT readers) averaged a 23% miss with their hubristic sea ice extent predictions, while climate realist bloggers and blog contributors (tamino, Cawley, and Grumbine) averaged just a 9% miss with their statistically-based predictions. The take-home lesson here is that statistical predictions based on the long-term Arctic sea ice death spiral are clearly more realistic than denial-based optimistic predictions that sea ice will somehow magically recover to previous levels.
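As a rough illustration of the statistical approach tamino and others used, here is a minimal Python sketch fitting a second-order polynomial (as in Figure 2's trend line) to September minima and extrapolating one year ahead. The extent values below are illustrative stand-ins, not NSIDC data, and the 2012 "observation" is likewise an assumed number:

import numpy as np

# Hypothetical September minima in million km^2, for illustration only.
years = np.arange(2000, 2012)
extent = np.array([6.3, 6.8, 6.0, 6.2, 6.1, 5.6,
                   5.9, 4.3, 4.7, 5.4, 4.9, 4.6])

# Fit a second-order polynomial trend and extrapolate to 2012.
coeffs = np.polyfit(years, extent, deg=2)
prediction = np.polyval(coeffs, 2012)

# Percent difference from the observed minimum, the metric in Figure 1.
observed = 3.6  # assumed value for this example
miss = 100 * abs(prediction - observed) / observed
print(f"prediction: {prediction:.2f} million km^2, miss: {miss:.0f}%")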
<urn:uuid:d2b9826a-02d3-48e1-a3b5-818b029bc5c2>
2.84375
1,723
Personal Blog
Science & Tech.
37.289341
One way to graph radical functions is to create a table of values and then plot the points. Before starting the table, first determine the domain of the function. Remember, the radicand must be greater than or equal to zero. Once this lower limit for input (domain) values is established, create the table of values. When graphing radicals we plot the points in the coordinate plane.

Whenever you're asked to graph anything in Math class, the most basic way to create a graph is using an xy table. That's what we're going to look at today in graphing rational functions. Excuse me, radical functions; we're not doing rationals. Radicals. It's a big difference. Okay, so the first thing I want to remind you guys of is domain and range. The domain is the set of all possible input values. That's really important when it comes to square roots because you guys know the square root of a negative number is not a real value, not a real solution. So in order to find the domain, the radicand must be greater than or equal to zero. Let me show you what I'm talking about. If I want to take the square root of something, this thing, whatever it is - I'm drawing a little cloud - that cloud, or the radicand, has to be greater than or equal to zero. That's how you find the domain of a radical function.

Like in this problem for example, we're going to look at the parent function. In order to find the domain, or the x values that I'm going to put in my table, I'm going to start by setting my radicand greater than or equal to zero. The radicand is whatever's under the square root. All I have is x, so all I need is x greater than or equal to zero. That tells me when I'm setting up my table that I want to use x numbers that are zero or bigger. I don't want to use any negative values because those would give non-real solutions in this function. So let's go through and plug in these x numbers one at a time and find their corresponding y numbers. If I stick in zero, the square root of zero is zero. The square root of one is one. For the square root of two I'm going to approximate the decimal with 1.41; you can check that on your calculator. For the square root of three I get 1.73 or so, and then I'm going to stick in 4. The square root of 4, you guys know, is 2. Okay. So now I have a good number of points. I'm going to get these guys on the graph and you'll see the shape that all radical functions have.

Here we go. I first start with (0, 0), then I have (1, 1), then (2, 1.4); that's almost one and a half, so I have to approximate a little bit. Then (3, 1.7), and then (4, 2). Okay. So you can kind of see what this is looking like. It's not a straight line. What this is, is like half of a parabola on its side. This curve continues forever and ever. It goes out in this direction forever and ever, but notice how it stops right there at the end. That's because my domain was only x numbers that were bigger than or equal to zero. My graph doesn't continue in that direction. Please don't put an arrow on that side, because it doesn't continue. It just stops right there at zero and then it heads out in this direction. So all of the graphs that you're going to be doing by making a table, or any time you graph a radical function, are going to have this half-parabola shape: a dead end on one side and an arrow on the other side that continues out forever and ever.

When you're making your table, be sure to be clever about what x values you choose. Use the domain to tell you what x numbers are possible. That is, take the radicand, set it greater than or equal to zero, and then solve for x in order to find which x numbers should go in your table.
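A minimal Python sketch of the table-of-values method described above, for the parent function y = sqrt(x): require the radicand to be at least zero, then tabulate points to plot. The choice of x values 0 through 4 matches the worked example.

import math

def radicand(x):
    # For the parent function y = sqrt(x), the radicand is x itself.
    return x

# Keep only x values in the domain (radicand >= 0), then build the table.
xs = [0, 1, 2, 3, 4]
table = [(x, round(math.sqrt(radicand(x)), 2)) for x in xs if radicand(x) >= 0]
print(table)  # [(0, 0.0), (1, 1.0), (2, 1.41), (3, 1.73), (4, 2.0)]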
<urn:uuid:8f538d7c-03a4-4f06-a7ea-6d9ac33d4249>
4.65625
890
Tutorial
Science & Tech.
80.552762
climate protection & energy policy

The greenhouse effect is the process by which the atmosphere traps some of the sun's energy, warming the earth and moderating our climate. A human-driven increase in 'greenhouse gases' has enhanced this effect artificially, raising global temperatures and disrupting our climate. These greenhouse gases include carbon dioxide, produced by burning fossil fuels and through deforestation; methane, released from agriculture, animals and landfill sites; and nitrous oxide, resulting from agricultural production; plus a variety of industrial chemicals.

Every day we damage our climate by using fossil fuels (oil, coal and gas) for energy and transport. As a result, climate change is already impacting on our lives, and is expected to destroy the livelihoods of many people in the developing world, as well as ecosystems and species, in the coming decades. We therefore need to significantly reduce our greenhouse gas emissions. This makes both environmental and economic sense.

According to the Intergovernmental Panel on Climate Change, the United Nations forum for established scientific opinion on climate change, the world's temperature could increase over the next hundred years by up to 6.4° Celsius. This is much faster than anything experienced so far in human history. The goal of climate policy should be to avoid dangerous climate change, which translates into limiting the rise in global mean temperature, compared with pre-industrial levels, to well below 2°C, or even below 1.5°C. Above these thresholds we will reach dangerous tipping points, and damage to ecosystems and disruption to the climate system increase dramatically. We have very little time within which to change our energy system to meet these targets. This means that global emissions will have to peak and start to decline by 2015.

Climate change is already harming people and ecosystems. Its reality can be seen in disintegrating polar ice, thawing permafrost, dying coral reefs, rising sea levels and fatal heat waves. It is not only scientists who are witnessing these changes. From the Inuit in the far north to islanders near the Equator, people are already struggling with the impacts of climate change. An average global warming of 1.5°C threatens millions of people with an increased risk of hunger, malaria, flooding and water shortages. Never before has humanity been forced to grapple with such an immense environmental crisis. If we do not take urgent and immediate action to stop global warming, the damage could become irreversible. That can only be achieved through a rapid reduction in the emission of greenhouse gases into the atmosphere.

This is a summary of some likely effects if we allow current trends to continue:

Likely effects of small to moderate warming
• Sea level rise due to melting glaciers and the thermal expansion of the oceans as global temperature increases. Massive releases of greenhouse gases from melting permafrost and dying forests.
• A greater risk of more extreme weather events such as heatwaves, droughts and floods. Already, the global incidence of drought has doubled over the past 30 years.
• Severe regional impacts. In Europe, river flooding will increase, as will coastal flooding, erosion and wetland loss. Flooding will also severely affect low-lying areas in developing countries such as Bangladesh and South China.
• Natural systems, including glaciers, coral reefs, mangroves, alpine ecosystems, boreal forests, tropical forests, prairie wetlands and native grasslands, will be severely threatened.
• Increased risk of species extinction and biodiversity loss.

The greatest impacts will be on poorer countries in sub-Saharan Africa, South Asia, Southeast Asia and Andean South America, as well as on small islands least able to protect themselves from increasing droughts, rising sea levels, the spread of disease and decline in agricultural production.

longer term catastrophic effects
Warming from emissions may trigger the irreversible meltdown of the Greenland ice sheet, adding up to seven metres of sea level rise over several centuries. New evidence shows that the rate of ice discharge from parts of the Antarctic means it is also at risk of meltdown. Slowing, shifting or shutting down of the Atlantic Gulf Stream current would have dramatic effects in Europe and would disrupt the global ocean circulation system. Large releases of methane from melting permafrost and from the oceans would lead to rapid increases of the gas in the atmosphere, and consequent warming.

Read more in Chapter 1 of the usa energy [r]evolution report.
<urn:uuid:abe4c827-cf17-4151-b070-36b4796fa259>
3.890625
895
Knowledge Article
Science & Tech.
32.4191
Page 4 of 6

At this point we have the necessary code to allow the user to plot a default view of the Mandelbrot set. Next we want to let the user select an area by dragging with the mouse and then recompute the Mandelbrot set so that the selected area is blown up to cover the entire picture box.

To allow the user to mark out an area we need to make use of the mouse events - mousedown, mousemove and mouseup. Essentially what we need to do is record the x,y co-ordinates when the user presses the mouse button down. This marks the first corner of the rectangle. Then as the user moves the mouse we animate a rectangle drawn from the first corner to the current mouse position. Finally the user lets go of the mouse button, i.e. a mouseup event occurs, and we store the final mouse co-ordinates of the rectangle.

This sounds easy, but as we draw a rectangle by specifying its height, width and top left-hand corner, things are a little more tricky. In particular, the mouse down position is only the top left-hand corner of the rectangle if the user drags down and to the right. If they drag up and to the left it is the bottom right-hand corner. The solution to this problem is to sort the mouse positions to find the top left-hand corner no matter what the two points specified are.

First we might as well create the selection rectangle ready to be used:

private Rectangle selection = new Rectangle()
{
  Stroke = new SolidColorBrush(Colors.Black),
  StrokeThickness = 1,
  Visibility = Visibility.Collapsed
};

You can define this Rectangle using XAML if you want, but using object initialisation syntax in code is just as easy. We also have to add the Rectangle to the Canvas, and the best place to do this is in the constructor:

canvas1.Children.Add(selection);

(Notice that the default name that Silverlight assigns the Canvas is canvas1, whereas WPF assigns Canvas1.)

We also need a flag to signal that a selection is in progress and a Point to store the location where the mouse button is first pressed down:

private bool mousedown = false;
private Point mousedownpos;

The MouseLeftButtonDown event handler is:

private void canvas1_MouseLeftButtonDown(
         object sender, MouseButtonEventArgs e)
{
  mousedown = true;
  mousedownpos = e.GetPosition(canvas1);
  // Place the selection's top left-hand corner at the mouse position.
  Canvas.SetLeft(selection, mousedownpos.X);
  Canvas.SetTop(selection, mousedownpos.Y);
  selection.Width = 0;
  selection.Height = 0;
  selection.Visibility = Visibility.Visible;
}

This is an easy method but there are some subtle points. The first is that the Canvas only receives routed events from image1. We also get the mouse position relative to canvas1. Then we set the top left-hand corner of the selection rectangle to the mouse down position and make it visible.

The MouseMove event handler is only complicated because of the need to identify the top left-hand corner of the rectangle between the two points. First we have to check that the mouse down event occurred - we don't do anything if the mouse is just moving over the Canvas:

private void canvas1_MouseMove(
         object sender, MouseEventArgs e)
{
  if (!mousedown) return;

If the mouse is moving after a button down event then we can retrieve its position and work out the difference between its current position and where the mouse down event occurred:

  Point mousepos = e.GetPosition(canvas1);
  Point diff = new Point(
     mousepos.X - mousedownpos.X,
     mousepos.Y - mousedownpos.Y);

Notice that Silverlight doesn't support arithmetic with Point and Vector types as WPF does. The easiest solution is to simply use the Point structure as an all-purpose 2D data type.

We might as well start with the assumption that mousedownpos is the top left-hand corner - after all, most users drag down and to the right:

  Point TopLeft = mousedownpos;

Now we test to see if any of the differences were negative, and if so we have to swap a co-ordinate:

  if (diff.X < 0)
  {
    TopLeft.X = mousepos.X;
    diff.X = -diff.X;
  }
  if (diff.Y < 0)
  {
    TopLeft.Y = mousepos.Y;
    diff.Y = -diff.Y;
  }

Notice that now TopLeft really does hold the position of the top left of the rectangle. This, together with the differences, can be used to set the position and size of the selection:

  Canvas.SetLeft(selection, TopLeft.X);
  Canvas.SetTop(selection, TopLeft.Y);
  selection.Width = diff.X;
  selection.Height = diff.Y;
<urn:uuid:94a0b221-95dc-4dd6-b581-9c5a822b4087>
2.90625
1,003
Documentation
Software Dev.
55.861305
Take out a sheet of paper. Pop quiz! (just kidding). Draw any number of dots on your page. Now connect the dots with lines, subject to the following rules: lines may not cross each other as they move from dot to dot, and every dot on your page must be connected to every other dot through a sequence of lines.

Now count the number of dots (D), lines (L), and regions separated by lines (R). (Don't forget to count the outside as a region too.) Compute D-L+R. What do you get? No matter how you started, the number you will always get is 2! In Figure 1, D=9, L=12, R=5, and indeed, D-L+R=2.

If the lines represent fences and the dots fenceposts, then the regions separated by the fences are the pastures. So, if you are a farmer who wants to fence off 4 pastures together with 55 sections of fence, you can calculate exactly how many fenceposts you need, no matter how you arrange the fences! (L=55, R=5=4+outside, so D=2+L-R=2+55-5=52 fenceposts.) You may wish to have everyone shout out their answer at the same time... students will be surprised they all get the same answer.

The Math Behind the Fact: The number D-L+R is called the Euler characteristic of a surface. It is an invariant of a surface, meaning that while it looks like it may depend on the system of fences you draw, it really does not (as long as every pasture, including the outside, is topologically a disk with no holes). Thus the number only depends on the topology of the surface that you are on! For planes and spheres, this number is always 2.

How to Cite this Page: Su, Francis E., et al. "Euler Characteristic." Math Fun Facts.
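A minimal Python sketch checking Euler's relation on the worked examples above (the Figure 1 counts and the farmer's fence problem); the fencepost formula D = 2 + L - R is just D - L + R = 2 rearranged:

def euler_characteristic(dots, lines, regions):
    # D - L + R, counting the outside as a region.
    return dots - lines + regions

assert euler_characteristic(9, 12, 5) == 2  # Figure 1: D=9, L=12, R=5

# Farmer's problem: 55 fence sections, 4 pastures plus the outside region.
fenceposts = 2 + 55 - (4 + 1)
print(fenceposts)  # -> 52, matching the calculation in the text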
<urn:uuid:773856ad-2eee-46c5-8979-fc403ce5e3f7>
3.671875
444
Tutorial
Science & Tech.
78.444133
CAMEROON is like many countries in west Africa: ecologically rich, economically poor and at war with itself. Its tropical forests and savannas, home to rare black rhinos, elephants, gorillas and a wealth of plant life, are being destroyed as people try to scratch a living from the land. As such, Cameroon is an ideal candidate for environmental aid from the West. In 1995, developed nations agreed to invest $17 million over four years in a project to manage and improve its biodiversity. The scheme is run by the Global Environment Facility (GEF), a fund set up in 1991 by the United Nations and the World Bank to channel money from the West to help solve environmental problems in the developing world. But three years on, according to a damning internal assessment, the Cameroon project has comprehensively failed. An unpublished review carried out last year by the GEF concludes that it ...
<urn:uuid:2228cf9c-8abd-4361-a6e1-55769d4031b8>
3.625
210
Truncated
Science & Tech.
46.830989
A GIANT comet spotted between the orbits of Uranus and Neptune is the first object known to have come from an exotic part of the solar system called the inner Oort cloud. The 100-kilometre-wide lump of ice was spotted two years ago in an oddly eccentric orbit around the sun. The comet is now at its closest point, about 24 astronomical units (AU) away (1 AU is the distance from the sun to the Earth). The most distant point in its orbit is nearly 1600 AU from the sun. Comets from the outer Oort cloud are the only objects known to travel further from the sun, reaching distances of between 20,000 and 200,000 AU. Theorists have long predicted the existence of an inner Oort cloud, but this is the first observation of a body coming from it. This may be because the sun's gravity is still relatively strong at this ...
<urn:uuid:91d0530c-1d4b-447f-a4e9-a52e0d5af976>
3.609375
211
Truncated
Science & Tech.
66.44613