Charles E. Griswold
Flies were collected generally using hand-held aerial nets, sweep nets, and stationary flight (Malaise) traps. Perhaps 100 specimens were collected. An effort was made to collect three target groups for immediate study. Acroceridae (‘small-headed flies’) are a primitive group of brachyceran flies that are worldwide in distribution but rare in collections. As larvae they are obligate endoparasites of spiders. Blepharoceridae (‘net-winged midges’) are delicate nematoceran flies whose larvae cling to rock faces in fast-moving streams and waterfalls. Therevidae (‘stiletto flies’) are fast-flying higher Brachycera most characteristic of arid regions. Unfortunately, no individuals of these families were collected.
Gravity and Extreme Magnetism Small Explorer
Mission Project Home Page - http://gems.gsfc.nasa.gov/
The Gravity and Extreme Magnetism Small Explorer (GEMS) did not pass its confirmation review in 2012 and never moved into the development phase.
GEMS would have used an X-ray telescope to measure the polarization of the X-rays coming from the vicinity of compact objects in the universe: black holes and neutron stars. It would also have studied the remnants of massive stars that have exploded as supernovae. Few polarization measurements have been made in X-ray astronomy since the 1970s, so GEMS was expected to break new ground. The polarization depends, in part, on the X-ray scattering in the accretion disk around the compact object in a binary star system, so GEMS would have helped to constrain the geometry of such systems. GEMS might also have helped to constrain the shape of space that has been distorted by a spinning black hole's gravity, and to probe the structure and effects of the formidable magnetic field around magnetars, dead stars with magnetic fields trillions of times stronger than Earth's.
GEMS would have helped to explain:
- How spinning black holes affect space-time and matter as it is drawn in and compressed by strong gravitational fields.
- What happens in the super strong magnetic fields near pulsars and magnetars.
- How cosmic rays are accelerated by shocks in supernova remnants.
Last Updated Date: April 29, 2013
(x² + xy + ax - b²)² = (b² - x²)(x - y + a)²
Dürer calls the curve 'ein muschellini' which means a conchoid, but since it is not a true conchoid we have called it Dürer's shell curve (muschellini = conchoid = shell).
This curve arose from Dürer's work on perspective. He constructed the curve in the following way. He drew lines QRP and P'QR of length 16 units through Q (q, 0) and R (0, r) where q + r = 13. The locus of P and P' as Q and R move on the axes is the curve. Dürer only found one of the two branches of the curve.
The envelope of the line P'QRP is a parabola and the curve is therefore a glissette of a point on a line segment sliding between a parabola and one of its tangents.
There are a number of interesting special cases:
In the above formula we have:
b = 0 : Curve becomes two coincident straight lines x² = 0.
a = 0 : Curve becomes the line pair x = b/√2, x = -b/√2, together with the circle x² + y² = b².
a = b/2 : The curve has a cusp at (-2a, a).
- Sunspots [2.74 mb]
Sunspots are visible evidence of much larger magnetic activity and structures on the Sun. Extreme ultraviolet and X-ray images of the Sun (all taken the same day) reveal magnetic activity unseen in the visible spectrum.
- The Dynamic Sun [3.58 mb]
The rotating Sun seen in extreme ultraviolet light reveals active regions, magnetic loops and a blast across its surface.
- Magnetic Loop [3.25 mb]
Close-up clip of a fountain-like solar flare observed by the TRACE spacecraft.
- Solar Wind and Giant Eruptions
A view of the corona (with the Sun blocked out) over a two-week observation shows the streaming solar wind and over 10 coronal mass ejections heading out into space.
- Solar Flare Event [3.25 mb]
A close-up sequence of two solar flares (each seen as a bright flash) that blast high-speed protons into space at almost the speed of light. The protons appear as snowy flecks almost immediately on the spacecraft's imaging device.
- Solar Cycle [2.75 mb]
A comparison of the rotating Sun in extreme ultraviolet light in 1996 and 1999 highlights how much more active the Sun is as it approaches its solar maximum in mid-2000.
- Sun-grazing Comets
1) [2.33 mb] A comet arcs into the Sun, leaving a rocket-like trail behind it;
2) [1.82 mb] Two comets head along similar paths towards the Sun and disappear.
- CMEs Up Close
A closer view of the corona reveals many coronal mass ejections blasting particles into space over a busy three-day period. CMEs are the key drivers of space weather.
- A CME Impacts Earth [4.93 mb]
An animation of the basic elements of space weather: a CME explodes away from the Sun, travels across space, and impacts Earth's magnetosphere.
- Magnetic Storm [2.01 mb]
This computer animation, based on actual satellite observations, shows the changes in the Earth's magnetosphere during a coronal mass ejection from the Sun. Scientists use computer models to understand exactly how magnetic storms behave.
- Spectacular Aurora [2.56 mb]
Swirling and sweeping aurora, the only visible evidence of space weather.
- Storm Impact Seen from Space [2.0 mb]
From a vantage point in space, watch as the aurora disturbances spread down across the U.S. and intensify dramatically (near the end). Observers on the ground were treated to a beautiful auroral show of lights.
- NASA's Eyes on Space Weather
This animation shows many of the Sun-Earth Connection spacecraft in their orbits as they constantly monitor the Sun and its effects on Earth.
- Space Weather Event [1.83 mb]
This clip presents a few minutes of the March 1989 magnetic storm responsible for shutting down the electric power grid in Canada's Quebec Province.
Having obtained a joint density estimate, we can compute the probability of any point in the vector space. However, evaluating the pdf in such a manner is not necessarily the ultimate objective. Often, some components of the vector are given as input (x) and the learning system is required to estimate the missing components as output (y). In other words, the full vector can be broken up into two sub-vectors, x and y, and a conditional pdf p(y|x)^j is computed from the original joint pdf over the whole vector as in Equation 5.3. This conditional pdf carries the j superscript to indicate that it is obtained from the previous estimate of the joint density. When an input x is specified, this conditional density becomes a density over y, the desired output of the system. This density is the required function of the learning system, and if a final output estimate is needed, the expectation or arg max can be found via Equation 5.4.
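To make the conditioning step of Equations 5.3 and 5.4 concrete, here is a minimal numerical sketch. It is not the model used in this work; it simply conditions a bivariate Gaussian joint density on an observed input and reads off the conditional mean as the point output. The mean vector and covariance values are illustrative assumptions.

```python
import numpy as np

# Assumed joint Gaussian over (x, y): mean mu and covariance S.
mu = np.array([1.0, 2.0])
S = np.array([[2.0, 0.8],
              [0.8, 1.0]])

def conditional_y_given_x(x_obs):
    """Condition the joint Gaussian on x: mean and variance of p(y|x)."""
    m = mu[1] + S[1, 0] / S[0, 0] * (x_obs - mu[0])   # E[y|x]
    v = S[1, 1] - S[1, 0] ** 2 / S[0, 0]              # Var[y|x]
    return m, v

m, v = conditional_y_given_x(0.5)
print(m, v)   # the expectation m plays the role of the output in Equation 5.4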
Obtaining a conditional density from the unconditional (i.e. joint) probability density function in such a roundabout way can be shown to be suboptimal. However, it has remained popular and is convenient partly because of the availability of powerful techniques for joint density estimation (such as EM).
If we know a priori that we will need the conditional density, it is evident that it should be estimated directly from the training data. Direct Bayesian conditional density estimation is defined in Equation 5.5. The vector x (the input or covariate) is always given and the vector y (the output or response) is to be estimated. The training data is of course also explicitly split into the corresponding x and y vector sets. Note here that the conditional density is referred to as p(y|x)^c to distinguish it from the expression p(y|x)^j in Equation 5.3.
Here, Θ parametrizes a conditional density p(y|x, Θ); Θ^j is exactly the parametrization of the conditional density that results from the joint density parametrized by Θ. Initially, it seems intuitive that the above expression should yield exactly the same conditional density as before. It seems natural that p(y|x)^c should equal p(y|x)^j, since the conditional parametrization is just the conditioned version of the joint one. In other words, if the expression in Equation 5.1 is conditioned as in Equation 5.3, then the result in Equation 5.5 should be identical. This conjecture is wrong.
Upon closer examination, we note an important difference. The Θ we are integrating over in Equation 5.5 is not the same as the Θ in Equation 5.1. In the direct conditional density estimate (Equation 5.5), Θ only parametrizes a conditional density and therefore provides no information about the density of x. In fact, we can assume that the conditional density parametrized by Θ is just a function over y with some parameters. Therefore, we can essentially ignore any relationship it could have to some underlying joint density parametrized by Θ. Since this is only a conditional model, the posterior term over Θ in Equation 5.5 behaves differently than the similar term in Equation 5.1. This is illustrated in the manipulation involving Bayes rule shown in Equation 5.6.
In the final line of Equation 5.6, an important manipulation is noted: p(Θ|x) is replaced with p(Θ). This implies that observing x does not affect the probability of Θ. This operation is invalid in the joint density estimation case, since there Θ has parameters that determine a density in the x domain. However, in conditional density estimation, if y is not also observed, Θ is independent of x. It in no way constrains or provides information about the density of x, since it merely parametrizes a conditional density over y. The graphical models in Figure 5.4 depict the difference between joint density models and conditional density models using a directed acyclic graph. Note that the model Θ and the input x are independent if y is not observed in the conditional density estimation scenario. In graphical terms, the joint parametrization is a parent of the children nodes x and y. Meanwhile, the conditional parametrization and the input x are co-parents of the child y (they are marginally independent). Equation 5.7 then finally illustrates the directly estimated conditional density solution.
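The key step in Equation 5.6, replacing p(Θ|x) with p(Θ) when Θ parametrizes only a conditional density, can be checked numerically. The following sketch uses a deliberately tiny discrete toy model with a flat prior over a scalar parameter; the model and its numbers are assumptions made purely for illustration.

```python
import numpy as np

theta = np.linspace(0.01, 0.99, 99)           # grid over a scalar parameter
prior = np.full_like(theta, 1 / theta.size)   # flat prior p(theta)

# Conditional model: theta parametrizes only p(y=1 | x=1) = theta.
# Seeing x = 1 (without y) contributes no likelihood term in theta,
# so p(theta | x) = p(theta): the parameters and the input are co-parents.
posterior_conditional = prior.copy()
print(np.allclose(posterior_conditional, prior))   # True

# Joint model: theta also parametrizes the input density, p(x=1) = theta.
# Now observing x = 1 does carry information about theta.
posterior_joint = theta * prior                    # p(x=1|theta) * p(theta)
posterior_joint /= posterior_joint.sum()
print(np.allclose(posterior_joint, prior))         # False
```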
The conditioned Bayesian integration estimate of the joint density thus appears to be different from, and inferior to, the direct Bayesian integration estimate of the conditional density. The integral is (typically) difficult to evaluate. The corresponding conditional MAP and conditional ML solutions are given in Equation 5.8.
At this point, the reader is encouraged to read the Appendix for an example of conditional Bayesian inference and how it differs from conditioned joint Bayesian inference. From this example we note that (regardless of the degree of sophistication of the inference) direct conditional density estimation is different from, and superior to, conditioned joint density estimation. Since in many applications full Bayesian integration is computationally too intensive, the MLc and MAPc cases derived above will be emphasized. In the following, we shall specifically attend to the conditional maximum likelihood case (which can be extended to MAPc) and see how General Bound Maximization (GBM) techniques can be applied to it. The GBM framework is a set of operations and approaches that can be used to optimize a wide variety of functions. Subsequently, the framework is applied to the MLc and MAPc expressions advocated above to find their maximum. The result of this derivation is the Conditional Expectation Maximization (CEM) algorithm, which will be the workhorse learning system we will be using for the ARL training data.
Csóka, György and Kovács, Tibor (1999): Xilofág rovarok - Xylophagous insects. Hungarian Forest Research Institute. Erdészeti Tudományos Intézet, Agroinform Kiadó, Budapest, 189 pp.
Mn: Nagy nyárfacincér / En: Large poplar longhorn
22-31 mm. Occurs in Europe, in the Caucasus and Siberia. Widespread in Hungary but only locally abundant. Sometimes a pest in poplar (Populus) and willow (Salix) forests. The female lays her eggs singly in small holes excavated in the bark at the base of the tree, where they overwinter. On hatching, the larvae chew their way into the heartwood, where they excavate vertical galleries 20-30 centimetres long. This greatly reduces the value of the timber. They pupate in June at the lower end of these galleries. The adults emerge after 2-3 weeks and spend a further 1-2 weeks maturing in the pupal chamber. Several individuals can develop in the same trunk. Development takes 2-3 years. Adults can be found from late June until September on the trunk and branches of the foodplant, and on wood piles. Adults show maturation feeding behaviour on the leaves of the same trees. Attracted to lights. Piles of frass and woodchips (as for the goat moth Cossus), and sometimes dying adults, can be found at the base of infested trees.
If f is a function from a certain set A to the same set A, then for any x in A, f(x) is also an element of A. Consequently, it makes sense to apply f to the element f(x). If we do this, we obtain another element of A, which could be denoted by f(f(x)). Similarly, we could apply f to f(f(x)), to obtain f(f(f(x))). If we continue this process, we will obtain the sequence: x, f(x), f(f(x)), f(f(f(x))), ... . This sequence is called the orbit of x under f.
As a notational convenience, we denote f(f(x)) by f²(x), f(f(f(x))) by f³(x), and so on. For example, the orbit of x = .6 under f(x) = x² is .6, .36, .1296, .01679616, ..., because each term is the square of the term before it.
If the domain A of the function f is a set of real numbers, a calculator makes it easy to compute the orbit of a number under f. For instance, suppose f(x) = x². To find the orbit of x = .6 under f using a TI-83, enter .6 and press ENTER, then enter Ans² and press ENTER repeatedly; each press produces the next term of the orbit.
Do you think that this behavior -- convergence to 0 -- is peculiar to the starting value, .6 ? Try using several other initial values of x, and see what happens. In particular, start with x = .2, x = .99, x = -.5, and x = 1.1 , and describe the long-term behavior of the orbits. You should find that the first three of these orbits converge to 0, but the last one, the orbit of 1.1, gets larger and larger without bound. (We say that this orbit diverges.)
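If you prefer code to a calculator, a short Python function can compute orbits the same way. This is a minimal sketch; the function name and the number of terms printed are arbitrary choices.

```python
def orbit(f, x0, n):
    """Return the first n terms of the orbit x0, f(x0), f(f(x0)), ..."""
    terms = [x0]
    for _ in range(n - 1):
        terms.append(f(terms[-1]))
    return terms

f = lambda x: x ** 2
print(orbit(f, 0.6, 6))   # 0.6, 0.36, 0.1296, ... converges toward 0
print(orbit(f, 1.1, 6))   # 1.1, 1.21, 1.4641, ... grows without bound
```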
Attracting and Repelling Fixed Points
The number 0 is called a fixed point for the function f(x) = x², because f "fixes" 0; that is, f(0) = 0. In general, a fixed point for the function f is an element x for which f(x) = x. For a function that maps real numbers to real numbers, a fixed point can be interpreted as a value of x at which the graph of f and the graph of y = x intersect. (Does the function f(x) = x² have any other fixed points?)
A fixed point may be an attracting fixed point, meaning that the orbits of nearby points converge -- are attracted to -- the fixed point; or it may be a repelling fixed point, meaning that the orbits of nearby points move away from the fixed point. Other fixed points may be neither attracting nor repelling. For f(x) = x², 0 is an attracting fixed point while 1 is a repelling fixed point. (Calculate the orbits of x = .999 and x = 1.001 to convince yourself.)
Using the Fixed Point Method
If a fixed point is attracting, that very fact gives us a way to find an approximate value of that fixed point -- simply start with any guess of the fixed point's value, and compute some of the terms in the orbit. Now of course there is no need to do this to find the attracting fixed point of f(x) = x², since it is clear that it is 0. But in other situations, this method can be quite useful.
For example, consider the problem of solving the equation cos x = x. This equation cannot be solved using algebraic techniques, since cos x is not an algebraic expression. However, we can view this problem as one of finding a fixed point for the function f(x) = cos x, and the fixed point (the value of x for which cos x = x) can be found by making an initial guess, and computing the orbit of that initial guess under the function f(x) = cos x . To do this, first be sure that your calculator is in radian mode. Then make a guess, say x = 1, and compute terms in the orbit of x as described above. After about 60 terms in the orbit, there is no change in successive terms (at least in the 10 decimal places displayed by the TI-83). The solution to the equation cos x = x is thus seen to be approximately .7390851332. The neat thing about this is that any starting value will work just as well, because all orbits are attracted to this fixed point!
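The same computation in code takes only a few lines; the starting guess and the iteration count of 60 below are arbitrary but sufficient choices.

```python
import math

x = 1.0                 # any starting guess works for this fixed point
for _ in range(60):
    x = math.cos(x)
print(x)                # ~0.7390851332, the attracting fixed point of cos
```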
Isaac Newton, one of the co-creators of calculus, discovered a very general method of approximately solving equations, which is based on fixed points. This technique is now known as Newton's method. As a simple example, suppose we wish to solve the equation x² - 2 = 0. (Now of course we know that the solution is √2. But the point is that Newton's method can be applied to much more difficult equations.) Basically, Newton's method involves finding an attracting fixed point for a function derived from the equation to be solved. The specific function that is to be used is g(x) = x - f(x)/f′(x), where f′ is the derivative of the function f, a construction studied in calculus. For the case of the equation x² - 2 = 0 , the function f is f(x) = x² - 2, and the function g that must be iterated to solve the equation f(x) = 0 is g(x) = x - (x² - 2)/(2x), which simplifies to g(x) = x/2 + 1/x.
Try it Again
This time, let's solve the equation x³ + x + 1 = 0, using x = 2 as the initial guess. The function to be iterated is g(x) = x - (x³ + x + 1)/(3x² + 1).
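A small sketch of Newton's method in Python reproduces both examples; the helper name `newton` and the step count are illustrative choices, not part of the original exposition.

```python
def newton(f, fprime, x0, steps=20):
    """Iterate g(x) = x - f(x)/f'(x) starting from x0."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

# sqrt(2) as the root of x^2 - 2 = 0:
print(newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))              # ~1.41421356
# The real root of x^3 + x + 1 = 0, starting from x = 2:
print(newton(lambda x: x**3 + x + 1, lambda x: 3 * x**2 + 1, 2.0))   # ~-0.68232780
```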
What Else are Orbits Good For?
Orbits and their fixed points are studied in the field of mathematics known as dynamical systems. One important application is to population modeling, in which successive values of a function f are thought of as the sizes of a given population in successive years. If the function f has an attracting fixed point, then a population modeled by f would have an eventual, stable size as indicated by that fixed point. In a later vignette, we will look at some aspects of population models of this type. If we take a slightly different point of view when looking at orbits, our work gives rise to a certain type of fractal set, known as Julia sets.
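One classic population model of this type is the logistic map p → r·p·(1 - p). The growth rate r = 2.5 below is an arbitrary illustrative choice, for which orbits settle to the attracting fixed point 1 - 1/r = 0.6.

```python
# Logistic population model: next year's size is r * p * (1 - p).
f = lambda p: 2.5 * p * (1 - p)

p = 0.1                 # arbitrary starting population fraction
for _ in range(50):
    p = f(p)
print(p)                # ~0.6, the stable population size for r = 2.5
```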
3. The sun is about 863,700 miles wide. It is about 110 times wider than the Earth.
4. The sun is the center of our solar system which is located on the edge of a spiral arm called Orion's Arm, and is one-half to two-thirds of the way (28,000 light-years) from the center of our Milky Way galaxy.
5. The sun is mostly hydrogen with about 10% being helium and other elements.
6. The core of the sun is thought to be 27 million degrees Fahrenheit.
7. The sun's light takes a mere 8 minutes to reach Earth.
8. If the sun stopped shining all human, animal, and plant life would freeze to death, the tropics would be as cold as the poles, and the seven seas would turn to solid ice.
9. There are 100 billion stars in our galaxy. Six thousand of these stars can be seen with the naked eye from the Earth. The sun is one of these stars.
10. In producing its energy, the sun uses up about 22 quadrillion tons of hydrogen every year. Despite this, according to scientific predictions, the sun contains enough hydrogen to continue shining at its present strength for another 5 billion years.
Predicting Invasions of Nonindigenous Plants and Plant Pests
traits. Cody and Overton (1996) described the reduction in dispersal ability for wind-dispersed seeds of invasive species onto islands in just a few generations in small isolated populations. Carroll and Dingle (1996) indicate that populations of the soapberry bug (Jadera haemotoloma) have evolved differing beak lengths in response to the introduction of new invasive hosts within only 50 years and Singer et al. (1993) have shown rapid evolution in the feeding preferences of the Euphydryas butterfly for the invading herb Plantago lanceolata in only 10 years. Thus, there are many cases of evolution both in invading species and in the species affected by invaders.
As noted earlier in this report, the ability of pathogens to adapt to different plant genotypes has been studied in detail, and the resulting knowledge has helped in forecasting the fate of new microbial genotypes in the environment (Mundt 1995). Fungal, bacterial, and viral plant pathogen populations evolve quickly to overcome resistance genes in hosts. For example, the average useful life of race-specific genes for resistance to fungal rusts of wheat has been estimated to be only 5 years (Mundt 1995).
Population and Community Effects
Invaders can cause reduction in the biological diversity of native species and the size of populations; next to land transformation, they are the most important cause of extinction (Vitousek et al. 1996). After habitat destruction (which affects 81% of imperiled plant species), introduced species contribute more to the imperilment of species (57%) in the United States than the next three causes combined: pollution (7%), overexploitation (10%), and disease (1%) (Wilcove et al. 1998, 2000). (Categories are nonexclusive and do not sum to 100%.) Replacement of natives with nonindigenous species is immediate, readily measurable evidence of the impact of invasions.
Extinction could be the most dramatic impact of invasive species. Small populations of natives suffer the highest risk of extinction from various genetic and demographic causes discussed earlier in this report in connection with the same hazards that small immigrant populations experience. Invaders pose a major risk to threatened and endangered species: about 400 of the 958 species that are listed as threatened or endangered under the Endangered Species Act are considered to be at risk primarily because of competition with or predation by nonindigenous species (Wilcove et al. 1998, Stein et al. 2000). Invaders can also interact with habitat transformation and thus exacerbate the threat to biodiversity (Hobbs 2000).
Extinction of native species, although dramatic, actually characterizes relatively few invasions (Simberloff 1981). Reduced population sizes and local extirpation of a species appear more common than global extinction of a species, but changes in population sizes of native species after invasion by nonindigenous species can vary greatly in magnitude and even direction. For example, establish-
Chandra Discovers Relativistic Pinball Machine
This extraordinarily deep Chandra image shows Cassiopeia A (Cas A, for short), the youngest supernova remnant in the Milky Way. New analysis shows that this supernova remnant acts like a relativistic pinball machine by accelerating electrons to enormous energies. The blue, wispy arcs in the image show where the acceleration is taking place in an expanding shock wave generated by the explosion. The red and green regions show material from the destroyed star that has been heated to millions of degrees by the explosion.
Image credit: NASA/CXC/UMass Amherst/M.D.Stage et al.
SCRAPS of food could soon be helping power your home, thanks to an ultra-cheap bacteria-driven battery. Its developers hope that instead of feeding the dog or making garden compost, organic household waste could top up your home's electricity.
Although such "microbial fuel cells" (MFCs) have been developed in the past, they have always proved extremely inefficient and expensive. Now Chris Melhuish and technologists at the University of the West of England (UWE) in Bristol have come up with a simplified MFC that costs as little as £10 to make.
Right now, their fuel cell runs only on sugar cubes, since these produce almost no waste when broken down, but they aim to move on to carrot power. "It has to be able to use raw materials, rather than giving it a refined fuel," says Melhuish.
Inside the Walkman-sized battery, a colony of
As I walk the beach at Seal Rock State Park, I observe numerous tide pool enthusiasts enjoying the colorful anemones and leathery sea stars. Beachcombers sort through the proverbial seashells on the seashore, but I notice everyone ignores “the wrack”—the salad-like heap of seaweeds cast upon the beach. Well, almost everyone.
Dr. Gayle Hansen introduces herself as she passes by. She is a marine phycologist who studies the taxonomy, distribution, and life histories of seaweeds. She is on her way home, but stops to chat about photography and her interest in seaweeds. She would like to publish a field guide to West Coast seaweeds, and I’m interested. Of course, neither of us has a pen handy to exchange names and e-mail addresses. “If you can’t remember my name, just ask for ‘the Seaweed Lady’ at the Hatfield Marine Science Center,” she says prior to leaving. And that is how I contacted her.
We meet in her office, which is now located in the Environmental Protection Agency building adjacent to the Hatfield Marine Science Center. Though her space is small and crowded with texts, pressed specimens awaiting labels, and floor-to-ceiling cabinets filled with her herbaria, she is appreciative of the space to complete her species lists and studies.
Hansen is a taxonomist specializing in seaweeds.
Her work focuses on the physical and, at times, genetic relationships between seaweeds and on their distribution along the coast. She is the expert on seaweeds in Oregon and one of two experts working in Alaska.
“The Oregon Parks and Recreation Department has relied on Dr. Hansen as one of the state’s only experts in marine phycology,” says Laurel Hillmann, the Coastal Planner for OPRD. “Her key insights on marine intertidal seaweed cover both general information on seaweed biology and suggestions on potential future management practices.”
The Nature Conservancy also invited her to participate in their marine conservation planning, especially the section on threatened and endangered algal species. “Whidbeyella cartilaginea is rare, almost extinct,” Hansen explains. “But for many species we don’t have enough baseline information to determine their rarity over their range.” That makes it difficult to direct conservation efforts.
A LIFE LEADING TO ALGAE
Growing up in Virginia, Hansen remembered collecting algae as a child “more for fun than anything.” She eventually went to college in Connecticut and followed up with a master's degree from the University of Vermont in mycology and a postdoctorate in phycology from the University of North Carolina at Chapel Hill. From there she went to Harvard, and then on to teaching positions at the University of Massachusetts and Maine before moving west to British Columbia, Washington, and finally to Oregon.
“I spent three years in Friday Harbor, and there I was the founder of the Northwest Algal Society, a group that still meets annually,” she says. While teaching at Western Washington University in Anacortes she was hired by the State of Alaska. “I got to go to Alaska after the Valdez oil spill,” she explains. “I made collections there during the summer, and then was provided with a lab at the Marine Science Center in Oregon to work on them during the winter.”
GETTING SEAWEED SOME RESPECT
Hansen now works on a database that includes both historical and modern day records. “One of the first algal specimens collected from Oregon, well actually on the north shore of the Columbia River, was the feather boa or Egregia menziesii by Meriwether Lewis and William Clark.” Her database targets primarily Oregon and Alaskan collections made from the early 1800s to modern day.
“We are very lucky in this state to have a resource like Gayle who knows her seaweeds backwards and forwards,” says Phillip Johnson, Director of CoastWatch. “Seaweeds are the lynchpin of intertidal and near shore ocean ecology.”
The main message I take home from Hansen is that seaweeds and microscopic algae are the Rodney Dangerfields of the marine world—they don’t get no respect. Like land plants, seaweeds are food for underwater grazers and they provide oxygen and shelter for tide pool creatures. “It takes about 1000 pounds of algae to produce one pound of salmon,” Hansen states, noting that three or four steps in the food chain are involved. “You can quickly understand the importance of algae in the commercial fishing industry, but we cannot detect the changes without first knowing what seaweeds are out there.”
As Hansen continues with her studies and searches for funding sources, she says she would like to see a coastal natural history museum built somewhere along the Oregon coast. When it is built, you can bet your last sand dollar that seaweeds will be given the respect they deserve.
Damian Fagan is a freelance writer–photographer based in central Oregon.
Oregon Coast January/February 2007
Strandline - secrets of the seashore
Amongst the seaweed and debris on the strandline, various animal eggs and egg cases may be washed ashore. These provide clues to how various marine animals protect their eggs to help increase the chance of their young surviving. These include the leathery egg cases of rays, dogfish and whelks. Scientists can study these beached egg cases to help discover which coastlines different sea creatures live and breed on. While many of the cases are empty, some may still have developing animals inside, especially if they have been washed up after a storm.
Skates and rays
Skates and rays also have a skeleton made of cartilage and are closely related to sharks. They are flat-bodied and have a disc-shaped body with winglike fins. When not swimming they rest on the seabed. Skates and rays have declined in number around the British coastline.
Sharks and dogfish
Sharks differ from most other fish in that they have a skeleton made of cartilage rather than bone. They range from fast-swimming species like the porbeagle to sluggish bottom dwellers like the dogfish. Sharks have an amazing array of senses for locating prey, and skin teeth that protect them like a suit of armour.
The lesser spotted dogfish is the most common shark species around the UK, including Sussex. It can reach a length of 60-100 centimetres and feeds on the seabed in the shallow coastal waters.
Dogfish lay eggs - protected in a leathery capsule - between November and July. Eggs are laid two at a time.
Tendrils at the corner of each egg case are used to secure the egg to seaweed.
The egg capsule will contain enough food (yolk) for the baby dogfish while it grows inside the egg case, which on average takes 9 months.
By 4 months the baby dogfish is about half grown.
By the time the baby dogfish is ready to hatch it will have used up all the egg yolk and will be twice the length of the egg case.
Cuttlefish and Squid
Cuttlefish, octopus and squid belong to the scientific group Cephalopods, which means "head footed". It's hard to believe that cephalopods are molluscs and therefore invertebrates (animals without backbones) related to the common whelk.
Cuttlefish eggs can be found on the strandline. They are often mistaken for seaweed air bladders and resemble a black bunch of grapes. The black colour comes from the cuttlefish's defensive ink. These eggs are often still alive. If you find them on the beach, put them in a tide pool or back in the sea.
Squid are similar to cuttlefish, only more streamlined and faster swimmers. Cuttlefish and squid swim using both rippling movements of their fins and jet propulsion by squirting water out of their siphons.
Squid eggs look like the tentacle remains of a dead jellyfish. When they hatch, squid are as tiny as grains of rice.
image copyright of Judith Oakley, oakleynaturalimages.com
Crossing the Barrier; June/July 2006; Scientific American Mind; by Grit Vollmer; 6 Page(s)
Paul Ehrlich had just injected aniline dye--used to color blue jeans--into a rat's bloodstream. For years the immunologist had been working on ways to stain cells so they would be more visible under a microscope, and aniline looked promising. Soon all the animal's muscles, blood vessels and organs were deep indigo. But for some confounding reason the central nervous system--the brain and spinal cord--remained untouched.
Ehrlich's experiment, done at Berlin's Charité hospital in 1885, provided early evidence for the blood-brain barrier--a vital wall that controls which molecules in the bloodstream can enter the brain or nerve pathways. Oxygen, sugars and amino acids are allowed in; most compounds are kept out. As a result, the brain can do its job inside a secure perimeter not available to any other organ. Which is handy, because substances in air, water and food--as well as toxins and even the body's own hormones--can severely impair the brain's functioning. Easy access would quickly lead to mental chaos.
(NewsUSA/American Chemical Society) - Chemistry plays a critical role in most of life's daily activities, but we tend to take it for granted, which means our children probably do, too. A passion for chemistry can lead to efficient transportation, improvements in medicine, safer environmental practices and more powerful computers. But, passion must start with an understanding of the basics.
Some Frost/Freeze Knowledge
By Stephen Gode on September 25, 2012, 11:29am
In the map above, most of interior Connecticut is highlighted for September 15th to October 1st as the period with a 10% probability of a temperature of 32°F or less having already occurred. The extreme northeast and northwest portions of Connecticut are highlighted for September 1st to September 15th. The shore is highlighted for October 1st to October 15th, except for the majority of southern Fairfield County (October 15th to November 1st).
General NWS terms:
Outlook: Indicates the potential for significant weather events up to 7 days in advance, with forecaster confidence around 30%.
Watch: Indicates that conditions are favorable for the particular weather event in and near the Watch area, which may pose a risk to life and property. Watches are issued up to 48 hours in advance with forecaster confidence around 50%.
Advisory / Warning: Indicates that a particular weather event is imminent or occurring. Advisories are issued if the weather event will lead to nuisance conditions, while Warnings are issued for significant weather events which will pose a risk to life and property. Warnings and Advisories are issued up to 48 hours in advance with forecaster confidence of at least 80%.
Note: Watches and Warnings issued for Severe Thunderstorms, Tornadoes, and Flash Flooding have much shorter lead times, on the order of hours for Watches or even minutes for Warnings.
From the NWS, the frost and freeze terms are:
Freeze - A condition occurring over a large area when the surface air temperature remains below 32 degrees Fahrenheit for an extended period of time possibly leading to the damage of certain crops.
Freeze Warning - Issued during the growing season when surface temperatures are expected to drop below freezing over a large area for an extended period of time, regardless if frost develops or not. They are usually issued to highlight the first few freezes of the fall, or unusually late freezes in the spring.
A "hard freeze" is used to imply temperatures that are sufficiently cold, for a long enough period, to seriously damage or kill seasonal vegetation. For example, the Mobile AL forecast office lists criteria for its hard freeze warning as temperatures 26 degrees or lower for at least 5 hours. The purpose of this warning to alert people to the potential for frozen pipes, radiators, livestock and so on, not just damage to sensitive plants.
Frost - A covering of ice on exposed surfaces when the air temperature falls below the frost point.
Frost point - the temperature, below 0°C (32°F), at which moisture in the air will condense as a layer of frost on any exposed surface. The frost point is analogous to the dew point, the temperature at which the water condenses in liquid form; both the frost point and the dew point depend upon the relative humidity of the air (a numerical sketch of this relationship follows these definitions).
Frost Advisory - Issued during the growing season when widespread frost formation is expected over an extensive area. Surface temperatures are usually 33 to 36 degrees Fahrenheit.
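To make the dew point/relative humidity relationship concrete, here is a rough sketch using the Magnus approximation. The coefficients are one common parameter set and are assumed here; a true frost-point calculation would use coefficients for saturation over ice, and operational forecasts use more careful methods.

```python
import math

def dew_point_c(temp_c, rel_humidity):
    """Approximate dew point (deg C) via the Magnus formula.

    a and b are one common Magnus parameter set for vapor over liquid
    water (an assumption); saturation over ice needs different values.
    """
    a, b = 17.625, 243.04
    gamma = math.log(rel_humidity / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

print(round(dew_point_c(5.0, 80.0), 1))   # dew point for 5 C air at 80% RH
```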
Coastal Habitat Loss
Homes, jetties, seawalls, canals, and other structures built on beaches or wetlands often destroy habitat for sea turtles, birds, fish, and other sea life. Salt and tidal marshes, wetlands, mangroves, and coral reefs also suffer when development is unsustainable.
Wetlands, mangroves and sea grasses are valuable natural resources as they hold sediment and nutrients, filter pollutants, protect coastal environments from storms, shelter wildlife, and benefit people.
In the US, half the population lives within fifty miles of the coast. The wrong kind of coastal development, coupled with sea level rise, pollution, and ocean warming, can damage or devastate coastal habitats. Coastal communities rely on the natural systems in coastal habitats for food, tourism, and recreational fishing.
3 things you can do to help prevent coastal habitat loss:
1. Switch to renewable energy sources when and wherever possible.
2. Don’t use streets or storm drains as dumps.
3. Research local efforts to protect wetlands.
Other great ways you can make a difference.
LINKS & VIDEOS
Reefs at Risk in the Caribbean – Earth Trends
Coastal Habitats Threat – Sea Turtle Conservation
Mangrove Swamps – Environmental Protection Agency
Coastal Development – Coral Reef Alliance
National Coastal Trends – NOAA
Population and Coastal Regions – Population Reference Bureau
Engaging Community for Sustainable Coastal Development in Lombok, Indonesia
Wetland FAQs – America’s Wetland Resource
Ecosystem Services A Primer – Action BioScience
UN Warns Against Rapid Coastal Development, Al Jazeera
A United Nations University report has warned against rapid coastal development for countries in the Middle East.
Mike Carloss with the LDWF Coastal & Nongame Resources division, discusses the potential for increased habitat loss due to oil entering Pass a Loutre Wildlife Management area.
May is National Wetland Month, USDA
The Great Observatories Origins Deep Survey (GOODS) : An Observational Legacy for Studying Galaxy Evolution
Prof Marc Dickinson
The following was written during the final plenary talk of the first day at the American Astronomical Society Meeting in St Louis. I was going to post as we went along, but the wireless connection in the meeting room was very flaky (probably just as well – it means the audience are paying attention to the speaker!) I’m posting it in a lump near the end of the talk. Images to follow.
In the introduction to this talk, we were told that the hallmark of modern astronomical research is the survey, and it’s certainly true that astronomers have learnt to make use of projects which carefully chart sections of the sky. The speaker began by reminding us that it’s more than a decade since the Hubble Deep Field – as he said, every time you get a new telescope the temptation is to push it to its limits. After 150 orbits staring at the same field, it turned out Hubble was excellent at seeing the distant Universe.
The data were released, and then most other major observatories all observed the same field, producing hundreds of papers to understand this region of the sky. Not bad for a patch just 2.5 arcminutes square (an arcminute is a sixtieth of a degree). But the question is, with such a small area how can we be sure that we have a fair census? What if that patch turned out to be unusual in some way? Even if we’ve got lucky and picked the right region, then rare objects will be missed entirely.
GOODS is the solution to this problem; using Chandra in the x-ray, Spitzer in the infrared and Hubble’s ACS camera (not available at the time of the original HDF), they set out to cover two regions, each thirty times larger than the original Hubble Deep Field. The aim was to disentangle the evolution of normal galaxies in the first third of the Universe’s evolution, taking a census of black hole growth and activity, understanding how and when star formation takes place and so on.
Each telescope had a different role to play; Spitzer, for example, in the mid-infrared allowed the team to weigh the galaxies. The total stellar mass in a galaxy turns out to be very sensitive to the brightness in this band (although you have to worry about the evolution of the stars, we’ve got pretty good at doing that). As before, other observatories have chipped in, with GALEX providing the ultraviolet, for example, and the SCUBA camera on the JCMT providing a view of the cold early Universe in the sub-mm region of the spectrum.
Astronomers are greedy, though, and as well as imaging we demand data. The first step in understanding an object is to work out how far away it is, and for objects as distant as those in GOODS that means measuring their redshift. Lines in their spectrum will be shifted due to the expansion of the Universe; in all, more than 5000 GOODS objects have had their distance measured. That’s not a huge number compared to something like the Sloan Digital Sky Survey, but the objects are much further away (so more telescope time is required per object to get a decent spectrum).
The results were far too numerous to go into here, but there are some nice highlights. For example, we can show that galaxies were, on average, smaller in the past, just as you’d expect if the systems we see around us today were assembled by mergers of smaller galaxies. Arguments are raging about the star formation history of the Universe; we know our Universe is past its peak, forming ten times as many stars about 6 billion years ago as it does today, but the GOODS data suggest that looking further back the rate drops once more.
One of the reasons this is controversial is that most of the energy emitted by the newly formed stars is absorbed and then reradiated by dust. This process makes the galaxies bright in the infrared, and so Spitzer can help here. Prof Dickinson went so far as to call the early Universe (before z=0.7 if you understand and care about redshifts) ‘the age of obscurity’.
As well as changing sizes and star formation rates, the population of galaxies has changed too. In the first third of the Universe’s evolution, the average massive galaxy was what is called a ULIRG – an Ultra-Luminous InfraRed Galaxy. Spin forward to today and you’ll find that in the present day the typical massive galaxy is an elliptical – old, red and dead, devoid of star formation and about as far from a ULIRG as it’s possible to be while still being a massive galaxy.
Disentangling everything that might contribute to the light we receive from a galaxy is hard work, to say the least. The team looked at a set of galaxies which had an excess of light in the mid-infrared – the massive galaxies described in the previous paragraph. It’s tempting to assume that the infrared is due entirely to star formation, but by looking with Chandra they detected x-rays from hidden Active Galactic Nuclei. In other words, these galaxies are not just forming stars, but half of all galaxies had black holes at their centre which were in the act of consuming large amounts of material. As Prof Dickinson said, it seems that around 4 billion years after the Big Bang was an important time in a galaxy’s life.
Perhaps one of the most surprising results is the presence of another population of galaxies at this time. There seem to be a set of galaxies which aren’t doing very much at all – they’ve already formed their stars and are quietly and passively enjoying the galactic equivalent of late middle age. One mystery is that they are smaller for their weights than we’d expect – and it’s hard to imagine how they might ‘puff up’ to become the galaxies that we see today.
Looking further back, the team managed to detected light emitted from galaxies when the Universe was not much more than a billion years old. Even at this time, there’s evidence for a fairly mature stellar population, so substantial numbers of stars must have been formed before the epoch of the earliest galaxies astronomers have seen to date. They have some candidates from this early epoch, but it’ll have to wait for the next generation space telescopes to confirm these detections, so don’t hold your breath.
As if all of that wasn’t enough, the team realised that by going back to the same parts of the sky every 40 or so days, they stood a great chance of discovering distant supernovae. Of those they discovered, almost 50 are a particular type of exploding star – supernovae type 1a. These explosions seem to contain a clue to their true luminosity, and so by comparing how bright they appear with how bright they actually are we can try and measure the acceleration of the Universe.
At this stage I’m beginning to feel a bit breathless after all the work the GOODS team have done. Prof Dickinson is finishing his talk by asking ‘are we done yet?’ The answer, perhaps not surprisingly, is an emphatic no. One of the major problems is that the measured star formation rate should tie up with the measurements of the total number and mass of stars – and they don’t. They also know there must be more black holes hiding, because they see energetic x-rays with no obvious source. Black holes hiding behind dust are the obvious candidates.
What we really need is a new telescope, working in the far infrared. ESA’s Herschel space telescope – larger than any other telescope ever to fly into space – is due for launch early next year and is designed to solve this problem, and it will take a long early look at the GOODS fields. I’m planning to head straight from AAS to go and visit Herschel, which is undergoing final tests, so it’s great to hear that people are already anticipating the data it will provide.
The Galilean transformation is used to transform between the coordinates of two reference frames which differ only by constant relative motion within the constructs of Newtonian physics. This is the passive transformation point of view. The equations below, although apparently obvious, break down at speeds that approach the speed of light on account of the principles of relativity theory.
Galileo formulated these concepts in his description of uniform motion. The topic was motivated by Galileo's description of the motion of a ball rolling down a ramp, by which he measured the numerical value for the acceleration of gravity near the surface of the Earth.
In essence, the Galilean transformations embody the intuitive notion of addition and subtraction of velocities. The assumption that time can be treated as absolute is at the heart of the Galilean transformations.
This assumption is abandoned in the Lorentz transformations. These relativistic transformations are applicable to all velocities, whilst the Galilean transformation can be regarded as a low-velocity approximation to the Lorentz transformation.
The notation below describes the relationship under the Galilean transformation between the coordinates (x,y,z,t) and (x′,y′,z′,t′) of a single arbitrary event, as measured in two coordinate systems S and S′, in uniform relative motion (velocity v) in their common x and x′ directions, with their spatial origins coinciding at time t = t′ = 0:
x′ = x − vt
y′ = y
z′ = z
t′ = t
Note that the last equation expresses the assumption of a universal time independent of the relative motion of different observers.
In the language of linear algebra, this transformation is considered a shear mapping, and is described with a matrix acting on a vector. With motion parallel to the x-axis, the transformation acts on only two components: (x′, t′) = (x − vt, t), i.e. the matrix ((1, −v), (0, 1)) applied to the column vector (x, t).
Though matrix representations are not strictly necessary for the Galilean transformation, they provide the means for direct comparison to transformation methods in special relativity.
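A minimal numerical sketch of this shear matrix follows; the velocity and event coordinates are arbitrary example values.

```python
import numpy as np

v = 3.0  # relative velocity of frame S' along the common x-axis

# Galilean boost as a shear matrix acting on (x, t): x' = x - v t, t' = t.
B = np.array([[1.0, -v],
              [0.0,  1.0]])

event = np.array([10.0, 2.0])   # event at x = 10, t = 2 in frame S
print(B @ event)                # [4. 2.] -> coordinates of the event in S'
```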
Galilean transformations
The Galilean symmetries can be uniquely written as the composition of a rotation, a translation and a uniform motion of space-time. Let x represent a point in three-dimensional space, and t a point in one-dimensional time. A general point in space-time is given by an ordered pair (x,t). A uniform motion, with velocity v, is given by (x,t) → (x + tv, t), where v is in R³. A translation is given by (x,t) → (x + a, t + b), where a is in R³ and b is in R. A rotation is given by (x,t) → (Gx, t), where G : R³ → R³ is an orthogonal transformation. As a Lie group, the group of Galilean transformations has dimension 10.
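The general action described above can be sketched in a few lines of code. The composition order used here (rotate, then boost, then translate) is one convention and an assumption of the example, as are all the numerical values.

```python
import numpy as np

def galilean(x, t, G, v, a, b):
    """Apply a general Galilean symmetry (rotation G, boost v,
    translation (a, b)) to the space-time point (x, t):
        (x, t) -> (G x + t v + a, t + b)."""
    return G @ x + t * v + a, t + b

# Example: rotate 90 degrees about z, boost along x, shift the origin.
G = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
v = np.array([2.0, 0.0, 0.0])
a = np.array([1.0, 0.0, 0.0])

x_new, t_new = galilean(np.array([1.0, 0.0, 0.0]), 3.0, G, v, a, b=0.5)
print(x_new, t_new)   # [7. 1. 0.] 3.5
```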
Central extension of the Galilean group
The Galilean group: here we will only look at its Lie algebra (it is easy to extend the results to the Lie group). The Lie algebra of L is spanned by H, Pi, Ci and Lij (an antisymmetric tensor), subject to the commutation relations
[H, Pi] = 0, [Pi, Pj] = 0, [Ci, Cj] = 0, [Ci, Pj] = 0, [Lij, H] = 0,
[Ci, H] = iPi,
[Lij, Pk] = i(δik Pj − δjk Pi),
[Lij, Ck] = i(δik Cj − δjk Ci),
[Lij, Lkl] = i(δik Ljl − δil Ljk − δjk Lil + δjl Lik),
where
H is generator of time translations (Hamiltonian), Pi is generator of translations (momentum operator), Ci is generator of Galileian boosts and Lij stands for a generator of rotations (angular momentum operator).
We can now give it a central extension into the Lie algebra spanned by H′, P′i, C′i, L′ij (an antisymmetric tensor) and M, such that M commutes with everything (i.e. lies in the center, which is why it is called a central extension) and
[C′i, P′j] = iMδij,
with the remaining primed brackets taking the same form as the unprimed ones above.
See also
- Representation theory of the Galilean group
- Lorentz group
- Poincaré group
- Lagrangian and Eulerian coordinates
If You Build It
Constructed wetlands provide an ideal solution for dealing with stormwater in increasingly urbanized environments
- By Scott D. Wallace
- Apr 01, 2006
Stormwater managers around the country are challenged by growing regulatory requirements in the face of increasingly urbanized land uses. As cities continue to grow, more and more areas are covered with roads, buildings, parking lots, and other impervious surfaces. Instead of storing and slowly releasing water, these impervious surfaces quickly shed rainfall. At the same time, contaminants on these impervious surfaces, such as salt, oils, and sediments, are picked up and carried away in the runoff. The result is both an increase in runoff volume and a decrease in water quality, contributing to the decline of urban and suburban streams throughout the United States.
This threat to our streams and rivers has led to intense interest in stormwater best management practices (BMPs). Because stormwater managers often have very limited areas in which to install BMPs, there is a push toward systems that can provide as much multiple-use benefit as possible.
Why Constructed Wetlands?
Alterations of drainage patterns within watersheds often require the creation of new, engineered wetland systems. These constructed wetlands can be designed to achieve specific project goals. Wetlands combine the goals of water storage and release, water quality improvement, wildlife habitat, and community green space. By combining these attributes, wetlands offer many opportunities for multiple-benefit stormwater projects.
Located within depressed areas in the landscape, wetlands are natural accumulation points for stormwater runoff. After storm events, wetlands fill with stormwater runoff, which is gradually released from the wetland basin. Water stored in the wetland can be released by overland flow to surface waters, or discharged to ground water through infiltration, depending on the specifics of the project.
Managing Water in Wetlands
The key benefits provided by water storage in wetlands are volume and time. Every cubic foot of water that is temporarily stored in a wetland is a cubic foot of water subtracted from damaging peak floods. Processes that treat stormwater and reduce contaminants require time to operate. The storage and gradual release of runoff from wetlands provides the time needed for water treatment.
With knowledge of engineering hydraulics and plant hydrology, designers can create wetlands that store and release water in a manner that mimics the hydroperiod of natural wetlands. Wetland plants have developed the ability to transport oxygen from the leaves, through the plant stems, and into the root system. This oxygen transport capacity allows the plant to survive in waterlogged soils (which do not contain oxygen). However, individual plant species vary widely in this regard. Plants with a high degree of oxygen transfer can tolerate permanently flooded soils. Other plants may tolerate flooding for only a few days, or not at all. The U.S. Army Corps of Engineers has developed a classification system for this flood tolerance that ranges from Obligate (plants occur almost always (greater than 99 percent) in wetlands) to Upland (plants almost never (less than 1 percent) occur in wetlands). The U.S. Fish and Wildlife Service (www.fws.gov/nwi/bha/) has classified more than 6,700 plant species according to their wetland tolerance.
In addition to this classification system, there are handbooks available to assist in the plant selection process. Armed with this knowledge, designers can determine the acceptable "bounce" of the wetland (how much and how long a plant community can be flooded without adverse impacts). A typical bounce target for design purposes is less than 2 feet of water level increase from the 10-year, 24-hour storm event. Outlet weirs for the stormwater wetland can then be designed to produce bounce fluctuations that are within the acceptable range of the plant community.
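As a back-of-envelope illustration of sizing an outlet weir against a bounce target, the sketch below uses the standard rectangular weir equation Q = Cd·L·H^1.5 in US customary units. The discharge coefficient, peak outflow, and head limit are assumed example values; an actual design would route the full 10-year, 24-hour hydrograph through the wetland's storage-discharge relationship.

```python
def weir_length_ft(peak_outflow_cfs, max_head_ft, cd=3.0):
    """Rectangular weir equation Q = Cd * L * H^1.5, solved for the
    crest length L. Cd ~ 3.0 (US units) is a typical assumed value."""
    return peak_outflow_cfs / (cd * max_head_ft ** 1.5)

# Suppose routing the 10-year, 24-hour storm requires releasing a peak
# of 30 cfs while holding the wetland "bounce" (head) to 2 feet:
print(round(weir_length_ft(30.0, 2.0), 1), "ft of weir crest")  # ~3.5 ft
```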
Wetland Treatment Processes
Wetland systems remove contaminants through a variety of physical, chemical, and biological treatment mechanisms. Treatment mechanisms that are particularly important for stormwater management include settling, precipitation, plant uptake, and microbial degradation.
In the process of storing water, wetlands reduce flow velocity through the dense stem networks of aquatic plants. Reduction in flow velocity allows for settling, interception, and filtration of sediment particles. Since nutrients, metals, and organic matter are absorbed into sediment particles, substantial reductions of these pollutants occur as well.
These systems act as "microbial hotels," with the microbes growing on plants, plant detritus, and sediment particles (the biofilm) performing as powerful treatment drivers for organic matter and nitrogen compounds. The wide range of oxidation/reduction (redox) potentials within the wetland environment creates conditions conducive to the precipitation of many metals, including iron, copper, and nickel.
Wetlands and Wildlife Habitat
Since wetlands combine terrestrial and aquatic habitats, they offer unique benefits for hundreds of species of wildlife. These range from breeding habitat for amphibians to "stop-over" feeding and resting areas for migratory waterfowl. The decline of many animal species can be directly linked to the ongoing loss of wetland habitat within the United States. Despite an official policy of "no net loss" of wetlands, the United States continues to lose wetlands, with corresponding declines in many species (especially migratory birds) and associated increases in flood damages.
Maintaining a healthy and diverse community of plants, insects, fish, birds, and other species is a key ingredient in controlling nuisance species like mosquitoes. Well-designed wetland systems include a variety of habitat zones, ranging from open water to emergent plants to upland buffer areas. Wetland areas that are more than 3 feet deep will generally not support emergent vegetation like cattails and bulrushes. These "deep zones" will instead be open-water areas that can support submerged aquatic plants and are attractive habitats for fish, waterfowl, and animals such as muskrats. Generally speaking, the greatest habitat diversity of a wetland system will be achieved with a 50/50 mix of open water and emergent plants.
Wetlands and People
Wetlands can provide "green islands" in an otherwise urban landscape. By combining zones of different water depth (with associated plant communities), a diverse range of habitats can be combined in a relatively small area. This creates a broad range of wildlife viewing opportunities, especially if visitor access through boardwalks or trails is incorporated into the project. This range of access can include viewing blinds for bird watching, elevated boardwalks for access into the wetland, fishing docks in deep-water areas, and associated trails through upland areas adjacent to the wetland.
Planning for visitor use requires an understanding of the type of wildlife that will use the wetland, the anticipated degree of access to the wetland, and the programming or educational goals associated with the project. For instance, wetlands that support populations of alligators or other potentially dangerous species will require boardwalks that are located at least 3 feet above the water level with a secure railing system. Educational programs may have their own specific needs, ranging from signage to a good location to gather visiting students. Fishing docks can be constructed over deep-water areas that provide good fish habitat (old pipes or brush piles can be used to increase fish habitat in these areas). The key is to identify the project goals for visitor use up-front and then design the needed infrastructure into the project to support these goals.
Wetlands have a lot to offer as a stormwater BMP. Through the storage and gradual release of water, they can reduce flood damage to downstream properties. Within the wetland ecosystem, a variety of processes naturally occur that treat runoff and reduce contaminants. Wetlands provide habitat for key wildlife species, and through the use of creative design, a diverse range of habitat areas can be combined into a small area. Finally, since wetlands are attractive to both people and wildlife, they can support a broad range of visitor uses through trails, boardwalks, and other design features.
The natural "kidneys" of our landscape, the ability of wetlands to store water and gradually release it, reducing flood damage, is well documented. Inside the wetland, complex assemblages of plants and microbes act to purify the water as it flows through the system. Wetlands offer protection from predators for many kinds of fish, amphibians, and reptiles, and they are an important link in the life chain of hundreds of species of migratory birds. Because of their abundant wildlife habitat, they offer the chance for people to "get away" and experience nature, even in urban environments.
- United States Army Corps of Engineers (1987) Wetlands Delineation Manual. Wetlands Research Program Technical Report Y-87-1 (online edition). Washington D.C., U.S. Army Corps of Engineers. ( www.usace.army.mil/inet/functions/cw/cecwo/reg/wlman87.pdf)
- Shaw D., Schmidt R. (2003) Plants for Stormwater Design -- Species Selection for the Upper Midwest. St. Paul, Minnesota, Minnesota Pollution Control Agency.
- Wallace S.D., Knight R.L. (2006) Small-Scale Constructed Wetland Treatment Systems -- Feasibility, Design Criteria, and O&M Requirements. Alexandria, Virginia, Water Environment Research Foundation.
This article originally appeared in the 04/01/2006 issue of Environmental Protection. | <urn:uuid:0cceb033-26cc-4928-9025-40aa70c09135> | 3.25 | 1,944 | Knowledge Article | Science & Tech. | 33.187785 |
Published: October 31, 2009 3:00 a.m.
a compendium of research findings from Harper’s magazine:
•As honeybees continued to vanish from their hives, researchers supported by the National Honey Board pointed to pesticide accumulation in beeswax as a contributing factor in Colony Collapse Disorder. The researchers, who also found that beeswax loses half its accumulated mite-killing pesticides when subjected to Cobalt 60 gamma radiation, suggested that beekeepers change their honeycombs more often.
•Scottish beekeepers reported the appearance of American Foul Brood (which, unlike European Foul Brood, is incurable), and Cape honeybees breached the Capensis Line, which South Africa’s government maintains to prevent the spread of AFB to African honeybees.
•Bee inbreeding was rising as populations shrank, leading to freak male bees with excessive chromosomes, lower fertility and bad work habits.
•In Britain, where the countryside was plagued by bee thefts, authorities planned to reintroduce, from New Zealand, the locally extinct short-haired bumblebee; U.S. entomologists hoped to offset honeybee declines by promoting the solitary blue orchard bee, which can live in Styrofoam.
•It was discovered that America once had its own native honeybee, Apis nearctica.
•Scientists found that forcing forager bees to undertake nursing tasks makes them less likely to grow stupid with age, that baby bees’ immune systems are less active if their hives are coated in antimicrobial bee resin, that male orchid bees stick out their legs to remain stable in high winds, and that bumblebees stay aloft through brute force.
•Invasive wasps were eating pheasants in Hawaii. “You see them flying with their balls of meat,” said an entomologist of the wasps. “If you have something that can fight back, like a honeybee, then they go straight for the head.”
•Elephants can be kept at bay by barriers built of beehives. | <urn:uuid:6c6cb009-a526-47c5-88fa-4e36fbf20592> | 2.796875 | 437 | Comment Section | Science & Tech. | 40.775092 |
Michael Fowler 10/29/07
As a warm up to analyzing how a wave function transforms under rotation, we review the effect of linear translation on a single particle wave function $\psi(x)$. We have already seen an example of this: the coherent states of a simple harmonic oscillator discussed earlier were (at t = 0) identical to the ground state except that they were centered at some point displaced from the origin. In fact, the operator creating such a state from the ground state is a translation operator.
The translation operator $T(a)$ is defined as that operator which, when it acts on a wave function ket, gives the ket corresponding to that wave function moved over by $a$, that is,
$$T(a)\,|\psi(x)\rangle = |\psi(x-a)\rangle,$$
so, for example, if $\psi(x)$ is a wave function centered at the origin, $T(a)$ moves it to be centered at the point $a$.
We have written the wave function as a ket here to emphasize the parallels between this operation and some later ones, but it is simpler at this point to just work with the wave function as a function, so we will drop the ket bracket for now. The form of $T(a)$ as an operator on a function is made evident by rewriting the Taylor series in operator form:
$$\psi(x-a) = \psi(x) - a\frac{d\psi}{dx} + \frac{a^2}{2!}\frac{d^2\psi}{dx^2} - \cdots = e^{-a\,d/dx}\,\psi(x).$$
Now for the quantum connection: the differential operator appearing in the exponential is in quantum mechanics proportional to the momentum operator ($\hat{p} = -i\hbar\,d/dx$), so the translation operator
$$T(a) = e^{-ia\hat{p}/\hbar}.$$
An important special case is that of an infinitesimal translation,
$$T(\varepsilon) = e^{-i\varepsilon\hat{p}/\hbar} \cong 1 - \frac{i\varepsilon\hat{p}}{\hbar}.$$
The momentum operator is said to be the generator of the translation.
(A note on possibly confusing notation: Shankar writes (page 281) $T(\varepsilon)|x\rangle = |x+\varepsilon\rangle$. Here $|x\rangle$ denotes a delta-function type wave function centered at $x$. It might be better if he had written the statement in terms of a general ket; then we would see right away that this translates into the wave function transformation $\psi(x) \to \psi(x-\varepsilon)$, the sign of $\varepsilon$ now obviously consistent with our usage above.)
It is important to be clear about whether the system is being translated by $a$, as we have done above, or whether, alternately, the coordinate axes are being translated by $a$; the latter would result in the opposite change in the wave function. Translating the coordinate axes, along with the apparatus and any external fields, by $-a$ relative to the wave function would of course give the same physics as translating the wave function by $+a$. In fact, these two equivalent operations are analogous to the time development of a wave function being described either by a Schrödinger picture, in which the bras and kets change in time, but not the operators, or the Heisenberg picture, in which the operators develop but the bras and kets do not change. To pursue this analogy a little further, in the "Heisenberg" case
$$\hat{x} \to T^{\dagger}(a)\,\hat{x}\,T(a) = e^{ia\hat{p}/\hbar}\,\hat{x}\,e^{-ia\hat{p}/\hbar} = \hat{x} + a,$$
and $\hat{p}$ is unchanged since it commutes with the translation operator. So there are two possible ways to deal with translations: transform the bras and kets, or transform the operators. We shall almost always leave the operators alone, and transform the bras and kets.
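A quick numerical illustration (added; not part of the original notes): with $\hbar = 1$, applying $e^{-ia\hat{p}}$ to a Gaussian wave packet as a multiplication in momentum space reproduces $\psi(x-a)$:

import numpy as np

x = np.linspace(-20, 20, 1024, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-x**2)                  # packet centered at the origin

a = 3.0                              # translation distance
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)   # momentum grid (p = k, hbar = 1)
# T(a) = exp(-i a p) acts as multiplication by exp(-i a k) in momentum space:
shifted = np.fft.ifft(np.exp(-1j * a * k) * np.fft.fft(psi))

print(np.max(np.abs(shifted - np.exp(-(x - a)**2))))   # ~1e-12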
We have established that the momentum operator is the generator of spatial translations (the generalization to three dimensions is trivial). We know from earlier work that the Hamiltonian is the generator of time translations, by which we mean
$$|\psi(t)\rangle = e^{-iHt/\hbar}\,|\psi(0)\rangle.$$
It is tempting to conclude that the angular momentum must be the operator generating rotations of the system, and, in fact, it is easy to check that this is correct. Let us consider an infinitesimal rotation $\delta\vec{\theta}$ about some axis through the origin (the infinitesimal vector being in the direction of the axis). A wavefunction $\psi(\vec{r})$ initially localized at $\vec{r}_0$ will shift to be localized at $\vec{r}_0 + \delta\vec{r}_0$, where $\delta\vec{r}_0 = \delta\vec{\theta}\times\vec{r}_0$. So, how does a wave function transform under this small rotation? Just as for the translation case, $\psi(\vec{r}) \to \psi(\vec{r}-\delta\vec{r})$, with $\delta\vec{r} = \delta\vec{\theta}\times\vec{r}$. If you don't understand the minus sign, reread the discussion on translations and the sign of $\varepsilon$.
Then
$$\psi(\vec{r}-\delta\vec{r}) = \psi(\vec{r}) - (\delta\vec{\theta}\times\vec{r})\cdot\vec{\nabla}\psi(\vec{r}) = \left(1 - \frac{i}{\hbar}\,\delta\vec{\theta}\cdot(\vec{r}\times\hat{\vec{p}})\right)\psi(\vec{r})$$
to first order in the infinitesimal quantity, so the rotation operator is
$$R(\delta\vec{\theta}) = 1 - \frac{i}{\hbar}\,\delta\vec{\theta}\cdot\hat{\vec{L}}.$$
If we write this as $R(\delta\vec{\theta}) = e^{-i\,\delta\vec{\theta}\cdot\hat{\vec{L}}/\hbar}$, valid to first order,
it is clear that a finite rotation is given by multiplying together a large number of these operators, which just amounts to replacing $\delta\vec{\theta}$ by $\vec{\theta}$ in the exponential. Another way of going from the infinitesimal rotation to a full rotation is to use the identity
$$e^{A} = \lim_{N\to\infty}\left(1 + \frac{A}{N}\right)^{N},$$
which is clearly valid even if A is an operator.
We have therefore established that the orbital angular momentum operator $\hat{\vec{L}}$ is the generator of spatial rotations, by which we mean that if we rotate our apparatus, and the wave function with it, the appropriately transformed wave function is generated by the action of $R(\delta\vec{\theta})$ on the original wave function. It is perhaps worth giving an explicit example: suppose we rotate the system, and therefore the wave function, through an infinitesimal angle $\delta\theta$ about the z-axis. Denote the rotated wave function by $\psi_{\rm rot}(x,y)$. Then
$$\psi_{\rm rot}(x,y) = \left(1 - \frac{i}{\hbar}\,\delta\theta\,\hat{L}_z\right)\psi(x,y) = \left(1 - \delta\theta\left(x\frac{\partial}{\partial y} - y\frac{\partial}{\partial x}\right)\right)\psi(x,y) = \psi(x + \delta\theta\,y,\; y - \delta\theta\,x).$$
That is to say, the value of the new wave function at (x,y) is the value of the old wave function at the point which was rotated into (x,y).
However, it has long been known that in quantum mechanics, orbital angular momentum is not the whole story. Particles like the electron are found experimentally to have an internal angular momentum, called spin. In contrast to the spin of an ordinary macroscopic object like a spinning top, the electron’s spin is not just the sum of orbital angular momenta of internal parts, and any attempt to understand it in that way leads to contradictions.
To take account of this new kind of angular momentum, we generalize the orbital angular momentum $\hat{\vec{L}}$ to an operator $\hat{\vec{J}}$ which is defined as the generator of rotations on any wave function, including possible spin components, so
$$R(\delta\vec{\theta}) = 1 - \frac{i}{\hbar}\,\delta\vec{\theta}\cdot\hat{\vec{J}}.$$
This is of course identical to the equation we found for $\hat{\vec{L}}$, but there we derived it from the quantum angular momentum operator, including the momentum components written as differentials. But up to this point $\psi$ has just been a complex valued function of position. From now on, the wave function at a point can have several components, so it is in some vector space, and the rotation operator will operate in this space as well as being a differential operator with respect to position. For example, the wave function could be a vector at each point, so rotation of the system could rotate this vector as well as moving it to a different $\vec{r}$.
To summarize: $\psi$ is in general an n-component function at each point in space, $R(\delta\vec{\theta})$ is an $n\times n$ matrix in the component space, and the above equation is the definition of $\hat{\vec{J}}$. Starting from this definition, we will find $\hat{\vec{J}}$'s properties.
The first point to make is that in contrast to translations, rotations do not commute even for a classical system. Rotating a book through $\pi/2$ first about the z-axis then about the x-axis leaves it in a different orientation from that obtained by rotating from the same starting position first about the x-axis then about the z-axis. Even small rotations do not commute, although the commutator is second order. Since the R-operators are representations of rotations, they will reflect this commutativity structure, and we can see just how they do that by considering ordinary classical rotations of a real vector in three-dimensional space.
The matrices rotating a vector by $\theta$ about the x, y and z axes are
$$R_x(\theta)=\begin{pmatrix}1&0&0\\0&\cos\theta&-\sin\theta\\0&\sin\theta&\cos\theta\end{pmatrix},\quad R_y(\theta)=\begin{pmatrix}\cos\theta&0&\sin\theta\\0&1&0\\-\sin\theta&0&\cos\theta\end{pmatrix},\quad R_z(\theta)=\begin{pmatrix}\cos\theta&-\sin\theta&0\\\sin\theta&\cos\theta&0\\0&0&1\end{pmatrix}.$$
In the limit of rotations about infinitesimal angles $\varepsilon$ (ignoring higher order terms),
$$R_x(\varepsilon)\cong\begin{pmatrix}1&0&0\\0&1&-\varepsilon\\0&\varepsilon&1\end{pmatrix},\qquad R_y(\varepsilon)\cong\begin{pmatrix}1&0&\varepsilon\\0&1&0\\-\varepsilon&0&1\end{pmatrix},\qquad R_z(\varepsilon)\cong\begin{pmatrix}1&-\varepsilon&0\\\varepsilon&1&0\\0&0&1\end{pmatrix}.$$
It is easy to check that
$$R_x(\varepsilon)R_y(\varepsilon) - R_y(\varepsilon)R_x(\varepsilon) = R_z(\varepsilon^2) - I$$
to second order in $\varepsilon$.
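The identity is easy to verify numerically with the exact rotation matrices (an added check, not in the original notes):

import numpy as np

def Rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

eps = 1e-4
lhs = Rx(eps) @ Ry(eps) - Ry(eps) @ Rx(eps)
rhs = Rz(eps**2) - np.eye(3)
print(np.max(np.abs(lhs - rhs)))   # vanishes faster than eps**2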
The rotation operators on quantum mechanical kets must, like all rotations, follow this same pattern, that is, we must have
$$U[R_x(\varepsilon)]\,U[R_y(\varepsilon)] - U[R_y(\varepsilon)]\,U[R_x(\varepsilon)] = U[R_z(\varepsilon^2)] - I,$$
where we have used the definition of the infinitesimal rotation operator on kets, $U[R(\delta\vec{\theta})] = 1 - \frac{i}{\hbar}\,\delta\vec{\theta}\cdot\hat{\vec{J}}$. The zeroth and first-order terms in $\varepsilon$ all cancel; the second-order term gives $[J_x, J_y] = i\hbar J_z$. The general statement is:
$$[J_i, J_j] = i\hbar\,\varepsilon_{ijk}\,J_k.$$
This is one of the most important formulas in quantum mechanics.
The commutation formula $[J_x, J_y] = i\hbar J_z$, which is, after all, a straightforward extension of the result for ordinary classical rotations, has surprisingly far-reaching consequences: it leads directly to the directional quantization of spin and angular momentum observed in atoms subject to a magnetic field.
It is by now very clear that in quantum mechanical systems such as atoms the total angular momentum, and also the component of angular momentum in a given direction, can only take certain values. Let us try to construct a basis set of angular momentum states for a given system: a complete set of kets corresponding to all allowed values of the angular momentum. Now, angular momentum is a vector quantity: it has magnitude and direction. Let's begin with the magnitude; the natural parameter is the length squared:
$$J^2 = J_x^2 + J_y^2 + J_z^2.$$
Now we must specify direction—but here we run into a problem. $J_x$, $J_y$ and $J_z$ are all mutually non-commuting, so we cannot construct a set of common eigenkets of any two of them, which we would need for a precise specification of direction. They do all commute with $J^2$, since it is spherically symmetric and therefore cannot be affected by any rotation (and it's easy to check this commutation explicitly).
The bottom line, then, is that in attempting to construct eigenkets describing the different possible angular momentum states of a quantum system, the best we can do is to find the common eigenkets of $J^2$ and one component, say $J_z$. The commutation relations do not allow us to be more precise about direction, analogous to the Uncertainty Principle for position and momentum, which also comes from noncommutativity of the relevant operators.
We conclude that the appropriate angular momentum basis is the set of common eigenkets of the commuting Hermitian operators $J^2$, $J_z$:
$$J^2|a,b\rangle = a\,|a,b\rangle,\qquad J_z|a,b\rangle = b\,|a,b\rangle.$$
Our next task is to find the allowed values of a and b.
The sets of allowed eigenvalues $a$, $b$ can be found using the "ladder operator" trick previously discussed for the simple harmonic oscillator. It turns out that the operators
$$J_\pm = J_x \pm iJ_y$$
are closely analogous to the simple harmonic oscillator raising and lowering operators $a^\dagger$ and $a$.
They have commutation relations with $J_z$:
$$[J_z, J_\pm] = \pm\hbar J_\pm,$$
and they of course commute with $J^2$, as do $J_z$, $J_x$ and $J_y$.
Therefore, operating with $J_\pm$ on $|a,b\rangle$ cannot affect the value of $a$. But they do change the value of $b$:
$$J_z J_\pm|a,b\rangle = \left(J_\pm J_z \pm \hbar J_\pm\right)|a,b\rangle = (b\pm\hbar)\,J_\pm|a,b\rangle,$$
so if $|a,b\rangle$ is an eigenket of $J_z$ with eigenvalue $b$, $J_\pm|a,b\rangle$ is either zero or an eigenket of $J_z$ with eigenvalue $b\pm\hbar$, that is, $J_\pm|a,b\rangle = C_\pm(a,b)\,|a,b\pm\hbar\rangle$, where $C_\pm(a,b)$ is a normalization constant, taking the initial $|a,b\rangle$ to be normalized. Just as with the simple harmonic oscillator, we have to find these normalization constants in order to compute matrix elements. All the physics is in the matrix elements.
The squared norm of $J_\pm|a,b\rangle$ is
$$|C_\pm(a,b)|^2 = \langle a,b|J_\mp J_\pm|a,b\rangle = \langle a,b|\left(J^2 - J_z^2 \mp \hbar J_z\right)|a,b\rangle = a - b^2 \mp \hbar b.$$
Now $a$, being the eigenvalue of a sum of squares of Hermitian operators, is necessarily nonnegative, and $b$ is real. Hence for a given $a$, $b$ is bounded: there must be a $b_{\max}$ and a (negative or zero) $b_{\min}$. But this must mean that
$$J_+|a,b_{\max}\rangle = 0,\qquad J_-|a,b_{\min}\rangle = 0.$$
Note that for a given $a$, $b_{\max}$ and $b_{\min}$ are determined uniquely—there cannot be two kets with the same $a$ but different $b$ annihilated by $J_+$. It also follows immediately that $a = b_{\max}(b_{\max}+\hbar) = b_{\min}(b_{\min}-\hbar)$, so $b_{\min} = -b_{\max}$. Furthermore, we know that if we keep operating on $|a,b_{\min}\rangle$ with $J_+$, we generate a sequence of kets with $J_z$ eigenvalues $b_{\min}+\hbar,\ b_{\min}+2\hbar,\ b_{\min}+3\hbar,\dots$ This series must terminate, and the only possible way for that to happen is for $b_{\min}+n\hbar$ to be equal to $b_{\max}$ with $n$ an integer, from which it follows that $b_{\max}$ is either an integer or half an odd integer times $\hbar$.
At this point, we switch to the standard notation. We have established that the eigenvalues of $J_z$ form a finite ladder, with spacing $\hbar$. We write them as $b = m\hbar$, and $j$ is used to denote the maximum value of $m$, so the eigenvalue of $J^2$ is $a = \hbar^2 j(j+1)$. Both $j$ and $m$ will be integers or half odd integers, but the spacing of the ladder of $m$ values is always unity. Although we have been writing $|a,b\rangle$ with $a = \hbar^2 j(j+1)$, $b = m\hbar$, we shall henceforth follow convention and write $|j,m\rangle$.
The operators $J^2$, $J_z$ have a common set of orthonormal eigenkets $|j,m\rangle$,
$$J^2|j,m\rangle = \hbar^2 j(j+1)\,|j,m\rangle,\qquad J_z|j,m\rangle = \hbar m\,|j,m\rangle,$$
where j, m are integers or half integers. The allowed quantum numbers m form a ladder with step spacing unity, the maximum value of m is j, the minimum value is -j.
It is now straightforward to compute the normalization factors needed to find matrix elements:
$$|C_\pm(j,m)|^2 = \hbar^2\left(j(j+1) - m(m\pm 1)\right),$$
and $J_\pm|j,m\rangle = C_\pm(j,m)\,|j,m\pm 1\rangle$, so
$$J_\pm|j,m\rangle = \hbar\sqrt{j(j+1)-m(m\pm 1)}\;|j,m\pm 1\rangle.$$
With these formulas, and the base set of normalized eigenkets $|j,m\rangle$, we are in a position to construct explicit matrix representations of the angular momentum algebra for any integer or half integer value of angular momentum $j$.
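For instance, here is a short NumPy sketch (added; conventions assumed: basis ordered $m = j, j-1, \dots, -j$ and $\hbar = 1$) that builds the matrices from the ladder formula and checks the commutation relation:

import numpy as np

def angular_momentum_matrices(j):
    """(Jx, Jy, Jz) in the |j, m> basis ordered m = j, j-1, ..., -j (hbar = 1)."""
    m = np.arange(j, -j - 1, -1)
    Jz = np.diag(m).astype(complex)
    # <j, m+1| J+ |j, m> = sqrt(j(j+1) - m(m+1))
    c = np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1))
    Jplus = np.diag(c, k=1).astype(complex)
    Jminus = Jplus.conj().T
    return (Jplus + Jminus) / 2, (Jplus - Jminus) / (2 * 1j), Jz

for j in (0.5, 1, 1.5, 2):
    Jx, Jy, Jz = angular_momentum_matrices(j)
    assert np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz)        # [Jx, Jy] = i Jz
    J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz                      # hbar^2 j(j+1) identity
    assert np.allclose(J2, j * (j + 1) * np.eye(int(2 * j + 1)))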
Historical note: the use of m to denote the component of angular momentum in one direction came about because a Bohr-type electron in orbit is a current loop, with a magnetic moment parallel to its angular momentum, so the m measured the component of magnetic moment in a chosen direction, usually along an external magnetic field, and m is often called the magnetic quantum number. | <urn:uuid:c324d6ac-1799-4759-9cfd-a10718ffbbca> | 3.109375 | 2,579 | Academic Writing | Science & Tech. | 32.05525 |
National Geographic has an article, "Taming the Wild" by Evan Ratliff, on the now famous under-the-radar experiment by a renegade Soviet geneticist, who had previously run afoul of the Lysenkoists, to breed a domesticated silver fox that would be as amiable as a dog.
In fact, says Anna Kukekova, a Cornell researcher who studies the foxes, "they remind me a lot of golden retrievers, who are basically not aware that there are good people, bad people, people that they have met before, and those they haven't." These foxes treat any human as a potential companion, a behavior that is the product of arguably the most extraordinary breeding experiment ever conducted. ...
One number I've never seen in accounts of this experiment is what percentage of these domesticated silver foxes breed true. Do 99% of new kits grow up to act like Labradors or do a sizable percentage have to be shipped off to a fur farm?
Miraculously, Belyaev had compressed thousands of years of domestication into a few years. But he wasn't just looking to prove he could create friendly foxes. He had a hunch that he could use them to unlock domestication's molecular mysteries. Domesticated animals are known to share a common set of characteristics, a fact documented by Darwin in The Variation of Animals and Plants Under Domestication. They tend to be smaller, with floppier ears and curlier tails than their untamed progenitors. Such traits tend to make animals appear appealingly juvenile to humans. Their coats are sometimes spotted—piebald, in scientific terminology—while their wild ancestors' coats are solid. These and other traits, sometimes referred to as the domestication phenotype, exist in varying degrees across a remarkably wide range of species, from dogs, pigs, and cows to some nonmammalians like chickens, and even a few fish.
Belyaev suspected that as the foxes became domesticated, they too might begin to show aspects of a domestication phenotype. He was right again: Selecting which foxes to breed based solely on how well they got along with humans seemed to alter their physical appearance along with their dispositions. After only nine generations, the researchers recorded fox kits born with floppier ears. Piebald patterns appeared on their coats. By this time the foxes were already whining and wagging their tails in response to a human presence, behaviors never seen in wild foxes. ...The Soviet biology establishment of the mid-20th century, led under Joseph Stalin by the infamous agronomist Trofim Lysenko, outlawed research into Mendelian genetics. But Dmitry Belyaev and his older brother Nikolay, both biologists, were intrigued by the possibilities of the science. "It was his brother's influence that caused him to have this special interest in genetics," Trut says of her mentor. "But these were the times when genetics was considered fake science." When the brothers flouted the prohibition and continued to conduct Mendelian-based studies, Belyaev lost his job as director of the Department of Fur Breeding. Nikolay's fate was more tragic: He was exiled to a labor camp, where he eventually died. ...Not all domestication researchers believe that Belyaev's silver foxes will unlock the secrets of domestication. Uppsala University's Leif Andersson, who studies the genetics of farm animals—and who lauds Belyaev and his fellow researchers' contribution to the field—believes that the relationship between tameness and the domestication phenotype may prove to be less direct than the fox study implies. "You select on one trait and you see changes in other traits," Andersson says, but "there has never been proven a causal relationship."
To understand how Andersson's view differs from that of the researchers in Novosibirsk, it's helpful to try and imagine how the two theories might have played out historically. Both would agree that the animals most likely to be domesticated were those predisposed to human contact. Some mutation, or collection of mutations, in their DNA caused them to be less afraid of humans, and thus willing to live closer to them. Perhaps they fed off human refuse or benefited from inadvertent shelter from predators. At some point humans saw some benefit in return from these animal neighbors and began helping that process along, actively selecting for the most amenable ones and breeding them. "At the beginning of the domestication process, only natural selection was at work," as Trut puts it. "Down the road, this natural selection was replaced with artificial selection."
Where Andersson differs is in what happened next. If Belyaev and Trut are correct, the self-selection and then human selection of less fearful animals carried with it other components of the domestication phenotype, such as curly tails and smaller bodies. In Andersson's view, that theory understates the role humans played in selecting those other traits. Sure, curiosity and lack of fear may have started the process, but once animals were under human control, they were also protected from wild predators. Random mutations for physical traits that might quickly have been weeded out in the wild, like white spots on a dark coat, were allowed to persist. Then they flourished, in part because, well, people liked them. "It wasn't that the animals behaved differently," as Andersson says, "it's just that they were cute." ...
These perspectives might also apply to the evolution of phenotypical racial differences. Some differences in looks might have just been unselected for side effects of traits that were selected for by the environment. Or, as Darwin suggested, sexual selection or, among children, what Judith Rich Harris calls selection for cuteness might have played major roles.
But delving into the DNA of our closest companions can deliver some tantalizing insights. In 2009 UCLA biologist Robert Wayne led a study comparing the wolf and dog genomes. The finding that made headlines was that dogs originated from gray wolves not in East Asia, as other researchers had argued, but in the Middle East. Less noticed by the press was a brief aside in which Wayne and his colleagues identified a particular short DNA sequence, located near a gene called WBSCR17, that was very different in the two species. That region of the genome, they suggested, could be a potential target for "genes that are important in the early domestication of dogs." In humans, the researchers went on to note, WBSCR17 is at least partly responsible for a rare genetic disorder called Williams-Beuren syndrome. Williams-Beuren is characterized by elfin features, a shortened nose bridge, and "exceptional gregariousness"—its sufferers are often overly friendly and trusting of strangers. ....
"They didn't select for a smarter fox but for a nice fox," says Hare. "But they ended up getting a smart fox." This research also has implications for the origins of human social behavior. "Are we domesticated in the sense of dogs? No. But I am comfortable saying that the first thing that has to happen to get a human from an apelike ancestor is a substantial increase in tolerance toward one another. There had to be a change in our social system."
I'm not sure that the friendliest dogs are the smartest dogs. If Golden Retrievers don't distinguish between humans in terms of their intentions, which keeps them from biting your kid's friends but also makes them lousy guard dogs, that doesn't seem too smart.
There is also much else in the article, such as on nature-nurture adoption experiments with silver foxes. The keepers have also been breeding an Evil Twin breed of extremely nasty foxes. What happens when Nasty Fox is raised by a Nice Fox and vice-versa? | <urn:uuid:78f8beff-32a9-4149-9674-7a19c5846185> | 3.28125 | 1,615 | Personal Blog | Science & Tech. | 44.250835 |
Temperatures of about 20,000 K with ion densities ranging from 10^17 to 10^18 cm^-3 have been produced in helium by means of explosive-driven shocks. Helium was used because of its relatively simple structure, but this choice ruled out the usual shock methods because of the gas's high sound speed and high ionization potential. Shock waves having sufficient strength and planarity were obtained by reflecting initially strong shock waves, produced by high explosives, against a glass plate. Equilibrium calculations based on smear-camera velocities of accurately plane shocks were used to determine the state of the gas behind the reflected shock wave.
The light emitted consisted of a continuum on which were superimposed shifted and broadened lines of the normal helium spectrum and forbidden lines as well. Time‐resolved spectrograms showed evidence of a measurable relaxation time at the shock front but no evidence of significant radiative cooling of the gas behind the shock.
Under the conditions of these experiments, it was demonstrated that a quantitative prediction of the behavior of the helium states with principal quantum numbers 2, 3, and 4 requires consideration of the effects of electrons as well as ions. | <urn:uuid:40426c7a-ebf0-4326-94cc-47175818991b> | 3.203125 | 226 | Academic Writing | Science & Tech. | 23.01375 |
What orbits the Earth at 17,500 miles per hour, is 360 feet wide, 260 feet long and has a crew of 3? It is the International Space Station, also known as the I.S.S. It is going to be finished in the year 2006. The countries that are putting it together are the United States, Russia, Japan, Canada, Brazil, Austria, Belgium, Denmark, Finland, France, Germany, Great Britain, Ireland, Italy, the Netherlands, Norway, Spain, Sweden and Switzerland. There will be 100 different pieces sent up to space to create the I.S.S.
What You Want to Know About The I.S.S.
When it is completed, the I.S.S. will be the size of two football fields. You will be able to see it from Earth without a telescope. The I.S.S. will be much bigger than previous space stations. Other space stations, like Mir, were cramped. You also had to eat dehydrated food (food without water). The I.S.S. will have a refrigerator with fresh foods.
The I.S.S. is like living on Earth except for one thing: you are weightless! When you are sleeping, you have to be strapped to the wall or else you will float out of your bed.
The first crew went up in orbit on October 31, 2000. The size of the crew was decided by the capacity of the escape pod, which holds three people.
Did you know that every time we breathe or exercise we add moisture to the air? This extra moisture has to be removed from the air on the ISS so the moisture doesn't collect on the ISS equipment. This moisture will be recycled.
The crew will recycle all of the water on board, from the moisture in the air to the crew's urine and wash water. I know that sounds really gross, but the water is going to be purified and will be cleaner than the water we drink here on Earth. There are 3 steps to purifying the water:
The crew will be up in space for a long time, but when they come back, they won't be able to walk right away. There is no gravity in space so you don't use your legs that much, and when you go to use them it's just like trying to run right after you rollerblade.
The people that are training to get on the space station are going to train hard. They will have to train for 2 years. Some of that training will be in the Canadian forest in the middle of winter to prepare them for problems they might run into up in space.
What are Some Modules Up in Space?
In 1999,the first U.S. built station component, the Unity connecting module, was moved to the launch pad. It was loaded onto the Space Shuttle, Endeavour.
More than six major components are in the processing facility. At the end of the year 2000, more than 500,000 pounds of U.S. and international station equipment was completed. That's the weight of 250 pick-up trucks.
Unity is a six-sided connecting module to which all future U.S. station modules will attach. Unity will serve as a passageway to various parts of the station. Attached to Unity are two adapters: one to serve as a permanent connection to the Russian station and another to serve as a shuttle docking port.
What is Going to Put Some Of the International Space Station Together?
Some of the I.S.S. is going to be put together from the outside. One of the dangers is that you could be hit by a micrometeoroid. Some micrometeoroids are the size of a grain of sand, but others can be the size of basketballs and go right through you. The countries that are building the I.S.S. have built robots to put together some of the I.S.S. so that no one gets hurt. Remember, the I.S.S. is made up of 100 different modules so it will take a long time to put it together.
Space: Everything You Want to Know and Beyond. Last Visited: December, 2001.
NASA.<http://www.nasa.gov> Last Visited: January, 2002.
Space in the Spotlight Novi Meadows Elementary 2002
All pictures courtesy of NASA unless otherwise noted | <urn:uuid:991becfd-9c85-4bda-9c57-df0359bc1e99> | 3.109375 | 913 | Knowledge Article | Science & Tech. | 79.164163 |
If I have a triangle with the coordinates A(0,0), B(30,0) and C(30,40)
Is the centre of gravity 1/3 of the base and 1/3 of the height?
would this make it (10, 40/3) ??
The area of the triangle is (1/2)(30)(40) = 600.
You now need a smaller, similar triangle whose area is half of this, i.e. 300.
The ratio of the main triangle's height to the base is 40/30 = 4/3.
You need to have (1/2)(x)(4x/3) = 300, or x^2 = 450, so x = 21.2132.
The cut dividing the triangle into two equal areas will occur at the x coordinate equal to 21.2132.
You will need to do the appropriate calculations for the y coordinate.
You want to cut the large triangle so that the upper or smaller triangle is half the area of the larger triangle. | <urn:uuid:d2de28d5-af2d-45a3-8ec0-c97cb3836c78> | 3.109375 | 155 | Q&A Forum | Science & Tech. | 87.208934 |
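A quick numerical check (added): the centroid is just the average of the three vertices, which gives (20, 40/3) rather than (10, 40/3), and the equal-area cut lands at 30/sqrt(2):

import numpy as np

A, B, C = np.array([0, 0]), np.array([30, 0]), np.array([30, 40])

print((A + B + C) / 3)    # [20.  13.333...]  -- the centroid
print(30 / np.sqrt(2))    # 21.2132...        -- the equal-area cut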
Value Changes During Conversions
A conversion from a value type stores a copy of the source value in the destination of the conversion. However, this copy is not an exact image of the source value. The destination data type stores values differently, and even the value being represented might change, depending on the kind of conversion being performed.
Changes During Widening and Narrowing Conversions
Narrowing conversions change the destination copy of the source value, with potential information loss. For example, a fractional value is rounded when converted to an integral type, and a numeric type being converted to Boolean is reduced to either True or False.
Widening conversions preserve the source value but can change its representation. This happens if you convert from an integral type to Decimal, or from Char to String.
The original source value is not changed as a result of a conversion.
Changes During Reference Type Conversions
A conversion from a reference type copies only the pointer to the value. The value itself is neither copied nor changed in any way. The only thing that can change is the data type of the variable holding the pointer. In the following example, the data type is converted from the derived class to its base class, but the object that both variables now point to is unchanged.
' Assume class cSquare inherits from class cShape.
Dim shape As cShape
Dim square As cSquare = New cSquare
' The following statement performs a widening
' conversion from a derived class to its base class.
shape = square
Blue buttons are just one of the many small critters that live in the oceans. They are often mistaken for jellyfish or colored plastic when they wash up on beaches, but they are actually free-floating colonies of hydrozoa.
Blue buttons have two main parts: the central disk, which is about an inch across and yellow brown, is a hard flattened bubble that holds gas to keep the blue button floating. Attached to this disk are a type of bluish stinging polyp, which act as tentacles, although the blue button itself does not have a powerful sting—it can only cause minor skin irritation.
In the center of the disk, a larger central polyp acts as a mouth for food intake and waste removal for the entire blue button colony. Blue buttons eat live and dead small fish, eggs, and zooplankton.
The blue button cannot swim; it relies on drifting on currents and wind to move through the ocean.
Pretty cool, huh?
This video, captured by University of California Santa Cruz professor Giacomo Bernardi, shows an orange-dotted tuskfish (Choerodon anchorago) cracking open a clam by throwing it against a rock. Other fish from the wrasse family have also been observed using similar techniques to crack open clams. These include the blackspot tuskfish (Choerodon schoenleinii), yellowhead wrasse (Halichoeres garnoti), and a sixbar wrasse (Thalassoma hardwicke). Tool use among fish is not well-studied. For a fish to plan such an elaborate scheme (digging up the clam, finding a suitable rock to use as an anvil, and cracking the clam open) is really quite impressive. More research into the use of tools by these beautiful fish is clearly warranted.
In his short article, Dr. Bernardi describes how this behavior is exhibited in three genera of wrasses (the ancestral Choerodon, and the more derived Halichoeres and Thalassoma). Because the animals use similar movements of the head to toss the clams, he suggests that these behaviors may either have evolved independently in the species mentioned or may be a common trait that might be found in other wrasses.
Bernardi G. The Use of Tools by Wrasses (Labridae). Coral Reefs. September 20, 2011. | <urn:uuid:d98ea315-e619-4a80-9330-81452be2abf6> | 3.640625 | 288 | Personal Blog | Science & Tech. | 39.102717 |
Sand swimming is a specialized locomotion used by several species of lizards and snakes. The following was posted on the Physics Central website. Be sure to visit the website and view the videos that include a Sandfish (Scincus scincus, Family Scincidae) swimming through sand.
Swimming through sand: The secret of sandfish locomotion
Monday, December 27, 2010
We know how airplanes glide in the air and how submarines move through water, but we don't know much about how creatures "swim" through sand. 'Til now...
How an object's shape affects its generation of lift and drag in both the air and in water is well understood. Otherwise, we'd be misplacing submarines all the time. But how objects - animals in particular - create lift and drag in granular materials like sand is less well understood.
A couple of Ph.D. students and their professor have been taking a closer look at what happens when sand-dwelling creatures - like lizards, crabs, snakes and worms - dive below the surface.
Yang Ding and Nick Gravish, along with Daniel I. Goldman, their Georgia Tech professor of physics, have been studying the sandfish lizard, a popular sand-dwelling pet, to see how it maneuvers in its subterranean environment.
Goldman described the sandfish as a little lizard that lives in the desert in North Africa. When startled, it can burrow 10 cm beneath the surface in less than half a second. Its wedge-shaped head, which biologists believe gives the critter its lightning-quick burrowing ability, was the project's inspiration.
"We think the sandfish is the champion of rapid burial," Goldman said.
Another thing the trio noticed about the lizard, Ding said, is that its belly is really flat. "We thought that might have an effect," he said.
To test the theory on both the head shape and the belly, the team dragged three objects of different shapes through a container filled with tiny glass beads that acted as a sand analogue. They watched to see whether each object generated any lift - the force perpendicular to the direction of motion that "pushes" an object up.
The first was a cylinder. The team dragged it horizontally through the beads (if it were a Coke can, it would have been dragged from the dash in between the words "Coca" and "Cola") and measured the forces acting on it.
The cylinder experienced positive lift; it tended to rise within the beads, headed for the surface. A square rod was also dragged through the beads and it, too, rose towards the surface, but just barely. The third object was a half-cylinder. It experienced negative lift, sinking lower into the beads as it was dragged along.
Of the three objects, the half-cylinder most approximates the shape of the sandfish lizard's head. Since the lizard also experiences negative lift when it enters the sand, the lab test showed that the half-cylinder was a good starting point for modeling the lizard's head.
The researchers then dragged flat plates through the sand. The plates were given roughly the same angle of attack - or angle away from horizontal - as the leading edge of each of the objects. To mimic the cylinder, the first plate was dragged at a very small angle, almost parallel to the floor. Just as for the cylinder, the plate experienced positive lift.
The plate was then dragged forward at a 90 degree angle relative to the floor, and again, as with the cube-shaped rod, there was next to no lift. Then the plate was dragged at a wide angle, leaning back from the direction of motion like a lawn chair leans back from the surf at the beach. This time, as with the half-cylinder, there was negative lift.
These were exciting results for the researchers because they realized that they could break up the shape of any object into flat plates and sum them up in a computer model to see the forces acting on any object. In addition to showing lift, the models also helped them to understand how much drag, or force acting opposite the direction of motion, "tugging" on an object, was being produced.
"We found that we can basically understand the forces by decomposing them in flat plates," Gravish said. "You can build whatever object you want to see what forces it undergoes in granular materials."
A database of how objects respond when traveling through granular materials can be created simply by finding the sum of simple materials - the plates. Since there are no equations to describe locomotion in granular materials, the find was particularly exciting.
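The plate-superposition idea is easy to prototype. Below is an added sketch of that bookkeeping; the coefficient curves are illustrative placeholders, not the empirically fitted force laws from the drag experiments:

import numpy as np

# Placeholder lift/drag coefficient curves versus attack angle (radians).
# Real curves would be fitted to the glass-bead drag measurements.
Cl = lambda a: np.sin(2 * a)          # sign flips as the plate leans back
Cd = lambda a: 1.0 + np.sin(a) ** 2   # drag grows toward broadside

def net_forces(plates):
    """Sum per-plate lift and drag over a faceted body.

    plates: list of (attack_angle_rad, area) pairs approximating the object.
    """
    lift = sum(Cl(a) * s for a, s in plates)
    drag = sum(Cd(a) * s for a, s in plates)
    return lift, drag

# A half-cylinder-like body: shallow front facet, vertical middle,
# back-leaning rear facet (angles and areas are made up):
plates = [(np.radians(20), 0.4), (np.radians(90), 0.2), (np.radians(140), 0.4)]
print(net_forces(plates))   # the rear facet's negative lift pulls the total down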
"What you really want to do in all this business is figure out the principles of what's going on," Goldman said. The results of this research have opened the door for the physicists to do just that.
On an earlier research project, Goldman's CRAB Lab used high-speed x-ray imaging to observe the lizard's movement when submerged. They found that it doesn't use its legs when swimming through sand, instead tucking them by its side and slithering like a snake.
Using the data garnered from watching lizards swim and the new lift and drag research, the CRAB Lab got down to serious business and built a sandfish lizard robot they hope to debut at the 2011 International Conference on Robotics and Automation. They envision creating a rubble-swimmer that could aid with search-and-rescue missions after disasters like the earthquake in Haiti or the 9/11 collapse of the Twin Towers.
Ding, Gravish and Goldman's paper, "Drag induced lift in granular media," is due to appear in Physical Review Letters Dec. 31.
Posted by Echo Romeo | <urn:uuid:2ac9a017-4387-4eb6-80fa-7d4b1023efad> | 3.578125 | 1,175 | Personal Blog | Science & Tech. | 55.897858 |
Maxwell's equations are
$$\nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t},\qquad \nabla\times\mathbf{H} = \mathbf{J} + \frac{\partial\mathbf{D}}{\partial t},\qquad \nabla\cdot\mathbf{D} = \rho,\qquad \nabla\cdot\mathbf{B} = 0,$$
where the vectors are defined in the usual way. Also, $\mathbf{D} = \varepsilon\mathbf{E}$ and $\mathbf{B} = \mu\mathbf{H}$.
Taking a Fourier transform with respect to the time variable (fields varying as $e^{-i\omega t}$, so that $\partial/\partial t \to -i\omega$), we have $\nabla\times\mathbf{E} = i\omega\mathbf{B}$ and $\nabla\times\mathbf{H} = \mathbf{J} - i\omega\mathbf{D}$.
See "Electromagnetics" for more.
The integral form is also commonly given.
Energy of the Field
Power Flow into a Volume
The Poynting vector gives the direction of power flow. The Poynting vector is given by
$$\mathbf{S} = \mathbf{E}\times\mathbf{H}.$$
It has the units, watts per square meter.
The power flow through a differential area is given by $\mathbf{S}\cdot d\mathbf{A}$. The power flow into a volume is then given by summing up the power flow through the area enclosing that volume, thus:
$$P_{\rm in} = -\oint_{\partial V}\mathbf{S}\cdot d\mathbf{A} = -\int_V \nabla\cdot\mathbf{S}\;dV.$$
Expanding by Maxwell's equations (and the identity $\nabla\cdot(\mathbf{E}\times\mathbf{H}) = \mathbf{H}\cdot(\nabla\times\mathbf{E}) - \mathbf{E}\cdot(\nabla\times\mathbf{H})$), with $\mathbf{D} = \varepsilon_0\mathbf{E}+\mathbf{P}$ and $\mathbf{B} = \mu_0(\mathbf{H}+\mathbf{M})$:
$$-\nabla\cdot\mathbf{S} = \frac{\partial}{\partial t}\left(\frac{\varepsilon_0 E^2}{2} + \frac{\mu_0 H^2}{2}\right) + \mu_0\,\mathbf{H}\cdot\frac{\partial\mathbf{M}}{\partial t} + \mathbf{E}\cdot\frac{\partial\mathbf{P}}{\partial t} + \mathbf{E}\cdot\mathbf{J}.$$
The first summand is the power density of the vacuum electromagnetic field; the second and third summands, when integrated, are the power dissipation by magnetic and electric dipoles; the fourth summand, when integrated, is the power dissipation by current.
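As an added worked example (not from the course notes): for a plane wave in vacuum,
$$\mathbf{E} = \hat{\mathbf{x}}\,E_0\cos(kz-\omega t),\qquad \mathbf{H} = \hat{\mathbf{y}}\,\frac{E_0}{\eta_0}\cos(kz-\omega t),\qquad \eta_0 = \sqrt{\mu_0/\varepsilon_0}\approx 377\ \Omega,$$
so that
$$\mathbf{S} = \mathbf{E}\times\mathbf{H} = \hat{\mathbf{z}}\,\frac{E_0^2}{\eta_0}\cos^2(kz-\omega t),\qquad \langle\mathbf{S}\rangle = \hat{\mathbf{z}}\,\frac{E_0^2}{2\eta_0},$$
recovering the familiar time-averaged intensity in watts per square meter.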
Reference: ECE525 Lasers and Detectors. notes taken in class, 2010. Instructor: Joyce Poon. | <urn:uuid:448ffe15-86a7-4c69-9b74-c88fc9fc3437> | 3.21875 | 224 | Knowledge Article | Science & Tech. | 44.331347 |
Study predicts climate change impacts on polar bear litter size
When little food is available, polar bears are known to rely on stores of energy for survival and reproduction. The reliance on energy stores in pregnant females, however, limits the survival rates of their cubs. Using data obtainable under current conditions, the authors show that 28% of pregnant females failed to reproduce for energetic reasons during the 1990s.
They then use predictive modelling to suggest that 40-73% of pregnant females could fail to reproduce in the same way if spring sea ice break-up occurs 1 month earlier than during the 1990’s and 55-100% if break-up occurs 2 months earlier.
The authors suggest that their finding may also apply to populations outside the western Hudson Bay area, however they caution that the expected time-line for declines in litter size may vary with different climate models’ predictions about sea ice loss.
Read the full article in the Nature Communications online journal. | <urn:uuid:1f0566ed-dd9f-4088-9c17-4ed56da0e8bc> | 2.9375 | 196 | Truncated | Science & Tech. | 36.054538 |
Living in a Dying Solar System, Part 2: Delaying Doomsday
With this essay by Ray Villard, news director for the Hubble Space Telescope, Astrobiology Magazine presents another in our series of 'Gedanken', or thought, experiments - musings by noted scientists on scientific mysteries in a series of "what if" scenarios. Gedanken experiments, which have been used for hundreds of years by scientists and philosophers to ponder thorny problems, rely on the power of one's imagination to project these scenarios to logical conclusions. They do not involve lab equipment or, often, even experimental data. They can be thought of as focused daydreams. Yet, as in the famous case of Einstein's Gedanken experiments about what it would be like to hitch a ride on a light wave, they have often led to important scientific breakthroughs.
Earth’s ultimate fate 5 billion years from now is a death spiral into the Sun. As the Sun ages, its gravitational pull will weaken, and Earth will briefly migrate out to the distance of Mars’ orbit. At this distance, however, the Earth still will be close enough to generate a tidal bulge in the now-bloated Sun. The gravitational tug from the solar bulge will slow Earth’s orbital velocity, eventually causing our planet to spiral in toward the Sun. Friction from the tenuous gases in the Sun’s ballooning atmosphere will speed up this process, dragging us irrevocably inward.
Asteroids would be civilization killers too if we don’t develop the technological wherewithal to deflect them. Manmade disasters could include nanotechnology run amok, plagues brought on by terrorist-engineered super-organisms, or extinction by intelligent machines -- among many other man made disasters yet to be imagined.
The absence of any evidence for intelligent life in space, commonly known as the Fermi Paradox, would suggest that extraterrestrial civilizations are short-lived because they easily succumb to natural or technology-induced catastrophes, otherwise they would have stopped by and visited us by now. The vast age of our Milky Way allows more than enough time to star-hop across the galaxy at a fraction of the speed of light.
But let’s be wildly optimistic for a moment and assume that humanity will have the stability, cultural tenacity, and technological prowess to hold onto our planet for the next billion years.
Knowing that our world will inevitably succumb to the Sun’s evolution, a far advanced civilization on Earth could undertake an extraordinary engineering project to keep Earth inhabitable for the next 5 billion years.
An asteroid's orbit would be modified to swing very close to Earth. Our planet would gain energy from each asteroid swing-by; to be effective, the asteroid would need to be at least 100 kilometers across and pass within 10,000 kilometers of Earth. This energy transfer could slightly increase the diameter of Earth's orbit, nudging it farther from the Sun.
On an outbound trajectory, the asteroid swings by Jupiter and robs energy from Jupiter’s orbital momentum to make up for energy lost to Earth. This slightly shrinks Jupiter’s orbit.
For Earth to maintain the “Goldilocks” distance where the amount of solar energy remains constant, the asteroid must swing by Earth for another momentum transfer once every 6,000 years, according to Korycansky.
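To get a feel for the numbers (an added sketch; the luminosity milestones below are rough illustrative values, not from the article), constant insolation means the orbital radius must scale as the square root of the Sun's luminosity:

import numpy as np

def goldilocks_radius_au(L_over_Lsun, r0_au=1.0):
    """Orbit radius keeping insolation constant: flux ~ L / r^2."""
    return r0_au * np.sqrt(L_over_Lsun)

# Illustrative solar luminosities (in present-day solar units):
for label, L in [("today", 1.0), ("+1 Gyr", 1.1),
                 ("+3.5 Gyr", 1.4), ("red-giant tip", 2000.0)]:
    print(f"{label:>14}: {goldilocks_radius_au(L):7.1f} AU")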
But this mega-project could go awry if the asteroid went off course and plowed into Earth. Imagine filing an environmental impact statement (no pun intended) for this project, much less convincing world governments to collectively support it.
Another challenge is that the time span between asteroid encounters is equal to a good-sized chunk of current recorded human history. Civilizations could forget about it, or they might even view the incoming asteroid as a threat to Earth and destroy it!
Just imagine a future Bruce Willis (as in the 1998 science fiction film Armageddon) landing on the asteroid to place a nuclear bomb. His crews come upon a 2001-type monolith that is a time capsule, perhaps with an image of Earth etched onto it showing the continent’s positions at the time the project was started. The crew has a launch window of a few hours to decide how to proceed. What would they do? The future of Earth hangs in balance!
However, twiddling with the gravitational stability of the solar system could bring on interplanetary chaos. For starters, changes in Jupiter’s orbit could disrupt asteroid orbits, hurtling them into the inner solar system.
One computer simulation of the dynamical evolution of the solar system by Jacques Laskar of the Paris Observatory predicts there is a small chance the solar system could become chaotic in about 3 billion years, even without our tinkering with it.
He ran 2,501 numerical simulations of the dynamical evolution of the solar system over the next 5 billion years. In one simulation Mercury's orbit becomes so eccentric that the planet falls into the Sun or collides with Venus. In another simulation run, Mercury's eccentricity causes angular momentum to be transferred from the giant outer planets. This destabilizes all the terrestrial planets 3.34 billion years from now. In the Mother-of-All Apocalypses, Mercury, Mars or Venus smashes into Earth.
In another simulation there is a close encounter where Mars passes within 500 miles of Earth! Such a sideswipe would probably obliterate all higher life forms on the Earth. Mars could be tidally ripped apart on approach and the pieces would carpet-bomb Earth. What’s left of Mars might form a ring around our lifeless planet – a mocking tiara for an Earth thrown backward in time to the Hadean era.
However, we could never be certain if the planet disintegrated by natural collision, or it was the result of a planet migration experiment run amok.
Despite the future dangers we face, it is ludicrous for some folks to think the “end-of-world” is right around the corner. We are the only intelligent species capable of taking possession of the solar system. Our civilization might come up with a strategy to build artificial mini-planets – essentially flying city-states -- that would modify their orbits to migrate along with the petulant Sun’s expanding and shrinking habitable zone. As the white dwarf cools, the wagon train of space habitats would move inward. Raw materials would be harvested from in-falling comets and asteroids. Explorers would be free to travel outward to visit surviving planets and moons. Given our passion for survival, bolstered by super-technology, the future for mankind could truly stretch on indefinitely, beyond even the life of the Sun.
This article is also available in French. | <urn:uuid:cf511a1a-2426-4dab-82bf-a69a7024ffc7> | 3.34375 | 1,381 | Truncated | Science & Tech. | 39.945361 |
PRANBURI, Thailand — They are better known as stealthy killing machines to take out suspected terrorists with pinpoint accuracy. But drones are also being put to more benign use in skies across several continents to track endangered wildlife, spot poachers, and chart forest loss.
Although it is still the ‘‘dawn of drone ecology,’’ as one innovator calls it, these unmanned aerial vehicles are skimming over Indonesia’s jungle canopy to photograph orangutans, protect rhinos in Nepal, and study invasive aquatic plants in Florida.
Activists launched a long-range drone in December to locate and photograph a Japanese whaling ship as the Sea Shepherd Conservation Society attempted to block Japan’s annual whale hunt in Antarctic waters.
Relatively cheap, portable, and earth-hugging, the drones fill a gap between satellite and manned aircraft imagery and on-the-ground observations, said Percival Franklin at the University of Florida, which has been developing such drones for more than a decade.
‘‘The potential uses are almost unlimited,’’ said Ian Singleton, director of the Sumatran Orangutan Conservation Program, testing drones this year over Indonesia’s Tripa peat forest where fires set by palm oil growers are threatening the world’s highest density habitat of the great apes.
Conservation is one of the latest roles for these multitaskers, either autonomously controlled by on-board computers or under remote guidance of a navigator. Ranging in size from less than half a pound to more than 20 tons, drones have been used for firefighting, road patrols, hurricane tracking, and other jobs too dull, dirty, or dangerous for piloted craft.
Most prominently, they have been harnessed by the US military in recent years, often to detect and kill terrorism targets in Afghanistan, Pakistan, and elsewhere.
A conservation drone pioneer, Lian Pin Koh of the Swiss Federal Institute of Technology, says the idea came to him after another sweaty jungle slog in Sabah, Malaysia, hauling heavy equipment for his field work.
‘‘I told my assistant, who happened to be my wife, ‘How wonderful it would be if we could fly over that area rather than walk there again tomorrow,’ ’’ recalled the Singaporean expert on tropical deforestation, and a model plane hobbyist.
Ecodrones in the United States are mostly custom-built or commercial models. Koh last year cobbled together a far cheaper, off-the-shelf version that poorer organizations and governments in the developing world can better afford.
He and partner Serge Wich bought a model plane — some are available in China for as little as $100 — added an autopilot system, open source software to program missions, and still and video cameras. All for less than $2,000, or 10 times cheaper than some commercial vehicles with similar capabilities.
This year, they have flown more than 200 mostly test runs in Asia using an improved version with a 6.5 foot wing span, air time of 45 minutes, and a 15.5-mile range.
The drones were flown over rough terrain in Malaysia where GPS-collared elephants are difficult to monitor from the ground. In Nepal’s Chitwan National Park, the World Wide Fund for Nature (WWF) and the Nepal Army conducted trials on detecting rhino and elephant poachers.
‘‘Counting orangutan nests is the main way of surveying orangutan populations,’’ said Graham Usher of the Sumatran project, which captured one of the apes atop a palm tree feeding on palm heart in a sharp photograph. From higher altitudes the drones, he said, also provide high-resolution, real-time images showing where forests are being cleared and set ablaze.
By contrast, ground expeditions are time-consuming, logistically cumbersome, and expensive. A conventional orangutan census in Sumatra, which may also involve helicopters and aircraft, costs some $250,000. Surveying land use by satellite is likewise costly and hampered by frequent cloud cover over tropical areas.
But there are drawbacks with drones, including landing them in often thickly vegetated areas since they need clear touch-down zones of about 100-by-100 yards. Koh said he was working to rig the vehicle with a parachute to allow landing in confined space.
Franklin, at the University of Florida, said the hardware and image interpretation are still being developed as more missions are planned in the United States, ranging from counting pygmy rabbit burrows in Idaho to monitoring salmon-eating seabirds off the Oregon coast.
The University of Florida is testing another antiterrorism weapon, thermal imaging, to hunt for Burmese pythons invading the state’s Everglades, having found the snakes regulate temperatures of their nests in a way that makes them visible through such technology.
Other eyes-in-the-sky increasingly used for conservation tasks are ultralights, birdlike craft with a major advantage over drones — the human touch.
‘‘It’s the closest thing we have come to flying like birds 30,000 years after coming out of caves,’’ says Mark Silverberg, preparing to take a reporter up in a paramotor ultralight, one earlier hired by conservation groups to photograph and video Mekong River dolphins, tiger habitat in Myanmar, and denuded hills in northern Thailand. | <urn:uuid:a2ea5cb7-8637-43d9-921a-e1c65fda8b44> | 2.859375 | 1,138 | Truncated | Science & Tech. | 28.120323 |
atmosphere and precipitation
...the formation of liquid cloud droplets or ice crystals depends on which phase of water occurs. A cloud in which only liquid water occurs (even at temperatures less than 0 °C) is referred to as a warm cloud, and the precipitation that results is said to be due to warm-cloud processes. In such a cloud, the growth of a liquid water droplet to a raindrop begins with condensation, as additional...
Infrared imaging is only one example of using wavelengths other than visible light to gather information about Earth. Most satellites today measure energy at many wavelengths; this is called multispectral imaging. Images taken at different wavelengths can be combined into composites by displaying the image for each wavelength as red, green, or blue in the final image. These composite images produce color patterns that can be used to identify surface features.
This simulation shows images of San Francisco, California at three different wavelengths. Bright areas show higher amounts of energy; darker areas show lower amounts. You can display each image in red, green, or blue light, then generate a false-color composite image for any color combination.
Look for these features in the image:
- Water appears dark around land in the center of the image.
- Blocky patterns represent buildings and streets.
- Park areas on both sides of the Golden Gate Bridge (the thin line across the water in the upper left of the image) are covered with vegetation.
After the page has fully loaded, click the R (red), G (green), and B (blue) buttons to assign a different color to the image for each wavelength. Click Show Composite to combine the information into one color image.
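The compositing step itself is easy to reproduce offline. Below is a minimal sketch, assuming three co-registered grayscale band images saved as ordinary files; the filenames are placeholders and not part of the simulation. Each band simply becomes one channel of an RGB image.

# Stack three co-registered grayscale band images into the red, green,
# and blue channels of one false-color composite.
import numpy as np
from PIL import Image

band_files = ("band_infrared.png", "band_red.png", "band_green.png")
bands = [np.asarray(Image.open(f).convert("L")) for f in band_files]

composite = np.dstack(bands)                  # shape: (height, width, 3)
Image.fromarray(composite).save("false_color_composite.png")

Assigning the brightest band to red (as here) is an arbitrary choice; swapping the order of band_files produces a different false-color rendering of the same data.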
This short program shows how a live video stream from a web cam (or from a video file) can be rendered in OpenGL as a texture. The live video stream is captured using the Open Source Computer Vision library (OpenCV). The program also shows how an OpenCV image can be converted into OpenGL texture. This code can be used as the first step in the development of an Augmented Reality application using OpenCV and OpenGL.
Understanding the Code
The program renders an OpenGL textured quad which shows a live video stream. The code does not contain any additional functionality, and is kept very simple for easy understanding.
The OpenGL texture is continuously created in the
OnIdle callback function. The next available frame in the video stream is captured first:
IplImage *image = cvQueryFrame(g_Capture);
The image is stored in the OpenCV data structure IplImage; please see the OpenCV documentation for details. OpenCV captures frames with the color channels in BGR order, so the image is first converted to RGB using the OpenCV function:
cvCvtColor(image, image, CV_BGR2RGB);
Then, the following magic call creates a 2D OpenGL texture from the OpenCV image:
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, image->width, image->height,
GL_RGB, GL_UNSIGNED_BYTE, image->imageData);
The texture is loaded into memory and is available for rendering.
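For readers working in Python rather than C, roughly the same capture-convert-upload loop can be sketched with OpenCV's cv2 bindings and PyOpenGL. This is an illustrative translation, not the article's code, and it assumes an OpenGL context has already been created (for example with GLUT) and a texture object is currently bound.

# Grab a frame, convert BGR to RGB, and upload it as the bound texture's
# image data; call this from an idle or display callback.
import cv2
from OpenGL.GL import glTexImage2D, GL_TEXTURE_2D, GL_RGB, GL_UNSIGNED_BYTE

capture = cv2.VideoCapture(0)      # 0 selects the first attached camera

def upload_next_frame():
    ok, frame = capture.read()     # cv2 also delivers frames in BGR order
    if not ok:
        return                     # no frame available yet
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    height, width = rgb.shape[:2]
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, rgb)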
Compiling and Running
The code is compiled and tested using Microsoft Visual Studio 2008. However, it can be compiled using any C++ compiler on any platform. The program uses the OpenGL, GLUT, and OpenCV libraries. Please make sure that you have installed them and paths to the include and lib directories are set. OpenCV can be downloaded from here. | <urn:uuid:f9e668ca-a721-497d-9301-f636606e067e> | 3.171875 | 389 | Documentation | Software Dev. | 38.182208 |
Taken from Issue 22.
Although seemingly beautiful and serene, this fiery image shows the hundreds of millions of stars at the turbulent heart of the Milky Way, all cocooned in cosmic gas and dust. The entire stellar life cycle is visible here, from the dusty regions of star birth, through populations of young, ageing, and old stars, to dead stars and their remnants. All of this chaos is permeated with a hazy blue light, the product of X-ray outflows from black holes and massive stars.
Released back in 2009, the panorama is a composite of images from the Hubble Space Telescope, the Spitzer Space Telescope, and the Chandra X-ray Observatory. It played a part in the International Year of Astronomy (IYA), a global celebration of astronomy and its contributions to our society and culture.
The IYA was held in 2009, 400 years after Galileo first blinked up at the skies through a telescope – a moment often lauded as the birth of modern astronomy. Copies of this image were printed and unveiled by NASA across more than 150 sites – including planetariums, museums, and libraries – across the US, showing how involved the organisation is in public engagement and communication. Hubble’s ability to go beyond gathering data for scientists to study has proved to be a real bonus for igniting the public’s interest in astronomy.
Astronomy is a highly collaborative field – partially by necessity. Sharing time on the world’s largest telescopes requires high levels of co-operation, as does observing the same phenomena from various parts of the globe. It also has the ability to bring countries together – although the recent landing of the Mars Science Laboratory on Mars was a NASA effort, underneath it all was the uniting achievement that Earth had successfully sent a probe to another planet.
Image: NASA, ESA, SSC, CXC, and STScI | <urn:uuid:c005a996-9085-43b1-aaca-635b3aae2363> | 3.375 | 392 | Truncated | Science & Tech. | 38.360385 |
Unit 4: Ecosystems // Section 8: Evolution and Natural Selection in Ecosystems
As species interact, their relationships with competitors, predators, and prey contribute to natural selection and thus influence their evolution over many generations. To illustrate this concept, consider how evolution has influenced the factors that affect the foraging efficiency of predators. This includes the predator's search time (how long it takes to find prey), its handling time (how hard it has to work to catch and kill it), and its prey profitability (the ratio of energy gained to energy spent handling prey). Characteristics that help predators to find, catch, and kill prey will enhance their chances of surviving and reproducing. Similarly, prey will profit from attributes that help avoid detection and make organisms harder to handle or less biologically profitable to eat.
These common goals drive natural selection for a wide range of traits and behaviors, including:
- Mimicry by either predators or prey. A predator such as a praying mantis that blends in with surrounding plants is better able to surprise its target. However, many prey species also engage in mimicry, developing markings similar to those of unpalatable species so that predators avoid them. For example, harmless viceroy butterflies have similar coloration to monarch butterflies, which store toxins in their tissues, so predators avoid viceroy butterflies.
- Optimal foraging strategies enable predators to obtain the maximum amount of net energy per unit of time spent foraging. Predators are more likely to survive and reproduce if they restrict their diets to prey that provide the most energy per unit of handling time and focus on areas that are rich with prey or where prey are close together. The Ideal Free Distribution model suggests that organisms that are able to move will distribute themselves according to the amount of food available, with higher concentrations of organisms located near higher concentrations of food (footnote 8). Many exceptions have been documented, but this theory is a good general predictor of animal behavior. (A toy numeric illustration of these quantities appears after this list and figure.)
- Avoidance/escape features help prey elude predators. These attributes may be behavioral patterns, such as animal herding or fish schooling to make individual organisms harder to pick out. Markings can confuse and disorient predators: for example, the automeris moth has false eye spots on its hind wings that misdirect predators (Fig. 14).
- Features that increase handling time help to discourage predators. Spines serve this function for many plants and animals, and shells make crustaceans and mollusks harder to eat. Behaviors can also make prey harder to handle: squid and octopus emit clouds of ink that distract and confuse attackers, while hedgehogs and porcupines increase the effectiveness of their protective spines by rolling up in a ball to conceal their vulnerable underbellies.
- Some plants and animals emit noxious chemical substances to make themselves less profitable as prey. These protective substances may be bad-tasting, antimicrobial, or toxic. Many species that use noxious substances as protection have evolved bright coloration that signals their identity to would-be predators—for example, the black and yellow coloration of bees, wasps, and yellowjackets. The substances may be generalist defenses that protect against a range of threats, or specialist compounds developed to ward off one major predator. Sometimes specialized predators are able to overcome these noxious substances: for example, ragwort contains toxins that can poison horses and cattle grazing on it, but it is the exclusive food of cinnabar moth caterpillars. Ragwort toxin is stored in the caterpillars' bodies and eventually protects them as moths from being eaten by birds.
Figure 14. Automeris moth
Source: © D.H. Jansen and Winnie Hallwachs, janzen.sas.upenn.edu.
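As promised above, here is a toy calculation with invented numbers. Prey profitability is energy gained divided by handling time, and the Ideal Free Distribution predicts foragers splitting across patches in proportion to food supply.

# Toy numbers only; none of these values come from the text.
# Profitability = energy gained / handling time.
prey = {"mussel": (30.0, 5.0), "crab": (200.0, 80.0)}   # (energy, handling time)
for name, (energy, handling) in prey.items():
    print(name, "profitability:", energy / handling)     # mussel 6.0 > crab 2.5

# Ideal Free Distribution: foragers split across patches in
# proportion to the food available in each patch.
patch_food = [60.0, 30.0, 10.0]                          # arbitrary units
foragers = 50
predicted = [foragers * f / sum(patch_food) for f in patch_food]
print(predicted)                                          # [30.0, 15.0, 5.0]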
Natural selection based on features that make predators and prey more likely to survive can generate predator-prey "arms races," with improvements in prey defenses triggering counter-improvements in predator attack tools and vice versa over many generations. Many cases of predator-prey arms races have been identified. One widely known case is bats' use of echolocation to find insects. Tiger moths respond by emitting high-frequency clicks to "jam" bats' signals, but some bat species have overcome these measures through new techniques such as flying erratically to confuse moths or sending echolocation chirps at frequencies that moths cannot detect. This type of pattern involving two species that interact in important ways and evolve in a series of reciprocal genetic steps is called coevolution and represents an important factor in adaptation and the evolution of new biological species.
Other types of relationship, such as competition, also affect evolution and the characteristics of individual species. For example, if a species has an opportunity to move into a vacant niche, the shift may facilitate evolutionary changes over succeeding generations because the species plays a different ecological role in the new niche. By the early 20th century, large predators such as wolves and puma had been largely eliminated from the eastern United States. This has allowed coyotes, who compete with wolves where they are found together, to spread throughout urban, suburban, and rural habitats in the eastern states, including surprising locations such as Cape Cod in Massachusetts and Central Park in New York City. Research suggests that northeastern coyotes are slightly larger than their counterparts in western states, although it is not yet clear whether this is because the northeastern animals are hybridizing with wolves and domestic dogs or because they have adapted genetically to preying on larger species such as white-tailed deer (footnote 9). | <urn:uuid:3a412859-2e69-4fc2-ae43-aa9d74142f38> | 4.4375 | 1,137 | Academic Writing | Science & Tech. | 25.005611 |
Cod Food Web
I was again amazed by the infinite ramifications of scale-free networks and the applications of this recent knowledge. Led by a few short citations in the books I’ve been reading, I started researching complex networks in the context of food webs, particularly marine food webs – binary feeding relationships between the species in a community. Until recently, biologists, government officials, and even environmentalists had a very simplistic view of nature and looked at animal species as a scattered web of nodes with a small number of dependencies. However, most links among components of food webs are not so simple and may involve the interaction of hundreds of organisms.
The importance of fully understanding the dynamics of scale-free networks has been recognized by the cod fishery industry in the worst way. “The collapse of the Northwest Atlantic cod fishery has become a metaphor for ecological catastrophe and is universally cited as an example of failed management of a natural resource” (MacKenzie 1995). Peter Meisenheimer, in his paper “Seals, Cod, Ecology and Mythology,” collects an incisive list of six hypotheses that might have led to the demise of the once abundant cod stock:
1. Canadian elected officials and Department of Fisheries and Oceans (DFO) staff have stated that the culling of seals will benefit the recovery of Northwest Atlantic cod stocks.
2. In contrast, published reports in scientific journals, including those authored by DFO biologists, unequivocally conclude that seals are having no demonstrable impact on cod recovery.
3. “Common sense” arguments that culling seals will “obviously” benefit the fishery are premised on a mythological view of predators that is unsubstantiated by most scientific evidence.
4. Research conducted in other fisheries has indicated that the complexity of marine food webs, and the diversity of seal diets mean increased seal numbers can sometimes lead to positive effects on commercial fish stocks.
5. Consistently, recent research in terrestrial systems indicates that top predators can have a significant positive impact on numbers of herbivores by reducing numbers of smaller predators.
6. The Canadian political agenda for dealing with the collapse of the cod stocks has evolved to include a subsidized seal cull, and suppression of internal reports contradicting the “common sense” position adopted by the political leadership.
As Meisenheimer says, the use of seals as scapegoats for the failings of Canadian fisheries management is an example of a global problem in the management of fisheries and wildlife. Whether the system is aquatic or terrestrial, tropical or arctic, the predators of the world are seen as problems to be controlled, not as integral parts of a functioning ecosystem. Whenever I think of food webs I instantly recall those simple and infantile diagrams showing the carrot, rabbit, and fox. Although this example is intentionally exaggerated, I believe most of us think of the food web of any particular species as an isolated set of interactions with no more than a few links. Of course, we couldn’t be more wrong.
Prof. David Lavigne, a zoologist researcher sponsored by the Natural Sciences and Engineering Research Council and the International Marine Management Association, is a leading force in combating this misunderstanding of food webs. Regarding the decline of the cod stock, he also claims that seals are being used as scapegoats because government scientists are failing to look at the problem at the macro level, the way any network should be analyzed. The image below is Lavigne’s effort to understand the complex map of interactions in a food web. This astonishing work shows the cod food web, displaying some trophic interactions for part of the Northwest Atlantic.
Copyright David Lavigne. For a larger version of this image click here. | <urn:uuid:b41507b0-7a47-499d-9384-6d6482385dac> | 2.734375 | 753 | Academic Writing | Science & Tech. | 30.22648 |
Canadarm can't even move itself -- on earth
The Canadarm can move some heavy stuff in space, but it can't even move itself on Earth.
"Meant for a weightless environment, Canadarm cannot even lift itself off the ground in Earth's gravity. A special test room was built to allow the arm to flex its joints under operating conditions. In addition, a computer-based simulation facility, much like a video game, was built to evaluate controllability and provide training for astronauts." | <urn:uuid:1c4a1a6f-8a18-44ca-9895-3f040c1d4d1e> | 3.125 | 104 | Comment Section | Science & Tech. | 42.620183 |
Why We Use Telemetry
Telemetry helps scientists study animals where they cannot directly observe them. Ushagat Island, Alaska, home to Steller sea lions, sea otters, killer whales, and harbor seals, is one of these isolated places. These marine mammals are connected with each other in food webs that affect their survival. Many of these populations are declining, including the Steller sea lion. Steller sea lions spend a large part of their lives at sea or hauled out on land in remote areas. Telemetry allows researchers to study Steller sea lions around the clock, anywhere on the planet, in any type of weather, and, most importantly, underwater.
Dr. Markus Horning at Oregon State University and Dr. Jo-Ann Mellish at the Alaska SeaLife Center study the population ecology of Steller sea lions in remote locations like Ushagat Island using satellite telemetry tags called Life History Transmitters (LHX). They want to learn why the Steller sea lion population is not recovering by discovering how Steller sea lions are dying. The population of Steller sea lions has declined from 300,000 to 75,000 individuals worldwide. Western Steller sea lions in the Gulf of Alaska and Aleutian Islands are now listed as endangered under the Endangered Species Act. LHX tags are helping to determine their cause of death and, hopefully, provide a clue to their recovery.
Reasons why we use telemetry:
- Steller sea lions in the Gulf of Alaska and the Aleutian Islands have declined by as much as 75% in the last 40 years. Researchers are looking for clues to understand this decline and help their recovery by studying how Steller sea lions are dying.
- Steller sea lions that do not breed or that die at sea are unlikely to ever be seen. These unseen animals are critical to learning why the population is not recovering.
- Steller sea lions are not easy to capture. Adult males can weigh as much as a large truck and can become aggressive when cornered.
- Steller sea lions’ habitat and territory are complex, and they interact with many different organisms including orcas and sharks in the food web.
- Since they live in remote areas, scientists know surprisingly little about their life history. | <urn:uuid:aef2a324-1f6a-41f5-bf92-ce510afedf35> | 3.25 | 463 | Knowledge Article | Science & Tech. | 43.171416 |
Space Weather throughout the Solar System
The Sun is surrounded by a "bubble" in space called the heliosphere. In a sense, we Earthlings live within the outer atmosphere of our Sun. The solar wind fills the heliosphere with energetic particles and magnetic fields, extending the outermost reaches of the solar atmosphere well beyond the orbit of Pluto. The heliopause is the boundary where the influence of the solar wind finally wanes and interstellar space truly begins. Instruments on interplanetary spacecraft help us probe the heliosphere, while those same spacecraft are at risk from damage by space weather storms. Some day astronauts will venture far from Earth, and their safety will depend upon our knowledge of radiation throughout the heliosphere.
Within the heliosphere, the solar wind interacts with planets, moons, and other smaller bodies in our Solar System. Some planets possess strong global magnetic fields that interact with the solar wind. This interplay gives rise to complex, dynamic systems of radiation belts, flows of electrical currents, and auroral displays in the neighborhoods of such planets. Planets lacking magnetic fields are left unshielded from bombardment by the solar outpourings. A few moons have magnetic fields and magnetospheres as well, though most do not. Comets, with their long tails of dust and ionized gases, are the bodies most visibly influenced by the solar wind.
Stars, and the planetary systems that surround them, change over time. Our Sun, though dimmer, was more active in its infancy. The strong solar wind of our Sun's youthful stage swept away the leftover dust after the planets had formed. In recent years we have become able to observe the heliospheres of other stars, helping us learn about our own Sun via comparison. Early, active phases of a star's life exert a powerful influence over the formation of planets in their vicinity. Likewise, the outpouring of energy during the death throes of older stars, especially in cases that lead to nova and supernova explosions, can influence the development of other stellar systems over distances of many light years. | <urn:uuid:21c6f858-147a-44b0-a4c4-52cc6b7a4353> | 4.0625 | 423 | Knowledge Article | Science & Tech. | 38.998178 |
Overfishing has already depleted the populations of many fish in coastal regions and on the continental shelf. As a result, fishing has moved ever deeper into the oceans. Unfortunately, in the dark and cold, fish grow slowly, and some species take as long as humans do to reach reproductive age. A study reported in New Scientist shows that fishing has caused five deep-water species to reach critically endangered status in only 15 years, with some populations dropping by over 95 percent. Even more depressing, three of the species weren't even used for food - they were simply accidentally caught along with the other two.
In a similar vein, the New York Times covers the decimation of shark populations, with the results going to feed the increasingly global taste for what was formerly a rare Asian delicacy: shark fin soup. In this case, the rest of the shark goes to waste, as the meat isn't valuable. Many shark populations have dropped by 70 percent in the last 15 years.
Although countries aren't getting their acts together to protect oceanic species, there is some hope for their freshwater relatives. The BBC reports on a ban on the international caviar trade. In this case, an international convention on endangered species called CITES, consisting of over 169 member nations, decided that the plans of caviar-producing nations for preserving its source, the sturgeon, were insufficient. Apparently, the CITES authorities felt the plans didn't accurately account for reality, in that they ignored illegal fishing and pretended that the fish respected national borders. Although CITES can't penalize member states, the ban is expected to work even if caviar-producing nations attempt to ignore it. That's because the wealthy, caviar-consuming nations in North America and Europe are prohibited from importing it, and have no real economic incentive to ignore the ban. This contrasts with situations such as the north Atlantic fisheries, where much of the EU as well as the US and Canada have economic interests and are slow to respond to declining fish populations.
The following sections describe the standard types that are built into the interpreter. These are the numeric types, sequence types, and several others, including types themselves. There is no explicit Boolean type; use integers instead.
Some operations are supported by several object types; in particular, all objects can be compared, tested for truth value, and converted to a string (with the `...` notation). The latter conversion is implicitly used when an object is written by the print statement.
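A short interactive-style example of these three generic operations, written in the Python 2-era syntax this passage describes (backquotes and the print statement no longer exist in Python 3):

x = 3
print x < 5              # comparison; prints 1, since there is no Boolean type
if x:                    # truth-value test: any nonzero number counts as true
    print `x` + " is nonzero"   # backquotes convert an object to a string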
- Fish, Frank E., Laurens E. Howle, and Mark M. Murray, "Hydrodynamic flow control in marine mammals," Integrative and Comparative Biology, vol. 48, no. 6, pp. 788-800 [doi].
(last updated on 2011/07/03)
Synopsis The ability to control the flow of water around the body dictates the performance of marine mammals in the aquatic environment. Morphological specializations of marine mammals afford mechanisms for passive flow control. Aside from the design of the body, which minimizes drag, the morphology of the appendages provides hydrodynamic advantages with respect to drag, lift, thrust, and stall. The flukes of cetaceans and sirenians and flippers of pinnipeds possess geometries with flexibility, which enhance thrust production for high-efficiency swimming. The pectoral flippers provide hydrodynamic lift for maneuvering. The design of the flippers is constrained by performance associated with stall. Delay of stall can be accomplished passively by modification of the flipper leading edge. Such a design is exhibited by the leading-edge tubercles on the flippers of humpback whales (Megaptera novaeangliae). These novel morphological structures induce a spanwise flow field of separated vortices alternating with regions of accelerated flow. The coupled flow regions maintain areas of attached flow and delay stall to high angles of attack. The delay of stall permits enhanced turning performance with respect to both agility and maneuverability. The morphological features of marine mammals for flow control can be utilized in the biomimetic design of engineered structures for increased power production and increased efficiency.
Day 2: Curriculum Development
Scott Kittelman's talk addressed key tools for in-class experiments related to climate and weather. Through his talk, the difference between an experiment and a demonstration became clear: the simpler the better. Some simple experiments were suggested. For example, latent heat could be demonstrated by using hand warmers. Another example was to use spin-up and spin-down rotating tank experiments, pick one aspect, and explain that. Finally, he detailed the specific equipment needed to do these demonstrations, which was proportional to class size. The list included an NTSC video connection to a projector, a DVD player, a computer with internet access, and a student response system. The details of constructing a mobile rotating table were outlined. The importance of lighting was discussed, with a preference for fiber optics. The audience suggested some cheaper alternatives: a slide projector, or a laser pointer directed through a glass stirring rod. Another suggestion for successful demonstrations was adaptable data-acquisition software; Scott suggested LabVIEW. Videos can be used to show additional experiments that may take more time than is available during class but are worth showing. However, there is no substitute for seeing the lab experiments and demonstrations in person.
Jack Whitehead spoke about experiments in geology and geophysics. Since his experience was mostly in teaching graduate students, his experiments were a little more technical, though the first few examples were simple enough for undergraduates. First, he described how to demonstrate the Rayleigh-Taylor instability: using a rectangular box filled with glycerin or corn syrup, topped with a thin layer of oil and clamped shut, wavelength selection can be readily observed when the apparatus is turned upside down. Next, he presented a demonstration of a plume using pipes, corn syrup, and watered-down corn syrup (~1-2% water). The injected watered-down corn syrup can't ascend in the denser corn syrup until it has grown into a big enough sphere. An additional experiment can be done by placing the plume apparatus on a sloping bottom. The next experiment he showed involved pumping melted paraffin over a cooled aluminum plate. This experiment showed how flow was focused in response to the phase change: as the paraffin cooled and solidified, the molten paraffin veered in another direction, clockwise around the plate. He also presented a solitary wave collision, which was published in Nature (1986), an unsteady volcano experiment, salt channelization, and an earthquake machine. Finally, the talk concluded with a demo about subcritical and supercritical Froude-number flows.
Joe Witte offered the views of a broadcast meteorologist. His goal was to get input on demonstrations that meteorologists could use to teach the public about greenhouse gas emissions into the atmosphere. One approach presented was to total up the CO2 from a car and estimate how high that CO2 would fill the atmosphere. A problem with atmospheric visualizations is that the atmosphere is much thinner than the public perceives it to be from currently available visualizations. In addition, questions were raised as to how to demonstrate the dispersion of CO2 in the atmosphere. A lively discussion of simple models of the greenhouse effect for the public followed this talk. The idea was raised that the greenhouse may not be the best analogy for the effect CO2 has on global temperature. A suggested solution was to cut the radiation balance problem into smaller pieces, and in particular to do a demo of the absorption of light by gases, perhaps even using one that phosphoresces – absorbs the visible light and then glows with the lights out – as an analog for CO2 and infrared light.
“About 74,000 years ago,” Lynas began, “a volcanic event nearly wiped out humanity. We were down to just a thousand or so embattled breeding pairs. We’ve made a bit of a comeback since then. We’re over seven billion strong. In half a million years we’ve gone from prodding anthills with sticks to building a worldwide digital communications network. Well done! But… there’s a small problem. In doing this we’ve had to capture between a quarter and a third of the entire photosynthetic production of the planet. We’ve raised the temperature of the Earth system, reduced the alkalinity of the oceans, altered the chemistry of the atmosphere, changed the reflectivity of the planet, hugely affected the distribution of freshwater, and killed off many of the species that share the planet with us. Welcome to the Anthropocene, our very uniquely human geological era.”
Some of those global alterations made by humans may be approaching tipping points—thresholds—that could destabilize the whole Earth system. Drawing on a landmark paper in Nature in 2009 (“A Safe Operating Space for Humanity,” by Johan Rockström et al.), Lynas outlined the nine boundaries we should stay within, starting with three we’ve already crossed. 1. Loss of biodiversity reduces every form of ecological resilience. The boundary is 10 species going extinct per million per year. Currently we lose over 100 species per million per year. 2. Global warming is the most overwhelming boundary. Long-term stability requires 350 parts per million (ppm) of carbon dioxide in the atmosphere; we’re currently at 391 ppm and rising fast. “The entire human economy must become carbon neutral by 2050 and carbon negative thereafter.” 3. Nitrogen pollution. With the invention a century ago of the Haber-Bosch process for creating nitrogen fertilizer, we doubled the terrestrial nitrogen cycle. We need to reduce the amount of atmospheric nitrogen we fix per year to 35 million tons; we’re currently at 121 million tons.
Other quantifiable boundaries have yet to be exceeded, but we’re close. 4. Land use. Every bit of natural landscape lost threatens ecosystem services like clean water and air and atmospheric carbon balance. “Already 85% of the Earth’s ice-free land is fragmented or substantially affected by human activity.” The danger point is 15% of land being used for row crops; we’re currently at 12%. 5. Fresh water scarcity. Increasing droughts from global warming will make the problem ever worse. In the world’s rivers, “the blue arteries of the living planet,” there are 800,000 dams with two new large ones built every day. The numeric limit is thought to be 4,000 cubic kilometers of runoff water consumed per year; the current number is 2,600. 6. Ocean acidification from excess atmospheric carbon dioxide is increasingly lethal to ocean life such as coral reefs. The measure here is “aragonite saturation level.” Before the industrial revolution it was 3.44; the limit is 2.75; we’re already down to 2.90. 7. The ozone layer protects the Earth from ultraviolet radiation. One man (Thomas Midgley) invented the chlorofluorocarbon coolant that rapidly reduced stratospheric ozone, and one remarkable agreement (Montreal Protocol, 1987) cut back on CFCs and began restoring the ozone layer. (In Dobson units the limit is 276; before Midgley it was 290; we’re now back up to 283.)
Two boundaries are so far unquantifiable. 8. Chemical pollution. Rachel Carson was right. Human toxics are showing up everywhere and causing harm. Coal-fired power plants are one of the worst offenders in this category. (Lynas added that nuclear waste belongs in this category, but “the supposedly unsolved problem of nuclear waste hasn’t so far harmed a single living thing.”) 9. Atmospheric aerosols—airborne dust and smoke. It kills hundreds of thousands of people annually, the soot causes ice to melt faster, and everyone wants to get rid of it. But one beneficial effect it has is cooling, so Lynas proposes “we could move this pollution from the troposphere where people have to breathe it up to the stratosphere where it can still cool the Earth and no one has to breathe it. That’s called geoengineering.”
Lynas proposed that the goal for the future should be to get the whole world out of poverty by 2050 while staying within the planetary boundaries. Among the solutions he proposed are: clean cookstoves for the poor (they cause 1.6 million deaths a year); better GM crops for nitrogen efficiency and concentrated land use; integral fast reactors which run on nuclear waste (a recent calculation shows the UK could get 500 years of clean energy from its present waste, and the resulting IFR waste is a problem for 300 years, not for thousands of years); international treaties, which are crucial for dealing with global problems; carbon capture (everything from clean coal to biochar); and ongoing “dematerialization,” doing ever more with ever less, including more intense farming on less land. “Peak consumption,” Lynas noted, has already arrived in much of the developed world. | <urn:uuid:f0cef600-6aa9-4baf-8890-e2ffdf936b2c> | 3.8125 | 1,121 | Personal Blog | Science & Tech. | 51.854011 |
5. An 80-Foot-High Tsunami Hits the U.S. East Coast
Imagine a series of tsunami waves racing across the Atlantic Ocean at more than 500 mph, headed directly for the United States. As the tsunami waves near shore, they reach a staggering height of more than 80 feet, inundating the Eastern Seaboard and washing inland several miles. Sound impossible? A couple of noted geologists, Britain’s Dr. Simon Day and Dr. Steven Ward of the University of California, created a media stir a few years ago by suggesting this very scenario. According to Day and Ward’s theory, a catastrophic eruption of the Cumbre Vieja volcano in the Canary Islands off the coast of Africa could send much of the volcano’s caldera plunging into the Atlantic, creating a tsunami that would devastate parts of several continents, including North America. Other geologists concede such a collapse would create a tsunami, but strongly disagree on whether the caldera would collapse in one giant block, the likelihood of such a collapse — it could be thousands of years from now — or whether the resulting tsunami could create such widespread devastation. Cumbre Vieja is an active volcano, most recently erupting in 1949 and 1971. Here’s a link to Day and Ward’s original report, with more of their predictions, including 300-foot-high waves smashing into the African coast.
4. Electromagnetic Pulse Bomb Knocks Out U.S. Electrical Grid
The threat of a terrorist group or rogue nation exploding a nuclear bomb over U.S. territory has many security analysts worried. A nuclear bomb or two, triggered at high altitude over the United States, could strike a critical blow to the country’s infrastructure. The electromagnetic pulse from such a bomb or bombs would fry electronic circuits. Modern cars would stop running and thousands of planes would fall from the sky. The biggest blow, however, would be the collapse of the country’s electrical grid. No power means major problems with water and sewer systems, communications, financial institutions, the supply and distribution of food and medicine and other mainstays of modern life.
The United States established the EMP Commission in 2001 to study the impact of such a threat and how the country could shield its infrastructure. Among other findings, the commission determined that such an attack could leave “significant parts” of the electrical infrastructure without power for months to a year or more. The commission’s chilling conclusion: “Should significant parts of the electrical power infrastructure be lost for any substantial period of time, the Commission believes that the consequences are likely to be catastrophic, and many people may ultimately die for lack of the basic elements necessary to sustain life in dense urban and suburban communities.” Dr. Peter Pry, president of EMPACT America, estimates the U.S. could lose two-thirds of its population to disease, starvation and societal breakdown in such a scenario. Here’s a link to the EMP Commission’s findings.
3. Earthquake Rocks South-Central U.S., Killing Tens of Thousands
In late 1811 and early 1812, a series of three earthquakes now estimated between 7.0 and 8.0 magnitude on the Richter Scale struck the sparsely populated Mississippi River Valley around the town of New Madrid, Missouri. Soil liquefied, structures collapsed and the Mississippi River ran backward. Geologists have found evidence of such severe earthquakes occurring thousands of years ago, and many are concerned a similar quake or series of quakes happening today could cause widespread death and devastation. A 2008 Federal Emergency Management Agency report noted a large earthquake in the New Madrid Seismic Zone would cause “widespread and catastrophic physical damage” in eight Southern and Midwestern states, especially Tennessee and Missouri. Hundreds of bridges could collapse, and tens of thousands of buildings would be damaged. And then comes the danger posed by nuclear power plants in the region. A comparatively minor 5.8 magnitude quake in Virginia in 2011 temporarily shut down the North Anna nuclear plant in Virginia and led a dozen other nuclear facilities, from New Jersey south to North Carolina and west to Michigan, to declare an “Unusual Event,” the lowest level in the Nuclear Regulatory Commission’s emergency classification system. There are 15 nuclear power plants in the New Madrid Seismic Zone.
How serious is this threat? Consider that the Department of Homeland Security conducted an exercise in the region in May 2011 to test emergency preparedness for a New Madrid earthquake. The drill began with the premise of 100,000 deaths.
2. Terrorist Cyber Attack Targets U.S.
Army Gen. Keith Alexander, director of the National Security Agency, said in July 2012 that he rates the U.S. at 3, on a scale of 1-10, in terms of preparedness for a cyber attack. A major concern, Alexander told an audience at the Aspen Institute’s annual security forum, is an attack that would cripple vital servers and routers, which could take weeks or months to replace. Such an attack could also target the nation’s electrical grid, creating the same type of mayhem noted in item No. 4 above on the electromagnetic pulse scenario. As the investigation into the 2003 Northeastern blackout showed, the U.S. electrical grid is woefully outdated and subject to widespread failure under the right conditions.
Several avowed enemies of the United States, including Iran and North Korea, reportedly are active in the cyber attack realm. The threat is such that in 2009, President Barack Obama appointed Gen. Alexander to head a new Cyber Command, designed to protect U.S. computer networks and assets.
1. Yellowstone Supervolcano Erupts
This has been the veritable rock star of disaster scenarios in recent years, featured in a TV special and on the cover of the August 2009 issue of National Geographic. In a nutshell, the area under and around Yellowstone National Park is actually the caldera of an enormous supervolcano that explodes with cataclysmic force every 600,000 to 700,000 years or so. The most recent Yellowstone super eruption came 640,000 years ago. Such an eruption today would take an incredible toll on human life. Most of the population for hundreds of miles around would die instantly. Others as far away as the East Coast would suffer a slow, agonizing death as volcanic dust from the eruption clogged their lungs. That’s just the beginning. Volcanic dust would also contaminate water supplies, bury crops and could cause a type of “nuclear winter,” dramatically affecting global weather for several years. The good news? Some geologists believe changes in the Yellowstone caldera since the last mega-eruption make another such eruption unlikely. | <urn:uuid:ec8160ab-7716-48d2-bc9a-7a40ada181c6> | 3.40625 | 1,386 | Listicle | Science & Tech. | 46.428347 |
[Tutor] my first program
Tue, 15 Feb 2000 06:55:35 -0500 (EST)
This is a little program I designed to test experimental odds against theoretical odds.
There is a problem.
After much deliberation, I simplified the formula that I was using to compute the differentiation percentage, which ended up doing the exact same thing, just more simply. However, the program still wouldn't work.
The problem is that I don't know how to make a division produce a real number, especially when variables are involved. (I know that I can do 2/7.0 instead of 2/7 to get a real result.)
Thanks a lot!
Also, I've got one question involving real numbers in Python: is there any way that I can limit a number to a certain amount of decimal places? Will it be rounded or truncated, or automatically rounded up?
Thanks very much!!! | <urn:uuid:54bbf4d8-f8ce-48d9-aa1c-d42e9ce5d756> | 2.984375 | 196 | Comment Section | Software Dev. | 71.809345 |
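[For readers finding this in the archive: both questions come down to standard idioms in the Python of that era, where / on two integers truncates. A minimal sketch, in Python 2 syntax matching the 2000-era post:]

# Division involving variables: convert one operand to a float first.
wins, trials = 2, 7
ratio = float(wins) / trials     # 0.2857..., not the truncated integer 0

# Limiting decimal places: round() rounds (it does not truncate),
# and string formatting rounds for display only.
print round(ratio, 3)            # 0.286
print "%.3f" % ratio             # 0.286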
FWC scientists discover new bass species
May 07, 2013
Male panther rescued with sister as kittens in 2011 is released
April 03, 2013
FWC looking for cause of pelican deaths in Brevard
March 19, 2013
Red tide bloom affecting manatees along southwest Florida coast
March 11, 2013
Headed to the beach? Help biologists monitor spawning horseshoe crabs
Red Tide Current Status: This summary report of current red tide conditions around Florida includes a map of sampling results and regional status reports.
Selected Sawfish References: A list of publications on sawfish research.
Getting to the Bottom of Stone Crab Population Trends: Researchers are conducting a long-term fishery-independent monitoring program to better understand stone crab population dynamics in Florida.
Researcher Spotlight: Charles Crawford. Crawford's work sheds light on the health of Florida's stone crab populations while satisfying his curiosity.
New Publications: New to the publications list are these published reports.
Scientists Recognize New Bass Species in Florida Waters: The Choctaw bass, which was long mistaken for spotted bass, is found in coastal rivers along the Florida Panhandle and southern Alabama.
About the Image: Biologists measure a stone crab's carapace (shell) width while out on a sampling trip. Learn more about this project.
This is continuing from the previous post (http://pdg3.lbl.gov/atlasblog/?p=1071), where I discussed how we convert data collected by ATLAS into usable objects. Here I explain the steps to get a Physics result.
I can now use our data sample to prove/disprove the predictions of Supersymmetry (SUSY), string theory or what have you. What steps do I follow? Well, I have to understand the predictions of this theory; is it saying that there will be multiple muons in an event or there will be only one very energetic jet in the event, etc? For instance, the accompanying figure shows the production and decay of SUSY particles, which lead to events with many energetic jets, a muon, and particles that escape the detector without leaving a trace (missing energy), like X1.
Cartoon of the production and decay of SUSY particles
If the signature is unique, then my life is considerably simpler; essentially, I will write some software to go through each event and pick out those that match the prediction (you can think of this as finding the proverbial (metal) needle in a haystack). If the signal I am searching for is not very unique, then I have to be much cleverer (think of this as looking for a fat, wooden needle in a haystack).
First, I have to decide the selection criteria, e.g., I want one muon with momentum greater than, say, 100 GeV/c, or one electron and exactly two jets, etc. Once I’ve decided the selection criteria, I cannot change them, and have to accept the results, whatever they may be. Otherwise, there is a very real danger of biasing the result. To decide these selection criteria, I may look at simulation, i.e., fake data, and/or sacrifice a small portion of real data to do my studies on. With these criteria, I could have a non-zero number of candidate events, or zero events.
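As a purely illustrative sketch (not ATLAS software; the event fields and thresholds here are invented for the example), a selection cut of the kind just described is conceptually nothing more than a filter over events:

# Toy event-selection cut: keep events with at least one muon above
# 100 GeV/c and large missing energy. All names and values are made up.
def passes_selection(event):
    has_hard_muon = any(mu["pt"] > 100.0 for mu in event["muons"])  # GeV/c
    return has_hard_muon and event["missing_et"] > 150.0            # GeV

events = [
    {"muons": [{"pt": 130.0}], "missing_et": 210.0},   # signal-like
    {"muons": [{"pt": 45.0}],  "missing_et": 90.0},    # background-like
]
candidates = [e for e in events if passes_selection(e)]  # keeps only the first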
In either case, I have to estimate how many events I expect to see due to garden-variety physics effects, which can occur as much as a million or a billion times more frequently, and may produce a similar signature; this is called background. This can happen because our reconstruction software could mis-identify a pion as a muon, or make a wrong measurement of an electron’s energy, or if we produce enough of these garden-variety events a few of them (out in the “tails”) may look like new physics. So I have to think of all the standard processes that can mimic what I am searching for. One way to do this is to run my analysis software on simulated events; since we know what a garden-variety process looks like, we generate tons of fake data and see if some events look like the new effect that I am looking for. I can also use “real” data, and by applying a different set of selection criteria, come up with what we call “data driven background estimate”. If the background estimate is much less than the number of candidate signal events, excitement mounts, and the result pops up on the collaboration’s radar screen.
There is usually a trade-off between increasing the efficiency of finding signal events and reducing background. If you use loose selection criteria, you expect to find more signal events, i.e., increase in efficiency, but also more background. Since the background can overwhelm the signal, one has to be careful. Conversely, if you choose very strict criteria, you could have zero background, but also zero signal efficiency – not very useful!!
There is one more thing that I need to do, which sometimes can take a while, and for which there is definitely no standard prescription. I need to determine systematic uncertainties, i.e., an error estimate for my methodology, on both the signal efficiency, and on the background estimate. For instance, if I use a meter-scale to measure the length of a table, how do I know the meter-scale is correct? I have to quantify the correctness of the meter-scale. A result in our field has to have systematic uncertainties otherwise it is meaningless. This step is usually a source of lot of arguments. For instance, in the paper mentioned in Part 1 (http://arxiv.org/pdf/1110.6191v2.pdf), we say that there is a systematic uncertainty of 6.6% (see section 6). Depending on whether this is smaller (larger) than the statistical uncertainty, we say that the result is statistics (systematics) limited. In the first case, adding more data is necessary, and in the second case, a better understanding is needed. At times, one can have a statistical fluctuation that disappears by adding more data; conversely, many results go by the wayside because of people not understanding systematic effects.
Since there is no fixed recipe to do analysis, I can sometimes run into obstacles, or my results may look “strange”; I then have to step back and think about what is going on. After I get some preliminary results I have to convince my colleagues that they are valid; this involves giving regular progress reports within the analysis group. This is followed by a detailed note, which is reviewed by an internal committee appointed by the experiment’s Publication Committee and/or the Physics Coordinator. If I pass this hurdle, the note is released to the entire collaboration for further review. All along this process, people ask me to do all sorts of checks, or tell me that I am completely wrong, or whatever. Given that every physicist thinks that he/she is smarter than the next, this process can be cantankerous at times, since I have to respond to and satisfy each and every comment. Once the experiment’s leader signs off on the paper, we submit it to a peer-reviewed journal, where the external referee(s) can make you jump through hoops; sometimes their objections are valid, sometimes not. I have been on both sides of this process. Needless to say, as a referee my objections are always valid!!
Depending on the complexity of the analysis, the time from the start to finish can be anywhere from a few months to a year or more (causing a few more grey hair, or in my case a few less hair). The two papers that I mentioned at the start of part 1 took about 1-2 years each. Luckily, I had collaborators and we divided up the work among ourselves, so I could work on both of them in parallel.
Vivek Jain is a Scientist at Indiana University, Bloomington. His current interests range from understanding various aspects of tracking to R-parity violating Supersymmetry. More information about his interests can be found at http://www.indiana.edu/~iubphys/faculty/jain2.shtml
This question really boils down to a more general one:
What degree of photometric precision can be achieved by a smartphone camera?
To put this question in context, let me give a brief explanation of what is fundamentally different about a scientific image sensor versus a consumer grade sensor.
As you would expect, a scientific CCD will usually have much higher quality than a consumer grade CCD. In this context, "quality" is quantified by dozens of characteristics of the CCD, such as the uniformity of the sensitivity of each pixel, dark current, defective pixel count, photoelectron well depth, electron lag, spectral response, bleed and saturation, etc. The superior performance of a scientific CCD is, of course, helpful in making a good measurement; but the really critical distinction of a scientific CCD is that each and every property of the CCD will be measured and tested before the CCD is ever put into use. In fact, when purchasing a scientific CCD one receives many pages of documentation detailing each characteristic of the sensor, and the testing methods used to measure those characteristics.
This knowledge of the behavior of the sensor is critical. It allows us to take the raw image data produced by the CCD, and make a very precise estimate of the light that we are actually interested in measuring. Even more importantly, it lets us quantify how precise this estimate is. As a general rule in science, a result is worthless if you cannot state how much confidence you have in it.
So what does this mean for making a measurement with a smartphone camera? It means the quality of your measurement will depend strongly on how much effort you put in to measuring the characteristics of the sensor. It also depends on what sort of measurement you want.
For example, if all you want to do is determine whether there is more light in one image than in another image, you could do a pretty good job just by looking at the average pixel values and comparing them. This is basically what is described in the article linked in the question. In this case you're using the same sensor in both cases, so regardless of the characteristics of the sensor, you can be pretty confident that a brighter image will result from a brighter sky.
As you try to make better measurements, you begin to require better knowledge of the sensor. What if you want to measure exactly how much brighter one scene is than another? You would need to know about the background level measured by your sensor, so you would need a dark frame (an image captured while no light is reaching the sensor); you would need to know if the value measured by each pixel is a linear function of the amount of light hitting the pixel, so you would need to expose the sensor to a series of illumination sources of known intensity and compare the measured value with the known illumination; you would need to know how likely it is that the observed change in image brightness is simply due to noise, so you would need to quantify the read noise, dark noise, and possibly photon noise. The list can very quickly become very long!
The short answer is that you probably could measure light pollution with a smartphone camera, but realistically it would be a very rough measurement. At the very least it would require you to be able to control the exposure time and gain (often called the "ISO speed") of the camera. In my experience, smartphones usually change these settings automatically, so you would need to write a special app to collect your images. | <urn:uuid:69f397b7-973c-4e83-85fd-31dd8d620bf1> | 2.703125 | 701 | Q&A Forum | Science & Tech. | 37.869417 |
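To illustrate the rough end of that spectrum, here is a minimal sketch of the simple comparison described earlier: subtract a dark frame, then compare mean pixel levels of two sky images. It assumes two images taken with identical, fixed exposure and gain, saved as ordinary image files; the filenames are placeholders.

# Compare mean sky brightness of two fixed-exposure images after
# removing the sensor's background level with a dark frame.
import numpy as np
from PIL import Image

def mean_sky_level(path, dark):
    pixels = np.asarray(Image.open(path).convert("L"), dtype=float)
    return (pixels - dark).mean()

dark = np.asarray(Image.open("dark_frame.png").convert("L"), dtype=float)
city = mean_sky_level("city_sky.png", dark)
rural = mean_sky_level("rural_sky.png", dark)
print("city sky is %.1f times brighter than rural" % (city / rural))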
What is the difference between a scalar and a vector?

A scalar is a numerical quantity. It has magnitude (size) only.
Examples: A car is moving at 55 miles per hour. A ball is thrown at 30 feet per second.

A vector has magnitude and direction.
Examples: A car is moving at 55 mph to the northeast. A ball is thrown at 30 ft/s at an angle of 50° to the horizontal.
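To see what the direction adds, the second example can be resolved into horizontal and vertical components (values rounded): $v_x = 30\cos 50^\circ \approx 19.3$ ft/s and $v_y = 30\sin 50^\circ \approx 23.0$ ft/s. The speed alone (a scalar) cannot distinguish a ball thrown at 50° from one thrown straight up; the component pair can.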
That explanation is only true if the field is $\mathbb{R}$ (or subfields of $\mathbb{R}$) and the vector space is a normed space. It doesn't make sense when it is, e.g., a $\mathbb{C}$-vector space, or the space of rational functions of $n$ variables over a field $k$ of positive characteristic.
A close series of consecutive exposures is combined in this intriguing composite of the Full Moon slowly crawling across the sky. Beginning on the upper right at 19:42 UT and ending at 22:14 UT on April 25, the sequence follows the Moon from Germany as it passes through Earth's shadow in a partial lunar eclipse. Near the top, the Moon just grazes the southern edge of Earth's dark central shadow, or umbra. Shading in the darker part of the outer shadow region, the penumbra, is also apparent on the lunar disk. The relative size and shape of Earth's umbral shadow and the Moon are easier to see along the segments of this lunar caterpillar. Nearly impossible to follow with the eye though, a penumbral lunar eclipse, with the Full Moon passing only through the pale outer penumbral shadow, will begin on May 25.
What is 'Java'?
Java is an object-oriented computer language from Sun Microsystems.
Compared to many other languages, such as FORTRAN and LISP, Java is a relative newcomer on the scene. It started out in 1995 as a simple C-like language whose main advantages were automatic garbage collection and the ability to bring web pages to life with applets. It is also platform-independent, as programs run in a virtual machine on the supported platform. That means, for example, that a program written on a UNIX machine will run without modification on a Windows PC.
Early in the development of the World Wide Web, Sun and Netscape announced that Java 1.0 would be included in Netscape Navigator, the most popular web browser at the time. In 1997, Java 1.1 brought a more scaleable events model for the programming of graphical user interfaces and the introduction of inner classes. Java 1.2 and 1.3 saw the introduction of the Collections framework, incremental improvements in Swing GUI components, and the introduction of countless other libraries and Application Programmer Interfaces (APIs), such as JavaMail and the Java Speech API. We also saw the rise of Java as a server-side language, with Java Server Pages (JSPs) and Java Enterprise Beans. In 2002, Java 1.4 introduced assertions into the core language, and a logging facility.
The latest release (at the time of writing) is Java 1.5. This version changes the syntax of Java in some areas to add new features to the language. Some important additions are type-safe enumerations and generics. For more information, read this article about Java 1.5.
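To give a flavor of those additions, here is a small, self-contained sketch (not from the article); the class name, list contents, and enum values are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class Java15Demo {
    // A type-safe enumeration: the compiler rejects values outside this set.
    enum Season { WINTER, SPRING, SUMMER, FALL }

    public static void main(String[] args) {
        // Generics: the list is declared to hold only Strings,
        // so no cast is needed when reading elements back.
        List<String> names = new ArrayList<String>();
        names.add("Duke");
        String first = names.get(0); // no (String) cast required

        Season when = Season.SUMMER;
        System.out.println(first + " codes best in " + when);
    }
}
```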
Everywhere humans have ever gone, they have left garbage in their wake. This includes space, too. Right now, there are thousands of man-made satellites orbiting Earth, most of which can be classified as space junk. Obviously, with so much floating around space, it is only natural that space program planners are growing increasingly concerned over risks to future missions.
Well, according to a recent proposal, a new kind of satellite could serve as a cosmic cleaner, mopping up near-earth orbit of space junk.
So, how could it work?
According to its designers, the TAMU Space Sweeper satellite would launch into space, whereupon it would start spinning. Directed toward a piece of debris, the satellite would catch it in a robotic arm, fling the junk down toward Earth to burn up in the atmosphere, and then, using the momentum from the first encounter, fly to another target, and so on. The benefits here: minimal fuel and no need to develop new technology. The Space Sweeper is currently in development at Texas A&M University.
If all goes according to plan, the TAMU could help correct a decades-old problem.
Throughout the history of spaceflight, the answer to what to do with unwanted material was simple: just let it float away. Unfortunately, in a problem that someone surely could have foreseen, Earth's orbit is now becoming filled with pieces of space junk ranging from dead satellites to astronauts' daily garbage. Now, while this may not seem all that big of a problem -- after all, the Earth is over 24,000 miles in circumference -- the cause for concern is that this space litter is not just floating in orbit, but traveling around the Earth at thousands of miles per hour. Needless to say, anything traveling at that speed can do a lot of damage if it were to hit something.
Now, with more junk floating around in space than ever after more than 50 years of spaceflight, some scientists are worried that all of the man-made debris orbiting our planet may pose serious threats to space programs by 2030 if something is not done to clean up space.
Lastly, what are the consequences if we fail to act?
At the rate we humans are going, there are predictions that, in the coming decades, the popular low-Earth and geosynchronous orbits of today may become unusable simply because of all the junk occupying them. Think about it: Would any private business or government agency want to put a satellite into an orbit where it knew the multimillion-dollar piece of equipment was sure to be bashed to pieces? I think not.
So, while the environmental movement on Earth has gone mainstream and is well-entrenched in the public mind, space enthusiasts are undoubtedly hoping that, in the near future, the same will happen when it comes to space junk, too.
For more info:
Hit the 'subscribe' button for automatic email updates when I write something new!
Want even more? Check out my personal website:
Bodzash Photography & Astronomy | <urn:uuid:90e82fe4-bab8-472d-b732-7b5d16d292f4> | 3.765625 | 629 | Personal Blog | Science & Tech. | 45.339947 |
MIDI is a system for control of digital music instruments. The MIDI standard also defines a file format to store such control data. MIDI treats music as a sequence of notes. Audio signal processing is not its purpose.
- MIDI files can be created and dissected in Haskell using the midi library (see the sketch after this list). In the past this functionality was integrated in Haskore. There is also a Darcs repository.
- You can compile Haskore music into MIDI files.
- You can do real-time MIDI processing | <urn:uuid:f89c9401-856b-4e1e-8e20-431bdcef11c7> | 2.734375 | 134 | Documentation | Software Dev. | 50.516612 |
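A minimal sketch of the first point: loading a standard MIDI file and writing it back out with the midi library. The module and function names below follow the package as I recall it, so treat them as assumptions rather than verified API.

```haskell
import qualified Sound.MIDI.File.Load as Load
import qualified Sound.MIDI.File.Save as Save

main :: IO ()
main = do
  song <- Load.fromFile "input.mid"  -- parse the MIDI control data
  Save.toFile "copy.mid" song        -- serialize it back to disk unchanged
```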
The German mathematician and astronomer Johannes Kepler (1571–1630) was an avowed Platonist, and set out early in his professional career to demonstrate that the motion of the planets was circular, in accordance with the established Aristotelian doctrine, and that they could be described in terms of the Platonic solids. However, he was also a friend and assistant to the great Danish astronomer Tycho Brahe, who was making precise and detailed observations of the planets and stars. When Tycho Brahe died, in 1601, Kepler inherited this enormous mountain of raw data. After studying this data for 20 years, Kepler came to understand that his earlier assumptions about planetary motion had been naive, and that if an earth-centered (Ptolemaic) understanding of the universe were abandoned for a sun-centered (Copernican) model, then the motion of the planets was clearly elliptical.
From this basis, Kepler generated his three famous laws of planetary motion:
- The orbit of each planet is an ellipse with the sun at one focus.
- The line segment joining a planet to the sun sweeps out equal areas in equal time intervals.
- The square of the period of revolution of a planet about the sun is proportional to the cube of the semimajor axis of the planet's elliptical orbit.

These laws are illustrated in the following diagram:
Kepler's laws imply that the speed of revolution of a planet around the sun is not uniform, but changes throughout the planet's year. It is fastest when the planet is nearest the sun (called the perihelion) and slowest when the planet is farthest away (aphelion). Of course, a circle is also an ellipse (an ellipse with eccentricity 0, in which the foci coincide at the center of the circle), and indeed the orbits of most planets are far more nearly circular than the diagram would suggest. But they are not circles nonetheless; they are ellipses with non-zero eccentricity.
The third law means that if Y is the length of a planet's year, that is, the time it takes the planet to make a complete revolution about the sun, and if we denote by a the length of the semimajor axis of the planet's orbit, then the quantity Y²/a³ is the same for every planet (and comet, and other satellite) in the solar system. Thus, if a planet's orbit is known, the length of its year can be immediately calculated, and vice versa.
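A quick numerical illustration (mine, not the author's): in units where Earth's orbit sets the scale, with a in astronomical units and Y in Earth years, the constant Y²/a³ equals 1, so the period follows directly from the semimajor axis.

```python
def orbital_period_years(a_au: float) -> float:
    """Period of a solar orbit with semimajor axis a_au (in AU), in years."""
    return a_au ** 1.5  # from Y**2 / a**3 = 1

# Mars: a is about 1.524 AU, giving Y of about 1.88 years,
# which matches the length of the Martian year.
print(orbital_period_years(1.524))  # ~1.881
```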
Kepler's laws were empirical, that is, they were derived strictly from careful observation and had no purely theoretical foundation. However, about 30 years after Kepler died, the English mathematician and physicist Sir Isaac Newton derived his inverse square law of gravity, which says that the force acting on two gravitating bodies is proportional to the product of their masses and inversely proportional to the square of the distance between them. Kepler's laws may be derived from this theoretical principle using calculus.
Name: Patricia A Marengo
Why does the Sun burn if there is no oxygen in space?
The Sun 'burns' nuclear fuel that does not need oxygen. Nuclear energy comes in two forms: fission, in which heavy elements are broken down into lighter ones; and fusion, in which light elements are 'fused' together to make heavier elements. Stars 'burn' by nuclear fusion.
Thunderstorms affect relatively small areas when compared with hurricanes and winter storms. The typical thunderstorm is 15 miles in diameter and lasts an average of 30 minutes. Despite their small size, ALL thunderstorms are dangerous! Of the estimated 100,000 thunderstorms that occur each year in the United States, about 10 percent are classified as severe.
Tornadoes - Although tornadoes occur in many parts of the world, they are found most frequently in the United States.....
Union City, Oklahoma tornado. Learn more...
- A tornado is a violently rotating column of air extending from a thunderstorm to the ground.
- Tornadoes cause an average of 70 fatalities and 1,500 injuries in the U.S. each year.
- The strongest tornadoes have rotating winds of more than 250 mph.
- Tornadoes can be one mile wide and stay on the ground over 50 miles.
- Tornadoes may appear nearly transparent until dust and debris are picked up or a cloud forms within the funnel. The average tornado moves from southwest to northeast, but tornadoes have been known to move in any direction.
- The average forward speed is 30 mph but may vary from nearly stationary to 70 mph.
- Waterspouts are tornadoes which form over warm water. They can move onshore and cause damage to coastal areas.
- Causes an average of about 60 fatalities and 300 injuries each year.
- Lightning occurs in all thunderstorms; each year lightning strikes the United States 25 million times.
- The energy from one lightning flash could light a 100-watt light bulb for more than 3 months.
- Most lightning fatalities and injuries occur when people are caught outdoors in the summer months during the afternoon and evening.
- Lightning can occur from cloud-to-cloud, within a cloud, cloud-to-ground, or cloud-to-air.
- Many fires in the western United States and Alaska are started by lightning.
- The air near a lightning strike is heated to 50,000°F--hotter than the surface of the sun!
- The rapid heating and cooling of the air near the lightning channel causes a shock wave that results in thunder.
- When Thunder Roars, Go Indoors! - The NWS lightning safety site helps you learn more about lightning risks and how to protect yourself, your loved ones and your belongings. The site offers a comprehensive page of handouts, brochures, links and more.
- Straight-line winds are responsible for most thunderstorm wind damage.
- Winds can exceed 100 mph!
- One type of straight-line wind, the downburst, is a small area of rapidly descending air beneath a thunderstorm.
- A downburst can cause damage equivalent to a strong tornado and can be extremely dangerous to aviation.
- A “dry microburst” is a downburst that occurs with little or no rain. These destructive winds are most common in the western United States.
- Is the #1 cause of deaths associated with thunderstorms...more than 140 fatalities each year
- Most flash flood fatalities occur at night and most victims are people who become trapped in automobiles.
- Six inches of fast-moving water can knock you off your feet; a depth of two feet will cause most vehicles to float.
- Strong rising currents of air within a storm, called updrafts, carry water droplets to a height where freezing occurs.
- Ice particles grow in size, becoming too heavy to be supported by the updraft, and fall to the ground.
- Causes more than $1 billion in damage to property and crops each year.
- Large stones fall at speeds faster than 100 mph.
Multiple lightning strokes observed during night-time thunderstorm. Learn more...
The National Severe Storms Laboratory is one of NOAA's internationally known research laboratories, leading the way in investigations of all aspects of severe weather. Headquartered in Norman, OK, the people of NSSL, in partnership with the National Weather Service, are dedicated to improving severe weather warnings and forecasts in order to save lives and reduce property damage.
Severe weather research conducted at NSSL has led to substantial improvements in severe and hazardous weather forecasting resulting in increased warning lead times to the public. NSSL scientists are exploring new ways to improve our understanding of the causes of severe weather and ways to use weather information to assist National Weather Service forecasters, as well as federal, university and private sector partners.
The Storm Prediction Center (SPC) is part of the National Weather Service (NWS) and the National Centers for Environmental Prediction (NCEP). The mission of the SPC is to provide timely and accurate forecasts and watches for severe thunderstorms and tornadoes over the contiguous United States. The SPC also monitors heavy rain, heavy snow, and fire weather events across the U.S. and issues specific products for those hazards.
Weather Forecast Offices of NOAA’s National Weather Service issue local Severe Thunderstorm, Tornado and Flash Flood warnings. Severe thunderstorm, tornado, and flash flood warnings are passed to local radio and television stations and are broadcast over local NOAA Weather Radio stations serving the warned areas. These warnings are also relayed to local emergency management and public safety officials who can activate local warning systems to alert communities.
NOAA Weather Radio is the best means to receive warnings from the National Weather Service.
The National Weather Service continuously broadcasts warnings and forecasts that can be received by NOAA Weather Radios, which are sold in many stores. The average range is 40 miles, depending on topography. Purchase a radio that has a battery back-up and a Specific Area Message Encoder feature, which automatically alerts you when a watch or warning is issued for your county or parish.
When conditions are favorable for severe weather to develop, a severe thunderstorm or tornado WATCH is issued. Weather Service personnel use information from weather radar, spotters, and other sources to issue severe thunderstorm and tornado WARNINGS for areas where severe weather is imminent. Severe thunderstorm and tornado warnings are passed to local radio and television stations and are broadcast over local NOAA Weather Radio stations serving the warned areas. These warnings are also relayed to local emergency management and public safety officials who can activate local warning systems to alert communities. If a tornado warning is issued for your area or the sky becomes threatening, move to your pre-designated place of safety.
Check with your local National Weather Service office or visit the Internet site to determine if your county is covered by NOAA Weather Radio. National Weather Service watches and warnings are also available on the Internet by selecting your local National Weather Service office at or by going to the National Weather Service Home Page.
Terms to know:
Tornado - A violently rotating column of air, usually pendant to a cumulonimbus, with circulation reaching the ground. It nearly always starts as a funnel cloud and may be accompanied by a loud roaring noise. On a local scale, it is the most destructive of all atmospheric phenomena
Severe Thunderstorm - A thunderstorm that produces a tornado, winds of at least 58 mph (50 knots), and/or hail at least 1 inch in diameter. Structural wind damage may imply the occurrence of a severe thunderstorm.
Flash Flood - A flood which is caused by heavy or excessive rainfall in a short period of time, generally less than 6 hours. Also, at times a dam failure can cause a flash flood, depending on the type of dam and time period during which the break occurs.
Tornado Watch: Tornadoes are possible in your area. Remain alert for approaching storms. Know what counties or parishes are in the watch area by listening to NOAA Weather Radio or your local radio/television outlets.
Severe Thunderstorm Watch: Tells you when and where severe thunderstorms are likely to occur. Watch the sky and stay tuned to know when warnings are issued.
Flash Flood Watch - Issued to indicate current or developing hydrologic conditions that are favorable for flash flooding in and close to the watch area, but the occurrence is neither certain or imminent.
Tornado Warning: A tornado has been sighted or indicated by weather radar.
Severe Thunderstorm Warning: Issued when severe weather has been reported by spotters or indicated by radar. Warnings indicate imminent danger to life and property to those in the path of the storm.
Flash Flood Warning - Issued to inform the public, emergency management, and other cooperating agencies that flash flooding is in progress, imminent, or highly likely. | <urn:uuid:bfc957b5-5e03-441d-8c42-bde63cfc9d97> | 3.71875 | 1,758 | Knowledge Article | Science & Tech. | 44.058268 |
Eric A. Cornell held his Nobel Lecture December 8, 2001, at Aula Magna, Stockholm University. He was presented by Professor Mats Jonson, Chairman of the Nobel Committee for Physics. Summary: Fundamental ideas behind creating Bose-Einstein condensate (BEC) in a gas are outlined. Starting with Heisenberg's uncertainty principle, the formation of Bose-Einstein condensate (BEC) is explained as occurring when the interatomic spacing is comparable to thermal de Broglie wavelength. The conditions for creating BEC in a gas are described, and the necessary ingredients for creating BEC in a gas are listed in an "Ultra Cold Alkali Tool Kit". Credits: Kamera Communications (webcasting)
Copyright © Nobel Web AB 2001
The Nobel Prize in Physics 2001 Lecture | <urn:uuid:bcda4f8f-e2dc-4d6a-8e73-3cc2db858337> | 3.140625 | 189 | Truncated | Science & Tech. | 39.809992 |
News From the Field
Physicists Discover New Way to Visualize Warped Space and Time
April 11, 2011
When black holes slam into each other, the surrounding space and time surge and undulate like a heaving sea during a storm. This warping of space and time is so complicated that physicists haven't been able to understand the details of what goes on--until now.
California Institute of Technology
The National Science Foundation (NSF) is an independent federal agency that supports fundamental research and education across all fields of science and engineering. In fiscal year (FY) 2012, its budget was $7.0 billion. NSF funds reach all 50 states through grants to nearly 2,000 colleges, universities and other institutions. Each year, NSF receives about 50,000 competitive requests for funding, and makes about 11,500 new funding awards. NSF also awards about $593 million in professional and service contracts yearly.
Get News Updates by Email
Useful NSF Web Sites:
NSF Home Page: http://www.nsf.gov
NSF News: http://www.nsf.gov/news/
For the News Media: http://www.nsf.gov/news/newsroom.jsp
Science and Engineering Statistics: http://www.nsf.gov/statistics/
Awards Searches: http://www.nsf.gov/awardsearch/ | <urn:uuid:79411a1e-0f93-44b6-9fd9-220782a671f8> | 2.90625 | 293 | Content Listing | Science & Tech. | 70.519789 |
Looking at Cells Under the Energy Microscope
When we discussed heat and entropy, it became crystal clear that thermodynamics is critically important to cells and cell function. Now, let's take a closer look at the involvement of thermodynamics in cellular biochemical reactions. The second law of thermodynamics (yep, it's back for more) can be interpreted in the context of chemical reactions, where energy transfer events tend to progress downhill—that is, products have less energy than reactants—and any extra energy produced during chemical reactions is lost as heat, which ultimately increases the disorder in the universe. In other words, once an energy transfer event occurs, there is less energy left in the product to do additional work.

Cellular reactions that occur spontaneously will proceed to a more disordered, but lower energy state. This may seem counterintuitive if you were thinking that increasing disorder always means increasing energy in the products, but this is not the case. Remember that cellular reactions release heat to the surroundings as they form products that are themselves in a lower energy state.
Proteins that have been broken down into their individual amino acids (see the Biomolecules and the Chemistry of Life unit if this is not ringing a bell) by a cellular reaction will not suddenly spontaneously re-form intact proteins. Re-forming them is an uphill reaction: a reaction that involves a large input of energy, such as the reactions that build proteins or nucleic acids. However, when we start trying to predict if a reaction will occur, or why a protein takes on a certain structure, it is not enough to rely on the second law of thermodynamics. Instead, we need to combine the first and the second laws of thermodynamics.
The first and second laws of thermodynamics were combined into one equation by Josiah Willard Gibbs (not Barry, Robin, or Maurice) in the late 1800s. The equation is

ΔH = ΔG + TΔS

or, the change in something called enthalpy (ΔH) equals the change in something called free energy (ΔG) plus the absolute temperature (T, in degrees Kelvin, where x degrees Kelvin = y degrees Celsius + 273) multiplied by the change in entropy (ΔS). Did you catch all that?
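To make the terms concrete, here is a small numerical sketch (mine, not Shmoop's) using the rearranged form ΔG = ΔH - TΔS; the input values are made up for illustration.

```python
def delta_g(delta_h_kj: float, temp_c: float, delta_s_kj_per_k: float) -> float:
    """Free-energy change in kJ; temperature is converted to Kelvin."""
    temp_k = temp_c + 273  # x degrees Kelvin = y degrees Celsius + 273
    return delta_h_kj - temp_k * delta_s_kj_per_k

# Heat released (dH = -10 kJ) and disorder increased (dS = +0.02 kJ/K)
# at 25 degrees Celsius: dG comes out negative, so the reaction can proceed.
print(delta_g(-10, 25, 0.02))  # -15.96 kJ
```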
This equation is useful (it is; stop shaking your head) because it allows scientists to find out information about a reaction while only knowing a few details about the system. This equation is not nearly as equation-y as it may seem at first glance.
You might say, "But Shmoop, what exactly is enthalpy?" Sorry, you are not privileged enough to receive such information. In all seriousness, you will learn more about enthalpy in chemistry, but for our purposes, enthalpy is equal to the total amount of energy in a system.
If a system gains heat from a chemical reaction, ΔH will be positive. Alternatively, if heat is lost from a system, ΔH will be negative. Not hard, right? Then, there is the term TΔS. We know that entropy is a measurement of disorder. In this case, TΔS takes into account only the entropy change of the system:
- If a reaction increases the disorder in a system, the entropy term TΔS will be positive.
- If a reaction decreases the disorder in a system, the entropy term will be negative.
Now, what about free energy? You wish gasoline were free... Oh, right, ΔG. Coming right up.

Brain Snack

Spontaneous combustion—when something starts to be consumed by fire without external ignition—is real! And, in reference to our enthalpy discussion above, spontaneous combustion occurs if the heat in the system increases without being able to escape.
Amazingly, if we were actually able to convert matter perfectly to energy, annihilating 1 kg of matter (together with the 1 kg of antimatter it annihilates against) would release about 42.95 megatons of TNT equivalent. So an adult male weighing in at around 200 pounds has somewhere in the vicinity of 4000 megatons of TNT potential in his matter if completely annihilated.
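A back-of-the-envelope check of those figures (my arithmetic, not the original post's), using E = mc² and 4.184 × 10¹⁵ joules per megaton of TNT:

```python
C = 2.998e8            # speed of light, m/s
MEGATON_J = 4.184e15   # joules per megaton of TNT

def megatons(mass_kg: float) -> float:
    return mass_kg * C**2 / MEGATON_J

print(megatons(2.0))       # ~43.0: 1 kg of matter plus 1 kg of antimatter
print(megatons(2 * 90.7))  # a 200 lb (90.7 kg) person plus antimatter: ~3900
```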
This is about 80 times more energy than was produced by the largest ever detonated nuclear bomb, the Tzar Bomba, which itself produced a blast about 1,400 times more powerful than the combined explosions of the bombs dropped on Hiroshima and Nagasaki.
To further illustrate, 1 megaton of TNT, when converted to kilowatt hours, makes enough electricity to power an average American home for about 100,000 years. It is also enough to power the entire United States for roughly two and a half hours. So 1 kg of matter being completely annihilated would be able to power the entire United States for nearly five days. One average adult male, then, when completely annihilated, would produce enough energy to power the U.S. for over a year. Energy crisis solved.
On a completely baffling scale, a typical supernova explosion will give off about 10,000,000,000,000,000,000,000,000,000 megatons of TNT. *cowers in the corner* | <urn:uuid:0fb757d5-1dda-4071-a32e-f6d5ea8a994e> | 2.921875 | 282 | Personal Blog | Science & Tech. | 48.091733 |
Extensible Markup Language (XML) is a simple, very flexible text format derived from SGML. Originally designed to meet the challenges of large-scale electronic publishing, XML is also playing an increasingly important role in the exchange of a wide variety of data on the Web and elsewhere. (Taken from the W3C)
XML has emerged not so much as another new technology added on to the pile that is straining the Web, but rather as a reassertion of one of the original principles that made the Web possible: Simplicity.
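That simplicity is easy to demonstrate: the fragment below (an invented example, not from the W3C text) parses a two-element XML document with nothing more than Python's standard library.

```python
import xml.etree.ElementTree as ET

doc = "<note><to>Tove</to><from>Jani</from></note>"
root = ET.fromstring(doc)     # the whole parser setup is one call
print(root.find("to").text)   # prints: Tove
```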
The links below have been assembled to help you, no matter what your level of XML expertise.
Also, don't forget our forums. You can always find an answer there. | <urn:uuid:c0d86c55-418b-4e6c-9fd0-19b66a9befb0> | 2.9375 | 150 | Knowledge Article | Software Dev. | 46.351299 |
Metalastic Wheels (1962)
On 7 November 1962, NASA told the world that it had selected Grumman Aircraft Engineering Company to manufacture the Apollo Lunar Module (LM) manned moon lander. Even before the U.S. civilian space agency tapped the company to build the LM, Grumman engineers had begun to look beyond Apollo. In a paper presented in June 1962, for example, engineer Edward Markow summed up 18 months of Grumman studies of advanced lunar surface locomotion systems.
Markow reported that the moon’s reduced gravity (equal to one-sixth of Earth’s gravitational pull) created unique difficulties for designers of lunar surface vehicles. For example,
a simple turning maneuver has to contend with the same centrifugal forces as on earth, but only 1/6 of the stability force [provided by gravity] is available on the moon. [Thus, a] 3000-pound vehicle would require a 16-foot wheel base, just to prevent overturning, while negotiating a modest 20-foot radius turn at 10 miles per hour. Response of the vehicle’s suspension system to bumps is also exaggerated. . .Contacting a mere 4-inch bump at 10 miles per hour was shown to cause a 3000-pound vehicle to leave the surface for a distance of twenty feet.
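A rough sanity check of those turning numbers (my arithmetic, not Markow's) compares the sideways centripetal acceleration in that 20-foot-radius, 10 mph turn against lunar surface gravity:

```python
G_MOON = 9.81 / 6        # lunar surface gravity, m/s^2
v = 10 * 0.44704         # 10 mph in m/s
r = 20 * 0.3048          # 20 ft in m

a_centripetal = v**2 / r      # ~3.3 m/s^2
print(a_centripetal / G_MOON) # ~2.0: the sideways push is about twice
                              # the gravity holding the vehicle down
```

With the sideways push roughly double the available stability force, it is easy to see why such a wide wheelbase was needed.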
To help solve these anticipated problems, Grumman envisioned equipping its proposed 3000-pound lunar traverse vehicle with four six-foot-diameter metal-elastic (“metalastic”) wheels weighing 120 pounds each. Each wheel would include a hub housing the wheel’s motor and transmission. The metalastic wheel, Markow reported, would take on an elliptical shape under the vehicle’s weight, providing the favorable ground contact characteristics of a caterpillar tread without its mass and complexity. The rim would deform when it struck a bump (for example, a rock), preventing the vehicle from bouncing off the ground.
Grumman found most promising a metalastic wheel consisting of flexible spiral spokes and a rim with evenly spaced cleats (image at top of post). This design the company tested beside a rigid metal wheel on two simulated lunar surfaces: Long Island beach sand (presumably collected near Grumman’s headquarters in Bethpage, Long Island, New York) and crushed shale. Not until April 1967 would a robot lander (Surveyor III) provide detailed data on the texture and bearing strength of the moon’s surface, so Grumman based its simulated lunar surfaces on best guesses; a “granular” model proposed by the Jet Propulsion Laboratory in Pasadena, California, and a “rock froth” model based on data gathered by bouncing radar pulses off the moon.
Markow reported that, compared to the rigid wheel, the flexible metalastic wheel needed 50% less energy to roll over the simulated lunar surfaces and provided 60% more traction. It also pulled a trailer 40% more efficiently and demonstrated improved “obstacle climbing performance.”
“Metalastic Wheels for Lunar Locomotion,” IAS 62-135, Edward G. Markow; paper presented at the Institute of the Aerospace Sciences National Summer Meeting in Los Angeles, California, 19-22 June 1962.
I research and write about the history of space exploration and space technology with an emphasis on missions and programs planned but not flown (that is, the vast majority of them). Views expressed are my own. | <urn:uuid:290188cd-53a5-4837-ad54-4f35846feeeb> | 3.96875 | 712 | Personal Blog | Science & Tech. | 39.346636 |
7.5.1 File Objects
Python's built-in file objects are implemented entirely on the FILE* support from the C standard library. This is an implementation detail and may change in future releases of Python.
- PyFileObject
This subtype of PyObject represents a Python file object.
- PyTypeObject PyFile_Type
This instance of PyTypeObject represents the Python file type. This is exposed to Python programs as file and types.FileType.
- int PyFile_Check(PyObject *p)
Returns true if its argument is a PyFileObject.
- PyObject* PyFile_FromString(char *filename, char *mode)
On success, returns a new file object that is opened on the
file given by filename, with a file mode given by mode,
where mode has the same semantics as the standard C routine fopen(). On failure, returns NULL.
- PyObject* PyFile_FromFile(FILE *fp, char *name, char *mode, int (*close)(FILE*))
Creates a new PyFileObject from the already-open standard C file pointer, fp. The function close will be called when the file should be closed. Returns NULL on failure.
- FILE* PyFile_AsFile(PyFileObject *p)
Returns the file object associated with p as a FILE*.
- PyObject* PyFile_GetLine(PyObject *p, int n)
Equivalent to p.readline([n]), this function reads one line from the object p. p may be a file object or any object with a readline() method. If n is 0, exactly one line is read, regardless of the length of the line. If n is greater than 0, no more than n bytes will be read from the file; a partial line can be returned. In both cases, an empty string is returned if the end of the file is reached immediately. If n is less than 0, however, one line is read regardless of length, but EOFError is raised if the end of the file is reached immediately.
- PyObject* PyFile_Name(PyObject *p)
Returns the name of the file specified by p as a string object.
- void PyFile_SetBufSize(PyFileObject *p, int n)
Available on systems with setvbuf() only. This should only be called immediately after file object creation.
- int PyFile_SoftSpace(PyObject *p, int newflag)
This function exists for internal use by the interpreter.
Sets the softspace attribute of p to newflag and returns the previous value. p does not have to be a file object for this function to work properly; any object is supported (though it is only interesting if the softspace attribute can be set). This function clears any errors, and will return 0 as the previous value if the attribute either does not exist or if there were errors in retrieving it. There is no way to detect errors from this function, but doing so should not be needed.
- int PyFile_WriteObject(PyObject *obj, PyFileObject *p, int flags)
Writes object obj to file object p. The only supported flag for flags is Py_PRINT_RAW; if given, the str() of the object is written instead of the repr(). Returns 0 on success or -1 on failure; the appropriate exception will be set.
- int PyFile_WriteString(char *s, PyFileObject *p)
Writes string s to file object p. Returns 0 on success or -1 on failure; the appropriate exception will be set.
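A minimal usage sketch of two of the calls above (my example, not part of the original reference). It assumes an already-initialized Python 2 interpreter, and error handling is abbreviated:

```c
#include <Python.h>

/* Open a file through the C API and write one line to it. */
static int write_greeting(void)
{
    PyObject *file = PyFile_FromString("out.txt", "w");
    if (file == NULL)
        return -1;  /* an exception has already been set */
    if (PyFile_WriteString("hello\n", (PyFileObject *)file) < 0) {
        Py_DECREF(file);
        return -1;
    }
    Py_DECREF(file);
    return 0;
}
```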
A farmer wants to enclose a rectangular field by a fence and divide it into two smaller rectangular fields by constructing another fence parallel to one side of the field.
The farmer has 3000 yards of fencing. Find the dimensions of the field so that the total enclosed area is a maximum. (Hint: let h be the height and w be the width.)
Then 3h + 2w = 3000. You want to maximize the area hw. If you solve for h in terms of w, then substitute into the expression hw, you get a quadratic function (you could just as well solve for w in terms of h). Find the maximum of the quadratic using one of three techniques.
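A quick check of that approach (mine, not part of the original hint), letting sympy do the algebra; the variable names follow the hint.

```python
import sympy as sp

w = sp.symbols("w", positive=True)
h = (3000 - 2 * w) / 3            # solve the constraint 3h + 2w = 3000 for h
area = sp.expand(h * w)           # a downward-opening quadratic in w
w_best = sp.solve(sp.diff(area, w), w)[0]

print(w_best, h.subs(w, w_best), area.subs(w, w_best))
# 750 500 375000  -> a 750 by 500 yard field encloses 375,000 square yards
```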
1. Make a sketch of the situation (see attachment).
2. A solid with a mass of 25 kg develops a vertical force of F = m·g = 25 kg × 9.81 m/s² ≈ 245 N.
3. Cable and pole have to develop an uplifting force of the same value. In my sketch this uplifting force is labeled F.
4. The angle between pole and cable is 60°.
5. The cable is pulling upward.
6. The pole is pushing to the right.
7. The angle between the cable and the uplifting force is 30°.
8. You are dealing with a right triangle whose angles you know, plus the length of one leg.
9. The force with which the cable is pulling can be calculated by: F_cable = F / cos(30°) ≈ 245 N / 0.866 ≈ 283 N.
10. The force with which the pole is pushing to the right can be calculated by: F_pole = F · tan(30°) ≈ 142 N.
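A quick numeric check of steps 9 and 10 (assuming g = 9.81 m/s² and the 30° geometry described above):

```python
import math

F = 25 * 9.81                             # weight of the solid, ~245 N
F_cable = F / math.cos(math.radians(30))  # ~283 N of pull along the cable
F_pole = F * math.tan(math.radians(30))   # ~142 N of horizontal push
print(F, F_cable, F_pole)
```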
A common feature of the X-ray-bright shells surrounding the radio bubbles shown in Section 2 is that the pressure measured for the shells is approximately equal to that just outside of them. In other words, there is no evidence for a strong shock. Another feature common to many of these objects is that the pressure measured in the X-ray-bright shells is about an order of magnitude higher than the pressure measured from the radio data within the radio-bright bubbles, assuming equipartition of energies (an example is Abell 2052, with an X-ray shell pressure of 1.5 × 10⁻¹⁰ dyn cm⁻² [Blanton et al. 2001], and a radio equipartition pressure of 2 × 10⁻¹¹ dyn cm⁻² [Zhao et al. 1993]). However, we expect that the bubbles and the shells are in pressure equilibrium, since otherwise they would collapse and fill in. Therefore, either some of the assumptions made for the equipartition pressure estimates are incorrect, or there is an additional source of pressure within the radio bubbles. This additional pressure component may be magnetic fields, low energy relativistic electrons, or very hot, diffuse, thermal gas that would not be detected by Chandra because of its low surface brightness in the Chandra energy band. The temperature of hot, thermal gas that would provide the required pressure to support the X-ray shells has been limited to > 15 keV for Hydra A (Nulsen et al. 2002), > 11 keV for Perseus (Schmidt et al. 2002), and > 20 keV for Abell 2052 (Blanton et al. 2003). High sensitivity at high energies is necessary to detect diffuse gas at such temperatures, and XMM-Newton or the upcoming Constellation-X may be able to detect it.
A detection of gas within an X-ray depression with a temperature significantly hotter than its surroundings has been made using Chandra data of the cooling flow cluster MKW 3s (Mazzotta et al. 2002). The gas in the bubble is hotter than the gas at any radius in the cluster, and the temperature measurement is therefore not a projection effect. The deprojected gas temperature within the bubble is 7.5 keV, compared with a temperature of 3.5 - 4 keV for the surrounding emission. This cluster contains a central radio source, however the 1.4 GHz radio emission is not directly connected with the X-ray depression, as shown in Mazzotta et al. 2002. | <urn:uuid:51615665-d7cc-4c94-8f65-5c6131c8e0ea> | 3.359375 | 508 | Academic Writing | Science & Tech. | 55.804032 |
This image of the northern wall of Coprates Chasma, in Valles Marineris, was taken by the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) at 1227 UTC (8:27 a.m. EDT) on June 16, 2007, near 13.99 degrees south latitude, 303.09 degrees east longitude. CRISM's image was taken in 544 colors covering 0.36-3.92 micrometers, and shows features as small as 20 meters (66 feet) across. The region covered is just over 10 kilometers (6.2 miles) wide at its narrowest point.
Valles Marineris is a large canyon system straddling Mars' equator, with a total size approximating the Mediterranean Sea emptied of water. It is subdivided into several interconnected "chasmata" each hundreds of kilometers wide and, in some cases, thousands of kilometers long. The walls of several of the chasmata, including Coprates Chasma, expose a section of Mars' upper crust about 5 kilometers (3 miles) in depth. Exposures like these show the layers of rock that record the formation of Mars' crust over geologic time, much as the walls of the Grand Canyon on Earth show part of our planet's history.
The upper panel of this montage shows the location of the CRISM image on a mosaic from the Mars Odyssey spacecraft's Thermal Emission Imaging System (THEMIS), taken in longer infrared wavelengths than measured by CRISM. The CRISM image samples the base of Coprates Chasma's wall, including a conspicuous horizontal band that continues along the wall for tens of kilometers to the east and west, and a topographic shelf just above that.
The middle two panels show the CRISM image in visible and infrared light. In the middle left panel, the red, green, and blue image planes show brightness at 0.59, 0.53, and 0.48 microns, similar to what the human eye would see. Color variations are subdued by the presence of dust on all exposed surfaces. In the middle right panel, the red, green, and blue image planes show brightness at 2.53, 1.51, and 1.08 microns. These three infrared wavelengths are the "usual" set that the CRISM team uses to provide an overview of infrared data, because dust has a less obscuring effect, and because they are sensitive to a wide variety of minerals. Layering is clearly evident in the wall rocks. The conspicuous band running along the base of the chasma wall appears slightly yellowish, and the scarp at the edge of the topographic bench appears slightly green.
The bottom two panels use combinations of wavelengths to show the strengths of absorptions that provide "fingerprints" of different minerals. In the lower left panel, red shows strength of a 0.53-micron absorption due to oxidized iron in dust, green shows strength of an inflection in the spectrum at 0.6 microns that may be related to rock coatings, and blue shows strength of a 1-micron absorption due to the igneous minerals olivine and pyroxene. The conspicuous horizontal band appears slightly blue, indicating a stronger signature of olivine and/or pyroxene. In the lower right panel, red is a measure of an absorption particular to olivine, green is a measure of a 2.3-micron absorption due to phyllosilicates (clay-like minerals formed when rock was subjected to liquid water), and blue is a measure of absorptions particular to pyroxene. The conspicuous horizontal band is now resolved into an upper portion richer in pyroxene, underlain by material richer in olivine than the rest of the wall rock. Also, erosion-resistant material forming the topographic bench is underlain by phyllosilicate-containing material exposed on the scarp.
Taken together, these data reveal a layer cake-like composition of the crustal material exposed in Coprates Chasma's wall. Most of the rock is rich in pyroxene, which is expected because much of Mars' crust consists of volcanic basaltic rock. However discrete layers are richer in olivine, and in some layers the presence of phyllosilicates indicates interaction of rock with liquid water. Because the phyllosilicate-containing layer is low on the walls and deeply buried, it likely represents an early period of Mars' history that was exposed when the canyon system formed.
The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) is one of six science instruments on NASA's Mars Reconnaissance Orbiter. Led by The Johns Hopkins University Applied Physics Laboratory, the CRISM team includes expertise from universities, government agencies and small businesses in the United States and abroad. | <urn:uuid:a5120a14-43f6-4160-b78f-67e353f2eaed> | 3.640625 | 989 | Knowledge Article | Science & Tech. | 46.85914 |
Science subject and location tags
Articles, documents and multimedia from ABC Science
Monday, 20 May 2013
A recent slowdown in global warming means the harshest climate change predictions are less likely in the immediate decades, say an international team of scientists.
Friday, 10 May 2013
The cooling effect of clouds is overestimated in current climate change models, suggests new research.
Friday, 26 April 2013
A bizarre stellar system made up of two dead stars 7000 light-years away has put Einstein's famous general theory of relativity under its most extreme test yet.
Friday, 26 April 2013
Faint clouds detected just above Saturn's rings are caused by meteoroid debris slamming into the rings.
Friday, 19 April 2013
Astronomers have detected the most Earth-like planet that has the potential to support water yet found by NASA's Kepler Space Telescope
Thursday, 18 April 2013
Astronomers have found the oldest starburst galaxy in the universe, producing up to 3000 Sun-like stars per year.
Monday, 25 March 2013
Migration of giant gas planets such as Jupiter created the biggest meteor storm in our solar system's history, according to a new study.
Wednesday, 12 December 2012
Male fish engage in same sex acts in a bid to entice females to copulate with them.
Wednesday, 14 November 2012
City grasshoppers are changing their tune in an effort to be heard by potential mates over the noise from their urban environment.
Tuesday, 16 October 2012
Industrialisation and development are the main reason why modern human lifespans are greater than those of hunter-gatherers.
Wednesday, 3 October 2012
Bats combine work with courting potential mates - a lot of them.
Friday, 28 September 2012
A detailed analysis of images has helped astronomers pinpoint the exact orbit of the tiny Martian moon Deimos.
Thursday, 6 September 2012
Mystery surrounds the discovery of a very young star deep in the midst of an ancient stellar cluster.
Tuesday, 28 August 2012
Genius chimp: Certain apes appear to be much smarter than others, with at least one chimpanzee now characterised as being "exceptional" when compared to other chimps.
Tuesday, 21 August 2012
Wireless routers for homes and offices could be knitted together to provide a communications system for emergency responders if the mobile phone network fails, German scientists say. | <urn:uuid:ce616814-bde3-4aad-bec9-b16edcc6bc58> | 2.796875 | 489 | Content Listing | Science & Tech. | 38.656594 |
November 2, 2009
In this paper you will read about Isaac Newton’s three laws of motion.
Sir Isaac Newton was a British physicist. Many people regard him as the greatest physicist of all time. His work is often compared with that of Archimedes and Galileo. The scientific discoveries that he made have given way to new scientific ideas and realizations today. Isaac Newton created the 3 Laws of Motion. These laws explain the properties of motion, why objects move in a certain manner, and what causes them to move that way.
The first law of motion, also called the law of inertia, states: “an object in motion stays in motion unless acted upon by an unbalanced force. An object at rest stays at rest unless it is acted upon by an external force.” This means that when an object is moving, it will stay moving unless a force interferes with it. It also means that when an object is stopped, it will remain that way until an outside force disrupts it. The property that causes an object to resist changes in its motion is called inertia. For example: A boy is traveling at a constant speed of 5 m/s on his skateboard. The boy takes a wrong turn and hits a tree stump. The board comes to a halt, and the boy flies off of the skateboard from the impact of the collision. In this case, the object in motion was the boy on his skateboard. Inertia is what kept the boy moving forward after the board stopped. The tree stump acted as the external force that interfered with the skateboard. With an object, the amount of mass that object has will increase its inertia. In other words, the more mass you have, the more inertia there will be. Mass is the measurement of the amount of matter in an object. Mass can also be defined as a measure of how much inertia something has.
The second law of motion says: “A net force acting on an object causes an object to accelerate in the direction of the net force applied.” In simpler terms, this law means... | <urn:uuid:63dce835-f4f2-42e8-9dae-8cc69c889198> | 3.8125 | 428 | Truncated | Science & Tech. | 66.427988 |
What's happened to the setting Sun?
In early 2009, the Moon eclipsed part of the Sun as visible from parts of Africa, Australia, and Asia. In particular the above image, taken from the Mall of Asia seawall, caught a partially eclipsed Sun setting over Manila Bay in the Philippines. Piers are visible in silhouette in the foreground. Well placed sky enthusiasts captured many other interesting and artistic images of the year's only annular solar eclipse, including eclipse shadow arrays and rings of fire.

Today parts of the Sun again will become briefly blocked by the Moon, again visible to some as a partial eclipse of a setting Sun. A small swath of Earth, however, will be exposed to the unusual ring of fire effect when the Moon is completely surrounded by the glowing light of the slightly larger Sun.
neuron/wire artificial synapse
lhaarsma at opal.tufts.edu
Fri Sep 20 14:03:14 EST 1996
A few years ago I heard a story in the popular media about
a research group who cultured nerve cells in such a way that
they formed terminals at specific locations, close to electrodes
on a circuit board, allowing some electrical signals to pass
back and forth. (I hope I'm remembering correctly.)
I haven't been able to find a reference with a key-word search.
Could someone point me to a name or a journal reference?
More information about the Neur-sci mailing list
Virile crayfish (Orconectes virilis)
Virile crayfish have been recorded in some lengths of river in the London area, these are the only known populations in the UK. They are not a European species and originate from the USA and Canada. Virile crayfish are likely to be carriers of crayfish plague and can increase in numbers quickly. Their burrowing activity causes damage to river banks.
Virile crayfish (Orconectes virilis) © D.M. Holdich
FACTFILE: The Virile crayfish has a body length less than 10cm and it is smooth, chestnut or chocolate in colour with a bowl-shaped or wine glass-shaped light brown pattern. Their claws are the same colour as the body on the upper surface and dirty-white on the underside; they have prominent yellow warty spots on them. They can easily be confused with spiny-cheek crayfish and other species, especially when young.
Virile and Signal crayfish © Adam Ellis | <urn:uuid:76a1b030-294d-4936-988a-8e5ae97c53b0> | 2.8125 | 223 | Knowledge Article | Science & Tech. | 42.150333 |
provided sufficient work is done to move the pencil from George's end and the ends are fixed as mentioned before, the only thing that possibly can result from this is that the whole pencil is translated in the direction of the force, including the unperturbed position about which any flexural vibration is occurring. Thus, the whole thing, which is staying together as one piece, can only move in the direction of the force, and since the other end is fixed wrt any flexural red herrings you've introduced, it moves with George's end, without delay.
Again, since the information about the movement between the atoms of the material is transmitted by photons, which themselves move at the speed of light, I don’t see how this can occur “without delay”. Also I am wondering how what you claim can possibly be true, when it is patently obvious that one can move one end of a long steel bar from side to side without the far end moving at all. (The bridge movie makes this entirely clear). The bar flexes, because it is not perfectly rigid.
The exchange particles are continually being produced and resonating, only over infinitesimally small distances, and are there throughout, rather than acting sequentially or with any time lag, because they are simultaneous. If they weren't, you'd have one half of the pencil leaving the other half behind. Can you not see that that is ludicrous?
This isn’t well explained at all. You will have to explain why and how the photon exchanges occur simultaneously along the entire length of the material, rather than being emitted and absorbed by each molecule in sequence.
And nobody is claiming that the pencil moves by halves; this is a straw man argument. The claim is that each molecule in the pencil moves the ones next to it; the procedure is done by the electromagnetic interactions between the particles and is not simultaneous.
As for “ludicrous”, it is ludicrous that one could move a 93 million mile long pole instantaneously. I am always prepared to be convinced; physics is definitely very strange and at times entirely counterintuitive, but you’ll need to do a heck of a lot better than that to convince me. In particular, I’d like some outside confirmation—that is, credible websites that cite the relevant theory and data. | <urn:uuid:73f810f5-7c1e-4e1b-9f49-dc2b73096cba> | 2.890625 | 486 | Comment Section | Science & Tech. | 41.847634 |
|Earth Exploration Toolbook Chapter: Analyzing Wetlands|
The Ramsar Wetlands Data Gateway is a database containing information on protected international wetlands. In this activity, users search the database to find a wetland that they are interested in helping to protect. Using the database search capabilities, users select various wetland characteristics and generate a report on the sites that meet their search criteria. Then, they access an interactive map to view the locations and nearby features of the identified wetland sites. Next, they narrow the choices down to a single wetland that they would like to protect and gather further information about it. Finally, users prepare a brief report to persuade others of the value of protecting the site.
Intended for grade levels:
Type of resource:
Cost / Copyright:
Original, creative works created for the Earth Exploration Toolbook website remain the intellectual property of that program and may be used freely for any non-commercial, educational purpose with attribution to TERC, Carleton College, and the chapter author's affiliation. The metadata that makes up the EET Collection may be shared with the DLESE Discovery System, and may be used by third parties according to the DLESE Collections Accession Policy. The EET website was developed by the Science Education Resource Center (SERC) at Carleton College with funding from the National Science Foundation. Any views expressed in this website do not necessarily reflect the views of SERC, Carleton College or any of its sponsors or affiliates. We encourage the reuse and dissemination of the material on this site for educational, noncommercial purposes as long as attribution is retained. To this end the material on this site is offered under a Creative Commons license Attribution-NonCommercial-ShareAlike 1.0.
DLESE Catalog ID: SERC-EET-000-000-000-027
Resource contact / Creator / Publisher:
Author: Dr Robert R. Downs
Center for International Earth Science Information Network (CIESIN) | <urn:uuid:bc312185-9035-4e50-8c82-0d0e52d668a0> | 3.4375 | 404 | Content Listing | Science & Tech. | 23.446092 |
Superficially, georgiaites look like volcanic glass, or obsidian; however, there was no volcanic activity in or near Georgia 35 million years ago, and georgiaites lack the mineral crystals that characterize volcanic glass. Natural glasses of the same age from Texas, called bediasites, and smaller spherules of glass dating from the same era have been found in deep-sea sediments off the eastern coast of North America and in the Gulf of Mexico. Natural glasses of different ages have also been found in central Europe (15-million-year-old moldavites), in Africa's Ivory Coast (1-million-year-old tektites), and in Indochina and Australia (800,000-year-old indochinites and australites). All of these glasses, including the georgiaites, are known as tektites.
All tektites are thought to be impact glasses; that is, they represent material that was melted as a result of heat generated by the impact of an asteroid or comet on the earth. The energy produced by one of these impacts is tremendous—some meteorites travel at velocities of more than forty miles per second before they hit the earth, and the largest of these meteorites produce craters. The energy released by a large impact can result in the melting of a thin layer of the earth's uppermost crust. The chemical composition of tektites is consistent with this idea; tektites have the same chemical makeup as the rocks of the earth's crust. Some scientists had suggested at one time that tektites came from the moon, but lunar rock samples have been found to be chemically distinct from most tektites.
Georgiaites and the other tektites are natural curiosities, but they also have a modest commercial value as collectibles. Some tektites, especially the moldavites, are quite pleasing in appearance and are made into jewelry.
Michael F. Roden, University of Georgia
Edward Albin, Fernbank Science Center
A project of the Georgia Humanities Council, in partnership with the University of Georgia Press, the University System of Georgia/GALILEO, and the Office of the Governor. | <urn:uuid:cc9153e6-534b-454f-900d-4eda7a545869> | 3.625 | 612 | Knowledge Article | Science & Tech. | 40.089119 |
A 3D animation shows the crucial RNA editing step called splicing
Spinal Muscular Atrophy (SMA) is an inherited condition. Humans have two closely related versions of the SMN gene, SMN1 and SMN2. SMN1 is fully functional but SMN2 is only partially functional. SMA occurs when an individual inherits two mutated SMN1 genes and the SMN2 gene cannot produce sufficient SMN protein to maintain motor neuron function. To understand the science of SMA, it is important to understand the flow of genetic information: the DNA code is transcribed into RNA, which is then translated into an SMN protein.
Genes and RNA Splicing
In the SMN gene the protein coding information ("exons") are interrupted by non-coding regions ("introns"). The introns need to be edited out of the RNA to produce the final set of instructions for the SMN protein. This process is called "RNA splicing."
An animation shows how the DNA genetic code is made into protein
A step-by-step animation shows the details of RNA splicing
Dr. Roberts describes the flow of information from DNA to RNA to protein
SMA and Splicing
Unlike the SMN1 gene, the SMN2 gene only produces a small amount of functional SMN protein. Incorrect RNA splicing of the SMN2 gene results in a shortened version of the protein, called Delta7, which is not functional.
An animation shows how the change in the SMN2 gene produces a different protein through RNA splicing
Drs. Sharp and Krainer describe alternative splicing | <urn:uuid:e545580b-c043-4e66-9d28-8fb4669598df> | 3.546875 | 337 | Knowledge Article | Science & Tech. | 50.281913 |
Have you ever come in from a day of sledding or ice skating and sat down for a drink of cold chocolate? Or had a glass of hot lemonade in the summer? Probably not. We use hot water for some things and cold water for others. Have you ever thought about what makes hot water hot and cold water cold?
A water molecule is made of a negatively charged oxygen atom and two positively charged hydrogen atoms.
Solids hold their shape even if you put them in a new container, liquids take the shape of the new container, and gases spread out through the whole container. Image courtesy NASA.
You should notice the food coloring in the warm water spreading out faster than the food coloring in the cold water. If you didn’t observe this, try making your cold water a little colder and your warm water a little warmer. Also make sure you add the food coloring to each glass at the same time.
Water is made of molecules (two hydrogen atoms and one oxygen atom stuck together). Molecules in a liquid have enough energy to move around and pass each other. This is why water can flow and take the shape of the glass you pour it into. The molecules in solids, like ice, don’t have enough energy to move around very much so the solid keeps its shape. Molecules in a gas have lots of energy and spread out even more than molecules in a liquid.
Warm water has more energy than cold water, which means that molecules in warm water move faster than molecules in cold water. The food coloring you add to the water is pushed around by the water molecules. Since the molecules in warm water move around faster, the food coloring spreads out quicker in the warm water than in the cold water. | <urn:uuid:d8331599-de39-4b09-942a-65cf8bd71206> | 3.796875 | 355 | Knowledge Article | Science & Tech. | 55.946478 |
World over-using underground water reserves for agriculture
LONDON (Reuters) - The world is depleting underground water reserves faster than they can be replenished due to over-exploitation, according to scientists in Canada and the Netherlands.
The researchers, from McGill University in Montreal and Utrecht University in the Netherlands, combined groundwater usage data from around the globe with computer models of underground water resources to come up with a measure of water usage relative to supply.
That measure shows the groundwater footprint - the area above ground that relies on water from underground sources - is about 3.5 times bigger than the aquifers themselves.
The research suggests about 1.7 billion people, mostly in Asia, are living in areas where underground water reserves and the ecosystems that rely on them are under threat, they said.
Tom Gleeson from McGill, who led the study, said the results are "sobering", showing that people are over-using groundwater in a number of regions in Asia and North America.
Over 99 percent of the world's fresh and unfrozen water sits underground, and he suggests this huge reservoir could be crucial for the world's growing population if managed properly.
The study, published in the journal Nature, found that 80 percent of the world's aquifers are being used sustainably but this is offset by heavy over-exploitation in a few key areas.
Those areas included western Mexico, the High Plains and California's Central Valley in the United States, Saudi Arabia, Iran, northern India and parts of northern China.
"CRITICAL TO AGRICULTURE"
"The relatively few aquifers that are being heavily exploited are unfortunately critical to agriculture in a number of different countries," Gleeson told Reuters. "So even though the number is relatively small, these are critical resources that need better management."
Previous research has shown that it takes about 140 liters of water to grow the beans that go into one cup of coffee, whether they are cultivated in arid Ethiopia or the Colombian rain forest.
"The effect of this water use on the supply of available water will be very different," the researchers wrote. "Until now, there has been no way of quantifying the impact of such agricultural groundwater use in any consistent, global way."
Gleeson said limits on water extraction, more efficient irrigation and the promotion of different diets, with less or no meat, could make these water resources more sustainable.
Water sitting in underground aquifers was the subject of research by British researchers published in April that mapped huge reserves sitting under large parts of Africa that could provide a buffer against the effects of climate change, if used sustainably.
A team from the British Geological Survey and University College London estimated that reserves of groundwater across Africa are about 100 times the amount found on the continent's surface.
Some of the largest reserves are under the driest North African countries like Libya, Algeria, Egypt and Sudan, but some schemes to exploit them are not sustainable.
The biggest is Libya's $25 billion Great Manmade River project, built by the regime of slain dictator Muammar Gaddafi to supply cities including Tripoli, Benghazi and Sirte with an estimated 6.5 million cubic meters of water a day.
The network of pipes and boreholes is sucking water out of the ground that was deposited in the rocks under the Sahara an estimated 40,000 years ago, but is not being replenished.
It is unclear how long this water source will last, with estimates ranging between 60 and 100 years.
(Editing by Mark Heinrich)
Jan. 23, 2013 Amongst the range of domestic livestock species, the goat is not just the 'black sheep' but a survival resource in impoverished countries, and many breeds are at great risk of disappearing. This is the case according to researchers at the Regional Service of Agro-Food Research and Development in their first monographic study tackling the global impact of this species.
A study from the Regional Service of Agro-Food Research and Development (SERIDA) has analysed the situation of the global goat population.
The study took into account the state of different breeds, the multiple implications of their conservation, the interaction with other animal species (wild and domestic) and the consequences of goat grazing from an environmental point of view.
"The risk of the gene pool of the goat disappearing has increased due to intensive animal husbandry systems that use a very limited number of breeds. Strangely enough, the biggest loss in the genetic resources of indigenous animals has been observed in Europe, although the situation is unknown in many areas," as explained to SINC by Rocío Rosa García, researcher at SERIDA and coauthor of the study.
The bad reputation given to goats stems from one of its main virtues: it has an extraordinary capacity to adapt to the most difficult of environmental conditions in places where other domestic livestock species would not survive.
"It is a reality that the grazing of these animals can cause damaging effects on the environment but ecosystems become overloaded because of inadequate practices of handling," ensures the scientist.
According to data from the Food and Agriculture Organisation of the United Nations (FAO), nowadays the largest number of goats can be found in the poorest of countries and especially those which have difficult environmental conditions and mountainous, desert and semi-desert regions.
"In poor regions, poor communities are commonplace and often the goat is the only source of animal protein in their diet," explains Rosa García.
The team led by Koldo Osoro Otaduy, manager of the Animal Production Systems Area at SERIDA and centre director, undertook a large part of the field work in areas in which the role of the goat is very relevant and have certain similarities with hostile environments in other parts of the world.
"Many national and international projects have been carried out in less-favoured areas, like the Asturian mountains which are home to steep slopes, poor soil, an aging population and a high risk of depopulation and abandonment of traditional activities," ensure the researchers.
The goat: its virtues and defects
Poor handling of grazing, which does not consider the livestock species and their most fitting habitat, is the main cause of the damaging effects that goats can cause on the environment.
For example, the uncontrolled growth of the cashmere goat to increase production of its prized wool has meant in some cases that the ecosystems have become overloaded. This has not only affected vegetation but also certain indigenous species in India, China and Mongolia.
To counteract this, the study also considers a large number of cases in which the species plays an important role in environmental conservation. These include their use in the fight against fires in areas dominated by bushes and in controlling exotic vegetation plagues that could put ecosystems at risk.
"We wanted to perform a global review, taking into account very different regions of the world, from the Himalayan peaks to tropical areas, and analysing to what extent the goat competes with local fauna in each region and whether it interferes with the survival of the most sensitive species," outlines Rosa García.
- R. Rosa Garcia, R. Celaya, U. Garcia, K. Osoro. Goat grazing, its interactions with other herbivores and biodiversity conservation issues. Small Ruminant Research 107 (2012): 49-64.
Often seen in backyards, gardens and fields, the American toad is a common and adaptable species found all across the state, limited only by access to water for breeding. Solitary and mainly nocturnal, they are most active during warm, humid weather.
Description: The American toad is 2 to 3.5 inches long, has short legs, a stout body and thick warty skin. Brown skin color is most common, but this is highly variable and it can also be red, olive or gray. What's more, the skin color of American toads can change depending on temperature and humidity as well as physical stress. Their bellies are white or yellow. Males, which are smaller than females, have black or brown throats while females have white throats.
Similar Species: Distinguished from the Fowler's toad by the space between the cranial crest and the parotoid glands; these features abut in the Fowler's toad. Also, the American toad has 1 or 2 warts in each of the largest spots on its back-the similar Fowler's toad has 3 or more.
Voice: A long trill lasting between 4 and 20 seconds. Male American toads use this call to attract females, their calls becoming insistent, loud and frequent during mating season.
Habitat: As long as they have access to water for breeding and cover for hiding during the day, American toads are found in a variety of habitats. They readily adapt to human encroachment.
Diet: American toads are carnivores as adults, herbivores as tadpoles. Adults are generalists, consuming insects, snails, slugs and earthworms. Toads do not drink water. Instead they absorb it through their skin.
Breeding information: Breeding typically takes place in March or April though it may continue into July. Females lay spiral strands of 4,000 to 8,000 eggs that hatch in about a week. Metamorphosis of tadpoles takes around 2 months; American toads become mature in 2 to 3 years.
Status in Tennessee: Abundant.
- Most American toads don't survive more than a year in the wild, but some have lived to 10 years old. One captive individual reached 36 years of age
- Like many toads, the American toad produces a milky poison from its skin. This can be toxic to humans if it comes into contact with the eyes, nose or skin
- The Eastern hognose snake, which is immune to toad toxin, specializes in eating American toads
Best places to see in Tennessee: Found statewide.
It sounds like magic: walls, curtains, even dresses could be rendered transparent by bathing them in a specially crafted beam of light. Rescuers could use the beam to peer through rubble after an earthquake, while doctors could gaze at a damaged lung after making a patient's skin and ribs vanish. ... At the flick of a switch, he and his colleague Dr Mark Frogley can make something invisible, albeit just a fraction of a millimetre square of a special material and only for one ten-thousandth of a millionth of a second. ...an electron can be prevented from absorbing a particle of laser light and jumping to a higher energy level if a second laser beam is used to link or "couple" the two energy levels to a third one. ...
Using two powerful beams made this way, the team performed its vanishing trick: the artificial atoms became transparent to one beam when a second - coupling - laser illuminated them at the same time. "By shining an invisible powerful laser onto these 'artificial atoms', we have learnt how to control the motion of the electrons so they no longer absorb light - when the laser is switched on, the crystals instantly become invisible, only to return to their normal opaque state when the laser is switched off."
As Prof Phillips says, "we have proved the physics". Although this was achieved with an idealised material, it suggests that by carefully designing a wand of laser light it may be possible to make anything transparent. "The effect has the potential to lead to all sorts of new applications. You can imagine a laser that works at frequencies we can't see and, when it shines on your hand, it would open up a transparent hole." - telegraph
Fun crazy stuff in our future! | <urn:uuid:9cd95e56-9bef-4986-b2f1-29a6c2a223a7> | 3.140625 | 351 | Personal Blog | Science & Tech. | 56.16321 |
Messier 45 is more commonly referred to as the Pleiades, and is the most famous open star cluster among amateur astronomers and the general public alike. Easily visible to the naked eye during the northern Winter, this cluster has been featured in literature over three millennia! Research compilations now suggest that about 500 stars, mostly faint, are gravitationally bound together as the Pleiades.
Visible: DSS, Visible: Color - © AAO, Royal Observatory, Edinburgh
Let us begin our examination of Messier 45 with a look at the visible light images (above). The black-and-white and color images both reveal wispy veils of fog surrounding the brightest stars. These reflection nebulae are produced by dust particles within the cluster. The chemical composition of the reflected light (as determined through spectroscopy) is identical to that of the blue stars, thereby confirming that the nebulae are simply stellar light reflected to our line of sight by the dust. The blue tint is a consequence of the fact that the illuminating stars are young and blue, and because dust preferentially reflects light of shorter wavelengths (for example, blue) and preferentially scatters light of longer wavelengths (such as red). The spikes seen around the brightest stars, particularly in the color image, are artifacts produced by the telescope optics.
Near-Infrared: 2MASS and Visible: DSS
Now ponder the near-infrared photograph (above left) and compare it with the previously studied visible-light picture (above right). The pattern of the five brightest stars is still clearly seen. However, the surrounding haze has completely vanished! This is because near-infrared light, corresponding to wavelengths a few times longer than what the human eye sees, can easily pierce through the obscuring effects of dust. This is an important reason why astronomers often rely on near-IR radiation to study the birth of stars, since stellar formation normally occurs within thick cocoons of dust and gas.
Mid-Infrared: IRAS and Far-Infrared: IRAS
The IRAS infrared images depicted above were obtained at wavelengths of 25 and 60 microns. Two dramatic differences from the previous images are immediately obvious. First, the five-star pattern is barely recognizable in the mid-infrared (upper left) and has disappeared at far-infrared wavelength (upper right). Second, much of the infrared emission now corresponds to the dust wisps noted in the visible light photos. This is the paradox of infrared light. At the shortest wavelengths, near-IR light effectively passes through obscuring dust. At the longer wavelengths, infrared emission is increasingly due to the dust particles themselves. At these wavelengths, the dust particles absorb the ambient visible and ultraviolet photons emitted by the nearby stars. The dust then re-radiates the light as infrared, with the energy difference between the types of light serving to slightly heat the dust particle. [In both of these infrared images, a point of light is stretched along the east-west direction because of the peculiar rectangular shape of the IRAS detectors. Furthermore, the pattern of stripes is a result of data processing.]
Mid-Infrared: IRAS and Visible: Color - © AAO,
Royal Observatory, Edinburgh
If you need additional evidence that longer-wavelength infrared originates primarily from dust, re-examine the mid-IR and visible-light images above. Concentrate on the southernmost bright star in the Pleiades, named Merope. You will see that the fuzzy pattern of infrared light to the immediate southwest (lower right) of this star closely resembles the pattern of reflected light seen in the visible light photograph.
Radio: NVSS, Ultraviolet: MSX, X-Ray: Thomas Preibisch, Wurzburg (Germany)
The radio image (above left) of Messier 45 shows some faint point sources of emission, color coded as green. [The black and blue background is simply noise, analogous to radio static.] The pattern of the point sources does not clearly match the brightest stars seen in the visible and near-infrared photos. What are these sources? Good question, with no obvious answer. Many distant quasars are strong radio emitters. However, a search of the online NASA/IPAC Extragalactic Database (NED) fails to uncover any cataloged background quasars within the field of view. Some of the faint radio emissions could originate from distant and uncataloged quasars. Most quiescent stars are not known for being strong sources of radio emission. However, such unusual stars as flare stars, pulsars, and binary stars can produce radio signals, and these could account for some of the sources seen in the Messier 45 image.
In the ultraviolet (above center) we see the emission from hot stars. The dust around newly formed stars reflects and scatters ultraviolet light. Ultraviolet images can provide information about the properties of dust surrounding newly formed stars.
Finally, let us study the x-ray image (above right) of the Pleiades. The square green boxes denote the positions of the brightest stars in the cluster. Some of the boxes contain a faint source, suggesting that the x-rays are due to the (optically) bright star. However, other boxes appear to contain no x-ray emission. In fact, most of the x-ray sources (color coded to denote different x-ray energies, or wavelengths) are widely scattered throughout the field of view, although clearly centered about the cluster position. Research astronomers have found that the majority of x-ray sources are faint, low-mass stars within the Pleiades.
For some of our readers we are getting back to the very basics in this post. But if you are about to pursue web design, either as a career or just as a hobby, you must be able to do more than simply design a pretty page. Today web designers aren't just designers anymore; they need to become somewhat expert in coding as well.
Today there are plenty of different web design standards that one must master to become a web designer. The two most important in the industry today are HTML and CSS, because they are the stepping stones to everything else.
First Say Hello To HTML
HTML has been the standard for websites; it stands for HyperText Markup Language. This can be quite confusing to someone who has no experience using it. But to put it simply, HTML is the language (code) used to define and position web page elements such as images and text.
Hint: If you go to your web browser and select View and then Source you can see the code that been used to design that website.
Those who have used HTML will tell you that it is not difficult to learn. There are some basic tags (codes) that a designer has to learn to create a simple webpage. The more complex the HTML, the more you can do with a site. We also have HTML5 now, but for today we will leave it alone.
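To make the tags concrete, here is a minimal example page; the file names and the text are placeholders, not taken from any particular site:

<!-- A minimal page using only basic tags. -->
<html>
  <head>
    <title>My First Page</title>
  </head>
  <body>
    <h1>Hello, web!</h1>
    <p>Some text with a <a href="about.html">link</a>.</p>
    <img src="photo.jpg" alt="A placeholder image">
  </body>
</html>

Every tag you see here (headings, paragraphs, links, images) is part of that small starter set.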
Now Say Hello To CSS
CSS is another thing beginning web designers need to get familiar with; you will need it even more in today's world of websites. CSS stands for Cascading Style Sheets.
CSS was created to give designers more creativity and control over their designs. Today there is more than one type of viewport through which a design reaches its visitors, and handling this with HTML alone can be tricky and time consuming. CSS saves designers time, effort and most likely money when they are creating sites.
Also, CSS reduces effort by letting designers create style sheets. When an edit is made to one style sheet, every page that uses it is updated automatically, which keeps designers from having to make multiple edits across large and detailed websites.
HTML vs CSS
CSS is not really taking the place of HTML. It is an enhancement: HTML is still the right type of coding for the main structure of a site, while CSS takes over when it comes to how a web page looks. The outward appearance, including backgrounds, colors, content and image placement, can be handled by CSS.
It allows designers to do all sorts of different things, like setting different page margins for each side of a page, overlapping words, and better positioning page elements. These may all seem like minor adjustments, but these tricks free up time for you to focus on overall layout, design, navigation and so on.
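As an illustration (the selectors and values here are placeholders, not a recommended design), a style sheet handles all of that in one place:

/* One rule restyles every page that links this sheet. */
body {
  margin: 2em 1em 2em 3em;   /* a different margin for each side */
  background-color: #f4f4f4;
  color: #333333;
}
h1 {
  position: relative;
  left: 20px;                /* nudge a heading without touching the HTML */
}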
To make it simple: just think of these two languages as building a house. HTML is the structure of the house and CSS is the interior design.
A tiny bit of language support can turn a simple text box into a compact and powerful way to let a user quickly supply complex data. An application can constrain the allowable text to adhere to a small and computer-friendly grammar that mirrors somewhat a natural human language. Even a highly constrained grammar can still produce a user experience that feels natural. This works best when the domain of text a user might enter is small, when the user can easily imagine what they can type, and when the designer and developer can collaborate on thoughtful construction of the grammar.
A simple example of a micro-grammar at work is a UI combining a scalar quantity and a unit into a single field, such as the measurement fields in Microsoft Word 2007. These fields throughout Word accept a variety of units, such as inches, metric centimeters, points, or lines. As in natural (here, English) language, the units directly follow the numeric value:
The fields on the left in the image above happen to be showing measurements in inches (") while those on the right are showing points, but all the fields accept all supported units. The controls not only parse out the value from the units, they can also convert and render measurements in a canonical format. In the default settings for the Microsoft Word on a U.S. system, indentations are converted to inches, and leading is converted to points.
Compare the above to an application like Adobe PhotoShop CS2 that uses standard operating system controls:
Here the user must enter units separately, requiring that they move the focus with the mouse or keyboard. The UI also looks significantly more cluttered. Word's text-parsing micro-grammar let one control in Word do the job of two in Photoshop. The trade-off is one of cleanliness versus discoverability. This efficiency is critical in cases like Word's Ribbon, in which a large number of controls are packed into a small space. On the other hand, using two controls makes clearer what units are supported. A UI with separate controls is also significantly easier to implement.
One distinct advantage of text boxes that support micro-grammars is that they can offer shortcuts to power users without compromising the simplicity of the typical user's experience. The date fields in Microsoft Outlook, for example, not only accept dates in local form, but also accept shortcut phrases such as "tomorrow" or "next Tuesday". Some supported shortcuts don't seem to add much value. The U.S. version of Outlook lets the user type the names of a number of common American holidays like "Christmas" (but not Easter, the date of which involves non-trivial astronomy, and even an overeager Microsoft Office developer has to stop somewhere).
It's dubious that the ability to type in "New Year's Day" as an appointment date has ever actually helped anyone—who schedules appointments on New Year's Day, anyway? Still, even these dubious shortcuts don't clutter anything up. Another advantage is that such text boxes allow the pasting of complete text from other sources directly into the UI in one step, letting the application do the work of breaking apart relevant information instead of forcing the user to do this by hand.
To have a program understand text the user has typed requires that a developer create a parser: a chunk of code that implements the rules of the grammar to determine what the user is trying to express with that text. If the desired grammar is extremely simple, a developer might hand-code a parser for it, but this can quickly get out of hand. More complex grammars typically entail the use of a parser generated by a tool. For example, if it's possible to restrict the supported input to a form known as a context-free grammar, there are a wide variety of tools for generating a parser that can handle such grammar.
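For instance, Lark, a real Python parsing library, can generate a parser from a declarative grammar definition; the grammar below is an illustrative sketch, not any product's actual grammar:

from lark import Lark

# Units before or after the value is a one-line change to this rule.
grammar = r"""
    measurement: NUMBER UNIT?
    UNIT: "in" | "cm" | "pt"
    NUMBER: /-?\d+(\.\d+)?/
    %import common.WS
    %ignore WS
"""
parser = Lark(grammar, start="measurement")
print(parser.parse("1.5 cm").children)
# -> [Token('NUMBER', '1.5'), Token('UNIT', 'cm')]

Keeping the grammar in a declarative definition like this is also what makes the localization scenario below a small change rather than a rewrite.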
Any attempt to optimize a micro-grammar for the nuances of one natural language will, of course, complicate matters if and when the need arises to localize the UI for other languages. Suppose some culture normally puts the units before the numeric value. Users in this culture might reasonably expect to be able to enter data that way. If the parser has been generated with a tool, it should be relatively straightforward for the developer to create a new grammar definition that swaps the position of those elements. The rest of the application logic should remain virtually unchanged.
I'm not sure how many software companies would actually go through the trouble to adapt a micro-grammar for a specific market. Then again, companies rarely go through much trouble to change the layout of a dialog like the one from PhotoShop above, in which the layout of the controls is heavily biased in support of the designer's natural language. The work required to update a well-factored grammar definition is likely less than that required to reposition a significant number of controls across a large number of pages or dialogs. | <urn:uuid:b9e93f21-9573-4395-9103-3168754f00a6> | 3.296875 | 991 | Personal Blog | Software Dev. | 40.571163 |
Lets start from experimental data. What is an electron, what do we know about an electron? It is too small to touch or see or smell. Everything we know about an electron comes from several levels of proxies. We end up measuring a track circle in a magnetic field and get e/m, consistently for different "electrons" and we do the millikan oil drop and get e and then we can assign a mass to these manifestations consistently.
That is all we have for the electron: it has a measured mass m_e and a measured charge e.
Nature has been good to us and a working theory exists for QED. Mathematics is a tool; it can describe and predict measurements but it is not something that creates reality. Reality is what one measures. If the theory predicts, it does not matter if it goes into a yoga position to do so, as long as it can predict consistently. They want to call them bare and dressed mass? Fine. Who can measure anything more than that the measured mass is m_e and the measured charge e?
Better theories/computations may come up, but to be better they should describe existing measurements and predict more and different ones, that QED cannot explain, for anybody to pay any attention. Or be as overwhelmingly economic and elegant as the heliocentric is to the geocentric pov. QED works.
added: I want to give an example from real physics history that I heard from the horse's mouth back in the 1980s, of how succesful new methods of computation overwhelm tradition and sweep over reluctances once shown to successfully predict faster and accurately.
Back in the Manhattan Project days, a physicist think tank had been set up with the best brains of the time to calculate cross sections needed for making the bomb. Feynman was a junior member of the team. They gave the group a problem and a week later people reported the results of their independent calculations: parallel processing. Feynman said that one afternoon he was lying on his bed with his feet on the wall when the Feynman diagram method came to him, whole (he had eidetic memory, so he probably saw it). He calculated the current problem and waited impatiently for the report of the others. When he gained confidence that his method was as good as the long drawn out S-matrix calculations, he started playing games with the team. He would get the result in an evening, tell them the next day what they would find, and it would take them the rest of the week to confirm.
Of course Feynman diagrams were universally accepted after that.
I was reminded of this story when I listened to the talk Nima Arkani-Hamed gave on the twistor revolution. He finds the Feynman diagram method extremely cumbersome and is exploring a new one that gives the same results as thousands of summed QCD Feynman diagrams. I was amused, and am sure that Feynman would have been too, if he were still alive.
If a new computational method is faster, sleeker and as predictive, it will be adopted as surely as God made little cabbages.
In my experimentalist's opinion of course. | <urn:uuid:27870e59-377f-47c8-9709-38f6a13617e8> | 3.109375 | 660 | Q&A Forum | Science & Tech. | 54.007146 |
This is really weird, New World Post-pandemic Reforestation Helped Start Little Ice Age, Say Scientists:
Stanford University researchers have conducted a comprehensive analysis of data detailing the amount of charcoal contained in soils and lake sediments at the sites of both pre-Columbian population centers in the Americas and in sparsely populated surrounding regions. They concluded that reforestation of agricultural lands–abandoned as the population collapsed–pulled so much carbon out of the atmosphere that it helped trigger a period of global cooling, at its most intense from approximately 1500 to 1750, known as the Little Ice Age.
The same researchers published a paper on this last spring, Effects of syn-pandemic fire reduction and reforestation in the tropical Americas on atmospheric CO2 during European conquest:
A new reconstruction of Late Holocene biomass burning in the tropical Americas is consistent with the expansion of fire use by Mesoamerican and Amazonian agriculturalists and a subsequent period of fire reduction beginning 500 years BP. The marked reduction of biomass burning after 500 years BP, a unique feature of the fire history of the tropical Americas relative to other regions of the globe, is synchronous with the collapse of the American indigenous population during pandemics accompanying European conquest. We predict that fire reduction contemporaneous with pandemics in the tropical Americas was associated with massive forest regeneration on 5 × 10^5 km^2 of land and sequestration of 5-10 Gt C into the terrestrial biosphere, which contributed to the 2% global reduction in atmospheric CO2 levels and the 0.1‰ increase in δ13C of atmospheric CO2 from 1500 to 1750 A.D. This study 1) builds upon prior fire history reconstructions by synthesizing a substantially greater number of stratigraphic charcoal accumulation records and soil charcoal 14C dates to resolve features of the Late Holocene biomass burning record in the tropical Americas; and 2) corroborates the hypothesis advanced by Ruddiman [Ruddiman, W.F., 2003. The Anthropogenic Era began thousands of years ago. Climatic Change 61, 261-293, Ruddiman, W.F., 2005. Plows, Plagues, and Petroleum. Princeton University Press, Princeton, New Jersey] that biospheric carbon sequestration via reforestation of cropland abandoned during pandemics contributed to changes in atmospheric CO2 concentration during the past millennium.
Examples of browser compatibility issues:
- Misalignment of controls on the web page, which looks frustrating to the customer
- Web pages looking cluttered on mobile browsers
Having elaborated on browser compatibility, let us discuss a very popular tool called selenium grid that provides a very easy way to execute multiple tests in parallel on different browsers. By running the test suite concurrently on several browsers it reduces the cost for browser compatibility testing and also dramatically speeds up the feedback cycle. This article is a great head-start on how selenium grid makes browser compatibility testing fun and easy. It explores the boundless potential it offers when used in conjunction with Selenium RC. In the next article “Six steps to complete test automation with selenium grid” we will get our hands dirty with selenium grid programming and execution.
Here we can see three main components in the diagram, Test, Selenium Hub and Selenium Remote Control.
The Test is a series of Selenium commands
The Selenium hub routes the selenese requests from the Test to the appropriate Selenium Remote Control.
The Selenium Remote Control is an instance of Selenium Server that registers itself to the hub when it starts, describing the environment it provides, for example Chrome on Ubuntu or IE on Windows.
Selenium grid is a distributed grid of Selenium Remote Controls. The Remote Control is an instance of Selenium Server that registers itself to the hub when it starts describing the environment provided. When the Selenium Remote Control is started we define which environment it provides to the hub. In the selenium test, when instantiating the Selenium driver, the hub environment is passed in the place of the browser string.
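A minimal sketch with the legacy Python RC client follows; the host name and the environment label are assumptions about how a particular grid was configured:

from selenium import selenium  # the old RC client, not WebDriver

# The hub's host/port replace the usual RC address, and a grid
# environment name replaces a plain browser string like "*firefox".
sel = selenium("hub-host", 4444, "Firefox on Windows",
               "http://www.example.com/")
sel.start()            # the hub routes the session to a matching RC
sel.open("/")          # selenese commands flow through the hub
print(sel.get_title())
sel.stop()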
The Remote Control instances that have been registered with the grid show up in http://localhost:4444/console. The Selenium hub routes the selenese requests to the appropriate Selenium Remote Control. The hub puts the request on hold if there is no available Remote Control on the grid. As soon as the suitable Remote Control becomes available the hub will serve the request. The hub parses the tests that target specific environments. We can specifically target a particular browser and a particular platform for testing the software.
This is how Selenium grid works and it makes browser compatibility testing a breeze. | <urn:uuid:73f86c2d-35ca-4ab2-916e-18ca46097632> | 2.796875 | 467 | Personal Blog | Software Dev. | 35.8224 |
AL9 Directive alpine species
Habitats Directive species
The EU Habitats Directive includes 14 species which occur primarily in alpine habitats. One of these species, Arctic Marsh Sedge, can also be found in the boreal region where its conservation status was evaluated as favourable. The rest of the species are restricted to the alpine region.
The conservation status of most (9) of the Habitats Directive's alpine species has been evaluated as favourable. These species, which include two mammals, three butterflies, and four vascular plants are generally well protected within the existing protected areas. While the distribution area of the mammals Wolverine and Arctic Hare is large, the other species are local and mainly restricted to the northwestern fells with calcareous soils.
The status of two species, Arctic Fox and Wall Hawk's-beard, was evaluated as unfavourable-bad. The state of the Arctic Fox population is the most critical and is likely to weaken even further in the future. The population has been decreasing since the 1980s and the latest reported breeding occurred more than ten years ago. Based on actual sightings of the species, the Arctic Fox population has been estimated to consist of only five individuals.
One of the most important reasons for the decline of the Arctic Fox population is the spread of the competing Red Fox into the alpine region. In addition, changes in reindeer husbandry and the weakening of vole population cycles have decreased the amount of food available to Arctic Foxes.
Wall Hawk's-beard occurs in the alpine region only in Kevo strict nature reserve. The population has decreased because its habitats have become more grass dominated. Dry summers have also weakened the state of Wall Hawk's-beard's habitats.
The conservation status of the moss Encalypta mutica was assessed as unfavourable-inadequate because its population size is so small that the risk of extinction due to random events has increased. For the two other moss species, full assessment has not been possible, due to insufficient knowledge.
- Updated (14.05.2013) | <urn:uuid:908c5953-85e7-4d75-a5b5-1d8cec451d58> | 3.640625 | 420 | Knowledge Article | Science & Tech. | 34.748272 |
Definition of faster (Physics Z 2.232a23-7)
Since every magnitude is divisible into magnitudes (for it was proved that it is impossible for a continuous magnitude to be composed of indivisibles, but every magnitude is continuous), then it is necessary that (a) the faster traverse a greater distance in the equal time and (b) an equal distance in the lesser time and (c) more distance in the lesser time, just as some define the faster.
Argument for a (Physics Z 2.232a27-31):
(figure 1: moving) or (figure 2: still)
For let A be faster than B. Then since that which changes first is faster, in the time in which A has changed from Γ to Δ, e.g. ZH, B will not yet be at Δ, but it will fall short, so that the faster will traverse more in an equal time.
Argument for c (Physics Z 2.232a31-b5):
1) In fact, it will also move more in less time. For in the time in which A has come to be at Δ, let B be at E, since it is the slower.
2) Accordingly, since A has come to be at Δ in the whole time ZH, it will be at Θ in a smaller time than this.
3) And let it be in time ZK. And so ΓΘ, which A has traversed, is larger than ΓΕ, and time ZK is smaller than the whole time ZH, so that it will traverse a larger amount in a smaller time.
Argument 1 for b (Physics Z 2.232b5-14):
1) It is also obvious from these that the faster traverses an equal amount in a smaller time. For since it traverses a greater amount in less time than the slower, while taken by itself it will traverse more than the lesser amount in more time, e.g. ΛΜ larger than ΛΞ, the time ΠΡ in which it traverses ΛΜ would be more than ΠΣ, the time in which it traverses ΛΞ.
2) Thus if the time ΠΡ is smaller than X, the time in which the slower traverses ΛΞ, ΠΣ will also be smaller than X, since it is smaller than ΠΡ, as that which is smaller than a smaller is itself smaller. Thus it will move the equal amount in less time.
Argument 2 for b (Physics Z 2.232b14-29):
Furthermore, if everything must move in an equal, a lesser, or a greater time, and that which moves in more time is slower while that which moves in an equal time is equally fast, then since the faster is neither equally fast nor slower, it moves neither in an equal nor in a greater time. It remains that it moves in less time, so that it is also necessary that the faster traverse an equal magnitude in less time.
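In modern notation (a paraphrase, not part of Aristotle's text), writing s_X(t) for the distance body X covers in time t and t_X(s) for the time X needs to cover distance s, "A is faster than B" amounts to:

(a) s_A(t) > s_B(t): a greater distance in the equal time;
(b) t_A(s) < t_B(s): an equal distance in less time;
(c) for some t' < t_B(s), s_A(t') > s: a greater distance in less time.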
Compounds that contain hydrogen bound to a nonmetal are called nonmetal hydrides. Because they contain hydrogen in the +1 oxidation state, these compounds can act as a source of the H+ ion in water.
Metal hydrides, on the other hand, contain hydrogen bound to a metal. Because these compounds contain hydrogen in a -1 oxidation state, they dissociate in water to give the H- (or hydride) ion.
The H- ion, with its pair of valence electrons, can abstract an H+ ion from a water molecule.
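In equation form: H- + H2O → H2 + OH-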
Since removing H+ ions from water molecules is one way to increase the OH- ion concentration in a solution, metal hydrides are bases.
A similar pattern can be found in the chemistry of the oxides formed by metals and nonmetals. Nonmetal oxides dissolve in water to form acids. CO2 dissolves in water to give carbonic acid, SO3 gives sulfuric acid, and P4O10 reacts with water to give phosphoric acid.
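In equation form: CO2 + H2O → H2CO3; SO3 + H2O → H2SO4; P4O10 + 6 H2O → 4 H3PO4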
Metal oxides, on the other hand, are bases. Metal oxides formally contain the O2- ion, which reacts with water to give a pair of OH- ions.
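In equation form: O2- + H2O → 2 OH-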
Metal oxides therefore fit the operational definition of a base.
We see the same pattern in the chemistry of compounds that contain the OH, or hydroxide, group. Metal hydroxides, such as LiOH, NaOH, KOH, and Ca(OH)2, are bases.
Nonmetal hydroxides, such as hypochlorous acid (HOCl), are acids.
The table below summarizes the trends observed in these three categories of compounds. Metal hydrides, metal oxides, and metal hydroxides are bases. Nonmetal hydrides, nonmetal oxides, and nonmetal hydroxides are acids.
Typical Acids and Bases

              Bases (metal compounds)    Acids (nonmetal compounds)
Hydrides      NaH                        HCl
Oxides        Na2O, CaO                  CO2, SO3, P4O10
Hydroxides    NaOH, Ca(OH)2              HOCl
The acidic hydrogen atoms in the non-metal hydroxides in the table above aren't bound to the nitrogen, sulfur, or phosphorus atoms. In each of these compounds, the acidic hydrogen is attached to an oxygen atom. These compounds are therefore all examples of oxyacids.
Skeleton structures for eight oxyacids are given in the figure below. As a general rule, acids that contain oxygen have skeleton structures in which the acidic hydrogens are attached to oxygen atoms. | <urn:uuid:f789b874-69c7-4b20-914b-23d675bfbaf6> | 4.125 | 516 | Knowledge Article | Science & Tech. | 36.91588 |
Many different scenarios have been written on achieving a substantial reduction in carbon emissions. In all these scenarios Carbon Capture and Sequestration (CCS) plays a significant role, as the predicted use of fossil fuels will continue to grow. There are two factors that determine the success of large-scale employment of CCS: (1) the uncertainties associated with the sequestration in geological formations and (2) the costs associated with carbon capture.
In this EFRC we focus on the energy costs associated with the separation of CO2 from gas mixtures. The current technology has a parasitic energy of 30-40%, which implies a significant decrease in efficiency of power generation. Simple thermodynamic arguments show that the minimal parasitic energy to separate CO2 from flue gasses is 3.5%. The chemical industry typically operates at 3-5 times the thermodynamic minimum, which suggests that the parasitic energy of carbon capture can be reduced by at least a factor of two.
From a scientific point of view, the separation of CO2 is very challenging as the differences between the molecules are relatively small. Modern chemistry and nano-science give us molecular-level control over the properties of materials. The vision of our EFRC is to develop the science to create, understand, and predict novel materials that are tailor-made with exactly the right molecular properties to separate gasses relevant for clean energy technologies. The long-term goal of this EFRC is to develop the science and materials that will contribute to the reduction of the parasitic energy costs of CCS.
At UC Berkeley and LBNL, alternative energy has been an important research theme, as evidenced by the effort on solar-to-fuel (Helios) and biomass conversion (JBEI and EBI). Carbon Capture and Sequestration is a logical extension that perfectly fits into the energy emphasis of Berkeley. | <urn:uuid:c91562b3-b252-4f66-97da-13b669fe11f7> | 2.875 | 370 | Knowledge Article | Science & Tech. | 28.179277 |
The Horn Antenna at Bell Telephone Laboratories in Holmdel, New Jersey, was constructed in 1959 to support Project Echo--the National Aeronautics and Space Administration's passive communications satellite project.
The antenna is 50 feet in length with a radiating aperture of 20 x 20 feet and is made of aluminum. The antenna's elevation wheel is 30 feet in diameter and supports the weight of the structure by means of rollers mounted on a base frame. All axial or thrust loads are taken by a large ball bearing at the apex end of the horn. The horn continues through this bearing into the equipment cab. The ability to locate receiver equipment at the apex of the horn, thus eliminating the noise contribution of a connecting line, is an important feature of the antenna. A radiometer for measuring the intensity of radiant energy is found in the equipment cab.
The triangular base frame of the antenna is made from structural steel. It rotates on wheels about a center pintle ball bearing on a track 30 feet in diameter. The track consists of stress-relieved, planed steel plates which are individually adjusted to produce a track flat to about 1/64 inch. The faces of the wheels are cone-shaped to minimize sliding friction. A tangential force of 100 pounds is sufficient to start the antenna in motion.
To permit the antenna beam to be directed to any part of the sky, the antenna is mounted with the axis of the horn horizontal. Rotation about this axis affords tracking in elevation while the entire assembly is rotated about a vertical axis for tracking in the azimuth.
With the exception of the steel base frame, which was made by a local steel company, the antenna was fabricated and assembled by the Holmdel Laboratory shops under the direction of Mr. H. W. Anderson, who also collaborated on the design. Assistance in the design was also given by Messrs. R. O'Regan and S. A. Darby. Construction of the antenna was completed under the direction of Mr. A. B. Crawford from Freehold, New Jersey.
When not in use, the antenna azimuth sprocket drive is disengaged, thus permitting the structure to "weathervane" and seek a position of minimum wind resistance. The antenna was designed to withstand winds of 100 miles per hour and the entire structure weighs 18 tons.
The Horn Antenna combines several ideal characteristics: it is extremely broad-band, has calculable aperture efficiency, and its back and sidelobes are so minimal that scarcely any thermal energy is picked up from the ground. Consequently it is an ideal radio telescope for accurate measurements of low levels of weak background radiation.
A plastic clapboarded utility shed 10 x 20 feet, with two windows, a double door and a sheet metal roof, is found next to the Horn Antenna. This structure houses equipment and controls for the Horn Antenna and is included in this nomination.
The Horn Antenna, at the Bell Telephone Laboratories in Holmdel, New Jersey, is significant because of its association with the research work of two radio astronomers, Dr. Arno A. Penzias and Dr. Robert A. Wilson. In 1965 while using the Horn Antenna, Penzias and Wilson stumbled on the microwave background radiation that permeates the universe. Cosmologists quickly realized that Penzias and Wilson had made the most important discovery in modern astronomy since Edwin Hubble demonstrated in the 1920s that the universe was expanding. This discovery provided the evidence that confirmed George Gamow's and Abbe Georges Lemaitre's "Big Bang" theory of the creation of the universe and forever changed the science of cosmology--the study of the history of the universe--from a field for unlimited theoretical speculation into a subject disciplined by direct observation. In 1978 Penzias and Wilson received the Nobel Prize for Physics for their momentous discovery.
"We live in an ocean of whispers left over from our eruptive creation, physicist George Gamow and his colleagues had said. Nobody was listening."
By the middle of the 20th century cosmologists concerned with the creation of the universe had evolved two leading theories to explain their views. Some astronomers supported the steady-state theory of creation, which stated that the universe has always existed and will continue to survive without noticeable change. Others believed in the "Big Bang" theory of creation which taught that the universe is the glowing debris of a huge fireball that was created in a massive explosion about 16 billion years ago. No one knew for sure which theory was correct.
At Holmdel, New Jersey, in 1964 Dr. Arno Penzias and Dr. Robert Wilson were experimenting with a supersensitive, 20-foot horn-shaped antenna originally built to detect radio waves bounced off Echo balloon satellites. To measure faint radio waves from the Telstar communications satellite, they had to eliminate all recognizable interference from their receiver. They removed the effects of radar and radio broadcasting, and suppressed interference from heat in the receiver itself by cooling it with liquid helium to -269°C, only 4° above absolute zero--the temperature at which all motion in atoms and molecules stops.
When Penzias and Wilson reduced their data they found a low, steady, mysterious noise that persisted in their receiver. This residual noise was 100 times more intense than they had expected, was evenly spread over the sky, and was present day and night. They were certain that the radiation they detected on a wavelength of 7.35 centimeters did not come from the Earth, the Sun, or our Galaxy. After thoroughly checking their equipment, the noise remained. Both men concluded that this noise was coming from outside our own galaxy--although they were not aware of any radio source that would account for it.
At that same time, Robert H. Dicke, Jim Peebles, and David Wilkenson, astrophysicists at Princeton University, just 40 miles away, were preparing to search for microwave radiation in this region of the spectrum. Dicke and his colleagues reasoned that the "Big Bang" must have scattered not only the matter that condensed into galaxies but also must have released a tremendous blast of radiation. With the proper instrumentation, this radiation should be detectable.
When a friend told Penzias about a preprint paper he had seen by Jim Peebles on the possibility of finding radiation left over from a fireball that filled the universe at the beginning of its existence, Penzias and Wilson began to realize the significance of their discovery. The characteristics of the radiation detected by Penzias and Wilson fit exactly the radiation predicted by Robert H. Dicke and his colleagues at Princeton University. Penzias called Dicke at Princeton, who immediately sent him a copy of the still-unpublished Peebles paper. Penzias read the paper and called Dicke again and invited him to Bell Labs to look at the Horn Antenna and listen to the background noise. Dicke, Penzias, and Wilson visited the antenna and immediately recognized the significance of their discovery--they had stumbled on to the "embers" of creation predicted by their Princeton colleagues.
To avoid potential conflict, they decided to publish their results jointly. Two notes were rushed to the Astrophysical Journal Letters. In the first, Dicke and his associates outlined the importance of cosmic background radiation as substantiation of the Big Bang Theory. In a second note, jointly signed by Penzias and Wilson titled, "A Measurement of Excess Antenna Temperature at 4080 Megacycles per Second," they noted the existence of the residual background noise and attributed a possible explanation to that given by Dicke in his companion letter.
Harvard physicist Edward Purcell read this announcement and concluded that "It just may be the most important thing anybody has ever seen."
Astronomer Robert Jastrow echoed this conclusion by stating that Penzias and Wilson ". . . made one of the greatest discoveries in 500 years of modern astronomy."
In 1978, Dr. Arno Penzias and Dr. Robert Wilson were awarded the Nobel Prize for Physics for their joint discovery.
Richard Learner, Astronomy Through the Telescope (New York: Van Nostrand Reinhold Company, 1981), p. 154.
Aaronson, Steve. "The Light of Creation: An Interview with Arno A. Penzias and Robert W. Wilson." Bell Laboratories Record. January 1979, pp. 12-18.
Abell, George O. Exploration of the Universe. 4th ed., Philadelphia: Saunders College Publishing, 1982.
Asimov, Isaac. Asimov's Biographical Encyclopedia of Science and Technology. 2nd ed., New York: Doubleday & Company, Inc., 1982.
Bernstein, Jeremy. Three Degree Above Zero: Bell Labs in the Information Age. New York: Charles Scribner's Sons, 1984.
Chown, Marcus. "A cosmic relic in three degrees," New Scientist, September 29, 1988, pp. 51-55.
Crawford, A.B., D.C. Hogg and L.E. Hunt. "Project Echo: A Horn-Reflector Antenna for Space Communication," The Bell System Technical Journal, July 1961, pp. 1095-1099.
Disney, Michael. The Hidden Universe. New York: Macmillan Publishing Company, 1984.
Ferris, Timothy. The Red Limit: The Search for the Edge of the Universe. 2nd ed., New York: Quill Press, 1978.
Friedman, Herbert. The Amazing Universe. Washington, DC: National Geographic Society, 1975.
Hey, J.S. The Evolution of Radio Astronomy. New York: Neale Watson Academic Publications, Inc., 1973.
Jastrow, Robert. God and the Astronomers. New York : W. W. Norton & Company, Inc., 1978.
Kirby-Smith, H.T. U.S. Observatories: A Directory and Travel. Guide. New York: Van Nostrand Reinhold Company, 1976.
Learner, Richard. Astronomy Through the Telescope. New York: Van Nostrand Reinhold Company, 1981.
Penzias, A.A., and R. W. Wilson. "A Measurement of the Flux Density of Cas A at 4080 Mc/s," Astrophysical Journal Letters, May 1965, pp. 1149-1154.
Oracle Programming with PL/SQL Collections, Page 2
The Varray is short for Variable Array. A Varray stores elements of the same type in the order in which they are added. The number of elements in a Varray must be known at the time of its declaration. In other words, a Varray has fixed lower and upper bounds, making it most similar to collection types from other programming languages. Once it is created and populated, each element can be accessed by a numeric index.
The following statements declare, and then populate, a Varray that will contain 4 elements of the same type as the column genre_name in table book_genre:
DECLARE
  TYPE genres IS VARRAY(4) OF book_genre.genre_name%TYPE;
  fiction_genres genres;
BEGIN
  fiction_genres := genres('MYSTERY', 'SUSPENSE', 'ROMANCE', 'HORROR');
END;
We could have declared genres to be of type VARCHAR2(30) because all values here are text. However, in keeping with good Oracle programming practices, you should always prefer to declare variables that are based on table columns with the %TYPE attribute. This allows your code to grow with the database schema. If we were to populate genres with a variable like v_genre (versus a text literal), it would be easy for the column type to change in the database without modifying our code.
All PL/SQL collections contain a number of built-in methods that prove useful when working with them. Table 1 lists these Collection methods.
| Method | Action It Performs |
|--------|--------------------|
| COUNT  | Returns the number of elements in the collection |
| EXISTS | Returns Boolean true if the element at the specified index exists; otherwise, false |
| EXTEND | Increases the size of the collection by 1, or by the number specified, i.e. EXTEND(n). Cannot be used with Associative Arrays |
| FIRST  | Navigates to the first element in the collection |
| LAST   | Navigates to the last element |
| PRIOR  | Navigates to the previous element |
| NEXT   | Navigates to the next element |
| TRIM   | Removes the last element, or the last n elements if a number is specified, i.e. TRIM(n). Cannot be used with Associative Arrays |
| DELETE | Removes all elements of a collection, or the nth element if a parameter is specified |
The following code sample demonstrates how to use a few of these methods. We are using a Varray in the example, but the methods function similarly on all collection types. We mentioned that a Varray differs from Nested Tables and Associative Arrays in that you must supply a maximum size during its declaration; this example uses the EXTEND method to demonstrate that it is still possible to grow a Varray programmatically, up to that declared limit.
--Add a new genre.
IF adding_new_genre THEN
  --Is this genre id already in the collection?
  IF NOT fiction_genres.EXISTS(v_genre_id) THEN
    --Add another element to the varray.
    fiction_genres.EXTEND(1);
    --Assumes v_genre_id is the index of the newly added last element.
    fiction_genres(v_genre_id) := v_genre;
  END IF;
  --Display the total # of elements.
  DBMS_OUTPUT.PUT_LINE('Total # of entries in fiction_genres is: ' || fiction_genres.COUNT);
END IF;
...
...
--Remove all entries.
IF deleting_all_genres THEN
  fiction_genres.DELETE;
END IF;
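The navigation methods from Table 1 also let you loop over a collection without hard-coding its bounds. The following minimal sketch reuses the fiction_genres Varray declared earlier; it is offered as an illustration and is not part of the original example:

DECLARE
  TYPE genres IS VARRAY(4) OF book_genre.genre_name%TYPE;
  fiction_genres genres := genres('MYSTERY', 'SUSPENSE', 'ROMANCE', 'HORROR');
  i PLS_INTEGER;
BEGIN
  --FIRST returns the lowest index in use (always 1 for a populated Varray).
  i := fiction_genres.FIRST;
  WHILE i IS NOT NULL LOOP
    DBMS_OUTPUT.PUT_LINE(i || ': ' || fiction_genres(i));
    --NEXT returns NULL once the last element is passed, ending the loop.
    i := fiction_genres.NEXT(i);
  END LOOP;
END;

This FIRST/NEXT idiom works unchanged on Nested Tables and Associative Arrays, including sparse ones, which is why it is generally preferred over a simple 1..COUNT loop.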
The advantage that Varrays (and Nested Tables) have over Associative Arrays is that they can be stored in the database. For example, you can create the genres type at schema level and use it as a column type on the book_library table.
CREATE TYPE genres IS VARRAY(4) OF VARCHAR2(30);
/
CREATE TABLE book_library (
  library_id  NUMBER,
  name        VARCHAR2(30),
  book_genres genres);
/

(Note that the %TYPE attribute is PL/SQL-only and cannot be used in a SQL CREATE TYPE statement, so the element type must be written out explicitly at schema level.)
When a new library record is added, we can supply values for the book_genres column by using the genres type's constructor:
--Insert a new collection into the column on our book_library table.
INSERT INTO book_library (library_id, name, book_genres)
VALUES (book_library_seq.NEXTVAL, 'Brand New Library',
        genres('FICTION', 'NON-FICTION', 'HISTORY', 'BUSINESS AND FINANCE'));
The query SELECT name, book_genres FROM book_library then returns:
NAME                 BOOK_GENRES
-------------------- ---------------------------------------------
Brand New Library    GENRES('FICTION', 'NON-FICTION', 'HISTORY',
                     'BUSINESS AND FINANCE')
Note how the insertion order of elements in book_genres is retained. When a table contains a Varray type, its data is stored in-line with the rest of the table's data, and when a Varray column is selected, all of its elements are retrieved. The Varray is therefore ideal for storing fixed sets of values that will be processed collectively. It is not possible to perform inserts, updates, or deletes on individual elements of a Varray; if you require your collection to be stored in the database but would like the flexibility to manipulate elements individually, Nested Tables are a better solution.
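That said, the stored elements can still be read row by row: SQL's TABLE expression un-nests a collection column, and the whole column can be replaced with an UPDATE. The sketch below reuses the book_library table from above; the genre values are illustrative, and this is standard Oracle collection support rather than anything specific to the original example.

--Un-nest the Varray so each stored genre becomes its own row (read-only).
SELECT l.name, g.COLUMN_VALUE AS genre
FROM book_library l, TABLE(l.book_genres) g;

--To "change" one element, replace the entire collection in one statement.
UPDATE book_library
SET book_genres = genres('FICTION', 'NON-FICTION', 'HISTORY', 'SCIENCE')
WHERE name = 'Brand New Library';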
While it's easy to understand gas pressure (as gas is heated it expands, increasing pressure, and as it cools, it contracts, decreasing pressure), magnetic pressure may be a tougher concept to grasp. David Dearborn explains, "If you take those places where there are concentrations of magnetic field and put them together, they have pressure of their own. You can feel magnetic pressure when you take two magnets and take the ends of the same polarity and try to put them together. They just don't quite want to go together. That's magnetic pressure."
George Fisher and David Dearborn answer the question, "What is a sunspot?" Think of a sunspot as a bubble of magnetic pressure surrounded by the gas pressure of the photosphere. For the sunspot to exist, the total pressure must be in balance between the region inside and the region outside of the sunspot. David Dearborn elaborates on how magnetic fields keep sunspots cooler: "Outside a sunspot, you have only gas pressure, which depends on the temperature. In the sunspot you have both gas pressure and magnetic field pressure combined." Since the pressure must be in balance, magnetic pressure inside the sunspot allows the gas pressure (and thus the temperature) to remain lower than in the areas outside of the sunspot.
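In symbols, the balance Dearborn describes can be sketched as follows; this equation does not appear on the original page, and it uses the standard SI expression for magnetic pressure:

\[
P_{\text{gas,outside}} \;=\; P_{\text{gas,inside}} + \frac{B^{2}}{2\mu_{0}}
\]

where B is the magnetic field strength inside the spot and \(\mu_{0}\) is the permeability of free space. Because the magnetic term is always positive, the gas pressure inside the spot must be lower than outside; and since gas pressure rises with temperature, the sunspot can stay cooler, and therefore darker, than its surroundings.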
More About the Sun
To better understand the process
that creates sunspots we first need to learn more about the sun. The sun
is by far the largest object in the solar system, containing more than 99.8%
of the total mass of the solar system (Jupiter contains most of the rest).
The sun is made of about 75% hydrogen and 25% helium by mass, with tiny
trace amounts of metals and other compounds. Over time, the nuclear fusion
reactions that fuel the sun are converting hydrogen into helium in its core,
changing the ratio of the two elements.
The energy produced by nuclear fusion in the
core of the sun is carried outward by convective motions in the outer 20-30%
of the sun, called the convection zone. Convection is the process by which
hot gas from the center of the sun rises to the surface, and cooler gas,
which comes to the surface and radiates its heat away, sinks back towards the interior.
The scale of the sun is hard to fathom.
The sun is so large and so dense that it takes about 50,000,000 years for
energy produced at its core to make its way to the sun's surface!
The sun has been radiating light and heat for the past four or five billion
years. The sunspots to which this site is devoted appear as tiny spots on
the sun--but an average-sized sunspot is as large as the earth.
Magnetic Fields and the Solar Dynamo
The sun, like the earth, generates
a magnetic field that permeates the surface and extends out into space.
The sun's magnetic field moves and changes over time, fluctuating in intensity
in different areas of the sun's surface. The sun's magnetic field is thought
to be produced by fluid motions within and just below the convection zone,
something Dearborn refers to as the "solar dynamo," but the ultimate
source of the sun's magnetic field, and the reasons for its fluctuations,
are not well understood.
Núñez Group Research
Biophysics of Bacterial Biofilms
Biofilms are complex microbial communities that grow at interfaces. Bacteria in biofilms are phenotypically different than their planktonic (free swimming) relatives; they adapt to the communal, sessile lifestyle by expressing a specific complement of genes that allows them to optimize their motility, adhesion, and metabolism for this specialized environment. Within these communities bacteria organize themselves to form complex architectures, differentiate to carry out distinct roles, and communicate with other cells using small molecules in ways that were once thought to be characteristic only of eukaryotes. Biofilms are robust, dynamic, and difficult to control or destroy, which makes potential removal agents of great interest in medical, industrial, and agricultural settings.
Our interest has focused on the removal of simple bacterial biofilms by the predatory bacterium Bdellovibrio bacteriovorus. Bdellovibrios are Gram-negative bacteria that consume a wide variety of other Gram-negative bacteria by burrowing into the prey periplasm and breaking down the prey cytoplasm for food, digesting the prey from the inside out.
Using Atomic Force Microscopy, Occidental College professor Eileen Spain, students, and I previously demonstrated that bdellovibrios can consume simple bacterial biofilms at hydrated air-solid and air-liquid interfaces.
Exploring Predation in Bacterial Biofilms by AFM
Atomic force microscopy, or AFM, is a type of scanning probe microscopy. With AFM a very small, sharp probe is scanned across a surface, and small deflections of the probe are measured to detect changes in the topography of the surface.
Bacteria are very difficult to image by AFM under fluid because they float away unless they are fixed in place chemically or mechanically, which can heavily modify or distort the cells of interest. However, bacterial biofilms are naturally adhered to surfaces and thus are well-suited to study by AFM.
In our initial studies, we detected distinct differences between the surfaces of invaded and native prey cells using the deflection data in contact mode. This discovery suggested that the prey's outer membrane was physically stretched out by the predator, that the cell's membrane potential or ion balance was breached, or that the predator made specific changes to the chemical composition of the outer membrane.
We are exploring these possibilities using the force measurement modes of the AFM. In addition to its substantial imaging capabilities, the AFM can measure the forces exerted on the tip as it is pressed into the sample surface or retracted, allowing us to quantify the elasticity and adhesion of a surface. Recently, former postdoc Dr. Megan Ferguson and Catherine Volle (MHC '06) studied the physical properties of five kinds of bacteria living in simple biofilm communities on a glass surface, in collaboration with MHC physics professor Kathy Aidala. Notably, the biofilm-forming cells have a high cellular spring constant, indicating that they are quite stiff and hinting that stiff bacteria may preferentially colonize surfaces in the early stages of biofilm formation. The extension curves indicate that the biofilm-forming cells are coated by a soft layer of associated extracellular polymeric substances (EPS). EPS layers are known to facilitate bacterial adhesion to surfaces during biofilm formation, and we observe them adhering strongly to the tip as it is retracted. Pili and flagella, imaged on all but one of the biofilms, may contribute to adhesion events as well.
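One common simplified way to extract a cellular spring constant from such force curves treats the cantilever and the cell as two springs in series; this generic textbook relation is offered for illustration and is not necessarily the exact analysis used in the studies described here:

\[
\frac{1}{k_{\text{eff}}} = \frac{1}{k_{c}} + \frac{1}{k_{\text{cell}}}
\qquad\Longrightarrow\qquad
k_{\text{cell}} = \frac{k_{c}\,k_{\text{eff}}}{k_{c} - k_{\text{eff}}}
\]

where \(k_{c}\) is the cantilever's spring constant and \(k_{\text{eff}}\) is the effective stiffness taken from the slope of the force-versus-displacement curve on the cell. The stiffer the cell, the closer that slope approaches the response measured on bare glass.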
Using the same techniques, we probed E. coli biofilms during predation by bdellovibrios. Bdelloplasts (prey cells that have been invaded by the predator) are less elastic (i.e., have a lower turgor pressure) and more adhesive than uninfected prey cells. Notably, these trends are consistent with the biochemical changes that are known to occur in the prey cell following infection by Bdellovibrio.
These studies provide a provocative foundation for our exploration of the biophysics of bacterial biofilms.
Megan E. Núñez, Department of Chemistry, Mount Holyoke College
50 College Street, South Hadley MA 01075.
phone (413) 538-2449. fax (413) 538-2327. email menunez at mtholyoke dot edu. | <urn:uuid:6152325f-a7cf-49c0-8dbe-402801fb01b1> | 2.6875 | 921 | Academic Writing | Science & Tech. | 20.326881 |