That dark matter has never been found is no deterrent to the physicists who are looking for it.
"Even if we don't know what dark matter is, we know how it must act," said Eduardo Abancens, a physicist at the University of Zaragoza in Spain and the designer of a prototype dark matter detector.
According to physicists, only around 5 per cent of what makes up the universe can presently be detected. The existence of dark matter is inferred from the behaviour of faraway galaxies, which move in ways that can only be explained by a gravitational pull caused by more mass than can be seen. They estimate dark matter represents around 20 per cent of the universe, with the other 75 per cent made up of dark energy, a repulsive force that is causing the universe to expand at an ever-quickening pace.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2006 December 31
Explanation: The past year was extraordinary for the discovery of extraterrestrial fountains and flows -- some offering new potential in the search for liquid water and the origin of life beyond planet Earth. Increased evidence was uncovered that fountains spurt not only from Saturn's moon Enceladus, but from the dunes of Mars as well. Lakes were found on Saturn's moon Titan, and the residue of a flowing liquid was discovered on the walls of Martian craters. The diverse Solar System fluidity may involve forms of slushy water-ice, methane, or sublimating carbon dioxide. Pictured above, the light-colored path below the image center is hypothesized to have been created sometime in just the past few years by liquid water flowing across the surface of Mars.
Authors & editors:
Jerry Bonnell (USRA)
NASA Official: Phillip Newman Specific rights apply.
A service of: ASD at NASA / GSFC
& Michigan Tech. U.
The Met Office is hot out of the blocks on the climate front this year, issuing the first "climate disaster" story of the year via the BBC's Roger Harrabin.
The frequency of extreme rainfall in the UK may be increasing, according to analysis by the Met Office.
Statistics show that days of particularly heavy rainfall have become more common since 1960.
The analysis is still preliminary, but the apparent trend mirrors increases in extreme rain seen in other parts of the world.
It comes as the Met Office prepares to reveal whether 2012 was the wettest year on record in the UK.
Given the apparently overwhelming drought risk in the South East of England - of similar magnitude to the Sahara apparently - we should probably be grateful for this rain. And while we're on the topic, let's not forget the Institute of Civil Engineers' report on water availability in the UK:
By the 2050s, summer river flows may reduce by 35% in the driest parts of England and by 15% for the wetter river basin regions in Scotland. This will put severe pressure on current abstractions of water.
This being the year of the IPCC's Fifth Assessment Report, I think we should expect a lot of this kind of thing in coming months.
The Science of Hob, Kind Of
Ever wonder why extremely advanced machines in science fiction often have an organic look and are sometimes able to morph in decidedly unmechanical ways? The term nanotechnology is tossed around a lot as an explanation, but it covers a very wide range of technologies and can have a lot of meanings. One particular way of achieving such a sophisticated machine is to use utility fog, a swarm of microscopic robots called foglets.
Foglets are hypothetical microscopic robots able to extend and retract their arms in ways that would allow them to perform complex structural reconfigurations, similar to how carbon, depending on its lattice structure, can take the form of different substances like diamond or graphite. Since the color, hardness, shape and opacity of an object depend on the spacing and orientation of its molecules, foglets would be able to take on virtually any appearance. Extend their arms widely enough and they could become invisible and seemingly intangible (hence the name "fog"); shorten them and they become a durable solid.
Each individual foglet wouldn’t necessarily be very smart, but since they can easily form conductive pathways with one another they would be able to share energy and information at a blinding speed (generally the smaller the scale of computation, the faster things can get due to simplified entropy). In short, utility fog can function as a shapeshifting supercomputer. Hob’s body functions in a similar way, with no central processor but instead trillions of molecular robots all carrying data, changing shape and performing operations. In Hob’s case, the actual computations are performed at the molecular level, similar to a chemical computer. This is what Kimiko means in Hob #7 when she says he could be cut down almost to the molecule and still function. Presumably Hob’s “operating system” and memories are safely stored on only a few foglet-type molecules that are repeated over and over, which is why he’s referred to as somewhat holographic.
Constructing foglets is theoretically possible, but our construction capabilities aren't yet refined enough to build these little robots molecule by molecule. Give it 20 or 30 years, though, and we may see the "Swiss Army knife" of nanotechnology do some incredible things.
Graphics For Media Use
This page gives you access to the original graphics files used to produce the USGS Summary on the new San Francisco Bay Region Earthquake Probability Study. The graphics are available in either Adobe Acrobat .pdf or .tif formats.
Please credit the USGS as the source of these graphics.
Probabilities (shown in boxes) of one or more major (M>=6.7) earthquakes on faults in the San Francisco Bay Region during the coming 30 years. The threat of earthquakes extends across the entire San Francisco Bay region, and a major quake is likely before 2032. Knowing this will help people make informed decisions as they continue to prepare for future quakes.
Note: This map has been superseded by a new version.
Faults and plate motions in the San Francisco Bay Region. Faults in the region, principally the seven faults shown here and characterized in this report, accommodate about 40 mm/yr of mostly strike-slip motion between the Pacific and North American tectonic plates. Yellow lines show the locations of the 1868 M6.8 earthquake on the southern portion of the Hayward Fault and the 1989 M6.9 Loma Prieta earthquake near the San Andreas fault northeast of Monterey Bay.
Shaking hazard of the SFBR, expressed as the modified Mercalli Intensity (MMI) having even odds of being exceeded in 30 years. Shaking hazard is high (MMI VII) throughout the region, and especially pronounced on the soft-soil areas surrounding the bays and the Sacramento River Delta.
Earthquakes M>=5.5 in the SFBR since 1850. The decrease in the rate of large earthquakes in the 20th century has been attributed to a region-wide drop in stress due to the 1906 M7.8 earthquake, the "stress shadow" hypothesis.
Scenario ShakeMap illustrating the strength and regional extent of shaking that can be expected from a future M 6.7 earthquake on the southern Hayward fault.
Helmholtz resonance is the phenomenon of air resonance in a cavity, such as when one blows across the top of an empty bottle. The name comes from a device created in the 1850s by Hermann von Helmholtz, the "Helmholtz resonator", which he used in his classic study of acoustic science to identify the various frequencies or musical pitches present in music and other complex sounds.
Qualitative explanation
When air is forced into a cavity, the pressure inside increases. When the external force pushing the air into the cavity is removed, the higher-pressure air inside will flow out. The cavity will be left at a pressure slightly lower than the outside, causing air to be drawn back in. This process repeats with the magnitude of the pressure changes decreasing each time.
The air in the port (the neck of the chamber) has mass. Since it is in motion, it possesses some momentum. A longer port would make for a larger mass, and vice-versa. The diameter of the port is related to the mass of air and the volume of the chamber. A port that is too small in area for the chamber volume will "choke" the flow while one that is too large in area for the chamber volume tends to reduce the momentum of the air in the port.
Quantitative explanation
It can be shown that the resonant angular frequency is given by:

ω = √(γ · A² · P0 / (m · V0))

where:
- γ (gamma) is the adiabatic index or ratio of specific heats. This value is usually 1.4 for air and diatomic gases.
- A is the cross-sectional area of the neck
- m is the mass of air in the neck
- P0 is the static pressure in the cavity
- V0 is the static volume of the cavity
For cylindrical or rectangular necks, we have A = Vn / L, where
- L is the length of the neck
- Vn is the volume of air in the neck
By the definition of density, ρ = m / Vn, thus m = ρ · A · L, and:

ω = √(γ · A · P0 / (ρ · L · V0))

The speed of sound in a gas is given by v = √(γ · P0 / ρ); thus, the frequency of the resonance is:

f = (v / 2π) · √(A / (V0 · L))
The length of the neck appears in the denominator because the inertia of the air in the neck is proportional to the length. The volume of the cavity appears in the denominator because the spring constant of the air in the cavity is inversely proportional to its volume. The area of the neck matters for two reasons. Increasing the area of the neck increases the inertia of the air proportionately, but also decreases the velocity at which the air rushes in and out.
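As a concrete check, the resonance frequency f = (v / 2π) · √(A / (V0 · L)) can be evaluated for an ordinary bottle. This is a minimal sketch: the bottle dimensions are hypothetical, and the simple formula ignores end corrections to the effective neck length, so a real bottle will sound somewhat lower.

```python
import math

def helmholtz_frequency(neck_area, neck_length, cavity_volume, speed_of_sound=343.0):
    """f = (v / 2*pi) * sqrt(A / (V0 * L)), with SI units throughout."""
    return (speed_of_sound / (2 * math.pi)) * math.sqrt(
        neck_area / (cavity_volume * neck_length)
    )

# Hypothetical empty bottle: 2.5 cm neck diameter, 8 cm neck length,
# 0.75 litre cavity.
area = math.pi * 0.0125 ** 2                 # ~4.9e-4 m^2
f = helmholtz_frequency(area, 0.08, 0.00075)
print(round(f))                              # → 156 (Hz), a low hum
```

Note how the result shifts in the expected directions: doubling the cavity volume or the neck length lowers the pitch, while widening the neck raises it.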
Depending on the exact shape of the hole, the relative thickness of the sheet with respect to the size of the hole, and the size of the cavity, this formula has limitations. More sophisticated formulas can still be derived analytically, with similar physical explanations (although some differences matter); see for example the book by F. Mechel. Furthermore, if the mean flow over the resonator is high (typically with a Mach number above 0.3), some corrections must be applied.
Helmholtz resonance finds application in internal combustion engines (see airbox), subwoofers and acoustics. Intake systems described as 'Helmholtz systems' have been used in the Chrysler V10 engine built for both the Dodge Viper and the Ram pickup truck, and in several of the Buell tube-frame series of motorcycles. In stringed instruments such as the guitar and violin, the resonance curve of the instrument has the Helmholtz resonance as one of its peaks, along with other peaks coming from resonances of the vibrating wood. An ocarina is essentially a Helmholtz resonator in which the combined area of the opened finger holes determines the note played by the instrument. The West African djembe has a relatively small neck area, giving it a deep bass tone; it has been part of West African music for centuries, making it much older than our knowledge of the physics involved.
The theory of Helmholtz resonators is used in motorcycle and car exhausts to alter the sound of the exhaust note and to change the power delivery by adding chambers to the exhaust. Exhaust resonators are also used to reduce potentially loud and obnoxious engine noise; their dimensions are calculated so that the waves reflected by the resonator help cancel out certain frequencies of sound in the exhaust.
In some two-stroke engines, a Helmholtz resonator is used to remove the need for a reed valve. A similar effect is also used in the exhaust system of most two-stroke engines, using a reflected pressure pulse to supercharge the cylinder (see Kadenacy effect).
Helmholtz resonators are used in architectural acoustics to reduce undesirable low frequency sounds (standing waves, etc.) by building a resonator tuned to the problem frequency, thereby eliminating it.
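Sizing such an absorber means inverting the resonance formula: for a target frequency f, the required cavity volume is V0 = A · v² / (L · (2πf)²). A short sketch, with hypothetical port dimensions and problem frequency:

```python
import math

def cavity_volume_for(frequency, neck_area, neck_length, speed_of_sound=343.0):
    """Invert f = (v/2*pi)*sqrt(A/(V0*L)) for the cavity volume V0 (SI units)."""
    return neck_area * speed_of_sound ** 2 / (
        neck_length * (2 * math.pi * frequency) ** 2
    )

# Hypothetical room mode at 55 Hz, treated with a resonator whose port is
# 5 cm in diameter and 10 cm long:
v0 = cavity_volume_for(55.0, math.pi * 0.025 ** 2, 0.10)
print(round(v0 * 1000, 1), "litres")   # → 19.3 litres
```

In practice the resonator is also damped (e.g. with mineral wool in the cavity) so that it absorbs energy at the problem frequency rather than merely re-radiating it.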
Helmholtz resonators are also used to build acoustic liners for reducing the noise of aircraft engines, for example. These acoustic liners are made of two components:
- a simple sheet of metal (or other material) perforated with little holes spaced out in a regular or irregular pattern; this is called a resistive sheet;
- a series of so-called honeycomb cavities (holes with a honeycomb shape, but in fact only their volume matters).
Such acoustic liners are used in most of today's aircraft engines. The perforated sheet is usually visible from inside or outside the airplane; the honeycomb is just under it. The thickness of the perforated sheet is of importance, as shown above. Sometimes there are two layers of liners; they are then called "2-DOF liners" (DOF meaning Degrees Of Freedom), as opposed to "single DOF liners".
- Helmholtz, Hermann von (1885), On the sensations of tone as a physiological basis for the theory of music, Second English Edition, translated by Alexander J. Ellis. London: Longmans, Green, and Co., p. 44. Retrieved 2010-10-12.
- Derivation of the equation for the resonant frequency of a Helmholtz resonator.
- Formulas of Acoustics
- "Ocarina Physics - How Ocarinas Work". ocarinaforest.com. Retrieved 2012-12-31.
- Wings that waggle could cut aircraft emissions by 20%
In geology, a fault is a planar fracture or discontinuity in a volume of rock across which there has been significant displacement as a result of earth movement. Large faults within the Earth's crust result from the action of plate tectonic forces, with the largest forming the boundaries between the plates, such as subduction zones or transform faults. Energy release associated with rapid movement on active faults is the cause of most earthquakes.
A fault line is the surface trace of a fault, the line of intersection between the fault plane and the Earth's surface.
Since faults do not usually consist of a single, clean fracture, geologists use the term fault zone when referring to the zone of complex deformation associated with the fault plane.
The two sides of a non-vertical fault are known as the hanging wall and footwall. By definition, the hanging wall occurs above the fault plane and the footwall occurs below the fault. This terminology comes from mining: when working a tabular ore body, the miner stood with the footwall under his feet and with the hanging wall hanging above him.
Mechanisms of faulting
Because of friction and the rigidity of the rock, the rocks cannot glide or flow past each other. Rather, stress builds up in rocks and when it reaches a level that exceeds the strain threshold, the accumulated potential energy is dissipated by the release of strain, which is focused into a plane along which relative motion is accommodated—the fault. Strain is both accumulative and instantaneous depending on the rheology of the rock; the ductile lower crust and mantle accumulates deformation gradually via shearing, whereas the brittle upper crust reacts by fracture - instantaneous stress release - to cause motion along the fault. A fault in ductile rocks can also release instantaneously when the strain rate is too great. The energy released by instantaneous strain release causes earthquakes, a common phenomenon along transform boundaries.
Microfracturing and accelerating moment release theory
Microfracturing, or microseismicity, is often thought of as a symptom caused by rocks under strain, where small-scale failures, perhaps on areas the size of a dinner plate or smaller, release stress under high strain conditions. Only when sufficient microfractures link up into a large slip surface can a large seismic event or earthquake occur. According to this theory, after a large earthquake, the majority of the stress is released and the frequency of microfracturing is exponentially lower. A connected theory, accelerating moment release (AMR), claims that the seismicity rate accelerates in a well-behaved way prior to major earthquakes, and that it might provide a helpful tool for earthquake prediction on the scale of days to years. AMR may be used to predict rock failures within mines, and applications are being attempted for the portions of faults within brittle rheological conditions. Researchers observe similar behavior in tremors preceding volcanic eruptions.
Slip, heave, throw
Slip is defined as the relative movement of geological features present on either side of a fault plane, and is a displacement vector. A fault's sense of slip is defined as the relative motion of the rock on each side of the fault with respect to the other side. In measuring the horizontal or vertical separation, the throw of the fault is the vertical component of the dip separation and the heave of the fault is the horizontal component, as in "throw up and heave out".
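The throw/heave decomposition above is simple trigonometry on the dip separation. A small sketch (the displacement and dip values are made up for illustration):

```python
import math

def throw_and_heave(dip_separation, dip_degrees):
    """Split the dip separation (slip measured down-dip in the fault plane)
    into throw (vertical component) and heave (horizontal component)."""
    dip = math.radians(dip_degrees)
    throw = dip_separation * math.sin(dip)   # "throw up"
    heave = dip_separation * math.cos(dip)   # "heave out"
    return throw, heave

# Hypothetical normal fault: 10 m of dip separation on a 60-degree dip.
t, h = throw_and_heave(10.0, 60.0)
print(round(t, 2), round(h, 2))              # → 8.66 5.0
```

A steeply dipping fault is thus mostly throw; a shallow thrust with the same slip is mostly heave.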
The vector of slip can be qualitatively assessed by studying the fault bend folding, i.e., the drag folding of strata on either side of the fault; the direction and magnitude of heave and throw can be measured only by finding common intersection points on either side of the fault (called a piercing point). In practice, it is usually only possible to find the slip direction of faults, and an approximation of the heave and throw vector.
Fault types
Geologists categorize faults into three groups based on the sense of slip:
- a fault where the relative movement (or slip) on the fault plane is approximately vertical is known as a dip-slip fault
- where the slip is approximately horizontal, the fault is known as a transcurrent or strike-slip fault
- an oblique-slip fault has non-zero components of both strike and dip slip.
For all naming distinctions, it is the orientation of the net dip and sense of slip of the fault which must be considered, not the present-day orientation, which may have been altered by local or regional folding or tilting.
Dip-slip faults
Dip-slip faults can occur either as "reverse" or as "normal" faults. A normal fault occurs when the crust is extended. Alternatively such a fault can be called an extensional fault. The hanging wall moves downward, relative to the footwall. A downthrown block between two normal faults dipping towards each other is called a graben. An upthrown block between two normal faults dipping away from each other is called a horst. Low-angle normal faults with regional tectonic significance may be designated detachment faults.
A reverse fault is the opposite of a normal fault—the hanging wall moves up relative to the footwall. Reverse faults indicate compressive shortening of the crust. The dip of a reverse fault is relatively steep, greater than 45°.
A thrust fault has the same sense of motion as a reverse fault, but with the dip of the fault plane at less than 45°. Thrust faults typically form ramps, flats and fault-bend (hanging wall and foot wall) folds. Thrust faults form nappes and klippen in the large thrust belts. Subduction zones are a special class of thrusts that form the largest faults on Earth and give rise to the largest earthquakes.
The fault plane is the plane that represents the fracture surface of a fault. Flat segments of thrust fault planes are known as flats, and inclined sections of the thrust are known as ramps. Typically, thrust faults move within formations by forming flats, and climb up section with ramps.
Fault-bend folds are formed by movement of the hanging wall over a non-planar fault surface and are found associated with both extensional and thrust faults.
Faults may be reactivated at a later time with the movement in the opposite direction to the original movement (fault inversion). A normal fault may therefore become a reverse fault and vice versa.
Strike-slip faults
The fault surface is usually near vertical, and the footwall moves laterally, either left or right, with very little vertical motion. Strike-slip faults with left-lateral motion are also known as sinistral faults; those with right-lateral motion are also known as dextral faults. Each is defined by the direction of movement of the ground on the opposite side of the fault from an observer.
A special class of strike-slip faults is the transform fault, where such faults form a plate boundary. These are found related to offsets in spreading centers, such as mid-ocean ridges, and less commonly within continental lithosphere, such as the San Andreas Fault in California, or the Alpine Fault, New Zealand. Transform faults are also referred to as conservative plate boundaries, as lithosphere is neither created nor destroyed.
Oblique-slip faults
A fault which has a component of dip-slip and a component of strike-slip is termed an oblique-slip fault. Nearly all faults will have some component of both dip-slip and strike-slip, so defining a fault as oblique requires both dip and strike components to be measurable and significant. Some oblique faults occur within transtensional and transpressional regimes, others occur where the direction of extension or shortening changes during the deformation but the earlier formed faults remain active.
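One common way to make this three-way classification concrete is to look at the rake, the slip direction measured within the fault plane. The sketch below uses the standard Aki & Richards sign convention (0° = left-lateral, ±180° = right-lateral, 90° = reverse, -90° = normal), which is not spelled out in the text above, and the 20° tolerance is an arbitrary illustrative cutoff:

```python
def classify_fault(rake_degrees, tol=20.0):
    """Rough fault classification from the rake angle. Anything not close
    to a pure end member counts as oblique-slip."""
    r = rake_degrees
    if abs(r) <= tol or abs(abs(r) - 180.0) <= tol:
        return "strike-slip"
    if abs(r - 90.0) <= tol:
        return "reverse (dip-slip)"
    if abs(r + 90.0) <= tol:
        return "normal (dip-slip)"
    return "oblique-slip"

print(classify_fault(-88))   # → normal (dip-slip)
print(classify_fault(175))   # → strike-slip
print(classify_fault(135))   # → oblique-slip
```

As the text notes, nearly all real faults have some component of both slips, which is why the "pure" categories need a tolerance at all.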
The hade angle is defined as the complement of the dip angle; it is the angle between the fault plane and a vertical plane that strikes parallel to the fault.
Listric fault
A listric fault is a type of fault in which the fault plane is curved. The dip of the fault plane becomes shallower with increased depth and may flatten into a sub-horizontal décollement.
Ring fault
Fault rock
All faults have a measurable thickness, made up of deformed rock characteristic of the level in the crust where the faulting happened, of the rock types affected by the fault and of the presence and nature of any mineralising fluids. Fault rocks are classified by their textures and the implied mechanism of deformation. A fault that passes through different levels of the lithosphere will have many different types of fault rock developed along its surface. Continued dip-slip displacement tends to juxtapose fault rocks characteristic of different crustal levels, with varying degrees of overprinting. This effect is particularly clear in the case of detachment faults and major thrust faults.
The main types of fault rock include:
- Cataclasite - a fault rock which is cohesive with a poorly developed or absent planar fabric, or which is incohesive, characterised by generally angular clasts and rock fragments in a finer-grained matrix of similar composition.
- Tectonic or Fault breccia - a medium- to coarse-grained cataclasite containing >30% visible fragments.
- Fault gouge - an incohesive, clay-rich fine- to ultrafine-grained cataclasite, which may possess a planar fabric and contains <30% visible fragments. Rock clasts may be present.
- Clay smear - clay-rich fault gouge formed in sedimentary sequences containing clay-rich layers which are strongly deformed and sheared into the fault gouge.
- Mylonite - a fault rock which is cohesive and characterized by a well developed planar fabric resulting from tectonic reduction of grain size, and commonly containing rounded porphyroclasts and rock fragments of similar composition to minerals in the matrix
- Pseudotachylite - ultrafine-grained vitreous-looking material, usually black and flinty in appearance, occurring as thin planar veins, injection veins or as a matrix to pseudoconglomerates or breccias, which infills dilation fractures in the host rock.
Impacts on structures and people
In geotechnical engineering a fault often forms a discontinuity that may have a large influence on the mechanical behavior (strength, deformation, etc.) of soil and rock masses in, for example, tunnel, foundation, or slope construction.
The level of a fault's activity can be critical for (1) locating buildings, tanks, and pipelines and (2) assessing the seismic shaking and tsunami hazard to infrastructure and people in the vicinity. In California, for example, new building construction has been prohibited directly on or near faults that have moved within the Holocene Epoch (the last 11,000 years) (Hart and Bryant, 1997). Also, faults that have shown movement during the Holocene plus Pleistocene Epochs (the last 2.6 million years) may receive consideration, especially for critical structures such as power plants, dams, hospitals, and schools. Geologists assess a fault's age by studying soil features seen in shallow excavations and geomorphology seen in aerial photographs. Subsurface clues include shears and their relationships to carbonate nodules, translocated clay, and iron oxide mineralization, in the case of older soil, and lack of such signs in the case of younger soil. Radiocarbon dating of organic material buried next to or over a fault shear is often critical in distinguishing active from inactive faults. From such relationships, paleoseismologists can estimate the sizes of past earthquakes over the past several hundred years, and develop rough projections of future fault activity.
See also
- USGS (30 April 2003). "Where are the Fault Lines in the United States East of the Rocky Mountains?". Retrieved 6 March 2010.
- USGS. "Hanging wall Foot wall". Visual Glossary. Retrieved 2 April 2010.
- Tingley, J.V.; Pizarro K.A. (2000). Traveling America's loneliest road: a geologic and natural history tour. Nevada Bureau of Mines and Geology Special Publication 26. Nevada Bureau of Mines and Geology. p. 132. ISBN 978-1-888035-05-6. Retrieved 2010-04-02.
- Marquis, John; Hafner, Katrin; Hauksson, Egill. "The Properties of Fault Slip". Investigating Earthquakes through Regional Seismicity. Southern California Earthquake Center. p. 14. Retrieved 19 March 2010.
- "Faults: Introduction". University of California, Santa Cruz. Retrieved 19 March 2010.
- Park, R.G. (2004). Foundation of Structural Geology (3 ed.). Routledge. p. 11. ISBN 978-0-7487-5802-9.
- Hart, E.W., and Bryant, W.A., 1997, Fault rupture hazard in California: Alquist-Priolo earthquake fault zoning act with index to earthquake fault zone maps: California Division of Mines and Geology Special Publication 42.
- Brodie, Kate; Fettes, Douglas; Harte, Ben; Schmid, Rolf (29 January 2007). 3. Structural terms including fault rock terms. Recommendations by the IUGS Subcommission on the Systematics of Metamorphic Rocks.
- Davis, George H.; Reynolds, Stephen J. (1996). "Folds". Structural Geology of Rocks and Regions (2nd ed.). New York: John Wiley & Sons. pp. 372–424. ISBN 0-471-52621-5.
- Fichter, Lynn S.; Baedke, Steve J. (13 September 2000). "A Primer on Appalachian Structural Geology". James Madison University. Retrieved 19 March 2010.
- McKnight, Tom L.; Hess, Darrel (2000). "The Internal Processes: Types of Faults". Physical Geography: A Landscape Appreciation. Upper Saddle River, N.J.: Prentice Hall. pp. 416–7. ISBN 0-13-020263-0.
Science Applications and Data Distribution with International Partners
One of the major purposes of NASA's support for collaboration in coral reef mapping is to find ways to get remote sensing data and derived maps into the hands of the managers who are making decisions about coral reef ecosystems. NASA has supported collaboration with three international non-governmental organizations to help facilitate their use of remote sensing data and to improve global accessibility to maps and data. Particular areas of emphasis are transferring the technological capability to use remote sensing data to map coral reefs and linked land areas, and fostering linkages for data distribution through the partners.
UNEP World Conservation Monitoring Centre
The UNEP-WCMC (http://www.unep-wcmc.org/index.html) serves as a major provider of information for developing conservation policies around the world. They serve as the primary reef map producers for the International Coral Reef Action Network (ICRAN) (http://www.icran.org). They produced and maintain the current existing global coral reef map from a compilation of cartographic sources, and published the World Atlas of Coral Reefs (http://www.unep-wcmc.org/marine/coralatlas/) (Spalding et al. 2001). Although the atlas is a landmark product and provides the best current estimate of the global area of emergent reef crest, it is limited by the variety of cartographic sources that were used. Only 30% of the reefs in the atlas had source data at a 1:250,000 scale or better. Sources also differed in their definitions of the reef areas that were mapped. Spatial and positional accuracy in a cartographic compilation product is also a difficult challenge.
Through partnerships with NASA, the global SeaWiFS bathymetry map was used to identify and correct errors in the WCMC reef map. For example, in this map of Kiritimati, Kiribati, the light blue pixels represent shallow depths and the black areas represent land. The existing WCMC map (red lines) has positional inaccuracies with the reef mapped on top of the land. The shallow bathymetry allows correction of the position of the reef (yellow line on the south side of the island).
Combining the SeaWiFS bathymetry product with the WCMC map has increased map accuracy for use in evaluating the global distribution of marine protected areas (Green et al. in review). The SeaWiFS bathymetry allowed identification of the level of protection of tropical shallow marine habitats around the world. Important considerations in evaluating whether shallow marine and coral reef environments are protected are the degree to which designated marine protected areas overlap (as shown in this map of Grand Cul-de-Sac Marin Reserve, Guadeloupe), and whether protected habitats are representative of the marine biodiversity of different regions.
WCMC personnel were trained by and have participated in the Millennium Coral Reef Mapping project at the University of South Florida. After the Millennium Maps are complete, a reduced "reef-no reef" global map product with uniform global accuracy will be derived as a substantial update to the World Atlas of Coral Reefs.
An important area in global coral reef conservation is evaluating changes in related marine ecosystems such as mangroves and seagrasses. WCMC is also collaborating with Johnson Space Center and Florida International University in evaluating methods for updating the World Mangrove Atlas (Spalding et al. 1997) using Landsat 7 and MODIS data.
World Resources Institute
Reefs at Risk (http://wri.igc.org/reefsatrisk/) is a series of projects at the World Resources Institute that develop indicators to evaluate the human pressures on coral reefs. The projects, which entail extensive collaboration with partners across each region, evaluate threats to coral reefs from coastal development, marine pollution, pollution and sedimentation from inland sources, and overexploitation of resources. A global analysis was released in 1998 (Bryant et al. 1998) and was important for raising global awareness of the extent of threats to reefs around the world. A more detailed analysis of Reefs at Risk in Southeast Asia was released in 2002 (Burke et al. 2002). A more detailed regional analysis for the Caribbean is currently in progress and expected to be released in 2004.
The main objective of the collaboration between NASA and WRI was to help improve the accuracy of input data used in the threat models in the Caribbean regional study. The Reefs at Risk methodology relies on GIS-based modeling with existing map datasets. In previous studies, the threats of pollution and sedimentation were evaluated using the 1-km IGBP DISCover/USGS Global Land Cover dataset (Belward et al. 1999) and WCMC coral reef maps as base information. These coarse datasets were not suitable for modeling the risks to many small islands in the Caribbean such as the Lesser Antilles. Such analyses need to be made at a finer spatial resolution.
Since Millennium Coral Reef Maps for the Caribbean are being completed simultaneously with the Reefs at Risk Caribbean analysis, Millennium Maps are being delivered directly to WRI to allow updated coral reef data to be used in the Reefs at Risk Caribbean analyses.
Through NASA support, Florida International University has been collaborating with WRI in evaluating different sources of higher resolution land cover data by comparing three global land cover maps (IGBP DISCover/USGS Global Land Cover (http://edcdaac.usgs.gov/glcc/glcc.html), Boston University's MODIS/Terra Land Cover (http://geography.bu.edu/landcover/index.html), and EarthSat's Geocover-LC (http://www.geocover.com/gc_lc/index.html)) with a custom classification using Landsat 7 data. WRI partners have been providing expert local review of the maps, and the effect of the different source data on the land pollution and sedimentation risk models is currently being determined. This will allow WRI to better understand their data needs as they complete subsequent regional Reefs at Risk Analyses.
Reefbase (http://www.reefbase.org) is the premier online information system on coral reefs. It includes a database of information on the location, status, legislation and management of coral reefs around the world, archives of coral bleaching observations, and data collected by the Global Coral Reef Monitoring Network (http://www.gcrmn.org/). It serves as the primary data archive and distribution center for the International Coral Reef Action Network (ICRAN) (http://www.icran.org/).
As part of their collaboration with NASA, ReefBase developed a new online interactive map server that allows users to interactively assemble and view key reef mapping datasets, including the coral reef and mangrove maps from UNEP-WCMC, Reefs at Risk data, ReefCheck maps, NOAA AVHRR Coral Reef Hotspot data, NASA remote sensing imagery of reefs, and the SeaWiFS shallow bathymetry product.
The Millennium Coral Reef Map products were developed so that they could be easily incorporated into ReefBase when they are complete. Test datasets have already been successfully included in the interactive map server. Plans are also being implemented to let users view a reduced-resolution Landsat-7 layer as an option, and to link to the Landsat Coral Reef Data Archive so that users can seamlessly find the underlying data needed for management applications.
An experiment was conducted to compare the strength of welds produced by four different welding techniques. Each technique was used to weld five pairs of metal plates. The average strengths for the five welds for each technique were:
Technique:         A    B    C    D
Average strength:  69   83   75   71
The estimate of experimental error variance was MSE = 15 with 16 degrees of freedom.
Apply the Tukey method to the data using an overall error rate of 0.05 in order to compare all pairs of treatments. Which treatments produce the same average strength? | <urn:uuid:9bcbdd7d-bdf1-43dc-afb8-4efabcd563f4> | 3.34375 | 109 | Tutorial | Science & Tech. | 60.357317 |
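For readers who want to check the arithmetic, here is a sketch of the Tukey-HSD computation in Python. The studentized-range quantile q(0.05; k = 4, df = 16) ≈ 4.05 is taken from a standard table; it is an outside value, not part of the problem statement.

```python
from itertools import combinations
from math import sqrt

# Tukey honestly-significant-difference check for the welding data.
# q is the studentized-range quantile q(0.05; k=4, df=16), read from a
# table -- an assumed outside value, not given in the problem.
means = {"A": 69, "B": 83, "C": 75, "D": 71}
MSE, n, q = 15, 5, 4.046
hsd = q * sqrt(MSE / n)          # ~7.01

same = [(a, b) for a, b in combinations(means, 2)
        if abs(means[a] - means[b]) < hsd]
print(f"HSD = {hsd:.2f}")
print("pairs with no detectable difference:", same)
```

With HSD ≈ 7.01, only technique B (mean 83) stands apart; A, C, and D are mutually indistinguishable at the 0.05 level.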
Annu. Rev. Astron. Astrophys. 1997. 35
Copyright © 1997. All rights reserved.
1.2. Distances and Hubble Law
The Hubble law between distance and cosmological redshift is a blessing for the cosmographer. A great motivation for investigation of the distance scale, it is also helpful for tackling the problems mentioned above.
Systematic errors in obtained distances are often recognized as a deviation from the linear Hubble law, and the reality and speed of galaxy streams, for example, closely depend on how well distances to galaxies are known. When one speaks about the choice of one or another distance scale, this is intimately connected with the Hubble constant H₀. In the Friedmann model, H₀ together with q₀ allows one to extend the distance scale to high cosmological redshifts where classical distance indicators are lost. However, this extension is not the topic of my review [for discussion of such questions, see e.g. Rowan-Robinson (1985)].
To discuss biases in extragalactic distances, one might like to know what "distance" represents. As McVittie (1974) says, distance is a degree of remoteness; in some sense or another, faint galaxies must be remote. Only a cosmological model gives the exact recipe for calculating from the observed properties of an object its distance (which may be of different kinds). Because our basic data consist of directions and fluxes of radiation, we are usually concerned with luminosity or diameter distances. An error, say, in the luminosity distance in a transparent space means that if one puts a genuine standard candle beside the distance indicator in question, its photometric distance modulus is not the same.
Among the variety of distance concepts, one would like to think that there exists a fundamental one, corresponding to meter sticks put one after another from the Sun to the center of a galaxy. For instance, in the Friedmann universe, the theoretical and not directly measurable "momentary" proper distance is often in the background of our minds as the basic one (Harrison 1993), and the luminosity and other types of distances are the necessary workhorses. This review refers to the local galaxy universe where the different distance types are practically the same; in any case, the tiny differences between them are overwhelmed by selection biases and other sources of error. Another allowance from Nature is that in this distance range, evolutionary effects can be forgotten: In an evolving universe, distance indicators often require that the look-back time should be much less than the global evolutionary time scale. | <urn:uuid:8bd21e13-bfaf-44c1-ad58-90dfa92ef14b> | 3.109375 | 526 | Academic Writing | Science & Tech. | 31.386482 |
Two trees 20 metres and 30 metres long, lean across a passageway between two vertical walls. They cross at a point 8 metres above the ground. What is the distance between the foot of the trees?
The equation a^x + b^x = 1 can be solved algebraically in special cases, but in general it can only be solved by numerical methods. In this short problem, try to find the location of the roots of some unusual functions by finding where they change sign.
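The sign-change idea suggested here also settles the tree problem above numerically. The sketch below assumes the standard crossed-ladders geometry (each tree's foot at the base of the opposite wall), which gives the crossing-height relation 1/8 = 1/sqrt(30^2 - w^2) + 1/sqrt(20^2 - w^2):

```python
from math import sqrt

# Crossed-ladders model (an assumed geometry): each tree's foot sits at
# the base of the opposite wall, so for passage width w the trees reach
# heights sqrt(30^2 - w^2) and sqrt(20^2 - w^2), and the crossing height
# h = 8 satisfies 1/h = 1/sqrt(900 - w^2) + 1/sqrt(400 - w^2).
def f(w):
    return 1 / sqrt(900 - w**2) + 1 / sqrt(400 - w**2) - 1 / 8

# f changes sign between w = 16 and w = 17, so bisect that bracket.
lo, hi = 16.0, 17.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(f"width ~= {lo:.2f} m")    # ~16.21 m
```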
04 November 2010
Posted in Crosby Observatory
Measuring and evaluating the brightness of stars can be traced back to the Greek astronomer and mathematician Hipparchus, who lived from about 190 to 120 BC. He is responsible for producing a catalogue of the comparative brightness and positions of over 850 stars. Hipparchus formed the apparent magnitude scale to describe the brightness of a star as seen by an observer on Earth.
How does this scale work? The brighter the celestial object appears, the lower the value of its magnitude. For instance, the faintest objects you can see using the naked eye are indicated with a magnitude of 6, while the Sun on the apparent magnitude scale is –26.74. However, most of the stars we gaze at in an urban neighborhood with our eyes are usually somewhere around 3 to 4 and if using binoculars, the limit is 10. More recently, through the use of the powerful Hubble Space Telescope, astronomers have located stars with magnitudes of 30+. It is this basic classification from over 2,000 years ago that led to the magnitude scale that we still use today! | <urn:uuid:8eef1c84-d41d-411a-8aac-3ec9942cf9f6> | 4.09375 | 216 | Knowledge Article | Science & Tech. | 46.545957 |
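The modern, quantitative version of the scale (Pogson's relation, which the post does not state explicitly) defines a 5-magnitude difference as exactly a factor of 100 in brightness. A small sketch:

```python
# Pogson's relation: a 5-magnitude difference is defined as exactly a
# factor of 100 in brightness, so each magnitude is a factor of ~2.512.
def brightness_ratio(m_faint, m_bright):
    """How many times brighter the m_bright object is than the m_faint one."""
    return 10 ** (0.4 * (m_faint - m_bright))

# Sun (-26.74) versus the faintest naked-eye star (+6):
ratio = brightness_ratio(6, -26.74)
print(f"{ratio:.3g}")   # ~1.25e13: about twelve trillion times brighter
```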
USGS/Cascades Volcano Observatory, Vancouver, Washington
RSAM - Real-Time Seismic-Amplitude Measurement System
SSAM - Seismic Spectral-Amplitude Measurement System
Ewert, J.W.; Murray, T.L.; Lockhart, A.B.; and Miller, C.D., 1993,
Preventing Volcanic Catastrophe:
The U.S. International Volcano Disaster Assistance Program:
Earthquakes and Volcanoes, vol.24, no.6
Two new systems, the Real-time Seismic-Amplitude Measurement (RSAM) and the
Seismic Spectral-Amplitude Measurement (SSAM), have been developed by the USGS to
summarize seismic activity during volcanic crises. These techniques for characterizing a
volcano's changing seismicity in real time (as it is occurring) rely on the amplitudes and
frequencies of seismic signals rather than on the locations and magnitudes of the earthquakes.
During a volcanic crisis, seismicity commonly reaches a level at which individual seismic
events are difficult to distinguish. Analog seismic records (seismograms) provide some
information, but rapid quantitative analysis is not always possible without substantially
disturbing the continuity of recording. Although several real-time earthquake-detection and
recording systems exist, most fail to provide quantitative information during periods of intense
seismicity, which is a common situation before a volcanic eruption. Yet it is precisely during
such periods that the need for timely quantitative seismic information becomes most critical.
To fill this need a simple and inexpensive real-time seismic-amplitude measurement system
(RSAM) was developed.
The RSAM computes and stores the average amplitude of ground shaking caused by
earthquakes and volcanic tremor over 10-minute intervals. Increases in tremor amplitude or
the rate of occurrence and size of earthquakes cause the RSAM values to increase. Rather
than focusing on individual events, RSAM sums up the signals from all events during 10-
minute intervals to provide a simplified but still very useful measure of the overall level of
seismic activity (Figure 9). This information is easy to plot and convey to public officials.
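The averaging step is simple enough to sketch. This is a schematic reconstruction of the idea, not the USGS code; the 50 samples/second rate and 10-minute window come from figures quoted elsewhere in the article.

```python
# Schematic of the RSAM averaging idea -- not the USGS implementation.
# Average the absolute seismic amplitude over fixed windows instead of
# analyzing individual events.
def rsam(samples, sample_rate=50, window_minutes=10):
    """Return one mean-absolute-amplitude value per window."""
    n = sample_rate * 60 * window_minutes
    return [sum(abs(s) for s in samples[i:i + n]) / len(samples[i:i + n])
            for i in range(0, len(samples), n)]

# Tiny demo: a quiet 10-minute window followed by a 'tremor' window
# produces two averages, the second ten times the first.
quiet  = [0.1, -0.1] * (50 * 60 * 5)
tremor = [1.0, -1.0] * (50 * 60 * 5)
print(rsam(quiet + tremor))      # roughly [0.1, 1.0]
```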
Figure 9 --
Real-Time Seismic-Amplitude Measurement (RSAM) plot. Comparison of RSAM
data (top) and seismograms (bottom) shows how RSAM reduces complex seismic data to a
simple line graph that correlates with ground-shaking energy. Eruptions (heavy dark lines)
from Mount Redoubt occurred at 09:47 am and 10:15 am on the 14th and 15th of December.
The Seismic Spectral-Amplitude Measurement (SSAM) system takes this approach one
step further by computing in real time the average amplitude of the seismic signals in specific
frequency bands (Figure 10). This permits seismologists to evaluate the nature of seismicity
at a volcano and recognize subtle shifts in frequency that are related to the changing dynamics of the volcano.
Figure 10 -- Seismic Spectral-Amplitude Measurement (SSAM) plot (Mount Pinatubo, the
Philippines, June 15, 1991). This plot shows the average relative seismic amplitude in
specific frequency bands over 15 minute intervals. This type of seismic data is available in
real time, and permits seismologists to detect and evaluate a change in the type of earthquake
activity occurring beneath an active, restless volcano. In the figure, the time scale refers to
Greenwich mean time (G.m.t.). Brief episodes of intense seismicity in the 0.5-1.5 Hz
frequency band between approximately 0200 and 0530 were associated with explosive
eruptions. Intense tremor during the first part of the climactic eruption began at about 0540
and gave way after approximately 3 hours, as the eruption waned, to higher-frequency
seismicity related to structural readjustments of the volcano. Data gaps result from loss of
power to the system during the evacuation of Clark Air Base.
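The band-averaging idea behind SSAM can be sketched with a naive discrete Fourier transform (the real system's band edges and implementation are not described here, and a production system would use an FFT):

```python
import cmath
import math

# Sketch of SSAM's band averaging: mean spectral amplitude within fixed
# frequency bands for one time window. Naive DFT for clarity; the band
# edges here are illustrative, not the actual SSAM configuration.
def band_amplitudes(samples, sample_rate, bands):
    n = len(samples)
    spec = [abs(sum(s * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, s in enumerate(samples))) / n
            for k in range(n // 2)]
    freqs = [k * sample_rate / n for k in range(n // 2)]
    return {band: sum(a for f, a in zip(freqs, spec) if band[0] <= f < band[1])
                  / max(1, sum(1 for f in freqs if band[0] <= f < band[1]))
            for band in bands}

# A 1 Hz 'tremor' sampled at 50 samples/s shows up in the 0.5-1.5 Hz band:
sig = [math.sin(2 * math.pi * 1.0 * t / 50) for t in range(500)]
out = band_amplitudes(sig, 50, [(0.5, 1.5), (1.5, 5.0)])
print({band: round(v, 3) for band, v in out.items()})
```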
Myers and Theisen, 1994,
Volcanic Event Notification at Mount St. Helens:
IN: Casadevall, (ed.), 1994,
Volcanic Ash and Aviation Safety: Proceedings of the First International
Symposium on Volcanic Ash and Aviation Safety:
USGS Bulletin 2047, 250p.
The delay in CVO's learning about the January 6, 1990, event, coupled with
increased concerns about the hazards of volcanic ash triggered by the Boeing 747
incident in Alaska on December 15, 1989 (Brantley, 1990), prompted CVO to
develop a seismic-alarm system that is activated by small, as well as large,
volcanic events. CVO also made a few adjustments in the notification and
call-down procedures to improve communication of hazards information during such events.
Because seismicity is one of the main tools used to monitor volcanoes, CVO and
UW maintain a network of 18 seismic stations within 16 kilometers of Mount St.
Helens, including three stations in the crater. These stations provide a
detailed record of seismic activity at Mount St. Helens, including earthquakes,
tremor, rockfalls, explosions, and mudflows (Jonientz-Trisler and others, 1994).
Most of these seismic events, including many of the small ash-producing
explosions, are too small to record on the State-wide network at UW. However,
the events cause significant local ground motions that are detected on the Mount
St. Helens network by the CVO
real-time seismic-amplitude monitoring system
as peaks in the time-averaged seismic amplitude (Endo and Murray, 1991).
An RSAM-based seismic-alarm system was developed and installed for testing 2
days after the January 6, 1990, event and was fully functional by the end of
February 1990. With the RSAM system, a computer program compares the amplitude
of a station's average seismic signal during a 1-minute interval with
empirically determined threshold values. If thresholds are exceeded during the
same 1-minute interval at several (usually three) of the stations in the crater
and on the volcano flanks, an RSAM alert is generated. The computer is set to
automatically dial the duty scientist's 24-hour beeper and transmit a number
code indicating an RSAM alert. The computer redials the beeper every time a new
1-minute RSAM alert is generated.
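The alert rule just described can be sketched as a simple vote across stations. The station names and threshold numbers below are invented for illustration; only the logic (several stations exceeding their empirical thresholds in the same 1-minute interval) comes from the text.

```python
# Sketch of the alarm rule described in the text: an alert fires when the
# 1-minute RSAM value exceeds its empirically set threshold at several
# stations in the SAME interval. Station names and threshold numbers are
# invented for illustration.
THRESHOLDS = {"crater1": 40, "crater2": 40, "flank1": 25}

def rsam_alert(minute_averages, thresholds=THRESHOLDS, min_stations=3):
    """minute_averages: {station: 1-minute RSAM value}. True -> page the duty scientist."""
    exceeded = [s for s, v in minute_averages.items()
                if v > thresholds.get(s, float("inf"))]
    return len(exceeded) >= min_stations

print(rsam_alert({"crater1": 55, "crater2": 48, "flank1": 30}))  # True: three stations exceed
print(rsam_alert({"crater1": 55, "crater2": 10, "flank1": 30}))  # False: only two exceed
```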
RSAM alarms that have been triggered at Mount St. Helens
between March 1, 1990, and September 20, 1991
Type of event:
- Explosion-like seismic events (four of which had confirmed ash plumes)
- Rockfalls (some with dust plumes)
- Mount St. Helens earthquakes
- Telemetry problems
Many types of events, including explosions, rockfalls, earthquakes, and
telemetry problems, can generate alerts. Because an alert does not indicate the
nature of the event, the duty scientist must examine the seismic
signature of the event recorded on the seismographs at CVO to determine the
basis for the alert. Explosion signals can usually be identified by careful
evaluation of the signal character (Jonientz-Trisler and others, 1994). Once an
alert is received, the speed with which notification is issued will depend on
the time it takes for the duty scientist to reach CVO and the scientist's skill
at recognizing explosion signals.
To date, all known ash-producing explosions since March 1, 1990, have generated
alerts. However, failure of any component (key seismic stations, computer,
computer programs, phone system, beeper, or beeper-pager system) would prevent
an alert from getting through. As a precaution, a daily test alert is sent
through the system to the beeper.
Murray and Endo, 1992,
A Real-Time Seismic-Amplitude Measurement System (RSAM)
IN: Ewert and Swanson, (eds.), 1992, Monitoring
Volcanoes: Techniques and Strategies Used by the Staff of the
Cascades Volcano Observatory, 1980-1990: USGS Bulletin 1966,
Although several real-time detection and recorder systems exist, few address the
problem of continuously measuring the amplitude of seismic signals during
volcano-crisis conditions, when individual events are difficult to recognize.
We developed a
real-time seismic-amplitude measurement system (RSAM) that uses
an inexpensive eight-bit analog-to-digital converter controlled by a laptop
computer to provide 1-minute-averaged, absolute-amplitude information for eight
seismic stations near Mount St. Helens. The absolute voltage level for each
station is digitized at 50 samples/second, averaged, and immediately transmitted
to a host computer for analysis. The RSAM provides a convenient-to-access,
continuous time history of seismic activity at the volcano. RSAM systems
calculating 10-minute amplitude averages have been installed at the Cascades,
Alaska, and Hawaiian Volcanoes Observatories. The RSAM has been a useful tool
in predicting eruptive activity at Mount St. Helens and Redoubt Volcano, Alaska.
If you have questions or comments please contact:
08/21/00, Lyn Topinka | <urn:uuid:ad621322-8668-4e68-9477-c6704a8434e4> | 3.375 | 2,054 | Knowledge Article | Science & Tech. | 33.842113 |
Liquid Fiber Optics by Benjamin Freeberg
Honorable - Contrived Category
School: The Wheatley School
Teacher: Mr. Adam Plana
This photo exemplifies total internal reflection. Water was poured into a bucket with a hole in one side. A laser beam was then aimed through that hole, where the water poured out. Since the index of refraction of air is 1.00 and water is 1.33, the laser beam traveled from a high to low index of refraction. The reason that the beam of light was reflected within the stream was because the incident angle of the light beam was greater than the critical angle, which resulted in total internal reflection. | <urn:uuid:666da5b5-42d4-4b5b-8e0f-4b8a4e0262d8> | 2.9375 | 137 | Knowledge Article | Science & Tech. | 57.179937 |
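The critical angle itself is easy to compute from Snell's law, using the indices quoted above:

```python
from math import asin, degrees

# Critical angle for the water-to-air interface in the photo: total
# internal reflection occurs once the incidence angle exceeds
# theta_c = arcsin(n_air / n_water).
n_water, n_air = 1.33, 1.00
theta_c = degrees(asin(n_air / n_water))
print(f"critical angle ~= {theta_c:.1f} degrees")   # ~48.8 degrees
```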
Science Fair Project Encyclopedia
In astronomy, the termination shock is theorised to be a boundary marking one of the outer limits of the Sun's influence. It is where the bubble of solar wind particles slows to subsonic speed and heats up due to collisions with the galactic interstellar medium. It is believed to be about 100 Astronomical Units from the Sun.
The termination shock boundary fluctuates in its distance from the Sun as a result of fluctuations in solar activity, i.e. variations in the ejection of gas and dust from the Sun.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
© Frans Lanting / Minden Pictures
As the human population increases, so does our need for materials and space. Non-human primates are just one of the many creatures that are threatened by human sprawl and resource exploitation. The following are examples of a few typical ways humans are causing the loss of crucial habitat for apes, monkeys and prosimians.
Approximately 26 percent of the world's land area – including one-third of tropical and temperate forests, and a quarter of natural grasslands – has been converted for the use of agriculture and livestock.
Because primates are primarily found in tropical regions of developing nations with vast forested areas, the impact from logging on these species is high.
With roads come direct loss of habitat, erosion, pollution – all major threats to primates.
Development projects, such as mines and dams, can pollute, damage and destroy significant amounts of primate habitat.
Fires can be detrimental to primates by posing a direct threat to individuals, destroying vast areas of habitat and potentially leaving orphaned babies and adolescents.
READ MORE: Primates are also threatened by hunting, disease and capture. | <urn:uuid:ab85bdf3-a552-49c2-af44-aed585c07574> | 3.65625 | 232 | Knowledge Article | Science & Tech. | 25.941313 |
So, what is this chemical? It’s a liquid found in almost every food item available at your local supermarket, is found in nearly every stream, lake, ocean and reservoir on the planet, and a large quantity is currently in your bloodstream. It is also eroding many of the world’s natural landscapes, is the main component of acid rain, is responsible for millions of death every year and can break most electrical equipment upon contact.
If you haven’t guessed yet, dihydrogen monoxide is, in fact, water. Because the term isn’t used scientifically, most people have never heard of it, and assume it’s dangerous.
Pranksters have been using this to their advantage for a while now, and the prank has nearly backfired severely a few times. As mentioned in the title, a 14-year-old from Idaho won his school’s science fair by reporting on the dangers of this chemical to ninth graders and asking for their opinion on whether it should be banned.
Of the fifty asked, only ONE student recognized the substance was just water. The project, titled ‘How gullible are we?’, won the science fair and became an internet legend. School students aren’t the only ones to fall for this. One suburb in California came extremely close to BANNING it! It wasn’t until after a vote had been arranged that the council realized they had been tricked into believing the hoax.
Bringing Life to Mars; The Future of Space Exploration; Scientific American Presents; by McKay; 6 Page(s)
Four billion years ago Mars was a warm and wet planet, possibly teeming with life. Spacecraft orbiting Mars have returned images of canyons and flood valleys – features that suggest that liquid water once flowed on the planet's surface. Today, however, Mars is a cold, dry, desertlike world with a thin atmosphere. In the absence of liquid water – the quintessential ingredient for life – no known organism could survive on the Red Planet.
More than 20 years ago the Mariner and Viking missions failed to find evidence that life exists on Mars's surface, although all the chemical elements needed for life were present. That result inspired biologists Maurice Averner and Robert D. MacElroy of the National Aeronautics and Space Administration Ames Research Center to consider seriously whether Mars's environment could be made hospitable to colonization by Earth-based life-forms. Since then, several scientists, using climate models and ecological theory, have concluded that the answer is probably yes: With today's technology, we could transform the climate on the planet Mars, making it suitable once more for life. Such an experiment would allow us to examine, on a grand scale, how biospheres grow and evolve. And it would give us the opportunity to spread and study life beyond Earth.
IN TRYING TO DEVELOP a unified theory, Einstein worked closely with Peter Bergmann (left) and Valentine Bargmann (right), two young German-born physicists who also had fled the Nazis and who went on to become renowned scientists in their own right. Bargmann's wife, Sonja, was the one who translated Einstein's Scientific American article (and many other manuscripts) into English. This picture was taken in 1940. Image: LUCIEN AIGNER Corbis
When Albert Einstein started his efforts to develop a unified theory of physics in the early 1920s, it was such a hopeful enterprise. Existing theories, including both relativity and the emerging quantum mechanics, raised as many questions as they answered, so most physicists agreed on the need for a grander framework. Ideas poured forth from figures such as Hermann Weyl, Arthur Stanley Eddington and Theodor Kaluza. Although these pioneering efforts fell short of achieving unification, they introduced theorists to such fruitful concepts as gauge symmetry and extra dimensions.
Thirty years later Einstein stood alone. He had published and retracted a string of unified theories. Other scientists saw his approach as a dead end--an assessment that has been borne out by the progress of physics since his death in 1955. Whereas Einstein sought to base a unified theory on general relativity, quantum mechanics has proved the best starting point.
This article was originally published with the title Forces of the World, Unite. | <urn:uuid:5098fdaa-1603-4046-8ceb-b108ea279b60> | 3.765625 | 292 | Truncated | Science & Tech. | 39.349401 |
This drawing shows the 4 Galilean satellites and the path of the Galileo spacecraft.
This drawing shows the positions of the four Galilean satellites relative to each other, and the path of the Galileo spacecraft on one of its flyby's past Jupiter.
Of the four Galilean satellites, Io is the inside moon, Europa the next closest to Jupiter, Ganymede the third furthest out, followed by Callisto as the furthest from Jupiter.
Although the Red Bull Stratos jump is over, the physics goes on. In case you missed it (I don’t know how), let me recap:
Felix Baumgartner rides a balloon up to 128,000 feet altitude – of course he has to wear a space suit.
He jumps (falls) out.
During the fall, he goes faster than the speed of sound.
Although he made it, there was a tense part of the jump. While still high in the atmosphere, Felix began to spin. Why did he spin? Well, although the density of air is low, there is an interaction between Felix and the air. If he is not completely symmetrical during his fall, the air can exert a torque on him and induce a spin. If air makes him spin, can’t air make him stop spinning? Yes. Of course Felix could change his body position to compensate for the spin. However, this is easier said than done. The problem is that with the density of air so low, corrections to his body position don’t always have the effects you would assume.
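A rough way to see why body-position corrections do so little up there is the drag force F = ½ρACv². The density, area, and drag-coefficient values below are approximate stand-ins, not measured numbers:

```python
# Drag force F = 0.5 * rho * A * C * v^2, comparing sea level with the
# jump altitude. rho at ~39 km is an approximate standard-atmosphere
# value, and the area A and drag coefficient C are rough stand-ins for
# a falling person, not measured numbers.
rho_sea, rho_39km = 1.225, 0.004   # kg/m^3
A, C, v = 0.7, 1.0, 300.0          # m^2, dimensionless, m/s
for rho in (rho_sea, rho_39km):
    F = 0.5 * rho * A * C * v**2
    print(f"rho = {rho} kg/m^3 -> F ~= {F:.0f} N")
```

The same body motion produces roughly 300 times less aerodynamic force at altitude, which is why small corrections barely register.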
This looks dangerous, but I wouldn’t call it the wall of DEATH. The basic idea is that a vehicle can ride around the inside vertical wall of a cylinder.
Is it real? Yes. Is it possible? Did you not just read the last answer? It has to be possible if it is real, right? Fine. But HOW? How fast would you have to go? What kind of acceleration would you experience? How many times would I vomit if I did this?
Apparently, this thing is pretty old. The Demon Drome Wall of Death has some nice pictures and history of the wall. Strange that it doesn’t list the diameter. It only lists the height of 20 feet. The Wikipedia page says that these shows usually have a cylinder ranging in diameter from 6.1 meters to 11 meters. Fine, I will do my calculations for the whole range of Death Walls.
What about the mass of the car? Well, I know this Mazda2 isn’t your basic model. However, if I went with the listed value, it would be around 1000 kg. I can’t get the speed of the car without knowing the radius of the cylinder. But I can get the angular speed. Here is a plot of the car as it goes around the wall.
This plot is essentially useless except to get the time it takes to go around the wall once. This is right about 2 seconds, making the angular velocity about π rad/s (3.14 rad/s). That is all I can get from the video.
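With the period from the video and the Wikipedia range of wall sizes, the linear speed follows from v = ωr (the radii below are the quoted 6.1 m and 11 m diameters, halved):

```python
from math import pi

# Angular speed from the measured lap time, then linear speed for the
# quoted range of Wall of Death radii (6.1 m and 11 m diameters, halved).
T = 2.0                  # seconds per lap, read off the video
omega = 2 * pi / T       # ~3.14 rad/s
for r in (3.05, 5.5):
    v = omega * r
    print(f"r = {r} m: v = {v:.1f} m/s ({v * 3.6:.0f} km/h)")
```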
Probably the first question that people ask is: how does the car stay on the wall? Here is a diagram showing the forces on the car as it goes around.
I understand that a diagram like this can be difficult to come up with. But here are some tips: start with the forces from things that don’t have to touch the object you are interested in. In this case, that is only the gravitational force from the Earth. All the other forces are from things that are touching the car and there is only one such object: the wall. Surfaces can push in two ways. They can push parallel to the surface (this is friction) or they can push perpendicular to the surface – we call this the “normal” force (using the geometry definition meaning perpendicular).
Something has to be pushing up on the car. It can only be the wall since the wall is the only thing touching it and it can only be friction since this would be in the direction parallel to the wall. However, the typical model for the frictional force says that it has a magnitude of:

F_friction ≤ μs F_N
Where μs is called the coefficient of static friction and depends on the two materials (tire and wood). It is static friction (and not kinetic) because the two surfaces are not sliding relative to each other. Oh, and the less than sign is there because the frictional force pushes whatever it can to make the two surfaces NOT slide. But really, the important part is the FN – the normal force. The harder the two surfaces are pushed together the greater the frictional force. So, the wall has to push on the car perpendicular to the surface of the wall.
But wait. What is pushing the car against the wall? Nothing. Yes, nothing. What else could it be? Nothing else is touching the car, right? There are no other long range forces that you could put there. There isn’t an electrostatic or magnetic force, right? So, there is nothing pushing on the car towards the wall. However, if there was only the force from the wall pushing to the right, wouldn’t the car accelerate to the right? Yes. It does.
Remember that acceleration is a change in velocity. If a car is moving in a circle at a constant speed, the velocity is changing since the direction of motion of the car is changing. This is called centripetal acceleration. The magnitude of this acceleration is:

a = v^2 / r = ω^2 r
Remember, the direction of this acceleration is towards the center of the circle. It has to be since the force is in that direction.
Let me use this centripetal acceleration to determine the minimum coefficient of static friction for this car to stay on the wall. I already have the force diagram above. From this I can say that the net vertical force (I will call this the y-direction) is zero since it doesn’t accelerate in this direction. For the x-direction, the net force is not zero. This would mean:

F_friction - m g = 0 (y-direction) and F_N = m ω^2 r (x-direction)
Now, for the frictional model I will use this:

F_friction ≤ μs F_N
Since I am looking for the minimum value for the coefficient of friction, the less than or equals becomes just an “equals”. Now, I can substitute this in for the frictional force and solve for μ:

μs = g / (ω^2 r)
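Plugging in ω = π rad/s from the video and the two wall radii gives the minimum coefficient of friction (g = 9.8 m/s² assumed):

```python
from math import pi

# Minimum coefficient of static friction from the derivation above:
# friction balances weight (m*g) and the wall's normal force supplies
# the centripetal force (m * omega^2 * r), so mu_min = g / (omega^2 * r).
# The mass cancels; omega = pi rad/s from the video, g = 9.8 m/s^2.
g, omega = 9.8, pi
for r in (3.05, 5.5):
    mu_min = g / (omega**2 * r)
    print(f"r = {r} m: minimum mu = {mu_min:.2f}")
```

Both values (roughly 0.33 and 0.18) sit comfortably below typical rubber-on-wood friction coefficients, which is why the stunt works at all.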
And there’s parliament. Ok – sorry, I had to make a “Tom (Swans on Tea)” title for this one. Tom, forgive me.
Here are two great circular motion videos. First, this one is from Dale Basler. He made himself a fine little floater-type accelerometer. Better than just make it, he made a video of the accelerometer in his car going around a round about. Check it out.
Listen up video makers – this is real important. If you are going to go through this much trouble to possibly injure yourselves, why not get the video camera shot from directly above? This would make it MUCH better for video analysis. Oh, and make sure that you include a meter stick or something so I can scale the video (or since you seem European, a metre stick would do).
At least the one guy was trying to set a good example by putting out the cigarette. Don’t you know those things are killers?
There are several free iPhone-iPod Touch apps that let you look at the acceleration of the device using the built in accelerometer. I was planning on reviewing some of these free apps, but I didn’t. When I started playing around with them, it was clear that I needed some way to make a constant acceleration. There are two simple ways to do this – drop it, or spin it in a circle. I decided to go with the circular motion option because I like my iPod and because Steve Jobs told me to.
While playing with this, I realized that the acceleration depends on the distance of the sensor from the center of the circle of rotation. Where is this sensor? Don’t tell me, I want to figure it out experimentally.
I am going to spin my iPod in a circle with the “bottom” (you know, where the home button is) towards the center of the circle. I will record the acceleration. I can find the distance to the sensor if I know the angular speed. Next, I will turn the iPod sideways and do it again.
I am not going to go over all the circular motion stuff – if you want more details, check out this old post on centripetal acceleration. Basically, if something is moving in a circle, it is accelerating because it is changing velocity (even if just the change in direction of that velocity). The direction of this acceleration is towards the center of the circle and it has a magnitude of:
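For an object moving in a circle of radius r at angular velocity ω, that magnitude is:

```latex
a = \omega^2 r
```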
Note – omega is the angular velocity in radians/second (just to be clear).
You will not believe how many different ways I tried to rotate this iPod. I wanted to do it with stuff you could maybe find at home. My first failed attempt was to build a small Lego centrifuge (clearly, this guy knows how to build a homemade centrifuge). I finally settled on this awesome PASCO rotating platform. It is great because you can stand or sit on it if you don’t mind getting sick from spinning. And then, how do you get it to rotate at a constant speed? I tried some complicated rotation device, but it turns out that if you just give it a good spin, it won’t slow down that much.
I found the iPhone app AccelGraph. Not a perfect app, but I think it is the best free one that could do the job. Originally, I had planned to use the app AccelMeter because it has a great visual display. I was going to jailbreak my iPod so I could use VNC and record the screen. Failed. Anyway, using VNC makes it easier to start and stop the acceleration recording (especially since I mount the iPod with the glass down).
After making the video, I used Tracker Video Analysis to determine the angular velocity of the iPod. Note – the autotracking feature in Tracker is awesome.
How fast was the first set up spinning? Here is a shot from the video. A couple of notes. I put the laser on the platform so that I could use that to measure the rotation rate if I needed to (turns out, I didn’t need it). The CD was taped there for reference of size, again – I didn’t need this. In this configuration, AccelGraph says the acceleration is in the negative y direction. Oh, I put a marker on the back of the iPod so that I could have a reference point. In this case the marker is centered on the “dot” over the “i” in the label that says “iPod”.
After getting x, y, and time data from Tracker Video, I wanted to plot position vs. time and get a function to fit so that I could get the angular frequency. For some reason, Tracker Video was not quite fitting a sinusoidal function correctly. I used LoggerPro instead.
Both the x- and y-motion have an angular frequency of around 9.85 rad/sec. I am pretty happy with that. What about the data from AccelGraph? Here it is:
Notice there is some fluctuation in the data, even in the z-direction. I guess this could be because of “bumps” and stuff. Really, I want the x and the y accelerations (and the total magnitude in the x-y direction). Oops – I just realized that AccelGraph gives the acceleration in units of “g’s”.
The location of the reference point is at r = 0.174 meters. From the acceleration above, the sensor should be at:
Ok – now I will repeat the above for the iPod turned 90 degrees. Using the same methods, I get an angular velocity of about 11.2 rad/sec. Here is the data from the video:
The data from AccelGraph gives:
This gives an “r” of:
So, I know the distance from the center of the rotation to the sensor for the two orientations. Using some uber-drawing skills, I draw a circle with the same radius as the sensor for the two orientations and then put the two pictures on top of each other. Notice the marking circle on the back of the iPod. This was used to line up the two images.
The yellow box is the location where the two radii intersect. This could be the location of the acceleration sensor. Oh, I know. There are some problems. The biggest problem I had was measuring from the center of the circle of rotation. I kind of guessed. Perhaps I should have put a better marker on the rotation device indicating the center. Oh well.
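For reference, the arithmetic behind both radius estimates is just r = a/ω² once the reading is converted from g's. A quick sketch (the acceleration inputs below are made-up placeholder values, since the actual AccelGraph readings aren't reproduced above; the angular velocities are the measured ones):

```python
# Locating the sensor: from a = omega^2 * r, the radius is r = a / omega^2.
G = 9.81  # m/s^2 per g

def sensor_radius(accel_gs, omega):
    """Distance from the rotation axis to the sensor, in meters."""
    return accel_gs * G / omega ** 2

r1 = sensor_radius(1.70, 9.85)   # orientation 1: placeholder reading, 9.85 rad/s
r2 = sensor_radius(2.20, 11.2)   # orientation 2: placeholder reading, 11.2 rad/s
print(round(r1, 3), round(r2, 3))
```

Each orientation pins the sensor to a circle of that radius about the rotation axis; overlaying the two circles is what locates the sensor on the device.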
Also, this would have been a great opportunity to show the error in the measurements and the propagation of this error into the location of the sensor. I did not do this – mainly because I don’t want this post to be too long. You can do that as a homework assignment.
The Tree of Life has three great branches. There are the eukaryotes, including all plants, animals, fungi, and single-celled creatures like malaria parasites and amoebae. There are the prokaryotes, or bacteria. And there are the Archaea, a group of single-celled organisms divergent from both the other groups.
Life on this planet is overwhelmingly single-celled. A cup of sea water or soil teems with thousands of species, many yet to be discovered. But for the most part, microbiologists only know about those organisms that they can grow in the lab.
Now Jonathan Eisen at the UC Davis Genome Center and colleagues may have discovered signs of a fourth, novel branch of life — or they may have found a lot of weird viruses. Eisen says that he really doesn’t know the answer, but he’s putting the data out there (a paper appeared last week in the journal Public Library of Science (PLoS) One).
On his blog, Eisen tells the story behind this research, which began some years ago when he was working at The Institute for Genome Research with J. Craig Venter on analyzing seawater samples from the Sargasso Sea.
Eisen and colleagues had pioneered a new approach to finding microbes, by simply extracting DNA from environmental samples, sequencing short pieces of it, then reassembling it to find new species. Eisen calls this “Environmental Shotgun Sequencing.”
When they did the original Sargasso Sea sequencing, Eisen and colleagues came up with some sequences that did not fit properly. Now, after going back and looking at new sequences, they were able to fit them into a robust phylogenetic tree, with some new branches.
We then propose and discuss four potential mechanisms that could lead to the existence of such evolutionarily novel sequences. The two we consider most likely are the following:
(1) The sequences could be from novel viruses
(2) The sequences could be from a fourth major branch on the tree of life
Unfortunately, we do not actually know what is the source of these sequences. So we cannot determine which of the theories is correct. Obviously if there is a novel lineage of cellular organisms out there, well, that would be cool. But we have no evidence right now if that is what is going on. Personally, I think it is most likely that these novel sequences are from weird viruses. But as far as we can tell, they truly could be from a fourth major branch of cellular organisms and thus even though we did not have the story completely pinned down, we decided to finally write up the paper to get other people to think about this issue.
Over at the Loom, Carl Zimmer does an excellent job of explaining the background to this, including the existence of “giant viruses” which might be candidates for this new branch of life.
More coverage: from New Scientist, “Biology’s Dark Matter.”
A winning proposal for the Innovative Research Program, 2009:
Can sea-ice extent from the 1960s be determined from reprocessed Nimbus data?
Investigators: Walter N. Meier (CIRES/NSIDC), Dennis Wingo (NASA Ames Research Center), Mary J. Brodzik (CIRES/NSIDC)
Old satellite data can now be reprocessed using modern algorithms and processing capacity that were unavailable in the 1960s. This was demonstrated when the Lunar Orbiter Image Recovery Project (LOIRP) at the Ames Research Center reprocessed images taken by Lunar Orbiter 1 on August 23rd, 1966 (figure 1). This image of the Earth was acquired while the spacecraft was orbiting the moon.
Figure 1. Comparison of original and reprocessed data from Lunar Orbiter I (1966). The original data were acquired at a distance of 385,000 km. The resolution of the reprocessed data is about 1 km. (NASA, 2008)
The Nimbus I, II & III data are of a similar vintage to the LOIRP data. The Nimbus satellites carried a High Resolution Infrared Radiometer (HRIR), with a resolution of 8 km. They also carried a medium resolution infrared radiometer (MRIR) and an Advanced Vidcon Camera System (AVCS). A fortunate coincidence with these early Nimbus satellites is that they likely captured the annual Arctic sea ice minimum (occurring each September) even though they collected data for relatively short durations. Nimbus I collected data from 28 August – 22 September 1964. Nimbus II collected data from 15 May 1966 – 18 January 1969. Nimbus III collected data from 14 April 1969 – 22 January 1972. Data coverage was global with twice daily acquisitions (day & night). Other related research includes an image mosaic of the Antarctic coastlines using Argon satellite photography from 1963 (Kim et al., 2007). However, this project did not involve reprocessing of original data; it used scanning versions of original Argon photographs.
In our proposed study, we will address these questions:
o Can the Nimbus data be reprocessed to enable new science?
o Do the level-0 data still exist, or must we use later versions from NASA archives?
o From the limited bands available can we determine average sea ice edges?
o Can we differentiate clouds from sea ice by using a time series?
We propose using the LOIRP data reprocessing methods to create a time series of a few HRIR images at a fixed point in the Arctic where we can identify a sea ice margin. We will make use of various algorithms to reprocess these HRIR images, depending on the version of the data available. We will search appropriate NASA archives for the original 2-inch Ampex tapes that contain the raw images and calibrations, as this form of the data offers the highest potential for success. A primary objective of the reprocessing is to remove the periodic jitter that can be seen in Figure 2. The jitter was caused by errors in finding the horizon as the instrument scanned the surface. After reprocessing, we expect to get a few clean (enhanced) HRIR images and establish if it is possible to determine the sea ice extent. We will mask clouds using manual methods (e.g., since clouds tend to move faster than sea ice they will appear distinct from ice extent in multiple images over time). After demonstrating proof-of-concept in a select Arctic region, we will request funds from NSF and/or NASA to expand the study to determine the sea ice extent for the Nimbus campaigns (1964, 1966 and 1969). We also expect to extend our methods to later instruments to characterize minimum Arctic (and Antarctic maximum) sea ice extent for the period 1964-1978.
Figure 2. HRIR image of Lake Michigan. Image acquired October 6th, 1966. The jitter is readily apparent in the resultant surface temperature plots (Noble and Wilkerson, 1970).
Due to limitations of historical program funds and processing systems, there is a wealth of early Earth-observing satellite data that were never fully explored. There is a disappearing window of opportunity to recover these data, because only one tape drive remains in the world that can read the Ampex 2-inch media. Additionally, the original researchers are now mostly in their late 70s and 80s, and contact with them is critical to answering some of the necessary instrumentation questions. If this work is not done now, we will have forever lost this opportunity. Since the HRIR data coverage is global, the reprocessing techniques could make new 1960s-era data available to the entire Earth science community. The techniques we use would bring the quality of archaic data from other Earth-observing satellites (not limited to Nimbus instruments) up to contemporary standards, reinvigorating the data sets for current applications.
Why It Is Innovative
Our data mining and extraction methods will deliver new data products that have never been available, converting something of only marginal value into high-quality information. We can potentially extend the sea ice record several years back before the current standard satellite sea ice record that began in 1978. This is innovative because we are applying 21st-century processing and data handling capacity to answer a question of critical importance to modern climate change research. The images we will be recovering will be significantly better resolution than anyone has ever seen from these sensors. Finally, this project is truly collaborative: NSIDC/CIRES is the catalyst, Ames Research Center developed the algorithms, and CU students will be involved in the time-series analysis to determine sea ice extent.
Expected Outcome and Impact
This project will provide an unprecedented improvement and assessment of a unique set of historical imagery. It will recover valuable data that is in danger of being lost. The recovered data will potentially extend our record of sea ice minimum extent, a key climate indicator, more than a decade longer than currently exists.
References
1. NASA, 2004. Nimbus Program History, NASA Goddard Space Flight Center.
2. NASA, 2008. Lunar Orbiter Image Recovery Project (LOIRP) images, http://www.nasa.gov/topics/moonmars/features/LOIRP/.
3. Kim, K., K. C. Jezek, and H. Liu, 2007. Orthorectified image mosaic of Antarctica from 1963 Argon Satellite photography: image processing and glaciological applications, International Journal of Remote Sensing, 28(23), 5357-5373.
4. Noble, V. E. and J. C. Wilkerson, 1970. Airborne Temperature Surveys of Lake Michigan, October 1966 and 1967, Limnology and Oceanography, 15(2), 289-296.
The recent comprehensive review of Maccoa Duck Oxyura maccoa biology by Clark (1964) has provided the first detailed summary of the species' reproductive behaviour patterns and other aspects of breeding in this little-studied stiff-tail. It has been evident that the evolutionary relationships of the Maccoa Duck to the other southern hemisphere stiff-tails and the northern species of Oxyura are still uncertain at best, as evidenced by the varied taxonomic treatment that the Maccoa has received from Delacour and Mayr (1945), who regarded it as a race of O. australis, from Boetticher (1952), who considered it as a race of O. jamaicensis, and from Delacour (1959), who finally concluded that it represents a distinct species. There can be little doubt that the last approach is most realistic, but the question still remains as to which of the other species of Oxyura the Maccoa is most closely related. In 1961 I suggested that the Argentine Ruddy Duck O. vittata, the Australian Blue-billed Duck O. australis and the Maccoa Duck comprised an evolutionary group distinct from the other forms of Oxyura. Additional evidence supporting this view has since been summarized (Johnsgard 1967), when a comparison of male display patterns of the stiff-tails was undertaken. This survey was an admittedly preliminary one, since the author had never had an opportunity to study the displays of certain stiff-tails, including the Maccoa, and published descriptions were necessarily relied upon. Recently, unpublished notes, cine films, and other information on the Maccoa have been made available, and it has become increasingly apparent that the Maccoa is a species of unusual interest.
Gadagkar, Raghavendra and Kolatkar, Milind (1996) Evidence for Bird Mafia! Threat Pays. In: Resonance, 1 (5). pp. 82-84.
Birds are remarkable for their extraordinary efforts at nest building and brood care. Given that so many species of birds spend so much time and effort at these activities, there is plenty of room for some species to take it easy, lay their eggs in the nests of other species and hitch-hike on their hosts. The cuckoo that lays its eggs in the nests of a variety of host species is well known. Indeed, over 80 species, i.e., over 1% of bird species are known to be such obligate inter-specific brood parasites. These include two sub-families of cuckoos, two types of finches, the honey guides, the cowbirds and the black-headed duck. Because parasite species often use more than one host species, more than 1% of bird species act as hosts to brood parasites. Inter-specific brood parasitism has evolved independently at least seven times in birds and can have a significant effect on the populations of the host species and even lead to their extinction. Although hosts sometimes detect and eject alien eggs, their success in ridding their nests of parasite eggs is often very limited and that is why brood parasitism has survived as a way of life. One reason for such limited success of the hosts is the exquisite mimicry often exhibited by the parasites whose eggs are virtually indistinguishable from those of the host. What is perplexing however is that many parasite species lay eggs that look nothing like their host’s eggs and yet get away with it. Obviously hosts have not perfected the art of removing all or most of the alien eggs. But why should this be so?
Item Type: Journal Article
Additional Information: Copyright of this article belongs to the Indian Academy of Sciences.
Department/Centre: Division of Biological Sciences > Centre for Ecological Sciences
Most species of ground beetles are predators so they are considered beneficial. The bombardier beetle has a dark body with reddish yellow or orange head and legs. It inhabits all continents except Antarctica. They commonly live under shoreline debris. They are parasitoids of whirligig beetle pupae so they tend to live near bodies of water frequented by the whirligig beetle.
The bombardier beetle is only 4-15 mm in length—usually less than an inch long—but has the most marvelous defense mechanism. It draws two liquids from two different chambers in its abdominal area and mixes them in a third chamber, creating a chemical reaction. When threatened by a frog or spider or ant, it blasts the mixture out from its back end, and the resulting acid spray is 212 degrees F. The beetle can aim its two nozzles in many directions, and while the spray is not a serious threat to vertebrates such as humans, it can burn and stain your fingers and will scare off most of the beetle’s predators.
Scientists and photographers have gotten together and filmed the bug blasting away and its action can be seen on YouTube. How amazing.
For a brief instant, it appears, scientists at Brookhaven National Laboratory on Long Island recently discovered, a law of nature had been broken.
Action still resulted in an equal and opposite reaction, gravity kept the Earth circling the Sun, and conservation of energy remained intact. But for the tiniest fraction of a second at the Relativistic Heavy Ion Collider (RHIC), physicists created a symmetry-breaking bubble of space where parity no longer existed.
Parity was long thought to be a fundamental law of nature. It essentially states that the universe is neither right- nor left-handed — that the laws of physics remain unchanged when expressed in inverted coordinates. In the early 1950s it was found that the so-called weak force, which is responsible for nuclear radioactivity, breaks the parity law. However, the strong force, which holds together subatomic particles, was thought to adhere to the law of parity, at least under normal circumstances.
>>>>> "Bob" == Bob Kline <bkline@stripped> writes:
Bob> On Thu, 14 Oct 1999, MySQL Server wrote:
>> When using certain floating point values in a column of type "float"
>> the value displayed differs from the actual value. For example, a
>> column of type "float(9,7)" with value of "0.283" will be displayed
>> as "0.2830000", yet the actual value is less than "0.282999999" and
>> greater than "0.28299999". One would expect the value to be equal
>> to the one displayed in the table (or at least the value initially
>> put into the table). But this is not the case.
>> Below is a short series of SQL queries to run, to demonstrate the
>> above mentioned problem.
>> create table floatTest(test1 float(9,7), test2 float(10,8));
>> insert into floatTest values(0.283, 0.283);
>> select * from floatTest;
>> select * from floatTest where test1 = 0.283;
>> select * from floatTest where test2 = 0.283;
>> select * from floatTest where test1 > 0.28299999 and test1 < 0.282999999;
>> select * from floatTest where test2 > 0.28299999 and test2 < 0.282999999;
>> drop table floatTest;
Bob> FLOAT is used for approximate values. You need to use DECIMAL or
Bob> NUMERIC (which are different only in very subtle ways in ISO/ANSI SQL
Bob> and not at all different in MySQL) for fixed-precision values.
Bob> As a footnote, I believe the precision bug in the SQL engine itself has
Bob> been fixed in 3.23, but the online documentation still has it wrong.
Bob> I'd submit a patch for the docs, but the same passage has some funky
Bob> language about the FLOAT type, which may be what threw off the original
Bob> poster for this thread. I believe the standard allows a single number
Bob> in parentheses following the keyword FLOAT, specifying precision in
Bob> bits. The table above in section 7.2.2 appears to use that number to
Bob> specify size of value (not precision) in bytes. Section 7.2.5 talks
Bob> about FLOAT as if it took the same precision/scale specifiers as are
Bob> used by DECIMAL or NUMERIC. Furthermore, the engine appears to accept
Bob> this unorthodox syntax. So while I believe I could write a
Bob> documentation patch which describes what the software *should* do, it
Bob> obviously won't be appropriate to apply such a patch without first
Bob> making any necessary modifications to the software itself. Let me know
Bob> if you want a separate patch for the DECIMAL/NUMERIC precision
Bob> documentation, ignoring the last paragraph dealing with FLOATs.
Bob> Hope this helps.
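Bob's distinction between exact DECIMAL/NUMERIC values and approximate FLOAT values can be illustrated outside of MySQL with Python's decimal module (a sketch of the general principle, not MySQL's own code path):

```python
from decimal import Decimal

# Fixed-point decimal arithmetic is exact for decimal literals.
exact = Decimal("0.283")
assert exact * 1000 == Decimal("283")   # no rounding surprises

# A binary float cannot represent 0.283 exactly; Decimal(float) exposes
# the value that is actually stored.
stored = Decimal(0.283)
print(stored == exact)  # False
```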
Sorry, but FLOAT/DOUBLE are still approximate values. There isn't that much one can do about these without a lot of trouble, as this is how floating point values work on computers.
The difference between MySQL 3.22 and MySQL 3.23 is that 3.22 always rounds the value to the number of decimals, while MySQL 3.23 also supports true floating point values (without rounding) if one uses FLOAT(4) (= FLOAT) and FLOAT(8) (= DOUBLE).
(I couldn't figure out from C. J. Date's book exactly how he wanted FLOAT and DOUBLE to be declared; as all values are 'implementation defined' it sounded ok to just use 4 and 8 :)
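The specific numbers in the original report fall out of IEEE-754 single precision, which is what a 4-byte FLOAT stores. A sketch in Python (struct is only used here to emulate a 4-byte float; this is not MySQL code):

```python
import struct

# Round-trip 0.283 through a 4-byte IEEE-754 float, as a FLOAT column would.
stored = struct.unpack('f', struct.pack('f', 0.283))[0]
print(stored)                        # 0.28299999237060547
print(0.28299999 < stored < 0.283)   # True, exactly the reported behaviour
```

The nearest single-precision value to 0.283 is slightly below it, which is why the stored value tests greater than 0.28299999 but less than 0.283.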
Lessons in Nuclear Safety from Fukushima
Technology Review has a short article on what has been learned since the meltdowns last year – What We Learned About Nuclear Safety From Fukushima:
Reactors and radioactive materials at Fukushima Daiichi were destabilized by back-to-back beyond design-basis events. First was the magnitude 9.0 earthquake that felled the plant’s power lines, triggering diesel generators to maintain cooling of its reactor cores and spent fuel rods. Less than an hour later, the generators along with some of the plant’s last-resort battery power backup were gone, knocked out by a 14-meter tsunami wave that crested the plant’s seawall.
Human error and design limitations quickly compounded the impact of the loss of power. Operators mistakenly shut down battery-driven cooling on one reactor for three hours, for example. Within 24 hours of the tsunami, nuclear fuel in three reactors was melting down, and superheated fuel was generating hydrogen gas, whose ignition would blow open three reactor buildings in the days ahead, impeding response efforts and exposing elevated pools holding spent nuclear fuel.
So, that’s what we know happened. What’s surprising is that some of the obvious shortcomings of the plant’s design and operations weren’t recognized and dealt with well before the disaster.
What’s interesting and not surprising is that Fukushima is a textbook engineering failure, in that it wasn’t one flaw in design, execution, or operation that led to the meltdowns but a cascade of such failures, the absence of any one of which might have significantly limited the disaster or prevented it from happening altogether. Even with the beyond design-basis earthquake and the large tsunami following it, the plant might have remained under safe control had, for instance, the power lines not been knocked out for an extended period, or had the backup generators been out of reach of the tsunami.
The response from the US nuclear power industry has been to stage emergency equipment such as generators at strategically-located depots in anticipation of unanticipatable events. New nuclear power stations (yes, there are actually new ones under construction in the US) will use advanced passive safety features to safely shut down reactors in the event of an emergency and buy time for outside emergency response. Naturally such measures aren’t good enough for the Union of Concerned Scientists Anti-Anything-Nuclear Activists, cited later in the article, for whom no degree of risk can ever be small enough.
I have a question that came up in a discussion with friends. If I throw a ball straight up in an enclosed train car moving with constant velocity, I believe the basic physics books say it will land in the same spot. But will it really? I think I can say that the answer is "not in the real world".
Trivially, a train car is never enclosed. Fresh air is being allowed into the carriage or the passengers would all die. Thus there are currents of air that would affect the ball, agreed? If we remove the passengers and have a trusty robot (who does not need oxygen) throw the ball up in a carriage that really is completely air-tight, I'm still not sure it will land in the same spot. I would imagine that there must still be air circulation. The train had to start from a stop. It's true the floor and the roof will drag the air right at the boundary along with it, but just as an open convertible car does not drag all the air in the world with it, I assume that the air in the middle of the car will not be dragged along at the same speed. The air in the middle will remain stationary with respect to earth and pile up at the back of the car. Then it will be forced along. I further imagine that this "pile of air" will try to redistribute itself uniformly. Won't all this set up currents? Will the air come to be completely still in the reference frame of the car? [I'm guessing the answer is yes] How long would this take?
Bonus question: I believe if I'm sitting in a convertible car and throw a ball straight up it will land back in my hand as long as I don't throw it too far up. At some point, I'll throw it too high and will lose the ball out the back of the car. What's the relevant equation covering this in a car travelling at X miles per hour in still air? Put another way, I'm trying to get a feel for how extensive the "boundary" layer of air around the car is and how it dissipates with distance.
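There is no single clean equation for the bonus question, but the worst case (no boundary layer at all, ball fully exposed to the airstream) can be estimated numerically with quadratic drag. A rough sketch, where every parameter is an assumption, not a value from the question:

```python
import math

# A baseball-like ball thrown straight up from an open car moving through
# still air. In the car's frame the air streams backward at the car's speed,
# and quadratic drag pushes the ball toward the back of the car.
rho = 1.2                   # air density, kg/m^3
Cd = 0.47                   # drag coefficient for a sphere
m, radius = 0.145, 0.037    # ball mass (kg) and radius (m)
A = math.pi * radius ** 2   # cross-sectional area, m^2
k = 0.5 * rho * Cd * A / m  # drag acceleration = k * |v_rel| * v_rel

car_speed = 27.0            # m/s, about 60 mph
v_throw = 5.0               # m/s, initial upward speed

dt, t = 1e-3, 0.0
x = y = vx = 0.0            # position and horizontal velocity in the car's frame
vy = v_throw

while y >= 0.0:
    # Air velocity relative to the ball (air moves at -car_speed in this frame)
    rx, ry = -car_speed - vx, -vy
    s = math.hypot(rx, ry)
    ax, ay = k * s * rx, k * s * ry - 9.81
    vx, vy = vx + ax * dt, vy + ay * dt
    x, y = x + vx * dt, y + vy * dt
    t += dt

print(f"flight time ~{t:.2f} s, ball lands ~{-x:.2f} m behind the thrower")
```

Even a gentle toss drifts a couple of meters back in this no-shielding limit; the real answer depends on how much of the car's boundary layer protects the ball, which is exactly the part with no simple formula.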
I hope I've been clear enough. Thanks, Dave
"Doppler shift" refers to the stretching of waves (known as "redshifting") from a source moving away from the observer, and the compression of waves (called "blueshifting") from a source approaching the observer. Redshifted waves have a longer wavelength (or lower frequency) than they otherwise would, and blueshifted waves have a shorter wavelength (or higher frequency). This works with radio waves, light, and all other forms of electromagnetic radiation. It's also noticeable in sound waves-it causes a siren coming toward a listener to have a higher pitch than a siren moving away. Just as police equipped with radar guns use Doppler shift to determine the speed of a car, scientists can use the phenomenon to determine the speed at which a spacecraft or celestial object is moving toward or away from Earth. | <urn:uuid:d5cc37cc-d4ef-4fda-9d7d-2f852802694a> | 3.796875 | 170 | Knowledge Article | Science & Tech. | 34.171338 |
The Wimshurst Machine
Electrostatic induction refers to the principle that charges in an object (especially a conductor) redistribute themselves in the presence of nearby charges. Opposite charges are attracted to each other, while similar charges are repelled.
In a Wimshurst machine, metal sectors on a pair of counter-rotating discs are charged by induction, and the charge is collected by combs connected to two discharge knobs. Larger charges can be stored by connecting the knobs to Leyden jars, which are component parts of the machine.
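A Leyden jar is an early capacitor, so the charge and energy it stores follow Q = CV and E = ½CV². A rough order-of-magnitude sketch (both values below are assumptions, not specifications of any particular machine):

```python
C = 1e-9   # farads: assumed capacitance of a small Leyden jar
V = 50e3   # volts: assumed potential a Wimshurst machine might reach

Q = C * V            # stored charge in coulombs
E = 0.5 * C * V**2   # stored energy in joules
print(Q, E)
```

On these assumed numbers, roughly a joule is released in the spark, which is why adding the jars makes the discharge so much more energetic than the bare knobs.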
Austin, Texas just hit 100 degrees today (according to weather.com).
This is our 25th day of 100 degree weather this year. That pales in comparison to 2011, when at this time last year we were counting down to breaking the previous record of 69 days of 100 degree weather set back in 1925. Austin did that and more, setting a new record of 90 days of 100 degree weather in a single year a month and a half later.
Nevertheless, this year is still above our average of 13.5 days of 100 degree weather, but to the north of Texas, the midsection of the country is experiencing drought and heat waves comparable to ours of 2011. That being said, weather forecasters are seeing the development of a moderate El Nino which could bring enough rain to Texas this winter to break our drought. We can only hope that it is not a strong El Nino like the one that hit in 1997 and 1998 which brought major flooding to the state. These feast or famine swings of weather are taking their toll on many things in this state - our agriculture, economy, electric grid . . .
If climate change is responsible for these extreme weather events, then maybe our leaders should look more closely at what we can do to slow climate change and mitigate the effects.
Read Full Post »
Posted in Global Warming, tagged atmosphere, climate change, extreme weather, Global Warming, greenhouse gas, meteorological, powershift, snowstorm, storm, temperature on March 4, 2009 |
1 Comment »
The incredible snowstorm that swept across the east coast yesterday was a bit of a surprise to the thousands of people that flocked to Washington, DC for Powershift 2009. While it did not stop the enviro-activists from protesting dirty energy sources such as coal, some blog noise today indicates that the snowstorm may have blurred the whole message of Powershift. Several (shortsighted) bloggers wrote that the snow debunks the whole threat of global warming. This only demonstrates that some people still don’t truly understand the effects of global warming.
“Global Warming” has become a buzzword over the past decade, but it can misconstrue the environmental effects of the general “warming” of our atmosphere. If you’re reading this blog, you are probably familiar with the fact that an increase of greenhouse gases in our atmosphere (hacking cough, coal plants, cough) has caused an increase in the average temperature of the air and ocean. But don’t be fooled into thinking that this means that every day will get just a little bit hotter. The rise in temperature has drastic effects on the meteorological dynamics of the earth, and causes more storms and other abnormal weather events.
We don’t need science to tell us that this is occurring—we all remember the THREE major hurricanes that hit the Texas Coast in 2008 — and we don’t need a radar to show that Texas (along with so many other parts of the world) is getting hotter and suffering from drought. So just remember, global warming is not just about heat, it’s about tornadoes and hurricanes and droughts…oh my!
Read Full Post » | <urn:uuid:1936fac9-946e-4d04-9fc2-f257584af777> | 2.703125 | 658 | Personal Blog | Science & Tech. | 53.589812 |
Before continuing our tale, I'm going to bluntly advertise for the next session of my Introduction to Astronomy course which starts in January.
This course is for school children ages 8 and up, and surveys the science of astronomy from the contents of our solar system, to how stars form, age, and die, to the farthest regions of our universe, and the variety of strange objects that we have discovered within it.
I touch on the fundamental physics involved in our knowledge of the universe surrounding us - how we use the light from distant objects to understand their distance and composition, and even introduce thermonuclear physics and general relativity at a level that even the 8 year olds can grasp.
Weekly presentations are followed by observing sessions in my front yard (in Southbury) using a telescope capable of showing the major planets, star clusters, nebulae and galaxies, along with the occasional comet, meteors, and plenty of artificial satellites. We will meet for 19 weeks (Thursdays at 7 p.m.) from January through May. The cost of this course is $150 per student; parents are encouraged to stay and learn as well (for free). Full details and contact information are at http://www.turnerclasses.com
Ah, but where were we? Oh yes, the second of our two worlds.
After the discovery of Uranus, its orbit about the Sun was determined with increasing accuracy over the next several years, allowing exact predictions of where it should appear in the sky month after month into the indefinite future. As time went on however, it was noticed that where Uranus was predicted to be differed from where it was actually observed by a steadily increasing amount.
By the 1840s, 60 years after the discovery of Uranus, the error in the predicted and observed position of Uranus became large enough to suggest that something was fundamentally wrong with the assumptions that were made in calculating the orbit of the planet.
The mathematics involved in computing the orbit of a planet are actually quite simple if the only objects to be considered are the planet and the Sun. A single force of gravity pulls on the planet in the direction of the Sun, and the momentum of the planet's motion keeps it in an elliptical path around the Sun. However, this simple condition is not what occurs in the solar system.
Gravitational attraction occurs between all objects in the universe. The force increases as the masses of the attracted objects increase, and it increases as the distance between the objects decreases. Here on Earth, the largest source of gravity is the Earth itself, as it is both enormous compared to the objects on its surface, and we are close to its center compared to our distance to other massive objects such as the Sun. But nonetheless, all of the objects on Earth and elsewhere are attracted to one another. The person sitting across the room from you is very slightly pulling you toward them, and (as creepy as it may sound) even you and I, separated by many miles, are ever so slightly pulled toward each other.
Going back to the orbit of a planet, there are several other planets also orbiting the Sun at varying distances from the planet being studied, and each of these exerts a small gravitational pull of its own upon the planet. Because each of these other planets is itself in motion, and all planets are pulling on all other planets as they move, the simple problem of calculating an orbit suddenly becomes exceedingly complex. In fact, with as few as three objects involved, an exact general solution for the motions has been impossible to achieve, and only very high precision approximations are possible.
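Newton's law of gravitation, F = G·m₁·m₂/r², makes it easy to estimate just how small the perturbing pulls described above actually are. A rough sketch comparing the Sun's gravitational acceleration of Uranus with Neptune's at closest approach (the masses and orbital radii are rounded modern textbook values, not figures from this article, and G cancels out of the ratio):

```python
# Compare the gravitational acceleration of Uranus due to the Sun with that
# due to Neptune at closest approach: a = G*M / r**2, so the ratio is
# (M_nep / M_sun) * (r_sun / r_nep)**2. Values are rounded, for scale only.

M_SUN = 1.989e30      # kg
M_NEPTUNE = 1.02e26   # kg
AU = 1.496e11         # metres per astronomical unit

r_sun = 19.2 * AU               # Uranus-Sun distance (semi-major axis)
r_nep = (30.1 - 19.2) * AU      # Uranus-Neptune distance at conjunction

ratio = (M_NEPTUNE / M_SUN) * (r_sun / r_nep) ** 2
print(f"Neptune's pull on Uranus is ~{ratio:.1e} of the Sun's")  # ~1.6e-04
```

A perturbation of roughly one part in ten thousand sounds negligible, but it acts continuously for decades, which is why the error in Uranus's predicted position grew steadily rather than staying constant.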
In the case of the orbits of planets in our solar system, the smaller planets can be ignored, and for Uranus, only the effects of the Sun, Jupiter and Saturn were considered in the predicted positions determined in the 1820s. However, the steady disagreement between computed and observed positions of Uranus suggested that either Newton's law of gravity did not apply to objects as distant as Uranus, or there was another large body other than the Sun and the known planets affecting the orbit of Uranus.
In the early 1840s, the British mathematician John Couch Adams realized that it would be theoretically possible to determine the mass and the orbit of a hypothetical planet that would explain the observed deviations in the position of Uranus using only the observations of Uranus itself in the calculation. Adams began attacking the problem in 1844, working on it part-time while tutoring other students to earn his livelihood. Meanwhile, in France, another mathematician, Urbain Le Verrier, began to work the same problem in 1845. Partly because of the intense rivalry between England and France in the mid-19th century, these men knew nothing of each other's work.
We should take a moment here to consider what these men were attempting to accomplish, and the conditions under which they were working. The calculations involved take the form of lengthy differential equation calculations which can only be approximately solved, and require many repeated iterations to achieve a level of accuracy sufficient to be better than the observed errors in the position of Uranus. Each iteration required weeks of effort - recalling that in the 1840s there were no mechanical devices to assist in calculations, and every line of computation was written out manually on paper with a quill pen, using candle or gas lights to work into the early hours of the morning.
By 1846, both Adams in England and Le Verrier in France had arrived at solutions for the orbit of an unknown body which would explain the changes observed in the observed positions of Uranus. From these calculations they could begin to predict where in the sky one should point a telescope to see the predicted object. But both men were surprised by the lack of interest their predictions generated in the communities of astronomers in their countries. Being primarily mathematicians, their work was neither well understood nor appreciated by the astronomers of their day.
Finally, upon seeing Le Verrier's work published, and understanding the importance to national pride, if not science, the Astronomer Royal of England (Sir George Airy) urged British astronomers to begin a search, using the predictions of Adams as a guide. At about the same time, generating no interest in his own country, Le Verrier reached outside of France to the Berlin Observatory, sending a letter with coordinates to be used in searching for the unknown object. When this letter arrived on September 23, 1846, the director of the observatory (Johann Encke) had just left on a vacation to the Alps. His understudy, Johann Galle, decided to begin a search that very night.
Unlike the British astronomers who had attempted a search, Galle had very recent star maps of the search area available to him, allowing him to easily separate stars with known and constant positions from the object he was seeking. Amazingly, within a few hours of starting the quest, Galle located a bluish-green object showing a minuscule disk, quite definitely not on his star map, within a very small distance of where Le Verrier had predicted it would be found. Subsequent observations confirmed an excellent match to the orbit and size predicted by both Adams and Le Verrier. The planet Neptune had been discovered.
What is unique about the discovery of Neptune is the clear triumph of mathematics and physics in being able to discover an object purely through computation, only directly observed after the discovery had been completed on paper. The mighty intellectual efforts of Galileo, Kepler, and most directly Isaac Newton, were forever confirmed and validated on that September night in 1846.
Speaking of Galileo, a pair of observations he made in December, 1612 and January, 1613 add a footnote to this tale. During this time, Galileo was intensely studying the motions of the moons of Jupiter. In his notebook from December 28th, we find a diagram of Jupiter and its moons, with a "fixed star" also marked. In his later notebook entry from January 27th, he again notes this "star", but also indicates that it has moved relative to other stars near it in the sky.
Using modern computer models of the orbit of Neptune, we can predict the positions of the planets at any date in the future or past (what took Adams and Le Verrier years to compute can now be accomplished in a matter of minutes). What we find for the dates of Galileo's observations is that Neptune and Jupiter were very close to one another in the sky on those dates, with the position of Neptune almost exactly as Galileo had drawn it!
Had he realized what he was observing, Galileo would have discovered Neptune some 220 years prior to Adams and Le Verrier. But then the story of this great discovery would have been significantly less interesting, at least in my opinion.
PGC2429 is an Elliptical Galaxy and is considered part of the Andromeda Galaxy. This galaxy has turned up in many different reference guides and has many names, M110 and NGC 205 among them. A satellite of the Andromeda Galaxy, PGC2429 is often seen as a small ball of light underneath or just outside of the larger spiral galaxy. This picture was taken when PGC2429 was to the northwest of Andromeda. Though considered a Dwarf, PGC2429 has eight globular clusters forming a halo around it. A globular cluster is a multitude of stars gravitationally bound together. These clusters allow for PGC2429 to be seen with an amateur telescope. Usually, Elliptical Galaxies are known for their bright red stars. However, this galaxy is made up of blue, ultra-violet light. This means that the stars of PGC2429 are burning at a very high temperature, hotter than most Elliptical Galaxies.
The picture above shows these intensely burning blue stars, which appear white at this distance. The misty halo around PGC2429 is the series of eight globular clusters within an arc minute of the dense center, allowing our telescope to pick up the light from Earth. There is a thin dust cloud in front of PGC2429, visible at the left top and bottom a few arc minutes away from the ring of light emitted by the globular clusters; it is characterised by the dark smudges of absent light in the elliptical dome of outer light. The globular clusters and dense core emit this large ellipse-shaped dome of light that reaches to the edges of the picture. At the edges of this light, there are multiple blue and yellow lights, stars around the same ages as would be found in PGC2429. We can also see that PGC2429 has a dense, bright core, unlike the larger Andromeda Galaxy, whose core is a supermassive black hole. This means that the Elliptical Galaxy has a central star or stars. The distance to PGC2429 is 2900 kly (kilolight-years); we find the maximum angular size to be 17 arc minutes, which corresponds to a linear size of roughly 14 kly.
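The linear size follows from the small-angle approximation: physical size ≈ distance × angle in radians. A quick check with the distance and angular size quoted above:

```python
import math

def linear_size(distance: float, angle_arcmin: float) -> float:
    """Physical size from distance and angular size (small-angle approximation).

    Returns the size in the same unit as `distance`.
    """
    angle_rad = math.radians(angle_arcmin / 60.0)  # arcmin -> degrees -> radians
    return distance * angle_rad

# 2900 kly away and 17 arcmin across works out to roughly 14 kly of linear extent.
print(f"{linear_size(2900.0, 17.0):.1f} kly")  # ~14.3 kly
```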
Frommert, Hartmut and Kronberg, Christine. "Messier 110" Students for the Exploration and Development of Space. http://messier.seds.org/m/m110.html
Microsoft Research, “M110” World Wide Telescope. http://www.worldwidetelescope.org/search/ObjectDetails.aspx?id=110
Nemiroff, Robert and Bonnell, Jon. “M32: Blue Stars in an Elliptical Galaxy.” http://apod.nasa.gov/apod/ap991103.html
|Right Ascension (J2000)||00:40:22|
|Filters used||B (Blue), C (Clear), R (Red), V (Green)|
|Exposure time per filter||B, V, R, C (60s x 5)|
|Date observed||November 10, 2011|
Abell 644 and SDSS J1021+131: Two galaxies, 920 million and 1.1 billion light years away from Earth respectively, used in a new study of supermassive black holes.
Caption: The galaxy on the left, Abell 644, is in the center of a cluster of galaxies. The right panel contains SDSS J1021+131, a so-called field galaxy because it is isolated. Both images are composites with data from Chandra (blue) and the Sloan Digital Sky Survey (red, yellow, white). A survey of these and hundreds of other galaxies tells scientists how often the biggest black holes in field galaxies like SDSS J1021+131 have been active over the last few billion years. This has important implications for how environment affects black hole growth.
Scale: Abell 644 image is 13.2 arcmin across; SDSS J1021+131 image is 3.2 arcmin across.
Chandra X-ray Observatory ACIS Image
Rare Corals Breed Their Way Out Of Trouble
Source: Science Daily
October 20, 2008
Rare corals may be smarter than we thought. Faced with a dire shortage of mates of their own kind, new research suggests they may be able to cross-breed with certain other coral species to breed themselves out of a one-way trip to extinction.
To read the full article, click here.
bobhenstra wrote:So, where is the strength in EQ shock waves of less than one hertz? How does a frequency of less than one hertz "signal?" earthquakes in distance areas as the map above supposedly shows? It would seem to me that if the supposition is true that frequencies of a percentage of less than one would have to have a big increase in power to effect areas of the distance shown on the map, and I don't see how that is possible!
I guess I'm having trouble understanding how an EQ frequency of .50 hertz, or a sub-hertz frequency, is more powerful than frequencies at or above one hertz! Radio frequencies are not applicable here. As I understand it, earthquake S and P waves are shock waves. On a seismograph, when S and P waves flat-line, there are no more waves, so how do weakened "sub-hertz" waves trigger earthquakes at the distances claimed in the map above?
The frequency is different from the amplitude of the wave. The amplitude is the height or magnitude of the wave. Both frequency and amplitude have an effect on the "power" or energy in the wave, and a higher frequency (changing or moving faster) means more energy.
Earthquakes are usually so slow that the frequency doesn't make a big difference in the power, so they mostly measure amplitude.
I think what it was saying above is not that lower-frequency quakes have more power, but that they carry further through the earth, i.e., the waves are not dampened as much by stable material as high-frequency waves would be.
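The answer above can be made quantitative: for a simple harmonic wave, the energy carried scales with the square of both the amplitude and the frequency (E ∝ A²f²). A sketch of that scaling, with the physical prefactor (density, wave speed, and so on) deliberately left out so only ratios between waves are meaningful:

```python
# Relative energy of a harmonic wave scales as amplitude**2 * frequency**2.
# The constant prefactor is ignored here; only ratios are meaningful.

def relative_energy(amplitude: float, frequency_hz: float) -> float:
    """Dimensionless relative energy of a harmonic wave."""
    return (amplitude * frequency_hz) ** 2

low = relative_energy(amplitude=1.0, frequency_hz=0.5)   # sub-hertz wave
high = relative_energy(amplitude=1.0, frequency_hz=2.0)  # higher-frequency wave

# Same amplitude at 4x the frequency carries 16x the energy, but the
# low-frequency wave is attenuated far less and so travels much farther.
print(high / low)  # 16.0
```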
I feel like whenever a big giant roach crosses my path, it comes right at me as if it’s being remotely controlled by someone who has learned my deepest, darkest fears and is using them to torment and ‘bug’ me. Well, it seems I may be right!
Scientists from North Carolina have developed remotely-controlled roaches, not to torment me, more like to help find and save survivors of natural disasters.
National Geographic writes:
The sight of a cockroach scuttling across the floor makes most of us shudder, but in a disaster, roaches might prove to be our new best friends.
Cockroaches that are surgically transformed into remote-controlled “biobots” could help locate earthquake survivors in hard-to-access areas. This new video from North Carolina State University’s iBionics Laboratory shows how the lab’s enhanced roaches can be steered with surprising precision.
To learn more, Amanda Fiegl spoke to assistant professor of engineering Alper Bozkurt, who led the roach biobot project.
What exactly is a biobot? Is it like a cyborg, a combination of a living organism and a robot?
“Biobot” is short for “biological robot.” It is the first stage of creating what we would call an insect cyborg.
Currently, we can steer these roaches remotely and make them stop, go, and turn. If we can have them interact independently with the technologies we’ve surgically implanted in them, then they will become true cyborgs.
Is it hard to perform surgery on a cockroach?
No, it’s quite simple. Insects can be anesthetized by putting them in the fridge for a few hours—the cold basically makes them hibernate, so they don’t move. Then you just need tweezers and a microscope.
We do a simple surgery to insert the electrodes in the roaches’ antennae and cerci [rear sensors]. We also use medical-grade epoxy to glue tiny magnets to their backs, so that we can just snap on the backpack containing the wireless control system.
Your paper mentions that these biobots could help rescue earthquake survivors. How, exactly?
Their backpacks can carry a locator beacon and a tiny microphone to pick up cries for help. Of course, a human operator or computer still has to be listening and steering them. Our biobots are basically just beasts of burden. They could also carry a camera or any other kind of miniaturized sensor one can imagine.
These experiments were done in a very controlled laboratory environment, on a flat surface, so we are now in the process of building test-beds that mimic some real-life scenarios. I don’t think it will be very long before we can deploy them to actually help rescue people.
Read more of the Q & A at news.nationalgeographic.com
THE CASE for life in the underworld is stronger than ever, say Swedish scientists, who have found the remains of micro-organisms that once flourished in granite more than 200 metres underground.
Earlier this decade scientists claimed to have found bacteria in a 1.8-billion-year-old granite layer nearly 200 metres below Sweden's Baltic coast. The drill that sampled the rock passed through a fracture, and water found there contained large numbers of bacteria.
Sceptics claimed that those organisms were contaminants from the surface that had been introduced as the core was drilled. But Karsten Pedersen of Gothenburg University in Sweden was not one of them. "I was convinced there is a deep biosphere," he says. "However, it is difficult to prove unambiguously that it is there."
Pedersen and his colleagues studied a core drilled from the same site on Sweden's coast for evidence that microorganisms were living in the rock long before ...
To continue reading this article, subscribe to receive access to all of newscientist.com, including 20 years of archive content.
Feb 13, 2013, 11:27 AM | #1
Resonance frequency of a matrix system
I have a 2d system of the form:
dm1/dt + A11 m1 + A12 m2 = 0
dm2/dt + A21 m1 + A22 m2 = 0
I saw that the resonance frequency of such a system is defined as:
fr = 1/(2*pi) * [ (A11 + A22)/2 + sqrt( ((A11-A22)/2)^2 + A12*A21 ) ]
if I'm not mistaken - I can't even remember where I saw that formula.
Could someone confirm that it's the exact formula and/or explain why this holds and how this was obtained and/or how does this extend to the 3d case?
Thanks a lot!
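The bracketed quantity in that formula is an eigenvalue of the coefficient matrix A = [[A11, A12], [A21, A22]]: solutions of dm/dt + A·m = 0 behave like e^(-λt), where λ solves the characteristic equation λ² - (A11 + A22)λ + (A11·A22 - A12·A21) = 0, giving λ = (A11 + A22)/2 ± sqrt(((A11 - A22)/2)² + A12·A21). Dividing by 2π converts an angular rate into a frequency in hertz; when the square root is imaginary, the imaginary part of λ sets the oscillation (resonance) frequency. In 3D the eigenvalues come from a cubic, so there is no equally compact closed form, but they can still be computed numerically. A sketch cross-checking the 2x2 formula (this is my reconstruction of the thread's formula, not an authoritative reference):

```python
import cmath

def eig2(a11: float, a12: float, a21: float, a22: float):
    """Eigenvalues of a 2x2 matrix via the closed form quoted in the thread."""
    mean = (a11 + a22) / 2.0
    disc = cmath.sqrt(((a11 - a22) / 2.0) ** 2 + a12 * a21)
    return mean + disc, mean - disc

# Cross-check on [[2, 1], [1, 2]], whose eigenvalues are known to be 3 and 1.
lam_plus, lam_minus = eig2(2.0, 1.0, 1.0, 2.0)
print(lam_plus.real, lam_minus.real)  # 3.0 1.0

# When A12*A21 < -((A11 - A22)/2)**2 the eigenvalues are complex and the
# system oscillates; the imaginary part gives the oscillation frequency:
lam, _ = eig2(0.1, -4.0, 4.0, 0.1)
f_osc = abs(lam.imag) / (2.0 * cmath.pi)
print(f"{f_osc:.3f} Hz")  # ~0.637 Hz
```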
Alain Van Ryckegham, a professor at the School of Natural Resources at Sir Sandford Fleming College in Lindsay, Ontario, Canada, offers this explanation:
Image: Bioinfo Animal Pictures Archive
Bats are a fascinating group of animals. They are one of the few mammals that can use sound to navigate--a trick called echolocation. Of the some 900 species of bats, more than half rely on echolocation to detect obstacles in flight, find their way into roosts and forage for food.
Echolocation--the active use of sonar (SOund Navigation And Ranging) along with special morphological (physical features) and physiological adaptations--allows bats to "see" with sound. Most bats produce echolocation sounds by contracting their larynx (voice box). A few species, though, click their tongues. These sounds are generally emitted through the mouth, but Horseshoe bats (Rhinolophidae) and Old World leaf-nosed bats (Hipposideridae) emit their echolocation calls through their nostrils: there they have basal fleshy horseshoe or leaf-like structures that are well-adapted to function as megaphones.
Echolocation calls are usually ultrasonic--ranging in frequency from 20 to 200 kilohertz (kHz), whereas human hearing normally tops out at around 20 kHz. Even so, we can hear echolocation clicks from some bats, such as the Spotted bat (Euderma maculatum). These noises resemble the sounds made by hitting two round pebbles together. In general, echolocation calls are characterized by their frequency; their intensity in decibels (dB); and their duration in milliseconds (ms).
Image: Museum of Paleontology, UC Berkeley and Regents of the University of California.
In terms of pitch, bats produce echolocation calls with both constant frequencies (CF calls) and frequency-modulated sweeps (FM calls). Most bats produce a complicated sequence of calls, combining CF and FM components. Although low-frequency sound travels further than high-frequency sound, calls at higher frequencies give the bats more detailed information--such as size, range, position, speed and direction of a prey's flight. Thus, these sounds are used more often.
In terms of loudness, bats emit calls as low as 50 dB and as high as 120 dB, which is louder than a smoke detector 10 centimeters from your ear. That's not just loud, but damaging to human hearing. The Little brown bat (Myotis lucifugus) can emit such an intense sound. The good news is that because this call has an ultrasonic frequency, we are unable to hear it.
The ears and brain cells in bats are especially tuned to the frequencies of the sounds they emit and the echoes that result. A concentration of receptor cells in their inner ear makes bats extremely sensitive to frequency changes: some Horseshoe bats can detect differences as slight as 0.0001 kHz. For bats to listen to the echoes of their original emissions and not be temporarily deafened by the intensity of their own calls, the middle ear muscle (called the stapedius) contracts to separate the three bones there--the malleus, incus and stapes, or hammer, anvil and stirrup--and reduce the hearing sensitivity. This contraction occurs about 6 ms before the larynx muscles (called the cricothyroid) begin to contract. The middle ear muscle relaxes 2 to 8 ms later. At this point, the ear is ready to receive the echo of an insect one meter away, which takes only 6 ms.
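The timing figures check out with a quick round-trip calculation: an echo from a target d metres away travels 2d at the speed of sound. A minimal sketch (the 343 m/s figure assumes dry air at about 20 °C):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C (assumed)

def echo_delay_ms(distance_m: float) -> float:
    """Round-trip time in milliseconds for an echo from a target distance_m away."""
    return 2.0 * distance_m / SPEED_OF_SOUND * 1000.0

def target_distance_m(delay_ms: float) -> float:
    """Invert the echo delay to recover the target's range in metres."""
    return SPEED_OF_SOUND * (delay_ms / 1000.0) / 2.0

# An insect one metre away returns its echo in just under 6 ms,
# matching the ~6 ms figure quoted above.
print(f"{echo_delay_ms(1.0):.1f} ms")  # ~5.8 ms
```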
The Scariest Monsters of the Deep Sea
We took the spook-tacular celebration to the depths of the ocean, where some of the craziest—and scariest—looking creatures lurk in the dark.
- By Emily G. Frost and Hannah Waters
- October 30, 2012
Learn more about the ocean from the Smithsonian's Ocean Portal.
(Ocean Portal / David Shale)
This red octopus is eerily beautiful. Found in the deep Atlantic waters off the U.S. coast, the eight arms of Stauroteuthis syrtensis are connected by webbing that it uses to swim. Rows of glowing bioluminescent suckers trail down its eight arms and glow in the deep sea. Scientists think these glow-in-the-dark suckers may be used to attract planktonic prey like insects drawn to a light. The species has been recognized for at least 100 years, but it wasn't until 1999 that scientists realized it glowed.
A strange 525 million-year-old fossil creature is baffling scientists because it does not fit neatly into any existing animal groups.
The animal, from the early Cambrian Period, might have belonged to a now extinct mollusc-like phylum, academics from America and China say.
Other researchers have suggested the creature could represent an early annelid or arthropod.
Details are published in Proceedings of the Royal Society B.
The 5-10cm-long (2-4 inch) fossil, from Anning in China, had a flattened body and horizontal fins which, researchers think, could have been used to support it as it moved along the sea floor. It also had well developed senses, including a pair of eyes on stalks.
The trouble is the animal, named Vetustodermis planus, did not possess a set of features, or characters, which placed it clearly within any known group.
When it was first described in 1979, Vetustodermis was included in the annelid category. Later researchers argued against this classification, saying it was, in fact, either an arthropod or a mollusc.
According to the latest study, the weird creature seems closest to molluscs, primarily because it had a snail or slug-like flat foot. However, the researchers say, it does not sit happily in this group.
"Phyla are defined by an organism having a set of features called characters, and currently there are no animals that we know of which contain the set of characters that Vetustodermis has," co-author David Bottjer, of the University of Southern California, US, told the BBC News website.
Vetustodermis planus does not fit comfortably within any known phylum.
"The phylum with which it shares the most characters is the Mollusca, but squeezing Vetustodermis into the mollusca is a somewhat messy job."
Since Vetustodermis requires some "pushing and pulling" to force it into any known phylum, Professor Bottjer and his colleagues are tempted to speculate it belonged to a different group entirely; one which flourished and faded within the Cambrian.
"We have always been intrigued by the many molluscan features of these fossils, but in the great menagerie of organisms that have inhabited Earth through life's long history, we may come to conclude that Vetustodermis indeed represents a new phylum," he said.
Jonathan Todd, a palaeontologist from the Natural History Museum, London, UK, is also mystified by the baffling animal.
"It is an intriguing beast," he told the BBC News website. "It is another strange thing from the Cambrian. It doesn't look much like an arthropod and I don't find its molluscan affinities particularly convincing."
However, Dr Todd is reluctant to create a whole new phylum to accommodate Vetustodermis; that, he thinks, would be premature.
"Some scientists have thought that there were so many distinct phyla in the Cambrian," he said. "They came to that conclusion because they were not thinking in the phylogenetic sense, they were thinking 'hey, that is a unique set of features - it must be a distinct phylum'."
So rather than creating new phyla every time something doesn't fit an existing one, the really interesting exercise, Dr Todd thinks, is to establish just how Vetustodermis slotted into the greater evolutionary tree.
If, indeed, it did belong to a different phylum, how did that group connect to the molluscs, annelids and arthropods?
"We don't really know the phylo-genetic relationships between the extant phyla," he said. "Molecular genetics has only gone so far. But recent phyla have got to connect somehow. These fossils really offer the opportunity to tie together recent phyla."
Posted 19 August 2005 - 09:06 AM
It looks cool. I want one.
Posted 29 August 2005 - 10:00 AM
It reminds me of Hallucigenia, another unclassifiable fossil, with seven tentacles on its back and sharp, sword-like legs. Scientists were for years unable to find out what it was, until they discovered they had been holding the fossil upside down.
Just search Google for the word "hallucigenia" and have a laugh.
".... ain't no river wide enough to curb that leap of faith." ~ psyche101 Think you're possessed? I recommend eating right, drinking lots of fluids, and getting plenty of exorcism. What to know more? Then follow me on twitface. Ouija boards provide the most fun you can have with the dead without it being an illegal act. | <urn:uuid:aad5e940-5608-4d6c-9d1f-63db621b49a6> | 3.671875 | 1,171 | Comment Section | Science & Tech. | 51.746431 |
The colorsys module defines bidirectional conversions of color values between colors expressed in the RGB (Red Green Blue) color space used in computer monitors and three other coordinate systems: YIQ, HLS (Hue Lightness Saturation) and HSV (Hue Saturation Value). Coordinates in all of these color spaces are floating point values. In the YIQ space, the Y coordinate is between 0 and 1, but the I and Q coordinates can be positive or negative. In all other spaces, the coordinates are all between 0 and 1.
More information about color spaces can be found at http://www.poynton.com/ColorFAQ.html and http://www.cambridgeincolour.com/tutorials/color-spaces.htm.
The colorsys module defines the following functions:

colorsys.rgb_to_yiq(r, g, b)
colorsys.yiq_to_rgb(y, i, q)
colorsys.rgb_to_hls(r, g, b)
colorsys.hls_to_rgb(h, l, s)
colorsys.rgb_to_hsv(r, g, b)
colorsys.hsv_to_rgb(h, s, v)
>>> import colorsys
>>> colorsys.rgb_to_hsv(.3, .4, .2)
(0.25, 0.5, 0.4)
>>> colorsys.hsv_to_rgb(0.25, 0.5, 0.4)
(0.3, 0.4, 0.2)
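The other conversion pairs work the same way; for instance, a YIQ round trip (the input values here are chosen arbitrarily for illustration):

```python
import colorsys

# Convert an RGB color to YIQ and back; note that I and Q may be negative.
y, i, q = colorsys.rgb_to_yiq(0.3, 0.4, 0.2)
r, g, b = colorsys.yiq_to_rgb(y, i, q)

# The round trip closely reproduces the original RGB values.
assert max(abs(r - 0.3), abs(g - 0.4), abs(b - 0.2)) < 1e-6
```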
Humans have been using wind power for more than 5,500 years to propel all manner of boats and to provide natural ventilation inside buildings. Its mechanical use came much later in history, however: the first historical mention of a windmill occurs in the 1st century AD. The first modern wind turbines were built in the early 1980s, although more efficient designs are still being developed.
Other info about wind power
An estimated 72 TW of wind power on the Earth could potentially be commercially viable, compared with the 12 TW of total global power used in 2005. Because of wind's low energy density, several hundred wind turbines are required to replace a conventional power station. The most direct effect that wind turbines have on wildlife is the death of birds and bats from collisions.
Mass m is attached to the end of a cord of length R above its rotational axis (the axis is horizontal, and the mass starts directly above it). It is given an initial velocity, V0, in a direction parallel to the horizon. The initial state is depicted at position A in the image.
The forces acting on the mass are the gravitational force Mg from the Earth and the tension T of the cord.
How can I calculate the tension of the cord when the mass is at some angle $\theta$ from its initial position (position B in the image)?
Here's what I thought:
Since the mass is moving in a circle, the total force in the radial direction is $T - Mg\cos\theta = MV^2/R$,
and so $T = Mg\cos\theta + MV^2/R$,
but since gravity also produces acceleration in the tangential direction, V should itself be a function of $\theta$, and that is where I kind of got lost. I tried to express V as the integral of $Mg\sin\theta$, but I wasn't sure if that's the right approach.
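One standard way around the tangential integration (a sketch, assuming position A is the topmost point and $\theta$ is measured from it) is energy conservation, which already accounts for the work done by the tangential component of gravity; the resulting $V^2(\theta)$ can then be substituted into the radial equation:

```latex
% Energy conservation between A and B: measuring \theta from A at the top,
% the mass has descended a height h = R(1 - \cos\theta).
\frac{1}{2}MV_0^2 + MgR(1-\cos\theta) = \frac{1}{2}MV^2
\qquad\Longrightarrow\qquad
V^2 = V_0^2 + 2gR(1-\cos\theta)
```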
astroengine writes "What if the Martian terrain is too rugged for a rover to traverse? How do we study surface features that are too small for an orbiter to resolve? If selected by NASA, the Aerial Regional-Scale Environment Surveyor (ARES) could soar high above the Martian landscape, getting a unique bird's-eye view of the Red Planet. Its primary mission is to sniff out potential microbial-life-generating gases like methane, but it would also be an ideal reconnaissance vehicle to find future landing sites for a manned expedition. Prototypes of the rocket-powered drone have been successfully flown here on Earth, so will we see ARES on Mars any time soon?"
This simulation illustrates the electric field generated by a line of charge, and shows how, by the principle of superposition, a continuous charge distribution can be thought of as the sum of many discrete charge elements. Each element generates its own field, described by Coulomb's law (and represented here by the small vectors attached to the observation point), which, when added to the contribution from all the other elements, results in the total field of the line (given by the large resultant vector, and by the large two-dimensional field map). By moving the observation point around with the arrow keys, changes in field magnitude and direction can be observed at different positions relative to the line.

Note that if this were truly an "infinite line" of charge, the components of the field in the direction of the line would cancel each other out. The resultant field would thus be described only by the contributions perpendicular to the axis of the line.
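The same superposition idea can be checked numerically. The sketch below is not part of the applet; the function name, parameters, and charge values are invented for illustration:

```python
import math

def line_field(x, y, L=2.0, Q=1e-9, n=2000, k=8.9875517923e9):
    """Field at (x, y) from a uniform line of charge lying on the x-axis
    from -L/2 to +L/2, approximated as n discrete point-charge elements."""
    dq = Q / n
    Ex = Ey = 0.0
    for i in range(n):
        xi = -L / 2 + (i + 0.5) * L / n   # center of element i
        dx, dy = x - xi, y                # vector from element to field point
        r2 = dx * dx + dy * dy
        r = math.sqrt(r2)
        dE = k * dq / r2                  # Coulomb's law for one element
        Ex += dE * dx / r                 # superpose the components
        Ey += dE * dy / r
    return Ex, Ey

# On the perpendicular bisector the axial contributions cancel pairwise,
# just as they would for an infinite line: only the perpendicular part survives.
Ex, Ey = line_field(0.0, 0.5)
```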
M theory is a name for a more unified theory that has the different string theories, as we know them, as limits, and which also can reduce, under appropriate conditions, to eleven-dimensional supergravity. There's this picture that we all have to draw where different string theories are limits of this M theory, where M stands for Magic, Mystery or Matrix, but it also sometimes is seen as standing for Murky, because the truth about M theory is Murky. And the different limits, where the main parameter simplifies, give the different string theories -- Type IIA, Type IIB, Type I, and there's eleven-dimensional supergravity, which turns out to be an important limit even though it isn't part of the systematic perturbation expansion, then there's the E8 × E8 heterotic string, and there's the SO(32) heterotic string.
So M-theory is a name for this picture, this more general picture that will generate the different limits through the different string theories. The parameters in this picture we can think of being roughly hbar, which is Planck's constant, and that determines how important the quantum effects are, and the other parameter is alpha prime, which is the tension, related to the tension of the string, that determines how important stringy effects are. So traditionally, a physicist looking at Type IIA, for example, by traditional weak coupling methods, explores this little region, and if asked how his theory is related to Type I theory, the answer would have to be, "Well I don't know, that's something else."
And likewise, if you ask this observer what happens for strong coupling, the traditional answer was, "Well I don't know." In graduate courses, you learn that you can do more or less anything for weak coupling, but you can't do anything for strong coupling. What happened in the 90s was that we learned how to do a little bit for strong coupling, and it turned out that the answer is that Type IIA at strong coupling turns out to be Type I in a slightly different limit, SO(32) heterotic, and so on. So we built up this more unified picture, but we still don't understand what it means.
Dr. Ed Witten, Princeton University
My favorite passage from Dr. Witten's statement:
And the different limits, where the main parameter simplifies, give the different string theories -- Type IIA, Type IIB, Type I, and there's eleven-dimensional supergravity, which turns out to be an important limit even though it isn't part of the systematic perturbation expansion
My second favorite passage:
...but we still don't understand what it means
Vapor Pressure: Using Barometers
The measurement of pressure exerted by a vapor is demonstrated using barometers. The vapor pressures of water and ethanol are compared. The 28 thumbnail images summarize the content of the video. Click an image to see the image gallery.
The measurement of pressure exerted by a vapor is demonstrated using barometers. The vapor pressures of water and ethanol are compared. Vapor pressure varies with the strength of the intermolecular forces in the liquid. In this case the number of hydrogen bonds per molecule is different, and more hydrogen bonds means greater intermolecular forces. Also each hydrogen bond between water molecules is stronger than a hydrogen bond between ethanol molecules.
vapor pressure, atmospheric pressure, barometric pressure, barometer, intermolecular forces, hydrogen bond, gases and liquids, organic, phase changes, physical properties
Pressure exerted by a vapor can be measured using barometers. Mercury is poured into a dish below the three barometer tubes. A vacuum pump is used to create a vacuum at the top of the tube, and draw the mercury into it. When the tube is evacuated, the valve is closed, and the pump is turned off. Here the height of the mercury is 736 millimeters.
A syringe is filled with water. The water is carefully injected into the open end of the barometer (by going underneath the pool of mercury). The liquid rises up to the top of the mercury column. When it reaches the vacuum at the top of the barometer, some of the liquid will vaporize. This depresses the column of mercury to 716 millimeters. (The difference in the height of the mercury column before and after injecting the liquid is the vapor pressure of the liquid.) The vapor pressure of water is 20 millimeters of mercury.
A single water molecule can form hydrogen bonds between itself and 4 other water molecules. Vapor pressure varies with the strength of the intermolecular forces in the liquid. Ethanol can only form 3 hydrogen bonds, and they are not as strong as the hydrogen bonds in water. When ethanol is injected, the ethanol rises to the top and vaporizes. The mercury is depressed to 686 millimeters, more than it was with water. More ethanol has vaporized indicating that it has weaker intermolecular forces than water.
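The vapor pressures follow directly from the quoted column heights; a quick arithmetic check:

```python
# Mercury column heights quoted in the demonstration, in millimeters.
baseline_mm = 736       # evacuated barometer, no liquid injected
with_water_mm = 716     # column height after injecting water
with_ethanol_mm = 686   # column height after injecting ethanol

# Vapor pressure = how far the vapor depresses the column.
p_water = baseline_mm - with_water_mm       # 20 mmHg
p_ethanol = baseline_mm - with_ethanol_mm   # 50 mmHg

# Ethanol's higher vapor pressure reflects its weaker intermolecular forces.
print(p_water, p_ethanol)  # prints: 20 50
```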
These ChemEd DL Resource Groups Include This Video
Embedding Video in Learning Materials
The ChemEd DL encourages you to include this high-quality video in learning materials you create. We provide means to embed the video in a Web page, in a wiki, or in an online course using a course-management system. (If you want to include the video in another kind of learning material, contact us.)
If you create learning materials and make them available through the ChemEd DL, all of our videos may be freely used. For example, you can embed any video into the ChemPRIME wiki, or an online course on the ChemEd Moodle Courses site. For more information about developing resources at ChemEd DL, click here.
This video is owned by the Journal of Chemical Education and copyrighted by the ACS Division of Chemical Education, Inc. All rights reserved. To use the video in your materials at your own institution, you must be a subscriber to JCE Web Software. Subscribe as an individual or an institution at the JCE Online Store.
Individual subscribers are permitted to incorporate video into PowerPoint or other multimedia presentations or into lessons written in HTML, provided the presentations or lessons are restricted to the subscriber’s own class (lecture presentation or course management system where only enrolled students have access). Through an institutional subscription the video becomes available for viewing by any student using a computer within the institution’s range of IP addresses. A student could also incorporate video into a report or presentation and the video would play from computers at the student’s own institution.
When this video is incorporated into other materials, this attribution should be present: Video from JCE Web Software, Chemistry Comes Alive!. Copyright ACS Division of Chemical Education, Inc. Used with permission.
In the box below is HTML code to embed this video into your own Web page. Copy the code and paste it into your Web page code.
ChemPRIME project is a full general chemistry textbook available online through the ChemEd DL. We seek collaborators to add exemplars to ChemPRIME that describe chemistry topics within various contexts, such as biological science, food chemistry, and everyday life. ChemPRIME is built using MediaWiki (the same platform used to build Wikipedia). ChemEd DL encourages users to add ChemEd DL videos to their own exemplars. MediaWiki does not allow full HTML code to be inserted, so the ChemEd DL community has created a MediaWiki extension that enables embedding videos. The extension is installed in ChemPRIME, so the code below allows you to add this video to a ChemPRIME page. More information is at ChemPRIME Help:Media. Contact us if you would like to use this MediaWiki extension on your own wiki.
ChemEd Courses is the ChemEd DL's Moodle course management system. Users can manage their own courses, or develop learning resources in Moodle to share with the community. Moodle does not allow full HTML code to be inserted, so the ChemEd DL community has created a Moodle extension that enables embedding videos. The extension is installed in ChemEd Courses, so the code below allows you to add this video to a course in ChemEd Courses. If you would like to use this Moodle extension within your own Moodle installation, contact us.
ChemPaths is the ChemEd DL's Drupal-powered integrated online textbook. Students can explore a full online textbook or other learning pathways. Instructors can sign up to build their own pathways through course material. Drupal does not allow full HTML code to be inserted, so the ChemEd DL community has created a Drupal extension that enables embedding videos. The extension is installed in ChemPaths (currently in beta-testing), so the code below allows you to add this video to a course in a pathway. If you would like to use this Drupal extension within your own Drupal installation, contact us.
find_path : (term -> bool) -> term -> string
Returns a path to some subterm satisfying a predicate.
The call find_path p t traverses the term t top-down until it finds a
subterm satisfying the predicate p. It then returns a path indicating how to
reach it; this is just a string with each character interpreted as:
- "b": take the body of an abstraction
- "l": take the left (rator) path in an application
- "r": take the right (rand) path in an application
- FAILURE CONDITIONS
Fails if there is no subterm satisfying p.
# find_path is_list `!x. ~(x = []) ==> CONS (HD x) (TL x) = x`;;
Warning: inventing type variables
val it : string = "rblrrr"
- SEE ALSO
As was mentioned earlier, a Tkinter application spends most of its time inside an event loop (entered via the mainloop method). Events can come from various sources, including key presses and mouse operations by the user, and redraw events from the window manager (indirectly caused by the user, in many cases).
Tkinter provides a powerful mechanism to let you deal with events yourself. For each widget, you can bind Python functions and methods to events.
If an event matching the event description occurs in the widget, the given handler is called with an object describing the event.
Here’s a simple example:
from Tkinter import *

root = Tk()

def callback(event):
    print "clicked at", event.x, event.y

frame = Frame(root, width=100, height=100)
frame.bind("<Button-1>", callback)
frame.pack()

root.mainloop()
In this example, we use the bind method of the frame widget to bind a callback function to an event called <Button-1>. Run this program and click in the window that appears. Each time you click, a message like “clicked at 44 63” is printed to the console window.
Keyboard events are sent to the widget that currently owns the keyboard focus. You can use the focus_set method to move focus to a widget:
from Tkinter import *

root = Tk()

def key(event):
    print "pressed", repr(event.char)

def callback(event):
    frame.focus_set()
    print "clicked at", event.x, event.y

frame = Frame(root, width=100, height=100)
frame.bind("<Key>", key)
frame.bind("<Button-1>", callback)
frame.pack()

root.mainloop()
If you run this script, you’ll find that you have to click in the frame before it starts receiving any keyboard events.
Events are given as strings, using a special event syntax:

<modifier-type-detail>
The type field is the most important part of an event specifier. It specifies the kind of event that we wish to bind, and can be user actions like Button and Key, or window manager events like Enter, Configure, and others. The modifier and detail fields are used to give additional information, and can in many cases be left out. There are also various ways to simplify the event string; for example, to match a keyboard key, you can leave out the angle brackets and just use the key as is. Unless it is a space or an angle bracket, of course.
Instead of spending a few pages on discussing all the syntactic shortcuts, let’s take a look on the most common event formats:
<Button>

A mouse button is pressed over the widget. Button 1 is the leftmost button, button 2 is the middle button (where available), and button 3 the rightmost button. When you press down a mouse button over a widget, Tkinter will automatically “grab” the mouse pointer, and subsequent mouse events (e.g. Motion and Release events) will then be sent to the current widget as long as the mouse button is held down, even if the mouse is moved outside the current widget. The current position of the mouse pointer (relative to the widget) is provided in the x and y members of the event object passed to the callback.
You can use ButtonPress instead of Button, or even leave it out completely: <Button-1>, <ButtonPress-1>, and <1> are all synonyms. For clarity, I prefer the <Button-1> syntax.
<B1-Motion>

The mouse is moved, with mouse button 1 being held down (use B2 for the middle button, B3 for the right button). The current position of the mouse pointer is provided in the x and y members of the event object passed to the callback.

<ButtonRelease-1>

Button 1 was released. The current position of the mouse pointer is provided in the x and y members of the event object passed to the callback.

<Double-Button-1>

Button 1 was double clicked. You can use Double or Triple as prefixes. Note that if you bind to both a single click (<Button-1>) and a double click, both bindings will be called.
<Enter>

The mouse pointer entered the widget (this event doesn’t mean that the user pressed the Enter key!).

<Leave>

The mouse pointer left the widget.

<FocusIn>

Keyboard focus was moved to this widget, or to a child of this widget.

<FocusOut>

Keyboard focus was moved from this widget to another widget.
<Return>

The user pressed the Enter key. You can bind to virtually all keys on the keyboard. For an ordinary 102-key PC-style keyboard, the special keys are Cancel (the Break key), BackSpace, Tab, Return (the Enter key), Shift_L (any Shift key), Control_L (any Control key), Alt_L (any Alt key), Pause, Caps_Lock, Escape, Prior (Page Up), Next (Page Down), End, Home, Left, Up, Right, Down, Print, Insert, Delete, F1, F2, F3, F4, F5, F6, F7, F8, F9, F10, F11, F12, Num_Lock, and Scroll_Lock.

<Key>

The user pressed any key. The key is provided in the char member of the event object passed to the callback (this is an empty string for special keys).

a

The user typed an “a”. Most printable characters can be used as is. The exceptions are space (<space>) and less than (<less>). Note that 1 is a keyboard binding, while <1> is a button binding.

<Shift-Up>

The user pressed the Up arrow, while holding the Shift key pressed. You can use prefixes like Alt, Shift, and Control.

<Configure>

The widget changed size (or location, on some platforms). The new size is provided in the width and height attributes of the event object passed to the callback.
The Event Object
The event object is a standard Python object instance, with a number of attributes describing the event.
- widget
The widget which generated this event. This is a valid Tkinter widget instance, not a name. This attribute is set for all events.
- x, y
The current mouse position, in pixels.
- x_root, y_root
The current mouse position relative to the upper left corner of the screen, in pixels.
- char
The character code (keyboard events only), as a string.
- keysym
The key symbol (keyboard events only).
- keycode
The key code (keyboard events only).
- num
The button number (mouse button events only).
- width, height
The new size of the widget, in pixels (Configure events only).
- type
The event type.
For portability reasons, you should stick to char, height, width, x, y, x_root, y_root, and widget. Unless you know exactly what you’re doing, of course…
Instance and Class Bindings
The bind method we used in the above example creates an instance binding. This means that the binding applies to a single widget only; if you create new frames, they will not inherit the bindings.
But Tkinter also allows you to create bindings on the class and application level; in fact, you can create bindings on four different levels:
the widget instance, using bind.
the widget’s toplevel window (Toplevel or root), also using bind.
the widget class, using bind_class (this is used by Tkinter to provide standard bindings).
the whole application, using bind_all.
For example, you can use bind_all to create a binding for the F1 key, so you can provide help everywhere in the application. But what happens if you create multiple bindings for the same key, or provide overlapping bindings?
First, on each of these four levels, Tkinter chooses the “closest match” of the available bindings. For example, if you create instance bindings for the <Key> and <Return> events, only the second binding will be called if you press the Enter key.
However, if you add a <Return> binding to the toplevel widget, both bindings will be called. Tkinter first calls the best binding on the instance level, then the best binding on the toplevel window level, then the best binding on the class level (which is often a standard binding), and finally the best available binding on the application level. So in an extreme case, a single event may call four event handlers.
A common cause of confusion is when you try to use bindings to override the default behavior of a standard widget. For example, assume you wish to disable the Enter key in the text widget, so that the users cannot insert newlines into the text. Maybe the following will do the trick?
def ignore(event):
    pass

text.bind("<Return>", ignore)
or, if you prefer one-liners:
text.bind("<Return>", lambda e: None)
(the lambda function used here takes one argument, and returns None)
Unfortunately, the newline is still inserted, since the above binding applies to the instance level only, and the standard behavior is provided by a class-level binding.
You could use the bind_class method to modify the bindings on the class level, but that would change the behavior of all text widgets in the application. An easier solution is to prevent Tkinter from propagating the event to other handlers; just return the string “break” from your event handler:
def ignore(event): return "break" text.bind("<Return>", ignore)
text.bind("<Return>", lambda e: "break")
By the way, if you really want to change the behavior of all text widgets in your application, here’s how to use the bind_class method:
top.bind_class("Text", "<Return>", lambda e: None)
But there are a lot of reasons why you shouldn’t do this. For example, it messes things up completely the day you wish to extend your application with some cool little UI component you downloaded from the net. Better use your own Text widget specialization, and keep Tkinter’s default bindings intact:
class MyText(Text):
    def __init__(self, master, **kw):
        apply(Text.__init__, (self, master), kw)
        self.bind("<Return>", lambda e: "break")
In addition to event bindings, Tkinter also supports a mechanism called protocol handlers. Here, the term protocol refers to the interaction between the application and the window manager. The most commonly used protocol is called WM_DELETE_WINDOW, and is used to define what happens when the user explicitly closes a window using the window manager.
You can use the protocol method to install a handler for this protocol (the widget must be a root or Toplevel widget):

widget.protocol("WM_DELETE_WINDOW", handler)
Once you have installed your own handler, Tkinter will no longer automatically close the window. Instead, you could for example display a message box asking the user if the current data should be saved, or in some cases, simply ignore the request. To close the window from this handler, simply call the destroy method of the window:
from Tkinter import *
import tkMessageBox

def callback():
    if tkMessageBox.askokcancel("Quit", "Do you really wish to quit?"):
        root.destroy()

root = Tk()
root.protocol("WM_DELETE_WINDOW", callback)
root.mainloop()
Note that even if you don’t register a handler for WM_DELETE_WINDOW on a toplevel window, the window itself will be destroyed as usual (in a controlled fashion, unlike X). However, as of Python 1.5.2, Tkinter will not destroy the corresponding widget instance hierarchy, so it is a good idea to always register a handler yourself:
top = Toplevel(...) # make sure widget instances are deleted top.protocol("WM_DELETE_WINDOW", top.destroy)
Future versions of Tkinter will most likely do this by default.
Window manager protocols were originally part of the X window system (they are defined in a document titled Inter-Client Communication Conventions Manual, or ICCCM). On that platform, you can install handlers for other protocols as well, like WM_TAKE_FOCUS and WM_SAVE_YOURSELF. See the ICCCM documentation for details.
STRANGE parents should have unusual offspring. Sure enough, when two carbon "onions" collide, a nanodiamond is born. It's an insight into the weird chemistry that arises in outer space.
Meteorites are home to diamonds a few nanometres wide but how these tiny crystals form is a mystery. A precursor could be the equally exotic nested carbon cages called carbon onions that lurk in interstellar space. When Nigel Marks of Curtin University in Perth, Australia, and colleagues simulated collisions between two carbon onions, and carbon onions and dust grains, both produced nanodiamonds (Physical Review Letters, DOI: 10.1103/PhysRevLett.108.075503).
Clouds of dust around ageing, carbon-rich stars and dusty planet-forming discs could host such collisions. When the dust gets baked into asteroids, chips can fall to Earth as meteorites.
Have your say
Find Adhesive, Replace Silicon
Thu Mar 01 19:59:20 GMT 2012 by Alex Petkus
If this method of creating diamonds can become inexpensive, it would be neat if a wafer could be formed, with an adhesive or other method, that would allow for Silicon to be replaced. This coupled with 3D cores could significantly increase computing speed.
William D. Phillips, American physicist whose experiments using laser light to cool and trap atoms earned him the Nobel Prize for Physics in 1997. He shared the award with Steven Chu and Claude Cohen-Tannoudji, who also developed methods of laser cooling and atom trapping.

One result of the development of laser-cooling techniques was the first observation, in 1995, of the Bose-Einstein condensate, a new state of matter originally predicted 70 years earlier by Albert Einstein and the Indian physicist Satyendra Nath Bose. In this state atoms are so chilled and so slow that they, in effect, merge and behave as one single quantum entity that is much larger than any individual atom.
Learn It: Sources of Energy
It is estimated that at current levels of usage, there is sufficient coal to last for about 300 years – but the world’s gas and oil supplies will run out around the middle of the 21st century. Hence there is a growing urgency in seeking new sources of energy, preferably renewable ones, and in promoting energy conservation.
Many of the world’s pollution problems result from man’s ever-increasing demand for energy. The pollution of the air by motor vehicles and industry, the fear of “another Chernobyl” (where a very serious accident occurred in a nuclear power station in 1986), the problem of nuclear reprocessing, and the imminent shortage of the world’s reserves of fossil fuels are all symptoms of our desire for more sources of cheap energy.
Energy usage from different sources:
As is clear from this graph, most of the world's energy comes from fossil fuels - oil, coal, peat and natural gas. Because they were formed over millions of years, they cannot be re-formed within a human lifetime and are said to be non-renewable.
Renewable sources of energy offer sustainable alternatives to our dependency on fossil fuels, a means of reducing harmful greenhouse emissions and opportunities to reduce our reliance on imported fuels.
Renewable energy sources such as wind, sun, wood, waste and water are abundantly available in Ireland. Several renewable energy technologies are now commercially viable and capable of supplying clean, economical heat and power.
The term “renewable” usually refers to sources of energy that are continually renewable, such as the sun’s rays, wind, rainfall or tides, or those which can be replaced within a short time-span, such as biomass.
Next: Learn It - Forms of Renewable Energy
I've been doing some writing about warp drive physics and what the hardware will look like. It all starts with having opto-electronics that can induce acceleration fields. I don't think it will be possible to curve space-time, because the energy requirements are unreasonably large (e.g., the Alcubierre drive requires the mass-energy equivalent of the planet Jupiter). Instead, I believe that transmitting redshift, rapidly and repeatedly (<1 millisecond), will cause hyperspace to curve. If hyperspace curves, it will produce an acceleration field. This will lead to warp drive technology. https://sites.google.com/site/superluminaldrivephysics/
Acceleration field generator technology, which I call Blueshift technology, is all about emitting "shift photons". I expect there to be mathematics with df/dt terms. For example, a frequency shift from 400 THz to 800 THz, every millisecond, gives df/dt = 4×10^17 Hz/s.
There will be a frequency analogue to the Newtonian force equation, F = ma, that will look like F = (h/c)(df/dt), where h is the Planck constant and c is the speed of light.
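Taking the author's speculative relation at face value, the numbers he quotes give a tiny force (the constants below are standard; the relation itself is the author's conjecture, not established physics):

```python
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s

# 400 THz -> 800 THz shift repeated every millisecond, as in the text.
df_dt = (800e12 - 400e12) / 1e-3   # = 4e17 Hz/s

F = (h / c) * df_dt   # the proposed frequency analogue of F = ma
# F comes out to roughly 8.8e-25 newtons
```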
It will take a long time before the physics community discovers the connection between df/dt and acceleration field generation with opto-electronics.
"Taking fast action to reduce short-lived climate pollutants (SLCPs) including black carbon, methane, tropospheric ozone and hydrofluorocarbons (HFCs), is a critical climate strategy which can reduce the possibility that severe storms will inject more water vapor into the stratosphere, (causing accelerated ozone depletion and climate change). Reducing SLCPs could cut the rate of global warming in half for the next several decades, cut the rate of warming over the elevated regions of the Himalayas and Tibet by at least half, and the rate of warming in the Arctic by two-thirds over the next 30 years. Since many SLCPs are also potent air pollutants cutting them can also prevent up to 4.7 million premature deaths each year and prevent billions of dollars in crop losses."
"The possibility of significant ozone depletion over North America is only the newest in a litany of accelerating impacts of climate change," stated Durwood Zaelke, President of IGSD. "We cannot afford to wait to take fast-action."
"A warming world with violent storms holds many unpleasant surprises" said Durwood Zaelke, President of the Institute for Governance and Sustainable Development (IGSD). "Recent research now suggests that this may include damage to the protective ozone shield, which protects us from harmful ultraviolet radiation that causes skin cancer, cataracts, suppresses the human immune system, and damages crops and ecosystems."
"Protecting the stratospheric ozone layer is a job that the Montreal Protocol has done for the last 25 years, putting the ozone layer on a course of recovery by mid-century," Zaelke continued.
The new challenge is the surprise finding in a recently published Harvard University study that increasing climate-driven summer thunderstorms might inject more water into the stratosphere, which has the potential to damage the protective ozone layer over the United States and possibly other parts of the globe. This study is one of the first to hypothesize that climate change could reduce stratospheric ozone over populated areas. If they prove correct, depletion of the ozone layer will increase if global warming leads to more such storms.
In the stratosphere when temperatures are very low, increasing water vapor releases chlorine residing in inactive forms, mimicking processes that cause the ‘ozone hole' over Antarctica. While ozone depletion from storms in midlatitude regions like the US has not been reported so far, the study concludes that if the intensity and frequency of the convective injecting storms were to increase as a result of climate change, increased risk of ozone depletion and associated increases in ultraviolet exposure could follow. To confirm and quantify the risk, more detailed modeling of storms and the response of ozone to water vapor injections in the stratosphere is needed.
"The most surprising aspect is that this potential impact of climate on stratospheric ozone was not anticipated," stated Stephen O. Andersen, Director of Research at IGSD. "This new research brings back into play the ‘precautionary principle’ of global environmental protection, which justifies action before the science is resolved if delay would make solutions too expensive or too late to protect the earth for future generations."
By: Institute for Governance and Sustainable Development
Source: http://yubanet.com/scitech/Climate-Change-Impacts-Linked-to-Ozone-Depletion-Over-US.php (Paragraph Order Changed)
The Harvard report is here: J.G. Anderson et al., UV Dosage Levels in Summer: Increased Risk of Ozone Loss from Convectively Injected Water Vapor, Science (26 July 2012).
Tautomers are organic compounds that are interconvertible by a chemical reaction called tautomerization. As most commonly encountered, this reaction results in the formal migration of a hydrogen atom or proton, accompanied by a switch of a single bond and adjacent double bond. In solutions where tautomerization is possible, a chemical equilibrium of the tautomers will be reached. The exact ratio of the tautomers depends on several factors, including temperature, solvent, and pH. The concept of tautomers that are interconvertible by tautomerizations is called tautomerism. Tautomerism is a special case of structural isomerism and can play an important role in non-canonical base pairing in DNA and especially RNA molecules.
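The temperature dependence of the equilibrium ratio mentioned above can be sketched with the standard relation K = exp(-ΔG°/RT). The ΔG° value below is an illustrative placeholder, not a number from this article:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def tautomer_ratio(delta_g, temperature):
    """Equilibrium constant K = [B]/[A] for tautomers A <-> B, given the
    standard free-energy difference delta_g = G(B) - G(A) in J/mol and
    the temperature in kelvin."""
    return math.exp(-delta_g / (R * temperature))

# Illustrative: if the enol form lies ~40 kJ/mol above the keto form,
# only a tiny trace of enol is present at room temperature.
K_room = tautomer_ratio(40e3, 298.0)
K_hot = tautomer_ratio(40e3, 400.0)   # heating shifts the ratio upward
enol_fraction = K_room / (1 + K_room)
```

Solvent and pH effects enter by changing the effective ΔG°, which is why the observed ratio varies with conditions.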
Tautomerizations are catalyzed by:
- base (1. deprotonation; 2. formation of a delocalized anion (e.g. an enolate); 3. protonation at a different position of the anion).
- acids (1. protonation; 2. formation of a delocalized cation; 3. deprotonation at a different position adjacent to the cation).
Common tautomeric pairs are:
- ketone - enol, e.g. for acetone (see: keto-enol tautomerism).
- amide - imidic acid, e.g. during nitrile hydrolysis reactions.
- lactam - lactim, an amide - imidic acid tautomerism in heterocyclic rings, e.g. in the nucleobases guanine, thymine, and cytosine.
- enamine - imine
- enamine - enamine, e.g. during pyridoxal phosphate catalyzed enzymatic reactions.
Prototropic tautomerism refers to the relocation of a proton, as in the above examples, and may be considered a subset of acid-base behavior. Prototropic tautomers are sets of isomeric protonation states with the same empirical formula and total charge.
Annular tautomerism is a type of prototropic tautomerism where a proton can occupy two or more positions of a heterocyclic system. For example: 1H- and 3H-imidazole; 1H-, 2H- and 4H-1,2,4-triazole; 1H- and 2H-isoindole.
Valence tautomerism is distinct from prototropic tautomerism, and involves processes with rapid reorganisation of bonding electrons. An example of this type of tautomerism can be found in bullvalene. Another example is open and closed forms of certain heterocycles, such as azide - tetrazole. Valence tautomerism requires a change in molecular geometry and should not be confused with canonical resonance structures or mesomers.
- Smith, M. B.; March, J. Advanced Organic Chemistry, 5th ed., Wiley Interscience, New York, 2001; pp 1218-1223. ISBN 0471585890
- Katritzky, A.R.; Elguero, J. et al. The Tautomerism of Heterocycles, Academic Press, New York, 1976, ISBN 012020651X
Structure and Bonding: Dueling Mechanisms - SN1 versus SN2
You may want to review the two activities that introduce the two different substitution reactions: Reaction Rates and Molecular Crowding (SN2) and Consecutive Reactions (SN1). Explain the product(s) of the reaction of 2-bromobutane with the nucleophile, OH-:
C4H9Br + OH- → C4H9OH + Br-
What type of alkyl halide is 2-bromobutane? methyl 1o 2o 3o
Are the two products of the SN1 pathway isomers? If so, why? For help see the last part of The Arrangement of Bonds.
If only the SN1 pathway was to occur, what would the proportions of the two products be? Why?
What happens if both SN1 and SN2 reaction mechanisms occur together in a reaction?
|Mechanism||Describe the Product(s) of the Reaction||Favored Conditions|
|SN1|| ||3o alkyl halide, water or alcohol as solvent, poor nucleophile|
|SN2|| ||methyl or 1o alkyl halide, KOH in aprotic solvent, good nucleophile|
|SN1 and SN2|| ||2o alkyl halide|
Now how are the kinetics influenced if the reaction has both mechanisms occurring at the same time? These are competing reactions for the alkyl halide; however, the product is a mixture of the two optical isomers not two different compounds. The overall rate of the reaction is the sum of the rates for each competitive reaction or mechanism.
|SN1||Rate = k1(alkyl halide)|
|SN2||Rate = k2(alkyl halide)(nucleophile)|
|overall||Rate = k1(alkyl halide) + k2(alkyl halide)(nucleophile)|
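The additive rate law above can be integrated numerically to watch the two pathways compete for the alkyl halide, much as the STELLA model does. This is a rough sketch with arbitrary rate constants and concentrations, using simple Euler steps:

```python
def integrate_competing(rx0, nu0, k1, k2, dt=1e-3, steps=20000):
    """Euler-integrate Rate = k1*(RX) + k2*(RX)*(Nu).
    Returns final (RX), final (Nu), and the fraction of the alkyl
    halide consumed through the SN2 pathway."""
    rx, nu = rx0, nu0
    via_sn2 = 0.0
    total = 0.0
    for _ in range(steps):
        r1 = k1 * rx            # SN1: depends on (RX) only
        r2 = k2 * rx * nu       # SN2: depends on both reactants
        rx -= (r1 + r2) * dt
        nu -= r2 * dt           # only SN2 consumes the nucleophile
        via_sn2 += r2 * dt
        total += (r1 + r2) * dt
    return rx, nu, via_sn2 / total

# Lowering (nucleophile) shifts the product mixture toward SN1:
_, _, sn2_frac_high_nu = integrate_competing(1.0, 1.0, k1=0.1, k2=0.5)
_, _, sn2_frac_low_nu = integrate_competing(1.0, 0.1, k1=0.1, k2=0.5)
```

Comparing the two runs shows the SN2 share of the product dropping as the nucleophile concentration falls, which is the behavior the questions below explore.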
How can we control the final product by variation in reaction condition? The nucleophile used in the reaction along with the solvent influences the rate constants.
The graph below shows the rate constant, k1, for an SN1 reaction, where the solvent mixture has an increase in the amount of water, a polar substance. How does the reaction rate change? Why?
Click here to get a STELLA model that looks at the competing SN1 and SN2 mechanisms plus allows you to vary a number of parameters.
Use the STELLA model to address the following questions:
1. Do two runs with one with just the SN1 mechanism operating k1 = 1 and k2 = 0 and then just the SN2 mechanism operating k1 = 0 and k2 = 1. How does the decrease in concentration of the alkyl halide differ for the two mechanisms? Why?
2. If you want one of the mechanisms to drive the reaction, what do you want the rate constants to be? Explain.
3. How does the concentration of the nucleophile influence the results?
4. If the (nucleophile) = 0, what is the outcome of the reaction? Why?
If k1> k2, which mechanism SN1 or SN2 has the greater activation energy, Ea? Why?
Now you know that rate constants increase when the temperature rises. Which rate constant will change the most - the one with the lower or higher Ea? You may want to click here to get an interactive Excel spreadsheet to investigate how Ea influences temperature variation of rate constants.
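The point about activation energies can be checked directly from the Arrhenius equation, k = A·exp(-Ea/RT): the larger Ea is, the more a given temperature jump multiplies k. The A and Ea values below are arbitrary illustrative choices:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius(a_factor, ea, temperature):
    """Arrhenius rate constant k = A * exp(-Ea/(R*T))."""
    return a_factor * math.exp(-ea / (R * temperature))

def fold_increase(ea, t1=298.0, t2=308.0):
    """Factor by which k grows when the temperature rises from T1 to T2.
    The pre-exponential factor cancels out of the ratio."""
    return arrhenius(1.0, ea, t2) / arrhenius(1.0, ea, t1)

low_ea = fold_increase(50e3)    # 50 kJ/mol
high_ea = fold_increase(100e3)  # 100 kJ/mol: more temperature-sensitive
```

With these numbers a 10 K rise roughly doubles the low-Ea rate constant but nearly quadruples the high-Ea one, which is why heating favors the pathway (including elimination) with the larger activation energy.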
In fact a temperature increase will add another level of competition as elimination reactions will now be enhanced. A new set of products with a double bond will form. What can you conclude about the activation energies of elimination reactions compared to substitution reactions?
Because the SN2 pathway is second order overall, with the nucleophile's concentration appearing in the rate law, lowering the concentration of the nucleophile slows this mechanism. Lowering the nucleophile concentration also helps the SN1 pathway to produce a 50-50 racemic mixture: the leaving group is given time to move away from the carbocation, allowing an equal chance of the nucleophile attacking from either side. Lowering the concentration of the nucleophile to zero stops both mechanisms. Why?
If you want to produce a particular optical isomer (not a mixture of both), which mechanism do you want to operate?
Return to Structure and Bonding Homepage.
Understanding Evolution: Theory, Prediction and Converging Lines of Evidence, Part 1
One of the challenges for discussing evolution within evangelical Christian circles is that there is widespread confusion about how evolution actually works. In this (intermittent) series, I discuss aspects of evolution that are commonly misunderstood in the Christian community. In this post, we explore how evolution is a theory in the scientific sense, how it is supported by converging lines of evidence, and how it can make accurate predictions about the natural world, using whale evolution as an example.
Evolution: just a theory
One game that my (young) children like to play is a guessing game where both players select a character from among many choices, and by process of elimination, tries to guess the character the other has selected. Questions like “does your character have red hair? glasses?” etc., are used to narrow down the possibilities. Once you have guessed correctly which character your opponent has selected, you can perfectly predict the answer to every question thereafter (and a good many parents likely prolong the questioning to keep the hopes of victory alive for their children). When considered separately, the individual features of each character—glasses, brown hair, purple hat, and so on—mean almost nothing, since they could be features shared with other characters in the game. Only the convergence of multiple features is indicative of a good guess, and the accuracy of that guess is put to the test every time a new question is asked.
A good theory is something like this: an educated guess, based on and consistent with all past work on the topic to date. It allows you to predict how future tests should pan out. In the guessing game, there are limited options to choose from (so the analogy, like all analogies, eventually breaks down). In science, we don’t really know the true way things actually work. What we have are theories—broad explanatory frameworks supported by experimentation, that make sense of our current collection of facts—that we can use to make testable predictions about the natural world. All theories in science are provisional in that they are not complete descriptions of how the world actually works and are subject to future revision; but at the same time they are robust frameworks that can be used to predict how experiments should behave with almost boring regularity. So, far from the colloquial usage of “theory” as speculation, “just a theory” is high praise in science.
The current understanding of evolutionary theory in all its scope and diversity is far more complex than Darwin himself could have ever envisaged. (As a geneticist, I’ve often wished I could have a cup of tea with him to show him how far his theory has grown, especially given his confusion about how heredity worked.) Our understanding of how evolution works has grown by leaps and bounds since the 1850s. What is remarkable is just how much Darwin got “right” given his time and place. His main hypotheses—that species descend from ancestral forms through descent with modification, and that natural selection acting on heritable variation is a significant force in that process—remain the core of modern evolutionary theory. We’ve added a lot of detail since then (population genetics, kin selection, neutral evolution/genetic drift, symbiosis, horizontal gene transfer, molecular exaptation, and so on), but Darwin’s core ideas have produced a wealth of successful predictions. They were a very good “guess” that continues to pay rich scientific dividends.
Whale evolution: an example of converging lines of evidence
One of the things I personally find quite enjoyable about evolutionary theory is the counter-intuitiveness of some of the predictions it makes. One example that is a personal favorite, and one I often use to illustrate how evolution makes sense of converging lines of evidence, is cetacean (whale) evolution. Let’s set up the “problem” that evolutionary biology forces upon us:
- Modern cetaceans are mammals – they nourish their young in utero through a placenta, give birth to live young, and feed newborns with milk – all features of standard mammalian biology.
- Mammals are tetrapods – organisms with four limbs. Mammalian life shows up in the fossil record as an innovation within tetrapods, so mammals are “nested within the set” of tetrapod forms. Not all tetrapods are mammals (amphibians, for example) but all mammals are tetrapods.
- Tetrapods are by and large terrestrial creatures. Having four limbs for locomotion is a distinctly land-based adaptation.
The “problem”, of course, is that modern whales are emphatically not terrestrial, nor do they have four limbs – they have two front flippers and a tail, with no hind limbs in sight. Yet they are mammals, which forces evolution’s hand as it were. Evolution thus is dragged, under protest, to the prediction that modern whales, as mammals, are descended, with modification, from ancestral terrestrial, tetrapod ancestors. Instantly this prediction raises a host of uncomfortable questions: where did their hind limbs go? How did they acquire a blowhole on the top of their heads when other mammals have two nostrils on the front of their faces? How did they transition to giving birth in the water? What happened to the teeth of the baleen whales? What happened to the hair characteristic of mammals? and so on. In some ways, evolutionary thinking about whales creates more difficulties than it appears to solve.
And yet, these difficulties are the stuff of science. If indeed our “educated guess” of terrestrial, tetrapod ancestry for whales is correct, the evidence will show that these transitions, challenging though they may seem, did indeed occur on the road to becoming “truly cetacean”.
Going out on a limb
Anyone who has seen a modern whale skeleton in a museum and noted it carefully may have noticed that though whales lack hind limbs, they do have a bit of bone back there where the hind limbs ought to be. While this is suggestive of a vestigial characteristic (a feature in a modern organism that has a reduced role relative to the role the structure played in an ancestral species), it’s hardly a smoking gun for evolution. Still, it’s consistent with the idea.
When we look at the cetacean fossil record, we also see forms suggestive of a progressive loss of hind limb function and structure over time, as David Kerk and Darrel Falk have elegantly explained before. Again, if one were resistant to evolutionary explanations, it would be possible (if a bit strained) to interpret these creatures as having been created directly as we find them in the fossil record. The facts that we do not see these forms in the present day, and that they seem to blur the distinctions between terrestrial tetrapods and whales might make one a bit uncomfortable, however.
Recent work on cetacean embryogenesis (how whales and their relatives develop from fertilized eggs into fully-formed baby whales) has shed even more light on the issue for modern species, however. Dolphin embryos actually have four limbs early in their development, as well as a few facial hairs, just as any good mammal should have. The hind limbs and hairs are lost later in development, and work on the molecular signaling events that halt hind limb growth and cause the limb bud to regress into the body wall have now been worked out in some detail. Moreover, early in dolphin development the nostrils are distinct and on the front of the face, and only fuse into a blowhole and migrate to the top of the head later in development. Early dolphin embryogenesis is distinctly mammalian and uncannily tetrapod-like.
… and passing the test
Taken in isolation, these facts about whales are interesting trivia. Taken together, however, they begin to form a picture entirely consistent with the prediction that modern whales are derived from terrestrial ancestors. The true strength of evolution as a scientific theory for the origin of whales is this: not that we can prove it, (for no theory is ever proven in science due to its permanently provisional nature), nor that we have full access to every bit of data we would like (consider how fragmentary the fossil record is, for example), but rather that we haven’t been able to disprove it yet, despite our best efforts. Descent with modification remains a productive educated guess that grows stronger with each investigation.
In the next post in this series, we’ll explore some additional lines of evidence for cetacean evolution that further illustrate the predictive power of evolutionary theory.
For further reading
Evidences for Evolution, Part 2b: The Whale's Tale
J. G. M. Thewissen, M. J. Cohn, L. S. Stevens, S. Bajpai, J. Heyning, and W. E. Horton, Jr. (2006). Developmental basis for hind-limb loss in dolphins and origin of the cetacean bodyplan. Proceedings of the National Academy of Sciences 103 (22), 8414–8418. available freely online.
Dennis Venema is Fellow of Biology for The BioLogos Foundation and associate professor of biology at Trinity Western University in Langley, British Columbia. His research is focused on the genetics of pattern formation and signalling.
There have been whole books written about the history and development of the Periodic Table of the Elements, and how that shaped what we know about chemistry now, how great it is, blah, blah, blah. And the Periodic Table is a great and useful tool, but today I’m sticking to some basic fun stuff, like this crocheted periodic table that Erin introduced me to, or the periodic table TABLE.
All you really need to know (for now) is that the chemical elements are pure substances usually considered the building blocks of all the matter in the universe. Hydrogen, oxygen, carbon, lithium, potassium, bismuth, neon, helium, tungsten, copper, and gold are all elements, along with about a hundred or so others. Tom Lehrer wrote a fantastic and ridiculous song, The Elements, which names them all (or at least all the ones known when he wrote the song).
Most of the time we encounter the elements as compounds (like a water molecule is two hydrogen atoms attached to one oxygen atom) or mixtures (like air) or alloys (a special kind of mixture) like steel (iron and friends) or brass (copper and zinc).
If you want to get picky, then yes, an atom of an element can be broken down into even smaller and simpler building blocks, but I’m not worrying about that now.
So, returning to just the elements, each element has unique characteristics, just like peanut butter is different from jelly. Each element has a certain number of protons (the atomic number), and an atomic mass (essentially how much stuff there is in an average atom of that element), and a number of valence electrons (which determines a lot about how the element reacts).
Don’t get bogged down in what each of those things means, just remember that different elements have different characteristics, just like some shapes have curves and others have angles, or like some animals have two legs, some four, some more, and some none.
Anyway, the point is that if you have different characteristics, you can do some sorting, and sorted options are easier to deal with than a pile of randomness (I bet you select a sorted pair of matching socks most of the time, and prefer that your oatmeal isn’t stored in the same container with chocolate chips, soy sauce, and croutons in your pantry).
Science LOVES classification and sorting, and there are all sorts of systems and methods to do it. One word for these systems is taxonomy (to be distinguished from taxidermy, though you may certainly classify and sort your stuffed road-kill should you so choose.) Taxonomy often refers to sorting and classifying living or once living things, but it can also mean classifying rocks, stars, etc., based on their characteristics.
Things to Try:
Periodic Table of Office Supplies?
Instead of sorting elements or classifying animals into classes or species, I sorted some office supplies of the order Paperclippius, more commonly known as paper clips. Go ahead and try this with your own office supplies, pocket change, or whatever small items are handy.
Here is an assortment of specialty paperclips — how could you sort these?
You might sort by color:
Or by shape:
And both ways have value. If we sort by color and shape at the same time, a ‘table’ starts to form:
And the next step……
And the next step……
And then we might complete the table this way:
But we could have set up our sorting table like this:
Or like this:
And those ways are just as good — they all sort by both attributes (shape and color), and we know where each type of clip ‘belongs’ in the table. If we had everything laid out except the blue triangle clip, you would notice the ‘hole’ and know approximately what that clip should look like.
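The same two-attribute sort is easy to express in code. The clip inventory below is invented for illustration; the point is that a missing (color, shape) cell predicts a clip you have not seen yet:

```python
from collections import defaultdict

# Hypothetical inventory standing in for the photographed clips.
clips = [
    ("red", "round"), ("blue", "round"), ("green", "square"),
    ("red", "triangle"), ("green", "round"), ("blue", "square"),
]

# Two-way table: rows are colors, columns are shapes.
table = defaultdict(dict)
for color, shape in clips:
    table[color][shape] = table[color].get(shape, 0) + 1

colors = sorted(table)
shapes = sorted({shape for _, shape in clips})

# 'Holes' in the table, like the gaps that predicted new elements.
holes = [(c, s) for c in colors for s in shapes if s not in table[c]]
```

Each entry in `holes` names a color/shape combination the inventory lacks, just as blank cells in Mendeleev's table named undiscovered elements.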
The existence of many elements in the periodic table was correctly predicted before those elements were discovered, because of the ‘holes' left in the early versions of the table. Pretty cool, right?
Panels are windows with the added feature of depth, so they can be stacked on top of each other, and only the visible portions of each window will be displayed. Panels can be added, moved up or down in the stack, and removed.
The module curses.panel defines the following functions:
Returns the bottom panel in the panel stack.
Returns a panel object, associating it with the given window win. Be aware that you need to keep the returned panel object referenced explicitly. If you don’t, the panel object is garbage collected and removed from the panel stack.
Returns the top panel in the panel stack.
Panel objects, as returned by new_panel() above, are windows with a stacking order. There’s always a window associated with a panel which determines the content, while the panel methods are responsible for the window’s depth in the panel stack.
Panel objects have the following methods:
Returns the panel above the current panel.
Returns the panel below the current panel.
Push the panel to the bottom of the stack.
Returns true if the panel is hidden (not visible), false otherwise.
Hide the panel. This does not delete the object, it just makes the window on screen invisible.
Move the panel to the screen coordinates (y, x).
Change the window associated with the panel to the window win.
Set the panel’s user pointer to obj. This is used to associate an arbitrary piece of data with the panel, and can be any Python object.
Display the panel (which might have been hidden).
Push panel to the top of the stack.
Returns the user pointer for the panel. This might be any Python object.
Returns the window object associated with the panel.
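A minimal usage sketch tying the pieces together (window sizes and text are arbitrary; run it from a real terminal with curses.wrapper, since curses needs a TTY):

```python
import curses
import curses.panel

def demo(stdscr):
    """Stack two overlapping windows as panels, then raise the first."""
    win1 = curses.newwin(10, 30, 2, 2)
    win2 = curses.newwin(10, 30, 5, 10)   # partially covers win1
    win1.box()
    win2.box()

    p1 = curses.panel.new_panel(win1)  # keep these references alive,
    p2 = curses.panel.new_panel(win2)  # or the panels are GC'd (see above)
    p1.set_userptr("background layer") # arbitrary data rides along

    p1.top()                           # p1 now obscures p2 where they overlap
    curses.panel.update_panels()       # recompute what is visible
    curses.doupdate()                  # refresh the physical screen
    stdscr.getch()

# From a terminal: curses.wrapper(demo)
```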
NOAA-10 (ATN series) was launched on September 17, 1986 and is a
third-generation operational meteorological satellite. The satellite
design provides an economical and stable sun-synchronous ... platform. This
platform enables the satellite to carry advanced operational instruments
that measure the earth's atmosphere, its surface and cloud cover, and
the near-space environment. The satellite is based upon the Block 5D
spacecraft bus developed for the U.S. Air Force, and is capable of
maintaining an earth-pointing accuracy of better than plus or minus
0.1 degree with a motion rate of less than 0.035 degree/second.
Primary sensors include (1) an Advanced Very High Resolution
Radiometer (AVHRR), (2) TIROS Operational Vertical Sounder (TOVS), (3)
Earth Radiation Budget Experiment (ERBE), and (4) a Solar Backscatter
Ultraviolet Spectrometer (SBUV/2). Secondary experiments consist of a
Space Environment Monitor (SEM), and a Data Collection System (DCS). A
Search and Rescue (SAR) system is also carried on NOAA-10.
Orbital Period: 101.50 m
Inclination: 98.59 degrees Eccentricity: 0.00256
Periapsis: 833.00 km Apoapsis: 870.00 km
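The listed elements are mutually consistent, which a quick two-body check with Kepler's third law confirms. The Earth radius and GM below are standard reference values; the small difference from the listed period comes from using a mean radius and ignoring perturbations:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter GM, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m

periapsis_alt = 833.0e3    # altitudes from the listing, m
apoapsis_alt = 870.0e3

# Semi-major axis and apsis radii measured from Earth's center.
a = R_EARTH + (periapsis_alt + apoapsis_alt) / 2.0
r_a = R_EARTH + apoapsis_alt
r_p = R_EARTH + periapsis_alt

# Kepler's third law: T = 2*pi*sqrt(a^3/mu); close to the listed 101.50 m.
period_min = 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60.0

# Eccentricity from the apsis radii; matches the listed 0.00256.
eccentricity = (r_a - r_p) / (r_a + r_p)
```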
For more information about the NOAA POES satellite series link to the
To view a 3D orbit, observe the J track satellite tracking web page:
Taken from the NSSDC System for Information Retrieval and Storage (SIRS). For
more information contact the NSSDC Coordinated Request and User Support Office,
301-286-6695 (NASA Goddard Space Flight Center, Code 933.4, Greenbelt, Maryland
20771, USA, http://nssdc.gsfc.nasa.gov/).
Remember how fun it was studying chemistry and physics in high school? Well we guess your recollection depends on the person who taught the class. Why not have another go at it by learning the A-to-Z of electronics from one of our favorite teachers, [Jeri Ellsworth].
You know, the person who whips up chemistry experiments and makes her own semiconductors? The first link in this post will send you to her video playlist. So far she’s posted A is for Ampere and B is for Battery, both of which you’ll find embedded after the break. Her combination of no-nonsense technical explanation and all-nonsense paper-doll history reenactment makes for fun viewing whether you retain any of the information or not.
Lunar Prospector Mission
The Lunar Prospector mission was a tremendous success. Following a nearly flawless launch, a four-day journey to the Moon, and entry into lunar orbit, the tiny spin-stabilized spacecraft successfully sent valuable data back to Earth. Carrying five scientific instruments, Lunar Prospector was designed to map the Moon’s surface composition and possible deposits of polar ice, measure magnetic and gravity fields, and study lunar outgassing events.
On March 5th, 1998, Lunar Prospector captured the public’s imagination by announcing the discovery of a signal for water ice at both of the lunar poles. The neutron spectrometer instrument onboard Lunar Prospector detected hydrogen, which is assumed to be in the form of water. The data indicated that a large quantity of water ice, possibly as much as 300 million metric tons, was mixed into the regolith at each pole. This was the first direct evidence of the presence of water ice at the Moon's frigid poles.
The mission ended on July 31, 1999 when Lunar Prospector was deliberately targeted to impact a permanently shadowed area of a crater near the lunar South Pole. It was hoped that the impact would release water vapor from the suspected ice deposits and that the plume would be detectable from Earth, but no plume was observed. However, the LCROSS mission will pick up where Lunar Prospector left off…
Measuring Angles - Part 4
The size of an angle depends on the opening between the two sides of the angle (Pac-Man's mouth) and is measured in units called degrees, indicated by the ° symbol. To help you estimate the sizes of angles, remember that a circle, once around, measures 360°. It will also be helpful to remember the following:
Think of a whole pie as 360°; if you eat a quarter (1/4) of it, the measure would be 90°. What if you ate 1/2 of the pie? As stated above, half is 180°, or you can add 90° and 90° - the two quarter pieces you ate!
If you cut the whole pie into 8 equal pieces. What angle would one piece of the pie make? To answer this question, you can divide 360° by 8 (the total by the number of pieces). This will tell you that each piece of pie has a measure of 45°.
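The pie arithmetic above generalizes to any number of equal pieces or any fraction eaten; a tiny sketch:

```python
def slice_angle(pieces):
    """Central angle of one piece when the 360-degree pie is cut
    into the given number of equal pieces."""
    return 360.0 / pieces

def eaten_angle(fraction):
    """Angle swept out by the fraction of the pie that was eaten."""
    return 360.0 * fraction

assert slice_angle(4) == 90.0       # a quarter of the pie
assert slice_angle(8) == 45.0       # eight equal pieces
assert eaten_angle(1 / 2) == 180.0  # half the pie
```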
Usually, when measuring an angle, you will use a protractor, each unit of measure on a protractor is a degree °.
Try a few best guesses: the angles below are approximately 10°, 50°, and 150°.
1. = approximately 150°
2. = approximately 50°
3. = approximately 10°
Part 5: Bisectors, Congruencies and Theorems
Prove analytically the the diagonals of a rectangle are equal.
I really have no idea. Please help. Thanks!
there are so many ways to prove this
Here's the easiest one
Let's say you have a rectangle ABCD. AB is parallel to CD and BC is parallel to AD, and because a rectangle is a parallelogram, opposite sides are congruent: AB ≅ CD. The rectangle's right angles make triangles ABC and BCD right triangles.
Therefore we can use the Pythagorean Theorem:
AB² + BC² = AC² and CD² + BC² = BD²
and since AB = CD -> AB² = CD² so
AB² + BC² = CD² + BC²
which gives
AC² = BD²
and taking square roots, AC = BD.
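Since the problem asks for an analytic proof, the cleanest route is coordinates: put the rectangle's corners at (0,0), (a,0), (a,b), (0,b); both diagonals then have length sqrt(a² + b²). A quick numeric check of that claim:

```python
import math

def diagonal_lengths(a, b):
    """Diagonals of the rectangle with corners
    A(0,0), B(a,0), C(a,b), D(0,b)."""
    ac = math.dist((0, 0), (a, b))   # diagonal from A to C
    bd = math.dist((a, 0), (0, b))   # diagonal from B to D
    return ac, bd

ac, bd = diagonal_lengths(3, 4)      # both diagonals of a 3-by-4 rectangle
```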
Path Shape of Venus Over Time
Name: Peter C.
I was recently reading a popular fictional novel and it
said that if one were to track Venus' transit through the night sky, it
would seem as though it would form the shape of a 5 pointed star through
the course of eight years. I would like to know if this is true.
Have you seen a Spirograph drawing toy (or the applet)?
You could use this to explain how it is possible for any orbit to define a
track such as a five-pointed star by showing the ellipsoidal path. I do not
know if Venus actually tracks that way, but I do not think it should be
Greg (Roberto Gregorius)
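The Spirograph idea above can be tested with a toy model that assumes circular, coplanar orbits (1 year for Earth, about 0.6152 years for Venus, radii 1 and 0.723 AU). Because 8 Earth years is nearly 13 Venus years, the geocentric track almost closes after 8 years, and the 13 - 8 = 5 inferior conjunctions in each cycle space its lobes like a five-pointed figure:

```python
import math

def venus_geocentric(t_years):
    """Venus's position relative to Earth (AU), assuming circular,
    coplanar orbits: a crude model, not an ephemeris."""
    w_earth = 2 * math.pi / 1.0       # Earth's angular rate, rad/yr
    w_venus = 2 * math.pi / 0.6152    # Venus's sidereal period, years
    ex, ey = math.cos(w_earth * t_years), math.sin(w_earth * t_years)
    vx = 0.723 * math.cos(w_venus * t_years)
    vy = 0.723 * math.sin(w_venus * t_years)
    return vx - ex, vy - ey

x0, y0 = venus_geocentric(0.0)
x8, y8 = venus_geocentric(8.0)
# Small gap: the curve nearly repeats after 8 years (the 8:13 near-resonance).
closure_gap = math.hypot(x8 - x0, y8 - y0)
```

Sampling the curve densely between t = 0 and t = 8 and plotting it traces the five-lobed "rose" the novel alludes to.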
Update: June 2012
Determining Stellar Ages
Name: Beth S.
How can we tell a star's age?
Estimating the age of a star depends on our understanding of the evolution
and processes within a star. All stars are expected to begin their lives as a
collapsing mass of mostly hydrogen and helium with a few trace elements. When
the nuclear process begins, the hydrogen is steadily converted to helium.
Main sequence stars (those that are not very massive and much like our Sun)
spend most of their life converting hydrogen to helium. As the hydrogen is
exhausted, the star expands (becoming a red giant). Helium fusion then begins
and helium becomes the main fusion reactant. The star will then have a much
higher luminosity but a cooler surface.
Main sequence stars eventually run out of fusion fuel and shed most of their
outer layers in a stellar wind, creating a nebula of gases that can
eventually form a new solar system. The compressed mass at the center of the
star is not massive enough to continue fusing elements; it is now known
as a white dwarf, and it eventually fades out, its nuclear fires extinguished.
More massive stars can continue fusing elements heavier than helium. At this
stage the fusion reactions are fueled by heavier elements such as carbon and oxygen. This will
progress until the iron core (iron yields no energy by fusion, since it sits at the bottom
of the nuclear binding-energy curve) becomes so massive that the star collapses on itself and explodes
in a supernova. Most of the star matter is blown away and what remains is a
neutron star (or in very massive stars - a black hole). The blown out material
can be recycled to form new planetary systems.
Thus, since we have an expectation of how stars mature, we simply need to look
at the spectrum of light emitted from a star to know what atoms exist in it.
If it is mostly hydrogen, then it must be a young star. If it shows quite
a bit of helium, then it is in its second stage of development. If it shows even
heavier elements, then it must be a massive star at its later stages. The ratio
of elements can tell us at what stage the star is.
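A rough, back-of-the-envelope way to put numbers on star lifetimes: a star's main-sequence lifetime scales roughly as its fuel supply over its burn rate, t ∝ M/L, and with the approximate mass-luminosity relation L ∝ M^3.5 this gives t ≈ 10 Gyr × M^(-2.5) in solar units. This is a crude rule of thumb stated here as an assumption, not part of the answers above:

```python
def main_sequence_lifetime_gyr(mass_solar):
    """Order-of-magnitude main-sequence lifetime: t ~ fuel/burn-rate ~ M/L,
    and with L ~ M**3.5 this gives t ~ 10 Gyr * M**-2.5 (solar units)."""
    T_SUN_GYR = 10.0  # assumed ballpark main-sequence lifetime of the Sun
    return T_SUN_GYR * mass_solar ** -2.5

print(main_sequence_lifetime_gyr(1.0))  # 10.0
print(main_sequence_lifetime_gyr(2.0))  # ~1.8: massive stars exhaust fuel sooner
```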
Greg (Roberto Gregorius)
A very simple way is by seeing the star's color. A red star is cooler than
a blue or yellow one, and tends to be an older star as well! Blue stars tend
to be the youngest.
David H., Levy
Update: June 2012 | <urn:uuid:7c10a5ac-6420-4a36-a19b-d0100bb94fa2> | 3.65625 | 536 | Knowledge Article | Science & Tech. | 57.354812 |
Country: United States
Date: May 2007
I understand that when a DNA polymerase is making a copy of a DNA strand,
it attaches nucleosides that are present in its surroundings, but how
does the enzyme "pick" or "catch" the nucleosides in order to
attach them to the template strand if they are
somehow "floating around" randomly?
First, the nucleotides are nearby. They are floating around, and thanks to
the natural vibrations of molecules (Brownian motion), they move back and
forth and bump into the polymerase continuously. The way nucleotides attach
has to do with the shape of the DNA polymerase and the shape of the
nucleotide (here, they're nucleoTides, not nucleoSides).
As you may know, enzymes (DNA polymerase being one kind of enzyme) have very
specific, special shapes to allow their target molecules to attach, or
'bind', to them. Not only the shape, but also the chemistry of the enzyme --
sometimes people use a lock-and-key analogy, but that's not the whole story;
it's more like a lock that changes shape to fit the key, and rejects keys
that aren't made of the right metal. DNA polymerase is shaped to attach to a
strand of DNA and to have nucleotides attach as well. The exact mechanism of
what-binds-to-what-and-when is a subject of current research, but it
involves magnesium ions and several intermediate steps.
Generally speaking, though, the reason things bind is because they have
lower energy. Things are 'happier' the lower energy they have, so they're
always looking to reduce their energy. A free nucleotide floating in a cell
has more energy than one bound in DNA polymerase (because the DNA polymerase
is shaped just right).
When the DNA polymerase attempts to attach the next nucleotide to the DNA
strand, if it is not the right nucleotide, it will have very high energy,
and will resist binding (it will be 'unhappy'). The right nucleotide will
fit just right (low energy = happy), and will bind more easily. Mistakes can
occur occasionally. Some DNA polymerases can even go back and fix mistakes.
There's a lot more to this story than I shared (and I may be guilty of
oversimplifying) -- you didn't list your grade level so I tried to give a
medium amount of detail. If you still have a specific question, please ask.
Hope this helps,
Nucleotides are colliding with other molecules on the order of 1 million
collisions per second. When they collide with a complementary
nucleotide in the template strand, they are held there long enough for
the DNA polymerase to bond it to the previous nucleotide in the growing
DNA strand. The nucleotide is held in position before incorporation by
the specific Hydrogen bonding between the complementary base pairs
(2 Hydrogen bonds between an AT base pair and 3 between a GC base pair).
Fewer than 2 hydrogen bonds are formed in either a GT base pair or an
AC base pair.
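The pairing rules and hydrogen-bond counts described in this answer can be written out directly; the sketch below simply restates those facts, and is not a model of the polymerase itself:

```python
# Watson-Crick complements and the hydrogen-bond counts cited in the answer.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}
H_BONDS = {frozenset("AT"): 2, frozenset("GC"): 3}

def is_correct_pair(template_base, incoming_base):
    """True only for a complementary pair; its 2 or 3 hydrogen bonds are
    what hold the incoming nucleotide in place long enough to be bonded."""
    return COMPLEMENT[template_base] == incoming_base

print(is_correct_pair("A", "T"), H_BONDS[frozenset("AT")])  # True 2
print(is_correct_pair("G", "T"))  # False: a GT mismatch forms fewer than 2 H-bonds
```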
Ron Baker, Ph.D.
Update: June 2012 | <urn:uuid:11ef9d26-2fc6-4041-8caf-2ffd400e5a53> | 3.234375 | 711 | Q&A Forum | Science & Tech. | 43.983943 |
How do you tell the difference between a gamma-ray burst and a star just from a picture of a nebula, in which it cannot flash on and off here and there?
GRBs have spectra far more red-shifted than those of stars. In fact, a GRB has such a unique spectrum that nothing else matches it. Its profile has one or two high peaks of about half a second, during which most of the energy is emitted. An ordinary star is bright at optical wavelengths, but it doesn't have any such crazy peaks, and it won't be extremely red-shifted (otherwise we wouldn't be able to see it). A GRB's redshift can be measured only from the afterglow, once the initial gamma-ray flare is over. Also, the Crab Nebula doesn't host a GRB phenomenon; it only flares up because of some abnormal neutron-star spin-down or magnetic disturbances. We are able to catch ultra-high-energy gamma-ray photons from the Crab only because it is so much nearer to us than any GRB you can name. GRBs are always highly red-shifted extragalactic explosions.
I've got a rather humiliating question concerning Newton's third law:
"If an object A exerts a force on object B, then object B exerts an equal but opposite force on object A" -> $F_1=-F_2$
Considering that, why is there motion at all? Should not all forces even themselves out, so nothing moves at all?
When I push a table with my finger, the table applies the same force to my finger as my finger does to the table, just in the opposing direction; nothing happens except that I feel the opposing force.
But why can I push a box on a table by applying force ($F=ma$) on one side, obviously outbalancing the force the box has on my finger and at the same time outbalancing the friction the box has on the table?
I obviously have greater mass and acceleration than, say, the matchbox on the table, and thus I can move it; but shouldn't the third law prevent that from even happening? Shouldn't the matchbox just accommodate said force, applying the same force to me in the opposing direction?
I've found a lot of answers to that question, but none was satisfying to the extent that I had an epiphany resolving my fundamental problem in understanding it.
We recover the energy in a maglev flywheel in the same way we almost always convert mechanical energy to electrical energy: with a 3 phase electric power generator/motor, also called an alternator, with the rotor on the same shaft or otherwise integrated with the flywheel.
In cars with a combined starter/generator, pumped-storage hydroelectric dams, maglev flywheels, etc.,
3-phase electric power goes into the coils of the stator, and gets converted to mechanical torque to spin the rotor.
Later, the mechanical energy of the same spinning rotor pushes a magnetic field around that pushes electrons through those same coils of the stator, converting mechanical energy to electrical energy that exits out the wires of the generator.
Such 3-phase motor/generator electric machines don't need anything to touch the rotor.
(Unlike some DC motors/generators that require a "brush" to touch the rotor).
In practice, most of them do have something that touches the rotor --
a long input shaft to carry mechanical energy into the generator, a long output shaft to carry mechanical energy out of the generator, various bearings to keep the rotor more or less centered end-to-end and radially, or some combination.
What makes magnetic levitated flywheel energy storage a little special is that nothing actually does touch the rotor.
Some of the coils surrounding the rotor act like the coils of a 3-phase electric machine.
Those coils convert electric energy to mechanical energy to spin up the rotor in motor mode.
The same coils later convert the mechanical energy of the rotor back to electrical energy, slowing down the rotor in generator mode.
Other coils surrounding the rotor are used as a magnetic bearing to levitate the flywheel.
A typical regenerative variable-frequency drive typically uses 6 IGBTs to convert DC electric power to mechanical energy and back; the IGBT arrangement used in maglev flywheels is no different. | <urn:uuid:b510df96-bc32-4a7d-a376-79c4a82db681> | 3.296875 | 399 | Q&A Forum | Science & Tech. | 26.640501 |
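The mechanical energy that generator mode recovers is the rotor's rotational kinetic energy, E = ½Iω². A rough sizing sketch, assuming an idealized solid-disk rotor (the mass, radius, and speed below are made-up illustrative values, not specs of any real flywheel):

```python
import math

def flywheel_energy_kwh(mass_kg, radius_m, rpm):
    """E = 0.5 * I * w**2 for an idealized solid-disk rotor (I = 0.5*m*r**2).
    Sizing illustration only; real maglev rotors are not simple uniform disks."""
    I = 0.5 * mass_kg * radius_m ** 2       # moment of inertia, kg*m^2
    w = rpm * 2 * math.pi / 60.0            # angular speed, rad/s
    return 0.5 * I * w ** 2 / 3.6e6         # joules -> kilowatt-hours

print(flywheel_energy_kwh(100, 0.5, 20000))  # roughly 7.6 kWh for this made-up rotor
```

Note that energy grows with the square of the speed, which is why flywheel designs chase high rpm rather than high mass.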
September 8, 2009
Stemming, in the parlance of searching and information retrieval, is the operation of stripping the suffixes from a word, leaving its stem. Google, for instance, uses stemming to search for web pages containing the words connected, connecting, connection and connections when you ask for a web page that contains the word connect.
There are basically two ways to implement stemming. The first approach is to create a big dictionary that maps words to their stems. The advantage of this approach is that it works perfectly (insofar as the stem of a word can be defined perfectly); the disadvantages are the space required by the dictionary and the investment required to maintain the dictionary as new words appear. The second approach is to use a set of rules that extract stems from words. The advantages of this approach are that the code is typically small, and it can gracefully handle new words; the disadvantage is that it occasionally makes mistakes. But, since stemming is imperfectly defined, anyway, occasional mistakes are tolerable, and the rule-based approach is the one that is generally chosen.
In 1979, Martin Porter developed a stemming algorithm that, with minor modifications, is still in use today; it uses a set of rules to extract stems from words, and though it makes some mistakes, most common words seem to work out right. Porter describes his algorithm and provides a reference implementation in C at http://tartarus.org/~martin/PorterStemmer/index.html; the description of the algorithm is repeated on the next page.
Your task is to write a function that stems words according to Porter’s algorithm; you should be aware that this exercise requires rather more code than we usually write, though it’s no harder than usual. When you are finished, you are welcome to read or run a suggested solution, or to post your solution or discuss the exercise in the comments below. | <urn:uuid:2255fc14-94d1-4e69-be4f-b82066e11c9a> | 3.890625 | 391 | Tutorial | Software Dev. | 40.981192 |
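Porter's full algorithm applies five sequential groups of rules gated by a "measure" condition on the stem. As a small taste of its flavor, here is just step 1a (the plural-stripping rules from the published description, longest matching suffix first); this fragment is not the complete stemmer the exercise asks for:

```python
def porter_step_1a(word):
    """Step 1a of Porter's stemmer (plural suffixes; longest match wins):
       SSES -> SS, IES -> I, SS -> SS, S -> (removed)"""
    if word.endswith("sses"):
        return word[:-2]      # caresses -> caress
    if word.endswith("ies"):
        return word[:-2]      # ponies -> poni
    if word.endswith("ss"):
        return word           # caress -> caress (unchanged)
    if word.endswith("s"):
        return word[:-1]      # cats -> cat
    return word

for w in ["caresses", "ponies", "caress", "cats"]:
    print(w, "->", porter_step_1a(w))
```

The remaining steps handle suffixes like -ed, -ing, -ation and -ness, and are where the measure condition does its work.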
Spiny back orb-weaver spider. This spider (Gasteracantha arcuata) grows a pair of extremely long, curving spines from its abdomen. It is thought the long spines evolved as a deterrent against predators or to mimic spines and thorns.
Picture: Nicky Bay / Science Photo Library / Rex Features
“It is the first-known predominantly vegetarian spider; all of the other known 40,000 spider species are thought to be mainly carnivorous. Bagheera kiplingi, which is found in Central America and Mexico, bucks the meat-eating trend by feasting on acacia plants. ” -BBC | <urn:uuid:649691eb-098c-4c09-a08e-d64df4eb9520> | 3.125 | 135 | Personal Blog | Science & Tech. | 44.745 |
law of corresponding states
...vary from substance to substance, the nature of the behaviour in the vicinity of the critical point is similar for all compounds. This fact has led to a method that is commonly referred to as the law of corresponding states. Roughly speaking, this approach presumes that, if the phase diagram is plotted using reduced variables, the behaviour of all substances will be more or less the same....
...accuracy (which is not very good). It furnished the impetus for the development of theories of liquids and of solutions. The equation is compatible with a unifying idea called the principle of corresponding states. This principle states that, if the pressure ( p), volume ( V), and temperature ( T) of a gas are replaced, respectively, with the corresponding reduced...
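The principle can be made concrete with the van der Waals equation written in reduced variables, where all substance-specific constants drop out and every vdW gas obeys the same curve (a standard textbook result, stated here as an illustration):

```python
def vdw_reduced_pressure(Tr, Vr):
    """Van der Waals equation in reduced variables,
       (P_r + 3/V_r**2) * (3*V_r - 1) = 8*T_r,
    solved for P_r; the same curve holds for every vdW gas."""
    return 8 * Tr / (3 * Vr - 1) - 3 / Vr ** 2

print(vdw_reduced_pressure(1.0, 1.0))  # 1.0: reduced pressure is 1 at the critical point
```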
Filaments of the fungi called endomycorrhizae live within the cells of the roots of certain gymnosperms, especially conifers. Endomycorrhizal fungi are apparently parasitic, but not destructively so. In cycads, blue-green algae grow in nodules in the roots. These roots may grow opposite to the force of gravity and may form corallike masses on the ground surface, hence the term “coralloid...
...(hyphae) intermingle with them to form mycorrhizae. There are two distinct types of mycorrhizal associations among the conifers. The majority of species have vesicular-arbuscular mycorrhizae, called endomycorrhizae because the fungal hyphae actually penetrate the cells of the roots. All of the Pinaceae, and only the Pinaceae, have the other kind of root symbiosis, called ectomycorrhizal because...
...of certain plants ( e.g., citrus, orchids, pines) is dependent on mycorrhiza; other plants survive but do not flourish without their fungal symbionts. The two main types of mycorrhiza are endotrophic, in which the fungus invades the hosts’ roots ( e.g., orchids), and ectotrophic, in which the fungus forms a mantle around the smaller roots ( e.g., pines). Exploitation of...
There are two main types of mycorrhiza: ectomycorrhizae and endomycorrhizae. Ectomycorrhizae are fungi that are only externally associated with the plant root, whereas endomycorrhizae form their associations within the cells of the host.
The answer, as everyone knows, is a Nobel Prize. Exactly fifty years ago this month, on April 25, 1953, the molecular biologists James D. Watson and Francis H. C. Crick published their pivotal paper in Nature in which they described the geometric shape of DNA, the molecule of life. The molecule was, they said, in the form of a double helix - two helices that spiral around each other, connected by molecular bonds, to resemble nothing more than a rope ladder that has been repeatedly twisted along its length. Their Nobel Prizewinning discovery opened the door to a new understanding of life in general and genetics in particular, setting humanity on a path that in many quite literal ways would change life forever.
"This structure has novel features which are of considerable biological interest," they wrote. Well, duh. You're telling me it does. But does the structure have any mathematical interest? More generally, never mind the double helix, does the single helix offer the mathematician much of interest?
Given the neat way the two intertwined helices in DNA function in terms of genetic reproduction, you might think that the helix had important mathematical properties. But as far as I am aware, there's relatively little to catch the mathematician's attention.
The equation of the helix is quite unremarkable. In terms of a single parameter t, the equation is
x = a cos t, y = a sin t, z = b t
This is simply a circular locus in the xy-plane subjected to constant growth in the z-direction.
A deeper characterization of a helix is that it is the unique curve in 3-space for which the ratio of curvature to torsion is a constant, a result known as Lancret's Theorem.
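Lancret's characterization is easy to check against the closed-form curvature and torsion of the helix above, which are κ = a/(a² + b²) and τ = b/(a² + b²); both are constant in t, so their ratio a/b is constant too:

```python
def helix_curvature_torsion(a, b):
    """Closed-form curvature and torsion of the helix (a cos t, a sin t, b t);
    both are constant along the curve, so their ratio is constant too."""
    denom = a ** 2 + b ** 2
    return a / denom, b / denom

kappa, tau = helix_curvature_torsion(2.0, 1.0)
print(kappa, tau, kappa / tau)  # the ratio is a/b = 2, independent of t
```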
Helices are common in the world around us. Various sea creatures have helical shells, like the ones shown here
and climbing vines wind around supports to trace out a helix.
In the technological world of our own making, spiral staircases, corkscrews, drills, bedsprings, and telephone handset cords are helix-shaped.
And what kind of a world would it be without the binding capacity the helix provides in the form of various kinds of screws and bolts?
One of the most ingenious uses of a helix was due to the ancient Greek mathematician Archimedes, who was born in Syracuse around 287 BC. Among his many inventions was an elegant device for pumping water uphill for irrigation purposes. Known nowadays as the Archimedes screw, it comprised a long, helix-shaped wooden screw encased in a wooden cylinder, like this:
By turning the screw, the water is forced up the tube. The same device was also used to pump water out of the bilges of ships.
But when you look at each of these useful applications, you see that there is no deep mathematics involved. The reason the helix is so useful is that it is the shape you get when you trace out a circle at the same time as you move at a constant rate in the direction perpendicular to the plane of the circle. In other words, the usefulness of the helix comes down to that of the circle.
So where does that leave mathematicians as biologists celebrate the fiftieth anniversary of the discovery that the helix was fundamental to life? Well, if what you are looking for is a mathematical explanation of why nature chose a double helix for DNA, the answer is: on the sidelines. On this occasion, the mathematics of the structure simply does not appear to be significant.
On the other hand, that does not mean that Crick and Watson did not need mathematics to make their discovery. Quite the contrary. Crick's own work on the x-ray diffraction pattern of a helix was a significant step in solving the structure of DNA, which involved significant applications of mathematics (Fourier transforms, Bessel functions, etc.). Based on these theoretical calculations, Watson quickly recognized the helical nature of DNA when he saw one of Rosalind Franklin's x-ray diffraction patterns. In particular, Watson and Crick looked for parameters that came from the discrete nature of the DNA helices.
Now, in the scientific advances that followed Crick and Watson's breakthrough, in particular the cracking of the DNA code, mathematics was much more to the fore. But that is another story. In the meantime, I hope I speak for all mathematicians when I wish the double-helix a very happy fiftieth birthday.
Jeff Denny of the Department of Mathematics at Mercer University contributed to this month's column. | <urn:uuid:e903c82f-189d-4f6c-8910-0f47feaebfa4> | 3.875 | 948 | Personal Blog | Science & Tech. | 46.193688 |
Why do my fingernails grow at least three times as fast as my toenails?
Toenails are subject to less wear and tear and so do not need to grow as quickly. Hands can be used as spades, for example, and fingernails can be used to prise things open.
According to Linden Edwards and Ralph Schott, in a paper published in 1937 in the Ohio Journal of Science (vol 37, p 91), toenails grow at half the rate of fingernails. On average, fingernails grow a little less than 4 centimetres a year. There is quite a big variation between individuals, depending on heredity, gender, age and how much they exercise. Nails grow faster in the summer.
Willenhall, West Midlands
Take a walk along the coast and help us monitor the effects of climate change and invasive species on the UK's seaweeds.
Seaweeds are easy to find and occur all around the UK coastline. There are a staggering 650 seaweed species in the UK, around 7% of the world’s species, and they play a vital role in the functioning of the marine environment.
Scientists think that the effects of climate change and the spread of invasive species are starting to have an effect on where they are found but they need more information to be sure.
This is where you can help. Identify the seaweeds you spot on the UK's coast and tell us what you find. This will help researchers from the British Phycological Society and Natural History Museum to find out what is happening to our seaweeds.
Seaweeds support a wealth of life, but they may be affected by invasive species or by climate change. Find out more about the aims of the Big Seaweed Search and what scientists hope to discover.
What exactly are seaweeds and what are they used for in everyday life? Although you may not know it, you use them regularly. Find out more.
Identify the different types of seaweed found on British beaches by looking through these images and using our identification tips.
Anyone can take part in the Big Seaweed Search, whatever their experience. Find out how to record your seaweed observations and send them back to the scientists who will be studying them.
Submit the results of your seaweed search online so that scientists can use them to study the distribution of British seaweeds.
Follow the progress of the Big Seaweed Search by finding out the distribution of different British seaweeds recorded so far, on the interactive map.
Help us find out more about these wonderful organisms by taking part in the Big Seaweed Search. You don't need to be an expert - just use our downloadable guides.
Discuss seashore life and ask the experts for help identifying your finds including seaweeds, crabs, shells and animal bones. | <urn:uuid:7b98e53f-1540-4afa-acfa-e2693e0d5965> | 3.5625 | 422 | Content Listing | Science & Tech. | 55.557061 |
Global Warming and Southwestern Streams
Southwestern streams originating in high mountains are fed by snowpack and often flow throughout the year because of this summertime water source. There are also an abundance of intermittent or ephemeral southwestern streams that flow only in spring or during heavy rains. At lower altitudes these streams flow through desert and steppe habitats with very low rainfall, and are often the only water source across large landscapes. The stream corridors harbor plants, such as native cottonwood and willows that are unable to survive in the dryer surrounding uplands.
Benefits for Humans and Wildlife
The value of western streams is so great that it is essentially incalculable. Just their importance for supplying drinking water to major cities, such as Phoenix, Las Vegas and Los Angeles, is huge. They also supply essential irrigation water for large-scale farming, especially during the summer growing period when temperatures are high and snowpack melt becomes the primary source of water.
Western streams provide critical instream flows that nurture diverse ecosystems and wildlife; without them, the ecology of the Southwest would be radically different. The diversity and productivity of western streams are important for many Native American tribes. For example, the Lower Colorado is vital to the Cocopah Tribe for subsistence, cultural, economic, and recreational activities.
Whether permanent or intermittent, western streams are important recreational areas throughout the southwest, with both the permanent and higher altitude streams supporting recreational fishing. The stream habitats are used extensively by migratory birds during nesting and migration, as well as a variety of other wildlife including beaver and deer.
Threats from Global Warming
The threat to western streams from global warming is extensive. As a result of rising winter temperatures reducing the winter accumulation of snowpack in western mountains, the amount of available spring and summer melt-water is declining.
An equally significant threat is that spring temperatures are arriving as much as 3-4weeks earlier than in the past, also reducing the amount of water available in the summer and fall. With only a 1.5 degree F increase in average global temperature, the Colorado River may shrink to its lowest level in at least 500 years. This is expected to occur within the lifetime of children born now, even with immediate reductions in greenhouse gas emissions. Lake Mead, a major reservoir on the Colorado River, is less than half full and could run dry by 2020. Increased severity of droughts and flood events from global warming will also affect water supply, and water quality.
The decline of water availability caused by global warming will exacerbate an already severe shortage of water in many areas of the Southwest. Water prices will likely sky-rocket as growing water demand conflicts with declining water availability. Mandatory water restrictions will likely affect people’s daily lives and the economic survival of irrigated croplands. These water wars may leave fish and wildlife last in line.
Conservation Investments to Minimize Global Warming Impacts
Addressing exacerbated water shortages brought on by global warming will require a diversity of approaches. Considerable investment will be needed to reduce water demand by assisting homes, businesses and agriculture in developing new water conservation strategies and low water use technologies.
Stream corridors should be restored to natural conditions wherever possible to maintain optimal water flows and habitat for fish and wildlife. Restoration of native plant species along southwestern streams is vital for wildlife survival and diversity. Ensuring base flows and restoring natural flood flows is also critical to preserving natural stream habitat processes. On the Lower Colorado River alone, riparian and river flow restoration has a price tag of more than $250 million. | <urn:uuid:e845d348-9799-4bd9-bc2e-92d43ef3ce0a> | 3.90625 | 723 | Knowledge Article | Science & Tech. | 23.976313 |
Anyone who has been to the beach has probably seen starfish or sand dollars. The more intrepid beachcomber may find brittle stars, sea cucumbers, or sea urchins. These and many other organisms, living and extinct, make up the Echinodermata, the largest phylum to lack any freshwater or land representatives.
Most living echinoderms, like this sand dollar from Baja California, are pentameral; that is, they have fivefold symmetry, with rays or arms in fives or multiples of five. However, a number of fossil echinoderms were not pentameral at all, and some had downright bizarre shapes. Echinoderms have a system of internal water-filled canals, which in many echinoderms form suckered "tube feet", with which the animal may move or grip objects.
Visit the Echinoderm Homepage at the California Academy of Sciences for additional information and links. Or peruse the Echinoderm Newsletter, brought to you by the National Museum of Natural History. | <urn:uuid:dc3daa02-52cf-44a9-9576-fdcb7ca60771> | 3.875 | 240 | Knowledge Article | Science & Tech. | 28.891798 |
Mapping land cover from multi or hyper spectral imagery can proceed only on the premise that the target cover types are spectrally different from one another. Classification is a process of grouping pixels of similar spectral properties into spectral classes in which each class is assumed to or designed to correspond to a surface cover type. A number of factors can influence the spectral "separability" of target features.
They may be observed to be spectrally different in ground observations and yet not distinguished from one another in the imagery because
The 2 meter ground resolution distance of this data set is sufficiently small in comparison with the size of wetland land cover features that the problem of spectral signature mixing should not preclude the identification of most land cover types. Training targets in which ten or more pixels are sufficiently set back from the perimeter of the target to be considered "pure" should not be difficult to obtain in this high spatial resolution data. However, some linear features, along ditches for example, may not be sufficiently wide to be "uncontaminated." There are examples where some pixel spectra of water features can be shown to have been modified by adjacent land pixels in this image (see Appendix II).
Methods of Classification
The process of classification may be either unsupervised or supervised. Verbyla (1995) provides an excellent elementary presentation. More comprehensive presentations are available in Estes et al . (1983) and in many texts (for example Campbell, 1996, Jensen 1996, Lillesand and Kiefer, 2000, and Richards, 1993.)
Unsupervised Classification Results
In unsupervised classification the spectral variance displayed by the features in the image is partitioned into a specified number of spectral classes without using prior knowledge of the existing cover types or their spectral properties. The partitioned classes must then be identified and labeled by the analyst. In a supervised classification, on the other hand, the analyst previously identifies certain "pure" examples of each target land cover which can be delimited on the ground and identified in the image. The spectral signature of each such target "training site" is determined from the image and used as a starting point for aggregating pixels which have similar spectral characteristics. We applied both approaches to classifying these data in order to compare their effectiveness.
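For concreteness, the unsupervised route is typically implemented with a clustering algorithm. The sketch below is a minimal k-means on toy two-band pixel spectra; ISODATA, a k-means variant, is the classic remote-sensing choice, and this toy is illustrative only, not the software used in the study:

```python
def kmeans(pixels, k, iters=20):
    """Minimal k-means on pixel spectra (a toy sketch; operational
    remote-sensing software typically uses ISODATA, a k-means variant)."""
    centers = pixels[:k]  # naive seeding with the first k pixels
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            dists = [sum((pi - ci) ** 2 for pi, ci in zip(p, c)) for c in centers]
            clusters[dists.index(min(dists))].append(p)
        # Recompute each center as the mean of its cluster (keep old center if empty).
        centers = [
            tuple(sum(vals) / len(cl) for vals in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers

# Two toy spectral groups in (red, near-infrared) reflectance:
# dark "water" pixels and bright-NIR "vegetation" pixels (invented values).
pix = [(0.10, 0.05), (0.12, 0.06), (0.05, 0.60), (0.06, 0.62)]
print(sorted(kmeans(pix, 2)))
```

The analyst's remaining job, exactly as described above, is to label the resulting spectral classes with land cover names.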
Initial experiments with an unsupervised approach showed that certain spectrally distinct target land cover classes could be identified and segregated. These land surfaces include unconsolidated shore, water, shallow water/tidal flat, and vegetated land cover. Salt hay in the wetland was readily identified, although it was confused with some agricultural areas. Brighter, unshaded forest canopy was readily distinguished, although occasional confusion with high tide bush on the wetlands did occur; however, the many pixels of forest shadow in the upland forest were confused with and indistinguishable from wetland vegetation. These experiments demonstrated that it was not possible to perform a satisfactory unsupervised classification of the wetland and upland vegetation at the same time. Nor was spectral separation of forest/upland from the adjacent wetland shrub and/or Phragmites successful.
Additionally, several of the wetland vegetation habitats which we seek to identify display very small spectral differences across the image. Their spectral diversity due to environmental factors appears to introduce greater differences within a given habitat type than exist between these plant species. Particular spectral confusion within wetland areas was experienced between Phragmites and dense canopies of S. alterniflora. Their spectral signatures showed so much diversity and overlapped so much that unsupervised classification techniques could not separate the two over significant portions of the image. Each vegetation type showed such spectral variation over the image that no wetland land cover or vegetation type could be represented by a single spectral class. Several different factors appear to have contributed to these variations, some related to conditions on the surface and some related to conditions of the remote imagery. These spectral signatures are strongly affected by the underlying wet mud and water substrate, so that canopy density differences result in different mixtures of vegetation spectra with mud/water spectra. Some of the surface conditions are
Extensive information about on-the-ground conditions was required in order to establish "ground truth" targets to associate with their spectral appearance in the image.
We next adopted a supervised approach to classifying the wetland vegetation. The image was divided (segmented) into upland and wetland (including water) portions, as noted above, for separate classification. By manual on-screen digitizing, a polygon area of interest was defined at very high spatial resolution separating upland forest and agricultural areas from adjacent wetland. The human interpreter was able to distinguish between trees and adjacent wetland shrub and marsh vegetation using spectral cues augmented by spatial cues such as patterns of shadows. The whole study area was segmented into three sections for separate classification: agricultural crop land, forest (including developed/impervious surfaces), and tidal wetland. Tidal wetland was further subdivided into five areas. Signatures of land cover types were separately identified within each area.
Training samples which had been identified in the field were located in the image and their spectral signatures (spectral profiles in the center of the training areas) taken from the image. Training areas were identified using these signatures and visual on-screen interpretation of the hyperspectral image. Identification of maize and soy beans in 1999 was aided by a local farmer who identified his 1999 crops in fields totaling some 1200 acres. The identity of land cover in wetland areas was established from on-the-ground visits and from oblique photography from low-altitude aircraft. In wetland areas many training sites (5 to 10 per land use category) were required to allow correct classification of the overall image because of the spectral diversity of similar land cover classes across the image. Comparison of spectra from training areas showed good spectral separation among the samples, but wide differences are evident within each spectral class, presumably because of differences in background water depth, canopy density, and necrotic material in the canopy.
Based on these training samples we undertook the supervised classification. In the final classification of agricultural land we have not separated agricultural fields by crop type. The forest section is classified into five classes: forest, shrub, grass, developed land/impervious surfaces, and other. The category "other" contains unidentified surfaces and upland water bodies. The tidal wetland is classified into 13 classes (Table 2). These classes were selected for their potential relevance as wetland habitats and represent the limit of presently possible spectral identification.
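To make the supervised step concrete, the sketch below shows a minimum-distance classifier, one simple way that mean training signatures can drive a per-pixel classification. It is an illustrative sketch only, not the authors' software; the function names and array shapes are our own.

```python
# Illustrative minimum-distance supervised classifier (not the authors'
# code): assign each pixel to the training class whose mean spectral
# signature is nearest in Euclidean distance.
import numpy as np

def train_signatures(samples):
    """samples: dict mapping class name -> (n_pixels, n_bands) array of
    training spectra. Returns class names and their mean spectra."""
    names = sorted(samples)
    means = np.stack([samples[name].mean(axis=0) for name in names])
    return names, means

def classify(image, names, means):
    """image: (rows, cols, n_bands) reflectance cube.
    Returns a (rows, cols) array of class-name labels."""
    flat = image.reshape(-1, image.shape[-1])
    # Distance from every pixel spectrum to every class mean spectrum.
    dist = np.linalg.norm(flat[:, None, :] - means[None, :, :], axis=2)
    nearest = dist.argmin(axis=1)
    return np.array(names, dtype=object)[nearest].reshape(image.shape[:2])
```

Classifiers of this kind typically need many training sites per class, as noted above, precisely because a single mean spectrum cannot capture the within-class spectral variation across the image.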
Each class is discussed further below. Aerial and ground views of examples of tidal wetland cases are shown in Appendix III.
The unconsolidated shore has a very bright spectral return. For plant canopy to be visible over the bright sand background, the canopy must be sufficiently heavy to significantly obscure the background.
S. alterniflora cover is divided into two classes according to canopy density. Class (1) is lower density, so its reflectance spectrum shows more influence of the underlying water and marsh surface. Class (2) has a higher-density canopy and is usually associated with higher marsh elevations than class (1). We have noted frequent examples of confusion between pixels of the S. alterniflora class (2) and Phragmites. Three examples of apparent reflectance spectra for S. alterniflora illustrate this confusion.
The Salt Hay vegetation shows a very wide spectral range. Lodged patches of salt hay produce highly recognizable bright spectra in the image. This high visible-band reflectance is shown in a comparison of apparent reflectance spectra for five vegetation types. However, there are examples of freshwater broadleaf vegetation confused with salt hay. Non-lodged salt hay, which shows a clumped (or hummock) habit, may also be recognized spectrally in this class. Such areas can be noted along the northern margin of the lagoon. However, this identification is less secure elsewhere.
Of the two Phragmites classes, dead Phragmites canes standing after herbicidal treatment are spectrally distinct. While the dead canes themselves are not visible in vertical view, the absence of a green vegetation spectral signature against the wet marsh surface background renders these patches readily identifiable. Regrowing Phragmites within them is also identifiable. Compare the spectra of live Phragmites and Phragmites canes.
Live Phragmites stands are more difficult to identify reliably in this image even after extensive ground truthing. At the time of the image, Phragmites canopies showed, from place to place, considerable variation in their progression toward senescence. We found frequent confusion with dense canopies of S. alterniflora. Much of our ground truth sampling was directed toward establishing a spectral distinction between these classes. In the end it was necessary to edit the mapping of this class based on observations on the ground and on visual interpretation of the oblique aerial photographs.
The High Tide Bush could be interpreted spectrally in local areas after ground visitation and with the assistance of the oblique photography. The May aerial oblique photographs revealed these shrubs particularly well. Narrow strips along ditches may not be wide enough to fill pixels sufficiently to produce spectra distinguishable from the surrounding marsh.
Shallow Water/Tidal Flats are mud flat areas that are inundated more than half of the time, so that S. alterniflora tends not to grow. These flats may have a very sparse canopy of S. alterniflora or be bare of vegetation. They are spectrally identifiable whether above water or inundated by only a centimeter or two of water. The spectra of this class show a mixture of the strong influence of wet mud and water with slight evidence of green vegetation. The green vegetation signal may be due either to algae or to vascular vegetation. Water depth of this class in the image is estimated to be less than 2 cm. The figure Mudflat and Water Spectra shows spectra from the haze-corrected image for mudflat and for lagoon water 2 to 16 cm deep and deeper.
Open water is sufficiently deep that the spectral signature does not suggest the influence of signal from the bottom. From our observations, water depth in these areas may be as little as 10 cm or even less. Much of the lagoon is, in fact, only 10 to 20 cm deep at low tide.
Developed and Impervious Surfaces within the wetland section are roads and residences. This class in the upland portion of the image also includes a number of larger buildings, especially chicken houses.
The areal extent of all features classified in the three sections of the study area is 10,767 ha. A breakdown of area by class is presented in Tables 1 and 2. The 1999 Land Cover Figure shows the classification.
Table 1. Tabulated Area in Agriculture and Forest Images
| Class Name | Area (ha) | % Upland Area |
|---|---|---|
Table 2. Area of Wetland Classes from the image in Hectares
| Class Name | Area (ha) | % of Wetland Area |
|---|---|---|
| S. alterniflora (1) | | |
| S. alterniflora (2) | | |
| Phragmites, live | | |
| Phragmites, canes | | |
| High Tide Bush | | |
| Shallow Water/Mud Flat (< ~2 cm deep) | | |
| Open Water (> ~2 cm deep) | | |
| Total Wetland Area | 4463 | 100 |
Russian Firestorm: Finding a Fire Cloud from Space
Thick, choking smoke hung over Russia on August 1, 2010, adding to the misery of a stifling summer heat wave. Thousands of people were fleeing nearly 700 fires burning in the drought-dried forests and peat bogs of western Russia, while those not directly threatened were struggling to see through and breathe the smoky air.
It was perhaps not too surprising, then, when the Ozone Monitoring Instrument (OMI) on NASA’s Aura satellite recorded high concentrations of aerosols over far northern Russia on August 1. Smoke from forest fires contains tiny particles (aerosols) produced when a fire incompletely burns through trees and other carbon-based fuel. These aerosols usually linger in the lower part of the atmosphere before falling out. On this day, OMI measured aerosols above the top of high clouds.
A decade ago, a scientist trying to trace the source of those aerosols would have looked for an erupting volcano. A volcanic eruption, it was thought, was the only force powerful enough to loft aerosols twelve kilometers or more into the atmosphere.
But in 2010, meteorologist Michael Fromm saw another suspect far closer to northern Russia. Working at the Naval Research Laboratory in Washington, D.C., Fromm had spent the last decade studying how fires inject smoke into the upper atmosphere. His experience told him that at least one of the hundreds of fires burning in western Russia had probably generated a powerful, dangerous firestorm.
Large fires can create their own weather by rapidly heating the air above them. The heated air rises with smoke until water vapor in the air condenses into a puffy cloud. An odd-looking puff of white capping a dark column of smoke is the sign of a fire-formed, or pyrocumulus, cloud.
Occasionally, if the superheated air rises fast and high enough, it forms a towering thundercloud. Like the thunderstorms that form on a hot summer’s day, the tops of these cauliflower-shaped clouds reach high enough into the atmosphere that ice crystals form. Those ice crystals electrify the cloud, creating lightning. Called pyrocumulonimbus clouds, the clouds are capable of dangerous lightning, hail, and strong winds. One such firestorm in 2003 pelted Canberra, Australia, with large, soot-darkened hail, produced a damaging tornado, and generated strong winds that caused the fire to explode into neighborhoods in the capital city.
As dangerous and destructive as pyrocumulonimbus-driven storms can be, the giant clouds also act like a chimney, sucking smoke high into the atmosphere. After the Canberra fires, the Total Ozone Mapping Spectrometer (OMI's predecessor) detected extremely high levels of aerosols in the atmosphere. NASA's Stratospheric Aerosol and Gas Experiment (SAGE III) satellite confirmed that the smoke from Canberra's firestorm had reached the stratosphere.
Was OMI’s observation this summer an indicator that a similar firestorm had erupted in Russia? Fromm suspected that it was, and he set out to find proof of a pyrocumulonimbus cloud in other satellite data. | <urn:uuid:7c720e9c-7397-462e-b61a-f5727ca5ecc4> | 3.78125 | 666 | Knowledge Article | Science & Tech. | 37.821341 |
From faraway planets to the deepest depths of the ocean, 2012 has been an exciting year for scientific achievements and milestones.
Humans broke previously unimaginable barriers by detecting an elusive tiny particle and free-falling 24 miles from the edge of space. At the same time, we said goodbye to four retired NASA space shuttles that found new museum-type homes.
Here's our list of the biggest science achievements this year, in order of significance:
1. Curiosity lands, performs science on Mars
Every time I hear the word "curiosity" in a sentence, I'm tempted to butt in and ask if you're talking about the Mars rover Curiosity. She's really there! On Mars! Right now! And people are driving it! (Forgive me, I get excited about this.)
Landing this 2-ton rover flawlessly on the surface of Mars is our choice for the most exciting science moment of 2012. You can see from NASA's "seven minutes of terror" video how crazy-complicated that was -- the landing process included a supersonic parachute and a sky crane.
I'll never forget watching the live NASA feed with hundreds of other science enthusiasts at Georgia Institute of Technology in the first hours of August 6. James Wray, assistant professor at Georgia Tech, who is affiliated with Curiosity's science team, was next to me, rubbing his hands together in anticipation. And when the landing was confirmed, the room erupted in cheers and shouts. This was only one of many gatherings around the world celebrating this achievement.
And then there's all the stuff Curiosity's been doing since then, such as taking gorgeous photos, finding shiny objects, and coming across evidence that water once flowed on Mars.
We can't wait to see what Curiosity will do in 2013.
2. Higgs boson -- it's real | <urn:uuid:113caff8-5a14-4f0d-8fad-221600fd87fd> | 3.171875 | 375 | Listicle | Science & Tech. | 55.846981 |
I find it interesting that all life on earth uses DNA. I've seen videos of how helicase and ribosomes work together: copying DNA sequences (to RNA) with helicase, then recreating them using ribosomes. Does this process work the same way in the simplest life forms? Bacteria and other unicellular life that reproduce through mitosis, for example?
I've had a direct look at Carsonella ruddii, which I believe is the bacterial genome with the smallest number of genes so far: 184 genes, in a genome only 160 kb in length. It has a putative DNA helicase encoded in it, and 37 of those genes encode ribosomal components.
If a bacterium or any living cell (viruses aside, that is) were ever discovered without a ribosome, you would have heard about it: Nobel Prizes all around.
So I would say that your answer is yes.
First, mitosis is a eukaryotic process in which not only genome replication and cell division are involved but quite a few more processes as well, not least because eukaryotes have a nucleus. Bacteria simply divide after replication. And yes, even the simplest bacteria (and even some symbionts) use ribosomes for protein production. Example:
The smallest bacteria are from the genus Mycoplasma. The Mycoplasma species with the smallest annotated proteome on UniProt is Mycoplasma genitalium (strain ATCC 33530 / G-37 / NCTC 10195), with 484 genes. I have attached links to the set, as well as to its helicase. (For the 16S gene you would look that up in a gene, not protein, database.)
Each week we write about the science behind environmental protection. Previous Science Wednesdays.
About the Author: Karl Berg is currently a Ph.D. student at Cornell University in Ithaca, NY, and is looking forward to a career that will combine his interests in animal behavior and conservation. His master’s research was funded by an EPA Science to Achieve Results (STAR) Graduate Research Fellowship.
Bird populations have long been viewed as “canaries in the coal mine” for indicating changes in environmental health. As EPA’s Report on the Environment states, “changes in bird populations reflect changes in landscape and habitat, food availability and quality, toxic exposure, and climate.” Because this is so important, annual bird counts to document population changes are conducted by the North American Breeding Bird Survey.
If the timing of the species’ calls is staggered, birds could be undercounted, which is why I wanted to find an improved method to monitor bird populations to better understand how they are changing and why.
In my quest to understand the "dawn chorus" (why different bird species chime in at different times), I chose my research site in the tropical forests of Ecuador, where hundreds of bird species occur together. Tropical forests are the most threatened terrestrial ecosystems on Earth and have large and diverse bird populations. As more forests are cut, one immediate change that takes place in the remaining forests is in the quantity and quality of forest light.
My study showed that common communicative and reproductive behaviors of forest birds are synchronized or have co-evolved with seemingly tiny changes in forest light.
My wife and I spent several months trudging up muddy, forested mountains in a tropical rainforest of Ecuador at 4:00 AM to make over 100 hours of recordings, synchronized with twilight, to determine if the birds had a singing schedule.
Back at Florida International University, we identified 130 bird species from the recordings and logged the times of 25,000 songs. My research showed that tropical birds began to sing only when they saw light. Big-eyed birds that foraged high in the forest canopy sang earlier. The late risers were birds with small eyes in the dark, dense underbrush. The control mechanism, then, was a combination of ecological and morphological traits synchronized with an atmospheric one.
In the future, I believe that automated birdsong monitoring, supplemented by the sophisticated understanding of birdsong timing, will help EPA and others better understand our changing environment.
Editor's Note: The opinions expressed in Greenversations are those of the author. They do not reflect EPA policy, endorsement, or action, and EPA does not verify the accuracy or science of the contents of the blog. | <urn:uuid:772b1ae0-fb48-418c-a52b-9acbb66bee1a> | 3.765625 | 551 | Personal Blog | Science & Tech. | 34.265831 |
XDR library routines allow C programmers to describe arbitrary data structures in a machine-independent fashion. Protocols such as remote procedure calls (RPC) use these routines to describe the format of the data.
These routines deal with the creation of XDR streams. XDR streams have to be created before any data can be translated into XDR format.
See rpc(3NSL) for the definition of the XDR, CLIENT, and SVCXPRT data structures. Note that any buffers passed to the XDR routines must be properly aligned. It is suggested that malloc(3C) be used to allocate these buffers, or that the programmer ensure that the buffer address is evenly divisible by four.
A macro that invokes the destroy routine associated with the XDR stream, xdrs . Destruction usually involves freeing private data structures associated with the stream. Using xdrs after invoking xdr_destroy() is undefined.
xdrmem_create(): This routine initializes the XDR stream object pointed to by xdrs. The stream's data is written to, or read from, a chunk of memory at location addr whose length is no less than size bytes long. The op determines the direction of the XDR stream (either XDR_ENCODE, XDR_DECODE, or XDR_FREE).
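The four-byte alignment requirement reflects the XDR wire format itself: every primitive item occupies a multiple of four bytes, big-endian. As a language-neutral sketch (in Python rather than C, with function names of our own choosing), an integer and a counted string are laid out like this:

```python
# Sketch of XDR's wire layout (big-endian, 4-byte units). This mimics what
# xdr_int- and xdr_string-style filters produce; it is not the C library API.
import struct

def xdr_pack_int(value):
    return struct.pack(">i", value)            # 4-byte big-endian integer

def xdr_pack_string(s):
    data = s.encode("ascii")
    pad = (4 - len(data) % 4) % 4              # pad body to a 4-byte boundary
    return struct.pack(">I", len(data)) + data + b"\x00" * pad

def xdr_unpack_int(buf, offset=0):
    value = struct.unpack_from(">i", buf, offset)[0]
    return value, offset + 4                   # next item starts 4 bytes on
```

Every encoded item's size is a multiple of four, which is why the buffers described above are sized and aligned in multiples of four.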
xdrrec_create(): This routine initializes the record-oriented XDR stream object pointed to by xdrs. The stream's data is written to a buffer of size sendsz; a value of 0 indicates the system should use a suitable default. The stream's data is read from a buffer of size recvsz; it too can be set to a suitable default by passing a 0 value. When a stream's output buffer is full, writeit is called. Similarly, when a stream's input buffer is empty, readit is called. The behavior of these two routines is similar to the system calls read() and write() (see read(2) and write(2), respectively), except that an appropriate handle (read_handle or write_handle) is passed to the former routines as the first parameter instead of a file descriptor. Note: the XDR stream's op field must be set by the caller.
Warning: this XDR stream implements an intermediate record stream. Therefore there are additional bytes in the stream to provide record boundary information.
xdrstdio_create(): This routine initializes the XDR stream object pointed to by xdrs. The XDR stream data is written to, or read from, the standard I/O stream file. The parameter op determines the direction of the XDR stream (either XDR_ENCODE, XDR_DECODE, or XDR_FREE).
Warning: the destroy routine associated with such XDR streams calls fflush() on the file stream, but never fclose() (see fclose(3C) ).
Failure of any of these functions can be detected by first initializing the x_ops field in the XDR structure (xdrs->x_ops) to NULL before calling the xdr*_create() function. After the return from the xdr*_create() function, if the x_ops field is still NULL, the call has failed. If the x_ops field contains some other value, the call can be assumed to have succeeded.
See attributes(5) for descriptions of the following attributes:
| ATTRIBUTE TYPE | ATTRIBUTE VALUE |
|---|---|
Tk/Tcl has long been an integral part of Python. It provides a robust and platform independent windowing toolkit, that is available to Python programmers using the Tkinter module, and its extension, the Tix module.
The Tkinter module is a thin object-oriented layer on top of Tcl/Tk. To use Tkinter, you don't need to write Tcl code, but you will need to consult the Tk documentation, and occasionally the Tcl documentation. Tkinter is a set of wrappers that implement the Tk widgets as Python classes. In addition, the internal module _tkinter provides a threadsafe mechanism which allows Python and Tcl to interact.
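A minimal sketch of that layer in use (Python 3 spells the module name tkinter; build_ui and run are our own hypothetical helpers, not part of the library):

```python
# Minimal Tkinter sketch: each widget is a Python object wrapping a Tk widget.
import tkinter as tk

def build_ui(root):
    """Attach a label and a quit button to the given Tk root window."""
    label = tk.Label(root, text="Hello from Tkinter")
    label.pack(padx=10, pady=10)
    button = tk.Button(root, text="Quit", command=root.destroy)
    button.pack(pady=5)
    return label, button

def run():
    root = tk.Tk()        # starts an embedded Tcl/Tk interpreter
    build_ui(root)
    root.mainloop()       # hand control to Tk's event loop
```

Calling run() opens a window, which requires a graphical display.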
Tk is not the only GUI toolkit for Python, but it is the most commonly used one.
| Module | Description |
|---|---|
| Tkinter | Interface to Tcl/Tk for graphical user interfaces |
| Tix | Tk Extension Widgets for Tkinter |
| ScrolledText | Text widget with a vertical scroll bar |
| turtle | An environment for turtle graphics |
Try turning a growing plant on end, and a funny thing happens. It will right itself, twisting and contorting so its leaves project skyward and its roots extend down toward the center of the Earth. Logic would dictate that plants solve the problem of which way to grow by sensing gravity — but how did scientists ever prove that this is correct?
Over on NPR, Robert Krulwich recounts the story of scientist Thomas Knight and the first experiment to demonstrate that plants do, in fact, sense gravity:
He attached a bunch of plant seedlings onto a disc (think of a 78 rpm record made of wood). The plate was then turned by a water wheel powered by a local stream, "at a nauseating speed of 150 revolutions per minute for several days."
If you've ever been at an amusement park in a spinning tea cup, you know that because of centrifugal force you get pushed away from the center of the spinning object toward the outside.
Knight wondered, would the plants respond to the centrifugal pull of gravity and point their roots to the outside of the spinning plate? When he looked... that's what they'd done. Every plant on the disc had responded to the pull of gravity, and pointed its roots to the outside. The roots pointed out, the shoots pointed in. So Thomas Knight proved that plants can and do sense gravitational pull.
Problem solved! Sort of. As is wont to happen with good, interesting science, Knight's experiments simply led to more questions. What is it, exactly, that makes it possible for plants to sense gravity in the first place?
Read the rest at NPR to find out (hint: it involves a trip to space). | <urn:uuid:b225e47d-fbe1-46f4-a54c-8724c136320f> | 4.25 | 340 | Truncated | Science & Tech. | 68.448219 |
Department of Astronomy, Cornell University, 402 Space Sciences Building, Ithaca, NY 14853 USA
The giant planets of our solar system contain a record of elemental and isotopic ratios of keen interest for what they tell us about the origin of the planets and, in particular, the volatile compositions of the solid phases. In situ measurements of the Jovian atmosphere performed by the Galileo Probe during its descent in 1995 demonstrate the unique value of such a record, though that record is currently limited by the unknown abundance of oxygen in the interior of Jupiter, a gap planned to be filled by the Juno mission, set to arrive at Jupiter in July of 2016. Our lack of knowledge of the oxygen abundance allows for a number of models of the Jovian interior with a range of C/O ratios. The implications for the origin of terrestrial water are briefly discussed. The complementary data sets for Saturn may be obtained by a series of very close, nearly polar orbits at the end of the Cassini-Huygens mission in 2016-2017, and by the proposed Saturn Probe. This set can match what we have for Jupiter only if the Saturn Probe mission carries a microwave radiometer.
Earth Radiation Budget
Most of the Earth's energy input is received from the Sun, as shortwave radiation. Although the Earth also receives electromagnetic energy from other bodies in space, that input is negligible compared with the solar energy [Table 1]. The incident (shortwave) solar energy may be reflected or absorbed by the Earth's surface or the atmosphere, and the Earth's surface and atmosphere also emit (longwave) radiation.
The Earth Radiation Budget is the balance between incoming energy from the sun and the outgoing longwave (thermal) and reflected shortwave energy from the Earth.
The Sun's radiant energy comes from nuclear reactions, and the temperature of the Sun's surface is about 6000 K. The spectrum of the solar radiation received at the top of the atmosphere is well approximated by the spectrum of a blackbody having a surface temperature of about 6000 K. Thus the Sun may be treated as a blackbody.
The solar energy reaching the Earth is traditionally quantified by the solar constant, the annual average solar irradiance received outside the Earth's atmosphere on a surface normal to the incident radiation at the Earth's mean distance from the Sun (about 1370 W/m2). The actual solar irradiance varies by 3.4% from the solar constant during the year due to the eccentricity of Earth's orbit about the Sun.
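The 3.4% figure follows from the inverse-square law applied over Earth's slightly eccentric orbit; a quick numerical check (the eccentricity value is an assumed standard value, not from the text):

```python
# Annual irradiance variation from orbital eccentricity (inverse-square law).
S0 = 1370.0   # solar constant, W/m^2, at the mean Sun-Earth distance
e = 0.0167    # eccentricity of Earth's orbit (assumed standard value)

S_perihelion = S0 / (1 - e) ** 2       # closest approach (early January)
S_aphelion = S0 / (1 + e) ** 2         # farthest point (early July)
variation = (S_perihelion - S0) / S0   # approximately 2e, i.e. about 3.4%
```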
The surface radiation budget is divided into downward shortwave radiation, reflected shortwave radiation, downward longwave radiation, upward longwave radiation, and net radiation. All of these components are dominated by cloud effects.
The downward shortwave radiation may be reflected back to space, absorbed in the atmosphere, or absorbed at the ground:

Esun = A Esun + Eatm + (1 - Asfc) Esfc

where Esun is the incident solar radiation, A is the albedo at the top of the atmosphere (TOA), Eatm is the energy absorbed in the atmosphere, Esfc is the downward irradiance at the surface, and Asfc is the surface albedo. This equation implicitly includes scattering and multiple reflections between the surface and clouds. Usually the incident solar radiation and the TOA albedo are obtained from satellite measurements (ERBE). Because the reflected solar radiation is the product of the surface albedo and the downward solar radiation, the surface albedo must be determined. One method of estimating surface albedo is the minimum-albedo technique: because few locations are likely to be cloud-covered for an entire month, the minimum albedo is likely to represent the clear-sky albedo. It can be calculated from narrow-band AVHRR observations.

The downward longwave radiation comes mostly from the atmosphere and depends on the atmosphere's temperature and moisture. Water vapor, other gases, and aerosols absorb some solar energy and emit some longwave radiation. Computation of downward longwave radiation from the atmosphere is difficult, even when the distributions of water vapor, carbon dioxide, cloudiness, and temperature are measured; some satellite instruments, such as TOVS, estimate it. Little longwave radiation is reflected by the surface: natural surface emission is dominant. It is also difficult to measure and define the surface temperature, especially over vegetated surfaces. Combining the above four components gives the net radiation at the surface. This result is not accurate, because the errors in each component accumulate, so research has been directed toward using satellite measurements (NOAA, GOES, etc.).
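A numerical sketch of this shortwave bookkeeping, Esun = A Esun + (1 - Asfc) Esfc + Eatm, follows; all values are illustrative assumptions (typical global means), not measurements from the text:

```python
# Partition of incoming shortwave radiation (illustrative numbers only).
E_sun = 342.0    # global-mean TOA insolation, W/m^2 (solar constant / 4)
A = 0.30         # planetary (TOA) albedo, typical value
A_sfc = 0.15     # surface albedo, assumed
E_sfc = 185.0    # downward shortwave at the surface, assumed, W/m^2

reflected = A * E_sun                         # returned to space
absorbed_surface = (1 - A_sfc) * E_sfc        # absorbed at the ground
E_atm = E_sun - reflected - absorbed_surface  # inferred atmospheric absorption
```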
In 1978, an international team of scientists was selected to design and develop ERBE. Dr. Bruce Barkstrom was the ERBE Principal Investigator. The team developed two types of instruments:
If you have any comments or questions, contact Yuri Mun.
Last updated Apr. 23, 1998
Earth Radiation Budget / Environmental Science / Rutgers, The University of New Jersey
Volume & Surface Area of a pyramid within an isohedron
Hi Maths heads
I want to find the mathematical formula for a pyramid within a sphere.
Consider a sphere of any radius R.
I want to cut out a pyramid with its vertex at the centre of the sphere, cutting through the sphere at radius R.
The angle between the opposite planes of the vertex is 'theta'
i.e. the volume of the sphere must be a whole-number multiple 'n' of the volume of the pyramid.
Also, the surface area of the base of the pyramid, which is part of the curved surface area of the sphere, is to be the same multiple of the surface area of the sphere.
I am trying to work out an easy formula for an isohedron where the sphere is divided into a square pattern and is therefore divided fully into these square units.
As I cannot draw a diagram here, any clarifications would be welcome.
Re: Volume & Surface Area of a pyramid within an isohedron
To clarify: the base of the pyramid is dome shaped and follows the curvature of the sphere's surface.
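A geometric fact (our addition, not from the thread) simplifies both requirements at once: for a pyramid from the sphere's centre to any patch of the surface, the curved base area is Omega * R^2 and the volume is Omega * R^3 / 3, where Omega is the patch's solid angle in steradians. Both are the fraction Omega / (4*pi) of the sphere's totals, so the volume condition and the surface-area condition are automatically the same condition. A short sketch:

```python
# "Pyramid" (spherical sector) from the centre of a sphere of radius R to a
# surface patch of solid angle omega: area and volume are both proportional
# to omega, so their fractions of the whole sphere coincide.
import math

def sector_area_and_volume(R, omega):
    return omega * R ** 2, omega * R ** 3 / 3

def fractions_of_sphere(R, omega):
    area, volume = sector_area_and_volume(R, omega)
    return area / (4 * math.pi * R ** 2), volume / (4 / 3 * math.pi * R ** 3)
```

So to divide the sphere into n equal pieces, pick patches of solid angle 4*pi/n each; the hard part is choosing patch shapes (the "square pattern") that tile the sphere exactly.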
Hydroecological factors governing surface water flow on a low-gradient floodplain
Article first published online: 28 MAR 2009
Copyright 2009 by the American Geophysical Union.
Water Resources Research
Volume 45, Issue 3, March 2009
How to Cite
(2009), Hydroecological factors governing surface water flow on a low-gradient floodplain, Water Resour. Res., 45, W03421, doi:10.1029/2008WR007129.
- Issue published online: 28 MAR 2009
- Article first published online: 28 MAR 2009
- Manuscript Accepted: 28 JAN 2009
- Manuscript Revised: 13 JAN 2009
- Manuscript Received: 2 MAY 2008
- surface water hydraulics;
- vegetated flows;
Interrelationships between hydrology and aquatic ecosystems are better understood in streams and rivers compared to their surrounding floodplains. Our goal was to characterize the hydrology of the Everglades ridge and slough floodplain ecosystem, which is valued for the comparatively high biodiversity and connectivity of its parallel-drainage features but which has been degraded over the past century in response to flow reductions associated with flood control. We measured flow velocity, water depth, and wind velocity continuously for 3 years in an area of the Everglades with well-preserved parallel-drainage features (i.e., 200-m wide sloughs interspersed with slightly higher elevation and more densely vegetated ridges). Mean daily flow velocity averaged 0.32 cm s−1 and ranged between 0.02 and 0.79 cm s−1. Highest sustained velocities were associated with flow pulses caused by water releases from upstream hydraulic control structures that increased flow velocity by a factor of 2–3 on the floodplain for weeks at a time. The highest instantaneous measurements of flow velocity were associated with the passage of Hurricane Wilma in 2005 when the inverse barometric pressure effect increased flow velocity up to 5 cm s−1 for several hours. Time-averaged flow velocities were 29% greater in sloughs compared to ridges because of marginally higher vegetative drag in ridges compared to sloughs, which contributed modestly (relative to greater water depth and flow duration in sloughs compared to ridges) to the predominant fraction (86%) of total discharge through the landscape occurring in sloughs.
Univariate scaling relationships developed from the theory of flow through vegetation and from our field data indicated that flow velocity increases with the square of water surface slope and the fourth power of stem diameter, decreases in direct proportion with increasing frontal area of vegetation, and is unrelated to water depth except for the influence that water depth has in controlling the submergence height of vegetation that varies vertically in its architectural characteristics. In the Everglades, the result of interactions among controlling variables was that flow velocity was dominantly controlled by water surface slope variations responding to flow pulses, more than by spatial variation in vegetation characteristics or fluctuating water depth. Our findings indicate that floodplain managers could, in addition to managing water depth, manipulate the frequency and duration of inflow pulses to manage water surface slope, which would add further control over flow velocities, water residence times, sediment settling, biogeochemical transformations, and other processes that are important to floodplain function.
Part of why you don't see colors in astronomical objects through a telescope is that your eye isn't sensitive to colors when what you are looking at is faint. Your eyes have two types of photoreceptors: rods and cones. Cones detect color, but rods are more sensitive. So, when seeing something faint, you mostly use your rods, and you don't get much color. Try looking at a color photograph in a dimly lit room.
As Geoff Gaherty points out, if the objects were much brighter, you would indeed see them in color.
However, they still wouldn't necessarily be the same colors you see in the images, because most images are indeed false color. What the false color means really depends on the data in question. What wavelengths an image represents depends on what filter was being used (if any) when the image was taken, and the sensitivity of the detector (e.g., CCD) being used. So, different images of the same object may look very different. For example, compare this image of the Lagoon Nebula (M8) to this one.
Few astronomers use filter sets designed to match the human eye. It is more common for filter sets to be selected based on scientific considerations. General-purpose sets of filters in common use do not match the human eye: compare the transmission curves for the Johnson-Cousins UBVRI filters and the SDSS filters to the sensitivity of human cone cells. So, a set of images of an object from a given astronomical telescope may have images at several wavelengths, but these will probably not be exactly those that correspond to red, green, and blue to the human eye. Still, the easiest way for humans to visualise this data is to map these images to the red, green, and blue channels in an image, basically pretending that they are.
In addition to simply mapping images through different filters to the RGB channels of an image, more complex approaches are sometimes used. See, for example, this paper (2004PASP..116..133L).
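For concreteness, here is a minimal pure-Python sketch of the kind of asinh-stretch channel mapping that paper popularized. The filter-to-channel assignment and the `stretch`/`Q` values are illustrative choices, not the paper's exact pipeline, and real pipelines work on full 2D images rather than flat pixel lists.

```python
import math

def asinh_rgb(r_band, g_band, b_band, stretch=0.5, Q=8.0):
    """Map three filter images (flat lists of pixel values) to RGB using
    an asinh stretch, loosely following the Lupton et al. approach.
    Which filter feeds which channel is up to whoever makes the picture --
    that arbitrary choice is exactly what makes the result 'false color'."""
    rgb = []
    for r, g, b in zip(r_band, g_band, b_band):
        i = (r + g + b) / 3.0 or 1e-9        # mean intensity, avoid divide-by-zero
        f = math.asinh(Q * i / stretch) / Q  # nonlinear stretch factor
        scale = f / i
        # clip each channel into the displayable 0..1 range
        rgb.append(tuple(min(1.0, max(0.0, c * scale)) for c in (r, g, b)))
    return rgb

# three single-pixel "images" taken through three different filters
pixels = asinh_rgb([0.9], [0.4], [0.1])
print(pixels[0])  # brightest channel is whichever filter we mapped to red
```

The asinh stretch compresses bright regions while leaving faint ones roughly linear, which is why it keeps color information in saturated cores better than a plain logarithm.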
So, ultimately, what the colors you see in a false color image actually mean depends both on what data happened to be used to make the image and on the mapping method preferred by whoever constructed the image.
We are all familiar with the idea that there are strikingly different kinds of eyes in animals: insects have compound eyes with multiple facets, while we vertebrates have simple lens eyes. It seems like a simple evolutionary distinction, with arthropods exhibiting one pattern and vertebrates another, but the story isn’t as clean and simple as all that. Protostomes exhibit a variety of different kinds of eyes, leading to the suggestion that eyes have evolved independently many times; in addition, eyes differ in more than just their apparent organization, and there are some significant differences at the molecular level between our photoreceptors and arthropod photoreceptors. It’s all very confusing.
There has been some recent press (see also this press release from the EMBL) about research on a particular animal model, the polychaete marine worm, Platynereis dumerilii, that is resolving the confusion. The short answer is that there are fundamentally two different kinds of eyes based on the biology of the cell types, and our common bilaterian ancestor had both—and the diversity arose in elaborations on those two types.
Common features of metazoan eyes
Molecular, developmental, and morphological studies have revealed some common ground in the eyes of virtually all multicellular animals. Their formation is regulated by a common homeobox gene, a pax6 homolog. All photoreceptors use a light-sensitive pigment derived from vitamin A, and this pigment is bound to a protein called opsin. Light activates opsin by causing a conformation change in the photopigment, and opsin then binds to a G-protein, a common and versatile molecule used in many signal transduction cascades. These similarities suggest that all eyes have a common evolutionary ancestor.
There are also significant differences, though. Beyond those similarities in developmental signaling and the general outline of how they turn light into a chemical signal, there are two different kinds of photoreceptors, rhabdomeric and ciliary. They differ in their strategy for increasing membranous surface area (the photoreceptor molecules are imbedded in the membrane, so the more membrane, the more opsins they can pack in), and in the steps of signal transduction after the G-protein is bound.
Rhabdomeric photoreceptors are found in the compound eyes of arthropods. They increase their surface area by throwing up their apical surfaces into numerous folds—in some forms, the cell looks like it has had a flat-top crewcut, with a crowning bristle of fine membranous bristles, although the cell itself can have many different shapes in different species.
Signal transduction in rhabdomeric photoreceptors involves activation of phospholipase C (PLC) and the inositol phosphate (IP3) pathway.
The increase in membrane surface area in ciliary photoreceptors, the kind of receptor we vertebrates use, is by modification of the cilium, a process that extends from the cell. The ciliary membrane is expanded and thrown into deep folds, so that the actual receptor region of the cell looks like a stack of discs.
Ciliary photoreceptors use a different signalling pathway, activating a phosphodiesterase (PDE) that changes the concentration of cyclic GMP in the cell. Both the IP3 and the PDE pathways exist in all animals; the difference is in which pathway is used in the different photoreceptors. The diagram below illustrates the two different pathways, and also shows the phylogenetic relationships between their different molecular components (beware of tiny print! Click on the image for a more readable version).
So, there are two distinct kinds of photoreceptors, with different deep molecular pathways. The evolutionary question is how they arose—when did this distinction first appear? The diagram below illustrates the problem: did the urbilaterian, the last common ancestor of all bilateral animals, a) have just one kind of photoreceptor that later diverged into the two types, or b) did it possess both kinds, and we vertebrates simply lost the rhabdomeric form?
The diagram itself suggests that (b) is probably the best answer, since diverse animals have both types, and it’s us vertebrates that are the oddballs in lacking one of the forms. Arendt et al. have found additional evidence for (b) in the polychaete worm, Platynereis.
That’s the lovely little worm above, and at its head end you can see some big dark eyes. Platynereis has multiple sets of eyes, and in particular, it first develops a very simple larval eye (in A, below) that consists of a single photoreceptor cell (in yellow) with a single pigment cell. The adult eye starts out similarly simple (B), but as it matures acquires a simple spherical lens and a larger array of photoreceptors. These are all of the rhabdomeric type.
One can never have enough eyes, though. In addition to the larval eyes (le) and adult eyes (ae), Platynereis has another interesting pair of organs imbedded in its brain, marked by the arrows below. These have turned out to be another pair of simple “eyes”…but of the ciliary type. They have ciliary extensions, and contain opsin; a specific form of opsin, c-opsin, of the type found in ciliary photoreceptors. The other eyes all contain r-opsin, the kind expected to be found in rhabdomeric photoreceptors, and do not express c-opsin. Phylogenetic analysis shows that Platynereis c-opsin clusters with the vertebrate opsins. As the authors state, “This result indicates that two distinct opsin orthology groups exist in Bilateria: the ciliary opsins (c-opsins, active in ciliary PRCs in vertebrates and polychaetes) and the rhabdomeric opsins (r-opsins, active in rhabdomeric PRCs).”
The authors identified another marker, Pdu-rx, a homolog of the vertebrate rx (retinal homeobox) genes. The expression of vertebrate rx genes is restricted to just the ciliary photoreceptor cells of the retina (and a few other interesting places, such as the ciliary receptor cells of the pineal), and what do you know, the ciliary receptors of Platynereis also express this gene.
A couple of questions: what do these c-opsin cells do in the polychaete? They aren’t for vision. They also contain proteins that vary with a circadian rhythm, so what these cells are almost certainly involved in is detecting ambient light for resetting the circadian clock.
What happened to the r-opsin cells in the vertebrate lineage? And there we see an interesting and complicated answer: they seem to have been subsumed into various functions in the vertebrate eye other than direct photoreception. Arendt also examined various proteins known to be expressed in other cells in the retina, the bipolar, horizontal, amacrine, and retinal ganglion cells (RGCs), and compared those to related proteins expressed in r-opsin and c-opsin cells in the worm. Surprise: while not traditionally considered receptor cells, several of these other retinal cells seem to cluster in their molecular properties with the r-opsin cells from the invertebrate.
What’s also persuasive here is that vertebrate retinal ganglion cells have been recently discovered to contain a photopigment, melanopsin, and to function in resetting the vertebrate circadian rhythm—and melanopsin is an r-opsin homolog.
It’s a solid story that ties visual system history in protostomes and deuterostomes together, resolving the differences between them into a convincing evolutionary account. It’s a bit like finding the germ of a vertebrate eye imbedded inside the brain of a worm; it’s an additional link between two remote branches on the tree of life, and at the same time it clarifies our understanding of the relationships between the different kinds of eyes we see in the animal world.
We propose the following scenario for the evolution of animal PRCs and eyes. Early metazoans possessed a single type of precursor PRC [photoreceptor cell] that used an ancestral opsin for light detection and was involved in photoperiodicity control and possibly in phototaxis. In prebilaterian ancestors, the opsin gene then duplicated into two paralogs, c-opsin and r-opsin, allowing the diversification of the precursor PRC into ciliary and rhabdomeric sister cell types. The rhabdomeric PRCs associated with pigment cells to form simple eyes, whereas the ciliary PRCs formed part of the evolving brain, active in nondirectional photoresponse. This ancestral setting of Bilateria is still present in extant invertebrates such as Platynereis. In the evolutionary line leading to vertebrates, both photoreceptor types were incorporated into the evolving retina. The rhabdomeric PRCs transformed into ganglion cells, acquiring a new role in image processing. A distinctive feature of vertebrate eye evolution is that the ciliary (not rhabdomeric) PRCs became the main visual PRCs, the rods and cones. The vertebrate eye thus represents a composite structure, combining distinct types of light-sensitive cells with independent evolutionary histories.
Arendt D, Tessmar-Raible K, Snyman H, Dorresteijn AW, Wittbrodt J (2004) Ciliary photoreceptors with vertebrate-type opsins in an invertebrate brain. Science 306:869-871.
Arendt D (2003) Evolution of eyes and photoreceptor cell types. Int. J. Dev. Biol. 47:563-571.
In a couple of decades from now, your version of the Bible or Harry Potter (or the best-selling book of the 22nd century, whatever that might be) might just be stored in a small vial of liquid or on small chips. Harvard University researchers have just encoded a book in DNA fragments instead of on physical copy or e-copy.
The Alphabets of DNA
DNA is made up of building blocks called nucleotides, similar to how English words are made up of building blocks called letters. In the language of DNA, there are just 4 letters instead of 26: 'A', 'T', 'G' and 'C'. Moving to information theory, each letter in DNA can thus encode 2 bits of information. Each nucleotide weighs around 330 Dalton on average (each Dalton weighs 1.66×10^-24 g). Thus, a single gram of single-stranded DNA could encode 455 exabytes (1 exabyte is 10^18 bytes) of information. The previous sentence says 'single-stranded' because in nature, DNA molecules form two strands that wrap around each other to form a helix. Even keeping this in mind, a single gram of double-stranded DNA could still encode around 225 exabytes, not a small number!
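The arithmetic behind the exabytes-per-gram figure can be checked in a few lines. The ~330 Da average nucleotide mass used here is an assumed standard value (it is the mass the widely quoted ~455 EB/g figure implies), and the constants are rounded.

```python
AVOGADRO = 6.022e23          # molecules per mole
NT_MASS_DA = 330.0           # assumed average mass of one nucleotide, in daltons
DALTON_G = 1.0 / AVOGADRO    # one dalton in grams (~1.66e-24 g)

nt_per_gram = 1.0 / (NT_MASS_DA * DALTON_G)  # nucleotides in 1 g of ssDNA
bits = 2 * nt_per_gram                       # each base can store 2 bits (A/T/G/C)
exabytes = bits / 8 / 1e18                   # convert bits -> bytes -> exabytes

print(f"~{exabytes:.0f} EB per gram of single-stranded DNA")
```

Halving this for double-stranded DNA gives the roughly 225 EB/g quoted above.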
Translating English into DNAese
The entire book was translated onto small fragments of DNA called oligonucleotides. Each of these fragments carried a chunk of the book's data, plus a small address block indicating where in the book that chunk belonged. Thus, a 'library' of oligonucleotides is created on a DNA microchip. To 'read' the book, this library has to be amplified and sequenced using molecular approaches. These researchers encoded just one bit of information per DNA base instead of the maximum two, made multiple copies of the same oligonucleotide fragment so that errors could be accounted for, and still obtained a whopping density of 5.5 petabits (10^15 bits) per cubic millimeter.
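A toy sketch of the one-bit-per-base scheme described above: 0 is written as A or C and 1 as G or T (the degenerate choice lets an encoder avoid problematic sequences such as long single-base runs; here it is simply random), with an address prefixed to each fragment's payload. The fragment and address sizes are made-up illustration values, not the study's actual parameters.

```python
import random

random.seed(0)  # deterministic choices, for reproducibility of this sketch

def encode_fragment(payload_bits, address, addr_width=8):
    """Encode an address plus a payload of bits into DNA, one bit per base:
    0 -> A or C, 1 -> G or T."""
    addr_bits = format(address, f"0{addr_width}b")
    bases = [random.choice("AC") if bit == "0" else random.choice("GT")
             for bit in addr_bits + payload_bits]
    return "".join(bases)

def decode_fragment(dna, addr_width=8):
    """Recover (address, payload bits); A/C read back as 0, G/T as 1."""
    bits = "".join("0" if base in "AC" else "1" for base in dna)
    return int(bits[:addr_width], 2), bits[addr_width:]

fragment = encode_fragment("1101", address=42)
print(decode_fragment(fragment))  # -> (42, '1101')
```

Decoding ignores which of the two degenerate bases was chosen, so any synthesis-time choice still reads back to the same bits.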
Current costs of sequencing make this technology prohibitive. However, the costs of DNA synthesis and sequencing are decreasing exponentially every year, making this a feasible storage molecule for the future. DNA is also stable at room temperature meaning it can be preserved for long periods. While DNA storage and retrieval is slower compared to other methods, its scale offers huge potential. It could thus be used in applications involving archival storage of massive amounts of data.
You can read more about this research here.
Two recent studies provide further evidence that climate change sceptics are simply burying their heads in the sand. Both studies, one in Europe, the other in the US, are showing that not only does increasing atmospheric CO2 lead to higher temperatures, but that higher temperatures appear to cause CO2 to increase. That’s a ‘positive feedback loop’, kids. Great if you’re trying to get your amp to howl. Very bad if you want a stable, sustainable climate.
First, researchers at Wageningen University (Netherlands), the Potsdam Institute for Climate Impact Research (Germany) and the Centre for Ecology and Hydrology (UK) have been examining data from proxy sources (ice cores and such) over the period following the Middle Ages’ ‘little ice age’. Their findings strongly suggest that global warming will be accelerated by the effects of climate change on the rate of carbon dioxide increase.
Second, scientists at the Lawrence Berkeley National Laboratory studying the Vostok ice core data have found that the 21st Century may get warmer than we thought.
In their GRL paper, Torn and Harte make the case that the current climate change models, which are predicting a global temperature increase of as much as 5.8 degrees Celsius by the end of the century, may be off by nearly 2.0 degrees Celsius because they only take into consideration the increased greenhouse gas concentrations that result from anthropogenic (human) activities.
“If the past is any guide, then when our anthropogenic greenhouse gas emissions cause global warming, it will alter earth system processes, resulting in additional atmospheric greenhouse gas loading and additional warming,” said Torn.
[tags]climate change, global warming, co2, feedback[/tags]
Proofreading RNA: Structure of RNA Polymerase II's Backtracked State
Wednesday, 25 November 2009
RNA polymerase II (pol II) is responsible for the production of messenger RNAs, which serve as templates for the synthesis of all proteins, including key enzymes, scaffold proteins, hormones, etc. Because a low error rate during transcription is critical, pol II is very selective in nucleotide triphosphate (NTP) loading and incorporation; it also uses proofreading to improve overall transcription accuracy. During RNA transcription, pol II occasionally reverse-translocates—or backtracks—along the growing strand of RNA, correcting any mistakes that have been made. The newly created (3′) end of the RNA strand is extruded from the active center of pol II, allowing the RNA transcript to be checked and repaired.
Pol II assumes one of three major states during the transcription elongation phase. The pre-translocation state occurs when a newly added nucleotide still occupies pol II's nucleotide addition site. In the post-translocation state, the nucleotide addition site is vacant, available for the next NTP. The backtracked state occurs during reverse-translocation and is dominant during nucleotide misincorporation or when pol II runs into DNA damage or other impediments.
The structures of the pre-translocation and post-translocation states were solved in 2001 and 2004. In this research, the structure of the pol II complex in the backtracked state was solved at ALS Beamlines 5.0.2 and 8.2.2.
Using a hybrid containing one mismatched residue at the 3′ end of the RNA, researchers found the last correctly matched residue positioned within the nucleotide addition site, and the mismatched residue located at a novel site called 'P' for proofreading. The mismatched residue's interaction with pol II distorts the RNA–DNA helix, making forward transcription difficult. The enzyme's equilibrium shifts toward the backtracked state.
One of two important conclusions of this research is that pol II backtracked by one residue is stable, even reversible. In the course of backtracking, pol II stalls at this position, supporting the idea that there is equilibrium between forward and backward motion during transcription. This confirms that backtracking one residue is preferable to going back several residues, which can lead to arrest (irreversible backtracking). Recovery from arrest is only possible by cleaving the transcript and excising the misincorporated nucleotide(s).
The second conclusion is that the distorted helix that causes pol II to backtrack one residue allows for cleavage by elongation factor IIS (TFIIS) and for intrinsic cleavage (cleavage without TFIIS). However, the one-residue backtracked state is more readily cleaved in the presence of TFIIS, which rescues the complex from arrest and releases a dinucleotide. This strengthens the theory that cleavage occurs in the pol II active site, and that such cleavage is important for removal of misincorporated nucleotides.
In summary, pol II's forward movement along a DNA template is driven by NTP loading during normal transcription elongation—unless a mismatch causes the RNA–DNA helix to distort, shifting the polymerase into the backtracked state. If it remains in the backtracked state for too long, cleavage ensues. The one-residue backtracked state is a key contributor to pol II's proofreading ability, and plays an important role in increasing the fidelity of RNA polymerase.
Research conducted by D. Wang, D.A. Bushnell, X. Huang, K.D. Westover, M. Levitt, and R.D. Kornberg (Stanford University School of Medicine).
Research funding: National Institutes of Health. Operation of the ALS is supported by the U.S. Department of Energy, Office of Basic Energy Sciences.
Publication about this research: D. Wang, D.A. Bushnell, X. Huang, K.D. Westover, M. Levitt, and R.D. Kornberg, "Structural basis of transcription: Backtracked RNA polymerase II at 3.4 angstrom resolution," Science 324, 5931 (2009).