An electron is a subatomic particle of spin 1/2. It couples to photons and is therefore electrically charged. It is a lepton with a rest mass of 9.109 × 10⁻³¹ kg and an electric charge of −1.602 × 10⁻¹⁹ C, the smallest known charge possible for an isolated particle (confined quarks carry fractional charges). The magnitude of the electron's charge, e, is used as a unit of charge in much of physics.
Electron pairs within an orbital have opposite spins because of the Pauli exclusion principle; this characteristic spin pairing is what allows two electrons to exist in the same quantum orbital, as the opposing magnetic dipole moments induced by the two electrons ensure that they are attracted together.
Current theories consider the electron as a point particle, as no evidence for internal structure has been observed.
As a theoretical construct, the electron has been able to explain other observed phenomena, such as the shell-like structure of the atom, the energy distribution around an atom, and energy beams (electron and positron beams).
Researchers at New Jersey Institute of Technology (NJIT) have developed an inexpensive solar cell that can be painted or printed on flexible plastic sheets.
“Someday, homeowners will even be able to print sheets of these solar cells with inexpensive home-based inkjet printers. Consumers can then slap the finished product on a wall, roof or billboard to create their own power stations,” said Somenath Mitra, Ph.D., lead researcher, professor and acting chair of NJIT’s Department of Chemistry and Environmental Sciences.
Harvesting energy directly from abundant solar radiation using solar cells is increasingly emerging as a major component of future global energy strategy, Mitra said. Yet, when it comes to harnessing renewable energy, challenges remain.
Expensive, large-scale infrastructure, such as windmills or dams, is necessary to exploit renewable energy sources such as wind or hydroelectric power. Purified silicon, which is also used for making computer chips and continues to rise in demand, is a core material for fabricating conventional solar cells. However, the processing of a material such as purified silicon is beyond the reach of most consumers.
“Developing organic solar cells from polymers, however, is a cheap and potentially simpler alternative,” Mitra said. “We foresee a great deal of interest in our work because solar cells can be inexpensively printed or simply painted on exterior building walls and/or rooftops. Imagine some day driving in your hybrid car with a solar panel painted on the roof, which is producing electricity to drive the engine. The opportunities are endless.”
The solar cell developed at NJIT uses a carbon nanotube complex, a molecular configuration of carbon in a cylindrical shape. Although a nanotube is estimated to be 50,000 times thinner than a human hair, a single nanotube can conduct current better than any conventional electrical wire.
Mitra and his research team took the carbon nanotubes and combined them with tiny carbon fullerenes (sometimes known as buckyballs) to form snake-like structures. Buckyballs trap electrons, although they can’t make electrons flow. Add sunlight to excite the polymers, and the buckyballs will grab the electrons. Nanotubes, behaving like copper wires, then make the electrons flow as a current.
“Someday, I hope to see this process become an inexpensive energy alternative for households around the world,” Mitra said.
Satellites are tracing Europe's forest fire scars
Burning with a core heat approaching 800°C and spreading at up to 100 metres per minute, woodland blazes bring swift, destructive change to landscapes: the resulting devastation can be seen from space. An ESA-backed service to monitor European forest fire damage will help highlight areas most at risk of future outbreaks.
Last year's long, hot summer brought a bumper crop of forest fires, with more than half a million hectares of woodland destroyed across Mediterranean Europe. So far this year fresh fires have occurred across Portugal, Spain and southern France, with 2500 people evacuated from blazes in foothills north of Marseille.
According to the European Commission, each hectare of forest lost to fire costs Europe's economy between 1000 and 5000 euros.
The distinctive 'burn scars' left across the land by forest fires can be identified from space by a specific reddish-brown spectral signature in false-colour composites of spectral bands from optical sensors in the short-wavelength infrared, near-infrared and visible channels.
A new ESA-backed, Earth Observation-based service is making use of this fact, employing satellite imagery from SPOT and Landsat to automatically detect the 2004 burn scars within fire-prone areas of the Entente region of Southwest France, within the Puglia and Marche regions of Italy and across the full territory of Spain.
Burn scar detection is planned to take place on a seasonal basis, identifying fires covering at least one hectare to a standard resolution of 30 metres, with detailed damage assessment available to a maximum resolution of 2.5 metres using the SPOT 5 satellite.
Partner users include Italy's National Civil Protection Department, Spain's Dirección general para la Biodiversidad – a directorate of the Environment Ministry that supports regional fire-fighting activities with more than 50 aircraft operating from 33 airbases – as well as France's National Department of Civil Protection (DDSC) and the country's Centre D'Essais Et De Recherce de l'Entente (CEREN), the test and research centre of the government organisation tasked with combating forest fires, known as the Entente Interdépartementale.
"To cope with fire disasters, the most affected Departments in the south of France have decided to join forces to ensure effective forest fire protection," explained Nicolas Raffalli of CEREN. "Within the Entente region we have an existing fire database called PROMETHEE, which is filled out either by firemen, forestry workers or policemen across the 13 Departments making up the region."
Current methods of recording fire damage vary greatly by country or region. The purpose of this new service – part of a portfolio of Earth Observation services known as Risk-EOS – is to develop a standardised burn scar mapping methodology for use throughout Europe, along with enabling more accurate post-fire damage assessment and analysis of vegetation re-growth and manmade changes within affected areas.
"We want to link up PROMETHEE with this burn scar mapping product from Risk-EOS to have a good historical basis of information," Raffalli added. "The benefit is that it makes possible a much more effective protection of the forest."
Characterising the sites of past fires to a more thorough level of detail should mean that service users can better forecast where fires are most likely to break out in future, a process known as risk mapping.
Having been validated and geo-referenced, burn scar maps can then be easily merged with other relevant geographical detail. The vast majority of fires are started by the actions of human beings, from discarding cigarette butts up to deliberate arson. Checking burn scar occurrences against roads, settlements and off-road tracks is likely to throw up correlations.
These can be extrapolated elsewhere to help identify additional areas at risk where preventative measures should be prioritised. And overlaying burn scar maps with a chart of forest biomass has the potential to highlight zones where new blazes would burn the fiercest. Once such relatively fixed environmental elements, known as static risks, are factored in, other aspects that change across time – including temperature, rainfall and vegetation moisture – can be addressed. These variables are known as dynamic risks. At the end of the risk mapping process, the probability of fire breaking out in a particular place and time can be reliably calculated.
The Risk-EOS burn scar mapping service began last year. The intention is to develop further fire-related services by the end of 2007, including daily risk maps combining EO with meteorological and vegetation data.
Another planned service will identify 'hot spots' during fires, and map fire events twice a day, permitting an overall assessment of their development and the damage being done. A 'fires memory atlas' set up at national or regional level will allow the routine sharing of all information related to forest fire events and fire risk.
"For the future I think near-real time fire and hot spot mapping would obviously be extremely useful," Raffalli concluded. "With these products those managing the situation could see where the fire is, as well as the hot spots inside it. They can then deploy ground and aerial resources with maximum efficiency."
Building on ITALSCAR
Italy's National Civil Protection Department is providing advice on the implementation of the Risk-EOS service, based on previous experience with an ESA Data User Programme (DUP) project called ITALSCAR.
Run for ESA by the Italian firms Telespazio una Societá Finmeccanica and Vitrociset, ITALSCAR charted burn scars across the whole of Italian territory occurring between June and September during the years 1997, 1998, 1999 and 2000.
For the last quarter of a century, Italian legislation has required that all burned areas be recorded and mapped: no land-use change is permitted on such terrain for 15 years after a blaze, no new building construction for the next ten years, and no new publicly funded reforestation for a half-decade.
However the mapping of burn scars is the responsibility of local administration and their methodologies and overall effectiveness are highly variable. No central cartographic archive of burn scar perimeters exists: the closest equivalent is a cardset index (Anti Incendio Boschivi or AIB) recording fire-fighting interventions by the Italian Forest Guards.
The ITALSCAR burn scar maps were produced across a wide variety of different forest classes. Burn scars were mapped pixel by pixel using an automated software system, followed up with manual photo-interpretation for quality assurance. To ensure confidence in the results they were validated using ground surveys and checked against reports from local fire brigades and Forest Guards' AIB records.
The Risk-EOS burn scar mapping service is based around this same methodology.
Managed by Astrium, Risk-EOS also incorporates services for flood as well as fire risk management. It forms part of the Services Element of Global Monitoring for Environment and Security (GMES), an initiative supported jointly by ESA and the European Commission and intended to establish an independent European capability for worldwide environmental monitoring on an operational basis.
Math is the basis for music, but for those of us who aren’t virtuosic at either, the connection isn’t always easy to grasp. Which is what makes the videos of Vi Hart, a “mathemusician” with a dedicated YouTube following, so wonderful. Hart explains complex phenomena--from cardioids to Carl Gauss--using simple (and often very funny) means.
As Maria Popova pointed out yesterday, Hart’s latest video is a real doozy. In it, she uses a music box and a Möbius strip to explain space-time, showing how the two axes of musical notation (pitch and tempo) correspond to space and time. Using the tape notation as a model for space-time, she cuts and folds it to show the finite ways you can slice and dice the axes. Then, she shows us how you can loop the tape into a continuous strip of twinkling notes:
If you fold space-time into a Möbius strip, you get your melody, and then the inversion, the melody played upside down. And then right side up again. And so on. So rather than folding and cutting up space-time, just cut and tape a little loop of space-time, to be played over, and over.
It’s a pretty magical observation, and it makes even me--the prototypical math dunce--wish I’d tried harder. Yet there’s still time: Hart works for the Khan Academy, a nonprofit that offers free educational videos about math, biology, and more. Check it out.
[H/t Brain Pickings]
14 October 2005
GSA Release No. 05-37
FOR IMMEDIATE RELEASE
Mars' Climate in Flux: Mid-Latitude Glaciers
New high-resolution images of mid-latitude Mars are revealing glacier-formed landscapes far from the Martian poles, says a leading Mars researcher.
Conspicuous trains of debris in valleys, arcs of debris on steep slopes and other features far from the polar ice caps bear striking similarities to glacial landscapes of Earth, says Brown University's James Head III. When combined with the latest climate models and orbital calculations for Mars, the geological features make a compelling case for Mars having ongoing climate shifts that allow ice to leave the poles and accumulate at lower latitudes.
"The exciting thing is a real convergence of these things," said Head, who will present the latest Mars climate discoveries on Sunday, 16 October, at the Annual Meeting of the Geological Society of America in Salt Lake City (specific time and location provided below).
"For decades people have been saying that deposits at mid and equatorial latitudes look like they are ice-created," said Head. But without better images, elevation data and some way of explaining it, ice outside of Mars' polar regions was a hard sell.
Now high-resolution images from the Mars Odyssey spacecraft's Thermal Emission Imaging System combined with images from the Mars Global Surveyor spacecraft's Mars Orbiter Camera and Mars Orbiter Laser Altimeter can be compared directly with glacier features in mountain and polar regions of Earth. The likenesses are hard to ignore.
For instance, consider what Head calls "lineated valley fill." These are lines of debris on valley floors that run downhill and parallel to the valley walls, as if they mark some sort of past flow. The same sorts of lines of debris are seen in aerial images of Earth glaciers. The difference is that on Mars the water ice sublimes away (goes directly from solid ice to gas, without any liquid phase between) and leaves the debris lines intact. On Earth the lines of debris are usually washed away as a glacier melts.
The lines of debris on Mars continue down valleys and converge with other lines of debris - again, just like what's seen on Earth where glaciers converge.
"There's so much topography and the debris is so thick (on Mars) that it's possible some of the ice might still be there," said Head. The evidence for present day ice includes unusually degraded recent impact craters in these areas - just what you'd expect to see if a lot of the material ejected from the impact was ice that quickly sublimed away.
Another peculiarly glacier-like feature seen in Martian mid-latitudes are concentric arcs of debris breaking away from steep mountain alcoves - just as they do at the heads of glaciers on Earth.
As for how ice could reach Mars' lower latitudes, orbital calculations indicate that Mars may slowly wobble on its spin axis far more than Earth does (the Moon minimizes Earth's wobble). This means that when Mars' axis tilted to the extremes - up to 60 degrees from the plane of Mars' orbit - the Martian poles got a whole lot more sunshine in the summertime than they do now. That extra sun would likely sublime water from the polar ice caps, explains Head.
"When you do that you are mobilizing a lot of ice and redistributing it to the equator," Head said. "The climate models are saying it's possible."
It's pure chance that we happen to be exploring Mars when its axis is at a lesser, more Earth-like tilt. This has led to the false impression of Mars being a place that's geologically and climatically dead. In fact, says Head, Mars is turning out to be a place that is constantly changing.
WHEN AND WHERE
Lineated Valley Fill at the Dichotomy Boundary on Mars: Evidence for Regional Mid-Latitude Glaciation
Sunday, 16 October, 3:15 p.m. MDT, Salt Palace Convention Center Room 257
View abstract: http://gsa.confex.com/gsa/2005AM/finalprogram/abstract_94125.htm
During the Geological Society of America Annual Meeting, 16-19 October, contact Ann Cairns at the GSA Newsroom, Salt Palace Convention Center, for assistance and to arrange for interviews: +1-801-534-4770.
- After the meeting contact:
- James Head III
- Department of Geological Sciences
- Brown University, Providence, RI
- Phone: +1-401-863-2526
- E-mail: James_Head_III@brown.edu
The clock Command
The clock command has facilities for getting the current time, formatting time values, and scanning printed time strings to get an integer time value. The clock command was added in Tcl 7.5. Table 13-1 summarizes the clock command:
Table 13-1. The clock command.
| Command | Description |
|---|---|
| clock clicks | A system-dependent high-resolution counter. |
| clock format value ?-format str? | Formats a clock value according to str. |
| clock scan string ?-base clock? ?-gmt boolean? | Parses a date string and returns a seconds value. The clock value determines the date. |
| clock seconds | Returns the current time in seconds. |
The following command prints the current time:
clock format [clock seconds]
=> Sun Nov 24 14:57:04 1996
The clock seconds command returns the current time, in seconds since a starting epoch. The clock format command formats an integer value into a date string. It takes an optional argument that controls the format. The format string contains % keywords that are replaced with the year, month, day, date, hours, minutes, and seconds, in various formats. The default string is:
%a %b %d %H:%M:%S %Z %Y
Tables 13-2 and 13-3 summarize the clock formatting strings:
Table 13-2. Clock formatting keywords.
| Keyword | Description |
|---|---|
| %% | Inserts a %. |
| %a | Abbreviated weekday name (Mon, Tue, etc.). |
| %A | Full weekday name (Monday, Tuesday, etc.). |
| %b | Abbreviated month name (Jan, Feb, etc.). |
| %B | Full month name. |
| %c | Locale-specific date and time (e.g., Nov 24 16:00:59 1996). |
| %d | Day of month (01–31). |
| %H | Hour in 24-hour format (00–23). |
| %I | Hour in 12-hour format (01–12). |
| %j | Day of year (001–366). |
| %m | Month number (01–12). |
| %M | Minute (00–59). |
| %p | AM/PM indicator. |
| %S | Seconds (00–59). |
| %U | Week of year (00–52) when Sunday starts the week. |
| %w | Weekday number (Sunday = 0). |
| %W | Week of year (01–52) when Monday starts the week. |
| %x | Locale-specific date format (e.g., Feb 19 1997). |
| %X | Locale-specific time format (e.g., 20:10:13). |
| %y | Year without century (00–99). |
| %Y | Year with century (e.g., 1997). |
| %Z | Time zone name. |
Table 13-3. UNIX-specific clock formatting keywords.
| Keyword | Description |
|---|---|
| %D | Date as %m/%d/%y (e.g., 02/19/97). |
| %e | Day of month (1–31), no leading zeros. |
| %h | Abbreviated month name. |
| %n | Inserts a newline. |
| %r | Time as %I:%M:%S %p (e.g., 02:39:29 PM). |
| %R | Time as %H:%M (e.g., 14:39). |
| %t | Inserts a tab. |
| %T | Time as %H:%M:%S (e.g., 14:34:29). |
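For example, keywords from these tables can be combined with literal characters in a custom format string. The output shown assumes the same moment as the earlier example:
clock format [clock seconds] -format "%Y-%m-%d %H:%M:%S"
=> 1996-11-24 14:57:04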
The clock clicks command returns the value of the system's highest resolution clock. The units of the clicks are not defined. The main use of this command is to measure the relative time of different performance tuning trials. The following command counts the clicks per second over 10 seconds, which will vary from system to system:
Example 13-1 Calculating clicks per second.
set t1 [clock clicks]
after 10000 ;# See page 218
set t2 [clock clicks]
puts "[expr ($t2 - $t1)/10] Clicks/second"
=> 1001313 Clicks/second
The clock scan command parses a date string and returns a seconds value. The command handles a variety of date formats. If you leave off the year, the current year is assumed.
Year 2000 Compliance
Tcl implements the standard interpretation of two-digit year values, which is that 70–99 are 1970–1999 and 00–69 are 2000–2069. Versions of Tcl before 8.0 did not properly deal with two-digit years in all cases. Note, however, that Tcl is limited by your system's time epoch and the number of bits in an integer. On Windows, Macintosh, and most UNIX systems, the clock epoch is January 1, 1970. A 32-bit integer can count enough seconds to reach forward into the year 2037, and backward to the year 1903. If you try to clock scan a date outside that range, Tcl will raise an error because the seconds counter will overflow or underflow. In this case, Tcl is just reflecting limitations of the underlying system.
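For example, a two-digit year below 70 is parsed as falling in the twenty-first century (illustrative; the time of day defaults to midnight):
clock format [clock scan "1/1/01"]
=> Mon Jan 01 00:00:00 2001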
If you leave out a date, clock scan assumes the current date. You can also use the -base option to specify a date. The following example uses the current time as the base, which is redundant:
clock scan "10:30:44 PM" -base [clock seconds]
The date parser allows these modifiers: year, month, fortnight (two weeks), week, day, hour, minute, second. You can put a positive or negative number in front of a modifier as a multiplier. For example:
clock format [clock scan "10:30:44 PM 1 week"]
=> Sun Dec 01 22:30:44 1996
clock format [clock scan "10:30:44 PM -1 week"]
=> Sun Nov 17 22:30:44 1996
You can also use tomorrow, yesterday, today, now, last, this, next, and ago, as modifiers.
clock format [clock scan "3 years ago"]
=> Wed Nov 24 17:06:46 1993
Both clock format and clock scan take a -gmt option that uses Greenwich Mean Time. Otherwise, the local time zone is used.
clock format [clock seconds] -gmt true
=> Sun Nov 24 09:25:29 1996
clock format [clock seconds] -gmt false
=> Sun Nov 24 17:25:34 1996
The data on species level is structured in four areas (see picture below):
1. At the top in light yellow, the species' name is shown together with, when applicable, its IUCN code (click on the code and you will be redirected to IUCN's webpage with detailed information about this threatened species) and, if you have ticked the species, a green tick to the right.
2. In the rich yellow field you also have the species name and a scroll function up (left) or down (right) the sequence of the chosen checklist (click on Filter if you want to change the active checklist).
3. Below the yellow field, the taxonomic tree down to the chosen level is shown (click on any higher level to get a new selection of species groups).
4. The submenu in black shows the information sets available:
* Info - species info including a distribution map, a photo and, if applicable, subspecific information and taxonomic notes
* Names [# of] - shows the species' name in different languages (recommended as well as optional names) and within brackets # of names
* Photo [# of] - all photos on the GT Network of this species and within brackets # of photos
* Distribution - a distribution map and countries where this particular species/subspecies has been recorded and also its status
* Who X - list of GT members that have ticked the species and in which countries
* My ticks [# of] - my own ticks on country level and within brackets # of ticks
* My notes [*] - a free text field where you can save your personal notes related to this species; if you have saved information you will have a [*] marker
* Literature - in which book and on which plate is the taxon depicted (this is work-in-progress so not many references so far...)
* xeno-canto - click and you will be redirected to xeno-canto's website to hear voice recordings of the species
* Wikipedia - click on the icon and you will be redirected to Wikipedia's website
* Google images - click on the icon and you will be redirected to Google's website
A recursive function typically contains a conditional expression which has three parts: a do-again-test that determines whether the function should be evaluated again, a call to the function by its own name, and a next-step-expression whose value becomes the argument of that recursive call.
Recursive functions can be much simpler than any other kind of function. Indeed, when people first start to use them, they often look so mysteriously simple as to be incomprehensible. Like riding a bicycle, reading a recursive function definition takes a certain knack which is hard at first but then seems simple.
There are several different common recursive patterns. A very simple pattern looks like this:
(defun name-of-recursive-function (argument-list)
  "documentation..."
  (if do-again-test
      body...
    (name-of-recursive-function
     next-step-expression)))
Each time a recursive function is evaluated, a new instance of it is created; the arguments tell the instance what to do.
An argument is bound to the value of the next-step-expression. Each instance runs with a different value of the next-step-expression.
The value in the next-step-expression is used in the do-again-test.
The value returned by the next-step-expression is passed to the new instance of the function, which evaluates it (or some transmogrification of it) to determine whether to continue or stop. The next-step-expression is designed so that the do-again-test returns false when the function should no longer be repeated.
The do-again-test is sometimes called the stop condition, since it stops the repetitions when it tests false.
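A minimal concrete instance of this pattern (an illustrative function, not an example from the manual; when is used here so that the body and the recursive call can sit together):

(defun print-countdown (number)
  "Print the integers from NUMBER down to 1.  Uses recursion."
  (when (> number 0)                  ; do-again-test
    (print number)                    ; body
    (print-countdown (1- number))))   ; (1- number) is the next-step-expression

Evaluating (print-countdown 3) prints 3, 2 and 1; each new instance receives a smaller value of number until the do-again-test (> number 0) returns false and the repetition stops.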
This is a measure of the brightness of a celestial object. The lower the value, the brighter the object, so magnitude -4 is brighter than magnitude 0, which is in turn brighter than magnitude +4. The scale is logarithmic, and a difference of 5 magnitudes means a brightness difference of exactly 100 times. A difference of one magnitude corresponds to a brightness difference of around 2.51 (the fifth root of 100).
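In symbols (a standard restatement of the definition above, not part of the original entry), the magnitudes and apparent brightnesses $F_1$, $F_2$ of two objects are related by

$$m_1 - m_2 = -2.5 \log_{10}\frac{F_1}{F_2}, \qquad\text{equivalently}\qquad \frac{F_1}{F_2} = 100^{(m_2 - m_1)/5}.$$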
The system was started by the ancient Greeks, who divided the stars into one of six magnitude groups with stars of the first magnitude being the first ones to be visible after sunset. In modern times, the scale has been extended in both directions and more strictly defined.
Examples of magnitude values for well-known objects are:

| Object | Magnitude |
|---|---|
| Sun | -26.7 (about 400,000 times brighter than the full Moon!) |
| Brightest Iridium flares | -8 |
| Venus (at brightest) | -4.4 |
| International Space Station | -2 |
| Sirius (brightest star) | -1.44 |
| Limit of human eye | +6 to +7 |
| Limit of 10x50 binoculars | +9 |
| Limit of Hubble Space Telescope | +30 |
Scientists get further evidence that Mars once had oceans
Mars, our neighbor, has long filled the dreams of science fiction writers and astronomers alike: the former write about the life that could have lived on Mars, and still might, while the latter seek to prove that there might actually have been life on the red planet eons ago.
Part of proving that idea is being able to show that there was water on the surface of Mars, water that would have been the foundation of life, just as it is here on Earth.
To help establish whether there was, or even still is, water on Mars, the European Space Agency (ESA) Mars Express spacecraft, which carries the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS), has detected sediment on the planet of the type you would find on an ocean floor.
It is within the boundaries of features tentatively identified in images from various spacecraft as shorelines that MARSIS detected sedimentary deposits reminiscent of an ocean floor.
“MARSIS penetrates deep into the ground, revealing the first 60 – 80 meters (197 – 262 ft) of the planet’s subsurface,” says Wlodek Kofman, leader of the radar team at the Institut de Planétologie et d’Astrophysique de Grenoble (IPAG). “Throughout all of this depth, we see the evidence for sedimentary material and ice.”
The sediments detected by MARSIS are areas of low radar reflectivity, which typically indicates low-density granular materials that have been eroded away by water and carried to their final resting place.
Scientists are interpreting these sedimentary deposits, which may still be ice-rich, as another indication that there was once an ocean in this spot.
At this point scientists have proposed that there were two main oceans on the planet: one around 4 billion years ago and a second around 3 billion years ago.
For the scientists, the MARSIS findings provide some of the best evidence yet that Mars did have large bodies of water on its surface and that the water played a major role in the planet's geological history.
The Current Surface Analysis map shows current weather conditions, including frontal and high/low pressure positions, satellite infrared (IR) cloud cover, and areas of precipitation. A surface weather analysis is a special type of weather map that provides a view of weather elements over a geographical area at a specified time, based on information from ground-based weather stations. Weather maps are created by plotting or tracing the values of relevant quantities such as sea level pressure, temperature, and cloud cover onto a geographical map to help find synoptic-scale features such as weather fronts.

The first weather maps in the 19th century were drawn well after the fact to help devise a theory on storm systems. After the advent of the telegraph, simultaneous surface weather observations became possible for the first time, and beginning in the late 1840s, the Smithsonian Institution became the first organization to draw real-time surface analyses. Use of surface analyses began first in the United States, spreading worldwide during the 1870s. Use of the Norwegian cyclone model for frontal analysis began in the late 1910s across Europe, with its use finally spreading to the United States during World War II.

Surface weather analyses have special symbols which show frontal systems, cloud cover, or other important information. For example, an H may represent high pressure, implying good and fair weather. An L, on the other hand, may represent low pressure, which frequently accompanies precipitation. Various symbols are used not just for frontal zones and other surface boundaries on weather maps, but also to depict the present weather at various locations on the weather map. Areas of precipitation help determine the frontal type and location.
by Dave Phillips
OpenAL, the Open Audio Library, is an initiative from Creative Labs and Loki Entertainment designed to provide a cross-platform open source solution for programming 2D and 3D audio. It is licensed under the GNU Lesser General Public License (LGPL), with current implementations supporting Windows, the Macintosh OS, Linux, FreeBSD, OS/2, and BeOS. The OpenAL API has been designed for portability of applications between supported platforms, particularly games and other multimedia applications using OpenGL for 3D graphics.
As its name implies, OpenAL is analogous in many ways to SGI's OpenGL, a widely implemented standard for specifying high-quality 3D graphics (see Chris Halsall's article for more information regarding OpenGL). The analogy extends far beyond the name: Many of the design considerations for OpenAL are derived from similar considerations for the visual effects possible from OpenGL, particularly with regard to movement in three dimensions and proximity-dependent texture variance. Because the OpenAL API is so similar to OpenGL, programmers employing OpenGL for graphics can more easily bind sonic activity to visuals, leading to exciting possibilities for games and other graphics-intensive applications.
As with OpenGL, a little OpenAL code does a lot. Developers can simply place their sounds into a scene and let OpenAL render the changes of the sounds relative to the positional changes of the listener.
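For a flavor of what that looks like in practice, here is a minimal C sketch using core OpenAL calls (buffer creation, error checking and the audio data itself are omitted, so treat it as illustrative rather than a complete program):

#include <AL/al.h>
#include <AL/alc.h>

int main(void)
{
    /* Open the default output device and make a rendering context current. */
    ALCdevice  *device  = alcOpenDevice(NULL);
    ALCcontext *context = alcCreateContext(device, NULL);
    alcMakeContextCurrent(context);

    /* Place the listener at the origin of the scene. */
    alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);

    /* Create a source, position it in the scene, and play it; OpenAL
       renders attenuation, panning and Doppler from these positions. */
    ALuint source;
    alGenSources(1, &source);
    alSource3f(source, AL_POSITION, 2.0f, 0.0f, -1.0f);
    alSource3f(source, AL_VELOCITY, 0.0f, 0.0f, 0.0f);
    /* ... attach a PCM buffer here with alSourcei(source, AL_BUFFER, buffer) ... */
    alSourcePlay(source);

    /* Tear down. */
    alDeleteSources(1, &source);
    alcMakeContextCurrent(NULL);
    alcDestroyContext(context);
    alcCloseDevice(device);
    return 0;
}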
The programming interface is hardware-independent. It can be deployed on virtually any soundcard usable on the supported platforms, though of course its potential will be most fully realized on cards with multichannel audio output. The API is a relatively higher-level interface that provides a communication protocol with the sound card driver. For Linux users it should make no difference whether the card driver comes from the kernel sources, ALSA, or OSS/Linux.
The OpenAL library is designed to act in coordination with the low-level routines of the driver. At this time the API and library focus only on PCM audio, although it is possible that future revisions will address CD audio and hardware MIDI synthesis.
OpenAL also follows recommendations put forth by the Interactive Audio Special Interest Group (IASIG). The current API has been written to accommodate at least the following IASIG Level 1 guidelines:
- Distance-based attenuation -- the strengthening or weakening of a sound's dynamic level as it approaches or leaves the listener.
- Position-based panning -- the location of a sound is calculated relative to the listener, not merely shifting between speakers.
- Doppler effects -- the perceived rise and fall of a sound's pitch as a source approaches and leaves the listener.
- Sound radiation -- control of a sound's dispersion through the acoustic field.
The author would like to thank Joseph I. Valenzuela, Michael Vance, Bernd Kreimeier, Fotis Hadginikos, and Derrick Story for their vast assistance. This series of articles could not have been written without their help.
The IASIG Level 2 guidelines specify a set of environment parameters for reverberation. Work has already begun to incorporate those parameters into the OpenAL API.
The current OpenAL API reference can be found in the openal/docs directory, but it is in SGML format. You will need the DocBook tools to compile the API documentation into readable HTML. The reference documentation is also available on-line here. I should emphasize that the documentation is directed only to developers at this time.
As an open source project with corporate blessings, OpenAL seems assured of widespread implementation. It offers an open source solution to the problem of highly portable cross-platform support for 3D audio in games and other multimedia applications, making it of great interest to developers and end-users alike. Even while the 1.0 specification is being ratified, some developers are already employing OpenAL's services.
OpenAL is not without its contenders, but those solutions are proprietary or locked into a single architecture. OpenAL is already a working multi-platform interface for audio (especially 3D audio) services, and with hardware acceleration, OpenAL could revolutionize computer audio in the same way as OpenGL revolutionized computer graphics, an exciting prospect indeed.
As a final reminder, please note that OpenAL is a community effort, and community involvement is encouraged. See the OpenAL Web site for complete details on getting involved in the project.
In the last article we will take a deeper look at the internals of the API, but first we'll discuss 3D audio and then see what real-world applications have already employed the OpenAL specification.
Range: Vancouver to Baja California. Depth: 6-18 (38) m.
The Sea Grape
Commonly known as "sea grapes," Botryocladia (botryo=grape,
cladia=branches) pseudodichotoma is an abundant member of the RHODOPHYTA
(red algae). The following phylogeny consists of links to list of common
characteristics which justify Botryocladia's inclusion:
- thallus is 10-30 cm tall
- elongate, pyriform (pear-shaped), sacchate (sack-like) branches
- sacchate branches are 4-7 cm long and 6-25 mm in diameter
- branches contain a colorless, acidic polysaccharide and protein mucilage which makes them buoyant and therefore better able to compete for light
- 3 cell layers:
  - pigmented cortical cells
  - unpigmented, medium-sized gelatinous cells
  - unpigmented, large gelatinous medullary cells, with specialized gland cells clustered in groups of 10-20 on their inward-facing surface; in B. pseudodichotoma these gland cells are noticeably smaller than their neighbors. The secretory cells are easy to view under a microscope in cross-sections cut with a razor blade.
As with all Florideophyceae, B. pseudodichotoma has a tri-phasic life cycle:
- Cells of the diploid tetrasporophyte undergo meiosis to create cruciate tetraspores (3.88 million/day). Each of the four spores can grow into a haploid gametophyte (male or female).
- A mature male gametophyte emits spermatia, which fertilize cells on the female gametophyte. Where fertilization has succeeded, a diploid carposporophyte grows on the female gametophyte.
- The carposporophyte has a pore opening to the outside through which it releases diploid carpospores. These carpospores settle and grow into new tetrasporophytes, completing the cycle.
The genus Solenopsis includes both the "fire ants", known for their aggressive nature and potent sting, and the minute "thief ants", many of which are lestobiotic subterranean or arboreal species that are rarely collected. Many species may be polygynous.
Generic-level identification of Solenopsis is relatively straightforward, although sizes are greatly variable, ranging from approximately 1.0 mm to over 4.0 mm. The genus can be basically characterized by the following: mandible with four teeth (usually); bicarinate clypeus with 0-5 teeth; median part of clypeus with a pair of longitudinal carinae medially or at the lateral edges; 10-segmented antennae terminating in a distinctive 2-segmented club; overall shiny appearance and general lack of or reduced sculpture (when present, usually restricted to rugulae or striae on the head, alitrunk, petiole, and postpetiole); lack of propodeal spines or other protuberances on the alitrunk; well-developed petiole and postpetiole; and a well-developed sting. Workers are either polymorphic (especially in the fire ant group) or monomorphic (especially thief ants). The thief ant group shares these characteristics, but workers are minute (usually under 2.0 mm in total length), usually have minute eyes (usually with only 1-5 ommatidia, rarely more than 18, except for S. globularia in our region), and have minor funicular segments 2-3 typically wider than long (usually longer than wide in the fire ant group).
Joined: 16 Mar 2004
Posted: Tue Aug 04, 2009 2:40 pm    Post subject: Immune Responses Jolted into Action by Nanohorns
The immune response triggered by carbon nanotube-like structures could be harnessed to help treat infectious diseases and cancers, say researchers.
The way tiny structures like nanotubes can trigger sometimes severe immune reactions has troubled researchers trying to use them as vehicles to deliver drugs inside the body in a targeted way.
White blood cells can efficiently detect and capture nanostructures, so much research is focused on allowing nanotubes and similar structures to pass unmolested in the body.
But a French-Italian research team plans to use nanohorns, a cone-shaped variety of carbon nanotubes, to deliberately provoke the immune system.
They think that the usually unwelcome immune response could kick-start the body into fighting a disease or cancer more effectively.
To test their theory, Alberto Bianco and Hélène Dumortier at the CNRS Institute in Strasbourg, France, in collaboration with Maurizio Prato at the University of Trieste, Italy, gave carbon nanohorns to mouse white blood cells in a Petri dish. The macrophage cells' job is to swallow foreign particles.
After 24 hours, most of the macrophages had swallowed some nanohorns. But they had also begun to release reactive oxygen compounds and other small molecules that signal to other parts of the immune system to become more active.
The researchers think they could tune that cellular distress call to a particular disease or cancer, by filling the interior of nanohorns with particular antigens, like ice cream filling a cone.
"The nanohorns would deliver the antigen to the macrophages while also triggering a cascade of pro-inflammatory effects," Dumortier says. "This process should initiate an antigen-specific immune response."
"There is still a long way to go before this interesting approach might become safe and effective," says Ruth Duncan at Cardiff University , UK . "Safety would ultimately depend on proposed dose, the frequency of dose and the route of administration," she says.
Dumortier agrees more work is needed, but adds that the results so far suggest that nanohorns are less toxic to cells than normal nanotubes can be. "No sign of cell death was visible upon three days of macrophage culture in the presence of nanohorns," Dumortier says.
Recent headline-grabbing results suggest that nanotubes much longer than they are wide can cause similar inflammation to asbestos. But nanohorns do not take on such proportions and so would not be expected to have such an effect.
Journal reference: Advanced Materials (DOI: 10.1002/adma.200702753)
Source: New Scientist /...
Atomic oxygen, a corrosive space gas, finds many applications on Earth.
An Atomic Innovation for Artwork
Oxygen may be one of the most common substances on the planet, but recent space research has unveiled a surprising number of new applications for the gas, including restoring damaged artwork.
It all started with a critical problem facing would-be spacecraft: the gasses just outside the Earth’s atmosphere are highly corrosive. While most oxygen atoms on Earth’s surface occur in pairs, in space the pair is often split apart by short-wave solar radiation, producing singular atoms. Because oxygen so easily bonds with other substances, it is highly corrosive in atomic form, and it gradually wears away the protective layering on orbiting objects such as satellites and the International Space Station (ISS).
To combat this destructive gas, NASA recreated it on Earth and applied it to different materials to see what would prove most resistant. The coatings developed through these experiments are currently used on the ISS.
During the tests, however, scientists also discovered applications for atomic oxygen that have since proved a success in the private sector.
Breathing New Life into Damaged Art
In their experiments, NASA researchers quickly realized that atomic oxygen interacted primarily with organic materials. Soon after, they partnered with churches and museums to test the gas’s ability to restore fire-damaged or vandalized art. Atomic oxygen was able to remove soot from fire-damaged artworks without altering the paint.
It was first tested on oil paintings: In 1989, an arson fire at St. Alban’s Episcopal Church in Cleveland nearly destroyed a painting of Mary Magdalene. Although the paint was blistered and charred, atomic oxygen treatment plus a reapplication of varnish revitalized it. And in 2002, a fire at St. Stanislaus Church (also in Cleveland) left two paintings with soot damage, but atomic oxygen removed it.
Buoyed by the successes with oil paints, the engineers also applied the restoration technique to acrylics, watercolors, and ink. At Pittsburgh’s Carnegie Museum of Art, where an Andy Warhol painting, Bathtub, had been kissed by a lipstick-wearing vandal, a technician successfully removed the offending pink mark with a portable atomic oxygen gun. The only evidence that the painting had been treated—a lightened spot of paint—was easily restored by a conservator.
A Genuine Difference-maker
When the successes in art restoration were publicized, forensic analysts who study documents became curious about using atomic oxygen to detect forgeries. They found that it can assist analysts in figuring out whether important documents such as checks or wills have been altered, by revealing areas of overlapping ink created in the modifications.
The gas has biomedical applications as well. Atomic oxygen technology can be used to decontaminate orthopedic surgical hip and knee implants prior to surgery. Such contaminants contribute to inflammation that can lead to joint loosening and pain, or even necessitate removing the implant. Previously, there was no known chemical process that fully removed these inflammatory toxins without damaging the implants. Atomic oxygen, however, can oxidize any organic contaminants and convert them into harmless gases, leaving a contaminant-free surface.
Thanks to NASA’s work, atomic oxygen—once studied in order to keep it at bay in space—is being employed in surprising, powerful ways here on Earth.
Is light made of waves, or particles?
This fundamental question has dogged scientists for decades, because light seems to be both. However, until now, experiments have revealed light to act either like a particle, or a wave, but never the two at once.
Now, for the first time, a new type of experiment has shown light behaving like both a particle and a wave simultaneously, providing a new dimension to the quandary that could help reveal the true nature of light, and of the whole quantum world.
The debate goes back at least as far as Isaac Newton, who advocated that light was made of particles, and James Clerk Maxwell, whose successful theory of electromagnetism, unifying the forces of electricity and magnetism into one, relied on a model of light as a wave. Then in 1905, Albert Einstein explained a phenomenon called the photoelectric effect using the idea that light was made of particles called photons (this discovery won him the Nobel Prize in physics).
Ultimately, there's good reason to think that light is both a particle and a wave. In fact, the same seems to be true of all subatomic particles, including electrons and quarks and even the recently discovered Higgs boson-like particle. The idea is called wave-particle duality, and is a fundamental tenet of the theory of quantum mechanics.
Depending on which type of experiment is used, light, or any other type of particle, will behave like a particle or like a wave. So far, both aspects of light's nature haven't been observed at the same time.
But still, scientists have wondered, does light switch from being a particle to being a wave depending on the circumstance? Or is light always both a particle and a wave simultaneously?
Now, for the first time, researchers have devised a new type of measurement apparatus that can detect both particle and wave-like behavior at the same time. The device relies on a strange quantum effect called quantum nonlocality, a counter-intuitive notion that boils down to the idea that the same particle can exist in two locations at once.
"The measurement apparatus detected strong nonlocality, which certified that the photon behaved simultaneously as a wave and a particle in our experiment," physicist Alberto Peruzzo of England's University of Bristol said in a statement. "This represents a strong refutation of models in which the photon is either a wave or a particle."
Peruzzo is lead author of a paper describing the experiment published in the Nov. 2 issue of the journal Science.
The experiment further relies on another weird aspect of quantum mechanics — the idea of quantum entanglement. Two particles can become entangled so that actions performed on one particle affect the other. In this way, the researchers were able to allow the photons in the experiment to delay the choice of whether to be particles or waves.
MIT physicist Seth Lloyd, who was not involved in the project, called the experiment "audacious" in a related essay in Science, and said that while it allowed the photons to delay the choice of being particles or waves for only a few nanoseconds, "if one has access to quantum memory in which to store the entanglement, the decision could be put off until tomorrow (or for as long as the memory works reliably). So why decide now? Just let those quanta slide!"
Young goats learn new and distinctive bleating "accents" once they begin to socialise with other kids.
The discovery is a surprise because the sounds most mammals make were thought to be too primitive to allow subtle variations to emerge or be learned. The only known exceptions are humans, bats and cetaceans – although many birds, including songbirds, parrots and hummingbirds have legendary song-learning or mimicry abilities.
Now, goats have joined the club. "It's the first ungulate to show evidence of this," says Alan McElligott of Queen Mary, University of London.
McElligott and his colleague, Elodie Briefer, made the discovery using 23 newborn kids. To reduce the effect of genetics, all were born to the same father, but from several mothers, so the kids were a mixture of full siblings plus their half-brothers and sisters.
The researchers allowed the kids to stay close to their mothers, and recorded their bleats at the age of 1 week. Then, the 23 kids were split randomly into four separate "gangs" ranging from five to seven animals. When all the kids reached 5 weeks, their bleats were recorded again. "We had about 10 to 15 calls per kid to analyse," says McElligott.
Some of the calls are clearly different to the human ear, but the full analysis picked out more subtle variations, based on 23 acoustic parameters. What emerged was that each kid gang had developed its own distinctive patois. "It probably helps with group cohesion," says McElligott.
"People presumed this didn't exist in most mammals, but hopefully now, they'll check it out in others," says McElligott. "It wouldn't surprise me if it's found in other ungulates and mammals."
Erich Jarvis of Duke University Medical Center in Durham, North Carolina, says the results fit with an idea he has developed with colleague Gustavo Arriaga, arguing that vocal learning is a feature of many species.
"I would call this an example of limited vocal learning," says Jarvis. "It involves small modifications to innately specified learning, as opposed to complex vocal learning which would involve imitation of entirely novel sounds."
Journal reference: Animal Behaviour, DOI: 10.1016/j.anbehav.2012.01.020
Evolution can fall well short of perfection. Claire Ainsworth and Michael Le Page assess where life has gone spectacularly wrong
THE ascent of Mount Everest's 8848 metres without bottled oxygen in 1978 suggests that human lungs are pretty impressive organs. But that achievement pales in comparison with the feat of the griffon vulture that set the record for the highest recorded bird flight in 1975 when it was sucked into the engine of a plane flying at 11,264 metres.
Birds can fly so high partly because of the way their lungs work. Air flows through bird lungs in one direction only, pumped through by interlinked air sacs on either side. This gives them numerous advantages over lungs like our own. In mammals' two-way lungs, not as much fresh air reaches the deepest parts of the lungs, and incoming air is diluted by the oxygen-poor air that remains after ...
Algorithm Positions Solar Trackers, Movie Stars
March 30, 2011
Math and programming experts at a federal laboratory took an algorithm used to track the stars and rewrote its code to precisely follow the sun, even taking into consideration the vagaries of the occasional leap second.
Now, the algorithm and its software are helping solar power manufacturers build more precise trackers, orchards keep their apples spotless, and movie makers keep the shadows off movie stars.
The Solar Position Algorithm (SPA) was developed at the U.S. Department of Energy's National Renewable Energy Laboratory to calculate the sun's position with unmatched low uncertainty of +/- 0.0003 degrees at vertex, in the period of years from -2000 to 6000 (or 2001 B.C. until just short of 4,000 years from now). That's more than 30 times more precise than the uncertainty levels for all other algorithms used in solar energy applications, which claim no better than +/- 0.01 degrees, and are only valid for a maximum of 50 years. And those uncertainty claims cannot be validated because of the need to add an occasional leap second because of the randomly increasing length of the mean solar day. The SPA does account for the leap second.
That difference in uncertainty levels is no small change, because an error of 0.01 degrees at noon can throw calculations off by 2 or 3 percent at sunrise or sunset, said NREL Senior Scientist Ibrahim Reda, the leader on the project. "Every uncertainty of 1 percent in the energy budget is millions of dollars uncertainty for utility companies and bankers," Reda said. "Accuracy is translated into dollars. When you can be more accurate, you save a lot of money."
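As a rough illustration of how the downloadable C implementation is driven (the spa_data fields, SPA_ZA constant and spa_calculate() follow the spa.h header that ships with NREL's code, but check them against your copy; the input values here are arbitrary):

#include <stdio.h>
#include "spa.h"   /* header distributed with NREL's SPA C code */

int main(void)
{
    spa_data spa;                      /* input/output structure */

    spa.year          = 2011; spa.month = 3; spa.day = 30;
    spa.hour          = 12;   spa.minute = 0; spa.second = 0;
    spa.timezone      = -7.0;          /* hours from UTC */
    spa.delta_ut1     = 0.0;
    spa.delta_t       = 67.0;          /* TT - UT difference, seconds */
    spa.longitude     = -105.18;       /* Golden, Colorado */
    spa.latitude      = 39.74;
    spa.elevation     = 1829.0;        /* metres */
    spa.pressure      = 820.0;         /* millibars, used for refraction */
    spa.temperature   = 11.0;          /* deg C, used for refraction */
    spa.slope         = 0.0;
    spa.azm_rotation  = 0.0;
    spa.atmos_refract = 0.5667;        /* typical refraction at sunrise/sunset */
    spa.function      = SPA_ZA;        /* compute zenith and azimuth only */

    if (spa_calculate(&spa) == 0)      /* 0 means all inputs were valid */
        printf("zenith  %.6f deg\nazimuth %.6f deg\n", spa.zenith, spa.azimuth);
    return 0;
}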
"Siemens Industry Inc. uses NREL's SPA in its newest and smallest S7-1200 compact controller," says Paul Ruland of Siemens Industry, Inc. "Siemens took that very complex calculation, systemized it into our code and made a usable function block that its customers can use with their particular technologies to track the sun in the most efficient way. The end result is a 30 percent increase in accuracy compared to other technologies."
Science, Engineering and Math All Add to Breakthroughs
An algorithm is a set of rules for solving a mathematical problem in a finite number of steps, even though those steps can number in the hundreds or thousands.
NREL is known more for its solar, wind, and biofuel researchers than for its work in advanced math. But algorithms are key to so many scientific and technological breakthroughs today that a scientist well-versed in the math of algorithms is behind many of NREL's big innovations.
Since SPA was published on NREL's website, more than 4,000 users from around the world have downloaded it. In the European Union, for the past three years, it has been the reference algorithm to calculate the sun's position both for solar energy and atmospheric science applications. It has been licensed to, and downloaded by, major U.S. manufacturers of sun trackers, military equipment and cell phones. It has been used to boost agriculture and to help forecast the weather. Archaeologists, universities and religious organizations have employed SPA, as have other national laboratories.
Fewer Dropped Cell-Phone Calls
Billions of cell-phone calls are made each day, and they stay connected only because algorithms help determine exactly when to switch signals from one satellite to another.
Cell-phone companies can use the SPA to know exactly the moments when the phone, satellite, and the bothersome sun are in the same alignment, vulnerable to disconnections or lost calls. "The cell phone guys use SPA to know the specific moment to switch to another satellite so you're not disconnected," said Reda, who has a master's degree in electrical engineering/measurement from the University of Colorado. "Think of how many millions of people would be disconnected if there's too much uncertainty about the sun's position."
From a Tool for Solar Scientists to Widespread Uses
SPA sprang from NREL's need to calibrate solar measuring instruments at its Solar Radiation Research Laboratory. "We characterize the instruments based on the solar angle," Reda said. "It's vital that instruments get a precise read on the amount of energy they are getting from the sun at precise solar angle."
That will become even more critical in the future when utilities add more energy garnered from the sun to the smart grid. "The smart grid has to know precisely what your budget is for each resource you are using — oil, coal, solar, wind," Reda said.
Making an Astronomy Algorithm One for the Sun
Reda borrowed from the "Astronomical Algorithms," which is based on the Variations Séculaires des Orbites Planétaires Theory (VSOP87), developed in 1982 then modified in 1987. Astronomers trust it to let them know exactly where to point their telescopes to get the best views of Jupiter, Alpha Centauri, the Magellan galaxy or whatever celestial bodies they are studying. "We were able to separate and modify that global astronomical algorithm and apply it just to solar energy, while making it less complex and easy to implement," said Reda, highlighting the role of his colleague, Afshin Andreas, who has a degree in engineering physics from the Colorado School of Mines, as well as expertise in computer programming.
They spent an intense three or four weeks of programming to make sure the equations were accurate before distributing the 1,100 lines of code, Andreas said.
They used almanacs and historical data to ensure that what the algorithm was calculating agreed with what observers from previous generations said about the sun's position on a particular day. "We did spot checks so we would have a good comfort level that the future projections are accurate," Reda said.
"We used our independent math and programming skills to make sure that our results agreed, Reda said.
Available for Licensing, Free Public Use
The new SPA algorithm simply served the needs of NREL scientists, until the day it was put on NREL's public website.
"A lot of people started downloading it," so NREL established some rules of use, Reda said. Individuals and universities could use SPA free of charge, but companies with commercial interests would have to pay for the software.
Factoring in Leap Seconds Improves Accuracy
NREL's SPA knows the position of the sun in the sky over an 8,000-year period partly because it has learned when to add those confounding leap seconds. Solar positioners that don't factor in the leap second can only calculate accurately over a few years or a few decades.
The length of an Earth day isn't determined by an expensive watch, but by the actual rotation of the Earth.
Almost immeasurably, the Earth's rotation is slowing down, meaning the solar day is getting just a tiny bit longer. But it's not doing so at a constant rate. "It happens in unpredictable ways," Reda said. Sometimes a leap second is added every year; sometimes there isn't a need for another leap second for three or four years. For example, the International Earth Rotation and Reference Systems Service (IERS) added six leap seconds over the course of seven years between 1992 and 1998, but has added just one extra second since 2006.
The algorithm calculates exactly when to add a leap second because included in its equations are rapid, monthly, and long-term data on the solar day provided by IERS, Reda and Andreas said.
"IERS receives the data from many observatories around the world," Reda added. "Each observatory has its own measuring instruments to measure the Earth's rotation. A consensus correction is then calculated for the fraction of second. As long as we know the time, and how much the Earth's rotation has slowed, we know the sun's position precisely."
That precision has proved useful in unexpected fields.
Practical Uses in Agriculture, Movie Making
One person who bought a license for the SPA software has an apple orchard and wanted to keep his apples free of the black spots that turn off finicky consumers and make wholesale buyers hesitate, Reda said.
The black spots appear when too much sun hits a particular apple, a particular tree or a particular row of trees in an orchard.
The spots can be prevented by showering the apples with water, but growers don't want to use more water than necessary.
SPA's precise tracking of the sun tells the grower exactly when the automatic sprinkler should spray for a few moments on a particular set of trees, and when it's OK to shut off that sprayer and turn on the next one. SPA communicates with the sprinkler system so, "instead of spraying the whole orchard, the spray moves minute by minute," Reda said. "He takes our tool and plugs it into the software that controls the sprinkler system. And he saves a lot of water."
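A toy version of that scheduling logic is sketched below; the azimuth windows are invented for illustration, and a real controller would derive them from the orchard's geometry and feed in the live sun azimuth from an SPA-style routine.

```python
# Illustrative only: pick which orchard rows to water from the sun's
# azimuth, assuming each row is exposed over a known azimuth window.
ROW_AZIMUTH_WINDOWS = {       # degrees east of north; made-up numbers
    "row_1": (95.0, 120.0),
    "row_2": (120.0, 145.0),
    "row_3": (145.0, 170.0),
}

def rows_to_water(sun_azimuth_deg):
    """Return the rows whose exposure window contains the current sun
    azimuth, which a solar position algorithm would supply."""
    return [row for row, (lo, hi) in ROW_AZIMUTH_WINDOWS.items()
            if lo <= sun_azimuth_deg < hi]

print(rows_to_water(130.0))   # ['row_2']
```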
Religious groups with traditions of praying at a particular time of day even have turned to SPA to help with precision.
A movie-camera manufacturer has purchased the SPA software to help cinematographers combat the precious waste of money when shadows disrupt outdoor shooting.
"They have cameras on those big cranes and booms, and typically they'd have to manually change them based on the shadows," Reda said. "This company that bought it has an automatic camera positioner."
Combining the positioner with the SPA's calculations, the camera can tell the precise moment when the sun will, say, peak above the tall buildings of an outdoor set. "They don't have to make so many judgments on their own about where the camera should be positioned," Reda said. "It gives them a clearer picture."
Learn more about NREL's solar radiation research and the Electricity, Resources, and Building Systems Integration Center.
— Bill Scanlon
PPPL scientists propose a solution to a critical barrier to producing fusion
Posted April 23, 2012; 05:00 p.m.
Physicists from the U.S. Department of Energy's Princeton Plasma Physics Laboratory (PPPL) have discovered a possible solution to a mystery that has long baffled researchers working to harness fusion. If confirmed by experiment, the finding could help scientists eliminate a major impediment to the development of fusion as a clean and abundant source of energy for producing electric power.
An in-depth analysis by PPPL scientists zeroed in on tiny, bubble-like islands that appear in the hot, charged gases — or plasmas — during experiments. These minute islands collect impurities that cool the plasma. And these islands, the scientists report in the April 20 issue of the journal Physical Review Letters, are at the root of a longstanding problem known as the "density limit" that can prevent fusion reactors from operating at maximum efficiency.
Fusion occurs when plasmas become hot and dense enough for the atomic nuclei contained within the hot gas to combine and release energy. But when the plasmas in experimental reactors called tokamaks reach the mysterious density limit, they can spiral apart into a flash of light.
"The big mystery is why adding more heating power to the plasma doesn't get you to higher density," said David Gates, a principal research physicist at PPPL and co-author of the proposed solution with Luis Delgado-Aparicio, a postdoctoral fellow at PPPL and a visiting scientist at the Massachusetts Institute of Technology's Plasma Science Fusion Center. "This is critical because density is the key parameter in reaching fusion and people have been puzzling about this for more than 30 years."
A discovery by Princeton Plasma Physics Laboratory physicists Luis Delgado-Aparicio (left) and David Gates could help scientists eliminate a major impediment to the development of fusion as a clean and abundant source of energy for producing electric power. (Photo by Elle Starkman)
The scientists hit upon their theory in what Gates called "a 10-minute 'Aha!' moment." Working out equations on a whiteboard in Gates' office, the physicists focused on the islands and the impurities that drive away energy. The impurities stem from particles that the plasma kicks up from the tokamak wall. "When you hit this magical density limit, the islands grow and coalesce and the plasma ends up in a disruption," said Delgado-Aparicio.
These islands actually inflict double damage, the scientists said. Besides cooling the plasma, the islands act as shields that block out added power. The balance tips when more power escapes from the islands than researchers can pump into the plasma through a process called ohmic heating — the same process that heats a toaster when electricity passes through it. When the islands grow large enough, the electric current that helps to heat and confine the plasma collapses, allowing the plasma to fly apart.
Gates and Delgado-Aparicio now hope to test their theory with experiments on a tokamak called Alcator C-Mod at MIT, and on the DIII-D tokamak at General Atomics in San Diego. Among other things, they intend to see if injecting power directly into the islands will lead to higher density. If so, that could help future tokamaks reach the extreme density and 100-million-degree temperatures that fusion requires.
The scientists' theory represents a fresh approach to the density limit, which also is known as the "Greenwald limit" after MIT physicist Martin Greenwald, who has derived an equation that describes it. Greenwald has another potential explanation for the source of the limit. He thinks it may occur when turbulence creates fluctuations that cool the edge of the plasma and squeeze too much current into too little space in the core of the plasma, causing the current to become unstable and crash. "There is a fair amount of evidence for this," Greenwald said. However, he added, "We don't have a nice story with a beginning and end and we should always be open to new ideas."
Gates and Delgado-Aparicio pieced together their model from a variety of clues that have developed in recent decades. Gates first heard of the density limit while working as a postdoctoral fellow at the Culham Centre for Fusion Energy in Abingdon, England, in 1993. The limit had previously been named for Culham scientist Jan Hugill, who described it to Gates in detail.
Separately, papers on plasma islands were beginning to surface in scientific circles. French physicist Paul-Henri Rebut described radiation-driven islands in a mid-1980s conference paper, but not in a periodical. German physicist Wolfgang Suttrop speculated a decade later that the islands were associated with the density limit. "The paper he wrote was actually the trigger for our idea, but he didn't relate the islands directly to the Greenwald limit," said Gates, who had worked with Suttrop on a tokamak experiment at the Max Planck Institute for Plasma Physics in Garching, Germany, in 1996 before joining PPPL the following year.
In early 2011, the topic of plasma islands had mostly receded from Gates' mind. But a talk by Delgado-Aparicio about the possibility of such islands erupting in the plasmas contained within the Alcator C-Mod tokamak reignited his interest. Delgado-Aparicio spoke of corkscrew-shaped phenomena called snakes that had first been observed by PPPL scientists in the 1980s and initially reported by German physicist Arthur Weller.
Intrigued by the talk, Gates urged Delgado-Aparicio to read the papers on islands by Rebut and Suttrop. An email from Delgado-Aparicio landed in Gates' inbox some eight months later. In it was a paper that described the behavior of snakes in a way that fit nicely with the C-Mod data. "I said, 'Wow! He's made a lot of progress,'" Gates remembered. "I said, 'You should come down and talk about this.'"
What most excited Gates was an equation for the growth of islands that hinted at the density limit by modifying a formula that British physicist Paul Harding Rutherford had derived back in the 1980s. "I thought, 'If Wolfgang (Suttrop) was right about the islands, this equation should be telling us the Greenwald limit,'" Gates said. "So when Luis arrived I pulled him into my office."
Then a curious thing happened. "It turns out that we didn't even need the entire equation," Gates said. "It was much simpler than that." By focusing solely on the density of the electrons in a plasma and the heat radiating from the islands, the researchers devised a formula for when the heat loss would surpass the electron density. That in turn pinpointed a possible mechanism behind the Greenwald limit.
Delgado-Aparicio became so absorbed in the scientists' new ideas that he missed several turnoffs while driving back to Cambridge, Mass., that night. "It's intriguing to try to explain Mother Nature," he said. "When you understand a theory you can try to find a way to beat it. By that I mean find a way to work at densities higher than the limit."
Conquering the limit could provide essential improvements for future tokamaks that will need to produce self-sustaining fusion reactions, or "burning plasmas," to generate electric power. Such machines include proposed successors to ITER, a $20 billion experimental reactor that is being built in Cadarache, France, by the European Union, the United States and five other countries.
Why hadn't researchers pieced together a similar theory of the density-limit puzzle before? The answer, said Gates, lies in how ideas percolate through the scientific community. "The radiation-driven islands idea never got a lot of press," he said. "People thought of them as curiosities. The way we disseminate information is through publications, and this idea had a weak initial push."
PPPL, in Plainsboro, N.J., is devoted both to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Through the process of fusion, which is constantly occurring in the sun and other stars, energy is created when the nuclei of two lightweight atoms, such as those of hydrogen, combine in plasma at very high temperatures. When this happens, a burst of energy is released, which can be used to generate electricity.
PPPL is managed by Princeton University for the U.S. Department of Energy's Office of Science.
Python's flexible, duck-typed object system lowers the cost of architectural options that are more difficult to exercise in more rigid languages (yes, we are thinking of C++). One of these is carefully separating your data model (the classes and data structures that represent whatever state your application is designed to manipulate) from your controller (the classes that implement your user interface).
In Python, a design pattern that frequently applies is to have one master editor/controller class that encapsulates your user interface (with, possibly, small helper classes for stateful widgets) and one master model class that encapsulates your application state (probably with some members that are themselves instances of small data-representation classes). The controller calls methods in the model to do all its data manipulation; the model delegates screen-painting and input-event processing to the controller.
Narrowing the interface between model and controller makes it easier to avoid being locked into early decisions about either part by adhesions with the other one. It also makes downstream maintenance and bug diagnosis easier.
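Here is a minimal sketch of the pattern; the class and method names are illustrative, not from any particular project:

```python
class Document:
    """Model: owns application state, knows nothing about widgets."""
    def __init__(self, lines=None):
        self.lines = list(lines or [])

    def insert_line(self, index, text):
        self.lines.insert(index, text)

    def delete_line(self, index):
        del self.lines[index]


class Editor:
    """Controller: owns the user interface, delegates all state changes."""
    def __init__(self, model):
        self.model = model

    def handle_key(self, key, cursor_row):
        # Input events are translated into model method calls...
        if key == "ENTER":
            self.model.insert_line(cursor_row + 1, "")
        elif key == "DELETE":
            self.model.delete_line(cursor_row)
        self.repaint()

    def repaint(self):
        # ...and screen painting reads the model without mutating it.
        for row, text in enumerate(self.model.lines):
            print("%3d %s" % (row, text))


if __name__ == "__main__":
    doc = Document(["first line"])
    Editor(doc).handle_key("ENTER", 0)
```

Because the Editor only ever touches Document through its two mutator methods, either half can be rewritten (a curses front end, a database-backed model) without disturbing the other.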
ActiveMQ via C# using Apache.NMS Part 1
Java Message Service (JMS) is the de facto standard for asynchronous messaging between loosely coupled, distributed applications. Per the specification, it provides a common way for Java applications to create, send, receive and read messages. This is great for enterprises or organizations whose architecture depends upon a single platform (Java), but the reality is that most organizations have hybrid architectures consisting of Java and .NET (and others). Oftentimes these systems need to communicate using common messaging schematics: ActiveMQ and Apache.NMS satisfy this integration requirement.
The JMS specification outlines the requirements for system communication between Java Messaging Middleware and the clients that use them. Products that implement the JMS specification do so by developing a provider that supports the set of JMS interfaces and messaging semantics. Examples of JMS providers include open source offerings such as ActiveMQ, HornetQ and GlassFish and proprietary offerings such as SonicMQ and WebSphere MQ. The specification simply makes it easier for third parties to develop providers.
All messaging in JMS is peer-2-peer; clients are either JMS or non JMS applications that send and receive messages via a provider. JMS applications are pure Java based applications whereas non JMS use JMS styled APIs such as ActiveMQ.NMS which uses OpenWire, a cross language wire protocol that allows native access to the ActiveMQ provider.
JMS messaging schematics are defined into two separate domains: queue based and topic based applications. Queue based or more formally, point-to-point (PTP) clients rely on “senders” sending messages to specific queues and “receivers” registering as listeners to the queue. In scenarios where more a queue has more than one listener, the messages are delivered in a round-robin fashion between each listener; only one copy of the message is delivered. Think of this as something like a phone call between you and another person.
Topic based applications follow the publish/subscribe metaphor in which (in most cases) a single publisher client publishes a message to a topic and all subscribers to that topic receive a copy. This type of messaging metaphor is often referred to as broadcast messaging because a single client sends messages to all client subscribers. This is somewhat analogous to a TV station broadcasting a television show to you and any other people who wish to "subscribe" to a specific channel.
JMS API Basics
The JMS Standard defines a series of interfaces that client applications and providers use to send messages and receive messages. From a client perspective, this makes learning the various JMS implementations relatively easy, since once you learn one you can apply what you learned to another implementation relatively easily and NMS is no exception. The core components of JMS are as follows: ConnectionFactory, Connection, Destination, Session, MessageProducer, and MessageConsumer. The following diagram illustrates communication and creational aspects of each object:
NMS supplies similar interfaces to the .NET world, which allows clients to send messages to and receive messages from the ActiveMQ JMS provider via OpenWire. A quick rundown of the NMS interfaces is as follows:
Note that the Apache.NMS namespace contains several more interfaces and classes, but these are the essential interfaces that map to the JMS specification. The following diagram illustrates the signature that each interface provides:
The interfaces above are all part of the Apache.NMS 1.30 API available for download here. In order to use NMS in your .NET code you also need to download the Apache.NMS.ActiveMQ client, and to test your code you will need to download and install the ActiveMQ broker, which is written in Java, so it requires the JRE to be installed as well. The following table provides links to each download:
For my examples I will be using the latest release of Apache.NMS and Apache.NMS.ActiveMQ as of this writing. You should simply pick the latest stable version. The same applies for ActiveMQ and the JDK/JRE. Note that you only need the Java Runtime Environment (JRE) to install and run ActiveMQ; install the JDK if you want to take advantage of some of the tools that it offers for working with JMS providers.
To start ActiveMQ, install the JRE (if you do not already have it installed – most people do already) and unzip the ActiveMQ release into a directory – any directory will do. Open a command prompt, navigate to the folder with the ActiveMQ release, locate the "bin" folder, and type "activemq". You should see the broker start up.
Download and install the Apache.NMS and Apache.NMS.ActiveMQ libraries from the links defined in the table above. Unzip them into a directory on your hard drive, so that you can reference them from Visual Studio.
Open Visual Studio 2008/2010 and create a new Windows project of type "Class Library".
Once the project is created, use the "Add Reference" dialog to browse to the directory where you unzipped the Apache.NMS files and add a reference to Apache.NMS.dll. Do the same for the Apache.NMS.ActiveMQ download. Note that each download contains builds for several different .NET versions; I chose the "net-3.5" version of each dll since I am using VS 2008 and targeting the 3.5 version of .NET.
For my examples you will also need to install the latest and greatest version of NUnit from www.nunit.org. After you have installed NUnit, add a reference to the nunit.framework.dll. Note that any unit testing framework should work.
Add three classes to the project:
- A test harness class (ApacheNMSActiveMQTests.cs)
- A publisher class (TopicPublisher.cs)
- A subscriber class (TopicSubscriber.cs).
Your Solution Explorer should now list these three classes.
The test harness will be used to demonstrate the use of the two other classes. The TopicPublisher class represents a container for a message producer and the TopicSubcriber represents a container for a message consumer.
The publisher, TopicPublisher, is a simple container/wrapper class that allows a client to easily send messages to a topic. Remember from my previous discussion about topics that topics allow for broadcast messaging scenarios: a single publisher sends a message to one or more subscribers, and all subscribers will receive a copy of the message.
Message producers typically have a lifetime equal to the amount of time it takes to send a message; for performance reasons, however, you can extend that out to the length of the application's lifetime.
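A minimal version of such a wrapper might look like the following; this is a plausible sketch, not the author's exact code:

```csharp
// Hypothetical reconstruction of the TopicPublisher wrapper.
using System;
using Apache.NMS;

public class TopicPublisher : IDisposable
{
    private readonly ISession session;
    private readonly IMessageProducer producer;

    public TopicPublisher(ISession session, string topicName)
    {
        this.session = session;
        // Keep one producer alive for the wrapper's lifetime.
        producer = session.CreateProducer(session.GetTopic(topicName));
    }

    public void SendMessage(string text)
    {
        // Wrap the string in an NMS text message and broadcast it.
        producer.Send(session.CreateTextMessage(text));
    }

    public void Dispose()
    {
        producer.Dispose();
    }
}
```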
Like the TopicPublisher above, the TopicSubscriber class is a container/wrapper class that allows clients to "listen in" or "subscribe" to a topic.
The TopicSubscriber class typically has a lifetime equal to the lifetime of the application. The reason is pretty obvious: a publisher always knows when it will publish, but a subscriber never knows when the publisher will send the message. What the subscriber does is create a permanent "listener" on the topic; when a publisher sends a message to the topic, the subscriber will receive and process the message.
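Again, a plausible sketch rather than the author's exact code; one way to build it uses a durable consumer, which requires a client id on the connection:

```csharp
// Hypothetical reconstruction of the TopicSubscriber wrapper.
using System;
using Apache.NMS;

public class TopicSubscriber : IDisposable
{
    private readonly ISession session;
    private readonly string topicName;
    private IMessageConsumer consumer;

    // Mirrors the article's MessageReceivedDelegate idea using the
    // MessageListener delegate that NMS consumers expose.
    public event MessageListener OnMessageReceived;

    public TopicSubscriber(ISession session, string topicName)
    {
        this.session = session;
        this.topicName = topicName;
    }

    public void Start(string consumerId)
    {
        // A durable consumer keeps the subscription registered at the
        // broker; consumerId names it (the connection needs a ClientId).
        consumer = session.CreateDurableConsumer(
            session.GetTopic(topicName), consumerId, null, false);
        consumer.Listener += message =>
        {
            MessageListener handler = OnMessageReceived;
            if (handler != null)
            {
                handler(message);
            }
        };
    }

    public void Dispose()
    {
        if (consumer != null)
        {
            consumer.Dispose();
        }
    }
}
```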
The following unit test shows the classes above used in conjunction with the Apache.NMS and Apache.NMS.ActiveMQ APIs to send and receive messages through ActiveMQ, which is Java based, from the .NET world!
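The test below is a hedged reconstruction along the lines the author describes; the broker URI, topic name, and ids are illustrative:

```csharp
// Hypothetical reconstruction -- URI, topic and ids are illustrative.
using System;
using System.Threading;
using Apache.NMS;
using Apache.NMS.ActiveMQ;
using NUnit.Framework;

[TestFixture]
public class ApacheNMSActiveMQTests
{
    private const string BrokerUri = "activemq:tcp://localhost:61616";
    private const string TopicName = "TestTopic";

    private IConnection connection;
    private ISession session;
    private TopicSubscriber subscriber;
    private string receivedText;

    [SetUp]
    public void SetUp()
    {
        IConnectionFactory factory = new ConnectionFactory(BrokerUri);
        connection = factory.CreateConnection();
        connection.ClientId = "TestClientId";  // needed for durable consumers
        connection.Start();
        session = connection.CreateSession();

        subscriber = new TopicSubscriber(session, TopicName);
        subscriber.OnMessageReceived +=
            message => receivedText = ((ITextMessage)message).Text;
        subscriber.Start("TestConsumerId");
    }

    [Test]
    public void PublishedMessageIsReceived()
    {
        using (TopicPublisher publisher = new TopicPublisher(session, TopicName))
        {
            publisher.SendMessage("Hello from NMS!");
        }
        Thread.Sleep(1000);  // crude wait for asynchronous delivery
        Assert.AreEqual("Hello from NMS!", receivedText);
    }

    [TearDown]
    public void TearDown()
    {
        subscriber.Dispose();
        session.Dispose();
        connection.Dispose();
    }
}
```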
Here is a quick rundown of the ApacheNMSActiveMQTests class:
- Declare variables for the required NMS objects and the TopicSubscriber
- Declare variables for the broker URI, the topic to subscribe/publish to, and the client and consumer ids
- Create a ConnectionFactory object, create and start a Connection, and then create a Session to work with.
- Create and start the TopicSubscriber which will be a listener/subscriber to the “TestTopic” topic. Also, to receive messages you must register an event handler or lambda expression with the MessageReceivedDelegate delegate. In this example I in-lined a lambda expression for simplicity.
- In the test method, create a temporary publisher and send a message to the topic.
- Tear down and dispose of the subscriber and Session.
- Tear down and dispose of the Connection.
After you run the unit test, you should see the received message appear in the test output.
Note that ActiveMQ must be up and running for the example to work.
Mar. 6, 2013 Boys are right-handed, girls are left ... Well at least this is true for sugar gliders (Petaurus breviceps) and grey short-tailed opossums (Monodelphis domestica), according to an article in BioMed Central’s open access journal BMC Evolutionary Biology that shows that handedness in marsupials is dependent on gender. This preference of one hand over another has developed despite the absence of a corpus callosum, the part of the brain which in placental mammals allows one half of the brain to communicate with the other.
Many animals show a distinct preference for using one hand/paw/hoof over another. This is often related to posture (an animal is more likely to show manual laterality if it is upright), to the difficulty of the task (more complex tasks show a stronger hand preference), or even to age. As an example of all three: crawling human babies show less hand preference than toddlers.
Some species also show a distinct sex effect in handedness, but among non-marsupial mammals this tendency is for left-handed males and right-handed females. In contrast, researchers from St Petersburg State University show that male quadrupedal marsupials, those that walk on all fours, tend to be right-handed while the females are left-handed, especially as tasks become more difficult.
Dr Yegor Malashichev from Saint Petersburg State University who led this study explained why they think this has evolved, “Marsupials do not have a corpus callosum – which connects the two halves of the mammalian brain together. Reversed sex related handedness is an indication of how the marsupial brain has developed different ways of the two halves of the brain communicating in the absence of the corpus callosum.”
- Andrey Giljov, Karina Karenina, Yegor Malashichev. Forelimb preferences in quadrupedal marsupials and their implications for laterality evolution in mammals. BMC Evolutionary Biology, 2013; 13 (1): 61 DOI: 10.1186/1471-2148-13-61
Plants can pull carbon dioxide, the planet-warming greenhouse gas, out of Earth’s atmosphere. But these aren’t the only living organisms that affect carbon dioxide levels, and thus global warming. Nope, I’m not talking about humans. Humble sea otters can also reduce greenhouse gases, by indirectly helping kelp plants. That finding is in the journal Frontiers in Ecology and the Environment. [Christopher C. Wilmers et al., Do trophic cascades affect the storage and flux of atmospheric carbon? An analysis of sea otters and kelp forests]
Researchers used 40 years of data to look at the effect of sea otter populations on kelp. Depending on the plant density, one square meter of kelp forest can absorb anywhere from tens to hundreds of grams of carbon per year. But when sea otters are around, kelp density is high and the plants can suck up more than 12 times as much carbon. That’s because otters nosh on kelp-eating sea urchins. In the mammals’ presence, the urchins hide away and feed on kelp detritus rather than living, carbon-absorbing plants.
So climate researchers need to note that the herbivores that eat plants, and the predators that eat them, also have roles to play in the carbon cycle.
[The above text is a transcript of this podcast.]
by Staff Writers
Chicago IL (SPX) Jan 11, 2013
Technologically valuable ultrastable glasses can be produced in days or hours with properties corresponding to those that have been aged for thousands of years, computational and laboratory studies have confirmed.
Aging makes for higher quality glassy materials because they have slowly evolved toward a more stable molecular condition. This evolution can take thousands or millions of years, but manufacturers must work faster. Armed with a better understanding of how glasses age and evolve, researchers at the universities of Chicago and Wisconsin-Madison raise the possibility of designing a new class of materials at the molecular level via a vapor-deposition process.
"In attempts to work with aged glasses, for example, people have examined amber," said Juan de Pablo, UChicago's Liew Family Professor in Molecular Theory and Simulations. "Amber is a glass that has been aged millions of years, but you cannot engineer that material. You get what you get." de Pablo and Wisconsin co-authors Sadanand Singh and Mark Ediger report their findings in the latest issue of Nature Materials.
Ultrastable glasses could find potential applications in the production of stronger metals and in faster-acting pharmaceuticals. The latter may sound surprising, but drugs with the amorphous molecular structure of ultrastable glass could avoid crystallization during storage and be delivered more rapidly in the bloodstream than pharmaceuticals with a semi-crystalline structure. Amorphous metals, likewise, are better for high-impact applications than crystalline metals because of their greater strength.
The Nature Materials paper describes computer simulations that Singh, a doctoral student in chemical engineering at UW-Madison, carried out with de Pablo to follow-up some intriguing results from Ediger's laboratory.
Growing stable glasses
Several years ago, Ediger discovered that glasses grown by vapor deposition on a specially prepared surface that is kept within a certain temperature range exhibit far more stability than ordinary glasses. Previous researchers must have grown this material under the same temperature conditions, but failed to recognize the significance of what they had done, Ediger said.
Ediger speculated that growing glasses under these conditions, which he compares to the Tetris video game, gives molecules extra room to arrange themselves into a more stable configuration. But he needed Singh and de Pablo's computer simulations to confirm his suspicions that he had actually produced a highly evolved, ordinary glass rather than an entirely new material.
"There's interest in making these materials on the computer because you have direct access to the structure, and you can therefore determine the relationship between the arrangement of the molecules and the physical properties that you measure," said de Pablo, a former UW-Madison faculty member who joined UChicago's new Institute for Molecular Engineering earlier this year.
There are challenges, though, to simulating the evolution of glasses on a computer. Scientists can cool a glassy material at the rate of one degree per second in the laboratory, but the slowest computational studies can only simulate cooling at a rate of 100 million degrees per second. "We cannot cool it any slower because the calculations would take forever," de Pablo said.
"It had been believed until now that there is no correlation between the mechanical properties of a glass and the molecular structure; that somehow the properties of a glass are "hidden" somewhere and that there are no obvious structural signatures," de Pablo said.
Creating better materials
Ultrastable glasses achieve their stability in a manner analogous to the most efficiently packed, multishaped objects in Tetris, each consisting of four squares in various configurations that rain from the top of the screen.
"This is a little bit like the molecules in my deposition apparatus raining down onto this surface, and the goal is to perfectly pack a film, not to have any voids left," Ediger said.
The object of Tetris is to manipulate the objects so that they pack into a perfectly tight pattern at the bottom of the screen. "The difference is, when you play the game, you have to actively manipulate the pieces in order to build a well-packed solid," Ediger said. "In the vapor deposition, nature does it for us."
But in Tetris and experiments alike, when the objects or molecules descend too quickly, the result is a poorly packed, void-riddled pattern.
"In the experiment, if you either rain the molecules too fast or choose a low temperature at which there's no mobility at the surface, then this trick doesn't work," Ediger said. Then it would be like taking a bucket of odd-shaped pieces and just dumping them on the floor. There are all sorts of voids and gaps because the molecules didn't have any opportunity to find a good way of packing."
"Ultrastable glasses from in silico vapor deposition," by Sadamand Singh, M.D. Ediger and Juan J. de Pablo," Nature Materials. National Science Foundation and the U.S. Department of Energy.
In January 1992, a container ship near the International Date Line, headed to Tacoma, Washington from Hong Kong, lost 12 containers during severe storm conditions. One of these containers held a shipment of 29,000 bathtub toys. Ten months later, the first of these plastic toys began to wash up onto the coast of Alaska. Driven by the wind and ocean currents, these toys continue to wash ashore during the next several years and some even drifted into the Atlantic Ocean.
The ultimate reason for the world's surface ocean currents is the sun. The heating of the earth by the sun has produced semi-permanent pressure centers near the surface. When wind blows over the ocean around these pressure centers, surface waves are generated by transferring some of the wind's energy, in the form of momentum, from the air to the water. This constant push on the surface of the ocean is the force that forms the surface currents.
Around the world, there are some similarities in the currents. For example, along the west coasts of the continents, the currents flow toward the equator in both hemispheres. These are called cold currents as they bring cool water from the polar regions into the tropical regions. The cold current off the west coast of the United States is called the California Current.
Likewise, the opposite is true. Along the east coasts of the continents, the currents flow from the equator toward the poles. These are called warm currents, as they bring warm tropical water poleward. The Gulf Stream, off the southeast United States coast, is one of the strongest currents known anywhere in the world, with water speeds up to 3 mph (5 kph).
These currents have a huge impact on the long-term weather a location experiences. The overall climate of Norway and the British Isles is about 18°F (10°C) warmer in the winter than other locations at the same latitude due to the Gulf Stream.
While ocean currents are shallow, surface-level circulations, there is a global circulation which extends to the depths of the sea called the Great Ocean Conveyor. Also called the thermohaline circulation, it is driven by differences in the density of the sea water, which is controlled by temperature (thermo) and salinity (haline).
In the northern Atlantic Ocean, as water flows north it cools considerably, increasing its density. As it cools to the freezing point, sea ice forms, and the "salts" excluded from the frozen water make the water below more dense. The very salty water sinks to the ocean floor.
It is not static, but a slowly southward flowing current. The route of the deep water flow is through the Atlantic Basin around South Africa and into the Indian Ocean and on past Australia into the Pacific Ocean Basin.
If the water is sinking in the North Atlantic Ocean then it must rise somewhere else. This upwelling is relatively widespread. However, water samples taken around the world indicate that most of the upwelling takes place in the North Pacific Ocean.
It is estimated that once the water sinks in the North Atlantic Ocean, it takes 1,000-1,200 years before that deep, salty bottom water rises to the upper levels of the ocean.
Michele Johnson, Ames Research Center
Astronomers have discovered a pair of neighboring planets with dissimilar densities orbiting very close to each other. The planets are too close to their star to be in the so-called "habitable zone," the region in a system where liquid water might exist on the surface, but they have the closest-spaced orbits ever confirmed. The findings are published today in the journal Science.
The research team, led by Josh Carter, a Hubble fellow at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass., and Eric Agol, a professor of astronomy at the University of Washington in Seattle, used data from NASA's Kepler space telescope, which measures dips in the brightness of more than 150,000 stars, to search for transiting planets.
The inner planet, Kepler-36b, orbits its host star every 13.8 days and the outer planet, Kepler-36c, every 16.2 days. On their closest approach, the neighboring duo comes within about 1.2 million miles of each other. This is only five times the Earth-moon distance and about 20 times closer to one another than any two planets in our solar system.
Kepler-36b is a rocky world measuring 1.5 times the radius and 4.5 times the mass of Earth. Kepler-36c is a gaseous giant measuring 3.7 times the radius and eight times the mass of Earth. The planetary odd couple orbits a star slightly hotter and a couple billion years older than our sun, located 1,200 light-years from Earth.
To read more about the discovery, visit: the Harvard-Smithsonian Center for Astrophysics and University of Washington press releases.
Ames Research Center in Moffett Field, Calif., manages Kepler's ground system development, mission operations and science data analysis. NASA’s Jet Propulsion Laboratory, Pasadena, Calif., managed the Kepler mission's development.
Ball Aerospace and Technologies Corp. in Boulder, Colo., developed the Kepler flight system and supports mission operations with the Laboratory for Atmospheric and Space Physics at the University of Colorado in Boulder.
The Space Telescope Science Institute in Baltimore archives, hosts and distributes Kepler science data. Kepler is NASA's 10th Discovery Mission and is funded by NASA's Science Mission Directorate at the agency's headquarters in Washington.
New Zealand grasshoppers belong to the subfamily Catantopinae. A number of species are present including the common small Phaulacridium of the more coastal areas, the larger species of Sigaus of the tussock lands, and the alpine genera Paprides and Brachaspis, which include some quite large species. These inhabit the alpine areas of the South Island, some preferring scree and others tussock areas. They apparently survive the rigorous alpine winter conditions both as nymphs and as adults, and it is possible that they can withstand complete freezing. All species are plant feeders and lay batches of eggs or pods in short holes in the ground which they excavate with their abdomen. After hatching, the young nymphs moult four or five times before becoming adult.
by Graeme William Ramsay, M.SC., PH.D., Entomology Division, Department of Scientific and Industrial Research, Nelson.
In mathematics, hyperbolic functions are analogs of the ordinary trigonometric, or circular, functions. The basic hyperbolic functions are the hyperbolic sine "sinh" (typically pronounced /ˈsɪntʃ/ or /ˈʃaɪn/), and the hyperbolic cosine "cosh" (typically pronounced /ˈkɒʃ/), from which are derived the hyperbolic tangent "tanh" (typically pronounced /ˈtæntʃ/ or /ˈθæn/), etc., in analogy to the derived trigonometric functions. The inverse hyperbolic functions are the area hyperbolic sine "arsinh" (also called "asinh", or sometimes by the misnomer of "arcsinh") and so on.
Just as the points (cos t, sin t) form a circle with a unit radius, the points (cosh t, sinh t) form the right half of the equilateral hyperbola. Hyperbolic functions occur in the solutions of some important linear differential equations, for example the equation defining a catenary, and Laplace's equation in Cartesian coordinates. The latter is important in many areas of physics, including electromagnetic theory, heat transfer, fluid dynamics, and special relativity.
Hyperbolic functions were introduced in the 18th century by the Swiss mathematician Johann Heinrich Lambert.
The hyperbolic functions are:
$$\sinh x = \frac{e^x - e^{-x}}{2}, \qquad \cosh x = \frac{e^x + e^{-x}}{2}, \qquad \tanh x = \frac{\sinh x}{\cosh x} = \frac{e^x - e^{-x}}{e^x + e^{-x}},$$
with $\coth x$, $\operatorname{sech} x$ and $\operatorname{csch} x$ defined as the corresponding reciprocals.
Via complex numbers the hyperbolic functions are related to the circular functions as follows:
$$\sinh x = -i \sin(ix), \qquad \cosh x = \cos(ix),$$
where $i$ is the imaginary unit, defined by $i^2 = -1$.
Note that, by convention, $\sinh^2 x$ means $(\sinh x)^2$, not $\sinh(\sinh x)$; similarly for the other hyperbolic functions when used with positive exponents. Another notation for the hyperbolic cotangent function is $\operatorname{ctgh} x$, though $\coth x$ is far more common.
Hyperbolic sine and cosine satisfy the identity
$$\cosh^2 x - \sinh^2 x = 1,$$
which is similar to the Pythagorean trigonometric identity.
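One way to see this is a two-line check from the exponential definitions:

$$\cosh^{2}x-\sinh^{2}x=\left(\frac{e^{x}+e^{-x}}{2}\right)^{2}-\left(\frac{e^{x}-e^{-x}}{2}\right)^{2}=\frac{(e^{2x}+2+e^{-2x})-(e^{2x}-2+e^{-2x})}{4}=1.$$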
It can also be shown that the area under the graph of cosh x from A to B is equal to the arc length of cosh x from A to B.
The basic antiderivatives are
$$\int \sinh(ax)\,dx = \frac{1}{a}\cosh(ax) + C, \qquad \int \cosh(ax)\,dx = \frac{1}{a}\sinh(ax) + C.$$
In the above expressions, C is called the constant of integration. For a full list of integrals of hyperbolic functions, see the list of integrals of hyperbolic functions.
It is possible to express the above functions as Taylor series:
$$\sinh x = x + \frac{x^3}{3!} + \frac{x^5}{5!} + \cdots = \sum_{n=0}^{\infty}\frac{x^{2n+1}}{(2n+1)!}, \qquad \cosh x = 1 + \frac{x^2}{2!} + \frac{x^4}{4!} + \cdots = \sum_{n=0}^{\infty}\frac{x^{2n}}{(2n)!}.$$
A point on the hyperbola xy = 1 with x > 1 determines a hyperbolic triangle in which the side adjacent to the hyperbolic angle is associated with cosh while the side opposite is associated with sinh. However, since the point (1,1) on this hyperbola is a distance √2 from the origin, the normalization constant 1/√2 is necessary to define cosh and sinh by the lengths of the sides of the hyperbolic triangle.
Note also the property that cosh t ≥ 1 for all t.
The hyperbolic functions are periodic with complex period 2πi (πi for hyperbolic tangent and cotangent).
The parameter t is not a circular angle, but rather a hyperbolic angle which represents twice the area between the x-axis, the hyperbola and the straight line which links the origin with the point (cosh t, sinh t) on the hyperbola.
The function cosh x is an even function, that is symmetric with respect to the y-axis.
The function sinh x is an odd function, that is −sinh x = sinh(−x), and sinh 0 = 0.
The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. In fact, Osborn's rule states that one can convert any trigonometric identity into a hyperbolic identity by expanding it completely in terms of integral powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term which contains a product of 2, 6, 10, 14, ... sinhs. This yields for example the addition theorems
$$\sinh(x+y) = \sinh x \cosh y + \cosh x \sinh y, \qquad \cosh(x+y) = \cosh x \cosh y + \sinh x \sinh y,$$
the "double angle formulas"
and the "half-angle formulas"
The derivative of sinh x is cosh x and the derivative of cosh x is sinh x; this is similar to trigonometric functions, albeit the sign is different (i.e., the derivative of cos x is −sin x).
The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic ones that does not involve complex numbers.
The graph of the function a cosh(x/a) is the catenary, the curve formed by a uniform flexible chain hanging freely under gravity.
From the definitions of the hyperbolic sine and cosine, we can derive the following identities:
$$e^x = \cosh x + \sinh x, \qquad e^{-x} = \cosh x - \sinh x.$$
These expressions are analogous to the expressions for sine and cosine, based on Euler's formula, as sums of complex exponentials.
Since the exponential function can be defined for any complex argument, we can extend the definitions of the hyperbolic functions also to complex arguments. The functions sinh z and cosh z are then holomorphic.
Relationships to ordinary trigonometric functions are given by Euler's formula for complex numbers:
$$e^{ix} = \cos x + i \sin x, \qquad \text{so that} \qquad \cosh(ix) = \cos x, \qquad \sinh(ix) = i \sin x.$$
Cloudy outlook for climate models
More aerosols - the solution to global warming?
Climate models appear to be missing an atmospheric ingredient, a new study suggests.
December's issue of the International Journal of Climatology from the Royal Meteorlogical Society contains a study of computer models used in climate forecasting. The study is by joint authors Douglass, Christy, Pearson, and Singer - of whom only the third mentioned is not entitled to the prefix Professor.
Their topic is the discrepancy between troposphere observations from 1979 and 2004, and what computer models have to say about the temperature trends over the same period. While focusing on tropical latitudes between 30 degrees north and south (mostly to 20 degrees N and S), because, they write - "much of the Earth's global mean temperature variability originates in the tropics" - the authors nevertheless crunched through an unprecedented amount of historical and computational data in making their comparison.
For observational data they make use of ten different data sets, including ground and atmospheric readings at different heights.
On the modelling side, they use the 22 computer models which participated in the IPCC-sponsored Program for Climate Model Diagnosis and Intercomparison. Some models were run several times, to produce a total of 67 realisations of temperature trends. The IPCC is the United Nations' Intergovernmental Panel on Climate Change, which published its Fourth Assessment Report [PDF, 7.8MB] earlier this year. Their model comparison program uses a common set of forcing factors.
Notable in the paper is a generosity when calculating the statistical uncertainty for the data from the models. In aggregating the models, the uncertainty is derived by plugging the number 22 into the maths, rather than 67. The effect of using 67 would be to confine the error margin closer to the average trend - with the implication of making it harder to reconcile any discrepancy with the observations. In addition, when they plot and compare the observational and computed data, they also double this error interval.
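To make the arithmetic concrete, treat the uncertainty of the model-mean trend as a standard error of the mean; the spread value below is invented purely to show how the interval scales with N:

```python
from math import sqrt

# Illustrative only: suppose the model trends scatter about their mean
# with a standard deviation of 0.10 degrees C per decade.
sigma = 0.10

# The standard error of the mean shrinks with the square root of N.
se_22 = sigma / sqrt(22)   # counting the 22 models
se_67 = sigma / sqrt(67)   # counting all 67 realisations

# Doubling the interval, as the paper does, widens it further.
print(2 * se_22, 2 * se_67)   # ~0.043 vs ~0.024 degrees C per decade
```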
So to the burning question: on their analysis, does the uncertainty in the observations overlap with the results of the models? If yes, then the models are supported by the observations of the last 30 years, and they could be useful predictors of future temperature and climate trends.
Unfortunately, the answer according to the study is no. Figure 1 in the published paper available here[PDF] pretty much tells the story.
Douglass et al. Temperature time trends (degrees per decade) against pressure (altitude) for 22 averaged models (shown in red) and 10 observational data sets (blue and green lines). Only at the surface are the mean of the models and the mean of observations seen to agree, within the uncertainties.
While the trends coincide at the surface, at all heights in the troposphere the computer models indicate that higher temperature trends should have occurred. And more significantly, there is no overlap between the uncertainty ranges of the observations and those of the models.
In other words, the observations and the models seem to be telling quite different stories about the atmosphere, at least as far as the tropics are concerned.
So can the disparities be reconciled?
The flask in the image above contains cell cultures in the laboratory of UMass Amherst cellular engineer Susan Roberts, who is studying cells extracted from the yew tree, known to produce the powerful anti-cancer agent Taxol. Produced by plants as a defense against predators, Taxol and other secondary metabolites have many beneficial uses. Roberts and her colleagues are developing methods to enhance Taxol productivity in lab cultures to enable a large-scale, sustainable supply. Roberts also directs the interdisciplinary Institute for Cellular Engineering, whose researchers are working to harness the cellular “machinery” in plants, animals, and microorganisms for applications in human health, bioenergy and the environment.
Photo credit: Amanda Drane
Everyone is familiar with weather systems on Earth like rain, wind and snow. But space weather – variable conditions in the space surrounding Earth – has important consequences for our lives inside Earth’s atmosphere.
Solar activity occurring miles outside Earth’s atmosphere, for example, can trigger magnetic storms on Earth. These storms are visually stunning, but they can set our modern infrastructure spinning.
On Jan. 19, scientists saw a solar flare in an active region of the Sun, along with a concentrated blast of solar-wind plasma and magnetic field lines known as a coronal mass ejection that burst from the Sun’s surface and appeared to be headed for Earth.
When these solar winds met Earth’s magnetic field, the interaction created one of the largest magnetic storms on Earth recorded in the past few years. The storm peaked on Jan. 24, just as another storm began.
“These new storms, and the storm we witnessed on Sept 26, 2011, indicate the up-tick in activity coming with the Earth’s ascent into the next solar maximum,” said USGS geophysicist Jeffrey Love.” This solar maximum is the period of greatest activity in the solar cycle of the Sun, and it is predicted to occur sometime in 2013, which will increase the amount of magnetic storms on Earth.
Magnetic storms, said Love, are a space weather phenomenon responsible for the breathtaking lights of the aurora borealis, but also sometimes for the disruption of technology and infrastructure our modern society depends on. Large magnetic storms, for example, can interrupt radio communication, interfere with global-positioning systems, disrupt oil and gas well drilling, damage satellites and affect their operations, and even cause electrical blackouts by inducing voltage surges in electric power grids. Storms can also affect airline activity — as a result of last weekend’s storm, both Air Canada and Delta Air Lines rerouted flights over the Arctic bound for Asia as a precautionary measure. Although the storm began on the 19th of January, it did not peak until January 24th.
While this particular storm had minor consequences on Earth, other large storms can be crippling, Love said. He noted that the largest storm of the 20th century occurred in March 1989, accompanied by auroras that could be seen as far south as Texas, and sent electric currents into Earth’s crust that made their way into the high-voltage Canadian Hydro-Quebec power grid. This caused transformers to fail and left more than 6 million people without power for 9 hours. The same storm also damaged and disrupted the operation of satellites, GPS systems, and radio communication systems used by the United States military.
While large, the 1989 storm pales in comparison to one that occurred in September 1859 and is the largest storm in recorded history. Scientists estimate that the economic impact to the United States from a storm of the same size in today’s society could exceed $1 trillion as a result of the technological systems it could disrupt.
The USGS, a partner in the multi-agency National Space Weather Program, collects data that can help us understand how magnetic storms may impact the United States. Constant monitoring of Earth’s magnetic field allows us to better assess the impact of these phenomena on Earth’s surface. To do this, the USGS Geomagnetism Program maintains 14 observatories around the United States and its territories, which provide ground-based measurements of changes in the magnetic field. These measurements are being used by the NOAA Space Weather Prediction Center and the US Air Force Weather Agency to track the intensity of the magnetic storm generated by this solar activity.
In addition to providing data to its customers, the USGS produces models of the Earth’s magnetic field that are used in a host of applications, including GPS receivers, military and civilian navigational systems, and in research for studies of the effects of geomagnetic storms on the ionosphere (a shell of electrons and electrically charged atoms and molecules surrounding Earth), atmosphere, and near-space environment.
Researchers are concerned about a fish that's turning into a new threat to the ecology of Lake Tahoe.
Biologists with the University of Nevada, Reno say they're finding a growing number of giant goldfish in the lake.
While officials have been working for years to keep the lake's water crystal clear, researcher Sudeep Chandra told KCRA-TV the discovery of the goldfish is particularly worrisome because goldfish eat a lot and excrete "lots of nutrients."
Those nutrients stimulate algae growth.
The goldfish, some of which have grown to 18 inches, could also eat smaller fish, creating new competition for native trout.
Chandra says with no prior studies on goldfish for guidance, researchers are catching the giant goldfish and bringing them back to their lab to study.
It's not clear how the goldfish got into Lake Tahoe, but it's believed to be from people dumping aquariums into the lake.
Information from: KCRA-TV
The Associated Press
In the Star Wars: Where Science Meets Imagination exhibit, Luke Skywalker's Landspeeder is on display for the first time.
Courtesy of Landspeeder image © 2006 Lucasfilm Ltd. & TM Photo: Dom Miguel Photography
Star Wars Exhibition Brings Reality to Fantasy
News story originally written on April 16, 2008
A new museum exhibit shows that some of the robots, vehicles and devices from the Star Wars films are close to the types of things scientists have developed to use in space.
The exhibition--at the Science Museum of Minnesota in St. Paul, Minn., from June 13 until August 24--showcases landspeeders, R2-D2 and other items from the Star Wars films. Visitors will learn how researchers today are pursuing similar technologies. The exhibit developers were surprised and excited to learn that many of today's scientists were inspired by the fantasy technologies they saw in the Star Wars movies. One of the goals of the exhibit is to inspire the kids who will be the next generation of scientists.
The exhibit contains film clips, props, models and costumes. Visitors are encouraged to participate in hands-on exhibits and activities.
This is one of my favorite stories. In short, one of John Burk’s (@occam98) students wanted to launch a space balloon. If you want all the details, this post at Quantum Progress pretty much says it all. The part that makes this story so cool is that it was the student who did all of the setup and fundraising and stuff. Love it. Oh, and the student is apparently named “M.” I wonder if the student is either one of the Men in Black or a James Bond scientist.
Ok, you know what I do, right? I need to add something. Here is a very nice video of the space balloon launch.
You know I like to use pictures for data from time to time, right? One problem is that I don’t know much about cameras. There, I said it. Really, almost all of my photos are made with my phone. That is what makes the phone so great: you almost always have your camera with you.
To make these pictures useful for physics, it helps to know the angular size of the picture. Here is a diagram so you can see what I am talking about:
There are 20 seconds left on the clock. Your team is down by 2 points such that a field goal would win it. The ball is spotted on the hash mark at the 15 yard line and it is first down. What to do? Should you call a run play so that the ball is in the center of the field? Or should the ball be kicked from where it is?
So there is the question. Is it better to kick the ball from an angle or move back and kick it head on? Let me just look at one aspect of this situation. What is the angular size of the goal post from the location of the kicker? I am not looking at the height of the horizontal goal post – I will assume the kicker can get the ball over this.
This was on reddit. It is an image from google maps showing an aircraft. Not surprisingly, there are lots of aircraft that get caught by the cameras in mid flight. But what about the colors? Is this some rainbow-unicorn plane? I am not sure of the exact details, but this rainbow effect is from the camera. I am not sure why, but this camera is capturing red, green, and blue (and probably white) colors separately at different times. Here is the actual link to the google map.
The first thing that comes to my mind is – I wonder how fast the plane was moving. That question is difficult to answer because I don’t know how much time was between each ‘color filter’ photo. Oh well, I will proceed anyway. First, some info. Reading through the very insightful reddit comments, it seems the commenters are certain that the plane is an Embraer ERJ 145. Really, all I need is the length. Wikipedia lists it with a 29.87 m length and a 20.04 meter wingspan. From the image, does the rainbow plane have the same ratio of length to wingspan as listed?
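For reference, the listed numbers give a ratio of (the measured value came from a figure in the original post):

$$\frac{L}{W} = \frac{29.87\ \text{m}}{20.04\ \text{m}} \approx 1.49$$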
Ok, not quite the same. Maybe that is close enough. The one thing is that the image clearly has some distortion. Either the plane is turning or the image has been adjusted to make it look like it is a top down view. Well, surfing around a bit I couldn’t find another plane that was close in length/wing span ratio. I am going with ERJ 145.
If I scale the image from the length of the plane, how “far” between the different colors? Here is a plot of the 4 color images.
Note that for this image, I put the axis along the fuselage of the plane. The points are the locations of the back tip of one of the wings. The first cool thing that I can learn from this is that there must have been a cross-wind. The aircraft is not traveling in the direction that it is heading. Of course this is not uncommon, planes do this all the time. Oh, let me note that I am assuming the aircraft is far enough away from the satellite that the multiple colors are due to the motion of the plane and not the satellite. This is probably a good assumption since the houses below are not rainbow colored.
What about the speed? If it is moving at a constant velocity, then:
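The equation here was an image in the original post; from the context it is presumably just the definition of average velocity:

$$\vec{v} = \frac{\Delta \vec{r}}{\Delta t}$$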
I know the changes in position. So, let me just call the change in time 1 cs (cs for camera-second). This means that the plane’s speed would be 1.8 m/cs. Ok, let’s just play a game. What if the time between frames was 1/100th of a second? That would mean that the speed would be 180 m/s or 400 mph. That is possible since wikipedia lists the max speed at around 550 mph. If the time between images is 1/30th of a second (I picked that because that is a common frame rate for video) then the speed would be 54 m/s (120 mph). That doesn’t seem too low. I would imagine the landing speed would be around that speed (or maybe a little lower – but what do I know?)
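If you want to check the arithmetic yourself, here is a quick sketch (my own numbers: the 1.8 m per-frame displacement measured above, plus a few guesses for the frame interval):

# Speed of the rainbow plane for different guesses at the time
# between the separate color exposures.
displacement_m = 1.8  # measured displacement per color frame

for dt_s in (1.0 / 100, 1.0 / 60, 1.0 / 30):
    speed_ms = displacement_m / dt_s   # meters per second
    speed_mph = speed_ms * 2.23694     # 1 m/s is about 2.237 mph
    print("dt = %.4f s -> %3.0f m/s (%3.0f mph)" % (dt_s, speed_ms, speed_mph))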
But WAIT – there is more. Can I determine the altitude of the plane? Well, suppose I have two objects of two different lengths that are two different distances from a camera. Here is an example.
My notation here looks a little messy, but both objects have a length (L) and a distance from the camera (r). They also have an angular size, denoted by θ. About angular size, I can write the following.
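The relations (another image that didn't survive) are presumably the usual small-angle approximations:

$$\theta_1 \approx \frac{L_1}{r_1}, \qquad \theta_2 \approx \frac{L_2}{r_2}$$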
I don’t know the distances from the camera and I don’t know the angles. But, I can sort of measure the angles. Suppose I measure the number of pixels each object takes up in the photo. Then the angular size could be written as:
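Reconstructing the missing expression, the measured angular sizes would be:

$$\theta_1 = c\,p_1, \qquad \theta_2 = c\,p_2$$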
Where p1 is the pixel size of an object and c is some constant for that particular camera. Now I can re-write these angular equations and divide so that I get rid of the c.
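Dividing the two versions of the angular size presumably gave:

$$\frac{p_1}{p_2} = \frac{L_1/r_1}{L_2/r_2} \quad\Longrightarrow\quad \frac{r_1}{r_2} = \frac{L_1\,p_2}{L_2\,p_1}$$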
I can get values for all the stuff on the right of that equation. Here are my values (object 1 is the plane and object 2 is the background – really, I will just use the scale provided by google maps). Oh, one more thing. I am not going to measure the pixel length but rather some arbitrary length of the same scale.
L1 = 29.87 m
p1 = 1 unit
L2 = 10 m
p2 = 0.239 unit
Putting in my values above I get the ratio of the distances from the camera as:
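That is (reconstructed from the values above):

$$\frac{r_1}{r_2} = \frac{(29.87)(0.239)}{(10)(1)} \approx 0.71$$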
Now I just need one of the r‘s – ideally it would be r2 (the distance the camera is from the ground). Wikipedia says that the satellite images are typically taken from an aircraft flying 800-1500 feet high. So, suppose r2 = 1500 feet (457 meters). In this case the altitude of the rainbow plane would be:
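Reconstructing the arithmetic (the original result was shown as an image): $0.71 \times 1500\ \text{ft} \approx 1100\ \text{ft}$, or roughly 1000 feet.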
1000 feet would mean that the rainbow plane is probably landing (or taking off). It looks like Teterboro Airport is quite close and the rainbow plane is heading that way. I claim landing.
So, here is what I can say:
Airspeed. Really, I don’t have a definite answer. Like I said before it depends on the camera rate. If I had to pick (and I don’t) I would say that the rainbow plane is going 120 mph and the time between different colored images is 1/30th of a second.
Altitude. If I go with the higher value of the typical google-map planes (like the google map cars but with wings) then the altitude would be around 1000 feet. This lower altitude is why I used the lower value for the airspeed.
Windspeed. Now I am changing my answer for windspeed. I am going to pretend like there is no wind. The perpendicular motion of the colored images could be due to the motion of the google-map plane. | <urn:uuid:ab1372c3-67f1-40bb-a97d-79e3d444774a> | 2.8125 | 1,668 | Personal Blog | Science & Tech. | 77.914116 | 135 |
If you really want to hit a home run with a global warming story, manage to link climate change to the beloved rainforest of the Amazon. The rainforest there is considered by many to be the “lungs of the planet,” the rainforest surely contains a cure for any ailment imaginable, all species in the place are critical to the existence of life on the Earth, and the people of the Amazon are surely the most knowledgeable group on the planet regarding how to care for Mother Earth.
The global warming alarmists have taken full advantage of the Amazon and they are very quick to suggest that the Amazon ecosystem is extremely sensitive to climate change. Furthermore, not only can climate change impact the Amazon, but global climate itself is strongly linked to the state of the Amazon rainforest.
But, as usual, there is more to this story than meets the eye (or, rather, the press).
For instance, a headline last year from USA Today sounded the alarm declaring “Amazon hit by climate chaos of floods, drought”. In the first few sentences, we learn that “Across the Amazon basin, river dwellers are adding new floors to their stilt houses, trying to stay above rising floodwaters that have killed 44 people and left 376,000 homeless. Flooding is common in the world’s largest remaining tropical wilderness, but this year the waters rose higher and stayed longer than they have in decades, leaving fruit trees entirely submerged. Only four years ago, the same communities suffered an unprecedented drought that ruined crops and left mounds of river fish flapping and rotting in the mud. Experts suspect global warming may be driving wild climate swings that appear to be punishing the Amazon with increasing frequency.”
This piece is typical of thousands of other news stories about calamities in the Amazon that are immediately blamed on global warming. Other headlines quickly found include “Ocean Warming - Not El Niño - Drove Severe Amazon Drought in 2005” or “Amazon Droughts Will Accelerate Global Warming” or “Amazon Could Shrink by 85% due to Climate Change, Scientists Say.” Notice that climate change can cause droughts and floods in the Amazon PLUS droughts in the Amazon can cause global warming (by eliminating trees that could uptake atmospheric carbon dioxide). Throughout many of these stories, the words “delicate” and “irreversible” are used over and over.
As we have discussed countless times in other essays, climate models are predicting the greatest warming in the mid-to-high latitudes of the Northern Hemisphere during the winter season. The Amazon is not located in a part of the Earth expected to have substantial warming due to the buildup of greenhouse gases. Somewhat surprisingly, the IPCC Technical Summary comments “The sign of the precipitation response is considered less certain over both the Amazon and the African Sahel. These are regions in which there is added uncertainty due to potential vegetation-climate links, and there is less robustness across models even when vegetation feedbacks are not included.” Basically, the models are not predicting any big changes in precipitation in the Amazon due to the change in atmospheric composition, nor are the models predicting any big change in temperature. Should the people of the Amazon deforest the place down to a parking lot, there is evidence that precipitation would decrease. There is a lot going on in the Amazon – deforestation, elevated carbon dioxide levels, global warming, and all these reported recent droughts and floods. One would think that the entire place is a wreck!
A recent article in Hydrological Processes might come as a huge surprise to the climate change crusade. The first two sentences of the abstract made this one an immediate favorite at World Climate Report. The author has the nerve to write “Rainfall and river indices for both the northern and southern Amazon were used to identify and explore long-term climate variability on the region. From a statistical analysis of the hydrometeorological series, it is concluded that no systematic unidirectional long-term trends towards drier or wetter conditions have been identified since the 1920s.” We should leave it at that!
The author is José Marengo with Brazil’s “Centro de Ciência do Sistema Terrestre/Instituto Nacional de Pesquisas Espaciais”; the work was funded by the Brazilian Research Council and the “UK Global Opportunity Fund-GOF-Dangerous Climate Change”. Very interesting – we suspect the “Dangerous Climate Change” group was not happy with the first two sentences of the abstract.
José Marengo begins the piece noting “The main objective of this study is the assessment of long-term trends and cycles in precipitation in the entire Amazon basin, and over the northern and southern sections. It was addressed by analysing rainfall and streamflow indices, dating from the late 1920s”. Figure 1 shows his subregions within the greater Amazon basin.
Figure 1. Orientation map showing the rainfall network used on this study for (a) northern Amazonia (NAR) and (b) southern Amazonia (SAR) (from Marengo, 2009).
The bottom line here is amazing. The author writes “The analysis of the annual rainfall time series in the Amazon represented by the NAR and SAR indices indicates slight negative trends for the northern Amazon and positive trends for the southern Amazon. However, they are weak and significant at 5% only in the southern Amazon” (Figure 2). So, nothing is happening out of the ordinary in the north and the south is getting wetter. There is definitely variability around the weak trends, but it all seems to be related to natural variability, not deforestation or global warming.
Figure 2. Historical hydrometeorological indices for the Amazon basin. They are expressed as anomalies normalized by the standard deviation from the long-term mean, (a) northern Amazonia, (b) southern Amazonia. The thin line represents the trend. The broken line represents the 10-year moving average (from Marengo, 2009).
Marengo notes “Since 1929, long-term tendencies and trends, some of them statistically significant, have been detected in a set of regional-average rainfall time series in the Amazon basin and supported by the analysis of some river streamflow time series. These long-term variations are more characteristic of decadal and multi-decadal modes, indicators of natural climate variability, rather than any unidirectional trend towards drier conditions (as one would expect, due to increased deforestation or to global warming).” [emphasis added]
José – nice work, have a Cuervo on us!!!
Marengo, J.A. 2009. Long-term trends and cycles in the hydrometeorology of the Amazon basin since the late 1920s. Hydrological Processes, 23, 3236-3244. | <urn:uuid:1d043e2c-548a-4380-aff6-44daad02285d> | 2.84375 | 1,450 | Personal Blog | Science & Tech. | 33.824703 | 136 |
If the city feels hotter to you in the summer, you're right.
The Japan Meteorological Agency has proved that all the asphalt and tall buildings and exhaust heat are indeed to blame.
"Urban heat islands" raised the daily August temperatures by 1 to 2 degrees in Japan's three biggest megalopolises of Tokyo, Osaka and Nagoya, the JMA said July 9.
This is the first time the JMA has analyzed the effects of urban heat islands, where asphalt, buildings and heat from the exhaust of automobiles and air conditioners and other factors contribute to a rise in temperatures.
The JMA used data from last August to simulate air temperatures on the assumption that all ground surface was covered by grassland and that there was no exhaust heat from human activities in the three megalopolises, and compared the numerical outcomes with what was actually recorded.
The urban heat islands accounted for rises of about 2 degrees in the cities' central areas and about 1 degree on their outskirts, JMA officials said.
It is believed that air temperatures have risen about 3 degrees in the three big cities during the last 100 years due to both global warming and urban heat islands, but the JMA has never evaluated to what extent the urban heat islands are responsible.
"Urbanization accounted for as much part of the temperature rises as global warming," a JMA representative said. "The situation is thought to be similar in other regions of advanced urbanization."
| <urn:uuid:ecb6b354-28af-45b3-9398-6ade895716d1> | 3.140625 | 302 | News Article | Science & Tech. | 35.933047 | 137 |
Consider the following in Haskell:
let p x = x ++ show x in putStrLn $ p"let p x = x ++ show x in putStrLn $ p"
Evaluate this expression in an interactive Haskell session and it prints itself out. But there's a nice little cheat that made this easy: the Haskell 'show' function conveniently wraps a string in quotation marks. So we simply have two copies of one piece of code: one without quotes followed by one in quotes. In C, on the other hand, there is a bit of a gotcha. You need to explicitly write code to print those extra quotation marks. And of course, just like in Haskell, this code needs to appear twice, once out of quotes and once in. But the version in quotes needs the quotation marks to be 'escaped' using backslash so it's not actually the same as the first version. And that means we can't use exactly the same method as with Haskell. The standard workaround is not to represent the quotation marks directly in the strings, but instead to use the ASCII code for this character and use C's convenient %c mechanism to print it. For example:
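The example itself appears to have dropped out of this copy of the post; a standard C self-replicator in exactly this style (10 and 34 are the ASCII codes for newline and the double quote, fed in through %c) looks something like:

#include <stdio.h>
int main(){char*s="#include <stdio.h>%cint main(){char*s=%c%s%c;printf(s,10,34,s,34,10);return 0;}%c";printf(s,10,34,s,34,10);return 0;}

Running it prints its own two-line source: the string s is printed once as code and once, via %s between two %c-generated quotes, as the quoted string literal.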
Again we were lucky, C provides this great %c mechanism. What do you need in a language to be sure you can write a self-replicator?
It turns out there is a very general approach to writing self-replicators that's described in Vicious Circles. What follows is essentially from there except that I've simplified the proofs by reducing generality.
We'll use capital letters to represent programs. Typically these mean 'inert' strings of characters. I'll use square brackets to indicate the function that the program evaluates. So if P is a program to compute the mathematical function p, we write [P](x) = p(x). P is a program and [P] is a function. We'll consider both programs that take arguments like the P I just mentioned, and also programs, R, that take no arguments, so [R] is simply the output or return value of the program R.
Now we come to an important operation. We've defined [P](x) to be the result of running P with input x. Now we define P(x) to be the program P modified so that it no longer takes an argument or input but instead substitutes the 'hard-coded' value of x instead. In other words [P(x)] = [P](x). P(x) is, of course, another program. There are also many ways of implementing P(x). We could simply evaluate [P](x) and write a program to simply print this out or return it. On the other hand, we could do the absolute minimum and write a new piece of code that simply calls P and supplies it with a hard-coded argument. Whatever we choose is irrelevant to the following discussion. So here's the demand that we make of our programming language: that it's powerful enough for us to write a program that can compute P(x) from inputs P and x. This might not be a trivial program to write, but it's not conceptually hard either. It doesn't have gotchas like the quotation mark issue above. Typically we can compute P(x) by some kind of textual substitution on P.
With that assumption in mind, here's a theorem: any program P that takes one argument or input has a fixed point, X, in the sense that running P with input X gives the same result as just running X. Given an input X, P acts just like an interpreter for the programming language, as it outputs the same thing as an interpreter would given input X.
So here's a proof:
Define the function f(Q) = [P](Q(Q)). We've assumed that we can write a program that computes P(x) from P and x so we know we can write a program to compute Q(Q) for any Q. We can then feed this as an input to [P]. So f is obviously computable by some program which we call Q0. So [Q0](Q) = [P](Q(Q)).
Now the fun starts:
[P](Q0(Q0)) = [Q0](Q0) (by definition of Q0)
= [Q0(Q0)] (by definition of P(x))
In other words Q0(Q0) is our fixed point.
So now take P to compute the identity function. Then [Q0(Q0)] = [P](Q0(Q0)) = Q0(Q0). So Q0(Q0) outputs itself when run! What's more, this also tells us how to do other fun stuff like write a program to print itself out backwards. And it tells us how to do this in any reasonably powerful programming language. We don't need to worry about having to work around problems like 'escaping' quotation marks - we can always find a way to replicate the escape mechanism too.
So does it work in practice? Well it does for Haskell - I derived the Haskell fragment above by applying this theorem directly, and then simplifying a bit. For C++, however, it might give you a piece of code that is longer than you want. In fact, you can go one step further and write a program that automatically generates a self-replicator. Check out Samuel Moelius's kpp. It is a preprocessor that converts an ordinary C++ program into one that can access its own source code by including the code to generate its own source within it.
Another example of an application of these methods is Futamura's theorem which states that there exists a program that can take as input an interpreter for a language and output a compiler. I personally think this is a little bogus. | <urn:uuid:e9f8736e-fa3e-4ea6-b907-b80b1d97b5d9> | 3.171875 | 1,215 | Personal Blog | Software Dev. | 69.541045 | 138 |
What is Fluorescence?
Fluorescence is the ability of certain chemicals to give off visible light after absorbing radiation which is not normally visible, such as ultraviolet light. This property has led to a variety of uses. Let’s shed some further light on this topic; consider the omnipresent “fluorescent” lights. Just how do they work? Fluorescent tubes contain a small amount of mercury vapor. The application of an electric current causes a stream of electrons to traverse the tube. These collide with the mercury atoms which become energized and consequently emit ultraviolet light. The inside of the tube is coated with a fluorescent material, such as calcium chlorophosphate, which converts the invisible ultraviolet light into visible light. The same idea is used to produce color television pictures. The screen is coated with tiny dots of substances which fluoresce in different colours when they are excited by a beam of electrons which is used to scan the picture.
But fluorescent materials had practical uses even before we dreamed of color television. One of the most amazing of all fluorescent materials is a synthetic compound, appropriately called fluorescein. Under ultraviolet light it produces an intense yellow-green fluorescence which during World War II was responsible for saving the lives of many downed flyers. Over a million pounds of the stuff were manufactured in 1943 and distributed to airmen in little packets to use as a sea marker. Since the fluorescence is so potent that it can be seen when the concentration of fluorescein is as little as 25 parts per billion, rescue planes easily spotted the men in the ocean. Aircraft carriers also made extensive use of fluorescein. The signal men on deck wore clothes and waved flags treated with the compound which was then made to glow by illumination with ultraviolet light. The incoming pilots could clearly see the deck and the need to use runway lights which would have drawn the attention of enemy aircraft was eliminated. Certain natural substances also fluoresce under ultraviolet light. Urine and moose fur are interesting examples. Prisoners have actually made use of this property of urine and have used it as a secret ink. What about the moose fur? Well, in Canada and Sweden there are hundreds of accidents each year involving the collision of automobiles with moose. Some of these result in fatalities. Some car manufacturers are now considering fitting their vehicles with UV emitting headlights to reduce moose collisions! How’s that for putting the right chemistry to work. | <urn:uuid:8a9fffa2-d413-4b8a-87c4-d9c60c04d098> | 3.671875 | 494 | Personal Blog | Science & Tech. | 38.59486 | 139 |
February 3, 2010
Few things in my life have brought me as much joy as watching sea otters play in the waters near Monterey, Calif. So when I heard this week that the frisky yet endangered critters may be slightly expanding their habitat, I figured everyone would think that was good news.
Once hunted into near-extinction for their fur, the southern, or California, sea otter (Enhydra lutris nereis) now numbers around 2,600 to 2,700 animals, all of which live in a fairly small habitat range off the central California coast. The problem is that their extant habitat is the only place the U.S. Endangered Species Act (ESA) grants them protected status. (Although they are also protected under California state law and the U.S. Marine Mammal Protection Act, those laws do not govern habitat.) Everything south of their current habitat is designated a "no-otter zone".
The origins of this restriction shouldn’t surprise anyone. When otters were first listed as a threatened species under the ESA, they were protected everywhere, according to Allison Ford, executive director of The Otter Project in Monterey, Calif. But in order to protect species, the ESA requires the creation of a recovery plan. In this case the U.S. Fish and Wildlife Service wanted to try to move some otters to a new habitat. This "experimental population" would protect the southern otter from extinction in a catastrophic event, such as a major oil spill. But in order to create a new habitat for the otters, the government also created a no-otter zone, an area where the animals would not be able to impact the fishing or oil industries.
Unfortunately, "the experimental population never thrived," Ford says. But the otter-free zone remains.
And now some otters are swimming past that imaginary line in the surf in search of sea urchins and other tasty marine life in the forbidden zone. Fishermen are not happy with the encroachment. "Based on historic action, we think eventually they’ll wipe out the shellfish industry in California," Vern Goehring, executive director of the California Sea Urchin Commission, told the Associated Press.
So why are sea otters swimming into verboten territory? "Food supply is always an impediment to otter survival and expansion," Ford says. "Scientists believe that food limitation is an issue in certain parts of the otter’s range."
Ford says that large, bachelor otters "tend to go back and forth over the no-otter line. Certainly, abundant prey that otters like to eat exists in the no-otter zone," and because humans tend to like the same foods, that creates conflict.
"Otters eat voraciously," Ford says. "They have a strong appetite, and eat 25 percent of their body weight every day." Otters do not have blubber, and use their fur and their high metabolisms to keep warm.
Otters can impact fisheries and the industry’s ability to operate at the same productivity levels it is used to, Ford says, adding: "Sea urchin is where the big conflict is." But she points out that the very reason there is a sea urchin industry is because otters no longer exist in their historic habitats. "Otters are a keystone species, and they maintain sea urchins, which in turn eat kelp. When otters were removed from the ecosystem, you lost the kelp, which hurt total biodiversity." Restoring sea otters in other areas of California, Ford says, could actually increase biodiversity and create additional fishing markets.
No matter what happens, the sea otter expansion won’t be anything that happens overnight. Populations have dipped slightly the past two years, and only a few dozen otters make their way regularly into the sans otter zone. But for now, that’s enough to get some people worried—and angry.
Image: Sea otter, via Wikipedia | <urn:uuid:aa7e8a6a-3149-4647-b097-a763f8ba1a3d> | 3.15625 | 843 | Personal Blog | Science & Tech. | 48.020844 | 140 |
For those seeking to understand and manage ecosystems, a key idea has resonated for more than two decades: spatial variation is essential for ecological sustainability over time. Now a new book examines the impact of that revelation.
The Center for Systems Integration and Sustainability at Michigan State University integrates ecology with socioeconomics, demography and other disciplines for ecological sustainability from local, national to global scales.
Coupled Human and Natural Systems(CHANS) are integrated systems in which humans and natural components interact. CHANS research has recently emerged as an exciting and integrative field of cross-disciplinary scientific inquiry to find sustainable solutions that both benefit the environment and enable people to thrive. Visit CHANS-Net, the international network of research on coupled human and natural systems, for information and ways to engage. | <urn:uuid:f5e920cf-6dde-417e-8074-7339b64f476a> | 2.640625 | 161 | Content Listing | Science & Tech. | 0.620732 | 141 |
Invasive red-eared sliders compete with native turtles for food, habitat and basking and nesting sites.
Oregon’s aquatic invasive species
Oregon terrestrial invasive species
Rick Boatner, ODFW Invasive Species coordinator
Martyne Reesman, ODFW Aquatic Invasive Species technician
Invasive species are animals and plants that are not native to an ecosystem and that cause economic or environmental harm. While not all non-native species are invasive, many become a serious problem. They damage Oregon’s habitats and can aggressively compete with native species for food, water and habitat. Choose from the species lists above to learn about specific species. Visit the Oregon Department of Agriculture website to learn about invasive plants | <urn:uuid:dbe935d4-f609-4c19-bbc9-438ddb36a283> | 3.359375 | 148 | Knowledge Article | Science & Tech. | 5.956054 | 142 |
man pages section 3: Networking Library Functions (Oracle Solaris 11 Information Library)
getnetbyname, getnetbyname_r, getnetbyaddr, getnetbyaddr_r, getnetent, getnetent_r, setnetent, endnetent - get network entry
cc [ flag ... ] file ... -lsocket -lnsl [ library ... ]

#include <netdb.h>

struct netent *getnetbyname(const char *name);
struct netent *getnetbyname_r(const char *name, struct netent *result, char *buffer, int buflen);
struct netent *getnetbyaddr(long net, int type);
struct netent *getnetbyaddr_r(long net, int type, struct netent *result, char *buffer, int buflen);
struct netent *getnetent(void);
struct netent *getnetent_r(struct netent *result, char *buffer, int buflen);
int setnetent(int stayopen);

int endnetent(void);
These functions are used to obtain entries for networks. An entry may come from any of the sources for networks specified in the /etc/nsswitch.conf file. See nsswitch.conf(4).
getnetbyname() searches for a network entry with the network name specified by the character string parameter name.
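For example, a minimal lookup might be written as follows (an illustrative sketch, not from the original page; the "loopback" entry is typically present in /etc/networks):

#include <stdio.h>
#include <netdb.h>

int main(void)
{
        /* Look up a network by name; NULL means it was not found. */
        struct netent *np = getnetbyname("loopback");

        if (np == NULL) {
                fprintf(stderr, "network not found\n");
                return 1;
        }
        printf("%s has network number %lu\n",
            np->n_name, (unsigned long)np->n_net);
        return 0;
}

Compile with the libraries shown in the synopsis: cc file.c -lsocket -lnsl.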
getnetbyaddr() searches for a network entry with the network address specified by net. The parameter type specifies the family of the address. This should be one of the address families defined in <sys/socket.h>. See the NOTES section below for more information.
Network numbers and local address parts are returned as machine format integer values, that is, in host byte order. See also inet(3SOCKET).
The netent.n_net member in the netent structure pointed to by the return value of the above functions is calculated by inet_network(). The inet_network() function returns a value in host byte order that is aligned based upon the input string. For example:
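The example values below are an illustrative reconstruction, assuming the usual inet_network() parsing, in which each additional dotted component shifts the result left by one byte:

#include <arpa/inet.h>

in_addr_t n;

n = inet_network("10");        /* 0x0000000a */
n = inet_network("10.0");      /* 0x00000a00 */
n = inet_network("10.0.0");    /* 0x000a0000 */
n = inet_network("10.0.0.0");  /* 0x0a000000 */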
Commonly, the alignment of the returned value is used as a crude approximation of a pre-CIDR (Classless Inter-Domain Routing) subnet mask. For example:
in_addr_t addr, mask;

addr = inet_network(net_name);
mask = ~(in_addr_t)0;
if ((addr & IN_CLASSA_NET) == 0)
        addr <<= 8, mask <<= 8;
if ((addr & IN_CLASSA_NET) == 0)
        addr <<= 8, mask <<= 8;
if ((addr & IN_CLASSA_NET) == 0)
        addr <<= 8, mask <<= 8;
This usage is deprecated by the CIDR requirements. See Fuller, V., Li, T., Yu, J., and Varadhan, K. RFC 1519, Classless Inter-Domain Routing (CIDR): an Address Assignment and Aggregation Strategy. Network Working Group. September 1993.
The functions setnetent(), getnetent(), and endnetent() are used to enumerate network entries from the database.
setnetent() sets (or resets) the enumeration to the beginning of the set of network entries. This function should be called before the first call to getnetent(). Calls to getnetbyname() and getnetbyaddr() leave the enumeration position in an indeterminate state. If the stayopen flag is non-zero, the system may keep allocated resources such as open file descriptors until a subsequent call to endnetent().
Successive calls to getnetent() return either successive entries or NULL, indicating the end of the enumeration.
endnetent() may be called to indicate that the caller expects to do no further network entry retrieval operations; the system may then deallocate resources it was using. It is still allowed, but possibly less efficient, for the process to call more network entry retrieval functions after calling endnetent().
The functions getnetbyname(), getnetbyaddr(), and getnetent() use static storage that is reused in each call, making these routines unsafe for use in multi-threaded applications.
The functions getnetbyname_r(), getnetbyaddr_r(), and getnetent_r() provide reentrant interfaces for these operations.
Each reentrant interface performs the same operation as its non-reentrant counterpart, named by removing the "_r" suffix. The reentrant interfaces, however, use buffers supplied by the caller to store returned results, and are safe for use in both single-threaded and multi-threaded applications.
Each reentrant interface takes the same parameters as its non-reentrant counterpart, as well as the following additional parameters. The parameter result must be a pointer to a struct netent structure allocated by the caller. On successful completion, the function returns the network entry in this structure. The parameter buffer must be a pointer to a buffer supplied by the caller. This buffer is used as storage space for the network entry data. All of the pointers within the returned struct netent result point to data stored within this buffer. See RETURN VALUES. The buffer must be large enough to hold all of the data associated with the network entry. The parameter buflen should give the size in bytes of the buffer indicated by buffer.
For enumeration in multi-threaded applications, the position within the enumeration is a process-wide property shared by all threads. setnetent() may be used in a multi-threaded application but resets the enumeration position for all threads. If multiple threads interleave calls to getnetent_r(), the threads will enumerate disjointed subsets of the network database.
Like their non-reentrant counterparts, getnetbyname_r() and getnetbyaddr_r() leave the enumeration position in an indeterminate state.
Network entries are represented by the struct netent structure defined in <netdb.h>.
The functions getnetbyname(), getnetbyname_r(), getnetbyaddr(), and getnetbyaddr_r() each return a pointer to a struct netent if they successfully locate the requested entry; otherwise they return NULL.
The functions getnetent() and getnetent_r() each return a pointer to a struct netent if they successfully enumerate an entry; otherwise they return NULL, indicating the end of the enumeration.
The functions getnetbyname(), getnetbyaddr(), and getnetent() use static storage, so returned data must be copied before a subsequent call to any of these functions if the data is to be saved.
When the pointer returned by the reentrant functions getnetbyname_r(), getnetbyaddr_r(), and getnetent_r() is non-NULL, it is always equal to the result pointer that was supplied by the caller.
The functions setnetent() and endnetent() return 0 on success.
The reentrant functions getnetbyname_r(), getnetbyaddr_r(), and getnetent_r() will return NULL and set errno to ERANGE if the length of the buffer supplied by the caller is not large enough to store the result. See Intro(2) for the proper usage and interpretation of errno in multi-threaded applications.
/etc/networks
    network name database
/etc/nsswitch.conf
    configuration file for the name service switch
See attributes(5) for descriptions of the following attributes:
Fuller, V., Li, T., Yu, J., and Varadhan, K. RFC 1519, Classless Inter-Domain Routing (CIDR): an Address Assignment and Aggregation Strategy. Network Working Group. September 1993.
The reentrant interfaces getnetbyname_r(), getnetbyaddr_r(), and getnetent_r() are included in this release on an uncommitted basis only, and are subject to change or removal in future minor releases.
The current implementation of these functions only return or accept network numbers for the Internet address family (type AF_INET). The functions described in inet(3SOCKET) may be helpful in constructing and manipulating addresses and network numbers in this form.
When compiling multi-threaded applications, see Intro(3), Notes On Multithread Applications, for information about the use of the _REENTRANT flag.
Use of the enumeration interfaces getnetent() and getnetent_r() is discouraged; enumeration may not be supported for all database sources. The semantics of enumeration are discussed further in nsswitch.conf(4). | <urn:uuid:8774debe-ee15-4278-b547-054c813f354c> | 2.90625 | 1,804 | Documentation | Software Dev. | 39.153842 | 143 |
New in version 2.4.
The cookielib module defines classes for automatic handling of HTTP cookies. It is useful for accessing web sites that require small pieces of data - cookies - to be set on the client machine by an HTTP response from a web server, and then returned to the server in later HTTP requests.
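A typical pairing is with urllib2; here is a minimal usage sketch (example.com is a placeholder):

import cookielib
import urllib2

# Build an opener that stores received cookies in a CookieJar and
# sends them back on later requests made through the same opener.
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))

response = opener.open("http://example.com/")
for cookie in cj:
    print cookie.name, cookie.value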
Both the regular Netscape cookie protocol and the protocol defined by RFC 2965 are handled. RFC 2965 handling is switched off by default. RFC 2109 cookies are parsed as Netscape cookies and subsequently treated either as Netscape or RFC 2965 cookies according to the 'policy' in effect. Note that the great majority of cookies on the Internet are Netscape cookies. cookielib attempts to follow the de-facto Netscape cookie protocol (which differs substantially from that set out in the original Netscape specification), including taking note of the max-age and port cookie-attributes introduced with RFC 2965.

Note:
The various named parameters found in Set-Cookie: and Set-Cookie2: headers (eg. domain and expires) are conventionally referred to as attributes. To distinguish them from Python attributes, the documentation for this module uses the term cookie-attribute instead.
The module defines the following exception:

exception LoadError

Instances of FileCookieJar raise this exception on failure to load cookies from a file.
The following classes are provided:
The CookieJar class stores HTTP cookies. It extracts cookies from HTTP requests, and returns them in HTTP responses. CookieJar instances automatically expire contained cookies when necessary. Subclasses are also responsible for storing and retrieving cookies from a file or database.
class FileCookieJar(filename, delayload=None, policy=None)
A CookieJar which can load cookies from, and perhaps save cookies to, a file on disk. Cookies are NOT loaded from the named file until either the load() or revert() method is called. Subclasses of this class are documented in section 18.22.2.
class DefaultCookiePolicy(blocked_domains=None, allowed_domains=None, netscape=True, rfc2965=False, rfc2109_as_netscape=None, hide_cookie2=False, strict_domain=False, strict_rfc2965_unverifiable=True, strict_ns_unverifiable=False, strict_ns_domain=DefaultCookiePolicy.DomainLiberal, strict_ns_set_initial_dollar=False, strict_ns_set_path=False)
Constructor arguments should be passed as keyword arguments only. blocked_domains is a sequence of domain names that we never accept cookies from, nor return cookies to. allowed_domains, if not None, is a sequence of the only domains for which we accept and return cookies. For all other arguments, see the documentation for CookiePolicy and DefaultCookiePolicy objects.
DefaultCookiePolicy implements the standard accept / reject rules for Netscape and RFC 2965 cookies. By default, RFC 2109 cookies (ie. cookies received in a Set-Cookie: header with a version cookie-attribute of 1) are treated according to the RFC 2965 rules. However, if RFC 2965 handling is turned off or rfc2109_as_netscape is True, RFC 2109 cookies are 'downgraded' by the CookieJar instance to Netscape cookies, by setting the version attribute of the Cookie instance to 0. DefaultCookiePolicy also provides some parameters to allow some fine-tuning of policy. | <urn:uuid:ea0c6176-36d3-463f-8c1b-0df500940a87> | 2.578125 | 702 | Documentation | Software Dev. | 34.168097 | 144 |
Hydrothermal circulation in its most general sense is the circulation of hot water; 'hydros' in the Greek meaning water and 'thermos' meaning heat. Hydrothermal circulation occurs most often in the vicinity of sources of heat within the Earth's crust. This generally occurs near volcanic activity, but can occur in the deep crust related to the intrusion of granite, or as the result of orogeny or metamorphism.
Seafloor hydrothermal circulation
The term includes both the circulation of the well known, high temperature vent waters near the ridge crests, and the much lower temperature, diffuse flow of water through sediments and buried basalts further from the ridge crests. The former circulation type is sometimes termed "active", and the latter "passive". In both cases the principle is the same: cold dense seawater sinks into the basalt of the seafloor and is heated at depth whereupon it rises back to the rock-ocean water interface due to its lesser density. The heat source for the active vents is the newly formed basalt, and, for the highest temperature vents, the underlying magma chamber. The heat source for the passive vents is the still-cooling older basalts. Heat flow studies of the seafloor suggest that basalts within the oceanic crust take millions of years to completely cool as they continue to support passive hydrothermal circulation systems.
Hydrothermal vents are locations on the seafloor where hydrothermal fluids mix into the overlying ocean. Perhaps the best known vent forms are the naturally-occurring chimneys referred to as black smokers.
Hydrothermal circulation is not limited to ocean ridge environments. The source water for hydrothermal explosions, geysers and hot springs is heated groundwater convecting below and lateral to the hot water vent. Hydrothermal circulating convection cells exist any place an anomalous source of heat, such as an intruding magma or volcanic vent, comes into contact with the groundwater system.
Deep crust
Hydrothermal also refers to the transport and circulation of water within the deep crust, generally from areas of hot rocks to areas of cooler rocks. The causes for this convection can be:
- Intrusion of magma into the crust
- Radioactive heat generated by cooled masses of granite
- Heat from the mantle
- Hydraulic head from mountain ranges, for example, the Great Artesian Basin
- Dewatering of metamorphic rocks which liberates water
- Dewatering of deeply buried sediments
Hydrothermal ore deposits
During the early 1900s various geologists worked to classify hydrothermal ore deposits which were assumed to have formed from upward flowing aqueous solutions. Waldemar Lindgren developed a classification based on interpreted decreasing temperature and pressure conditions of the depositing fluid. His terms: hypothermal, mesothermal, epithermal and telethermal were based on decreasing temperature and increasing distance from a deep source. Only the epithermal has been used in recent works. John Guilbert's 1986 redo of Lindgren's system for hydrothermal deposits includes the following:
- Ascending hydrothermal fluids, magmatic or meteoric water
- Porphyry copper and other deposits, 200 - 800 °C, moderate pressure
- Igneous metamorphic, 300 - 800 °C, low - moderate pressure
- Cordilleran veins, intermediate to shallow depths
- Epithermal, shallow to intermediate, 50 - 300 °C, low pressure
- Circulating heated meteoric solutions
- Circulating heated seawater
- Oceanic ridge deposits, 25 - 300 °C, low pressure
References
- W. Lindgren, 1933, Mineral Deposits, McGraw Hill, 4th ed.
- Guilbert, John M. and Charles F. Park, Jr., 1986, The Geology of Ore Deposits, Freeman, p. 302 ISBN 0-7167-1456-6 | <urn:uuid:0e4197df-03b2-41c6-9c75-3c14bfc507e3> | 3.96875 | 830 | Knowledge Article | Science & Tech. | 29.31706 | 145 |
Phosphorescence is a specific type of photoluminescence related to fluorescence. Unlike fluorescence, a phosphorescent material does not immediately re-emit the radiation it absorbs. The slower time scales of the re-emission are associated with "forbidden" energy state transitions in quantum mechanics. As these transitions occur very slowly in certain materials, absorbed radiation may be re-emitted at a lower intensity for up to several hours after the original excitation.
Commonly seen examples of phosphorescent materials are the glow-in-the-dark toys, paint, and clock dials that glow for some time after being charged with a bright light such as in any normal reading or room light. Typically the glowing then slowly fades out within minutes (or up to a few hours) in a dark room.
The study of phosphorescent materials led to the discovery of radioactivity in 1896.
In simple terms, phosphorescence is a process in which energy absorbed by a substance is released relatively slowly in the form of light. This is in some cases the mechanism used for "glow-in-the-dark" materials which are "charged" by exposure to light. Unlike the relatively swift reactions in a common fluorescent tube, phosphorescent materials used for these materials absorb the energy and "store" it for a longer time as the processes required to re-emit the light occur less often.
Quantum mechanical
Most photoluminescent events, in which a chemical substrate absorbs and then re-emits a photon of light, are fast, on the order of 10 nanoseconds. Light is absorbed and emitted at these fast time scales in cases where the energy of the photons involved matches the available energy states and allowed transitions of the substrate. In the special case of phosphorescence, the absorbed photon energy undergoes an unusual intersystem crossing into an energy state of higher spin multiplicity (see term symbol), usually a triplet state. As a result, the energy can become trapped in the triplet state with only classically "forbidden" transitions available to return to the lower energy state. These transitions, although "forbidden", will still occur in quantum mechanics but are kinetically unfavored and thus progress at significantly slower time scales. Most phosphorescent compounds are still relatively fast emitters, with triplet lifetimes on the order of milliseconds. However, some compounds have triplet lifetimes up to minutes or even hours, allowing these substances to effectively store light energy in the form of very slowly degrading excited electron states. If the phosphorescent quantum yield is high, these substances will release significant amounts of light over long time scales, creating so-called "glow-in-the-dark" materials.
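Schematically (a reconstruction of the equation that appeared here, using the states defined just below), the sequence is:

$$S_0 + h\nu \to S_1 \to T_1 \to S_0 + h\nu'$$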
where S is a singlet and T a triplet whose subscripts denote states (0 is the ground state, and 1 the excited state). Transitions can also occur to higher energy levels, but the first excited state is denoted for simplicity.
Some examples of "glow-in-the-dark" materials do not glow by phosphorescence. For example, "glow sticks" glow due to a chemiluminescent process which is commonly mistaken for phosphorescence. In chemiluminescence, an excited state is created via a chemical reaction. The light emission tracks the kinetic progress of the underlying chemical reaction. The excited state will then transfer to a "dye" molecule, also known as a sensitizer or fluorophor, and subsequently fluoresce back to the ground state.
Common pigments used in phosphorescent materials include zinc sulfide and strontium aluminate. Use of zinc sulfide for safety related products dates back to the 1930s. However, the development of strontium aluminate, with a luminance approximately 10 times greater than zinc sulfide, has relegated most zinc sulfide based products to the novelty category. Strontium aluminate based pigments are now used in exit signs, pathway marking, and other safety related signage.
| <urn:uuid:cbfeed37-69d9-4a8e-b3f9-0b5f8344ab1e> | 3.703125 | 983 | Knowledge Article | Science & Tech. | 33.322812 | 146 |
In quantum field theory, the vacuum state (also called the vacuum) is the quantum state with the lowest possible energy. Generally, it contains no physical particles. Zero-point field is sometimes used as a synonym for the vacuum state of an individual quantized field.
According to present-day understanding of what is called the vacuum state or the quantum vacuum, it is "by no means a simple empty space", and again: "it is a mistake to think of any physical vacuum as some absolutely empty void." According to quantum mechanics, the vacuum state is not truly empty but instead contains fleeting electromagnetic waves and particles that pop into and out of existence.
The QED vacuum of quantum electrodynamics (or QED) was the first vacuum of quantum field theory to be developed. QED originated in the 1930s, and in the late 1940s and early 1950s it was reformulated by Feynman, Tomonaga and Schwinger, who jointly received the Nobel prize for this work in 1965. Today the electromagnetic interactions and the weak interactions are unified in the theory of the electroweak interaction.
The Standard Model is a generalization of the QED work to include all the known elementary particles and their interactions (except gravity). Quantum chromodynamics is the portion of the Standard Model that deals with strong interactions, and QCD vacuum is the vacuum of quantum chromodynamics. It is the object of study in the Large Hadron Collider and the Relativistic Heavy Ion Collider, and is related to the so-called vacuum structure of strong interactions.
Non-zero expectation value
If the quantum field theory can be accurately described through perturbation theory, then the properties of the vacuum are analogous to the properties of the ground state of a quantum mechanical harmonic oscillator (or more accurately, the ground state of a QM problem). In this case the vacuum expectation value (VEV) of any field operator vanishes. For quantum field theories in which perturbation theory breaks down at low energies (for example, Quantum chromodynamics or the BCS theory of superconductivity) field operators may have non-vanishing vacuum expectation values called condensates. In the Standard Model, the non-zero vacuum expectation value of the Higgs field, arising from spontaneous symmetry breaking, is the mechanism by which the other fields in the theory acquire mass.
In many situations, the vacuum state can be defined to have zero energy, although the actual situation is considerably more subtle. The vacuum state is associated with a zero-point energy, and this zero-point energy has measurable effects. In the laboratory, it may be detected as the Casimir effect. In physical cosmology, the energy of the cosmological vacuum appears as the cosmological constant. In fact, the energy of a cubic centimeter of empty space has been calculated figuratively to be one trillionth of an erg. An outstanding requirement imposed on a potential Theory of Everything is that the energy of the quantum vacuum state must explain the physically observed cosmological constant.
For a relativistic field theory, the vacuum is Poincaré invariant, which follows from Wightman axioms. Poincaré invariance implies that only scalar combinations of field operators have non-vanishing VEV's. The VEV may break some of the internal symmetries of the Lagrangian of the field theory. In this case the vacuum has less symmetry than the theory allows, and one says that spontaneous symmetry breaking has occurred. See Higgs mechanism, standard model.
Electrical permittivity
In principle, quantum corrections to Maxwell's equations can cause the experimental electrical permittivity ε of the vacuum state to deviate from the defined scalar value ε0 of the electric constant. These theoretical developments are described, for example, in Dittrich and Gies. In particular, the theory of quantum electrodynamics predicts that the QED vacuum should exhibit nonlinear effects that will make it behave like a birefringent material with ε slightly greater than ε0 for extremely strong electric fields. Explanations for dichroism from particle physics, outside quantum electrodynamics, also have been proposed. Active attempts to measure such effects have been unsuccessful so far.
The vacuum state is written as $|0\rangle$ or $|\rangle$. The VEV of a field $\phi$, which should be written as $\langle 0|\phi|0\rangle$, is usually condensed to $\langle\phi\rangle$.
Virtual particles
The presence of virtual particles can be rigorously based upon the non-commutation of the quantized electromagnetic fields. Non-commutation means that although the average values of the fields vanish in a quantum vacuum, their variances do not. The term "vacuum fluctuations" refers to the variance of the field strength in the minimal energy state, and is described picturesquely as evidence of "virtual particles".
It is sometimes attempted to provide an intuitive picture of virtual particles based upon the Heisenberg energy-time uncertainty principle:
$$\Delta E \, \Delta t \ge \frac{\hbar}{2}$$

(with ΔE and Δt being the energy and time variations respectively; ΔE is the accuracy in the measurement of energy and Δt is the time taken in the measurement, and ħ is the Planck constant divided by 2π) arguing along the lines that the short lifetime of virtual particles allows the "borrowing" of large energies from the vacuum and thus permits particle generation for short times.
Although the phenomenon of virtual particles is accepted, this interpretation of the energy-time uncertainty relation is not universal. One issue is the use of an uncertainty relation limiting measurement accuracy as though a time uncertainty Δt determines a "budget" for borrowing energy ΔE. Another issue is the meaning of "time" in this relation, because energy and time (unlike position q and momentum p, for example) do not satisfy a canonical commutation relation (such as [q, p] = i ħ). Various schemes have been advanced to construct an observable that has some kind of time interpretation, and yet does satisfy a canonical commutation relation with energy. The many approaches to the energy-time uncertainty principle remain a long and continuing subject of research.
Physical nature of the quantum vacuum
According to Astrid Lambrecht (2002): "When one empties out a space of all matter and lowers the temperature to absolute zero, one produces in a Gedankenexperiment the quantum vacuum state."
Photon-photon interaction can occur only through interaction with the vacuum state of some other field, for example through the Dirac electron-positron vacuum field; this is associated with the concept of vacuum polarization.
According to Milonni (1994): "... all quantum fields have zero-point energies and vacuum fluctuations." This means that there is a component of the quantum vacuum respectively for each component field (considered in the conceptual absence of the other fields), such as the electromagnetic field, the Dirac electron-positron field, and so on.
According to Milonni (1994), some of the effects attributed to the vacuum electromagnetic field can have several physical interpretations, some more conventional than others. The Casimir attraction between uncharged conductive plates is often proposed as an example of an effect of the vacuum electromagnetic field. Schwinger, DeRaad, and Milton (1978) are cited by Milonni (1994) as validly, though unconventionally, explaining the Casimir effect with a model in which "the vacuum is regarded as truly a state with all physical properties equal to zero." In this model, the observed phenomena are explained as the effects of the electron motions on the electromagnetic field, called the source field effect. Milonni writes: "The basic idea here will be that the Casimir force may be derived from the source fields alone even in completely conventional QED, ..." Milonni provides detailed argument that the measurable physical effects usually attributed to the vacuum electromagnetic field cannot be explained by that field alone, but require in addition a contribution from the self-energy of the electrons, or their radiation reaction. He writes: "The radiation reaction and the vacuum fields are two aspects of the same thing when it comes to physical interpretations of various QED processes including the Lamb shift, van der Waals forces, and Casimir effects." This point of view is also stated by Jaffe (2005): "The Casimir force can be calculated without reference to vacuum fluctuations, and like all other observable effects in QED, it vanishes as the fine structure constant, α, goes to zero."
References and notes
- Astrid Lambrecht (Hartmut Figger, Dieter Meschede, Claus Zimmermann Eds.) (2002). Observing mechanical dissipation in the quantum vacuum: an experimental challenge; in Laser physics at the limits. Berlin/New York: Springer. p. 197. ISBN 3-540-42418-0.
- Christopher Ray (1991). Time, space and philosophy. London/New York: Routledge. Chapter 10, p. 205. ISBN 0-415-03221-0.
- AIP Physics News Update, 1996
- Physical Review Focus Dec. 1998
- Walter Dittrich & Gies H (2000). Probing the quantum vacuum: perturbative effective action approach. Berlin: Springer. ISBN 3-540-67428-4.
- For an historical discussion, see for example Ari Ben-Menaḥem, ed. (2009). "Quantum electrodynamics (QED)". Historical Encyclopedia of Natural and Mathematical Sciences, Volume 1 (5th ed.). Springer. pp. 4892 ff. ISBN 3-540-68831-5. For the Nobel prize details and the Nobel lectures by these authors see "The Nobel Prize in Physics 1965". Nobelprize.org. Retrieved 2012-02-06.
- Jean Letessier, Johann Rafelski (2002). Hadrons and Quark-Gluon Plasma. Cambridge University Press. p. 37 ff. ISBN 0-521-38536-9.
- Sean Carroll, Sr Research Associate - Physics, California Institute of Technology, June 22, 2006 C-SPAN broadcast of Cosmology at Yearly Kos Science Panel, Part 1
- David Delphenich (2006). "Nonlinear Electrodynamics and QED". arXiv:hep-th/0610088 [hep-th].
- Klein, James J. and B. P. Nigam, Birefringence of the vacuum, Physical Review vol. 135, p. B1279-B1280 (1964).
- Mourou, G. A., T. Tajima, and S. V. Bulanov, Optics in the relativistic regime; § XI Nonlinear QED, Reviews of Modern Physics vol. 78 (no. 2), 309-371 (2006) pdf file.
- Holger Gies; Joerg Jaeckel; Andreas Ringwald (2006). "Polarized Light Propagating in a Magnetic Field as a Probe of Millicharged Fermions". Physical Review Letters 97 (14). arXiv:hep-ph/0607118. Bibcode:2006PhRvL..97n0402G. doi:10.1103/PhysRevLett.97.140402.
- Davis; Joseph Harris; Gammon; Smolyaninov; Kyuman Cho (2007). "Experimental Challenges Involved in Searches for Axion-Like Particles and Nonlinear Quantum Electrodynamic Effects by Sensitive Optical Techniques". arXiv:0704.0748 [hep-th].
- Myron Wyn Evans, Stanisław Kielich (1994). Modern nonlinear optics, Volume 85, Part 3. John Wiley & Sons. p. 462. ISBN 0-471-57548-8. "For all field states that have classical analog the field quadrature variances are also greater than or equal to this commutator."
- David Nikolaevich Klyshko (1988). Photons and nonlinear optics. Taylor & Francis. p. 126. ISBN 2-88124-669-9.
- Milton K. Munitz (1990). Cosmic Understanding: Philosophy and Science of the Universe. Princeton University Press. p. 132. ISBN 0-691-02059-0. "The spontaneous, temporary emergence of particles from vacuum is called a "vacuum fluctuation"."
- For an example, see P. C. W. Davies (1982). The accidental universe. Cambridge University Press. p. 106. ISBN 0-521-28692-1.
- A vaguer description is provided by Jonathan Allday (2002). Quarks, leptons and the big bang (2nd ed.). CRC Press. pp. 224 ff. ISBN 0-7503-0806-0. "The interaction will last for a certain duration Δt. This implies that the amplitude for the total energy involved in the interaction is spread over a range of energies ΔE."
- This "borrowing" idea has led to proposals for using the zero-point energy of vacuum as an infinite reservoir and a variety of "camps" about this interpretation. See, for example, Moray B. King (2001). Quest for zero point energy: engineering principles for 'free energy' inventions. Adventures Unlimited Press. pp. 124 ff. ISBN 0-932813-94-1.
- Quantities satisfying a canonical commutation rule are said to be noncompatible observables, by which is meant that they can both be measured simultaneously only with limited precision. See Kiyosi Itô (1993). "§ 351 (XX.23) C: Canonical commutation relations". Encyclopedic dictionary of mathematics (2nd ed.). MIT Press. p. 1303. ISBN 0-262-59020-4.
- Paul Busch, Marian Grabowski, Pekka J. Lahti (1995). "§III.4: Energy and time". Operational quantum physics. Springer. pp. 77 ff. ISBN 3-540-59358-6.
- For a review, see Paul Busch (2008). "Chapter 3: The Time–Energy Uncertainty Relation". In J.G. Muga, R. Sala Mayato and Í.L. Egusquiza, editors. Time in Quantum Mechanics (2nd ed.). Springer. pp. 73 ff. ISBN 3-540-73472-4.
- Fowler, R., Guggenheim, E.A. (1965). Statistical Thermodynamics. A Version of Statistical Mechanics for Students of Physics and Chemistry, reprinted with corrections, Cambridge University Press, London, page 224.
- Partington, J.R. (1949). An Advanced Treatise on Physical Chemistry, volume 1, Fundamental Principles. The Properties of Gases, Longmans, Green and Co., London, page 220.
- Wilks, J. (1971). The Third Law of Thermodynamics, Chapter 6 in Thermodynamics, volume 1, ed. W. Jost, of H. Eyring, D. Henderson, W. Jost, Physical Chemistry. An Advanced Treatise, Academic Press, New York, page 477.
- Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics, New York, ISBN 0-88318-797-3, page 342.
- Jauch, J.M., Rohrlich, F. (1955/1980). The Theory of Photons and Electrons. The Relativistic Quantum Field Theory of Charged Particles with Spin One-half, second expanded edition, Springer-Verlag, New York, ISBN 0-387-07295-0, pages 287–288.
- Milonni, P.W. (1994). The Quantum Vacuum. An Introduction to Quantum Electrodynamics, Academic Press, Inc., Boston, ISBN 0-12-498080-5, page xv.
- Milonni, P.W. (1994). The Quantum Vacuum. An Introduction to Quantum Electrodynamics, Academic Press, Inc., Boston, ISBN 0-12-498080-5, page 239.
- Schwinger, J., DeRaad, L.L., Milton, K.A. (1978). Casimir effect in dielectrics, Annals of Physics, 115: 1–23.
- Milonni, P.W. (1994). The Quantum Vacuum. An Introduction to Quantum Electrodynamics, Academic Press, Inc., Boston, ISBN 0-12-498080-5, page 418.
- Jaffe, R.L. (2005). Casimir effect and the quantum vacuum, Phys. Rev. D 72: 021301(R), http://1-5.cua.mit.edu/8.422_s07/jaffe2005_casimir.pdf
Further reading
- Free pdf copy of The Structured Vacuum - thinking about nothing by Johann Rafelski and Berndt Muller (1985) ISBN 3-87144-889-3.
- M.E. Peskin and D.V. Schroeder, An introduction to Quantum Field Theory.
- H. Genz, Nothingness: The Science of Empty Space
- Engineering the Zero-Point Field and Polarizable Vacuum for Interstellar Flight
- E. W. Davis, V. L. Teofilo, B. Haisch, H. E. Puthoff, L. J. Nickisch, A. Rueda and D. C. Cole (2006) "Review of Experimental Concepts for Studying the Quantum Vacuum Field" | <urn:uuid:b041aa3d-13d3-4cc3-b22e-63374d63dc70> | 3.3125 | 3,739 | Knowledge Article | Science & Tech. | 55.702686 | 147 |
Hinsley, S.A., Hill, R.A., Bellamy, P. E., Broughton, R.K., Harrison, N.M., MacKenzie, J.A., Speakman, J.R. and Ferns, P.N., 2009. Do Highly Modified Landscapes Favour Generalists at the Expense of Specialists? The Example of Woodland Birds. Landscape Research, 34 (5), pp. 509-526.
Demands on land use in heavily populated landscapes create mosaic structures where semi-natural habitat patches are generally small and dominated by edges. Small patches are also more exposed and thus more vulnerable to adverse weather and potential effects of climate change. These conditions may be less problematic for generalist species than for specialists. Using insectivorous woodland birds (great tits and blue tits) as an example, we demonstrate that even generalists suffer reduced breeding success (in particular, rearing fewer and poorer-quality young) and increased parental costs (daily energy expenditure) when living in such highly modified secondary habitats (small woods, parks, farmland). Within-habitat heterogeneity (using the example of Monks Wood NNR) is generally associated with greater species diversity, but to benefit from heterogeneity at a landscape scale may require both high mobility and the ability to thrive in small habitat patches. Modern landscapes, dominated by small, modified and scattered habitat patches, may fail to provide specialists, especially sedentary ones, with access to sufficient quantity and quality of resources, while simultaneously increasing the potential for competition from generalists.
Subjects: Geography and Environmental Studies; Science > Biology and Botany
Group: School of Applied Sciences > Centre for Conservation, Ecology and Environmental Change
Deposited By: Dr Ross Hill
Deposited On: 01 Nov 2009 12:25
Last Modified: 07 Mar 2013 15:17
Available Versions of this Item
- Do Highly Modified Landscapes Favour Generalists at the Expense of Specialists? The Example of Woodland Birds. (deposited 21 Nov 2008 20:00)
- Do Highly Modified Landscapes Favour Generalists at the Expense of Specialists? The Example of Woodland Birds. (deposited 01 Nov 2009 12:25)
| <urn:uuid:73270d67-6a72-45f1-ba47-59fcfe240afe> | 2.5625 | 535 | Academic Writing | Science & Tech. | 39.70733 | 148 |
Monday, April 2, 2012 - 15:31 in Earth & Climate
Corals may be better placed to cope with the gradual acidification of the world's oceans than previously thought – giving rise to hopes that coral reefs might escape climatic devastation.
- Corals 'could survive a more acidic ocean' (Mon, 2 Apr 2012, 10:11:13 EDT)
- Studies shed light on collapse of coral reefs (Thu, 28 May 2009, 14:26:24 EDT)
- Acid oceans demand greater reef care (Mon, 14 Feb 2011, 10:03:09 EST)
- Rising CO2 'will hit coral reefs harder' (Tue, 28 Oct 2008, 10:44:19 EDT)
- New ocean acidification study shows added danger to already struggling coral reefs (Mon, 8 Nov 2010, 15:52:09 EST) | <urn:uuid:4db93cc5-ea5d-424c-8382-3c6ac6cf74d6> | 3.03125 | 167 | Content Listing | Science & Tech. | -19.572241 | 149 |
Gold has been known since prehistory. The symbol is derived from Latin aurum (gold).
Ionization energies: Au I 9.2 eV, Au II 20.5 eV, Au III 30.0 eV.
Absorption lines of Au I
In the sun, the equivalent width of the Au I 3122(1) line is 0.005.
Behavior in non-normal stars
The probable detection of Au I was announced by Jaschek and Malaroda (1970) in one Ap star of the Cr-Eu-Sr subgroup. Fuhrmann (1989) detected Au through the ultimate line of Au II at 1740(2) in several Bp stars of the Si and Ap stars of the Cr-Eu-Sr subgroups. The presence of Au seems to be associated with that of platinum and mercury.
Au has one stable isotope, Au197, and 20 short-lived isotopes and isomers.
Au can only be produced by the r process.
Published in "The Behavior of Chemical Elements in Stars", Carlos Jaschek and Mercedes Jaschek, 1995, Cambridge University Press. | <urn:uuid:8d506fc6-f879-413c-9824-20930fe8e0a0> | 3.75 | 240 | Structured Data | Science & Tech. | 77.703306 | 150 |
Adult survival rates of Shag (Phalacrocorax aristotelis), Common Guillemot (Uria aalge), Razorbill (Alca torda), Puffin (Fratercula arctica) and Kittiwake (Rissa tridactyla) on the Isle of May 1986-96
Harris, M. P.; Wanless, S.; Rothery, P. 2000 Adult survival rates of Shag (Phalacrocorax aristotelis), Common Guillemot (Uria aalge), Razorbill (Alca torda), Puffin (Fratercula arctica) and Kittiwake (Rissa tridactyla) on the Isle of May 1986-96. Atlantic Seabirds, 2. 133-150.
On the Isle of May between 1986 and 1996, the average adult survival of Shags Phalacrocorax aristotelis was 82.1%, Common Guillemots Uria aalge 95.2%, Razorbills Alca torda 90.5%, Puffins Fratercula arctica 91.6% and Kittiwakes Rissa tridactyla 88.2%. Shags, Razorbills and Puffins all had a single year of exceptionally low survival but these years did not coincide. In contrast, Kittiwake survival declined significantly over the period and there was evidence that substantial non-breeding occurred in several years. Breeding success of Kittiwakes also declined, which gives rise to concern for its future status. Given a high enough level of resighting, return rates (the proportion of birds known to be alive one year that were seen the next year) on a year-by-year basis provide a reasonable indication of relative changes in adult survival.
Programmes: CEH Programmes pre-2009 publications > Other
CEH Sections: Biodiversity & Population Processes
Additional Keywords: Shag, Phalacrocorax aristotelis, Common Guillemot, Uria aalge, Razorbill, Alca torda, Puffin, Fratercula arctica, Kittiwake, Rissa tridactyla
NORA Subject Terms: Zoology
Date made live: 08 Dec 2008 21:30
| <urn:uuid:c2223b59-5dd0-474f-acd5-a52f82c794e8> | 2.765625 | 516 | Academic Writing | Science & Tech. | 31.554757 | 151 |
These two group activities use mathematical reasoning - one is numerical, one geometric.
EWWNP means Exploring Wild and Wonderful Number Patterns Created by Yourself! Investigate what happens if we create number patterns using some simple rules.
Place this "worm" on the 100 square and find the total of the four squares it covers. Keeping its head in the same place, what other totals can you make?
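The worm's shape comes from a figure that is not reproduced here, so as a sketch assume the worm is any connected, non-self-crossing path of four squares (head plus three segments) on the usual 10 by 10 grid numbered 1 to 100. The head position below is an arbitrary illustrative choice; the enumeration then lists every total such a worm can cover.

```python
# Sketch: enumerate 4-square "worms" (self-avoiding paths) with a fixed
# head on a 10x10 grid numbered 1..100, and collect the covered totals.
# The worm's exact shape is an assumption; the original activity refers
# to a figure that is not reproduced here.

def cell_number(row, col):
    """Number printed on the cell at (row, col), rows and cols 0..9."""
    return row * 10 + col + 1

def worm_totals(head=(4, 4), length=4):
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    totals = set()

    def extend(path):
        if len(path) == length:
            totals.add(sum(cell_number(r, c) for r, c in path))
            return
        r, c = path[-1]
        for dr, dc in steps:
            nr, nc = r + dr, c + dc
            if 0 <= nr < 10 and 0 <= nc < 10 and (nr, nc) not in path:
                extend(path + [(nr, nc)])

    extend([head])
    return sorted(totals)

# Every total a worm with its head fixed on square 45 can cover.
print(worm_totals())
```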
| <urn:uuid:c570ce75-26da-4b5b-900f-83f7a3e743de> | 3.40625 | 89 | Tutorial | Science & Tech. | 46.566154 | 152 |
Mutualism is very common: the classic example is the relationship between pollinators and their plants. Around 70% of land plants require other species to help them reproduce via pollination. Often, the pollinators, like bees and wasps, gain food from the plant while the plant benefits by getting to mix its genes with other plants - a clear win-win for both. But both have to give up something, too, and whenever there is a cost to a relationship, both sides have good reason to cheat.
When I say cheat, I mean a species not keeping up its half of the deal. A species would gain something if it could maintain the positive benefits provided by another species without having to expend whatever cost is associated with its side of the mutualistic bargain. A plant would benefit, for example, if it could attract its pollinators without having to make nectar or pretty flowers.
So how is mutualism maintained when there is strong evolutionary pressure to cheat? In some cases, it's by nature of the relationship. In the example above, it's simply hard for the plant to cheat because skimping on the goods directly affects how the other side acts - no nectar-laden flowers, no reason for a bee or other bug to stop and get covered in pollen.
But some mutualist relationships are easier to cheat on - take the case of fig wasps.
Fig wasps are wasps that lay their eggs in fig flowers. As these flowers turn into fruits, the wasp larvae are protected and fed by the fig, costing the tree resources. This relationship looks parasitic at first glance: the wasp gets healthy babies while the fig gets its fruit ruined. But the wasp has a promise it must keep to the tree: when it lays its eggs, it has to pollinate flowers so the tree can produce seeds.
There are actually two kinds of fig wasps: one that pollinates passively and one that pollinates actively. The passive pollinators collect pollen on their extremities and, while climbing around to deposit eggs, pollinate the trees' flowers without even thinking about it. Passively pollinating wasps do not expend extra energy to pollinate, and they cannot easily avoid carrying pollen, so there's no real way or reason for them to cheat.
David Attenborough explains their relationship rather nicely:
The active pollinators are much more deliberate about things: female wasps specifically collect pollen in specialized pouches and deposit it on another tree's flowers by choice when they lay their eggs. Active pollinators don't have to pollinate, per se - they can, and sometimes do, flit around without collecting pollen to bring to another tree. After all, it costs the wasp time and energy to go about collecting and lugging around pollen, so why bother if they don't have to? Instead, the female wasps just infect flowers with wasp eggs, acting more like a parasite than a mutualist.
Clearly, there's an easy, good reason for the wasp to short-change the tree. But, if there's good reason for the wasps to cheat, there is equally good reason for the trees to catch them, evolutionarily speaking. Having a cheating wasp's young growing in its fruit does the tree no good whatsoever. But can the trees spot cheaters and somehow punish them for it?
That's the question that biologists K. Charlotte Jandér and Edward Allen Herre wanted to answer. To find out, they carefully watched six different species of figs, four that had active pollinating wasps and two that had passive pollinating wasps. They wanted to see if the actively-pollinated trees somehow reacted differently to loyal wasps who pollinated like they're supposed to and cheaters. Since it's hard to tell if a wasp is doing its job, the researchers instead intentionally manipulated the wasps. For each fig tree–pollinator species-pair, they experimentally produced pollen-carrying and artificially pollen-free wasps, which, because they had no pollen, played the role of cheaters. They then waited to see how well the cheaters' larvae survived.
They found that the passively pollinated figs had no system in place to protect against cheaters - which is exactly what you'd expect, since it's basically impossible for a passive-pollinating wasp to get around on the flowers without pollinating, meaning that cheating is not likely.
The actively pollinated figs, on the other hand, all punished cheaters.
First off, the figs carrying cheater offspring were aborted more frequently. When a fig aborts a larvae-containing fruit, it kills all of the larvae inside. One active species only kept around 3% of the number of figs that the passive pollinated species did. But to punish them even more, the fig also manipulated the conditions within the growing fruits which contained cheating larvae - per fruit, fewer cheater adults emerged than non-cheating ones. In one species of fig, almost no cheaters survived to adulthood - just 5% of the number that emerged from passively pollinated figs. How exactly the fig changes the condition of the fruit to harm the growing larvae isn't yet known.
This made the scientists wonder how common cheaters were in the wild, and whether the species that strongly reacted to cheating were plagued by more cheaters. As expected, they didn't find any pollen-free passive pollinating wasps, but they did find active pollinating ones that weren't carrying the goods. They also found that the species that cheated the most lived on the fig tree that punished them the least.
These data strongly support consistent coevolution between the fig wasps and their trees. If the tree doesn't catch cheaters, the wasps exploit their longtime friends, and since cheating isn't punished, cheating young grow up and continue cheating, leading to high frequencies of cheaters. This rapidly degrades their relationship from mutualism to parasite-host. However, if the trees respond by culling free-riders, they reduce the number of wasps inclined to cheat and maintain the true mutualism that the two have had for around 80 million years.
Mutualism is often portrayed as "playing nice", a beautiful harmony between species. Just listen to how the relationship between active pollinating fig wasps and their trees is portrayed in this PBS special:
How sweet. Too bad it's totally not true. Just like the arms races between predator and prey or parasite and host, mutualist species constantly adapt to try get the upper hand in their relationship. There is still a battle going on between even the best of friends to gain an evolutionary advantage, and just like other interactions, mutualists have to constantly evolve to maintain the status quo.
Jander, K., & Herre, E. (2010). Host sanctions and pollinator cheating in the fig tree-fig wasp mutualism. Proceedings of the Royal Society B: Biological Sciences. DOI: 10.1098/rspb.2009.2157 | <urn:uuid:d00124ba-8fe4-4e20-95ce-f046f7843fba> | 3.5 | 1,448 | Personal Blog | Science & Tech. | 50.696879 | 153 |
An important discovery has been made with respect to the mystery of “handedness” in biomolecules. Researchers led by Sandra Pizzarello, a research professor at Arizona State University, found that some of the possible abiotic precursors to the origin of life on Earth have been shown to carry “handedness” in a larger number than previously thought.
The work is being published in this week’s Early Edition of the Proceedings of the National Academy of Sciences. The paper is titled, “Molecular asymmetry in extraterrestrial chemistry: Insights from a pristine meteorite,” and is co-authored by Pizzarello and Yongsong Huang and Marcelo Alexandre, of Brown University.
Pizzarello, in ASU’s Department of Chemistry and Biochemistry, worked with Huang and Alexandre in studying the organic materials of a special group of meteorites that contain among a variety of compounds, amino acids that have identical counterparts in terrestrial biomolecules. These meteorites are fragments of asteroids that are about the same age as the solar system (roughly 4.5 billion years.)
Scientists have long known that most compounds in living things exist in mirror-image forms. The two forms are like hands; one is a mirror reflection of the other. They are different, cannot be superimposed, yet identical in their parts.
When scientists synthesize these molecules in the laboratory, half of a sample turns out to be “left-handed” and the other half “right-handed.” But amino acids, which are the building blocks of terrestrial proteins, are all “left-handed,” while the sugars of DNA and RNA are “right-handed.” The mystery as to why this is the case, “parallels in many of its queries those that surround the origin of life,” said Pizzarello.
Years ago, Pizzarello and ASU professor emeritus John Cronin analyzed amino acids from the Murchison meteorite (which landed in Australia in 1969) that were unknown on Earth, thereby ruling out any terrestrial contamination. They discovered a preponderance of "left-handed" amino acids over their "right-handed" form.
“The findings of Cronin and Pizzarello are probably the first demonstration that there may be natural processes in the cosmos that generate a preferred amino acid handedness,” Jeffrey Bada of the Scripps Institution of Oceanography, La Jolla, Calif., said at the time.
The new PNAS work was made possible by the finding in Antarctica of an exceptionally pristine meteorite. Antarctic ices are good “curators” of meteorites. After a meteorite falls -- and meteorites have been falling throughout the history of Earth -- it is quickly covered by snow and buried in the ice. Because these ices are in constant motion, when they come to a mountain, they will flow over the hill and bring meteorites to the surface.
“Thanks to the pristine nature of this meteorite, we were able to demonstrate that other extraterrestrial amino acids carry the left-handed excesses in meteorites and, above all, that these excesses appear to signify that their precursor molecules, the aldehydes, also carried such excesses,” Pizzarello said. “In other words, a molecular trait that defines life seems to have broader distribution as well as a long cosmic lineage.”
“This study may provide an important clue to the origin of molecular asymmetry,” added Brown associate professor and co-author Huang.
Source: Arizona State University
| <urn:uuid:7facf7f8-f73a-45cd-8b97-c036384aa446> | 3.625 | 783 | News Article | Science & Tech. | 24.587367 | 154 |
A schematic of a blind quantum computer that could protect user's privacy.
Image credit: Phillip Walther et al./Vienna University.
Researchers worry that if quantum computers are realized in the next few years, only a few specialized facilities will be able to host them. This may leave users' privacy vulnerable. To combat this worry, scientists have proposed a "blind" quantum computer that uses polarization-entangled photonic qubits. | <urn:uuid:eeacf744-6601-455a-9e61-923665b3bff7> | 3.1875 | 88 | Knowledge Article | Science & Tech. | 34.84049 | 155 |
Scientific Investigations Report 2005-5232
The carbonate-rock aquifer of the Great Basin is named for the thick sequence of Paleozoic limestone and dolomite with lesser amounts of shale, sandstone, and quartzite. It lies primarily in the eastern half of the Great Basin and includes areas of eastern Nevada and western Utah as well as the Death Valley area of California and small parts of Arizona and Idaho. The carbonate-rock aquifer is contained within the Basin and Range Principal Aquifer, one of 16 principal aquifers selected for study by the U.S. Geological Survey’s National Water- Quality Assessment Program.
Water samples from 30 ground-water sites (20 in Nevada and 10 in Utah) were collected in the summer of 2003 and analyzed for major anions and cations, nutrients, trace elements, dissolved organic carbon, volatile organic compounds (VOCs), pesticides, radon, and microbiology. Water samples from selected sites also were analyzed for the isotopes oxygen-18, deuterium, and tritium to determine recharge sources and the occurrence of water recharged since the early 1950s.
Primary drinking-water standards were exceeded for several inorganic constituents in 30 water samples from the carbonate-rock aquifer. The maximum contaminant level was exceeded for concentrations of dissolved antimony (6 μg/L) in one sample, arsenic (10 μg/L) in eleven samples, and thallium (2 μg/L) in one sample. Secondary drinking-water regulations were exceeded for several inorganic constituents in water samples: chloride (250 mg/L) in five samples, fluoride (2 mg/L) in two samples, iron (0.3 mg/L) in four samples, manganese (0.05 mg/L) in one sample, sulfate (250 mg/L) in three samples, and total dissolved solids (500 mg/L) in seven samples.
Six different pesticides or metabolites were detected at very low concentrations in the 30 water samples. The lack of VOC detections in water sampled from most of the sites is evidence that VOCs are not common in the carbonate-rock aquifer. Arsenic values for water range from 0.7 to 45.7 μg/L, with a median value of 9.6 μg/L. Factors affecting arsenic concentration in the carbonate-rock aquifer, in addition to geothermal heating, are its natural occurrence in the aquifer material and the time of travel along the flow path.
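A minimal sketch of the kind of screening summarized above, comparing measured concentrations against the thresholds quoted in this report. The threshold values are those cited in the text; the sample dictionary is a hypothetical placeholder (its arsenic entry reuses the maximum value quoted above), not data from the report.

```python
# Sketch: flag water samples that exceed drinking-water thresholds.
# Thresholds are the ones quoted in the text; sample data are hypothetical.

MCL = {"antimony": 6.0, "arsenic": 10.0, "thallium": 2.0}                # ug/L
SMCL = {"chloride": 250.0, "fluoride": 2.0, "iron": 0.3,
        "manganese": 0.05, "sulfate": 250.0, "dissolved_solids": 500.0}  # mg/L

def exceedances(sample, limits):
    """Return the constituents in `sample` whose values exceed `limits`."""
    return {name: value for name, value in sample.items()
            if name in limits and value > limits[name]}

sample = {"arsenic": 45.7, "antimony": 1.2, "chloride": 310.0}
print(exceedances(sample, MCL))    # {'arsenic': 45.7}
print(exceedances(sample, SMCL))   # {'chloride': 310.0}
```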
Most of the chemical analyses, especially for VOCs and nutrients, indicate little, if any, effect of overlying land-use patterns on ground-water quality. The water quality in recharge areas for the aquifer where human activities are more intense may be affected by urban and/or agricultural land uses as evidenced by pesticide detections. The proximity of the carbonate-rock aquifer at these sites to the land surface and the potential for local recharge to occur through the fractured rock likely results in the occurrence of these and other land-surface related contaminants in the ground water. Water from sites sampled near outcrops of carbonate-rock aquifer likely has a much shorter residence time resulting in a potential for detection of anthropogenic or land-surface related compounds. Sites located in discharge areas of the flow systems or wells that are completed at a great depth below the land surface generally show no effects of land-use activities on water quality. Flow times within the carbonate-rock aquifer, away from recharge areas, are on the order of thousands of years, so any contaminants introduced at the land surface that will not degrade along the flow path have not reached the sampled sites in these areas.
First posted February, 2006
Schaefer, D.H., Thiros, S.A., and Rosen, M.R., 2005, Ground-water quality in the carbonate-rock aquifer of the Great Basin, Nevada and Utah, 2003: U.S. Geological Survey Scientific Investigations Report 2005-5232, 41 p.
Description of Study Area
Study Design and Methods
Appendix 1. Water-quality constituents analyzed in ground-water samples from wells and springs in the carbonate-rock aquifer, Nevada and Utah | <urn:uuid:71d96a69-5ff9-445d-8096-085c505ac74e> | 3.234375 | 916 | Academic Writing | Science & Tech. | 39.976144 | 156 |
I’ve been looking for a good, easy to read document outlining the latest climate science research and putting it in context for Copenhagen and I think I’ve found it.
Today in Sydney, the Climate Change Research Centre, a unit of the University of New South Wales, released The Copenhagen Diagnosis. It’s free to download or view online in a nice rich text format so credit to the centre for making it accessible in multiple attractive formats. But most praise has to be reserved for the 26 contributing authors who have laid out the science to make it easy to understand for a layman like myself. Chapters cover aspects of climate science including “the atmosphere”, “permafrost and hydrates” and “global sea level”.
Throughout are scattered common questions about climate change and answers designed to clear up confusion. An example: "Are we just in a natural warming phase, recovering from the 'little ice age'?"
The document, once pictures and the reference section are included, is a slim 50 pages. If you want something to get yourself up to speed on the science ahead of Copenhagen, this could well be the document to download. It's even better if you have a colleague willing to run across the road and get it bound for you, as I have!
The executive summary of the Copenhagen Diagnosis, which I've excerpted below, gives the basics you need to know if even 50 pages is too much to handle as we head into the highly stressful (for everyone other than academics) end-of-year period.
The diplomats and politicians soon to board flights to Denmark could do worse than slip a copy of The Copenhagen Diagnosis into their cabin luggage.
The most significant recent climate change findings are:
Surging greenhouse gas emissions: Global carbon dioxide emissions from fossil fuels in 2008 were nearly 40% higher than those in 1990. Even if global emission rates are stabilized at present-day levels, just 20 more years of emissions would give a 25% probability that warming exceeds 2°C, even with zero emissions after 2030. Every year of delayed action increases the chances of exceeding 2°C warming.
Recent global temperatures demonstrate human-induced warming: Over the past 25 years temperatures have increased at a rate of 0.19°C per decade, in very good agreement with predictions based on greenhouse gas increases. Even over the past ten years, despite a decrease in solar forcing, the trend continues to be one of warming. Natural, short-term fluctuations are occurring as usual, but there have been no significant changes in the underlying warming trend.
Acceleration of melting of ice-sheets, glaciers and ice-caps: A wide array of satellite and ice measurements now demonstrate beyond doubt that both the Greenland and Antarctic ice-sheets are losing mass at an increasing rate. Melting of glaciers and ice-caps in other parts of the world has also accelerated since 1990.
Rapid Arctic sea-ice decline: Summer-time melting of Arctic sea-ice has accelerated far beyond the expectations of climate models. The area of sea-ice melt during 2007-2009 was about 40% greater than the average prediction from IPCC AR4 climate models.
Current sea-level rise underestimated: Satellites show recent global average sea-level rise (3.4 mm/yr over the past 15 years) to be ~80% above past IPCC predictions. This acceleration in sea-level rise is consistent with a doubling in contribution from melting of glaciers, ice caps, and the Greenland and West-Antarctic ice-sheets.
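As a quick arithmetic check of the figures just quoted (a linear back-of-envelope only, not a calculation from the report): a rate around 80% above the past IPCC projections implies a projected baseline of roughly 1.9 mm/yr.

```python
# Arithmetic check of the sea-level numbers quoted above.

observed_rate = 3.4     # mm/yr, satellite record over the past 15 years
excess_fraction = 0.80  # "~80% above past IPCC predictions"

implied_ipcc_rate = observed_rate / (1 + excess_fraction)
print(f"implied IPCC baseline: {implied_ipcc_rate:.2f} mm/yr")  # ~1.89

# Naive linear extrapolation of the observed rate over a century
# (mm/yr * 100 yr, converted to cm); the report expects acceleration,
# so treat this as a lower bound.
print(f"100-year linear total: {observed_rate * 100 / 10:.0f} cm")  # 34 cm
```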
Sea-level predictions revised: By 2100, global sea-level is likely to rise at least twice as much as projected by Working Group 1 of the IPCC AR4; for unmitigated emissions it may well exceed 1 meter. The upper limit has been estimated as ~ 2 meters sea level rise by 2100. Sea level will continue to rise for centuries after global temperatures have been stabilized, and several meters of sea level rise must be expected over the next few centuries.
Delay in action risks irreversible damage: Several vulnerable elements in the climate system (e.g. continental ice-sheets, Amazon rainforest, West African monsoon and others) could be pushed towards abrupt or irreversible change if warming continues in a business-as-usual way throughout this century. The risk of transgressing critical thresholds ('tipping points') increases strongly with ongoing climate change. Thus waiting for higher levels of scientific certainty could mean that some tipping points will be crossed before they are recognized.
The turning point must come soon: If global warming is to be limited to a maximum of 2 °C above pre-industrial values, global emissions need to peak between 2015 and 2020 and then decline rapidly. To stabilize climate, a decarbonized global society — with near-zero emissions of CO2 and other long-lived greenhouse gases — needs to be reached well within this century. More specifically, the average annual per-capita emissions will have to shrink to well under 1 metric ton CO2 by 2050. This is 80-95% below the per-capita emissions in developed nations in 2000. | <urn:uuid:6de73326-296f-4b7a-b8ba-84761d55c25e> | 2.78125 | 1,051 | Personal Blog | Science & Tech. | 41.26094 | 157 |
Part of twisted.internet.protocol
Implements interfaces: twisted.internet.interfaces.IConsumer
Methods:
- write: The producer will write data by calling this method.
- registerProducer: Register to receive data from a producer.
- unregisterProducer: Stop consuming data from a producer, without disconnecting.

Inherited from Adapter:
- __init__: Set my 'original' attribute to be the object I am adapting.
- __conform__: I forward __conform__ to self.original if it has it, otherwise I simply return None.
- isuper: Forward isuper to self.original.
Register to receive data from a producer.
This sets self to be a consumer for a producer. When this object runs out of data (as when a send(2) call on a socket succeeds in moving the last data from a userspace buffer into a kernelspace buffer), it will ask the producer to resumeProducing().
For IPullProducer providers, resumeProducing will be called once each time data is required. For IPushProducer providers, pauseProducing will be called whenever the write buffer fills up and resumeProducing will only be called when it empties.
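A minimal, runnable sketch of the producer/consumer handshake documented here. The interfaces, method names, and registration pattern are standard Twisted; the two concrete classes are illustrative stand-ins, not part of this module.

```python
# Sketch of the IConsumer/IPushProducer handshake described above.
from zope.interface import implementer
from twisted.internet.interfaces import IConsumer, IPushProducer

@implementer(IConsumer)
class LoggingConsumer:
    def __init__(self):
        self.producer = None

    def registerProducer(self, producer, streaming):
        # streaming=True means an IPushProducer: the consumer applies
        # back-pressure via pauseProducing()/resumeProducing().
        self.producer = producer
        self.streaming = streaming

    def unregisterProducer(self):
        # Stop consuming data from the producer, without disconnecting.
        self.producer = None

    def write(self, data):
        print(f"received {len(data)} bytes")
        # A real consumer would call self.producer.pauseProducing()
        # here if its write buffer filled up, and resumeProducing()
        # later, exactly as the docstring above describes.

@implementer(IPushProducer)
class CountingProducer:
    def __init__(self, consumer):
        self.consumer = consumer
        consumer.registerProducer(self, streaming=True)

    def pauseProducing(self):
        print("paused")

    def resumeProducing(self):
        self.consumer.write(b"some data")

    def stopProducing(self):
        print("stopped")

consumer = LoggingConsumer()
producer = CountingProducer(consumer)
producer.resumeProducing()   # consumer receives b"some data"
```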
Parameters: producer (type: …) | <urn:uuid:d742678b-13bf-4525-862a-587e6eb21d1a> | 2.875 | 279 | Documentation | Software Dev. | 29.495357 | 158 |
Classifying Critical Points
So let's say we've got a critical point x₀ of a multivariable function f: ℝⁿ → ℝ. That is, a point where the differential df(x₀) vanishes. We want something like the second derivative test that might tell us more about the behavior of the function near that point, and to identify (some) local maxima and minima. We'll assume here that f is twice continuously differentiable in some region around x₀.
The analogue of the second derivative for multivariable functions is the second differential d²f. This function assigns to every point x a bilinear function of two displacement vectors u and v, and it measures the rate at which the directional derivative in the direction of u is changing as we move in the direction of v. That is,

d²f(x)(u, v) = [Dᵥ(Dᵤf)](x)

If we choose coordinates on ℝⁿ given by an orthonormal basis {e₁, …, eₙ}, we can write the second differential in terms of coordinates:

d²f(x)(u, v) = Σᵢⱼ ∂²f/∂xᵢ∂xⱼ(x) uᵢvⱼ

so that d²f(x) is represented by the matrix of second partial derivatives Hᵢⱼ = ∂²f/∂xᵢ∂xⱼ.
This matrix is often called the "Hessian" of f at the point x₀.
As I said above, this is a bilinear form. Further, Clairaut’s theorem tells us that it’s a symmetric form. Then the spectral theorem tells us that we can find an orthonormal basis with respect to which the Hessian is actually diagonal, and the diagonal entries are the eigenvalues of the matrix.
So let's go back and assume we're working with such a basis. This means that our second partial derivatives are particularly simple. We find that for i ≠ j we have

∂²f/∂xᵢ∂xⱼ = 0

and for i = j, the second partial derivative is an eigenvalue

∂²f/∂xᵢ² = λᵢ

which we can assume (without loss of generality) are nondecreasing. That is, λ₁ ≤ λ₂ ≤ … ≤ λₙ.
Now, if all of these eigenvalues are positive at a critical point x₀, then the Hessian is positive-definite. That is, given any direction u we have d²f(x₀)(u, u) > 0. On the other hand, if all of the eigenvalues are negative, the Hessian is negative-definite; given any direction u we have d²f(x₀)(u, u) < 0. In the former case, we'll find that f has a local minimum in a neighborhood of x₀, and in the latter case we'll find that f has a local maximum there. If some eigenvalues are negative and others are positive, then the function has a mixed behavior at x₀ we'll call a "saddle" (sketch the graph of f(x, y) = x² − y² near the origin to see why). And if any eigenvalues are zero, all sorts of weird things can happen, though at least if we can find one positive and one negative eigenvalue we know that the critical point can't be a local extremum.
We remember that the determinant of a diagonal matrix is the product of its eigenvalues, so if the determinant of the Hessian is nonzero then either we have a local maximum, we have a local minimum, or we have some form of well-behaved saddle. These behaviors we call "generic" critical points, since if we "wiggle" the function a bit (while maintaining a critical point at x₀) the Hessian determinant will stay nonzero. If the Hessian determinant is zero, wiggling the function a little will make it nonzero, and so this sort of critical point is not generic. This is the sort of unstable situation analogous to a failure of the second derivative test. Unfortunately, the analogy doesn't extend, in that the sign of the Hessian determinant isn't instantly meaningful. In two dimensions a positive determinant means both eigenvalues have the same sign — denoting a local maximum or a local minimum — while a negative determinant denotes eigenvalues of different signs — denoting a saddle. This much is included in multivariable calculus courses, although usually without a clear explanation why it works.
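To make the test above concrete, here is a small numerical sketch (numpy-based; the finite-difference Hessian, the tolerance, and the example function are illustrative choices, not part of the original exposition):

```python
# Sketch: classify a critical point by the eigenvalues of the Hessian.
import numpy as np

def hessian(f, x, h=1e-5):
    """Central-difference Hessian of f at x (assumes f is C^2)."""
    n = len(x)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.zeros(n), np.zeros(n)
            e_i[i], e_j[j] = h, h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h * h)
    return H

def classify(f, x, tol=1e-6):
    eig = np.linalg.eigvalsh(hessian(f, np.asarray(x, dtype=float)))
    if np.all(eig > tol):
        return "local minimum"
    if np.all(eig < -tol):
        return "local maximum"
    if np.any(eig > tol) and np.any(eig < -tol):
        return "saddle"
    return "degenerate: some eigenvalue is (numerically) zero"

# f(x, y) = x^2 - y^2 has a saddle at the origin.
print(classify(lambda p: p[0]**2 - p[1]**2, (0.0, 0.0)))  # saddle
```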
So, given a direction vector u so that d²f(x₀)(u, u) > 0, then since f is in C², there will be some neighborhood N of x₀ so that d²f(x)(u, u) > 0 for all x in N. In particular, there will be some range of t so that x₀ + tu is in N. For any such point we can use Taylor's theorem to tell us that

f(x₀ + tu) = f(x₀) + (t²/2) d²f(ξ)(u, u)

for some ξ = x₀ + θtu with 0 < θ < 1; the first-order term vanishes because x₀ is a critical point. And from this we see that f(x₀ + tu) > f(x₀) for every t so that x₀ + tu is in N. A similar argument shows that if d²f(x₀)(u, u) < 0 then f(x) < f(x₀) for any x near x₀ in the direction of u.
Now if the Hessian is positive-definite then every direction u from x₀ gives us d²f(x₀)(u, u) > 0, and so every point x near enough to x₀ satisfies f(x) > f(x₀). If the Hessian is negative-definite, then every point x near x₀ satisfies f(x) < f(x₀). And if the Hessian has both positive and negative eigenvalues then within any neighborhood of x₀ we can find some directions in which f(x) > f(x₀) and some in which f(x) < f(x₀). | <urn:uuid:1470b6e0-0c2a-416e-a3f3-01bb7910efed> | 2.6875 | 931 | Academic Writing | Science & Tech. | 42.500034 | 159 |
CAMBRIDGE, Mass. -- Following the 1997 creation of the first laser to emit pulsed beams of atoms, MIT researchers report in the May 16 online version of Science that they have now made a continuous source of coherent atoms. This work paves the way for a laser that emits a continuous stream of atoms.
MIT physicists led by physics professor Wolfgang Ketterle (who shared the 2001 Nobel prize in physics) created the first atom laser. A long-sought goal in physics, the atom laser emitted atoms, similar in concept to the way an optical laser emits light.
"I am amazed at the rapid progress in the field," Ketterle said. "A continuous source of Bose-Einstein condensate is just one of many recent advances."
Because the atom laser operates in an ultra-high vacuum, it may never be as ubiquitous as optical lasers. But, like its predecessor, the pulsed atom laser, a continuous-stream atom laser may someday be used for a variety of applications in fundamental physics.
It could be used to directly deposit atoms onto computer chips, and improve the precision and accuracy of atomic clocks and gyroscopes. It could aid in precision measurements of fundamental constants, atom optics and interferometry.
A continuous stream laser could do all of these things better than a pulsed atomic laser, said co-author Ananth P. Chikkatur, a physics graduate student at MIT. "Similar to the optical laser revolution, a continuous stream atom laser might be useful for more things than a pulsed laser," he said.
In addition to Ketterle and Chikkatur, authors include MIT graduate students Yong-Il Shin and Aaron E. Leanhardt; David F. Kielpinski, postdoctoral fellow in the MIT Research Laboratory of Electronics (RLE); physics senior Edem Tsikata; MIT affiliate Todd L. Gustavson; and David E. Pritchard, Cecil and Ida Green Professor of Physics and a member of the MIT-Harvard Center for Ultracold Atoms and the RLE.
A NEW FORM OF MATTER
An important step toward the first atom laser was the creation of a new form of matter - the Bose-Einstein condensate (BEC). BEC forms at temperatures around one millionth of a degree Kelvin, a million times colder than interstellar space.
Ketterle's group had developed novel cooling techniques that were key to the observation of BEC in 1995, first by a group at the University of Colorado at Boulder, then a few months later by Ketterle at MIT. It was for this achievement that researchers from both institutions were honored with the Nobel prize last year.
Ketterle and his research team managed to merge a bunch of atoms into what he calls a single matter-wave, and then used fluctuating magnetic fields to shape the matter-wave into a beam much like a laser.
To test the coherence of a BEC, the researchers generated two separate matter-waves, made them overlap and photographed a so-called "interference pattern" that only can be created by coherent waves. The researchers then had proof that they had created the first atom laser.
Since 1995, all atom lasers and BEC have been produced in a pulsed manner, emitting individual pulses of atoms several times per minute. Until now, little progress has been made toward a continuous BEC source.
While it took about six months to create a continuous optical laser after the first pulsed optical laser was produced in 1960, the much more technically challenging continuous source of coherent atoms has taken seven years since Ketterle and colleagues first observed BEC in 1995.
A NEW CHALLENGE
Creating a continuous BEC source involved three steps: building a chamber where the condensate could be stored in an optical trap, moving the fresh condensate and merging the new condensate with the existing condensate stored in the optical trap. (The same researchers first developed an optical trap for BECs in 1998.)
The researchers built an apparatus containing two vacuum chambers: a production chamber where the condensate is produced and a "science chamber" around 30 centimeters away, where the condensate is stored.
The condensate in the science chamber had to be protected from laser light, which was necessary to produce a fresh condensate, and also from hot atoms. This required great precision, because a single laser-cooled atom has enough energy to knock thousands of atoms out of the condensate. In addition, they used an optical trap as the reservoir trap, which is insensitive to the magnetic fields used for cooling atoms into a BEC.
The researchers also needed to figure out how to move the fresh condensate - chilled to astronomically low temperatures - from the production chamber to the science chamber without heating them up. This was accomplished using optical tweezers - a focused laser light beam that traps the condensate.
Finally, they moved the fresh condensate held in the optical tweezers into the science chamber and merged it with the condensate already stored there.
A BUCKET OF ATOMS
If the pulsed atom laser is like a faucet that drips, Chikkatur says the new innovations create a sort of bucket that collects the drips without wasting or changing the condensate too dramatically by heating it. This way, a reservoir of condensate is always on hand to replenish an atom laser.
The condensate pulses are like a dripping faucet, where the drops are analogous to the pulsed BEC production. "We have now implemented a bucket (our reservoir trap), where we collect these drips to have continuous source of water (BEC)," Chikkatur said. "Although we did not demonstrate this, if we poke a hole in this bucket, we will have a steady stream of water. This hole would be an outcoupling technique from which we can produce a continuous atom laser output.
"The big achievement here is that we have invented the bucket, which can store atoms continuously and also makes sure that the drips of water do not cause a lot of splashing (heating of BECs)," he said.
The next step would be to improve the number of atoms in the source, perhaps by implementing a large-volume optical trap. Another important step would be to demonstrate a phase-coherent condensate merger using a matter wave amplification technique pioneered by the MIT group and a group in Japan, he said.
This work is funded by the National Science Foundation, the Office of Naval Research, the Army Research Office, the Packard Foundation and NASA. | <urn:uuid:00cd54cf-be16-4b4b-8800-7d5342159b7a> | 3.515625 | 1,407 | News Article | Science & Tech. | 38.22077 | 160 |
First Detailed Look at RNA Dicer
Scientists have gotten their first detailed look at the molecular structure of an enzyme that Nature has been using for eons to help silence unwanted genetic messages. A team of researchers with Berkeley Lab and the University of California, Berkeley, used x-ray crystallography at ALS Beamlines 8.2.1 and 8.2.2 to determine the crystal structure of Dicer, an enzyme that plays a critical role in a process known as RNA interference. The Dicer enzyme is able to snip a double-stranded form of RNA into segments that can attach themselves to genes and block their activity. With this crystal structure, the researchers learned that Dicer serves as a molecular ruler, with a clamp at one end and a cleaver at the other end a set distance away, that produces RNA fragments of an ideal size for gene-silencing.
RNA—ribonucleic acid—has long been known as a multipurpose biological workhorse, responsible for carrying DNA's genetic messages out from the nucleus of a living cell and using those messages to make specific proteins in a cell's cytoplasm. In 1998, however, scientists discovered that RNA can also block the synthesis of proteins from some of those genetic messages. This gene-silencing process is called RNA interference and it starts when a double-stranded segment of RNA (dsRNA) encounters the enzyme Dicer.
Dicer cleaves dsRNA into smaller fragments called short interfering RNAs (siRNAs) and microRNAs (miRNAs). Dicer then helps load these fragments into a large multiprotein complex called RISC, for RNA-Induced Silencing Complex. RISC can seek out and capture messenger RNA (mRNA) molecules (the RNA that encodes the message of a gene) with a base sequence complementary to that of its siRNA or miRNA. This serves to either destroy the genetic message carried by the mRNA outright or else block the subsequent synthesis of a protein.
Until now, it has not been known how Dicer is able to recognize dsRNA and cleave those molecules into products with lengths that are exactly what is needed to silence specific genes. The Berkeley researchers were able to purify and crystallize a Dicer enzyme from Giardia intestinalis, a one-celled microscopic parasite that can infect the intestines of humans and animals. This Dicer enzyme in Giardia is identical to the core of a Dicer enzyme in higher eukaryotes, including humans, that cleaves dsRNA into lengths of about 25 bases.
In this work, the researchers describe a front view of the structure as looking like an axe. On the handle end there is a domain that is known to bind to small RNA products, and on the blade end there is a domain that is able to cleave RNA. Between the clamp and the cleaver is a flat-surfaced region that carries a positive electrical charge. The researchers propose that this flat region binds to the negatively charged dsRNA like biological Velcro, enabling Dicer to measure out and snip specified lengths of siRNA. When you put the clamp, the flat area, and the cleaver together, you get a pretty good idea as to how Dicer works. The research team is now using this structural model to design experiments that might reveal what triggers Dicer into action.
In addition, one size does not fit all for Dicer: different forms of the Dicer enzyme are known to produce different lengths of siRNA, ranging from 21 to 30 base pairs in length or longer. Having identified the flat-surfaced positively charged region in Dicer as the "ruler" portion of the enzyme, the researchers speculate that it may be possible to alter the length of a long connector helix within this domain to change the lengths of the resulting siRNA products. The researchers would like to see what happens when you take a natural Dicer and change the length of its helix.
Research conducted by I.J. MacRae and K. Zhou (University of California, Berkeley, and Howard Hughes Medical Institute); F. Li, A. Repic, A.N. Brooks, and W.Z. Cande (University of California, Berkeley); P.D. Adams (Berkeley Lab); and J.A. Doudna (University of California, Berkeley, Howard Hughes Medical Institute, and Berkeley Lab).
Research funding: National Institutes of Health. Operation of the ALS is supported by the U.S. Department of Energy, Office of Basic Energy Sciences.
Publication about this research: I.J. MacRae, K. Zhou, F. Li, A. Repic, A.N. Brooks, W.Z. Cande, P.D. Adams, and J.A. Doudna, "Structural basis for double-stranded RNA processing by dicer," Science 311, 195 (2006). | <urn:uuid:b56c8760-12eb-4ec7-be08-1f531a0e88a9> | 3.828125 | 1,014 | Knowledge Article | Science & Tech. | 52.162377 | 161 |
Coming soon! Nanotech on your desktop
Within 15 years, desktop nanofactories could pump out anything from a new car to a novel nanoweapon, says a technology commentator.
And he warns that society needs to start preparing for this brave new world.
Mike Treder from the Center for Responsible Nanotechnology (CRN) in New York says advanced nanotechnology, like these nanofactories, could help solve world poverty but it could also wreak economic and social chaos.
"It's the biggest challenge we've ever faced as a species," says Treder, who has been addressing scientists in Australia this week.
CRN is a non-profit organisation advised among others by the so-called father of nanotechnology, Dr Eric Drexler.
The organisation says it aims to raise awareness about the benefits and dangers of molecular manufacturing, the precise assembly of products atom-by-atom.
While molecular manufacturing is not yet a reality, Treder says researchers are already working on building molecular-scale machines that could eventually move atoms around to make products.
And he says that in less than 15 years, nanoscale factories could be making consumer products from cups and chairs to cars and house bricks.
Raw materials like carbon would be pumped into the nanofactory, where atoms would be rearranged to make products according to programs downloaded from the internet, says Treder.
Treder says such desktop nanofactories could help reduce poverty and starvation in developing nations, and provide tremendous medical benefits. But society needs to guard against its potential risks.
In particular, he says CRN is concerned that these desktop nanofactories would lead to a nano "arms race" in which hard-to-detect nanoweapons could be designed, manufactured and tested much quicker than they are today.
"Imagine a suitcase filled with billions of toxin-carrying flying robots that could be released anywhere to target a population," he says.
"You could make a suitcase full of these things overnight for a few dollars."
The mass production of consumer goods by private desktop factories could also trigger social chaos due to economic disruption, says Treder.
"If I can make my own car at home for a couple of hundred dollars with a design downloaded from the internet that means I'm not a customer of the auto dealer down the road."
Waste from such easy manufacturing, or nanolitter, is another issue that needs to be thought about, he says. As is the prospect of nanospam.
"If someone could send you a product online that you don't want but they just make it pump out of your nanofactory, how are we going to prevent that?"
Experts are generally sceptical that desktop factories could exist so soon but welcome Treder's discussion of impacts of nanotechnology on society.
Dr Peter Binks of Nanotechnology Victoria, a sponsor for Treder's tour, says his organisation does not "yet buy into the idea" of the desktop factory.
"But we don't dismiss it either," he says. "We think there are a large number of technical hurdles to be overcome."
William Price, professor of nanotechnology at the University of Western Sydney says desktop factories may be possible but technical issues will mean this will not be within 15 years.
Professor Chennupati Jagadish of the Australian Research Council Nanotechnology Network, which is also a sponsor for the tour, thinks Treder's views are imaginative and futuristic.
"Expecting those sorts of machines in 15 years is probably too optimistic," he says, estimating they would be more like 30 or 40 years away, if at all.
And it's this challenge that makes Professor Ned Seeman, of New York University, who is involved in self-assembling arrays of DNA machines, sceptical of Treder's claims.
"I think this suggestion is wildly optimistic," he says. "Most of the basic principles have not been demonstrated, much less in a 'desktop' context."
But even he is not willing to rule the technology out completely.
"One hundred years from now anything is possible." | <urn:uuid:8b29f1b2-f3ec-4c61-a471-ce6a42fd2e31> | 2.625 | 853 | News Article | Science & Tech. | 39.911951 | 162 |
Science Fair Project Encyclopedia
Cryonics is the practice of preserving organisms, or at least their brains, for possible future revival by storing them at cryogenic temperatures where metabolism and decay are almost completely stopped.
An organism held in such a state (either frozen or vitrified) is said to be cryopreserved. Barring social disruptions, cryonicists believe that a perfectly vitrified person can be expected to remain physically viable for at least 30,000 years, after which time cosmic ray damage is thought to be irreparable. Many scientists in the field, most notably Ralph Merkle and Brian Wowk, hold that molecular nanotechnology has the potential to extend even this limit many times over.
To its detractors, the justification for cryonics is unclear, given the primitive state of preservation technology. Advocates counter that even a slim chance of revival is better than no chance. In the future, they speculate, not only will conventional health services be improved, but they will also quite likely have expanded even to the conquering of old age itself (see links at the bottom). Therefore, if one could preserve one's body (or at least the contents of one's mind) for, say, another hundred years, one might well be resuscitated and live indefinitely long. But critics of the field contend that, while an interesting technical idea, cryonics is currently little more than a pipedream, that current "patients" will never be successfully revived, and that decades of research, at least, must occur before cryonics is to be a legitimate field with any hope of success.
Probably the most famous cryopreserved patient is Ted Williams. The popular urban legend that Walt Disney was cryopreserved is false; he was cremated, and interred at Forest Lawn Memorial Park Cemetery. Robert Heinlein, who wrote enthusiastically of the concept, was cremated and his ashes distributed over the Pacific Ocean. Timothy Leary was a long-time cryonics advocate, and signed up with a major cryonics provider. He changed his mind, however, shortly before his death, and so was not cryopreserved.
Obstacles to success
Damage from ice formation
Cryonics has traditionally been dismissed by mainstream cryobiology, of which it is arguably a part. The reason generally given for this dismissal is that the freezing process creates ice crystals, which damage cells and cellular structures—a condition sometimes called "whole body freezer burn"—so as to render any future repair impossible. Cryonicists have long argued, however, that the extent of this damage was greatly exaggerated by the critics, presuming that some reasonable attempt is made to perfuse the body with cryoprotectant chemicals (traditionally glycerol) that inhibit ice crystal formation.
According to cryonicists, however, the freezer burn objection became moot around the turn of the millennium, when cryobiologists Greg Fahy and Brian Wowk, of Twenty-First Century Medicine developed major improvements in cryopreservation technology, including new cryoprotectants and new cryoprotectant solutions, that greatly improved the feasibility of eliminating ice crystal formation entirely, allowing vitrification (preservation in a glassy rather than frozen state). In a glass, the molecules do not rearrange themselves into grainy ice crystals as the solution cools, but instead become locked together while still randomly arranged as in a fluid, forming a "solid liquid" as the temperature falls below the glass transition temperature. Alcor Life Extension Foundation, the world's largest cryonics provider, has since been using these cryoprotectants, along with a new, faster cooling method, to vitrify whole human brains. They continue to use the less effective glycerol-based freezing for patients who opt to have their whole bodies preserved, since vitrification of an entire body is beyond current technical capabilities. The only other full-service cryonics provider in the world, the Cryonics Institute, is currently testing its own vitrification solution.
Current solutions being used for vitrification are stable enough to avoid crystallization even when a vitrified brain is warmed up. This has recently allowed brains to be vitrified, warmed back up, and examined for ice damage using light and electron microscopy. No ice crystal damage was found. However, if the circulation of the brain is compromised, protective chemicals may not be able to reach all parts of the brain, and freezing may occur either during cooling or during warming. Cryonicists argue, however, that injury caused during cooling can be repaired before the vitrified brain is warmed back up, and that damage during rewarming can be prevented by adding more cryoprotectant in the solid state, or by improving rewarming methods.
Some critics have speculated that because a cryonics patient has been declared legally dead, their organs are dead, and thus unable to allow cryoprotectants to reach the majority of cells. Cryonicists respond that it has been empirically demonstrated that, so long as the cryopreservation process begins immediately after legal death is declared, the individual organs (and perhaps even the patient as a whole) remain biologically alive, and vitrification (particularly of the brain) is quite feasible.
Critics have often quipped that it is easier to revive a corpse than a cryonically frozen body. Many cryonicists might actually agree with this, provided that the "corpse" were fresh, but they would argue that such a "corpse" may actually be biologically alive, under optimal conditions. A declaration of legal death does not mean that life has suddenly ended—death is a gradual process, not a sudden event. Rather, legal death is a declaration by medical personnel that there is nothing more they can do to save the patient. But if the body is clearly biologically dead, having been sitting at room temperature for a period of time, or having been traditionally embalmed, then cryonicists would hold that such a body is far less revivable than a cryonically preserved patient, since any process of resuscitation will depend on the quality of the structural and molecular preservation of the brain, which is largely destroyed by ischemic damage (from lack of blood flow) within minutes or hours of cardiac arrest, if the body is left to sit at room temperature. Traditional embalming also largely destroys this crucial neurological structure.
Cryonicists would also point out that the definitions of "death" and "corpse" currently in use may change with future medical advances, just as they have changed in the past, and so they generally reject the idea that they are trying to "raise the dead", viewing their procedures instead as highly experimental medical procedures, whose efficacy is yet to be either demonstrated or refuted. Some also suggest that if technology is developed that allows mind transfer, revival of the frozen brain might not even be required; the mind of the patient could instead be "uploaded" into an entirely new substrate.
The biggest drawback to current vitrification practice is cost. Because the only really cost-effective means of storing a cryopreserved person is in liquid nitrogen, potentially large-scale fracturing of the brain occurs as a result of cooling to −196°C, the temperature of liquid nitrogen. Fracture-free vitrification would require inexpensive storage at a temperature significantly below the glass transition temperature of about −125°C, but high enough to avoid fracturing (−150°C is about right). Alcor is currently developing such a storage system. Alcor believes, however, that even before such a storage system is developed, the current vitrification method is far superior to traditional glycerol-based freezing, since the fractures are very clean breaks that occur even with traditional glycerol cryoprotection, and the loss of neurological structure is still less, by orders of magnitude, than that caused by ice formation.
While cryopreservation arrangements can be expensive (currently ranging from $28,000 to $150,000), most cryonicists pay for it with life insurance. The elderly, and others who may be uninsurable for health reasons, will often pay for the procedure through their estate. Others simply invest their money over a period of years, accepting the risk that they might die in the meantime. All in all, cryonics is actually quite affordable for the vast majority of those in the industrialized world who really want it, especially if they make arrangements while still young.
Even assuming perfect cryopreservation techniques, many cryonicists would still regard eventual revival as a long shot. In addition to the many technical hurdles that remain, the likelihood of obtaining a good cryopreservation is not very high because of logistical problems. The likelihood of the continuity of cryonics organizations as businesses, and the threat of legislative interference in the practice, don't help the odds either. Most cryonicists, therefore, regard their cryopreservation arrangements as a kind of medical insurance—not certain to keep them alive, but better than no chance at all and still a rational gamble to take.
Brain vs. whole-body cryopreservation
During the 1980s, the problems associated with crystallization were becoming better appreciated, and the emphasis shifted from whole body to brain-only or "neuropreservation", on the assumption that the rest of the body could be regrown, perhaps by cloning of the person's DNA or by using embryonic stem cell technology. The main goal now seems to be to preserve the information contained in the structure of the brain, on which memory and personal identity depend. Available scientific and medical evidence suggests that the mechanical structure of the brain is wholly responsible for personal identity and memories (for instance, spinal cord injury victims, organ transplant patients, and amputees appear to retain their personal identity and memories). Damage caused by freezing and fracturing is thought to be potentially repairable in the future, using nanotechnology, which will enable the manipulation of matter at the molecular level. To critics, this appears to be a kind of futuristic deus ex machina, but while the engineering details remain speculative, the rapidity of scientific advances over the past century, and more recently in the field of nanotechnology itself, suggests to some that there may be no insurmountable problems. And the cryopreserved patient can wait a long time. With the advent of vitrification, the importance of nanotechnology to the cryonics movement may begin to decrease.
Some critics, and even some cryonicists, question this emphasis on the brain, arguing that during neuropreservation some information about the body's phenotype will be lost and the new body may feel "unwanted", and that in case of brain damage the body may serve as a crude backup, helping restore indirectly some of the memories. Partly for this reason, the Cryonics Institute preserves only whole bodies. Some proponents of neuropreservation agree with these concerns, but still feel that lower costs and better brain preservation justify preserving only the brain.
Historically, cryonics began in 1962 with the publication of The Prospect of Immortality by Robert Ettinger. In the 1970s, the damage caused by crystallization was not well understood. Two early organizations went bankrupt, allowing their patients to thaw out, bringing the matter to the public eye, at which point the problem with cellular damage became more well known and the practice gained something of the reputation of a scam. During the 1980s, the extent of the damage from the freezing process became much clearer and better known, and the emphasis of the movement began to shift from whole-body to neuropreservation.
Alcor currently preserves about 60 human bodies and heads in Scottsdale, Arizona. Before the company moved to Arizona from Riverside, California in 1994, it was the center of several controversies, including a county coroner's ruling that a client was murdered with barbiturates before her head was removed by the company's staff. Alcor contended that the drug was administered after her death. No charges were ever filed.
See also:
- engineered negligible senescence
- life extension
- interstellar travel
- Immortality Institute
The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details | <urn:uuid:808b609d-c9b2-4043-aeea-548f59273c25> | 3.4375 | 2,487 | Knowledge Article | Science & Tech. | 20.021667 | 163 |
BOSON OR BOGUS, BILLION-DOLLAR BULL?
by Hank Mills
Salt Lake City, Utah
July 9, 2012
SALT LAKE CITY, Utah -- The universe is a mysterious place, and we understand very little about how it works. Sadly, the challenges our civilization faces such as war, poverty, pollution, economic turmoil, and "black swan" events may not allow humanity to exist long enough to figure it out before our species goes extinct. If we are to learn how the universe works, breakthrough technologies like cold fusion (LENR) need to be pursued, instead of multibillion dollar projects such as the search for the Higgs Boson that will have very few real-world applications in the short-to-medium term.
What humanity needs at this moment are working technologies that can allow us to overcome the issues that threaten our civilization. The cold fusion-based Energy Catalyzer, known colloquially as the E-Cat, is just such a technology because it could allow for almost unlimited energy production utilizing only tiny amounts of cheap, non-polluting fuel. The home version of the E-Cat is expected to be available commercially in the next six months at a cost of $600 or less.
The Energy Catalyzer has cost Andrea Rossi, a successful and colorful Italian engineer, virtually everything he has, perhaps more than $1 million in all. By contrast, it took $9 billion, 30 years of work, and 9,000 scientists to build the Large Hadron Collider in Geneva that they say "may have" discovered the so-called "God Particle," the theoretical construct known as the Higgs Boson. The United States contributed $531 million of that, and the collider uses enough power for 120,000 homes or the entire Canton of Geneva, Switzerland. It costs the United Kingdom enough to buy a beer for everyone in the country.
With cold fusion technology, funded by less than $100 million, humanity could gain a tool that could allow for large-scale desalinization of water and the resulting transformation of deserts into productive farmland, along with a massive reduction in the CO2 pollution that fuels global warming.
The E-Cat utilizes tiny amounts of nickel powder, hydrogen gas, and undisclosed (for proprietary reasons) catalysts to produce nuclear reactions, with the result being a massive release of energy in the form of heat. In every way, this technology matches what I had hoped for throughout my childhood and later in my life.
In addition, the millions of jobs created around the world by a cold-fusion revolution would galvanize the global economy and end the current global recession - and do it all safely, without Fukushima-like events. To be blunt, cold-fusion technology holds the potential to transform our world from a planet of poverty, war and self-destruction into a place of enlightened prosperity.
On the other hand, the existence of the Higgs Boson offers no near-term benefits to humanity. It may give us a bit more knowledge about the universe, but no one claims that even one single technology could be immediately developed using this knowledge.
If the existence of the Higgs Boson could yield a warp drive, free-energy device, gravity-modifying device, or other breakthrough in a reasonable period of time, perhaps the billions of dollars spent would be worth it. But the truth is that just like hot fusion research, the search for the Higgs Boson is a boondoggle. Due to the lack of any near-term benefits, the funds could be better spent elsewhere.
If a fraction of the money spent on the search for the Higgs Boson had been put into cold fusion research 20 years ago, there would be no energy crisis today. Instead, cold fusion devices that could produce kilowatts of power and very high temperatures - like the E-Cat - would have been quickly developed and commercialized.
Instead of putting money into practical technologies that could benefit all mankind in the near term, the career scientists naysayed exotic technologies like cold fusion, and lobbied for billions of dollars in additional funding for giant hot fusion reactors and particle colliders. All these years later, we have seen little or no return on the investment in the form of technological advancement.
We are still stuck with rockets for propulsion and burning fossil fuels for energy. Literally, we are still in a technological Dark Age when it comes to the most fundamental of technologies - energy and propulsion. (If you don't count the "black," off-budget projects that use taxpayer money with no taxpayer benefit, and actually have the agenda of making us all slaves.)
I do not want to call the search for the Higgs Field or the Higgs Boson totally meaningless. However, I think Nikola Tesla's work (although still ignored by the mainstream) on the nature of the ether is much more meaningful. He worked for years to find ways of harnessing the ether to allow for practical applications.
Some of these applications - such as wireless power transfer, superluminal communication via longitudinal waves in the ether, and his black box that provided electrical power (from the ether) to run an electric vehicle - have been replicated. Others have not yet been replicated, so far as we know. But if a fraction of the billions of dollars spent on hot fusion and the search for the Higgs Boson were utilized to fund inventors with the open-mindedness of Tesla, knowledge of how the universe works would be opened up to us very quickly.
My personal belief is that all the most important breakthroughs and discoveries will come from projects that can be performed in an ordinary lab, with a modest amount of funding. I think expensive multibillion-dollar projects that require monstrous reactors and miles-long particle accelerators belong in the future - if ever - after we have solved the more immediate issues facing our civilization.
Once our civilization is stabilized and poverty is a thing of the past, after we stop fighting wars over oil and the destruction of our environment has been reduced, then it may be time for larger-scale projects. Of course, by that time, the smaller-scale projects may have figured almost everything out that the monolithic projects were designed to explore. By then, the commercialization of cold fusion, free energy, gravity modification, faster-than-light drives and other technologies may have provided us with a more complete knowledge of how the universe works.
I think the E-Cat is a key example of a technology that was developed on a modest budget that will provide both solutions to the challenges our civilization faces and a huge wealth of information about how our universe works.
In fact, cold fusion may end up telling us more about how our universe works than the existence of the Higgs Boson. In my opinion, once the world recognizes that cold fusion is a reality, it will turn the discovery of the Higgs Boson into a footnote in history.
Also, it will expose how a field of study in which researchers are often forced to work on shoestring budgets can yield greater benefits for humanity than research that receives billions of dollars in funding. It will hopefully result in the end of "mainstream" and expensive hot-fusion research and projects like the Large Hadron Collider.
My friend Sterling Allen of Peswiki.com recently said this to me: "I sat next to a guy on a flight to Amsterdam last February who resigned in protest from the Hadron Collider project because of how dangerous it is, having the potential of annihilating the local universe."
Cold fusion is the real answer!
This article first appeared in Pure Energy Systems News (Peswiki.com), and is republished here with the permission of the author. | <urn:uuid:49e65512-cd36-482c-a80e-a39c0a4b14ac> | 2.6875 | 1,589 | Nonfiction Writing | Science & Tech. | 35.908975 | 164 |
Why are these strange little spheres on Mars? The rover Opportunity chanced across these unusually shaped beads earlier this month while exploring a place named Kirkwood near the rim of a crater on Mars. The above image, taken by Opportunity's Microscopic Imager, shows that some ground near the rover is filled with these unusual spheres, each spanning only about 3 millimeters. At first glance, the sometimes-fractured balls appear similar to the small rocks dubbed blueberries seen by Opportunity eight years ago, but these spheres are densely compacted and have little iron content. Although it is thought that these orbs formed naturally, which natural processes formed them remains unknown. Opportunity, an older sibling to the recently deployed Curiosity rover, will continue to study these spheres with the hope that they will provide a new clue to the ancient history of the Martian surface.
Credit: Mars Exploration Rover Mission | <urn:uuid:5098a63f-0b99-4229-a882-704abc277ff4> | 3.203125 | 186 | Content Listing | Science & Tech. | 14.844009 | 165 |
We are banishing darkness from the night. Electric lights have been shining over cities and towns around the world for a century. But, increasingly, even rural areas glimmer through the night, with mixed – and largely unstudied – impacts on wildlife. Understanding these impacts is a crucial conservation challenge and bats, as almost exclusively nocturnal animals, are ideal subjects for exploring the effects of light pollution.
Previous studies have confirmed what many city dwellers have long noted: some bats enjoy a positive impact of illumination by learning to feed on insects attracted to streetlights. My research, however, demonstrates for the first time an important downside: artificial lighting can disrupt the commuting behavior of a threatened bat species. This project, using a novel experimental approach, was supported in part by BCI Student Research Scholarships.
Artificial lighting is a global phenomenon and the amount of light pollution is growing rapidly, with a 24 percent increase in England between 1993 and 2000. Since then, cultural restoration projects have brought lighting to old docks and riversides, placing important river corridors used by bats and other wildlife at risk of disturbance.
Studies of bats' foraging activity around streetlights find that these bats are usually fast-flying species that forage in open landscapes, typically species of Pipistrellus, Nyctalus, Vespertilio and Eptesicus. Such bats are better able than their slower cousins to evade hawks, owls and other birds of prey.
For our study, we chose the lesser horseshoe bat (Rhinolophus hipposideros), a shy, slow-flying bat that typically travels no more than about 1.2 miles (2 kilometers) from its roost to forage each night, often flying no more than 16 feet (5 meters) from the ground. The species is adapted for feeding in cluttered, woodland environments. Its global populations are reported to be decreasing, and the species is endangered in many countries of central Europe. The United Kingdom provides a European stronghold for the lesser horseshoe bat, with an estimated population of around 50,000.
These bats' slow flight leaves them especially vulnerable to birds of prey, so they leave their roosts only as the light fades and commute to foraging areas along linear features such as hedgerows. Hedgerows are densely wooded corridors of shrubs and small trees that typically separate fields from each other and from roadways. Such features are important commuting routes for many bat species, which use them for protection from predators and the elements. We suspected that lesser horseshoe bats would avoid illuminated areas, largely because of a heightened risk from raptors.
We conducted artificial-lighting experiments along hedgerows in eight sites around southern Britain. We first surveyed light levels at currently illuminated hedgerows, then duplicated those levels at our experimental hedgerow sites, all of them normally unlighted. We installed two temporary, generator-powered lights – about 100 feet (30 meters) apart – that mimic the intensity and light spectra of streetlights. Each site was near a maternity colony and along confirmed commuting routes of lesser horseshoe bats.
Bat activity at each site was monitored acoustically, with mounted bat detectors, during four specific treatments: control (with no lights); noise (generator on and lights installed but switched off); lit (full illumination all night for four consecutive nights); and another night of noise only. We identified horseshoe bat calls to species and measured relative activity by counting the number of bat passes per species each night.
We found no significant difference in activity levels of lesser horseshoe bats between the control nights and either of the two noise nights, when the generators were running but the lights were off. The presence of the lighting units and the noise of the generators had no effect on bat activity.
The negative impacts came when we turned on the lights. We documented dramatic reductions in activity of lesser horseshoe bats during all of the illuminated nights. In our study, 42 percent of commuting bats continued flying through the lights; 30 percent reversed direction and left before reaching the lights; 17 percent flew over the hedgerows; 9 percent flew through the thick hedgerow vegetation; and 2 percent circled high or wide to avoid the lights. We also recorded some strange behavior on one night when two bats flew over the hedge in a dark area between two lights, then flew up and down repeatedly, as though trapped between the lights.
We examined the effects of light on the timing of bats' commuting activity. The bats began their commute, on average, 29.9 minutes after sunset on control nights, but 78.6 minutes after sunset when the lights were turned on. Light pollution significantly delayed the bats' commuting behavior. Interestingly, the activity began a few minutes earlier (23 minutes after sunset) on the first, but not the second, noise night. It is possible that some bats emerged early to investigate the generator noise.
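As an illustration of the bookkeeping behind such figures, the short C++ sketch below averages emergence times (minutes after sunset) by treatment. The values in it are invented for demonstration; only the treatment names follow the study design.

#include <iostream>
#include <map>
#include <numeric>
#include <string>
#include <vector>

int main() {
    // Minutes after sunset at which commuting began, grouped by treatment.
    // These numbers are made up for illustration, not taken from the study.
    std::map<std::string, std::vector<double>> emergence = {
        {"control", {28.0, 31.5, 30.2}},
        {"noise",   {22.5, 24.0}},
        {"lit",     {75.0, 80.1, 80.7}},
    };

    for (const auto& [treatment, minutes] : emergence) {
        // Mean emergence time for this treatment.
        double mean = std::accumulate(minutes.begin(), minutes.end(), 0.0)
                      / minutes.size();
        std::cout << treatment << ": mean emergence " << mean
                  << " minutes after sunset\n";
    }
    return 0;
}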
We clearly demonstrated how artificial lighting disrupts the behavior of lesser horseshoe bats. We found no evidence of habituation: at least on our timescale, the bats did not become accustomed to the illumination and begin returning to normal activity or timing.
These results suggest that light pollution may fragment the network of commuting routes used by lesser horseshoe bats, causing them to seek alternate, and probably longer, paths between roosting and foraging habitats. For some bats, this increased flight time can increase energy costs and stress, with potential impacts on reproductive success. It is critical, therefore, that light pollution be considered in conservation efforts.
Light pollution is an increasing global problem with negative impacts on such important animal behaviors as foraging, reproduction and communication. Yet lighting is rarely considered in habitat-management plans and streetlights are specifically excluded from light-pollution legislation in England and Wales.
I plan to use these results as the basis for recommendations for changes in policy, conservation and management for bat habitat in areas that are subject to development. This knowledge is fundamental for understanding the factors that impact bat populations not only in the United Kingdom but around the world, and in developing effective bat-conservation actions. I hope these findings will also help guide further research.
Scientists need to determine what levels of lighting particular bat species can tolerate, so we can take appropriate measures to limit the impact. These might include reducing illumination at commuting times, directing light away from commuting routes and constructing alternative flight routes.
We sincerely hope this research and similar studies will cause both officials and the public to think more about the consequences of artificial lighting on bats and other wildlife.
EMMA STONE is a Ph.D. student at the University of Bristol and a researcher at the university's School of Biological Sciences. This project earned her the national Vincent Weir Scientific Award from the Bat Conservation Trust of the United Kingdom. Visit her project website for more information: www.batsandlighting.co.uk.
This research was originally published in the journal Current Biology, with co-authors Gareth Jones and Stephen Harris. | <urn:uuid:28ac1264-a7a3-4f42-b3f0-d3aa321f1dcf> | 3.71875 | 1,430 | Academic Writing | Science & Tech. | 33.045031 | 166 |
Surface area is a two-dimensional property of a three-dimensional figure. Cones are similar to pyramids, except they have a circular base instead of a polygonal base. Therefore, the surface area of a cone is equal to the sum of the circular base area and the lateral surface area, calculated by multiplying half of the circumference by the slant height. Related topics include pyramid and cylinder surface area.
If you want to calculate the surface area of a cone, you only need to know 2 dimensions. The first is the slant height l and the second is the radius. So what we're going to do is separate this into two pieces: the first is the base, which is a circle with radius r, and the second is this slant height l. So if I took a scissors and cut the cone part and fanned it out, it would look like a sector. Well, what I could do here is rearrange this sector into a parallelogram. So again, if I cut this into really tiny pieces, then I'll be able to organize it into a parallelogram where I would be able to calculate its area. And the way that we'll calculate its area is first by asking, well, what are these lines that are going out?
Well, those lines are going to be your l, your slant height, and this side right here is going to be half of your circumference, and half of a circumference is pi times r because the whole circumference is 2 pi r. So this down here is pi times r, so if our height is l and our base is pi times r, then the area of this is equal to pi times r times l. So the surface area of a cone, which I'm going to write over here, is equal to the base pi r squared plus this lateral area, which is found using your slant height. So that's going to be pi times r times l, so you only need to know 2 dimensions, the radius and the slant height, and you can calculate the surface area of any cone. | <urn:uuid:8c57b621-6116-4614-a9fc-c31bd7ee9c11> | 4.125 | 416 | Tutorial | Science & Tech. | 59.242658 | 167 |
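As a quick worked check of the formula derived above (the numbers here are chosen purely for illustration):

\[
SA = \pi r^{2} + \pi r l,\qquad r = 3,\ l = 5:\quad
SA = \pi(3)^{2} + \pi(3)(5) = 9\pi + 15\pi = 24\pi \approx 75.4 \text{ square units.}
\]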
This is an introductory level presentation exploring the various definitions of the term "environmental sustainability" and the connection between climate change and human population growth and its impact on the viability of the earth's systems.
To explore the various perspectives of the term "environmental sustainability".
• “Environmental Sustainability” has different meanings to different people
• “Environmental Sustainability” extends beyond human existence
• Intelligence is not a good predictor of species longevity
• Longevity within species is tied to metabolic rate
• Environmental factors affect human migration, distribution, endeavors
• Structures made by humans are not sustainable
• Growth of human populations and economic systems are accelerating
CONTEXT FOR USE
This presentation can be used as an introduction to the topic of Climate Change or as an introduction to Environmental Sustainability. It could also be used in many interdisciplinary courses to incorporate population growth, sustainability, and climate change issues, or for informal education.
ACTIVITY DESCRIPTION AND TEACHING MATERIALS
Download the Pdf of the presentation.
Assessment is at the discretion of the educator and depends on how the presentation is used.
REFERENCES AND RESOURCES
Produced by the faculty of the University of North Carolina; see authors. | <urn:uuid:b9c4ef10-1e5f-48fa-aa53-c665a3b6a9cd> | 2.921875 | 266 | Truncated | Science & Tech. | -5.5675 | 168 |
Right Triangles, Bearings, and other Applications | <urn:uuid:1a8136cc-012a-4ac8-b554-48f3c8db5bfe> | 2.75 | 79 | Truncated | Science & Tech. | 59.965833 | 169 |
Last August, a 3,000-pound, eight-by-22-foot robotic platform was launched into the Hudson River just north of Denning's Point Peninsula in Beacon, N.Y.
On board the floating platform are state-of-the-art sensors that will provide continuous air and water monitoring including barometric pressure, wind speed and direction, water depth, temperature, salinity and flow rate. The sensors will also measure the levels of hydrogen contaminants, dissolved oxygen, and chlorophyll-a (a green pigment found in algae). The data will be transferred in real time to researchers who can track fluctuations in these measurements.
The information provides a detailed record of the overall health of the river. This will alert scientists and environmentalists to escalating pollution levels or to episodic events that can be problematic, such as algae blooms, which can lead to hypoxia. Hypoxia is characterized by a low concentration of oxygen that is exacerbated by increases in nutrients or a particular set of physical conditions. It is associated with fish kills among other problems.
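To make the alerting idea concrete, here is a minimal C++ sketch of how a stream of dissolved-oxygen readings might be screened for hypoxic conditions. It is not code from the actual platform; the record fields and the 2 mg/L threshold (a common rule of thumb for hypoxia) are illustrative assumptions.

#include <iostream>
#include <string>
#include <vector>

// Hypothetical record for one sensor sample; field names are invented.
struct WaterSample {
    std::string station;     // sensor platform ID
    double dissolvedOxygen;  // mg/L
    double temperatureC;     // water temperature in degrees Celsius
};

// Dissolved oxygen below roughly 2 mg/L is often treated as hypoxic;
// an actual monitoring program may use a different threshold.
constexpr double kHypoxiaThresholdMgPerL = 2.0;

void screenForHypoxia(const std::vector<WaterSample>& samples) {
    for (const WaterSample& s : samples) {
        if (s.dissolvedOxygen < kHypoxiaThresholdMgPerL) {
            std::cout << "ALERT: possible hypoxia at " << s.station
                      << " (DO = " << s.dissolvedOxygen << " mg/L)\n";
        }
    }
}

int main() {
    std::vector<WaterSample> samples = {
        {"beacon-01", 7.8, 21.5},  // healthy reading
        {"beacon-01", 1.6, 22.0},  // below threshold; triggers an alert
    };
    screenForHypoxia(samples);
    return 0;
}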
This technology, which promises to revolutionize the way bodies of water are monitored, was developed by a team of scientists and researchers headed up by James Bonner ’85, professor of civil & environmental engineering and director of Clarkson’s Center for the Environment (CCE).
“Our goal is to eventually cover the entire 315-mile river from Mt. Marcy to New York City with a network of sensors,” explains Bonner. “The technology will allow us to create a cyber-infrastructure that stores and processes a great deal of data about the Hudson River. Scientists and engineers around the world will be able to access this information via the Internet.”
Bonner began the development of this real-time monitoring technology at the Shoreline Environmental Research Facility at Texas A&M University where he served as founding director. While in Corpus Christi, Bonner and fellow researchers developed sensing systems that they used to monitor the Gulf of Mexico. Since joining the Clarkson faculty in 2007, Bonner (who holds a Ph.D. from Clarkson) has continued his NSF-funded research program with an eye toward transferring the technology to map and monitor the ecological health of the rivers, Great Lakes and the St. Lawrence Seaway.
The Hudson River monitoring project is a joint partnership between Clarkson University; the Beacon Institute for Rivers and Estuaries, a not-for-profit environmental research organization; and IBM. Last year, Bonner was named the Beacon Institute’s REON Director of Research and will lead the development and implementation of the River and Estuary Observatory Network (REON). The Hudson River project is the first step in a larger plan to develop technology-based monitoring and forecasting network for rivers and estuaries.
“Tremendous human impact occurs in the regions where rivers and estuaries meet the ‘coastal margin’ — coastal wetlands, bays and shorelines,” explains Bonner. “In the United States, this region is home to 70 percent of the population and 20 of its 25 largest cities. It is also where most industry and ports are found. Damage to these ecosystems comes from this increased density of anthropogenic activity associated with pollution from industry, farms and the surrounding communities.”
For example, hypoxia generally occurs in aquatic systems where the water is poorly mixed, excluding oxygen and trapping pollutants in the "hypolimnion" — the dense bottom layer in a stratified body of water. Chemical reactions within the hypolimnion and with bottom sediments deplete the benthic oxygen, so aerobic organisms such as fish, oysters, clams, and other bottom-dwelling organisms perish. "This problem is a growing national concern; for example, increasing areas of the Gulf of Mexico (thousands of square miles), portions of the Great Lakes, and embayments such as Corpus Christi Bay and other near-shore areas are experiencing hypoxia," says Bonner.
IBM is working with Bonner and the Beacon Institute to develop the cyber framework that will store the data and provide assessment tools, which researchers around the world will be able to use. “Scientists will be able to analyze data and develop models on any environmental parameter of interest.”
For Bonner, one of the most exciting aspects of the project is the way it will transform environmental science and engineering. “The old-fashioned method of retrieving data by collecting samples at discreet locations at only a few times gives a static, incomplete and aliased view or understanding. With this technology, we’ll be able to get real-time data that reflects the constantly changing, dynamic environment of the river. The information will be far more reliable.” | <urn:uuid:02237b71-3d97-43b4-b615-8779adad0180> | 3.03125 | 982 | Knowledge Article | Science & Tech. | 32.182778 | 170 |
An Introduction to ASP.NET Web API
Microsoft recently released the ASP.NET MVC 4.0 beta and along with it, the brand spanking new ASP.NET Web API. Web API is an exciting new addition to the ASP.NET stack that provides a new, well-designed HTTP framework for creating REST and AJAX APIs (API is Microsoft’s new jargon for a service, in case you’re wondering). Although Web API currently ships and installs with ASP.NET MVC 4, you can use Web API functionality in any ASP.NET project, including WebForms, WebPages and MVC or none of the above. You can also self-host Web API in your own applications.
Please note that this article is based on pre-release bits of ASP.NET Web API (pre-RC) and the API is still changing. The samples are built against the latest snapshot of the CodePlex ASP.NET Web Stack Source and some of the syntax and functions might change by the time Web API releases. Overall concepts apply, and I’ve been told that functionality is mostly feature complete, but things are still changing as I write this. Please refer to the latest code samples on GitHub for the final syntax of the examples.
What’s a Web API and Why Do We Need It?
Most mobile devices, like phones and tablets, run apps that use data retrieved from the Web over HTTP.
The .NET stack already includes a number of tools that provide the ability to create HTTP service backends. There’s WCF REST for REST and AJAX, ASP.NET AJAX Services purely for AJAX and JSON, and you can always use plain HTTP Handlers for any sort of response but with minimal plumbing. You can also use plain MVC Controller Methods or even ASP.NET WebForms pages to generate arbitrary HTTP output.
Although all of these can accomplish the task of returning HTTP responses, none of them are optimized for the repeated tasks that an HTTP service has to deal with. If you are building sophisticated Web APIs on top of these solutions, you’re likely to either repeat a lot of code or write significant plumbing code yourself to handle various API requirements consistently across requests.
A Better HTTP Experience
ASP.NET Web API differentiates itself from these other solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it’s a brand new platform rather than bolted-on technology that is supposed to work in the context of an existing framework.
Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available. There’s much-improved support for content negotiation based on HTTP Accept headers, with the framework capable of detecting content that the client sends and requests and automatically serving the appropriate data format in return. Many of the features favor convention over configuration, making it much easier to do the right thing without having to explicitly configure specific functionality.
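The content-negotiation idea itself is framework-independent. The stripped-down C++ sketch below (hypothetical code, not Web API itself, which is a C#/.NET framework) shows the basic decision a server makes when it inspects an Accept header; real negotiation also honors quality values (q=) and media-type wildcards.

#include <iostream>
#include <string>

// Pick a response format from an HTTP Accept header. This only
// illustrates the idea; it ignores q-values and wildcards.
std::string negotiate(const std::string& acceptHeader) {
    if (acceptHeader.find("application/xml") != std::string::npos) {
        return "XML";
    }
    if (acceptHeader.find("application/json") != std::string::npos) {
        return "JSON";
    }
    return "JSON";  // a sensible default for an API
}

int main() {
    std::cout << negotiate("application/json") << '\n';  // JSON
    std::cout << negotiate("application/xml") << '\n';   // XML
    std::cout << negotiate("text/html") << '\n';         // JSON (default)
    return 0;
}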
Although previous solutions accomplished this using a variety of WCF and ASP.NET features, Web API combines all this functionality into a single server-side HTTP framework that intrinsically understands the HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that isn’t automatic, there are overrides for most behaviors, and even many low-level hook points that allow you to plug-in custom functionality with relatively little effort.
Web API also requires very little in the way of configuration so it’s very quick and unambiguous to get started. To top it all off, you can also host the Web API in your own applications or services.
Above all, Web API makes it extremely easy to create arbitrary HTTP endpoints in an application without the overhead of a full framework like WebForms or ASP.NET MVC. Because Web API works on top of the core ASP.NET stack, you can plug Web APIs into any ASP.NET application.
By: Rick Strahl
Rick Strahl is president of West Wind Technologies in Maui, Hawaii. The company specializes in Web and distributed application development and tools, with focus on Windows Server Products, .NET, Visual Studio, and Visual FoxPro. Rick is the author of West Wind Web Connection, West Wind Web Store, and West Wind HTML Help Builder. He’s also a C# MVP, a frequent contributor to magazines and books, a frequent speaker at international developer conferences, and the co-publisher of CoDe Magazine. For more information please visit his Web site at www.west-wind.com or contact Rick at firstname.lastname@example.org. | <urn:uuid:9ea58fa8-b877-43a4-812d-20cc6c990fc3> | 2.515625 | 1,010 | Truncated | Software Dev. | 54.43552 | 171 |
I saw some tutorial pages on the internet about how to read files using C++
But I'm kind of confused because there isn't anything in the code indicating where the file comes from. So I think I need some explanation.
It will open the file in the current (working) folder. If you want to open a file that is in another folder, you may write the full path: ofstream ofs("C:\\some_folder\\some_file");
There is a version of the constructor (and of the open() function) that takes std::string, if you prefer to use one.
What you pass is actually the file path, so you can give a full path or a relative path. If you just specify the filename, that is a relative path. Relative paths are relative to the working directory of the program. If you start your program by double-clicking on the executable file, the working directory will be the directory where the executable file is located. If you start your program from the command line, the working directory will be the directory that you set using the cd command. If you start your program from an IDE, the working directory is often set to the project directory (not the source directory), but this can differ between IDEs. | <urn:uuid:539cacc7-a7b0-4ae5-a649-fc47d6f41c8c> | 3.375 | 242 | Q&A Forum | Software Dev. | 49.120155 | 172 |
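To tie the answers together, here is a small self-contained example (the file names are placeholders) showing a relative path, an absolute path, and the check for whether each open succeeded:

#include <fstream>
#include <iostream>
#include <string>

int main() {
    // Relative path: resolved against the program's working directory.
    std::ifstream relative("data.txt");
    if (!relative.is_open()) {
        std::cerr << "Could not open data.txt in the working directory\n";
    }

    // Absolute path: opens the same file no matter where the program runs.
    std::ifstream absolute("C:\\some_folder\\some_file");
    if (!absolute.is_open()) {
        std::cerr << "Could not open C:\\some_folder\\some_file\n";
    }

    // Read the relative-path file line by line, if it opened.
    std::string line;
    while (std::getline(relative, line)) {
        std::cout << line << '\n';
    }
    return 0;
}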
Effects of Inputs and Outputs on a Region
The purpose of this resource is to identify what enters and leaves the regional system, and how changes in the input or output of one component can affect other components.
Intended for grade levels:
Type of resource:
Adobe Acrobat reader
Cost / Copyright:
For science/educational use consistent with the methodologies of the GLOBE Program.
DLESE Catalog ID: GLOBE-276
Resource contact / Creator / Publisher:
Author: University Corporation for Atmospheric Research (UCAR)
The GLOBE Program | <urn:uuid:e93ba94b-57b4-4aa5-9944-62a196c61847> | 2.875 | 124 | Structured Data | Science & Tech. | -0.339167 | 173 |
THE FRAGILE FAUNA OF ILLINOIS CAVES
by Steven J. Taylor and Donald W. Webb
Illinois has several hundred caves, many of them in nearly pristine condition.
This unique and fragile environment is home to a diverse array of creatures,
including organisms that are completely limited to the cave environment,
species that may be found in similar habitats above ground, and the many
animals that accidentally wander, fall, or are washed into caves. Many
cave animals are highly adapted for the unique and harsh living conditions
they encounter underground.
caves can be found in four distinct karst regions: in the Mississippian
limestone of the Shawnee Hills, in the Salem Plateau and in the Lincoln
Hills, and in the Ordovician limestone of the Driftless Area. These caves
have been forming though the interaction of geology vegetation, and rainfall
for the past 300 million years. Shallow seas covered much of Illinois
during the Mississippian Period. When the seas receded, forests grew over
the exposed sedimentary rocks; and rainwater-which had become slightly
acidic through interaction with carbon dioxide from both the atmosphere
and the bacterial breakdown of organic material-then seeped into cracks
and bedding planes. As the limestone dissolved, conduits formed. These
conduits eventually developed the geologic features characteristic of
karst terrain-caves, sinking streams, springs, and sinkholes.
INTO THE TWILIGHT ZONE
Caves can be divided into three ecological zones. The entrance zone is
similar in light, temperature, and relative humidity to the surrounding
surface habitat, and the creatures that live there resemble the animals
that live in the moist shaded areas near the cave. Here we find the eastern
phoebe (Sayornis phoebe), a small gray bird whose nest is constructed
on bare bedrock walls out of mosses and other debris. In the leaf litter,
we find many animals of the forest floor: redbacked salamanders, harvestmen
(or daddy-longlegs), snails, earthworms, millipedes, centipedes, beetles,
ants, and springtails. Cave entrances are often funnel shaped or have
sheer vertical walls, and organisms and organic debris tend to concentrate
at the bottom. The entrance zone also provides a highly protected environment
for overwintering organisms.
Deeper inside the cave, in the twilight zone, there is much less light,
and photosynthesizing plants are no longer able to grow. The temperature
and relative humidity fluctuate here, but the environment is usually damp
and cool. Many animals from the entrance zone wander into the twilight
zone, but most of these creatures must eventually return to the land above.
Several species of cave crickets are common in this part of the cave,
sometimes appearing in large numbers on walls or ceilings.
In larger caves, there is a dark zone characterized by constant temperature
(about 54-58°F in Illinois) and the absence of light. Here, the relative
humidity approaches the saturation point. Many animals in the dark zone
are capable of completing their entire life cycles without leaving the
cave although food is scarce in the absence of photosynthesis. In this
zone, there are fewer species of organisms. Creatures who live here eat
primarily organic debris-wood, leaves, and accidental animals. Dark-zone
dwellers get some of their nutrients from the feces of bats and cave crickets,
animals that leave the cave at night to feed on the surface. Raccoons,
common cave explorers in Illinois, also leave their waste behind. A wide
array of bacteria and fungi feast upon these nutrient-rich items. Other
animals then feed upon the fungi and bacteria. Springtails, minute insects
typically overlooked by the casual observes, are important fungus feeders,
and a variety of beetles, flies, and millipedes get their nourishment
this way as well. These organisms may then become the prey of cave-inhabiting
spiders, harvestmen, predacious fly larvae known as webworms, and an occasional
cave salamander. In the winter, pickerel frogs, mosquitoes, and some moths
move into cave to wait for warmer weather.
ADAPTING AND SURVIVING
Common cave inhabitants
include (left to right) the moth Scoliopteryx libatrix,
which does not have a common name; the cave salamander (Eurycea
lucifuga); and the monorail worm (Macrocera nobilis).
Animals that live in caves vary greatly in their degree of adaptation
to the cave environment. Accidental animals live there only temporarily;
they will either leave or die. Animals that frequent caves but must return
to the surface at some point in their life cycles are known as trogloxenes.
Bats and cave crickets are two examples. Troglophiles are animals that
can complete their entire life cycles within a cave, but they may also
be found in cool, moist habitats outside of caves. Two troglophilic vertebrates
found in or near Illinois caves are the cave salamander (Eurycea lucifuga)
and the spring cavefish (Forbesichthys agassizii).
Diane Tecic, district
heritage biologist for the Illinois Department of Natural Resources,
looks for cave-adapted organisms in organic debris with Illinois
caver Tim Sickbert.
Most cave animals are trogloxenes and troglophiles; only 20 to 30% of
the animals in North American caves are troglobites. Troglobites are animals
that live exclusively in caves; they are especially interesting because
of their unique morphological, physiological, behavioral, and life-history
adaptations. Many troglobites, for example, lack body pigment. Because
they live where there is no light, there is no evolutionary advantage
for them in maintaining the colors that might be characteristic of their
relatives and ancestors that live above ground. In cave-adapted species,
the evolutionary pressure to maintain functional eyes is also greatly
reduced, and these species have been under strong selective pressure to
evolve other means of sensing their surroundings. Their legs and antennae
usually have more sensory nerve endings than related above-ground species.
These appendages serve important tactile functions and are often greatly
elongated in cave-dwelling creatures.
Adaptations that allow species to exist in an environment with very low
nutrient input are not as obvious. Many cave-adapted species produce fewer
offspring than their surface-inhabiting relatives, but individual eggs
may contain more nutrients. In some species, timing of reproduction may
be synchronized with spring flooding and its new supply of nutrients.
Other species, lacking the above-ground seasonal cues of temperature and
photoperiod, may reproduce year-round. Cave adaptations may include a
reduced metabolic rate, allowing animals to live on limited food resources
for long periods of time. Illinois has many troglobitic invertebrates
but no troglobitic vertebrates.
As cave-adapted species become specialized, they also tend to become
geographically isolated. The geological and hydrological history of some
areas may divide species into isolated populations, and these populations,
over time, may evolve into distinct species. During glacial periods, caves
serve as refugia for some aquatic, soil-, and litter-inhabiting animals.
These species may become "stranded" in caves when glaciers retreat and
surface conditions are not suitable for recolonization.
VULNERABILITY OF CAVE ENVIRONMENTS
Human disturbance affects cave ecosystems just as it affects other ecosystems.
As a result of changes we make on the surface, we unknowingly alter cave
environments, destroying unique and valuable organisms before we even
know of their existence. The public knows very little about caves and
the organisms that inhabit them. Small wonder then that the importance
of protecting groundwater, caves, and cave life is not fully appreciated.
It is not uncommon to find sinkholes filled with trash, serving as natural
garbage cans for rural waste disposal. Visitors sometimes permanently
damage caves with graffiti, break stalactites and stalagmites, and carelessly disturb fragile cave life.
The very adaptations that allow troglobites to survive in the harsh cave
environment make these animals more vulnerable to changes made by humans.
The reduced metabolic rates that allow these animals to survive in a nutrient-poor
environment also make them less competitive when organic enrichment is
introduced in the form of fertilizers, livestock and agricultural waste,
and human sewage. In Illinois, this effect is commonly seen in stream-inhabiting
amphipods (small shrimplike animals) and isopods (small crustaceans related
to terrestrial pillbugs or sowbugs). These groups contain troglobites
that are highly adapted to cave environments; they also contain more opportunistic
troglophilic species, which have a competitive advantage in the presence
of high levels of organic waste.
Amphipods and isopods feed on small particles of organic debris and on
decomposers such as bacteria and fungi. Because they ingest large quantities
of this material, they are exposed to contamination from a variety of
pollutants. In Illinois, samples of these animals collected in 1992 were
found to contain dieldrin and breakdown products of DDT. They were also
found to contain moderate levels of mercury, although mercury was not
detected in any water samples from the same sites.
Sedimentation also threatens aquatic species. Topsoil run-off from rural
development and agricultural fields enters caves readily when vegetative
buffers around sinkholes are too small or nonexistent. This sediment fills
the spaces in gravel streambeds, eliminating the microhabitats that allow
many cave-dwelling species to exist. As a result, cave streams with high
sediment loads tend to contain few species.
Sometimes, humans can't easily see the value of these subterranean systems,
especially when their own interests conflict with the health of cave communities.
Such a conflict is occurring now in our most biologically and hydrologically
significant karst area, the Salem Plateau of Monroe and St. Clair counties.
As part of the greater St. Louis metropolitan area, the Salem Plateau
is experiencing rapid population growth. Scientists can estimate the level
and types of threats that this growth brings to the biological integrity
of the region, but it's much more difficult to develop protected areas,
educational programs, and new regulatory mechanisms within the existing
political, social, and geographic framework. Illinois caves are a high
priority for conservation because cave organisms face serious threats
from agriculture and increasing urbanization. Also, the unique and fragile
cave environment provides a home for organisms found nowhere else
in the world.
It is not usually possible to include the entire drainage basin of significant
caves within nature preserves or other conservation easements. To manage
a cave effectively, scientists must understand the hydrology of a cave's
subterranean conduits. This knowledge is gained by doing extensive dye
tracing studies and cave mapping. Both of these activities are time- and
labor-intensive. Already, the drainage basins of some of our largest cave
systems are being compromised by agriculture and rural housing projects.
Educating the public-particularly politicians, farmers, and children-about
land use and the impact of human activities is key to the long-term health
of cave communities. We must also enact appropriate regulations for rural
residential development-especially wastewater treatment-and for agricultural
activities in a karst landscape.
For more information on cave conservation and management, contact the
National Speleological Society, 2813 Cave Avenue, Huntsville, AL 35810-4431,
or Steven Taylor or Donald Webb at the Center for Biology, Illinois Natural
History Survey, 607 East Peabody Drive, Champaign, IL 61820.
Steven J. Taylor is an aquatic entomologist in the
Center for Biodiversity at the Illinois Natural History Survey in Champaign.
Donald W. Webb is an insect systematist, also at the Center for Biodiversity.
A GOOD NEIGHBOR POLICY
In a few caves in Monroe and St. Clair counties, you can find a
small shrimplike creature that exists nowhere else in the world.
The Illinois cave amphipod has made our corner of the world its
home, but it may not be here long unless humans take steps to protect
its environment. This unassuming cave creature has been proposed
for listing as a federally endangered species.
Cave amphipods inhabit the bottoms of pools and riffles in large
cave streams, where they creep among cobbles and under stones, feeding
on decaying leaf litter and organic debris. Food is scarce in this
environment, and the amphipods have developed chemosensory structures
that detect the odor of food sources, such as dead or injured animals.
Injured or dying amphipods are vulnerable to such predators as
flatworms, cave salamanders, and even other amphipods. But
the greatest threat these vulnerable creatures face is the
deterioration of the environment. The Illinois cave amphipod
lives near the greater St. Louis metropolitan area, a region
that has been experiencing dramatic population growth for
the past 10 years. Continued urbanization without appropriate
sewage treatment and disposal is especially threatening to
the amphipods existence. Other serious threats are siltation
and the presence of agricultural chemicals in subterranean
Fortunately for the amphipod, the quality of life for people on
the land above depends on water quality in streams below. Because
agricultural chemicals and bacteria associated with sewage have
been found in well water, springs, and cave streams in this area,
a concerted effort is being made to improve the water quality in
this karst region. Efforts to provide communities with safe drinking
water could also provide a healthy cave environment and help ensure
the further existence of our underground neighbor, the Illinois | <urn:uuid:df2ab0ff-bb86-415b-be4c-863c8014597f> | 3.78125 | 3,029 | Knowledge Article | Science & Tech. | 21.357917 | 174 |
There are many types of biomass—organic matter such as plants,
residue from agriculture and forestry, and the organic component of
municipal and industrial wastes—that can now be used to produce fuels,
chemicals, and power. Wood has been used to provide heat for thousands of
years. This flexibility has resulted in increased use of biomass
technologies. According to the Energy Information Administration, 53% of
all renewable energy consumed in the United States was biomass-based.
Biomass technologies break down organic matter to release stored energy
from the sun.
Biofuels are liquid or gaseous fuels produced from biomass. Most biofuels
are used for transportation, but some are used as fuels to produce
electricity. The expanded use of biofuels offers an array of benefits for
our energy security, economic growth, and environment.
Current biofuels research focuses on new forms of biofuels such as
ethanol and biodiesel, and on biofuels conversion processes.
Ethanol—an alcohol—is made primarily from the starch in corn grain. It
is most commonly used as an additive to petroleum-based fuels to reduce
toxic air emissions and increase octane. Today, roughly half of the
gasoline sold in the United States includes 5%-10% ethanol.
Biodiesel use is relatively small, but its benefits to air quality are significant.
Biodiesel is produced through a process that combines
organically-derived oils with alcohol (ethanol or methanol) in the
presence of a catalyst to form ethyl or methyl ester. The biomass-derived
ethyl or methyl esters can be blended with conventional diesel fuel or
used as a neat fuel (100% biodiesel).
Biomass resources include any plant-derived organic matter that is
available on a renewable basis. These materials are commonly referred to
as biomass feedstocks.
Biomass feedstocks include dedicated energy crops, agricultural crops,
forestry residues, aquatic crops, biomass processing residues, municipal
waste, and animal waste.
Dedicated energy crops
Herbaceous energy crops are perennials that are harvested annually after
taking 2 to 3 years to reach full productivity. These include such grasses
as switchgrass, miscanthus (also known as elephant grass or e-grass),
bamboo, sweet sorghum, tall fescue, kochia, wheatgrass, and others.
Short-rotation woody crops are fast-growing hardwood trees that are
harvested within 5 to 8 years of planting. These include hybrid poplar,
hybrid willow, silver maple, eastern cottonwood, green ash, black walnut,
sweetgum, and sycamore.
Agricultural crops include currently available commodity products such as
cornstarch and corn oil, soybean oil and meal, wheat starch, and vegetable
oils. They generally yield sugars, oils, and extractives, although they
can also be used to produce plastics as well as other chemicals and products.
Agriculture Crop Residues
Agriculture crop residues include biomass materials, primarily stalks and
leaves, that are not harvested or removed from fields in commercial use.
Examples include corn stover (stalks, leaves, husks, and cobs), wheat
straw, and rice straw. With approximately 80 million acres of corn planted
annually, corn stover is expected to become a major feedstock for biopower applications.
Forestry Residues
Forestry residues include biomass not harvested or removed from logging
sites in commercial hardwood and softwood stands as well as material
resulting from forest management operations such as pre-commercial
thinning and removal of dead and dying trees.
Aquatic Crops
There are a variety of aquatic biomass resources, such as algae, giant
kelp, other seaweed, and marine microflora.
Biomass Processing Residues
Biomass processing yields byproducts and waste streams that are
collectively called residues and have significant energy potential.
Residues are simple to use because they have already been collected. For
example, the processing of wood for products or pulp produces unused
sawdust, bark, branches, and leaves/needles.
Municipal Waste
Residential, commercial, and institutional post-consumer waste contains a
significant proportion of plant-derived organic material that constitutes a
renewable energy resource. Waste paper, cardboard, wood waste, and yard
waste are examples of biomass resources in municipal waste.
Animal Waste
Farms and animal-processing operations create animal wastes that
constitute a complex source of organic materials with environmental
consequences. These wastes can be used to make many products, including energy.
Some biomass feedstocks, such as municipal waste, are found throughout
the United States. Others, such as energy crops, are concentrated in the
eastern half of the country. As technologies develop to more efficiently
process complex feedstocks, the biomass resource base will expand.
Collecting Gas from Landfills
Landfills can be a source of energy. Organic waste produces a gas called
methane as it decomposes, or rots.
Methane is the same
energy-rich gas that is in natural gas, the fuel sold by natural gas
utility companies. It is colorless and odorless. Natural gas utilities add
an odorant (bad smell) so people can detect seeping gas, but it can be
dangerous to people or the environment. New rules require landfills to
collect methane gas as a pollution and safety measure.
| <urn:uuid:43454431-e724-4640-b136-09b9e018b7c6> | 3.828125 | 1,230 | Knowledge Article | Science & Tech. | 24.609679 | 175 |
An earthquake is a sudden vibration or trembling in the Earth. More than 150,000 tremors strong enough to be felt by humans occur each year worldwide (see Chance of an Earthquake). Earthquake motion is caused by the quick release of stored potential energy into the kinetic energy of motion. Most earthquakes are produced along faults, tectonic plate boundary zones, or along the mid-oceanic ridges (Figures 1 and 2).
Figure 1: Distribution of earthquake epicenters from 1975 to 1995. Depth of the earthquake focus is indicated by color. Deep earthquakes occur in areas where oceanic crust is being actively subducted. About 90% of all earthquakes occur at a depth between 0 and 100 kilometers. (Source: U.S. Geological Survey, National Earthquake Information Center)
Figure 2: Distribution of earthquakes with a magnitude less than 5.0 relative to the various tectonic plates found on the Earth's surface. Each tectonic plate has been given a unique color. This illustration indicates that the majority of small earthquakes occur along plate boundaries. (Source: PhysicalGeography.net)
At these areas, large masses of rock that are moving past each other can become locked due to friction. Friction is overcome when the accumulating stress has enough force to cause a sudden slippage of the rock masses. The magnitude of the shock wave released into the surrounding rocks is controlled by the quantity of stress built up because of friction, the distance the rock moved when the slippage occurred, and the ability of the rock to transmit the energy contained in the seismic waves. The San Francisco earthquake of 1906 involved a six meter horizontal displacement of bedrock. Sometime after the main shock wave, aftershocks can occur because of the continued release of frictional stress. Most aftershocks are smaller than the main earthquake, but they can still cause considerable damage to already weakened natural and human-constructed features. Earthquakes that occur under or near bodies of water can give rise to tsunamis, which in cases like the December 26, 2004 Sumatra-Andaman Island earthquake result in far greater destruction and loss of life than the initial earthquake.
Earthquakes are a form of wave energy that is transferred through bedrock. Motion is transmitted from the point of sudden energy release, the earthquake focus (hypocenter), as spherical seismic waves that travel in all directions outward (Figure 3). The point on the Earth's surface directly above the focus is termed the epicenter. Two different types of seismic waves have been described by geologists: body waves and surface waves. Body waves are seismic waves that travel through the lithosphere. Two kinds of body waves exist: P-waves and S-waves. Both of these waves produce a sharp jolt or shaking. P-waves or primary waves are formed by the alternate expansion and contraction of bedrock and cause the volume of the material they travel through to change. They travel at a speed of about 5 to 7 kilometers per second through the lithosphere and about 8 kilometers per second in the asthenosphere. By comparison, the speed of sound in air is only about 0.34 kilometers per second. P-waves also have the ability to travel through solid, liquid, and gaseous materials. When some P-waves move from the ground to the lower atmosphere, the sound wave that is produced can sometimes be heard by humans and animals.
Figure 3: Movement of body waves away from the focus of the earthquake. The epicenter is the location on the surface directly above the earthquake's focus. (Source: PhysicalGeography.net)
S-waves or secondary waves are a second type of body wave. These waves are slower than P-waves and can only move through solid materials. S-waves are produced by shear stresses and move the materials they pass through in a perpendicular (up and down or side to side) direction.
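With the speeds quoted above, a little arithmetic shows how the gap between P-wave and S-wave arrivals grows with distance from the focus; this lag is what lets seismologists estimate how far away an earthquake occurred. The short sketch below is illustrative only: the P-wave speed of 6 km/s is the midpoint of the range given in the text, while the S-wave speed of 3.5 km/s is a typical textbook value, not a figure stated here.

public class SPWaveLag {
    public static void main(String[] args) {
        double vP = 6.0;           // km/s: midpoint of the 5-7 km/s range quoted above
        double vS = 3.5;           // km/s: assumed typical S-wave speed (not stated in the text)
        double distanceKm = 100.0;
        // S-waves arrive later than P-waves because they travel more slowly:
        double lagSeconds = distanceKm / vS - distanceKm / vP;
        System.out.printf("S minus P arrival lag over %.0f km: about %.1f s%n",
                distanceKm, lagSeconds);
    }
}

For a station 100 km from the focus, the lag works out to roughly 12 seconds; doubling the distance doubles the lag.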
Surface waves travel at or near the Earth's surface. These waves produce a rolling or swaying motion causing the Earth's surface to behave like waves on the ocean. The velocity of these waves is slower than body waves. Despite their slow speed, these waves are particularly destructive to human construction because they cause considerable ground movement.
Earthquake Magnitude and Energy
Table 1: Relationship between Richter scale magnitude and energy released.

| Richter magnitude | Energy released (joules) | Comments |
| 2.0 | 1.3 x 10^8 | Smallest earthquake detectable by people. |
| 5.0 | 2.8 x 10^12 | Energy released by the Hiroshima atomic bomb. |
| 6.0 - 6.9 | 7.6 x 10^13 to 1.5 x 10^15 | About 120 shallow earthquakes of this magnitude occur each year on the Earth. |
| 6.7 | 7.7 x 10^14 | Northridge, California earthquake of January 17, 1994. |
| 7.0 | 2.1 x 10^15 | Major earthquake threshold. The Haiti earthquake of January 12, 2010 resulted in an estimated 222,570 deaths. |
| 7.4 | 7.9 x 10^15 | Turkey earthquake of August 17, 1999. More than 12,000 people killed. |
| 7.6 | 1.5 x 10^16 | Deadliest earthquake in the last 100 years: Tangshan, China, July 28, 1976. Approximately 255,000 people perished. |
| 8.3 | 1.6 x 10^17 | San Francisco earthquake of April 18, 1906. |
| 9.0 | - | Japan earthquake of March 11, 2011. |
| 9.1 | 4.3 x 10^18 | December 26, 2004 Sumatra earthquake, which triggered a tsunami and resulted in 227,898 deaths spread across fourteen countries. |
| 9.5 | 8.3 x 10^18 | Most powerful earthquake recorded in the last 100 years: southern Chile, May 22, 1960. Claimed 3,000 lives. |
The strength of an earthquake can be measured by a device called a seismograph. When an earthquake occurs this device converts the wave energy into a standard unit of measurement like the Richter scale. In the Richter scale, units of measurement are referred to as magnitudes. The Richter scale is logarithmic: each unit increase in magnitude represents a tenfold increase in measured ground motion and roughly a thirtyfold increase in energy released. Table 1 describes the relationship between Richter scale magnitude and energy released. The following equation can be used to approximate the amount of energy released from an earthquake in joules when the Richter magnitude (M) is known:
Energy in joules = 1.74 x 10^(5 + 1.44M)
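This relation can be coded directly as a quick check; the class and method names below are ours, chosen only for illustration. Note how the equation captures the logarithmic scale: each whole unit of magnitude multiplies the released energy by 10^1.44, a factor of roughly 28.

public class QuakeEnergy {
    // E = 1.74 x 10^(5 + 1.44*M) joules, the approximation given above.
    static double energyJoules(double magnitude) {
        return 1.74 * Math.pow(10, 5 + 1.44 * magnitude);
    }

    public static void main(String[] args) {
        for (double m : new double[] {5.0, 6.0, 7.0, 8.0}) {
            System.out.printf("M %.1f releases about %.1e J%n", m, energyJoules(m));
        }
        // Ratio between successive whole magnitudes: 10^1.44, about 28.
        System.out.printf("Energy ratio per magnitude unit: %.1f%n",
                energyJoules(7.0) / energyJoules(6.0));
    }
}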
Figures 4 and 5 describe the spatial distribution of small and large earthquakes respectively. These maps indicate that large earthquakes have distributions that are quite different from small events. Many large earthquakes occur some distance away from a plate boundary. Some geologists believe that these powerful earthquakes may be occurring along ancient faults that are buried deep in the continental crust. Recent seismic studies in the central United States have discovered one such fault located thousands of meters below the lower Mississippi Valley. Some large earthquakes occur at particular locations along the plate boundaries. Scientists believe that these areas represent zones along adjacent plates that have greater frictional resistance and stress.
Figure 4: Distribution of earthquakes with a magnitude less than 5 on the Richter Scale. (Image Source: PhysicalGeography.net)
Figure 5: Distribution of earthquakes with a magnitude greater than 7 on the Richter Scale. (Image Source: PhysicalGeography.net)
The Richter scale magnitude, while the best known, is only one of several measures of the magnitude of an earthquake. The most commonly used are:
- Local magnitude (ML), commonly referred to as "Richter magnitude;"
- Surface-wave magnitude (Ms);
- Body-wave magnitude (Mb); and
- Moment magnitude (Mw).
The first three scales have limited range and applicability and do not satisfactorily measure the size of the largest earthquakes. The moment magnitude (Mw) scale, based on the concept of seismic moment, is uniformly applicable to all sizes of earthquakes but is more difficult to compute than the other types. All magnitude scales should yield approximately the same value for any given earthquake.
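The article does not spell out how moment magnitude is computed, but the standard Hanks-Kanamori relation, Mw = (2/3) log10(M0) - 10.7 with the seismic moment M0 in dyne-cm, illustrates why it is harder to obtain: M0 must first be estimated from fault area, slip, and rock rigidity. A minimal sketch, using that standard relation rather than anything stated above:

public class MomentMagnitude {
    // Standard Hanks-Kanamori relation (not stated in the article above):
    // Mw = (2/3) * log10(M0) - 10.7, with the seismic moment M0 in dyne-cm.
    static double mw(double momentDyneCm) {
        return (2.0 / 3.0) * Math.log10(momentDyneCm) - 10.7;
    }

    public static void main(String[] args) {
        // Example: a seismic moment of 1.1e29 dyne-cm corresponds to Mw of about 8.7.
        System.out.printf("Mw = %.1f%n", mw(1.1e29));
    }
}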
The severity of an earthquake can be expressed in terms of both intensity and magnitude. However, the two terms are quite different, and they are often confused.
Intensity is based on the observed effects of ground shaking on people, buildings, and natural features, and it varies from place to place within the disturbed region depending on the location of the observer with respect to the earthquake epicenter. Magnitude, by contrast, is related to the amount of seismic energy released at the hypocenter of the earthquake.
Although numerous intensity scales have been developed over the last several hundred years to evaluate the effects of earthquakes, the one currently used in the United States is the Modified Mercalli (MM) Intensity Scale. The lower numbers of the intensity scale generally deal with the manner in which the earthquake is felt by people. The higher numbers of the scale are based on observed structural damage. Structural engineers usually contribute information for assigning intensity values of VIII or above.
The following is an abbreviated description of the 12 levels of Modified Mercalli intensity.
I. Not felt except by a very few under especially favorable conditions.
II. Felt only by a few persons at rest, especially on upper floors of buildings. Delicately suspended objects may swing.
III. Felt quite noticeably by persons indoors, especially on upper floors of buildings. Many people do not recognize it as an earthquake. Standing motor cars may rock slightly. Vibration similar to the passing of a truck. Duration estimated.
IV. Felt indoors by many, outdoors by few during the day. At night, some awakened. Dishes, windows, doors disturbed; walls make cracking sound. Sensation like heavy truck striking building. Standing motor cars rocked noticeably.
V. Felt by nearly everyone; many awakened. Some dishes, windows broken. Unstable objects overturned. Pendulum clocks may stop.
VI. Felt by all, many frightened. Some heavy furniture moved; a few instances of fallen plaster. Damage slight.
VII. Damage negligible in buildings of good design and construction; slight to moderate in well-built ordinary structures; considerable damage in poorly built or badly designed structures; some chimneys broken.
VIII. Damage slight in specially designed structures; considerable damage in ordinary substantial buildings with partial collapse. Damage great in poorly built structures. Fall of chimneys, factory stacks, columns, monuments, walls. Heavy furniture overturned.
IX. Damage considerable in specially designed structures; well-designed frame structures thrown out of plumb. Damage great in substantial buildings, with partial collapse. Buildings shifted off foundations.
X. Some well-built wooden structures destroyed; most masonry and frame structures destroyed with foundations. Rails bent.
XI. Few, if any (masonry) structures remain standing. Bridges destroyed. Rails bent greatly.
XII. Damage total. Lines of sight and level are distorted. Objects thrown into the air.
Earthquake Damage and Destruction
Earthquakes are a considerable hazard to humans. They can cause destruction by structurally damaging buildings and dwellings, and through fires, tsunamis, and mass wasting (see Figures 6 to 10). Earthquakes can also take human lives. The amount of damage and loss of life depends on a number of factors. Some of the more important factors are:
- Time of day. Higher losses of life tend to occur on weekdays between 9:00 AM and 4:00 PM. During this time interval many people are in large buildings because of work or school. Large structures are often less safe than smaller homes in an earthquake.
- Magnitude of the earthquake and duration of the event.
- Distance from the earthquake's focus. The strength of the shock waves diminishes with distance from the focus.
- Geology of the area affected and soil type. Some rock types transmit seismic wave energy more readily. Buildings on solid bedrock tend to receive less damage. Unconsolidated rock and sediments have a tendency to increase the amplitude and duration of the seismic waves increasing the potential for damage. Some soil types when saturated become liquefied (Figure 6).
- Type of building construction. Some building materials and designs are more susceptible to earthquake damage (Figure 7).
- Population density. More people often means greater chance of injury and death.
The greatest loss of life because of an earthquake in the 20th century occurred in Tangshan, China in 1976 when an estimated 250,000 people died. In 1556, a large earthquake in the Shanxi Province of China was estimated to have caused the death of about 1,000,000 people.
A common problem associated with earthquakes in urban areas is fire (Figure 8). Shaking and ground displacement often causes the severing of electrical and gas lines leading to the development of many localized fires. Response to this problem is usually not effective because shock waves also rupture pipes carrying water. In the San Francisco earthquake of 1906, almost 90% of the damage to buildings was caused by fire.
In mountainous regions, earthquake-provoked landslides can cause many deaths and severe damage to built structures (Figure 9). The town of Yungay, Peru was buried by a debris flow that was triggered by an earthquake that occurred on May 31, 1970. This disaster engulfed the town in seconds with mud, rock, ice, and water and took the lives of about 20,000 people.
Another consequence of earthquakes is the generation of tsunamis (Figure 10). Tsunamis, popularly but inaccurately called tidal waves, form when an earthquake triggers a sudden movement of the seafloor. This movement creates a wave in the water body which radiates outward in concentric shells. On the open ocean, these waves are usually no higher than one to three meters in height and travel at speeds of about 750 kilometers per hour. Tsunamis become dangerous when they approach land. Frictional interaction of the waves with the ocean floor, as they near shore, causes the waves to slow down and pile into one another. This amalgamation of waves then produces a super wave that can be as tall as 65 meters in height.
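The figures in this paragraph are consistent with the standard shallow-water wave relation c = sqrt(g x d), where d is the water depth. That relation is not stated in the article, so the sketch below should be read as an illustration under that assumption rather than as part of the source.

public class TsunamiSpeed {
    // Shallow-water wave speed c = sqrt(g * depth), converted to km/h.
    static double speedKmh(double depthMeters) {
        return Math.sqrt(9.81 * depthMeters) * 3.6;
    }

    public static void main(String[] args) {
        // About 713 km/h in a 4,000 m deep ocean, consistent with "about 750 km/h":
        System.out.printf("Open ocean (4,000 m): %.0f km/h%n", speedKmh(4000));
        // The same wave slows dramatically as the water shallows near shore:
        System.out.printf("Near shore (10 m): %.0f km/h%n", speedKmh(10));
    }
}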
The US Geological Survey estimates that at least 1,783 deaths worldwide resulted from earthquake activity in 2009. In 2010, the number rose to 226,729 as the result of 222,570 people killed by the January 12, 2010 earthquake in Haiti.
The deadliest earthquake of 2009 was a magnitude 7.5 event that killed approximately 1,117 people in southern Sumatra, Indonesia on Sept. 30, according to the U.S. Geological Survey (USGS) and confirmed by the United Nations Office for Coordination of Humanitarian Affairs (OCHA). However, the number of earthquake-related fatalities in 2009 was far less than the 2008 count of over 88,000. The high number of fatalities in 2008 was primarily due to the devastating magnitude 7.9 earthquake that occurred in Sichuan, China on May 12.
Although unrelated, the Sept. 30 Indonesian earthquake occurred a day after the year’s strongest earthquake, a magnitude 8.1 on Sept. 29 in the Samoa Islands region. Tsunamis generated by that earthquake killed 192 people in American Samoa, Samoa and Tonga. A magnitude 6.3 earthquake hit the medieval city of L’Aquila in central Italy on April 6, killing 295 people.
Overall, earthquakes took the lives of people in 15 countries on four continents during 2009, including Afghanistan, Bhutan, China, Costa Rica, Greece, Indonesia, Italy, Kazakhstan, Honduras, Japan, Malawi, Samoa, South Africa and Tonga, as well as the U.S. territory of American Samoa. Earthquakes injured people in 11 additional countries, including the mainland United States, where a magnitude 4.4 earthquake on May 2 injured one person in the Los Angeles area.
The biggest 2009 earthquake in the 50 United States was in the Aleutian Islands of Alaska. The magnitude 6.5 earthquake occurred in the Fox Islands on Oct. 13. It was felt at the towns of Akutan and Unalaska, but caused no casualties or damage. The greatest earthquake for the year in the contiguous United States was a magnitude 5.2 event on October 2 in the Owens Valley southeast of Lone Pine, California. Because of the sparse population in the epicentral area, this quake caused no damage although it was felt as far away as Merced and Los Angeles, California and Las Vegas, Nevada.
A magnitude 9.1 Sumatra-Andaman Island earthquake and subsequent tsunami on December 26, 2004 killed 227,898 people, which is the fourth largest casualty toll for earthquakes and the largest toll for a tsunami in recorded history. As a consequence of that earthquake, the USGS has significantly improved its earthquake notification and response capabilities. Improvements include the addition of nine real-time seismic stations across the Caribbean basin, a seismically active, tsunami-prone region near the U.S. southern border; implementation of a 24x7 earthquake operations center at the USGS National Earthquake Information Center (NEIC); and development of innovative tools for rapid evaluation of population exposure and damage from potentially damaging earthquakes.
The USGS estimates that several million earthquakes occur throughout the world each year, although most go undetected because they hit remote areas or have very small magnitudes. The USGS NEIC publishes the locations for about 40 earthquakes per day, or about 14,500 annually, using a publication threshold of magnitude 4.5 or greater worldwide or 2.5 or greater within the United States. On average, only 18 of these earthquakes occur at a magnitude of 7.0 or higher each year.
In 2009, 17 earthquakes reached a magnitude of 7.0 or higher, with a single one topping a magnitude of 8.0. These statistics for large magnitude earthquakes are higher than those of 2008, which experienced only 12 earthquakes over magnitude 7.0 and none over 8.0. Factors such as the size of an earthquake, the location and depth of the earthquake relative to population centers, and fragility of buildings, utilities and roads all influence how earthquakes will affect nearby communities.
Table 2. Notable Earthquakes and Their Estimated Magnitude

| Date | Location | Estimated fatalities | Magnitude |
| January 23, 1556 | Shanxi Province, China | ~1,000,000 | - |
| August 17, 1668 | Anatolia, Turkey | - | - |
| November 1, 1755 | Lisbon, Portugal | - | - |
| December 16, 1857 | Naples (Basilicata), Italy | - | - |
| October 27, 1891 | Mino-Owari (Nobi), Japan | - | - |
| June 15, 1896 | Sanriku, Japan | ~27,000 | - |
| April 18, 1906 | San Francisco, California | 3,000 | 7.8 |
| August 17, 1906 | Valparaiso, Chile | - | - |
| December 28, 1908 | Messina, Italy | ~72,000 | - |
| December 16, 1920 | Haiyuan, Gansu Province, China | ~200,000 | - |
| September 1, 1923 | Kanto (Tokyo-Yokohama), Japan | ~143,000 | 7.9 |
| May 22, 1927 | Tsinghai, China | - | - |
| January 13, 1934 | - | - | - |
| December 26, 1939 | Erzincan, Turkey | ~33,000 | 7.8 |
| February 29, 1960 | Agadir, Morocco | - | 5.7 |
| May 22, 1960 | Southern Chile | ~3,000 | 9.5 |
| March 28, 1964 | Prince William Sound, AK | 131 | 9.2 |
| May 31, 1970 | Chimbote (Yungay), Peru | ~70,000 | 7.9 |
| July 27, 1976 | Tangshan, China | ~255,000* | 7.6 |
| September 19, 1985 | Michoacan, Mexico | ~9,500 | 8.0 |
| December 7, 1988 | Spitak, Armenia | ~25,000 | 6.8 |
| August 17, 1999 | Izmit, Turkey | >12,000 | 7.4 |
| January 26, 2001 | Gujarat, India | ~20,000 | 7.7 |
| December 26, 2003 | Bam, Iran | ~31,000 | 6.6 |
| December 26, 2004 | Off west coast of northern Sumatra | 227,898 | 9.1 |
| October 8, 2005 | Kashmir, Pakistan | ~86,000 | 7.6 |
| May 26, 2006 | Java, Indonesia | ~5,700 | 6.3 |
| May 12, 2008 | Eastern Sichuan, China | ~88,000 | 7.9 |
| January 12, 2010 | Near Port-au-Prince, Haiti | 222,570 | 7.0 |
| March 11, 2011 | Pacific Ocean, east of Oshika Peninsula, Japan | - | 9.0 |

* Fatalities in the 1976 Tangshan, China earthquake were estimated as high as 655,000.
Source: Preferred Magnitudes of Selected Significant Earthquakes, USGS, 2010 (with additions for the two most recent major earthquakes, in Haiti and Japan).
The following links provide some more information about earthquakes.
- American Geophysical Union (AGU)
- Animation of P, S & Surface Waves
- Animations of Seismology Fundamentals
- Association of American State Geologists (AASG)
- Association of Bay Area Governments (ABAG)
- California Geological Survey (CGS)
- California Office of Emergency Services (OES)
- California Seismic Safety Commission
- Center for Earthquake Research & Information (CERI)
- Central United States Earthquake Consortium (CUSEC)
- Consortium of Universities for Research in Earthquake Engineering (CUREE)
- COSMOS Virtual Data Center
- CREW - Cascadia Region Earthquake Workgroup
- Earthquake Engineering Research Institute (EERI)
- Earthquake Information for 2009, USGS
- Earthquake Information for 2010, USGS
- Earthquake Monitoring
- Earthquakes - Online University
- Earthquakes by Bruce A. Bolt Online Companion
- Earthquakes Cause over 1700 Deaths in 2009, USGS
- Earth Science Education Activities
- European-Mediterranean Seismological Centre
- FEMA - Federal Emergency Management Agency
- Finite-source Rupture Model Database
- Global Earthquake Explorer
- GSA - Geological Society of America
- Incorporated Research Institutes for Seismology (IRIS)
- International Association of Seismology and Physics of the Earth's Interior (IASPEI)
- International Seismological Centre (ISC)
- John Lahr's Earthquake website
- McConnell, D., D. Steer, C. Knight, K. Owens, and L. Park. 2010. The Good Earth. 2nd Edition. McGraw-Hill, Dubuque, Iowa.
- Mid-America Earthquake Center
- Multi-Disciplinary Center for Earthquake Engineering Research (MCEER)
- National Geophysical Data Center (NGDC) - NOAA
- National Information Centre of Earthquake Engineering (NICEE)
- National Science Foundation (NSF)
- Natural Hazards Center
- Northern California Earthquake Data Center
- Observatories and Research Facilities for EUropean Seismology (ORFEUS)
- Plummer, C., D. Carlson, and L. Hammersle. 2010. Physical Geology. 13th Edition. McGraw-Hill, Dubuque, Iowa.
- Project IDA
- Quake-Catcher Network
- Saint Louis University Earthquake Center
- Seattle Fault Earthquake Scenario
- Seismographs: Keeping Track of Earthquakes
- Seismological Society of America (SSA)
- Seismo-surfing the Internet for Earthquake Data
- Smithsonian Global Volcanism Program
- SOPAC (Scripps Orbit and Permanent Array Center)
- Southern California Earthquake Center (SCEC)
- Tarbuck, E.J., F.K. Lutgens, and D. Tasa. 2009. Earth Science. 12th Edition. Prentice Hall, Upper Saddle River, New Jersey.
- Tectonics Observatory
- Tracing earthquakes: seismology in the classroom
- UPSeis Seismology Questions Answered
- USGS Earthquake Hazards Program, U.S. Geological Survey
- Western States Seismic Policy Council (WSSPC)
- World Data Center System
- World Organization of Volcano Observatories
- World Seismic Safety Initiative (WSSI) | <urn:uuid:99446ec0-7d83-4817-851c-637593492317> | 4.34375 | 4,773 | Knowledge Article | Science & Tech. | 45.658377 | 176 |
Slide 2 of 21
Eta 6-hour 300 mb heights and wind speed (shaded) valid 5/12 18 UTC.
The weather pattern on May 12 featured a large upper-level ridge over the eastern United States. A strong upper jet was located over eastern Canada. Northeast Pennsylvania was far removed from the upper jet, but could be considered to be located at the edge of the right entrance region. No upper-level divergence associated with the jet was located near northeast Pennsylvania; however, the circulation associated with the jet may have acted to increase the low-level southwesterly flow. (The low-level jet associated with this case is shown later.)
Introduction
fox, carnivorous mammal of the dog family, found throughout most of the Northern Hemisphere. It has a pointed face, short legs, long, thick fur, and a tail about one half to two thirds as long as the head and body, depending on the species. Solitary most of the year, foxes do not live in dens except in the breeding season; they sleep concealed in grasses or thickets, their tails curled around them for warmth. During the breeding season a fox pair establishes a den, often in a ground burrow made by another animal, in which the young are raised; the male hunts for the family. The young are on their own after about five months; the adults probably find new mates each season.
Foxes feed on insects, earthworms, small birds and mammals, eggs, carrion, and vegetable matter, especially fruits. Unlike other members of the dog family, which run down their prey, foxes usually hunt by stalking and pouncing. They are known for their raids on poultry but are nonetheless very beneficial to farmers as destroyers of rodents.
Foxes are occasionally preyed upon by larger carnivores, such as wolves and bobcats, as well as by humans and their dogs; birds of prey may capture the young. Despite extensive killing of foxes, most species continue to flourish. In Europe this is due in part to the regulatory laws passed for the benefit of hunters. Mounted foxhunting, with dogs, became popular in the 14th cent. and was later introduced into the Americas; special hunting dogs, called foxhounds, have been bred for this sport. Great Britain banned foxhunting in which the hounds kill the fox in 2005.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
See more Encyclopedia articles on: Vertebrate Zoology | <urn:uuid:b06f1991-fec6-49bf-b55b-75db6d59f18d> | 3.5 | 382 | Knowledge Article | Science & Tech. | 50.583511 | 178 |
GAINESVILLE, Fla. — Joint research between Florida Museum of Natural History and Chinese scientists to discover and interpret the world’s earliest known flowering fossil is the subject of a PBS NOVA documentary, “First Flower,” which debuts at 8 p.m. April 17.
The origin of flowers is one of botany's deepest mysteries, and in the NOVA documentary, Florida Museum paleobotanist David Dilcher guides viewers through segments of the amazing story of the evolution of flowers.
“There’s no doubt about it, flowers are all about sex,” said Dilcher, a graduate research professor and paleobotany curator at the Florida Museum and a member of the National Academy of Sciences.
The search for the world’s first flower drew Dilcher to a remote Chinese lake where colleague Sun Ge of Jilin University in Changchun, China, discovered a 125 million-year-old fossil that scientists believe is the earliest known flower.
Although the fossil lacked the aesthetic petals associated with modern flowers, Dilcher recognized the plant stalk had seeds enclosed within carpels, which are female reproductive structures found in flowers. This led him to conclude the fossil was in fact an early form of a flower. The fossil was named Archaefructus liaoningensis, which means “ancient fruit from Liaoning Province of northeast China.”
Flower production demands an amazing amount of a plant's precious energy, which leads some scientists to question why and how the world's first flowers evolved. Angiosperms, or flowering plants that have male pollen and female ovaries, are thought to have made their first appearance on earth roughly 130 million years ago. Today, they dominate the plant world. Bees, moths, hummingbirds and other insects now facilitate plant reproduction by spreading pollen, but this is the culmination of a complex relationship that evolved over millions of years and that scientists are still decoding.
“Flowering plants were the first advertisers in the world,” Dilcher said. “They put out beautiful colors, colorful patterns, they put out fragrances. And they gave a reward such as nectar or pollen for any insect that would come and visit them.”
Flowers go to elaborate lengths to advertise their sexual organs, the female parts and the male parts, Dilcher said.
“If they could attract these mobile pollinators to visit, crawl around, and feed in flowers, pick up pollen on their legs, pick up pollen on their bodies,” Dilcher said, “and then fly to another flower some distance away, and repeat this process, they could effectively transfer their male genetic material some distance away to another flower.”
Pamela and Doug Soltis, Florida Museum researchers who study plant and flower DNA to better understand evolutionary origins, also are interviewed in “First Flower.” The Soltis’ work addresses evolutionary origins of flowers and flowering plants, plant speciation and the conservation genetics of endangered plant species in Florida. Doug Soltis also is chair of UF’s Department of Botany.
The journal Science featured Sun’s and Dilcher’s fossil flower research on its cover in 1998 and 2002. Sun is a geologist and director of the paleontology and stratigraphy lab at Jilin University. Dilcher holds professorial appointments and teaches at Jilin University and Nanjing University in China, in addition to teaching at the University of Florida.
Writer: DeLene Beeland
Media contact: Paul Ramey, email@example.com
Source: David Dilcher (352) 392-1721 ext. 460
Source: Sun Ge (352) 392-1721 ext. 460; until April 12 | <urn:uuid:908c4167-7d89-45c1-b438-4a4462827361> | 3.71875 | 797 | News (Org.) | Science & Tech. | 39.916481 | 179 |
The ILC promises extraordinary power in the study of the Terascale. The annihilation of an electron and its antiparticle, the positron, allows the understanding of collisions to an unparalleled level of detail and precision. As others have comprehensively documented, the ILC view of the Terascale, complementary to the LHC's perspective, makes the ILC an essential tool for unraveling new phenomena discovered at these extreme energies. It makes the ILC the top priority at Fermilab for a future global facility.
A superconducting ILC cavity
Credit: Fermilab Visual Media Services
The ILC's opportunities for discovery have motivated the global particle physics community to come together in an effort to design the accelerator and its experimental program. The completion of the Reference Design Report in early 2007 and the structuring of a collaborative worldwide R&D program represent successful community efforts. Fermilab has contributed strongly to this effort: the design of the accelerator; the development of superconducting radio-frequency, or SCRF, technology in the U.S.; the design of the physics and experimental program; the site studies necessary for hosting the ILC at or near Fermilab; and the establishment of a test-beam facility for the development of ILC detectors. The ILC and related SCRF efforts at Fermilab make up by far the laboratory's largest future program.
In the next phase of the ILC effort, Fermilab's aim is to be a leader in the global engineering design and in the development of the SCRF technology, steps necessary to reach a decision early in the next decade to build the ILC. Fermilab is building the required infrastructure and test facilities and is coordinating the national efforts in the development of SCRF technology, in collaboration with national and international partners in Europe, Asia and the U.S. To these efforts Fermilab brings strong engineering capability, accelerator physics expertise and technology development skills.
Innovative detectors will be key to exploiting the ILC physics opportunities. In general, an improvement in resolution of both tracking and calorimetric detectors over the present state-of-the-art detectors will allow experimenters to distinguish the signals of new physics from backgrounds much more efficiently. Fermilab has a strong instrumentation development effort in collaboration with laboratories and universities across the world. Just as important for the global ILC effort, Fermilab has developed and will operate a flexible high-energy test beam to provide a variety of particles and energies for testing detector technologies.
A simulation of the decay of a Z + Higgs to four jets in an ILC detector
Credit: Norman Graf
Fermilab's goal is to host the ILC. Geographically and geologically, the site is nearly optimal and could house the central facilities of the ILC, such as damping rings and experimental halls. Two important aspects of Fermilab's activities over the next three years are the study of the site and the design of conventional facilities necessary for the engineering design and working with the neighboring communities on issues associated with hosting the ILC in the region. Fermilab has vigorously collaborated with local residents over the last two years, first with the Community Task Force and currently with the ILC Citizens' Task Force and the Envoy Program. These activities will strengthen over the next three years of engineering design.
Finally, Fermilab is strengthening its engineering capabilities as the laboratory moves toward the design of global accelerators. Unlike the case of the detector community, which is accustomed to building detectors collaboratively across continents, much less collaboration has taken place in the development of global accelerators. The ILC is breaking new ground in this regard, and it is important that Fermilab have the strongest engineering capabilities and systems in place in order to lead in the integration of components produced around the world into a functioning accelerator.
The ILC is key to the future of U.S. particle physics and to Fermilab's future. | <urn:uuid:b7f61ddd-9532-4c4e-be42-c9da7e694d04> | 2.703125 | 823 | About (Org.) | Science & Tech. | 21.442028 | 180 |
In 2006, high sea temperatures caused severe coral bleaching in the Keppel Islands, in the southern part of the reef — the largest coral reef system in the world. The damaged reefs were then covered by a single species of seaweed which threatened to suffocate the coral and cause further loss.
A "lucky combination" of rare circumstances has meant the reef has been able to make a recovery. Abundant corals have reestablished themselves in a single year, say the researchers from the University of Queensland's Centre for Marine Studies and the ARC Centre of Excellence for Coral Reef Studies (CoECRS).
"Three factors were critical," said Dr Guillermo Diaz-Pulido. "The first was exceptionally high regrowth of fragments of surviving coral tissue. The second was an unusual seasonal dieback in the seaweeds, and the third was the presence of a highly competitive coral species, which was able to outgrow the seaweed."
Coral bleaching occurs in higher sea temperatures when the coral lose the symbiotic algae they need to survive. The reefs then lose their colour and become more susceptible to death from starvation or disease.
The findings are important as it is extremely rare to see reports of reefs that bounce back from mass coral bleaching or other human impacts in less than a decade or two, the scientists said. The study is published in the online journal PLoS ONE.
"The exceptional aspect was that corals recovered by rapidly regrowing from surviving tissue," said Dr Sophie Dove, also from CoECRS and The University of Queensland.
"Recovery of corals is usually thought to depend on sexual reproduction and the settlement and growth of new corals arriving from other reefs. This study demonstrates that for fast-growing coral species asexual reproduction is a vital component of reef resilience."
Last year, a major global study found that coral reefs did have the ability to recover after major bleaching events, such as the one caused by the El Niño in 1998.
David Obura, the chairman of the International Union for Conservation of Nature climate change and coral reefs working group involved with the report, said: "Ten years after the world's biggest coral bleaching event, we know that reefs can recover – given the chance. Unfortunately, impacts on the scale of 1998 will reoccur in the near future, and there's no time to lose if we want to give reefs and people a chance to suffer as little as possible."
Coral reefs are crucial to the livelihoods of millions of coastal dwellers around the world and contain a huge range of biodiversity. The UN's Millennium Ecosystem Assessment says reefs are worth about $30bn annually to the global economy through tourism, fisheries and coastal protection.
But the ecosystems are under threat worldwide from overfishing, coastal development and runoff from the land, and in some areas, tourism impacts. Natural disasters such as the earthquake that triggered the Indian Ocean tsunami in 2004 have also caused reef loss.
Climate change poses the biggest threat to reefs however, as emissions of carbon dioxide make seawater increasingly acidic.
Last year a study showed that one-fifth of the world's coral reefs have died or been destroyed and the remainder are increasingly vulnerable to the effects of climate change.
The Global Coral Reef Monitoring Network says many surviving reefs could be lost over the coming decades as CO2 emissions continue to increase. | <urn:uuid:5e2f2baf-ab5a-40e4-ad86-116c02b20572> | 4.03125 | 683 | News Article | Science & Tech. | 37.717154 | 181 |
While working with regular expressions, you need quantifiers to specify the number of occurrences to match against. The three most commonly used quantifiers are ?, + and *.
? means 0 or 1 occurrence
+ means 1 or more occurrences
* means 0 or more occurrences
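A short, self-contained demonstration of all three quantifiers using java.util.regex; the class name is ours and chosen only for illustration. The isolated pattern strings below show the same idea:

import java.util.regex.Pattern;

public class QuantifierDemo {
    public static void main(String[] args) {
        System.out.println(Pattern.matches("Java?", "Jav"));    // true: zero 'a'
        System.out.println(Pattern.matches("Java?", "Java"));   // true: one 'a'
        System.out.println(Pattern.matches("Java+", "Jav"));    // false: needs at least one 'a'
        System.out.println(Pattern.matches("Java+", "Javaaa")); // true: three 'a's
        System.out.println(Pattern.matches("Java*", "Jav"));    // true: zero or more 'a's
    }
}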
String patternStr = "Java?"; // last a can have zero or 1 occurrence
String patternStr = "Java+"; // last a can have one or more occurrence
String patternStr = "Java*"; // last a can have zero or more occurrence | <urn:uuid:17b99022-dd89-4ec0-9967-482c14526389> | 2.671875 | 105 | Documentation | Software Dev. | 47.596429 | 182 |
Frilled Sharks, Chlamydoselachus anguineus
Taxonomy: Animalia > Chordata > Elasmobranchii > Hexanchiformes > Chlamydoselachidae > Chlamydoselachus anguineus
Description & Behavior
Frilled sharks, Chlamydoselachus anguineus (Garman, 1884), aka frill sharks, frill-gilled sharks, Greenland sharks, scaffold sharks, and silk sharks are members of the most ancient frill and cow sharks order, Hexanchiformes. Hexanchiform sharks have a single dorsal fin, either six or seven gill slits (versus the 5 found in all other existing sharks), and no nictitating membranes (protective third eyelids). The frilled shark, Chlamydoselachus anguineus, is currently one of only two known species of frilled sharks. The southern African frill shark, C. africana, was recently discovered (2009) off southern Angola, Namibia and South Africa. They are both very different in other ways from the cow sharks and are likely to be moved to their own order Chlamydoselachiformes in the near future.
Frilled sharks, Chlamydoselachus anguineus, are deepwater eel-like sharks that reach lengths up to 2 m and are thought to reach sexual maturity when they are 1.35 to 1.5 m long. They are dark brown or gray in color above, sometimes lighter below, and have six pairs of "frilly" gill slits where the first gill slit is joined under their jaws forming a sort of collar. Frilled sharks' heads are broad and flattened with short, rounded snouts. Their nostrils are vertical slits, separated into incurrent and excurrent openings by a leading flap of skin. The moderately large eyes are horizontally oval (like a cat's).
Their mouth is located at the leading edge of their snout (terminal) rather than underneath like most sharks, and they have small tricuspid teeth in both jaws. Their rows of teeth are rather widely spaced, numbering 19–28 teeth in their upper jaws and 21–29 teeth in their lower jaws. Each tooth is small, with three slender, needle-like cusps alternating with two cusplets.
They have a small lobe-like dorsal fin set far back over their pelvic fins with an anal fin that is larger than their dorsal fin. Their pectoral fins are small and paddle-shaped and their very long caudal fin (tail fin) has a small ventral lobe and without a subterminal notch.
Frilled sharks also have a pair of thick skin folds of unknown function (possibly to help allow for expansion when digesting larger prey) running along their bellies, separated by a groove, and their midsections are relatively longer in females than in males.
The frilled shark differs from its southern African relative, C. africana, in having more vertebrae (160–171 vs. 147) and more turns in the spiral valve intestine (35–49 versus 26–28), as well as in various proportional measurements such as a longer head and shorter gill slits. The maximum known length is 1.7 m for males and 2.0 m for females.
Frilled sharks are highly specialized for life in the deep sea with reduced, poorly-calcified skeletons and enormous livers filled with low-density lipids, which allows them to maintain their position in water with little effort. They are also one of the few sharks with an "open" lateral line, in which the mechanoreceptive hair cells are positioned in grooves that are directly exposed to the surrounding seawater. This configuration is thought to be the most primitive in sharks and may enhance their sensitivity to minute movements of prey in their proximity.
Many frilled sharks are found with the tips of their tails missing, probably from predatory attacks by other shark species.
These sharks, or a proposed giant relative, have been suggested as a source for reports of sea serpents.
World Range & Habitat
Frilled sharks, Chlamydoselachus anguineus, are an uncommon "primitive" shark species typically found near the sea floor in waters over outer continental and island (insular) shelves and upper slopes, usually at depths between 120 and 1,280 m but up to 1,570 m and occasionally even at the surface.
Frilled sharks are thought to have a wide though patchy distribution (74°N - 58°S, 169°W - 180°E) in the Atlantic and Pacific Oceans. In Suruga Bay, Japan they are most common at depths between 50 m and 200 m.
In the western Indian Ocean they are found off South Africa as C. africana. In the western Pacific, frilled sharks are known to live off Japan and south to New Zealand, New South Wales and Tasmania in Australia. In the eastern/central Pacific they have been observed off Hawaii, southern California to northern Chile. Frilled sharks have also been observed in the eastern Atlantic from waters off northern Norway to northern Namibia, and possibly off the eastern Cape of Good Hope in South Africa.
In the central Atlantic, they have been caught at several locations along the Mid-Atlantic Ridge, from north of the Azores to the Rio Grande Rise off southern Brazil, as well as over the Vavilov Ridge off West Africa. In the western Atlantic, it has been reported from off New England, Georgia, and Suriname.
Feeding Behavior (Ecology)
Frilled sharks, Chlamydoselachus anguineus, feed on cephalopods (mainly squid), other sharks, and bony fishes. Feeding behavior has not yet been observed in this weak-swimming species, though they are thought to capture active, fast-moving squid by taking advantage of injured squid or those that are exhausted and dying after spawning. Alternatively, they may surprise their prey by curving their body like a spring, bracing themselves with rear positioned fins, and launching quick strikes forward like a snake. They may also be able to close their gill slits, creating negative internal pressure to suck prey quickly into their mouth. They have many small, sharp, rear-pointing (recurved) teeth that function much like squid jigs and could easily snag the body or tentacles of a squid, particularly as they are rotated outwards when the jaws are protruded. Observations of captive frilled sharks swimming with their mouths open might also suggest that the small teeth, light against their dark mouths, may even fool squid into attacking and entangling themselves.
Using their long, extremely flexible jaws they should be able to swallow large prey (up to half its size!) whole, while their many rows of needle-like teeth would make escape essentially futile. Examining the length and articulation of their jaws appears to show that frilled sharks cannot deliver as strong a bite as more conventionally built sharks. Most captured individuals have been found with no or barely identifiable stomach contents, suggesting that they have a fast digestion rate and/or long intervals between feedings. One 1.6 m long individual, caught off Japan, was found to have swallowed an entire 590 g Japanese catshark, Apristurus japonicus. Squid comprise some 60% of the diet of these sharks in Suruga Bay and this includes not only slow-moving, deep-dwelling squid such as Chiroteuthis and Histioteuthis, but also relatively large, powerful swimmers of the open ocean such as Onychoteuthis, Sthenoteuthis, and Todarodes.
Frilled sharks, Chlamydoselachus anguineus, are aplacental viviparous (aka ovoviviparity) where the embryos emerge from their egg capsules inside their mother's uterus and are nourished by their yolk until birth. Frilled sharks' gestation period may be as long as three and a half years, the longest of any vertebrate. Between 2 and 15 young are born at a time (average is 6) measuring 40–60 cm long, and there appears to be no distinct breeding season (which is expected as these sharks inhabits depths at which there is little to no seasonal influence). Male frill sharks attain sexual maturity at 1.0–1.2 m long and females at 1.3–1.5 m. A possible mating aggregation of 15 male and 19 female frilled sharks was recorded over a seamount on the Mid-Atlantic Ridge.
Conservation Status & Comments
Frilled sharks, Chlamydoselachus anguineus, are listed as Near Threatened (NT) by the IUCN Red List: "A generally rare to uncommon deepwater species, with a few localities where it is taken more commonly as bycatch in several fisheries. Not an important target species, but a regular though small bycatch in many bottom trawl, midwater trawl, deep-set longline, and deep-set gillnet fisheries. As bycatch, this species is variously either used for meat, fishmeal, or discarded. Occasionally kept in aquaria (Japan). There is some concern that expansion of deepwater fisheries effort (geographically and in depth range) will increase the levels of bycatch. Although little is known of its life history, this deepwater species is likely to have very little resilience to depletion as a result of even non-targeted exploitation. It is classified as Near Threatened due to concern that it may meet the Vulnerable A2d+A3d+4d criteria."
On August 27, 2004, the first observation of this species in its natural habitat was made by the ROV Johnson-Sea-Link II, on the Blake Plateau off the southeastern United States (see the first photo above). On January 21, 2007, a Japanese fisherman discovered a 1.6 m long female alive at the surface, perhaps there because of illness or weakness from the warm water. It was brought to Awashima Marine Park in Shizuoka, where it died after a few hours (see the video above). Garman, and numerous authors since, have advanced the frilled shark as an explanation for sea serpent sightings. Because of the shark's modest size, some cryptozoologists have posited the existence of a giant relative, particularly as larger Chlamydoselachus species are known from the fossil record.
References & Further Research
Research Chlamydoselachus anguineus » Barcode of Life ~ BioOne ~ Biodiversity Heritage Library ~ CITES ~ Cornell Macaulay Library [audio / video] ~ Encyclopedia of Life (EOL) ~ ESA Online Journals ~ FishBase ~ Florida Museum of Natural History Ichthyology Department ~ GBIF ~ Google Scholar ~ ITIS ~ IUCN RedList (Threatened Status) ~ Marine Species Identification Portal ~ NCBI (PubMed, GenBank, etc.) ~ Ocean Biogeographic Information System ~ PLOS ~ SCIRIS ~ SIRIS ~ Tree of Life Web Project ~ UNEP-WCMC Species Database ~ WoRMS
| <urn:uuid:e5679202-a3e8-4aa9-8403-181d8b06bc4f> | 3.484375 | 2,451 | Knowledge Article | Science & Tech. | 41.064659 | 183 |
Here is a fun one,
There was a man who greatly enjoyed golf. He could also make a perfectly consistent swing. So out of curiosity he decided to challenge a mathematician. First he brought the mathematician to a golf field, with his golf club, a tee, and a ball. He set the ball on the tee, all ready to swing, and then asked the mathematician, "Write me a formula where z is the total distance the ball will travel, assuming there is no wind, the ground is level, the ball starts one inch off the ground, and I hit it with x force at y angle, all before I hit the ball." He then swung his club, hit the ball, and much to his surprise the mathematician succeeded. Not only did the mathematician have a flawless formula, but he also had the shortest formula he could have possibly written. What was his formula?
Last edited by TheTick (2013-02-28 15:50:15) | <urn:uuid:070e6cdd-a083-43f2-9577-27e03e835620> | 2.765625 | 201 | Comment Section | Science & Tech. | 65.922443 | 184 |
About this product:
This graphic shows an approximate representation of coastal areas under a hurricane warning (red), hurricane watch (pink), tropical storm warning (blue) and tropical storm watch (yellow). The orange circle indicates the current position of the center of the tropical cyclone. The black line and dots show the National Hurricane Center (NHC) forecast track of the center at the times indicated. The dot indicating the forecast center location will be black if the cyclone is forecast to be tropical and will be white with a black outline if the cyclone is forecast to be extratropical. If only an L is displayed, then the system is forecast to be a remnant low. The letter inside the dot indicates the NHC's forecast intensity for that time:
D: Tropical Depression – wind speed less than 39 MPH
S: Tropical Storm – wind speed between 39 MPH and 73 MPH
H: Hurricane – wind speed between 74 MPH and 110 MPH
M: Major Hurricane – wind speed greater than 110 MPH
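Those categories translate directly into a small lookup. The sketch below simply mirrors the list above; the class and method names are ours, for illustration only.

public class IntensityLabel {
    // Maps a forecast maximum sustained wind speed (MPH) to the letter
    // displayed inside the forecast dot, per the categories listed above.
    static char label(double mph) {
        if (mph < 39) return 'D';        // Tropical Depression
        if (mph <= 73) return 'S';       // Tropical Storm
        if (mph <= 110) return 'H';      // Hurricane
        return 'M';                      // Major Hurricane
    }

    public static void main(String[] args) {
        System.out.println(label(35));   // D
        System.out.println(label(70));   // S
        System.out.println(label(100));  // H
        System.out.println(label(140));  // M
    }
}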
NHC tropical cyclone forecast tracks can be in error. This forecast uncertainty is conveyed by the track forecast "cone", the solid white and stippled white areas in the graphic. The solid white area depicts the track forecast uncertainty for days 1-3 of the forecast, while the stippled area depicts the uncertainty on days 4-5. Historical data indicate that the entire 5-day path of the center of the tropical cyclone will remain within the cone about 60-70% of the time. To form the cone, a set of imaginary circles are placed along the forecast track at the 12, 24, 36, 48, 72, 96, and 120 h positions, where the size of each circle is set so that it encloses 67% of the previous five years' official forecast errors. The cone is then formed by smoothly connecting the area swept out by the set of circles.
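The circle-sizing rule described above can be expressed as a short sketch. Only the 67% rule itself comes from the text; the error samples below are invented placeholders, not real NHC statistics.

import java.util.Arrays;

public class ConeRadius {
    // Radius that encloses 67% of historical track errors at one forecast hour.
    static double radius67(double[] errorsNauticalMiles) {
        double[] sorted = errorsNauticalMiles.clone();
        Arrays.sort(sorted);
        int index = (int) Math.ceil(0.67 * sorted.length) - 1;
        return sorted[index];
    }

    public static void main(String[] args) {
        // Placeholder 48-hour track errors from a hypothetical five-year sample:
        double[] errors48h = {35, 50, 62, 70, 81, 95, 110, 130, 150, 210};
        System.out.printf("48 h circle radius: %.0f n mi%n", radius67(errors48h));
    }
}

Repeating this at each forecast hour gives the set of circles whose swept area, smoothly connected, forms the cone.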
There is also uncertainty in the NHC intensity forecasts. The Maximum 1-minute Wind Speed Probability Table provides intensity forecast and uncertainty information.
It is also important to realize that a tropical cyclone is not a point. Its effects can span many hundreds of miles from the center. The area experiencing hurricane force (one-minute average wind speeds of at least 74 mph) and tropical storm force (one-minute average wind speeds of 39-73 mph) winds can extend well beyond the white areas shown enclosing the most likely track area of the center. The distribution of hurricane and tropical storm force winds in this tropical cyclone can be seen in the Wind History graphic linked above.
Considering the combined forecast uncertainties in track, intensity, and size, the chances that any particular location will experience winds of 34 kt (tropical storm force), 50 kt, or 64 kt (hurricane force) from this tropical cyclone are presented in tabular form for selected locations and forecast positions. This information is also presented in graphical form for the 34 kt, 50 kt, and 64 kt thresholds.
Note: A detailed definition of the NHC track forecast cone is also available. | <urn:uuid:4ce27885-3064-4fd9-bf45-8ddd0080f661> | 3.03125 | 628 | Knowledge Article | Science & Tech. | 44.047651 | 185 |
New on the IBM developerWorks, there's an article looking at using the Scilab software integrated into PHP to perform some more complicated mathematical processing.
Scripting languages like Ruby, Python, and PHP power modern-day server-side Web development. These languages are great because you can easily and rapidly build Web sites. However, their downfall is their inefficiency with complicated algorithms, such as those found in mathematics and the sciences. [...] In this article, we'll investigate one particular way to merge the power of a particular bit of scientific software - Scilab - with the ease of development and Web-friendliness of a server-side language: PHP.
Your script calls the Scilab tool from the command line via something like exec and parses the output to send the results back to the viewer. The authors show how to create two pages: one with form elements that let the user interact with the script, and one that helps you generate a graph based on some results.
Gamma ray bursts are believed to be the most energetic phenomena in the universe. In one second they can emit more than 100 times the energy that the sun does throughout its entire 10 billion year life. This energy output is short lived, however, and within days the burst has faded forever beyond the reach of our telescopes. Despite some 3000 bursts having been detected through their gamma ray emission, only 30 have been seen with ground-based telescopes, and only one of these has been observed within an hour.
In an ambitious project to detect the gamma ray bursts in the crucial first minute of their occurrence, the School of Physics has entered a collaboration with the University of Michigan, Los Alamos National Laboratories, and Lawrence Livermore National Laboratory, to place a robotic telescope, ROTSE-III, at Siding Spring Observatory. The telescope is triggered into action by a signal relayed through the Internet from an earth-orbiting satellite. The specially designed mounting for ROTSE-III allows it to point to any position in the sky and take an image within 5-10 seconds. The images are then automatically analysed for any new or rapidly varying sources, and this information is made available to other observatories throughout the world within minutes. The precise positions provided by ROTSE-III are essential to allow the world's largest telescopes to observe the gamma ray bursts. Site preparation for the new telescope occurred in March 2001. The enclosure and weather station were installed in April 2001, with the telescope
itself to be delivered in mid-2002. | <urn:uuid:41af5c95-84cb-4b31-990a-6fbb28055062> | 3.875 | 327 | Knowledge Article | Science & Tech. | 29.343642 | 187 |
During this tutorial you will be asked to perform calculations involving trigonometric functions. You will need a calculator to proceed.
The purpose of this tutorial is to review with you the elementary properties of the trigonometric functions. Facility with this subject is essential to success in all branches of science, and you are strongly urged to review and practice the concepts presented here until they are mastered. Let us consider the right-angle triangle shown in Panel 1. The angle at C is a right angle and the angle A we will call θ. The lengths of the sides of the triangle we will denote as p, q and r. From your elementary geometry, you know several things about this triangle. For example, you know the Pythagorean relation, q² = p² + r². That is, the square of the length of the side opposite the right angle, which we call the hypotenuse, is equal to the sum of the squares of the lengths of the other two sides.
We know other things. For example, we know that if the lengths of the three sides of any triangle p, q and r are specified, then the whole triangle is determined, angles included. If you think about this for a moment, you will see it is correct. If I gave you three sticks of fixed length and told you to lay them down in a triangle, there's only one triangle which you could make. What we would like to have is a way of relating the angles in the triangle, say θ, to the lengths of the sides.
It turns out that there's no simple analytic way to do this. Even though the triangle is specified by the lengths of the three sides, there is not a simple formula that will allow you to calculate the angle θ. We must specify it in some new way.
To do this, we define three ratios of the sides of the triangle.
One ratio we call the sine of theta, written sin(θ), and it is defined as the ratio of the side opposite θ to the hypotenuse, that is r/q.
The cosine of θ, written cos(θ), is the side adjacent to θ over the hypotenuse, that is, p/q.
This is really enough, but because it simplifies our mathematics later on, we define the tangent of θ, written tan(θ), as the ratio of the opposite to the adjacent sides, that is r/p. This is not an independent definition since you can readily see that the tangent of θ is equal to the sine of θ divided by the cosine of θ. Verify for yourself that this is correct.
All scientific calculators provide this information. The first thing to ensure is that your calculator is set to the angular measure that you want. Angles are usually measured in either degrees or radians (see tutorial on DIMENSIONAL ANALYSIS). The angle 2º is a much different angle than 2 radians since 180º = π radians = 3.1416... radians. Make sure that your calculator is set to degrees.
Now suppose that we want the sine of 24º. Simply press 24 followed by the [sin] key and the display should show the value 0.4067. Therefore, the sine of 24º is 0.4067. That is, in a triangle like Panel 1 where θ = 24º, the ratio of the sides r to q is 0.4067. Next set your calculator to radians and find the sine of 0.42 radians. To do this, enter 0.42 followed by the [sin] key. You should obtain a value of 0.4078. This is nearly the same value as you obtained for the sine of 24º. Using the relation above you should confirm that 24º is close to 0.42 radians.
Obviously, using your calculator to find values of sines is very simple. Now find the sine of 42º 24 minutes. The sine of 42º 24 minutes is 0.6743. Did you get this result? If not, remember that 24 minutes corresponds to 24/60 or 0.4º. The total angle is then 42.4º.
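If you would rather check these values in code than on a hand calculator, the same calculations look like this in Python (the math module works in radians, so degrees must be converted first):

import math

print(math.sin(math.radians(24)))           # 0.4067..., the sine of 24 degrees
print(math.sin(0.42))                       # 0.4078..., the sine of 0.42 radians
print(math.sin(math.radians(42 + 24/60)))   # 42 degrees 24 minutes = 42.4 degrees -> 0.6743...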
The determination of cosines and tangents on your calculator is similar. It is now possible for us to solve simple problems concerning triangles. For example, in Panel 2, the length of the hypotenuse is 3 cm and the angle θ is 24º. What is the length of the opposite side r? The sine of 24º, as we saw, is 0.4067 and it is also, by definition, r/3. So, sine of 24º = 0.4067 = r/3, and therefore, r = 3 x 0.4067 = 1.22 cm.
Conversely, suppose you knew that the opposite side was 2 cm long and the hypotenuse was 3 cm long, as in Panel 3; what is the angle θ? First determine the sine of θ. You should find that the sine of θ is 2/3, which equals 0.6667. Now we need to determine what angle has 0.6667 as its sine.
If you want your answer to be in degrees, be sure that your calculator is set to degrees. Then enter 0.6667 followed by the [INV] key and then the [sin] key. You should obtain a value of 41.8º. If your calculator doesn't have an [INV] key, it probably has a [2ndF] key and the inverse sine can be found using it.
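In code the inverse sine is usually called asin; here is a quick Python check of the same result:

import math

ratio = 2 / 3
theta = math.degrees(math.asin(ratio))   # inverse sine, converted back to degrees
print(round(theta, 1))                   # 41.8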
One use of these trigonometric functions which is very important is the calculation of components of vectors. In Panel 4 is shown a vector OA in an xy reference frame. We would like to find the y component of this vector. That is, the projection OB of the vector on the y axis. Obviously, OB = CA and CA/OA = sin(θ), so CA = OA sin(θ). Similarly, the x-component of OA is OC. And OC/OA = cos(θ) so OC = OA cos(θ).
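Here is a short numerical illustration of that decomposition; the length and angle are made-up values rather than the ones in the panel:

import math

OA = 5.0                    # length of the vector
theta = math.radians(30.0)  # angle between OA and the x axis
OC = OA * math.cos(theta)   # x component (projection on the x axis)
OB = OA * math.sin(theta)   # y component (projection on the y axis)
print(OC, OB)               # about 4.33 and 2.5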
There are many relations among the trigonometric functions which are important, but one in particular you will find used quite often. Panel 1 has been repeated as Panel 5 for you. Let us look at the sum cos²(θ) + sin²(θ). From the figure, this is (p/q)² + (r/q)², which equals (p² + r²)/q². The Pythagorean theorem tells us that p² + r² = q², so we have (p² + r²)/q² = q²/q² = 1. Therefore, we have cos²(θ) + sin²(θ) = 1.
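A quick numerical check of this identity, using any angle you like:

import math

theta = math.radians(24)
print(math.cos(theta)**2 + math.sin(theta)**2)   # 1.0, up to rounding error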
Our discussion so far has been limited to angles between 0 and 90º. One can, using the calculator, find the sine of larger angles (e.g. 140º) or negative angles (e.g. -32º) directly. Sometimes, however, it is useful to find the corresponding angle between 0 and 90º. Panel 6 will help us here.
In this xy reference frame, the angle θ is clearly between 90º and 180º, and clearly, the angle a, which is 180º - θ (a is marked with a double arc), can be dealt with. In this case, we say that the magnitudes of the sine, cosine, and tangent of θ are those of the supplement a, and we only have to examine whether or not they are positive or negative.
For example, what is the sine, cosine and tangent of 140º? The supplement is 180º - 140º = 40º. Find the sine, the cosine and the tangent of 40º. | <urn:uuid:00f865ac-a066-4877-8d69-479bd1350ad2> | 4.0625 | 1,681 | Tutorial | Science & Tech. | 79.495224 | 188 |
May20-06, 05:24 PM (#1)
Stuck on couple related rates problems..
1. A ship with a long anchor chain is anchored in 11 fathoms of water. The anchor chain is being wound in at a rate of 10 fathoms/minute, causing the ship to move toward the spot directly above the anchor resting on the seabed. The hawsehole (the point of contact between ship and chain) is located 1 fathom above the water line. At what speed is the ship moving when there are exactly 13 fathoms of chain still out?
For this problem I started with this drawing.. http://img.photobucket.com/albums/v4...n/untitled.jpg
And then from there, I had no idea where to go... the hawsehole being 1 fathom above the water really gets to me, perhaps making the above drawing void. Another thing I don't understand is that it says it's anchored in 11 fathoms of water.. how could the question be asking what speed the boat would be moving if it were at 13 fathoms?
2. A ladder 41 feet long was leaning against a vertical wall and begins to slip. Its top slides down the wall while its bottom moves along the level ground at a constant speed of 4 ft/sec. How fast is the top of the ladder moving when it is 9 feet above the ground?
For this one.. I didn't even know what to do.. of course I drew a triangle, hypotenuse of 41 and the vertical side of 9 feet.. and then.......?
Mainly, I think problems such as these are really easy, but I have a really hard time picturing the problem or drawing it out. I don't know which numbers apply to dx/dt and dy/dt..
May20-06, 05:48 PM (#2)
And they are asking what the speed is when there are 13 fathoms of *chain* still out, which is the length of the hypotenuse on your triangle. Of course this length will be larger than or equal to 12 fathoms (it will be equal to 12 fathoms when the boat is right above the anchor)
If we call "L" the length of the hypotenuse, then what you want is to write dx/dt in terms of dL/dt (which is the number they give you). All you have to do is to write an expression relating x and L (and other known values), isolate x in terms of those constants and L, and differentiate both sides with respect to t. You will get dx/dt = an expression in terms of constants, L and dL/dt.
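For reference, here is a quick numeric check of that setup (this assumes the chain runs in a straight line from the hawsehole down to the anchor, so the fixed vertical leg is 11 + 1 = 12 fathoms):

import math

# x**2 + 12**2 = L**2, so differentiating both sides gives dx/dt = (L/x) * dL/dt.
depth = 12.0     # fathoms: 11 of water plus 1 up to the hawsehole
L = 13.0         # fathoms of chain still out
dL_dt = -10.0    # fathoms per minute (the chain is being wound in)

x = math.sqrt(L**2 - depth**2)    # horizontal distance: 5 fathoms
dx_dt = (L / x) * dL_dt           # -26 fathoms per minute
print(abs(dx_dt))                 # the ship moves at 26 fathoms per minute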
| <urn:uuid:db125f45-e18b-420f-93eb-28e0f5ee7577> | 2.71875 | 677 | Comment Section | Science & Tech. | 82.256139 | 189 |
Let's Talk About: Cosmic collisions
It has been almost 100 years since Edwin Hubble measured the universe beyond the Milky Way Galaxy. Today, astronomers believe that as many as 100 billion other galaxies are sharing the cosmos. Most of these cosmic islands are classified by shape as either spiral or elliptical, but stargazing scientists have discovered galaxies that don't quite fit these molds.
Common to this "irregular" category are galaxies that interact with other galaxies. These gravitational interactions are often referred to as mergers, and their existence invites the question: Is the Milky Way collision-prone? To evaluate the probability, look to the Andromeda Galaxy. Located more than 2.5 million light-years away, Andromeda appears as a small fuzzy patch in the sky. However, there is nothing miniature about it. Similar to the shape (spiral), size and mass of the Milky Way, Andromeda is home to a trillion other stars.
Astronomers have known for decades that our galactic neighbor is rapidly closing in on us -- at approximately 250,000 miles per hour. They know this because of blueshift, a measured decrease in electromagnetic wavelength caused by the motion of a light-emitting source, in this case Andromeda, as it moves closer to the observer.
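To get a feel for how small that blueshift is, here is a rough back-of-envelope calculation; the numbers are illustrative, it uses the non-relativistic Doppler approximation, and the H-alpha line is chosen only as a familiar example.

c = 299_792_458.0         # speed of light, m/s
v = 250_000 * 0.44704     # 250,000 mph converted to m/s (about 1.1e5 m/s)

rest_wavelength_nm = 656.28            # H-alpha emission line
shift_nm = rest_wavelength_nm * v / c  # wavelength decrease: delta-lambda = lambda * v/c
print(f"v/c = {v/c:.2e}; H-alpha appears about {shift_nm:.2f} nm bluer")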
Recently, data collected from the Hubble Space Telescope has allowed astronomers to predict a merger with certainty, in 4 billion years. Our sun will still be shining, and Earth will most likely survive the impact. Reason being, galaxies, although single units of stars gravitationally tied together, are mostly gigantic voids. One can compare a galaxy-on-galaxy collision to the pouring of one glass of water into another. The end result is a larger collection of water, or in the case of a cosmic collision, a larger galaxy. Future Earth inhabitants, billions of years from now, could look up and observe only small portions of such an event because it will take 2 billion years for these cosmic islands to become one.
First Published November 29, 2012 12:00 am | <urn:uuid:ebb1ace8-11cc-4b0f-87f2-0f8f23923491> | 3.9375 | 415 | Truncated | Science & Tech. | 38.119837 | 190 |
Last July (2012), I heard from a colleague working at the edge of the Greenland ice sheet, and from another colleague working up at the Summit. Both were independently writing to report the exceptional conditions they were witnessing. The first was that the bridge over the Watson River by the town of Kangerlussuaq, on the west coast of Greenland, was being breached by the high volumes of meltwater coming down from the ice sheet. The second was that there was a new melt layer forming at the highest point of the ice sheet, where it very rarely melts.
A front loader being swept off a bridge into the Watson River, Kangerlussuaq, Greenland, in July 2012. Fortunately, nobody was in it at the time. Photo: K. Choquette
I’ve been remiss in not writing about these observations until now. I’m prompted to do so by the publication in Nature today (January 23, 2013) of another new finding about Greenland melt. This paper isn’t about the modern climate, but about the climate of the last interglacial period. It has relevance to the modern situation though, a point to which I’ll return at the end of this post. | <urn:uuid:c8dad88b-1cd0-43ad-8153-71e09064a07e> | 2.78125 | 251 | Personal Blog | Science & Tech. | 59.101364 | 191 |
Adopting a New Flight Plan Whooping Crane Migration Route Shifted West into Safer Air Space
By LEN WELLS Courier & Press correspondent (618) 842-2159 or firstname.lastname@example.org
The route of the annual 1,250-mile migration of endangered whooping crane juveniles, led by an ultralight aircraft, has been shifted this fall to a more westerly route because of concerns about pilot and bird safety.
The route, from the Necedah National Wildlife Refuge in central Wisconsin to a closed area of the Chassahowitzka National Wildlife Refuge on the west coast of Florida, will bring the birds through parts of the Tri-State.
It will take the birds the entire length of Illinois and across Western Kentucky, with overnight stops in Wayne County, Ill., and Union County, Ky.
“The route was shifted west because the easterly route was pretty scary,” said Liz Condie, director of communications for Operation Migration, the group that works to ensure the birds’ survival.
“Going over the Cumberland Ridge, there was no place to set down to retrieve a bird if there had been a problem,” she said.
Officials hope, too, for better weather along the westerly route by picking up more favorable winds.
“For the safety of the birds, we cannot divulge the exact location of each stopover other than down to the county level,” Condie said. “At each stop, the birds will be housed overnight in portable pens to protect them from predators and to keep them far away from human contact.”
While the stopover locations are kept secret, Operation Migration officials try to schedule gathering sites for local residents to catch a glimpse of the birds as they lift off to continue their southerly trek.
“A few days before the scheduled stopover, we try to alert the local residents of where they can congregate to watch a flyover,” Condie said.
Because of fluctuating weather conditions, those interested in tracking the birds should check Operation Migration’s Web site at www.operationmigration.org for a more specific date and time.
The whooping crane chicks that take part in the reintroduction project are hatched at the U.S. Geological Survey’s Patuxent Wildlife Research Center in Laurel, Md. There, imprinting begins with the chicks still inside their eggs being exposed to ultralight aircraft sounds. Once hatched, the young chicks are reared in total isolation from humans.
To ensure the impressionable cranes remain wild, each handler and pilot wears a crane puppet on one arm that can dispense food, or by example, show the young chicks how to forage as would their real mother.
At 45 days of age, the young birds are transported by air, in individual containers, to the reintroduction area at the Necedah National Wildlife Refuge in Wisconsin.
Because of differing age ranges, the birds usually are moved in three shipments and housed at three separate locations within a closed area of the refuge. Over the summer, the Operation Rescue crew of pilots, biologists, veterinarians and interns conditions the birds to follow the aircraft, which, along with its pilot, has been accepted as a surrogate parent.
Once the birds’ dominance structure has been established and their endurance is sufficient, the migration begins, typically in October. Using four ultralight aircraft, Operation Migration’s pilots, along with a ground crew consisting of biologists, handlers, veterinarians and drivers, cover up to 200 miles a day, depending on weather conditions.
This year’s migration to Florida has been scheduled to begin Oct. 17. The shortest migration has taken 48 days to complete. The longest, 97 days, was recorded last year.
Because of destruction of habitat and overhunting, whooping cranes were on the verge of extinction in the 1940s when their population was reduced to only 15 birds. Since falling under the protection of the Endangered Species Act of 1973, the only naturally occurring population of migrating whooping cranes has grown to more than 200 birds.
Named for their loud and penetrating unison calls, whooping cranes live and breed in wetland areas where they feed on crabs, clams, frogs and aquatic plants.
An adult whooping crane stands 5 feet tall, with a white body, black wing tips and a red crest on its head.
Anyone encountering a whooping crane in the wild is asked to avoid approaching it, staying back at least 600 feet. In all cases, officials ask that people remain concealed and not speak loudly enough for the birds to hear them. Especially during the migration, residents are warned not to trespass on private property in an attempt to view the cranes.
(c) 2008 Evansville Courier & Press. Provided by ProQuest LLC. All rights Reserved. | <urn:uuid:1fea6546-1d45-4f80-b956-022db1d4eed7> | 2.640625 | 1,012 | Truncated | Science & Tech. | 46.271536 | 192 |
Mar. 4, 2013 Behind locked doors, in a lab built like a bomb shelter, Perry Gerakines makes something ordinary yet truly alien: ice.
This isn't the ice of snowflakes or ice cubes. No, this ice needs such intense cold and low pressure to form that the right conditions rarely, if ever, occur naturally on Earth. And when Gerakines makes the ice, he must keep the layer so microscopically thin it is dwarfed by a grain of pollen.
These ultrathin layers turn out to be perfect for recreating some of the key chemistry that takes place in space. In these tiny test tubes, Gerakines and his colleagues in the Cosmic Ice Lab at NASA's Goddard Space Flight Center in Greenbelt, Md., can reproduce reactions in ice from almost any time and place in the history of the solar system, including some that might help explain the origin of life.
"This is not the chemistry people remember from high school," says Reggie Hudson, who heads the Cosmic Ice Lab. "This is chemistry in the extreme: bitter cold, harsh radiation and nearly non-existent pressure. And it's usually taking place in gases or solids, because generally speaking, there aren't liquids in interstellar space."
The Cosmic Ice Lab is one of a few laboratories worldwide where researchers have been studying the ultracool chemistry of cosmic ice. With its powerful particle accelerator, the Goddard lab has the special ability to mimic almost any kind of solar or cosmic radiation to drive these reactions. And that lets them dig deep to study the chemistry of ice below the surface of planets and moons as well as ice in space.
Recipe for disorder
In a vacuum chamber about the size of a lunchbox, Gerakines recreates a little patch of deep space, in all its extremes. He pumps out air until the pressure inside drops to a level a billion times lower than normal for Earth, then chills the chamber to minus 433 degrees Fahrenheit (15 kelvins). To get ice, all that remains is to open a valve and let in water vapor.
The instant the sprightly vapor molecules enter the chamber they are literally frozen in their tracks. Still pointing every which way, the molecules are transformed immediately from their gaseous state into the disorderly solid called amorphous ice. Amorphous ice is exactly the opposite of the typical ice on Earth, which forms perfect crystals like those that make up snowflakes or frost needles. These crystals are so orderly and predictable that this ice is considered a mineral, complete with a rating of 2.5 on the Mohs scale of hardness -- the same rating as a fingernail.
Though almost unheard of on Earth, amorphous ice is so widespread in interstellar space that it could be the most common form of water in the universe. Left over from the age when the solar system was born, it is scattered across vast distances, often as particles no bigger than grains of dust. It's also been spotted in comets and icy moons.
The secret to making amorphous ice in the lab, Gerakines finds, is to limit the layer to a depth of about half a micrometer -- thinner than a strand of spider's silk.
"Water is such a good insulator that if the ice gets too thick, only the bottom of the sample, closer to the cooling source, will stay sufficiently cold," says Gerakines. "The ice on top will get warm enough to crystallize."
The superthin ice can be spiked with all kinds of interesting chemicals found in space. One set of chemicals that Gerakines works with is amino acids, which are key players in the chemistry of life on Earth. Researchers have spent decades identifying a whole smorgasbord of amino acids in meteorites (including some involved in life), as well as one found in a sample taken from a comet.
"And because water is the dominant form of frozen material in the interstellar medium and outer solar system," says Gerakines, "any amino acids out there are probably in contact with water at some point."
For his current set of experiments, Gerakines makes three kinds of ice, each spiked with an amorphous form of an amino acid (either glycine, alanine or phenylalanine) that is found in proteins.
The real action begins when Gerakines hits the ice with radiation.
Earlier studies by other researchers have looked at ice chemistry using ultraviolet light. Gerakines opts instead to look at cosmic radiation, which can reach ice hidden below the surface of a planet or moon. To mimic this radiation, he uses a proton beam from the high-voltage particle accelerator, which resides in an underground room lined with immense concrete walls for safety.
With the proton beam, a million years' worth of damage can be reproduced in just half an hour. And by adjusting the radiation dose, Gerakines can treat the ice as if it were lying exposed or buried at different depths of soil in comets or icy moons and planets.
He tests the three kinds of water-plus-amino-acid ice and compares them to ice made from amino acids only. Between blasts, he checks the samples using a "molecular fingerprinting" technique called spectroscopy to see if the amino acids are breaking down and chemical by-products are forming.
As expected, more and more of the amino acids break down as the radiation dose adds up. But Gerakines notices that the amino acids last longer if the ice includes water than if they are left on their own. This is odd, because when water breaks down, one of the fragments it leaves behind is hydroxyl (OH), a chemical well-known for attacking other compounds.
The spectroscopy confirms that some OH is being produced. But overall, says Gerakines, "the water is essentially acting like a radiation shield, probably absorbing a lot of the energy, the same way a layer of rock or soil would."
When he repeats the experiments at two higher temperatures, he is surprised to find the acids fare even better. From these preliminary measurements, he and Hudson calculate how long amino acids could remain intact in icy environments over a range of temperatures.
"We find that some amino acids could survive tens to hundreds of millions of years in ice near the surface of Pluto or Mars and buried at least a centimeter [less than half an inch] deep in places like the comets of the outer solar system," says Gerakines. "For a place that gets heavy radiation, like Europa, they would need to be buried a few feet." (These findings were reported in the journal Icarus in August 2012.)
"The good news for exploration missions," says Hudson, "is it looks as if these amino acids are actually more stable than anybody realized at temperatures typical of places like Pluto, Europa and even Mars."
The Cosmic Ice Lab is part of the Astrochemistry Laboratory in Goddard's Solar System Exploration Division and is funded in part by the Goddard Center for Astrobiology and the NASA Astrobiology Institute.
Note: If no author is given, the source is cited instead. | <urn:uuid:74190acf-e8c7-4820-8247-7a59a228b7ac> | 3.640625 | 1,473 | News Article | Science & Tech. | 41.927071 | 193 |
Quantum Teleportation Leaps to New Distance Record
A new record of roughly 60 miles has been set in the field of qubit transmission. "This is just a transmission method, so it could have wide utility, though I expect the cost will initially make it best for huge data streams," said analyst Rob Enderle. "Something like this could turn us into a SaaS world."
Scientists in China have transmitted quantum bits, or qubits, over a record distance of 97 km, or roughly 60 miles.
This is more than six times the distance of the previous record of 16 km, set by another team of Chinese researchers in May of 2010, as reported in Nature.com.
The results represent a step toward the establishment of a global quantum network, and the methods used in the experiment could be utilized for satellite-based quantum communications, the scientists said.
"This is just a transmission method, so it could have wide utility, though I expect the cost will initially make it best for huge data streams," Rob Enderle, principal analyst at the Enderle Group, told TechNewsWorld.
This technology "could end up changing much of the world" because it's both potentially higher bandwidth and lower latency, approaching zero, and these factors "could drive massive computer centralization on a world scale and force a massive shakeout of security, networking and computer companies," Enderle continued. "Something like this could turn us into a SaaS (Software as a Service) world."
The Theory Behind the Experiment
The latest experiment demonstrated quantum teleportation of an independent unknown state between two optical free-space links 97 km apart with multi-photon entanglement.
Quantum teleportation is a process for transmitting information using quantum physics to, in effect, encrypt the data transmitted. It's also known as entanglement-assisted teleportation.
In quantum teleportation experiments, beams of light are used to encode qubits. The encoded beam of light, which is described as quantum entangled, is split into two and transmitted. When a qubit at one receiver is observed and takes a defined form, the other half of the qubit at the other receiver takes the same defined form.
Quantum entanglement results when particles such as photons or electrons interact physically and then become separated but remain in the same quantum state. A quantum state is a set of mathematical variables, including position, momentum and spin, that fully describes a quantum system.
What the Researchers Did
The researchers used an ultra-bright entangled photon source based on Type-II spontaneous parametric down-conversion (SPDC). The SPDC process involves using a non-linear crystal to split photons into two. Those photons in a pair whose polarizations are perpendicular to one another are termed "Type II" photons.
In an SPDC apparatus, a strong laser beam, called the "pump" beam, is directed at a beta-barium borate (BBO) crystal. That's exactly what the researchers did. This generated the beam of light, which was split and sent across Lake Qinghai in China. | <urn:uuid:9e0ed9cf-7ff0-4c4d-ae4c-98d6e5f83d62> | 3.171875 | 629 | News Article | Science & Tech. | 37.636034 | 194 |
No one knows how the first organisms or even the first organic precursors formed on Earth, but one theory is that they didn't. Rather, they were imported from space. Scientists have been finding what looks like biological raw material in meteorites for years, but it's usually been shown to be ground contamination. This year, however, investigators studying a dozen meteorites that landed in Antarctica found traces of adenine and guanine, two of the four nucleobases that make up DNA. That's not a big surprise, since nucleobases have been found in meteorites before. But these were found in the company of other molecules that were similar in structure but not identical. Those had never been detected in previous meteorite samples and they were also not found on the ground where the space rocks landed. That rules out contamination and rules in space organics. A little adenine and guanine in the company of other mysterious stuff is a long, long way from something living, but it's closer than we were before.
| <urn:uuid:3687b7dc-36d0-40be-aad4-b342f2eaf02b> | 3.78125 | 213 | Truncated | Science & Tech. | 43.005537 | 195 |
SOAP Fault Element
When an error occurs during processing, the response to a SOAP message is a SOAP fault element in the body of the message, and the fault is returned to the sender of the SOAP message.
The SOAP fault mechanism returns specific information about the error, including a predefined code, a description, and the address of the SOAP processor that generated the fault.
A SOAP Message can carry only one fault block
The Fault element is an optional part of a SOAP message
For the HTTP binding, a successful response is linked to the 200 to 299 range of status codes, while a SOAP fault is linked to the 500 to 599 range of status codes; a client can combine this status check with a look for the Fault element, as sketched below
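The following is a minimal, illustrative Python sketch of that client-side check. It is not tied to any particular SOAP toolkit, and it assumes you already have the HTTP status code and raw response body in hand; only the SOAP 1.1 envelope namespace is taken from the text above.

import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

def extract_fault(status_code, body):
    # Return (faultcode, faultstring) if the response carries a SOAP 1.1 Fault,
    # otherwise None.  Successful calls use the 200-299 status range.
    if not 500 <= status_code <= 599:
        return None
    root = ET.fromstring(body)
    fault = root.find(f".//{{{SOAP_ENV}}}Fault")   # Fault lives in the envelope namespace
    if fault is None:
        return None
    # In SOAP 1.1 the faultcode/faultstring children are unqualified elements.
    return fault.findtext("faultcode"), fault.findtext("faultstring")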
The SOAP Fault element has the following sub-elements:
|faultcode||A text code used to indicate a class of errors. See the next table for a listing of predefined fault codes.
|faultstring||A text message explaining the error
|faultactor||A text string indicating who caused the fault. This is useful if the SOAP message travels through several nodes in the SOAP message path, and the client needs to know which node caused the error. A node that does not act as the ultimate destination must include a faultActor element.
|detail||An element used to carry application-specific error messages. The detail element can contain child elements, called detail entries.
SOAP Fault Codes
The faultCode values defined below must be used in the faultcode element when describing faults
|VersionMismatch||Found an invalid namespace for the SOAP Envelope element
|MustUnderstand||An immediate child element of the Header element, with the mustUnderstand attribute set to "1", was not understood
|Client||The message was incorrectly formed or contained incorrect information
|Server||There was a problem with the server so the message could not proceed
SOAP Fault Example
The following code is a sample Fault. The client has requested a method named ValidateCreditCard, but the service does not support such a method. This represents a client request error, and the server returns the following SOAP response:
<?xml version='1.0' encoding='UTF-8'?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Body>
    <SOAP-ENV:Fault>
      <faultcode>SOAP-ENV:Client</faultcode>
      <faultstring>Failed to locate method (ValidateCreditCard) in class
        (examplesCreditCard) at /usr/local/ActivePerl-5.6/lib/
        site_perl/5.6.0/SOAP/Lite.pm line 1555.</faultstring>
    </SOAP-ENV:Fault>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope> | <urn:uuid:f75871bb-0e1d-4231-88ea-4e1968ed8a9d> | 2.6875 | 494 | Documentation | Software Dev. | 46.896652 | 196 |
In honour of scientist and astronomer Nicolaus Copernicus (1473-1543), the discovering team around Professor Sigurd Hofmann suggested the name copernicium with the element symbol Cp for the new element 112, discovered at the GSI Helmholtzzentrum für Schwerionenforschung (Center for Heavy Ion Research) in Darmstadt. It was Copernicus who discovered that the Earth orbits the Sun, thus paving the way for our modern view of the world. Thirteen years ago, element 112 was discovered by an international team of scientists at the GSI accelerator facility. A few weeks ago, the International Union of Pure and Applied Chemistry, IUPAC, officially confirmed their discovery. In around six months, IUPAC will officially endorse the new element's name. This period is set to allow the scientific community to discuss the suggested name copernicium before the IUPAC naming.
"After IUPAC officially recognized our discovery, we – that is all scientists involved in the discovery – agreed on proposing the name copernicium for the new element 112. We would like to honor an outstanding scientist, who changed our view of the world", says Sigurd Hofmann, head of the discovering team.
Copernicus was born 1473 in Torun; he died 1543 in Frombork, Poland. Working in the field of astronomy, he realized that the planets circle the Sun. His discovery refuted the then accepted belief that the Earth was the center of the universe. His finding was pivotal for the discovery of the gravitational force, which is responsible for the motion of the planets. It also led to the conclusion that the stars are incredibly far away and the universe inconceivably large, as the size and position of the stars does not change even though the Earth is moving. Furthermore, the new world view inspired by Copernicus had an impact on the human self-concept in theology and philosophy: humankind could no longer be seen as the center of the world.
With its planets revolving around the Sun on different orbits, the solar system is also a model for other physical systems. The structure of an atom is like a microcosm: its electrons orbit the atomic nucleus like the planets orbit the Sun. Exactly 112 electrons circle the atomic nucleus in an atom of the new element "copernicium".
Element 112 is the heaviest element in the periodic table, 277 times heavier than hydrogen. It is produced by a nuclear fusion, when bombarding zinc ions onto a lead target. As the element already decays after a split second, its existence can only be proved with the help of extremely fast and sensitive analysis methods. Twenty-one scientists from Germany, Finland, Russia and Slovakia have been involved in the experiments that led to the discovery of element 112.
Since 1981, GSI accelerator experiments have yielded the discovery of six chemical elements, which carry the atomic numbers 107 to 112. The discovering teams at GSI already named five of them: element 107 is called bohrium, element 108 hassium, element 109 meitnerium, element 110 darmstadtium, and element 111 is named roentgenium.
The new element 112 discovered by GSI has been officially recognized and will be named by the Darmstadt group in due course. Their suggestion should be made public over this summer.
The element 112, discovered at the GSI Helmholtzzentrum für Schwerionenforschung (Centre for Heavy Ion Research) in Darmstadt, has been officially recognized as a new element by the International Union of Pure and Applied Chemistry (IUPAC). IUPAC confirmed the recognition of element 112 in an official letter to the head of the discovering team, Professor Sigurd Hofmann. The letter furthermore asks the discoverers to propose a name for the new element. Their suggestion will be submitted within the next weeks. In about 6 months, after the proposed name has been thoroughly assessed by IUPAC, the element will receive its official name. The new element is approximately 277 times heavier than hydrogen, making it the heaviest element in the periodic table.
“We are delighted that now the sixth element – and thus all of the elements discovered at GSI during the past 30 years – has been officially recognized. During the next few weeks, the scientists of the discovering team will deliberate on a name for the new element”, says Sigurd Hofmann. 21 scientists from Germany, Finland, Russia and Slovakia were involved in the experiments around the discovery of the new element 112.
Since 1981, GSI accelerator experiments have yielded the discovery of six chemical elements, which carry the atomic numbers 107 to 112. GSI has already named their officially recognized elements 107 to 111: element 107 is called Bohrium, element 108 Hassium, element 109 Meitnerium, element 110 Darmstadtium, and element 111 is named Roentgenium.
Recommendation for the Naming of Element of Atomic Number 110
Prepared for publication by J. Corish and G. M. Rosenblatt
A joint IUPAC-IUPAP Working Party confirms the discovery of element number 110, achieved by the collaboration of Hofmann et al. from the Gesellschaft für Schwerionenforschung mbH (GSI) in Darmstadt, Germany.
In accord with IUPAC procedures, the discoverers have proposed a name and symbol for the element. The Inorganic Chemistry Division Committee now recommends this proposal for acceptance. The proposed name is darmstadtium with symbol Ds. This proposal lies within the long established tradition of naming an element after the place of its discovery. | <urn:uuid:149ab25b-f1f4-4231-88ea-4e1968ed8a9d> | 3.671875 | 1,178 | Knowledge Article | Science & Tech. | 37.686338 | 197 |
Given all the evidence presently available, we believe it entirely reasonable that Mars is inhabited with living organisms and that life independently originated there
The conclusion of a study by the National Academy of Sciences in March 1965, after 88 years of surveying the red planet through blurry telescopes. Four months later, NASA’s Mariner 4 spacecraft would beam back the first satellite images of Mars confirming the opposite.
After Earth and Mars were born four and a half billion years ago, they both contained all the elements necessary for life. Scientists now believe that Mars, after initially having surface water and an atmosphere, lost its atmosphere four billion years ago, with Earth getting an oxygenated atmosphere around half a billion years later.
According to the chief scientist on NASA’s Curiosity mission, if life ever existed on Mars it was most likely microscopic and lived more than three and a half billion years ago. But even on Earth, fossils that old are vanishingly rare. “You can count them on one hand,” he says. “Five locations. You can waste time looking at hundreds of thousands of rocks and not find anything.”
The impact of a 40kg meteor on the Moon on March 17 was bright enough to see from Earth without a telescope, according to NASA, who captured the impact through a Moon-monitoring telescope.
Now NASA’s Lunar Reconnaissance Orbiter will try and search out the impact crater, which could be up to 20 metres wide. | <urn:uuid:132d7809-ba28-4c89-8ce0-867a2a81c1e6> | 4.1875 | 300 | Content Listing | Science & Tech. | 42.244446 | 198 |
As the years tick by with most of the planet doing little in the way of reducing carbon emissions, researchers are getting increasingly serious about the possibility of carbon sequestration. If it looks like we're going to be burning coal for decades, carbon sequestration offers us the best chance of limiting its impact on climate change and ocean acidification. A paper that will appear in today's PNAS describes a fantastic resource for carbon sequestration that happens to be located right next to many of the US' major urban centers on the East Coast.
Assuming that capturing the carbon dioxide is financially and energetically feasible, the big concern becomes where to put it so that it will stay out of the atmosphere for centuries. There appear to be two main schools of thought here. One is that areas that hold large deposits of natural gas should be able to trap other gasses for the long term. The one concern here is that, unlike natural gas, CO2 readily dissolves in water, and may escape via groundwater that flows through these features. The alternative approach turns that problem into a virtue: dissolved CO2 can react with minerals in rocks called basalts (the product of major volcanic activity), forming insoluble carbonate minerals. This should provide an irreversible chemical sequestration.
The new paper helpfully points out that if we're looking for basalts, the East Coast of the US, home to many of its major urban centers and their associated carbon emissions, has an embarrassment of riches. The rifting that broke up the supercontinent called Pangea and formed the Atlantic Ocean's basin triggered some massive basalt flows at the time, which are now part of the Central Atlantic Magmatic Province, or CAMP. The authors estimate that prior to some erosion, CAMP had the equivalent of the largest basalt flows we're currently aware of, the Siberian and Deccan Traps.
Some of this basalt is on land—anyone in northern Manhattan can look across the Hudson River and see it in the sheer cliffs of the Palisades. But much, much more of it is off the coast under the Atlantic Ocean. The authors provide some evidence in the form of drill cores and seismic readings that indicate there are large basalt deposits in basins offshore of New Jersey and New York, extending up to southern New England.
These areas are now covered with millions of years of sediment, which should provide a largely impermeable barrier that will trap any gas injected into the basalt for many years. The deposits should also have reached equilibrium with the seawater above, which will provide the water necessary for the chemical reactions that precipitate out carbonate minerals.
Using a drill core from an onshore deposit, the authors show that the basalt deposits are also composed of many distinct flows of material. Each of these flows would have undergone rapid cooling on both its upper and lower surface, which fragmented the rock. The core samples show porosity levels between 10 and 20 percent, which should allow any CO2 pumped into the deposits to spread widely.
The authors estimate that New Jersey's Sandy Hook basin, a relatively small deposit, is sufficient to house 40 years' worth of emissions from coal plants that produce 4GW of electricity. And the Sandy Hook basin is dwarfed by one that lies off the Carolinas and Georgia. They estimate that the South Georgia Rift basin covers roughly 40,000 square kilometers.
The authors argue that although laboratory simulations suggest the basic idea of using basalts for carbon sequestration is sound, the actual effectiveness in a given region can depend on local quirks of geology, so pilot tests in the field are absolutely essential for determining whether a given deposit is suitable. So far, only one small-scale test has been performed on any of the CAMP deposits.
Given the area's proximity to significant sources of CO2 and the infrastructure that could be brought into play if full-scale sequestration is attempted, it seems like one of the most promising proposals to date.
PNAS, 2010. DOI: 10.1073/pnas.0913721107 | <urn:uuid:0f4b5328-483d-437b-b4b6-8cf4bfa3968b> | 3.90625 | 823 | News Article | Science & Tech. | 38.218821 | 199 |