This is a tricky question to answer because weather, what you experience at your house right now, is not really the same thing as climate, the patterns of global air and sea movements that bring weather.
So milder winters can be a possibility in certain locations, as they will be exposed to an overall warming of the entire atmosphere. But colder winters can also be experienced in other locations.
Since the mid-1970s, global temperatures have been warming at around 0.2 degrees Celsius per decade. However, weather imposes its own dramatic ups and downs on top of the long-term trend, so we expect to see record cold temperatures even during global warming. Nevertheless, over the last decade, daily record high temperatures occurred twice as often as record lows. This tendency towards hotter days is expected to increase as global warming continues into the 21st century.
Vladimir Petoukhov, a climate scientist at the Potsdam Institute for Climate Impact Research, has recently completed a study on the effect of climate change on winter. According to Petoukhov,
These anomalies could triple the probability of cold winter extremes in Europe and northern Asia. Recent severe winters like last year's or the one of 2005-06 do not conflict with the global warming picture, but rather supplement it.
Because weather is a local response to climatic conditions, you have to understand what has changed in the climatic patterns of your region. What are your local weather drivers? How have they changed since the 1970s?
Thus, you could end up with some areas experiencing colder winters due to greater moisture levels in the air, more precipitation falling as snow, greater heat loss at night under clear skies, etc. Or you could have an area that experiences milder winter temperatures due to warmer air currents, warmer oceans, localised heat-island impacts, etc.
For further information, you should investigate the publications of the weather and climate agencies for your area.
The Stormwater Ecological Enhancement Project (SEEP) began in 1995 as a take-home final exam for the course Ecosystems of Florida. The objective was to develop a management plan to enhance a stormwater retention basin located within the University of Florida Natural Area and Teaching Lab (NATL) for species diversity while optimizing the basin's use for research and education. Since that time, the Wetlands Club at UF has taken this project further and implemented a full-scale created wetland that achieves not only the original objectives but also improves wildlife habitat, water quality, and aesthetics. These efforts have been in close coordination with the NATL Advisory Committee.
What is a Stormwater Retention Basin?
Water that runs off the land during and after a rainstorm is called stormwater runoff. This runoff, and any pollutants it carries, flows into streams, rivers, lakes and depressions throughout the landscape. In an urbanized landscape, natural physical, chemical and biological processes are disrupted, and leaves, litter, animal waste, oil, greases, heavy metals, fertilizers and pesticides are transported downstream. A stormwater retention basin provides temporary storage for the runoff generated by development in the watershed, releasing it slowly and reducing the potential for flooding. The basin also provides some treatment of the pollution carried by the stormwater runoff.
While wetlands have historically been considered of little importance, our increasing understanding of these systems is changing this misconception. Wetlands are now recognized for providing many vital benefits. Some of these benefits include:
- habitat for commercially valuable fish and shellfish,
- improved water quality.
Although we have lost more than 50 percent of the historic wetlands in the lower 48 states, protection of wetlands has increased considerably over the past 15 years due to recognition of these values.
Wetlands and Stormwater Basins
Wetlands can be found alongside rivers and lake shores, and as low areas in the landscape that often become flooded during storms. These wetlands are the natural stormwater basins of the landscape. As humans create stormwater basins to reduce the effects of development, it seems only logical to mimic these natural stormwater basins. This provides benefits beyond water storage, as the basin becomes a multipurpose area serving our need to reduce flooding while offsetting wetland functions that have been lost over the past 200 years. The water treatment component of the retention basin would also be substantially enhanced by the diversity of vegetation and the complexity of the integrated wetland community. The integration of these "free" services provided by a natural system with the needs of our growing world has been termed Ecological Engineering. This new approach to urban and regional planning is not only more environmentally sensitive, but also relies on processes that have been working naturally for millions of years.
The Retention Pond at NATL
The 3-acre retention pond is the low point of a 39.75-acre watershed. The majority of the basin was constructed in 1988, with additional storage created in 1990. Structures within this watershed contributing significant runoff to the basin include the Center for Performing Arts, the Entomology and Nematology buildings, the Park & Ride commuter lot and the roadways between and around these buildings. The total storage of the basin to offset the increased runoff generated by these impervious surfaces is 478,000 cubic feet. As originally designed, the bottom of the basin is essentially flat, with uniform slopes on the north, south and east sides. To the west of the basin the slope is low and quickly grades into the preexisting depression of the area. Because the basin is almost uniform in elevation, the established vegetation was dominated by cattail.
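As a rough back-of-the-envelope check, the quoted storage volume can be expressed as an equivalent depth of runoff over the contributing watershed. The figures below come from the paragraph above; the sketch is illustrative and is not part of the basin's design documents.

```python
# Express the basin's storage volume as an equivalent depth of runoff
# over the 39.75-acre contributing watershed (figures from the text).
storage_ft3 = 478_000        # total basin storage, cubic feet
watershed_acres = 39.75      # contributing watershed area
ft2_per_acre = 43_560        # square feet per acre

watershed_ft2 = watershed_acres * ft2_per_acre
depth_ft = storage_ft3 / watershed_ft2

print(f"Equivalent runoff depth: {depth_ft:.2f} ft ({depth_ft * 12:.1f} inches)")
# -> about 0.28 ft, i.e. roughly 3.3 inches of runoff from the watershed
```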
Ecologically Enhanced Design
The primary goal of the project is to increase the diversity of flooding depths and the frequency of flooding, since this is the primary factor regulating species composition in a wetland. To do this, two depressions, one 4 feet deep and the other 5 feet deep, were dug at the southeastern end of the pond, providing deep, open-water habitat. At the north end a low berm was constructed to temporarily impound 80% of the entering stormwater. This forebay provides the first phase of treatment and was planted with species known to take up heavy metals and remove nutrients. Water from the forebay is then slowly released, first flowing through an area planted to resemble a bottomland hardwood swamp, then moving into a shallow freshwater marsh before entering the deep-water ponds.
At the southeastern end of the pond another small berm was built to divert stormwater away from the deep-water ponds, increasing treatment time. At the end of this berm a knoll was built and planted with trees to provide nesting or roosting sites for birds. The basin was planted with species that resemble those found in wetlands of North Central Florida. A boardwalk also will be constructed.
Expected SEEP Benefits
The SEEP project has already provided a great learning experience for Wetlands Club members through project design and organization, regulatory agency interaction and teamwork. Other benefits of the project include:
- Species diversity. The variety of plantings and topographic diversity on the site provides new genetic material as well as suitable establishment sites for long-term increases in vegetative species diversity within the basin.
- Wildlife habitat. Vegetative diversity as well as diversity of aquatic habitat provides a multitude of new biotic niches not previously available in the basin. The value of this habitat becomes increasingly important as other areas on campus and in the Gainesville community are encroached upon.
- Aesthetics. Retention basins are notoriously unattractive, often fenced in, littered with trash, and square. Although the retention basin at the NATL is pleasant compared to some, its appeal would be improved if it resembled a diverse wetland.
- Water quality. Construction of the forebay, planting of species known to have high treatment potential, and diversion of stormwater to maximize treatment all improve the water treatment potential of the basin.
- Research. Since the integration of wetlands and stormwater basins is still a relatively new concept, little is known about the optimization and performance of these systems. Implementing SEEP provides a unique opportunity to test the principles of this integration, pushing the University of Florida to the forefront of this technology. The location of the site on campus, and within NATL, allows for easy access and control over activities within the site. Faculty, staff and state agencies interested in this topic will be able to use this as a long-term study site.
- Education. Educational opportunities for both students and the public are enormous at this site. The University has one of only three wetland centers in the country, with some of the founding faculty in the principles of Ecological Engineering. Many courses throughout the campus use the area for various components of their curriculum. Public education opportunities abound with the construction of the new Florida Museum of Natural History within a stone's throw of the basin.
Lightning Mapping Array (LMA)
The SPoRT program works with three total lightning networks: the Lightning Mapping Arrays in North Alabama and Washington, D.C., and the Lightning Detection and Ranging network at the Kennedy Space Center. Each card represents one of these networks. A green card, marked "Evaluation Product," indicates a product that is being used by at least one National Weather Service Forecast Office. Blue cards, marked "Research," denote products that exist for academic and research purposes. Real-time data are available by following the link at the bottom right of the card. The Overview section below describes the North Alabama network, but the basic concepts are applicable to each total lightning network.
Real-time 2-minute data on a 2 x 2 km grid from the North Alabama Lightning Mapping Array.
Figure 1: The location of the 11 North Alabama Lightning Mapping
Array sensors (green dots and blue dot) and communications relays
(open green circles) across north Alabama.
The North Alabama Lightning Mapping Array (NALMA) was first activated in 2001 and officially transitioned to the Huntsville, Alabama National Weather Service Office in early 2003. Since the initial transition, SPoRT has successfully transitioned NALMA data to three other partner forecast offices. These include the Birmingham, Alabama as well as Morristown and Nashville, Tennessee Weather Forecast Offices. In addition, SPoRT has worked collaboratively with the lightning group here at the National Space Science and Technology Center (NSSTC) in Huntsville, Alabama to provide near real-time total lightning data to partner forecast offices in Melbourne, Florida and Sterling, Virginia using networks located in those regions. Sterling uses the Washington D.C. Lightning Mapping Array (DCLMA) while Melbourne receives data from the Kennedy Space Center Lightning Detection and Ranging Network (LDAR). Both of these networks are functionally similar to the NALMA network and forecast applications developed for one network can be used with another.
Figure 2. A comparison between what a cloud-to-ground
network observes in a lightning flash (left) versus what a total
lightning network will observe in a lightning flash (right).
Note how the cloud-to-ground network only provides a single
point of information. Also, the cloud-to-ground network
would observe nothing if the flash were solely intra-cloud.
The NALMA is a three-dimensional very high frequency (VHF) detection network of 11 VHF receivers deployed across northern Alabama, with a base station and receiver located at the NSSTC (Figure 1). Solid green circles indicate a VHF receiver, while open green circles are wireless relay stations. The blue dot is the base station and 11th sensor located at the NSSTC. As of May 2009, two additional sensors located in Atlanta, Georgia, have been added in collaboration with researchers at Georgia Tech. These sensors are testing the effectiveness of the NALMA network using long baselines in the sensor placement.
Figure 3. A sample of 31 thunderstorms observed by the Kennedy Space
Center Lightning Detection and Ranging network showing the number of
cloud-to-ground strikes versus total lightning observed in each storm.
Notice how the intra-cloud component dominates the total lightning
observed in each storm. It is also interesting to note that two storms
had no cloud-to-ground strikes at all, yet were still very electrically active.
The NALMA system locates the sources of impulsive VHF radio signals from lightning by accurately measuring the time that the signals arrive at the different receiving stations. Each station records the magnitude and time of the peak lightning radiation signal in successive 80 microsecond intervals within a local unused television channel (channel 5, 76-82 MHz). Typically, hundreds of sources per flash can be reconstructed, which in turn produces accurate 3-dimensional lightning image maps (nominally <50 m error within a 150 km range). The sources can be thought of as the individual stepped leaders within a lightning flash. More detailed information can be found in Goodman et al. (2005). The primary advantage of NALMA, and the other total lightning networks, is that the networks detect total lightning, which is the combination of both cloud-to-ground and intra-cloud lightning. Figure 2 shows a rough comparison of what is detected between standard cloud-to-ground networks versus NALMA or any other lightning mapping array. The importance of detecting the intra-cloud flashes is that the intra-cloud flashes typically dominate the full number of flashes in a thunderstorm (Figure 3). With only cloud-to-ground data, forecasters are not receiving the full breadth of knowledge of how the storm is developing. Also, total lightning data are updated every 2 minutes, giving forecasters additional information about storm development in between radar volume scans.
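The time-of-arrival retrieval can be sketched in a few lines of Python. The station coordinates, noise level, and source below are invented placeholders rather than real NALMA values; the point is only to show how a source position and emission time can be recovered from arrival times at several stations with a nonlinear least-squares fit.

```python
import numpy as np
from scipy.optimize import least_squares

C = 2.998e8  # speed of light, m/s

# Hypothetical station positions (x, y, z) in meters -- not the real NALMA sites.
stations = np.array([
    [0.0, 0.0, 200.0],
    [40e3, 5e3, 250.0],
    [10e3, 35e3, 180.0],
    [-25e3, 20e3, 300.0],
    [-15e3, -30e3, 220.0],
])

def arrival_times(params):
    """Predicted arrival time at each station for a source (x, y, z, t0)."""
    x, y, z, t0 = params
    dist = np.linalg.norm(stations - np.array([x, y, z]), axis=1)
    return t0 + dist / C

# Simulate one VHF source and the times each station would record it.
true_source = np.array([8e3, 12e3, 9e3, 0.0])   # x, y, z (m) and emission time (s)
measured = arrival_times(true_source)
measured = measured + np.random.normal(0.0, 5e-9, measured.shape)  # ~5 ns timing noise

# Recover the source location and emission time from the arrival times.
fit = least_squares(lambda p: arrival_times(p) - measured,
                    x0=[0.0, 0.0, 5e3, 0.0],
                    x_scale=[1e3, 1e3, 1e3, 1e-6])  # balance meters against seconds
print("retrieved x, y, z (m) and t0 (s):", fit.x)
```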
Figure 4. A screen capture from AWIPS II showing the total lightning
flash extent density (colored contours) versus the cloud-to-ground
strike locations (negative and plus signs). Notice how the total
lightning indicates that lightning flashes are covering a wide area
whereas the cloud-to-ground observations only show single locations.
Operationally, total lightning data provide several advantages to forecasters. First, total lightning data often give a 3-5 minute lead time ahead of the first cloud-to-ground lightning strike. This improves lightning safety for the National Weather Service's Terminal Aerodrome Forecasts (TAFs) and Airport Weather Warnings (AWWs). This safety feature can also be used for incident support of special events. In addition, the total lightning data provide information about the spatial extent of lightning that is not available in the traditional cloud-to-ground data. Figure 4 shows the comparison of what is seen between a cloud-to-ground network observation and NALMA. Furthermore, the trend of total lightning in a thunderstorm can be used to provide advance lead time on the development of severe weather. Forecasters often look for a lightning jump signature, where the total lightning observations rapidly increase in a short period of time. This lightning jump is indicative of a strengthening thunderstorm updraft. This insight into a storm's evolutionary development helps forecasters pinpoint which thunderstorms are intensifying and which are not. This provides a powerful tool for reducing the number of false alarms issued by the Weather Service as well as for providing increased warning lead time. Figure 5 illustrates a lightning jump, both in a time series plot and with two screen captures from the National Weather Service's own decision support computer system. Additional information can be found at the SPoRT training page.
Figure 5. A time trend plot (top) of a storm that had two separate
lightning jumps at 1906 and 1920 UTC that led to the issuance of a
tornado warning at 1920 UTC ahead of the touchdown of an EF-1
tornado. The bottom two images show the AWIPS display before
(left) and during (right) the lightning jump.
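The jump signature itself is easy to illustrate with a toy time series. The 2-minute flash counts and the two-sigma rule below are invented for illustration and are not the operational jump algorithm; the idea is simply to flag intervals in which the flash rate rises much faster than its recent variability.

```python
import numpy as np

# Invented 2-minute total-lightning flash counts for one storm (not real data).
flashes = np.array([3, 4, 5, 5, 7, 9, 14, 26, 45, 60, 58, 52], dtype=float)
dt_min = 2.0

rate = flashes / dt_min                 # flashes per minute
dfrdt = np.diff(rate) / dt_min          # change in flash rate per minute

# Flag a "jump" when the latest increase exceeds the mean of the preceding
# increases by more than two standard deviations (illustrative rule only).
for i in range(4, len(dfrdt)):
    history = dfrdt[:i]
    if dfrdt[i] > history.mean() + 2.0 * history.std():
        print(f"possible lightning jump at t = {(i + 1) * dt_min:.0f} min, "
              f"dFRDT = {dfrdt[i]:.1f} flashes/min^2")
```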
SPoRT also utilizes the NALMA observations, and observations from other total lightning networks, as a risk reduction project for the GOES-R Geostationary Lightning Mapper (GLM) system set for launch later this decade. The GLM will be the first total lightning observation instrument in geostationary orbit and will provide total lightning observations over a massive domain, as opposed to the very small domains of the lightning mapping arrays. SPoRT uses the ground-based networks to help prepare for the GLM and its impacts on forecasting. More information can be found on SPoRT's GOES-R Proving Ground page.
Figure 6. The domain covered by the North Alabama Lightning Mapping Array.
For our end users, SPoRT provides a three-dimensional total lightning data set that is updated every 2 minutes. Figure 6 shows that the NALMA network provides full coverage to the Huntsville and Nashville Weather Service county warning areas as well as partial coverage to the Birmingham and Morristown offices. The grid has a horizontal extent of 460 x 460 km, with a 2 x 2 km grid resolution centered on the NSSTC. The vertical grid resolution is 1 km from 0-17 km. Because NALMA data are provided in AWIPS and AWIPS II, forecasters are able to interrogate the data on any of the 17 horizontal levels or examine the cumulative lightning density maps. The importance of using AWIPS / AWIPS II is that it puts the NALMA data into the forecasters' own decision support tool, where they can readily compare the NALMA data to NEXRAD radar observations or any other available data sets to enhance situational awareness, particularly during severe weather events.
Forecasters predominantly use the cumulative lightning density map in real-time operations as opposed to any single vertical level due to forecasting time constraints. However, with the greater flexibility of AWIPS II, SPoRT is working with our partners to potentially include more of the available three dimensional observations.
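A simplified sketch of how point source data can be binned onto such a grid is shown below; the source locations are random stand-ins and the domain is reduced, so this is a source-density proxy rather than the actual flash extent density product.

```python
import numpy as np

# Invented VHF source locations (km east and north of the network center).
rng = np.random.default_rng(0)
x_km = rng.normal(loc=30.0, scale=8.0, size=5000)
y_km = rng.normal(loc=-12.0, scale=6.0, size=5000)

# 2 x 2 km cells over a reduced 200 x 200 km domain centered on the network.
edges = np.arange(-100.0, 102.0, 2.0)
density, xedges, yedges = np.histogram2d(x_km, y_km, bins=[edges, edges])

ix, iy = np.unravel_index(np.argmax(density), density.shape)
print(f"busiest 2 x 2 km cell holds {int(density.max())} sources, "
      f"centered near ({xedges[ix] + 1.0:.0f} km, {yedges[iy] + 1.0:.0f} km)")
```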
References: Goodman, S. J., and Coauthors, 2005: The North Alabama Lightning Mapping Array: Recent severe storm observations and future prospects. Atmos. Res., 76, 423-437.
Ecologists are warning all citizens of earth… we are running out of natural resources!
According to top ecologists from around the world, from 1970 to 2008 the biological resources of our planet have shrunk by 68%.
And according to ecology experts in South America and the Australian state of Tasmania, in comparison to 1966, the biggest consumer of natural resources is the USA.
Specialists from The World Is Ending Fund studied the development of more than 9000 mammals, fish, birds, reptiles and amphibians. The report put out by the Tasmanian ecologist states that if the whole world were to use natural resources to the extent that Americans use them, we would run out of natural resources in two weeks.
So in order to support the eco-balance on the planet, we would need five Earths to provide the natural resources we need.
America isn’t the only hog of natural resources. There are others gobbling up everything. Here they are:
3) The United Arab Emirates
8) The Netherlands
The amount of garbage in the Pacific Ocean has increased by 150 times in the last 40 years, according to the Tasmanians.
The most littered part of the ocean is the section between California and Hawaii. Researchers have noted that the smaller particles present a serious threat to sea life since they can get into their respiratory or digestive systems. Around 10% of fish caught in the areas close to the islands of garbage have small particles of plastic in their stomachs and many of the fish have McDonald’s wrappers inside their intestines.
So what do we do? How do we stop the demise of the Earth?
“Nothing we can do,” said the lead Tasmanian researcher. “We should just all party like it’s 2015!”
Here is one researcher who thinks we can make it 40 more years at our current burn rate.
Mon February 27, 2012
Study: Climate Change Altering Bird Migration
There's more evidence that climate change is altering bird migration patterns. A new study from UNC-Chapel Hill finds some species along the east coast are migrating three-to-six days earlier than they were just ten years ago. Allen Hurlbert is an assistant professor of biology at UNC. He says birds face problems if they get the timing wrong.
Allen Hurlbert: Individuals arriving too early may face adverse conditions weather-wise, they may arrive and there are very limited resources there for them to eat. But individuals that arrive too late, they may face disadvantages in establishing breeding territories or finding high quality mates.
Hurlbert says the study also found some species are better at adapting than others. He says species that struggle to adjust could face threats to their populations.
Harnessing the Bacterial Power of Nanomagnets
Nanometer-size magnets have wide-ranging uses, from directed cancer therapy and drug delivery systems to magnetic recording media and transducers. Such applications require the production of nanoparticles with well-controlled size and tunable magnetic properties. The synthesis of such nanomagnets, however, often requires elevated temperatures and toxic solvents, resulting in high environmental and energy costs. Metal-reducing microorganisms offer an untapped resource to produce these materials in an environmentally benign way. At the ALS, researchers from the University of Manchester have shown that Fe(III)-reducing bacteria can be used to synthesize magnetic iron oxide nanoparticles with high yields, narrow size distribution, and magnetic properties equal to the best chemically synthesized materials.
A relatively unexplored resource for magnetic nanomaterial production is a type of subsurface microorganism capable of producing large quantities of nanoscale magnetite (Fe3O4) at ambient temperatures. Metal-reducing bacteria live in soils deficient in oxygen and conserve energy for growth through the oxidation of hydrogen or organic electron donors, coupled to the reduction of oxidized metals such as Fe(III)-bearing minerals. This can result in the formation of magnetite via the extracellular reduction of amorphous Fe(III)-oxyhydroxides, releasing soluble Fe(II) and completely recrystallizing the amorphous mineral into a new phase.
The Manchester team developed a method for producing large quantities of highly crystalline magnetite and cobalt ferrite (CoFe2O4) nanoparticles using the Fe(III)-reducing bacterium, Geobacter sulfurreducens. In particular, they demonstrated that cobalt ferrite nanoparticles with the high coercivity (i.e., resistance to demagnetization) important for applications can be manufactured through this biotechnological route. Three samples containing increasing amounts of Co in the biogenic magnetite structure were analyzed. X-ray diffraction and transmission electron microscopy showed that the material is nanocrystalline. Moreover, the coercivity of the samples increases with increasing Co content, so that it can be tuned for specific applications.
The cation distribution in the ferrite nanoparticles was investigated using x-ray absorption (XA) and x-ray magnetic circular dichroism (XMCD) at the Fe L2,3 and Co L2,3 edges, measured at ALS Beamline 4.0.2. An XMCD spectrum is obtained as the difference between two XA spectra measured in opposite external magnetic fields. Magnetite has an inverse spinel crystal structure, which contains tetrahedral (Td) and octahedral (Oh) sites accommodating Fe2+ and Fe3+ cations. Each specific cation in the spinel structure generates a unique XMCD signature determined by its valence state (number of d electrons), site symmetry (i.e., Td or Oh), and moment direction, which can be computed using atomic multiplet calculations. By fitting a weighted sum of these calculated spectra to the measured XMCD spectra, the site occupations of the Fe cations can be obtained.
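The fitting step amounts to expressing the measured spectrum as a non-negative weighted sum of reference spectra. The sketch below illustrates the idea with synthetic Gaussian peaks standing in for the calculated Fe2+/Fe3+ octahedral and tetrahedral components; it is not the multiplet code or the data used in the study.

```python
import numpy as np
from scipy.optimize import nnls

energy = np.linspace(700.0, 730.0, 600)   # arbitrary energy grid (eV)

def peak(center, width, sign):
    """Synthetic stand-in for a calculated single-site XMCD component."""
    return sign * np.exp(-0.5 * ((energy - center) / width) ** 2)

components = np.column_stack([
    peak(708.0, 0.8, -1.0),   # e.g. Fe2+ on octahedral sites
    peak(709.5, 0.8, +1.0),   # e.g. Fe3+ on tetrahedral sites
    peak(710.8, 0.8, -1.0),   # e.g. Fe3+ on octahedral sites
])

# A "measured" spectrum built from known weights plus noise.
true_weights = np.array([0.9, 1.0, 1.1])
measured = components @ true_weights + np.random.normal(0.0, 0.01, energy.size)

# Non-negative least squares recovers the weights, i.e. the site occupations.
weights, _ = nnls(components, measured)
print("fitted site weights:", np.round(weights, 2))
```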
The biogenic materials show a striking change with increasing Co amount, namely a decrease in intensity of the leading negative peak in the Fe L3 edge, which implies that Co is predominantly replacing Fe2+ cations in octahedral sites. Similarly, the site occupancy and oxidation state of the Co can be directly assessed by examining the Co L2,3 XA and XMCD spectra. The close similarity with the spectra for synthetically produced CoFe2O4 thin films confirmed that the bacteria were able to suitably accommodate Co in the ferrite structure with the Co2+ residing primarily on Oh sites.
The XMCD measurements indicate a dramatic enhancement in the magnetic properties of biogenically produced nanoparticles when large quantities of Co are introduced into the spinel structure, a major advance over previous biomineralization studies. Inclusion of other transition metals into the spinel structure by Fe(III)-reducing bacteria to tailor the magnetic properties of nanoferrites could lead to a suite of materials required for different technological uses. The successful production of highly ordered crystalline nanoparticulate ferrites demonstrates the potential for scaled-up industrial manufacture of nanoparticles using environmentally benign and energy-efficient methodologies.
Research conducted by V.S. Coker, N.D. Telling, R.A.D. Pattrick, C.I. Pearce, J.R. Lloyd, F. Tuna, and R.E.P. Winpenny (University of Manchester, UK); G. van der Laan (Diamond Light Source, UK); and E. Arenholz (ALS).
Research funding: UK Engineering and Physical Sciences Research Council and UK Biotechnology and Biological Sciences Research Council. Operation of the ALS is supported by the U.S. Department of Energy, Office of Basic Energy Sciences.
Publication about this research: V.S. Coker, N.D. Telling, G. van der Laan, R.A.D. Pattrick, C.I. Pearce, E. Arenholz, F. Tuna, R. Winpenny, and J.R. Lloyd, "Harnessing the extracellular bacterial production of nanoscale cobalt ferrite with exploitable magnetic properties," ACS Nano 3, 1922 (2009).
Science Fair Project Encyclopedia
In differential geometry, a pseudo-Riemannian manifold is a smooth manifold equipped with a smooth, symmetric, (0,2) tensor which is nondegenerate at each point on the manifold. This tensor is called a pseudo-Riemannian metric or, simply, a (pseudo-)metric tensor.
The key difference between a Riemannian metric and a pseudo-Riemannian metric is that a pseudo-Riemannian metric need not be positive-definite, merely nondegenerate. Since every positive-definite form is also nondegenerate a Riemannian metric is a special case of a pseudo-Riemannian one. Thus pseudo-Riemannian manifolds can be considered generalizations of Riemannian manifolds.
Every nondegenerate, symmetric, bilinear form has a fixed signature (p,q). Here p and q denote the number of positive and negative eigenvalues of the form. The signature of a pseudo-Riemannian manifold is just the signature of the metric (one should insist that the signature is the same on every connected component). Note that p + q = n is the dimension of the manifold. Riemannian manifolds are simply those with signature (n,0).
Pseudo-Riemannian metrics of signature (p,1) (or sometimes (1,q), see sign convention) are called Lorentzian metrics. A manifold equipped with a Lorentzian metric is naturally called a Lorentzian manifold. After Riemannian manifolds, Lorentzian manifolds form the most important subclass of pseudo-Riemannian manifolds. They are important because of their physical applications to the theory of general relativity. A principal assumption of general relativity is that spacetime can be modeled as a Lorentzian manifold of signature (3,1).
Just as Euclidean space can be thought of as the model Riemannian manifold, Minkowski space with the flat Minkowski metric is the model Lorentzian manifold. Likewise, the model space for a pseudo-Riemannian manifold of signature (p,q) is \(\mathbb{R}^{p,q}\) with the metric
\[
ds^2 = dx_1^2 + \cdots + dx_p^2 - dx_{p+1}^2 - \cdots - dx_{p+q}^2 .
\]
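For instance, taking (p, q) = (3, 1) in the expression above, in units where the speed of light is 1, gives the flat Minkowski metric of special relativity (the overall sign being a matter of convention, as noted above):
\[
ds^2 = dx_1^2 + dx_2^2 + dx_3^2 - dt^2 .
\]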
Some basic theorems of Riemannian geometry can be generalized to the pseudo-Riemannian case. In particular, the fundamental theorem of Riemannian geometry is true of pseudo-Riemannian manifolds as well. This allows one to speak of the Levi-Civita connection on a pseudo-Riemannian manifold along with the associated curvature tensor. On the other hand, there are many theorems in Riemannian geometry which do not hold in the generalized case. For example, it is not true that every smooth manifold admits a pseudo-Riemannian metric of a given signature; there are certain topological obstructions.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Endangered Species Get A Chilly (and Warm!) Reception In New York City
Samples from the American crocodile, the Channel Islands fox, and the Hawaiian goose, all endangered species studied within the U.S. National Parks, are set to join neotropical butterflies, rare leeches, and snippets of sea stars in the American Museum of Natural History's frozen library, the Ambrose Monell Cryo Collection. Currently, this collection cryogenically preserves the DNA of about 40,000 species in nitrogen-cooled vats and distributes samples to geneticists free of charge upon request.
On July 7, 2009, representatives of the Park Service came to New York to sign a formal agreement with the Museum. Material collected from the most important biological resources of the country, endangered species, will be routinely iced in the Museum's pristine, meticulous lab. This video, introduced by George Amato, Director of the Sackler Institute for Comparative Genomics, documents the signing of the agreement by Bert Frost, Associate Director of Natural Resource Stewardship and Science at the National Park Service, and Darrel Frost, Associate Dean of Science for Collections at the Museum.
Media Inquiries: Department of Communications, 212-769-5800
Formation of large (≃100 μm) ice crystals near the tropical tropopause
1NASA Ames Research Center, Moffett Field, CA, USA
2SPEC Inc., Boulder, CO, USA
3Centro de Ciencias de la Atmosfera, Universidad Nacional Autonoma de Mexico, Circuito Exterior, Mexico
4Harvard University, Cambridge, MA, USA
5Colorado Research Associates, Boulder, CO, USA
6University of Colorado, Boulder, CO, USA
Abstract. Recent high-altitude aircraft measurements with in situ imaging instruments indicated the presence of relatively large (≃100 μm length), thin (aspect ratios of ≃6:1 or larger) hexagonal plate ice crystals near the tropical tropopause in very low concentrations (<0.01 L−1). These crystals were not produced by deep convection or aggregation. We use simple growth-sedimentation calculations as well as detailed cloud simulations to evaluate the conditions required to grow the large crystals. Uncertainties in crystal aspect ratio leave a range of possibilities, which could be constrained by knowledge of the water vapor concentration in the air where the crystal growth occurred. Unfortunately, water vapor measurements made in the cloud formation region near the tropopause with different instruments ranged from <2 ppmv to ≃3.5 ppmv. The higher water vapor concentrations correspond to very large ice supersaturations (relative humidities with respect to ice of about 200%). If the aspect ratios of the hexagonal plate crystals are as small as the image analysis suggests (6:1, see companion paper (Lawson et al., 2008)), then growth of the large crystals before they sediment out of the supersaturated layer would only be possible if the water vapor concentration were on the high end of the range indicated by the different measurements (>3 ppmv). On the other hand, if the crystal aspect ratios are quite a bit larger (≃10:1), then H2O concentrations toward the low end of the measurement range (≃2–2.5 ppmv) would suffice to grow the large crystals. Gravity-wave driven temperature and vertical wind perturbations only slightly modify the H2O concentrations needed to grow the crystals. We find that it would not be possible to grow the large crystals with water concentrations less than 2 ppmv, even with assumptions of a very high aspect ratio of 15 and steady upward motion of 2 cm s−1 to loft the crystals in the tropopause region. These calculations would seem to imply that the measurements indicating water vapor concentrations less than 2 ppmv are implausible, but we cannot rule out the possibility that higher humidity prevailed upstream of the aircraft measurements and the air was dehydrated by the cloud formation. Simulations of the cloud formation with a detailed model indicate that homogeneous freezing should generate ice concentrations larger than the observed concentrations (20 L−1), and even concentrations as low as 20 L−1 should have depleted the vapor in excess of saturation and prevented growth of large crystals. It seems likely that the large crystals resulted from ice nucleation on effective heterogeneous nuclei at low ice supersaturations. Improvements in our understanding of detailed cloud microphysical processes require resolution of the water vapor measurement discrepancies in these very cold, dry regions of the atmosphere.
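The flavor of the growth-sedimentation argument can be conveyed by a toy comparison of two timescales. Every number below is an illustrative placeholder rather than a value derived in the paper; the sketch only shows how the feasibility test is set up.

```python
# Toy growth-vs-sedimentation comparison (all values are illustrative placeholders).
growth_rate_um_per_hr = 25.0   # assumed linear growth rate of the plate
target_length_um = 100.0       # crystal size to be explained
fall_speed_cm_s = 3.0          # assumed crystal fall speed
updraft_cm_s = 1.0             # assumed wave / large-scale updraft
layer_depth_m = 500.0          # assumed depth of the supersaturated layer

time_to_grow_hr = target_length_um / growth_rate_um_per_hr
net_sink_cm_s = fall_speed_cm_s - updraft_cm_s
time_to_fall_hr = layer_depth_m * 100.0 / net_sink_cm_s / 3600.0

print(f"time to grow to {target_length_um:.0f} um : {time_to_grow_hr:.1f} h")
print(f"time to settle out of the layer          : {time_to_fall_hr:.1f} h")
print("growth is feasible" if time_to_grow_hr < time_to_fall_hr
      else "the crystal falls out before reaching the target size")
```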
WEST LAFAYETTE, Ind. - Even large amounts of manufactured nanoparticles, also known as Buckyballs, don't faze microscopic organisms that are charged with cleaning up the environment, according to Purdue University researchers.
In the first published study to examine Buckyball toxicity on microbes that break down organic substances in wastewater, the scientists used an amount of the nanoparticles on the microbes that was equivalent to pouring 10 pounds of talcum powder on a person. Because high amounts of even normally safe compounds, such as talcum powder, can be toxic, the microbes' resiliency to high Buckyball levels was an important finding, the Purdue investigators said.
The experiment on Buckyballs, which are carbon-60 (C60) molecules, also led the scientists to develop a better method to determine the impact of nanoparticles on the microbial community.
"It's important to look at the entire microbial community when nanomaterials are introduced because the microbes are all interdependent for survival and growth," said Leila Nyberg, a doctoral student in the School of Civil Engineering and the study's lead author. "If we see a minor change in these microorganisms it could negatively impact ecosystems."
The microbes used in the study live without oxygen and also exist in subsurface soil and the stomachs of ruminant animals, such as cows and goats, where they aid digestion.
"We found no effect by any amount of C60 on the structure or the function of the microbial community over a short time," Nyberg said. "Based on what we know about the properties of C60, this is a realistic model of what would happen if high concentrations of nanoparticles were released into the environment."
The third naturally occurring form of pure carbon to be discovered, Buckyballs are nano-sized, multi-sided structures that look like soccer balls.
Nyberg and her colleagues Ron Turco and Larry Nies, professors of agronomy and ci
Contact: Susan A. Steeves
Questions: G to A and C to T substitutions, is this a rule?
zxiong at arizvm1.ccit.arizona.edu
Mon Apr 11 11:59:05 EST 1994
Not being familiar with molecular evolution, I have been troubled with some of
my data on RNA virus sequences. We are working on a small RNA virus and have
nearly completed the sequence of the viral RNA genome from cDNA clones. RNA
viruses are known to be heterogeneous (quasi-species), so it was not surprising
to see nucleotide sequence variations when sequences are obtained from
different clones. What was surprising was a consistent rule of sequence
variations. G is always substituted with an A, or vice versa. C is always
substituted with a T, or vice versa. But there is never a G to (C, T) change
or vice versa.
Let me try to explain it a little better. We have found 16 nucleotide
substitutions in about 1500 nucleotides of overlapping sequences. There are 11
C to T or T to C substitutions and 5 G to A or A to G substitutions. We have
not found any of the other possible substitutions.
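The pattern described here can be tallied mechanically: A-to-G and C-to-T changes keep a base within the purine or pyrimidine class (transitions), while the changes never observed here cross classes (transversions). The short sequences below are invented solely to show the bookkeeping.

```python
# Tally substitution types between two aligned sequences (invented example).
ref = "ATGCGATACGTTAGCCGATA"
obs = "ATACGATACATTAGCCGGTA"

purines, pyrimidines = {"A", "G"}, {"C", "T"}
counts = {"transition": 0, "transversion": 0}

for a, b in zip(ref, obs):
    if a == b:
        continue
    within_class = ({a, b} <= purines) or ({a, b} <= pyrimidines)
    counts["transition" if within_class else "transversion"] += 1

print(counts)   # here: 3 transitions, 0 transversions
```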
Is there a theory describing the rule of nucleotide substitution during
evolution? I feel very ignorant and hope someone can give me a pointer to how
to explain my observation.
Any comments or suggestions are welcome.
Zxiong at arizvm1.ccit.arizona.edu
Paul Painter¹ and Lucas McConnell²
¹Materials Science and Engineering and The Energy Institute
²Renewergy Corporation, Erie PA.
Presently, biofuels in this country usually mean one of two things: ethanol (in the U.S. principally produced from corn) or biodiesel (largely from oilseeds or yellow grease). However, large-scale production of these fuels will inevitably lead to the displacement of croplands used to produce food, and there will clearly be a limit on the quantity of ethanol and biodiesel that can be obtained from these sources. Furthermore, although both biodiesel and ethanol have a number of attractive properties (in addition to being derived from a renewable source), they are not without problems (lower energy content, clogging of fuel lines and filters because of their ability to dissolve gums and other deposits, etc.).
It would clearly be advantageous if a cheap, relatively simple method were available to produce a predominantly hydrocarbon fuel (i.e., largely decarboxylated oils) from feedstocks that contain high contents of free fatty acids. One source that we wish to focus on in particular is algae, for the purposes of this project being produced by Renewergy Corporation. Renewergy has developed a proprietary, “aeroponic algalculture” technique that uses a fraction of the water needed by conventional processes and a simple way of increasing surface area for light and CO2 absorption.
In preliminary work, we have applied Kolbe electrolysis to the processing of algal oil. Kolbe electrolysis of fatty (alkanoic) acids was the first known electrochemical synthesis. Faraday had originally observed (in 1834) that hydrocarbons are formed upon electrolysis of acetate solutions, but it was H. Kolbe who performed the first detailed investigations of the reactions of carboxylic acids at an anode some fifteen years later. Essentially, the reaction involves the electrochemical oxidative decarboxylation of carboxylic acid salts that leads to radicals, which can then combine to form simple hydrocarbons. We have found that a number of side reactions occur, but these can be advantageous in producing biofuels.
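In textbook form, the sequence of steps in the Kolbe coupling just described can be summarized as follows (a standard schematic, not a reaction scheme taken from the authors' preliminary data):
\[
\begin{aligned}
\mathrm{RCOO^{-}} &\longrightarrow \mathrm{RCOO^{\bullet}} + e^{-} && \text{(anodic oxidation)}\\
\mathrm{RCOO^{\bullet}} &\longrightarrow \mathrm{R^{\bullet}} + \mathrm{CO_{2}} && \text{(decarboxylation)}\\
2\,\mathrm{R^{\bullet}} &\longrightarrow \mathrm{R{-}R} && \text{(radical coupling)}
\end{aligned}
\]
giving the overall anodic process \(2\,\mathrm{RCOO^{-}} \rightarrow \mathrm{R{-}R} + 2\,\mathrm{CO_{2}} + 2e^{-}\), i.e. a decarboxylated hydrocarbon plus carbon dioxide.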
For years now, we have been deluged with the news that the earth’s oceans are warming as a result of atmospheric changes due to the combustion of fossil fuels.
Typical of these was a 2005 story titled “Where’s The Heat? Think Deep Blue,” from United Press International, describing a recent paper in Science by NASA climate modeler James Hansen. UPI’s “Space Daily” wrote that “Over the past ten years, the heat content of the ocean has grown dramatically.”
Hansen’s study covered more than just the ocean surface temperature, which can fluctuate considerably from year to year. Rather, by considering a much deeper layer of water (the top 2,500 feet), Hansen actually calculated the increasing amount of heat being stored. According to the UPI story, this provided “a match” with computer model projections of global warming.
The ocean is a huge tub that integrates and stores long-term climate changes. Consequently, when computer models are based on ever-increasing atmospheric concentrations of carbon dioxide, the deep oceans warm, warm, and warm. Like a big pot on a small burner, it takes time to start up, but once the process starts, nothing should be able to stop it.
That’s the conventional wisdom of our climate models, but like the conventional wisdom on so many other aspects of life, it’s not true to nature.
In the next few weeks, John Lyman of the National Oceanic and Atmospheric Administration will publish a paper in the refereed journal Geophysical Research Letters showing that, globally, the top 2,500 feet of the ocean lost a tremendous amount of heat between 2003 and 2005 — in fact, about 20% of all the heat gained in the last half-century.
Needless to say, Lyman’s figures have climate scientists scratching their heads. No computer model predicts such behavior. And further, the changes in surface temperatures haven’t corresponded (yet?) to the average changes at depth, although deep-water temperatures have also dropped some. Nor has the sea level dropped by an amount commensurate with the cooling (water volume varies slightly with temperature).
This last observation has led scientists to speculate that much more ice must be melting into the ocean than they normally assume — but no one has been able to find it, and it’s not for a lack of looking.
There’s another hypothesis out there that has received very little attention. It has to do with the amount of carbon dioxide accumulating in the atmosphere.
If carbon dioxide increases at a constant rate, basic physics — as understood since the 1860s — says that surface temperature will rise, but that the rate of heating will become lower and lower. In other words, in order for temperatures to increase at a constant rate, as has been observed since 1975, carbon dioxide would have to go up at an ever-increasing rate.
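The "basic physics" referred to here is the approximately logarithmic dependence of radiative forcing on CO2 concentration, commonly written as (a standard simplified expression, not a formula used in the column itself):
\[
\Delta F \approx \alpha \ln\!\left(\frac{C}{C_{0}}\right), \qquad \alpha \approx 5.35\ \mathrm{W\,m^{-2}},
\]
so a concentration that climbs by a fixed amount each year produces a forcing that grows ever more slowly, whereas an exponentially rising concentration, \(C(t) = C_{0}e^{kt}\), produces a forcing that grows linearly in time, \(\Delta F \approx \alpha k t\).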
But the ocean is so vast and slow to change that it takes several decades to realize the heating caused by carbon dioxide. Consequently, a change in the rate of carbon dioxide accumulation in the atmosphere wouldn’t be noticed for 30 to 60 years, depending upon whose calculations one believes.
Between the time atmospheric carbon dioxide was first directly measured, at Mauna Loa, Hawaii, in 1957, and 1975, it clearly increased exponentially. And once the ocean temperature began to rise, it did so at a constant rate.
Then, about 30 years ago, something very peculiar began to occur. Since 1975, it has been impossible to tell whether the amount of atmospheric carbon dioxide is increasing at an exponential or simply a constant rate.
Because of the lag time required for the oceans to register the change in carbon dioxide, it may not be a surprise that an interval of cooling has been detected. The timing is about right: around 30 years.
But that’s just another climate change hypothesis that time will test. Be forewarned, though. As we’ve learned from the completely unexpected cooling of the deep ocean that began in 2003, we know a lot less about climate change than we think.
Roy J. Plunkett
Roy J. Plunkett with a cable insulated with Teflon and a Teflon-coated muffin tin. Gift of Roy Plunkett. Courtesy Hagley Museum and Library.
From the 1930s to the present, beginning with neoprene and nylon, the American chemical industry has introduced a cornucopia of polymers to the consumer. Teflon, discovered by Roy J. Plunkett (1910–1994) at the DuPont Company’s Jackson Laboratory in 1938, was an accidental invention—unlike most of the other polymer products. But as Plunkett often told student audiences, his mind was prepared by education and training to recognize novelty.
As a poor Ohio farm boy during the Depression, Plunkett attended Manchester College in Indiana. His roommate for a time at this small college was Paul Flory, who would win the 1974 Nobel Prize in chemistry for his contributions to the theory of polymers. Like Flory, Plunkett went on to The Ohio State University for a doctorate, and also like Flory he was hired by DuPont. Unlike Flory, Plunkett made his entire career at DuPont.
Reenactment of the 1938 discovery of Teflon. Left to right: Jack Rebok, Robert McHarness, and Roy Plunkett. Courtesy Hagley Museum and Library.
Plunkett’s first assignment at DuPont was researching new chlorofluorocarbon refrigerants—then seen as great advances over earlier refrigerants like sulfur dioxide and ammonia, which regularly poisoned food-industry workers and people in their homes. Plunkett had produced 100 pounds of tetrafluoroethylene gas (TFE) and stored it in small cylinders at dry-ice temperatures preparatory to chlorinating it. When he and his helper prepared a cylinder for use, none of the gas came out—yet the cylinder weighed the same as before. They opened it and found a white powder, which Plunkett had the presence of mind to characterize for properties other than refrigeration potential. He found the substance to be heat resistant and chemically inert, and to have very low surface friction so that most other substances would not adhere to it. Plunkett realized that, against the predictions of polymer science of the day, TFE had polymerized to produce this substance—later named Teflon—with such potentially useful characteristics. Chemists and engineers in the Central Research Department with special experience in polymer research and development investigated the substance further. Meanwhile, Plunkett was transferred to the tetraethyl lead division of DuPont, which produced the additive that for many years boosted gasoline octane levels.
At first it seemed that Teflon was so expensive to produce that it would never find a market. Its first use was fulfilling the requirements of the gaseous diffusion process of the Manhattan Project for materials that could resist corrosion by fluorine or its compounds (see Ralph Landau). Teflon pots and pans were invented years later. The awarding of Philadelphia’s Scott Medal in 1951 to Plunkett—the first of many honors for his discovery—provided the occasion for the introduction of Teflon bakeware to the public: each guest at the banquet went home with a Teflon-coated muffin tin.
Over the past 10 years, nitrogen concentration trends are downward at about half (16 out of 33) of the monitoring sites within the Bay watershed. The trend results indicate that in many locations, management actions, such as improved wastewater treatment and nonpoint-source pollution controls (i.e., urban stormwater runoff and agricultural runoff controls), have reduced nitrogen concentrations in streams. In addition, over the last 5 years, higher yields have tended to be located in the northern half of the watershed; conversely, lower yields are more numerous in the southern half. The short-term flow-adjusted trends and yields indicator is calculated, and results and maps are published annually by the U.S. Geological Survey, as part of a larger effort to determine loads and trends in nutrient and sediment concentrations and streamflow in the Chesapeake Bay watershed.
Now that you're comfortable using the MySQL client tools to manipulate data in the database, you can begin using PHP to display and modify data from the database. PHP has standard functions for working with the database.
First, we're going to discuss PHP's built-in database functions. We'll also show you how to use the PEAR database functions, which provide the ability to access any supported database through the same set of functions. This type of flexibility comes from a process called abstraction: the information you need to log in to a database is placed into a standard format, and that standard format allows you to interact with MySQL as well as other databases in the same way. Similarly, MySQL-specific functions are replaced with generic ones that know how to talk to many databases.
In this chapter, you'll learn how to connect to a MySQL server from PHP, learn how to use PHP to access and retrieve stored data, and how to correctly display information to the user.
The basic steps of performing a query, whether using the mysql command-line tool or PHP, are the same:
Connect to the database.
Select the database to use.
Build a SELECT statement.
Perform the query.
Display the results.
We'll walk through each of these steps for both plain PHP and PEAR functions.
When connecting to a MySQL database, you will use two new resources. The first is the link identifier, which holds all of the information necessary to maintain an active connection to the database. The other is the results resource, which contains all the information required to retrieve results from an active database query's result set. You'll be creating and assigning both resources in this chapter.
In 1811, Joseph Fourier, the 43-year-old prefect of the French district of Isère, entered a competition in heat research sponsored by the French Academy of Sciences. The paper he submitted described a novel analytical technique that we today call the Fourier transform, and it won the competition; but the prize jury declined to publish it, criticizing the sloppiness of Fourier's reasoning. According to Jean-Pierre Kahane, a French mathematician and current member of the academy, as late as the early 1970s, Fourier's name still didn't turn up in the major French encyclopedia the Encyclopædia Universalis.
Now, however, his name is everywhere. The Fourier transform is a way to decompose a signal into its constituent frequencies, and versions of it are used to generate and filter cell-phone and Wi-Fi transmissions, to compress audio, image, and video files so that they take up less bandwidth, and to solve differential equations, among other things. It's so ubiquitous that "you don't really study the Fourier transform for what it is," says Laurent Demanet, an assistant professor of applied mathematics at MIT. "You take a class in signal processing, and there it is. You don't have any choice."
The Fourier transform comes in three varieties: the plain old Fourier transform, the Fourier series, and the discrete Fourier transform. But it's the discrete Fourier transform (DFT) that accounts for the Fourier revival. In 1965, the computer scientists James Cooley and John Tukey described an algorithm called the fast Fourier transform, which made it much easier to calculate DFTs on a computer. All of a sudden, the DFT became a practical way to process digital signals.
To get a sense of what the DFT does, consider an MP3 player plugged into a loudspeaker. The MP3 player sends the speaker audio information as fluctuations in the voltage of an electrical signal. Those fluctuations cause the speaker drum to vibrate, which in turn causes air particles to move, producing sound.
An audio signal's fluctuations over time can be depicted as a graph: the x-axis is time, and the y-axis is the voltage of the electrical signal, or perhaps the movement of the speaker drum or air particles. Either way, the signal ends up looking like an erratic wavelike squiggle. But when you listen to the sound produced from that squiggle, you can clearly distinguish all the instruments in a symphony orchestra, playing discrete notes at the same time.
That's because the erratic squiggle is, effectively, the sum of a number of much more regular squiggles, which represent different frequencies of sound. "Frequency" just means the rate at which air molecules go back and forth, or a voltage fluctuates, and it can be represented as the rate at which a regular squiggle goes up and down. When you add two frequencies together, the resulting squiggle goes up where both the component frequencies go up, goes down where they both go down, and does something in between where they're going in different directions.
The DFT does mathematically what the human ear does physically: decompose a signal into its component frequencies. Unlike the analog signal from, say, a record player, the digital signal from an MP3 player is just a series of numbers, representing very short samples of a real-world sound: CD-quality digital audio recording, for instance, collects 44,100 samples a second. If you extract some number of consecutive values from a digital signal -- 8, or 128, or 1,000 -- the DFT represents them as the weighted sum of an equivalent number of frequencies. ("Weighted" just means that some of the frequencies count more than others toward the total.)
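A minimal numerical sketch of that statement, using NumPy's FFT for convenience: eight samples go in, eight complex frequency weights come out, and summing the weighted complex sinusoids reproduces the original samples.

```python
import numpy as np

x = np.array([0.0, 0.7, 1.0, 0.7, 0.0, -0.7, -1.0, -0.7])  # eight samples
N = len(x)

X = np.fft.fft(x)   # the DFT: one complex weight per frequency

# Rebuild the samples as a weighted sum of complex sinusoids.
n = np.arange(N)
rebuilt = sum(X[k] * np.exp(2j * np.pi * k * n / N) for k in range(N)) / N

print(np.allclose(rebuilt.real, x))   # True: the weighted sum is the signal
```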
The application of the DFT to wireless technologies is fairly straightforward: the ability to break a signal into its constituent frequencies lets cell-phone towers, for instance, disentangle transmissions from different users, allowing more of them to share the air.
The application to data compression is less intuitive. But if you extract an 8x8 block of pixels from an image, each row or column is simply a sequence of eight numbers -- like a digital signal with eight samples. The whole block can thus be represented as the weighted sum of 64 frequencies. If there's little variation in color across the block, the weights of most of those frequencies will be zero or near zero. Throwing out the frequencies with low weights allows the block to be represented with fewer bits but little loss of fidelity.
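To make the compression idea concrete, the sketch below transforms a smooth synthetic 8 x 8 block, discards all but the largest-magnitude coefficients, and inverts the transform; a 2-D FFT stands in for the closely related discrete cosine transform used in JPEG-style codecs.

```python
import numpy as np

# A smooth synthetic 8 x 8 block of "pixel" values.
i, j = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
block = 128 + 40 * np.cos(np.pi * i / 8) + 10 * np.cos(np.pi * j / 8)

coeffs = np.fft.fft2(block)

# Keep only the 8 largest-magnitude coefficients out of 64.
threshold = np.sort(np.abs(coeffs).ravel())[-8]
compressed = np.where(np.abs(coeffs) >= threshold, coeffs, 0)

restored = np.fft.ifft2(compressed).real
print("coefficients kept:", int(np.count_nonzero(compressed)))
print("max pixel error:  ", round(float(np.abs(restored - block).max()), 2))
```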
Demanet points out that the DFT has plenty of other applications, in areas like spectroscopy, magnetic resonance imaging, and quantum computing. But ultimately, he says, "It's hard to explain what sort of impact Fourier's had," because the Fourier transform is such a fundamental concept that by now, "it's part of the language."
Sea-levels are rising unevenly around the world, with Pacific countries in particular suffering significant increases over the past two decades, according to accurate new satellite data.
On average, global sea-levels have been rising at about three millimeters (mm) a year, however, this masks large differences between regions of the world.
While some regions have seen sea-level rises of 12 mm a year, others have actually seen decreases of about 12 mm a year.
The results are based on radar readings from the European Space Agency (ESA) over an 18-year period from October 1992 to March 2010.
ESA used its satellites to send radar pulses to the sea surface below, recording the time delay in their return and creating a precise measurement of the satellites' height above the sea surface.
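Schematically, and ignoring the atmospheric and instrument corrections applied in practice, the altimeter converts the measured round-trip delay into a range, and the sea-surface height follows from the independently tracked orbit:
\[
R = \frac{c\,\Delta t}{2}, \qquad \mathrm{SSH} \approx H_{\mathrm{orbit}} - R,
\]
where \(c\) is the speed of light, \(\Delta t\) the round-trip delay of the radar pulse, and \(H_{\mathrm{orbit}}\) the satellite's altitude above a reference surface.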
Scientists say sea-level rises are the result of the expansion of water due to rising temperatures, melting of glaciers and the melting of polar ice sheets.
The worst hit regions over the past two decades, according to the ESA data, have been the Pacific countries of Indonesia, Papua New Guinea, Philippines and vulnerable Pacific islands like the Solomon Islands.
The Philippines, for one, is already frequently subjected to flooding and landslides caused by heavy rain, with seasonal monsoon rains in August killing at least 11 people.
Scientists suggest regions that have seen high sea level rises over the past 20 years will not necessarily continue to see higher than average sea-level rises in the future.
"We suspect that the bigger the differences get, the more they will tend to level out in the future," says Robert Meisner, a spokesperson for ESA. | <urn:uuid:6d728551-c8bc-4464-a2ba-460b4a692d95> | 3.6875 | 325 | News Article | Science & Tech. | 32.980934 | 617 |
Some elements of a window interface contain collections of items, for example rows of buttons, lists of filenames, and groups of menu items. Such elements are known in the CAPI as collections .
In most collections, items may be selected by the user -- for example, a row of buttons. Collections whose items can be selected are known as choices . Each button in a row of buttons is either checked or unchecked, showing something about the application's state -- perhaps that color graphics are switched on and sound is switched off. This selection state came about as the result of a choice the user made when running the application, or default choices made by the application itself.
The CAPI provides a convenient way of producing groups of items from which collections and choices can be made. The abstract class collection provides a means of specifying a group of items. The subclass choice provides groups of selectable items, where you may specify what initial state they are in, and what happens when the selection is changed. Subclasses of choice used for producing particular kinds of grouped elements are described in the sections that follow.
All the choices described in this chapter can be given a print function via the :print-function keyword. This allows you to control the way in which items in the element are displayed. For example, passing the argument 'string-capitalize would capitalize the initial letters of all the words of text that an instance of a choice displays.
Some of the examples in this chapter require the functions
which were introduced in Creating Common Windows. | <urn:uuid:7221b702-b5e1-48e6-bcba-348b013f6d0f> | 3.703125 | 304 | Documentation | Software Dev. | 45.642056 | 618 |
The First (and Last) Voyage to the Bottom of the Sea
A half-century ago, humanity arrived somewhere no one had ever gone before: the deepest place on Earth.
Before the Apollo missions landed men on the moon, the U.S. Navy dove to the bottom of the sea: the Challenger Deep in the Mariana Trench, some 35,797 feet (10,911 meters) down.
Just as no one has visited the moon since Apollo, nobody has returned to this abyss since that first voyage to the bottom of the trench in 1960. However, just as scientists are revisiting the moon with space probes, so too are researchers now deploying robots to explore this deepest depth of the ocean .
The research vessel used to reach the record-setting depth near Guam in the Pacific Ocean on Jan. 23, 1960, was named the Trieste, a Swiss-designed bathyscaphe or "deep boat" named after the Italian city where much of it was built. Its two-man crew, Lt. Don Walsh of the U.S. Navy and scientist Jacques Piccard, son of the craft's designer, nestled inside a roughly 6.5-foot (2-meter) wide white pressure sphere on the underside of the submersible. The rest of the nearly 60-foot (18-meter) long Trieste was filled with floats loaded with some 33,350 gallons (126,243 liters) of gasoline for buoyancy, along with nine tons of iron pellets to weigh it down.
To withstand the high pressure at the bottom of Challenger Deep, roughly eight tons per square inch, the sphere's walls were 5 inches (12.7 cm) thick. To see outside, the crew relied on a window made of a single cone-shaped block of Plexiglas, the only transparent compound they could find strong enough to survive the pressure at the thickness needed, along with lamps to light up the sunless abyss.
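A rough hydrostatic check of that figure (an approximation added here, not a calculation from the article; seawater density is assumed constant with depth):

rho = 1030.0                             # assumed average seawater density, kg/m^3
g = 9.81                                 # gravitational acceleration, m/s^2
depth_m = 10_911.0                       # the Challenger Deep depth quoted above

pressure_pa = rho * g * depth_m          # hydrostatic pressure, about 1.1e8 Pa
psi = pressure_pa / 6894.76              # pascals per pound per square inch
print(round(psi), round(psi / 2000.0, 1))   # about 15,990 psi, i.e. roughly 8 short tons per square inch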
"The pressure is tremendous," said geophysicist David Sandwell at the University of California, San Diego, who helped create the first detailed global maps of the seafloor.
The descent, the first and only manned voyage to the bottom of Challenger Deep, took 4 hours and 48 minutes at a rate of about a yard (0.9 meters) a second. As if to highlight the dangers of the dive, after passing about 27,000 feet (9,000 meters) one of the outer window panes cracked, violently shaking the entire vessel.
The two men spent just 20 minutes at the ocean floor, eating chocolate bars for energy in the cold deep; the temperature in the cabin was only 45 degrees Fahrenheit (7 degrees Celsius). They actually managed to speak with the craft's mothership using a sonar-hydrophone system; although sound travels through water at a speed of nearly a mile per second, it still took about seven seconds for a voice message to travel from the craft upward.
While at the bottom, the explorers not only saw jellyfish and shrimp-like creatures, but actually spied a couple of small white flatfish swimming away, proving that at least some vertebrate life could withstand the extremes of the bottom of the ocean. The floor of Challenger Deep seemed to be made of diatomaceous ooze, a fine white silt made of microscopic algae known as diatoms.
To ascend, they magnetically released the ballast, a trip that took 3 hours, 15 minutes. Since then, no man has ever returned to Challenger Deep.
"It's hard to build something that can survive that kind of pressure and have people inside," Sandwell noted.
In many ways, the Trieste laid the foundation for the Navy's deep-submergence program. In fact, in 1963, it was used to locate the sunken nuclear submarine USS Thresher.
In addition, in recent years, robots have made the journey back to Challenger Deep. In 1995, the Japanese craft Kaiko reached the bottom, while the Nereus hybrid remotely operated vehicle reached the bottom last year.
Perhaps as explorers one day hope to return to the moon, so too might adventurers, and not just robots, revisit the deeps in the future.
MORE FROM LiveScience.com | <urn:uuid:1cfec21c-f34f-4a68-b967-6109f33f8afc> | 3.9375 | 889 | News Article | Science & Tech. | 57.300443 | 619 |
Why Does Copper Turn Green?
Copper turns green because of chemical reactions with the elements.
For the same reason that iron rusts.
Just as iron that is left unprotected in open air will corrode and form a flaky orange-red outer layer, copper that is exposed to the elements undergoes a series of chemical reactions that give the shiny metal a pale green outer layer called a patina.
The patina actually protects the copper below the surface from further corrosion, making it a good water-proofing material for roofs (which is why the roofs of so many old buildings are bright green).
In fact, the weathering and oxidation of the Statue of Liberty's copper skin has amounted to just .005 of an inch over the last century, according to the Copper Development Association.
MORE FROM LiveScience.com | <urn:uuid:74fec3df-1ff7-4026-9308-fe6758e0d9e2> | 3.546875 | 169 | Knowledge Article | Science & Tech. | 51.391467 | 620 |
Published in Lunar and Planetary Science XXVII, pp. 1183-1184, LPI, Houston.
Introduction: Crustal processes and reactions during hydrothermal and biogenic activity result in extreme degrees of sulfur isotopic fractionation on Earth. For example, delta 34S in terrestrial sulfides ranges from -70 to +70 on Earth. In contrast, delta 34S values for sulfides from other planetary bodies that have been sampled (Moon, asteroids) show a very limited mass fractionation. The standard deviation in the bulk isotopic composition of sulfur in meteorites of all types is less than 0.1. However, the isotopic composition of sulfides in meteorites shows slightly more variability. Troilite in Orgueil, a carbonaceous chondrite, has a delta 34S of 2.6. Kaplan and Hulston showed that sulfides in enstatite chondrites have delta 34S of between +1.6 and +2.5. The delta 34S in troilite from ordinary chondrites ranges from -2.7 to +2.5. The slight fractionation of 34S into these sulfides has been attributed to nebular heterogeneity, low temperature (100°C) reactions between water and elemental sulfur, and oxidation of FeS in an aqueous environment [2,4,5]. Lunar materials exhibit a much broader variation in bulk delta 34S than has been observed in meteorites. Whereas bulk lunar rocks show variability on the order of +0.37 to +0.68, lunar soils have delta 34S as high as +9.76. These high values in the bulk lunar soils have been attributed to preferential volatilization of 32S during sputtering caused by micrometeorite bombardment. Until now, S fractionation processes on the larger terrestrial planets such as Mercury, Venus, and Mars have been only speculative. With the discovery of a possible Martian meteorite with an imprint of a Martian hydrothermal system, we can gain insights into S fractionation on another planet.
SNC Meteorite ALH 84001: ALH 84001 is a coarse-grained, clastic orthopyroxenite meteorite related to the SNC meteorite group. A hydrothermal signature is superimposed upon the orthopyroxene-dominant igneous mineral assemblage. This hydrothermal overprint consists of carbonate assemblages occurring in spheroidal aggregates and as fine-grained carbonate and sulfide vug-filling structures [7-10]. The sulfide has been identified as pyrite. Textural interpretations of shock features in the carbonates have led to the interpretation that the carbonate-sulfide mineralization was a result of influxes of fluids during Martian hydrothermal activity.
Isotopic Analysis of Pyrite in ALH 84001: The sulfur isotopic measurements were made using a Cameca IMS-4f ion microprobe operated by a University of New Mexico-Sandia National Laboratory consortium on the UNM campus. A Cs+ primary beam was focused to a spot of between 8 and 10 µm. 32S- and 34S- were analyzed in the secondary ion beam. A S-isotope pyrite standard was analyzed in order to measure the degree of instrument-induced fractionation, precision, accuracy, and instrument drift over the period of an analytical session. The analytical precision measured on the standards is better than ±0.2, whereas the analytical precision measured on the samples is better than ±0.5. These reported precision values far exceed those reported in the literature for ion microprobe analysis of sulfur isotopes in sulfides [5,10].
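The delta notation used throughout is the standard per-mil deviation of the measured 34S/32S ratio from the Canon Diablo troilite reference; as a hedged Python sketch (the example ratio is invented and is not a measurement from this study):

R_STANDARD = 0.0450045                   # commonly quoted, approximate 34S/32S of Canon Diablo troilite

def delta_34S(r_sample, r_standard=R_STANDARD):
    # Per-mil deviation of a sample's 34S/32S ratio from the reference standard.
    return (r_sample / r_standard - 1.0) * 1000.0

print(round(delta_34S(R_STANDARD * 1.006), 1))   # a ratio 0.6% above the standard gives +6.0 per mil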
Results: delta 34S values for five pyrite grains were obtained from ALH 84001. Values for the pyrite range from +4.8 to +7. These delta 34S values are 34S enriched relative to Canon Diablo troilite. Based on the 2-sigma precision, there are real isotopic differences among pyrite grains.
Discussion: Sulfur isotopic characteristics of sulfides are constrained by a large number of variables, such as the sulfur isotopic characteristics of the hydrothermal fluid, temperature, pH, and fO2. The stability field of pyrite also influences the range of expected delta 34S values of the pyrite. Therefore, although sulfur isotopic systematics provide some information concerning the hydrothermal system, they are best used in conjunction with other data (mineral stability, other stable isotopes). In comparison with sulfides from other meteorites, the delta 34S of the pyrite from ALH 84001 is enriched in 34S. This signature implies that the planetary body represented by ALH 84001 experienced processes capable of fractionating S isotopes that were not functional on asteroidal bodies represented by chondrite and achondrite meteorites.
As was noted previously, the terrestrial delta 34S exhibits a wide variability. In particular, the large negative values in terrestrial delta 34S has been attributed, in many cases, to the bacterial reduction of sulfate to sulfide. The positive delta 34S measured in the ALH 84001 pyrite therefore suggests that the sulfur in this hydrothermal sulfide was not processed by bacteria in a manner analogous to terrestrial processes.
The positive delta 34S measured in the ALH 84001 pyrite may be attributed to several different processes that may be functioning on the Martian surface or in the shallow Martian crust:
Model 1: Assuming that the delta 34S in the fluid was essentially 0, the pyrite may be enriched in delta 34S by pH, temperature, and fO2 conditions during precipitation. The pH and the fugacity of oxygen may be approximated using the delta 34S data presented here, delta 13C data on the carbonates, a relatively low Sigma S, a temperature of precipitation of ~100°C, and the coexistence of pyrite and carbonate. Making these assumptions, precipitation occurred in a reduced and moderately alkaline environment with the dominant sulfur-bearing species in solution being HS-. At higher temperatures (~700°C) as suggested by [7,9], the delta 34S of pyrite in the stability fields of carbonate + pyrite will not have values that approach +5 to +8.
Model 2: The above interpretation makes the assumption that the delta 34S in the fluid was equal to 0. At more acidic conditions than suggested above (but at the same reducing conditions), delta 34S will not be strongly fractionated during pyrite precipitation from an aqueous solution. Therefore, under these conditions, the pyrite will approximate the delta 34S in the fluid. There are several potential processes that can generate positive delta 34S in the fluid under these pH and fO2 conditions: (2a) Previous isotopic studies of SNC meteorites indicated that the present Martian atmosphere is isotopically heavy in O, C, N, and H. Therefore, it is perhaps not surprising that other stable isotopes in the Martian atmosphere such as S should also be isotopically heavy. (2b) Alternatively, it has been documented that during lunar regolith formation and evolution, the bulk delta 34S increases. Therefore, impact-generated hydrothermal system models may result in the preferential volatilization of 32S relative to 34S during impact. (2c) Assessments of Martian soil mineralogy based on both Viking XRF measurements and SNC documentation have suggested that phases such as clays, Fe-oxides, carbonates, and Ca- and Mg-sulfates will be stable in the oxidizing Martian environment [i.e., 15]. It is expected that under such weathering environments, particularly with the stabilization of sulfates, 34S should be enriched in water-soluble components (Ca- and Mg-sulfates) in the soil. Leaching of the 34S-enriched water-soluble minerals in Martian soil produced by processes 2a,b,c will result in a positive delta 34Sfluid. Model (2a,b,c) implies that the source for the sulfur is rather shallow and that this groundwater-hydrothermal system is in isotopic communication with processes occurring at the Martian surface. Under this second model, the temperature of precipitation cannot be constrained by the sulfur data.
Conclusions: Our data indicate that the sulfur isotopes 32S and 34S in the sulfides in meteorite ALH 84001 have been fractionated to a greater extent than what has been documented in other meteorites. This, in itself, is another piece of information that links this orthopyroxenite to a planetary body that has experienced processes not present on chondrite and achondrite parent bodies. Mineralogical data suggest that the alteration assemblages were deposited under reducing conditions and that SO42- was not a dominant species in the solution. Therefore, the extent of sulfur isotopic fractionation during pyrite precipitation from the hydrothermal solution was moderate, at alkaline conditions (delta 34Sfluid < delta 34Spyrite), to minor at low pH conditions (delta 34Sfluid = delta 34Spyrite). This suggests two different models for the generation of positive delta 34S in the pyrite. If the pyrite precipitated at low temperature (100°-150°C), reducing conditions, and high pH (<9), a delta 34Sfluid equal to 0 would precipitate pyrite with delta 34Spyrite between 5 and 8. Under more acidic conditions, the delta 34Sfluid will be equal to that of the pyrite. This requires the positive delta 34Sfluid signature to be produced prior to pyrite deposition. The positive delta 34S in the fluid may be attributed to upper atmospheric processes, impact processes, or low-temperature weathering reactions enriching the soil in 34S. These components may then be leached and their delta 34S signature transported to the location of precipitation. This process requires isotopic communication between the hydrothermal system and the Martian surface. If the isotopic signature of the sulfide reflects communication with surficial-atmospheric processes, it may constrain additional aspects of Martian atmosphere evolution.
References: Ohmoto H. and Rye R. O. (1979) in Geochemistry of Hydrothermal Ore Deposits (ed. Barnes H. L.), pp. 509-567. Pillinger C. T. (1984) Geochim. Cosmochim. Acta, 48, 2739-2766. Monster J. et al. (1965) Geochim. Cosmochim. Acta, 29, 773-779. Kaplan I. R. and Hulston J. R. (1965) Geochim. Cosmochim. Acta, 30, 479-496. Paterson B. A. et al. (1994) Lunar and Planetary Science XXV, 1057-1058. Kerridge J. F. and Kaplan I. R. (1978) Proc. Lunar Planet. Sci. Conf. 9th, 1687-1709. Mittlefehldt D. W. (1994) Meteoritics, 29, 214-221. Romanek C. S. et al. (1995) Meteoritics, 30, 567-568. Harvey R. P. and McSween H. Y. (1995) Lunar and Planetary Science XXVI, 555-556. McKibben M. A. and Eldridge C. S. (1995) Economic Geology, 90, 228-245. Rye R. O. and Ohmoto H. (1974) Economic Geology, 69, 826-842. Romanek C. S. et al. (1994) Nature, 372, 655-656. Wentworth S. J. and Gooding J. L. (1995) Lunar and Planetary Science XXVI, 1489-1490. Jakosky B. M. (1993) Geophys. Res. Lett., 20, 1591-1594. Gooding J. L. et al. (1988) Meteoritics, 26, 135-143. | <urn:uuid:bccae671-f765-40b4-9299-ad77496383ea> | 3.390625 | 2,578 | Academic Writing | Science & Tech. | 44.855372 | 621 |
Functions in Lisp
Column Tag: Lisp Listener
"Functions in Lisp"
By Andy Cohen, Human Factors Engineering, Hughes Aircraft, MacTutor Contributing Editor
As you may recall from the first installment of the Lisp Listener, a procedure is a description of an action or computation. A primitive is a predefined or "builtin" procedure (e.g. "+"). As in Forth, Lisp can have procedures which are defined by the programmer. DEFUN, from DEfine FUNction, is used for this purpose. The syntax for DEFUN in Experlisp is as follows:
(DEFUN FunctionName (symbols)
(All sorts of computations which may or may Not use the values
represented by the symbols))
The function name is exactly that. Whenever the name is used the defined procedure associated with that function name is performed. The symbols are values which may or may not be required by the procedures within the defined function. If required, the values must follow the function name. When given, these values are assigned to the symbol. This is similar to the way values are assigned to a symbol when using SETQ. It is easier to see how DEFUN works when observed within an example:
;(DEFUN Reciprocal (n)
(/ 1 n))
The word "Reciprocal" is the function name and the numbers following are the values for which the reciprocal (1/n) are found. After the list containing DEFUN is entered and the carriage return is pressed the function and it's title are assigned a location in memory. The function name is then printed in the Listener window.
;(DEFUN Square (x)
(* x x))
;(DEFUN Cubed (y)
(* y (* y y)))
;(DEFUN AVERAGE (W X Y Z)
(/ (+ W X Y Z) 4))
;(Average 2 3 4 5)
You might recognize "Average" from last month's Lisp Listener. One might imagine using defined functions inside other defined functions. If it was possible to have variables which have the same values in each procedure, then the version of Lisp used has what is called dynamic scoping. In this context the values of the variable are determined by the Lisp environment which is resident when the procedure is called. Experlisp, however, is lexically scoped. That means that variable values are local to each procedure. Two defined procedures can use the same labels for variables, but the values will not be considered as the same. Each variable is defined locally. This is in accordance with the Common Lisp standard. Lexical scoping makes it easier to debug someone else's programs. If you don't know what I mean yet, don't worry. This subject will come up again in more detail later.
If no values are required by the defined function then "nil" or an empty list must follow the function name.
;(DEFUN Line ()
The empty list obviously contains no atoms (I'll describe the above function, "Line", later in the section on bunnies). It is synonymous with the special term nil, which is considered by Lisp as the opposite of T or True. Nil is used in many other contexts.
;(cddr '( one two))
In the above, the first cdr returns the list (two). The second cdr then returns the empty list, hence "nil".
EQUAL is a predicate which checks the equality of two arguments. Note the arguments can be integers or symbols. If the two arguments are equal then "T" is returned. If they are not equal then "nil" is returned.
;(EQUAL try try)
;(EQUAL 6732837 6732837)
;(EQUAL 6732837 6732833)
;(EQUAL First Second)
ATOM checks to see if its argument is a list or an atom. Remember, the single quote is used to indicate that what follows is not evaluated, as in the case of a list. Symbols are evaluated.
;(ATOM 'thing)
;(ATOM thing)
;(ATOM (A B C D))
In the first of the above 'thing is an atom due to the single quote. In the second, thing is considered a symbol. A symbol is evaluated and contains a value or values as a list. In the third, (A B C D) is obviously a list.
LISTP checks if its argument is a list.
;(LISTP '( 23 45 65 12 1))
;(SETQ babble '(wd ihc wi kw))
One interesting observation is that nil is both an atom and a list, ()=nil. Therefore ATOM and LISTP both return true for nil.
When one needs to know if a list is empty, NULL does the job.
;(NULL (X Y Z))
NUMBERP checks if the argument that follows is or represents a number rather than a string.
;(SETQ fifty-six '(56))
Now for a real slick one. MEMBER tests whether or not an argument is a part of a list. An easy demonstration follows:
;(MEMBER 'bananas (apples pears bananas))
;(apples pears bananas)
;(MEMBER 'grapes (apples pears bananas))
When the argument is a member, then the contents of the list are given. If not then nil is returned. MEMBER also checks symbols of lists.
;(SETQ fruit '(apples grapes pears))
;(MEMBER 'grapes fruit)
; (apples grapes pears)
;(MEMBER 'banana fruit)
EVENP tests to see if an integer is even and MINUSP checks if an integer is negative. ODDP and PLUSP are not needed since they are simply opposite of the first two.
;(EVENP (- 806 35))
;(MINUSP (- 34 86))
In the second and fourth examples above the lists contained within are calculated prior to predicate evaluation (806-35=771 & 34-86=-52). There are a few more simple predicates such as NOT, <, >, and ZEROP. I'll discuss them along with conditionals next month. Now for something completely different.
If you've ever learned Logo, the concept of Bunny graphics should sound familiar. As mentioned last month, the Bunny is Expertelligence's version of the Turtle. All one needs to do in order to make a Bunny move is to tell it to. FORWARD X initially moves the Bunny upwards on the screen for 'X' display pixels. A negative number initially moves it down. When one enters the following in the Listener window,
the default graphics window (I'll discuss windows in more detail very soon in future installments) is then opened and the following is drawn:
RIGHT X aims the front of the line to the right by X degrees. If one then uses forward again the line moves in a different direction. For example:
;((RIGHT 50) (FORWARD 50))
or better yet
(RIGHT 50) (FORWARD 50)
After a line is moved, the end of the line remains where it was. If one made the Bunny move again the beginning of the new line would begin where the old left off. The original starting point is the graphics window default home position. This position is in the center of each graphics window when the window is first created. In order to return the Bunny to the original starting point one must use HOME.
The following produces a much neater triangle:
(DEFUN Triangle ()
(Penup) (Left 45)
(Forward 10) (Pendown)
(Right 90) (Forward 25)
(Right 90) (Forward 50)
(Right 135) (Forward 71)
(Right 135) (Forward 25))
After the above is typed into the edit buffer the "Compile All" selection should be chosen from the Menu Bar. The source code in the Edit Buffer quickly inverts to white letters on a black background as if the whole file was selected for a moment. The function name "Triangle" is then printed in the Listener window. If the user enters the following in the Listener Window a different triangle is drawn in the default Graphics Window:
If you look at the code in Triangle you will see a couple more Bunny commands. LEFT does the same as RIGHT but in the opposite direction. PENUP raises the Bunny's pen so that when the Bunny moves no lines are drawn. PENDOWN returns the Bunny to the drawing orientation. The first line of code in "Triangle" puts the Bunny off the Home position so that the drawn triangle will be centered on the screen. As mentioned earlier, the orientation of the bunny remains. The last line of code in "Triangle" left the Bunny aimed at about 1:00 rather than the initial position, 12:00. If we were to make "Triangle" execute ten times without eliminating the Graphics Window the following would result:
In getting "Triangle" to execute recompilation of the code in the edit buffer is not necessary. To get the above one can type the function name into a list ten times within the Listener window. The following however, is easier:
;(Dotimes (a 10) (Triangle))
DOTIMES is very similar to the FOR...NEXT looping routine in BASIC. I'll discuss it next month in a description of iteration and recursion in ExperLisp.
If we wanted to use a three dimensional bunny then the following would be added before "Triangle" in the Edit Buffer window:
(SETQ curbun (new3dbun))
(Pitch 30) (Yaw 45) (Roll 50)
Something like the following is drawn after the source code is recompiled and "(Triangle)" is entered into the Listener Window:
CURBUN is a special symbol in ExperLisp which always refers to the Bunny cursor. NEW3DBUN is a special term which always changes CURBUN. The default Bunny is 2 dimensional. If one wanted the Spherical Bunny then the following would be entered into the beginning of the first version of "Triangle":
(SETQ curbun (newspbun))
This would then produce what follows:
In order to have the above drawn in a different orientation, different Bunny direction would be required. Windows, two and three dimensional Bunny graphics and toolbox graphics use the same X,Y coordinate system. Home is 0,0. Dual negative coordinates are situated towards the upper left corner. Dual positive coordinates are situated towards the lower right corner. The range is +32767 to -32768 for each dimension. In ExperLisp one can sometimes use the third dimension, as in the 3D sample of "Triangle". Negative Z values are behind Home, while positive Z values are in front. The following illustrates the coordinate system in ExperLisp:
The ExperLisp disk contains three essential files: Compiler, LispENV and Experlisp. Compiler is not actually the entire Lisp compiler. It contains the information needed in generating all of the higher level Lisp syntactics, such as the Bunny graphics. LispENV stands for Lisp Environment and it is simply a duplication of Compiler. LispENV contains information on how the Macintosh memory was organized by the programmer and ExperLisp during the previous session. It also contains information on the system configuration such as the number of disk drives, the amount of memory, etc. Sometimes LispENV can be messed up (i.e. by changing the variable table). When this happens one might not be able to start ExperLisp. In this case LispENV should be removed from the disk. Afterward, when ExperLisp is opened, Compiler generates a new LispENV. Compiler is not needed on the disk unless the LispENV is ruined. Deleting it will provide 100K more space on the disk. Before eliminating it from the disk however, be sure you have a backup as it is an essential file. The Experlisp file contains the assembly language routines which represent the lower level Lisp routines like CAR and CDR. It also allows access to the Macintosh toolbox routines and contains the Listener Window. One opens the Experlisp file in starting a programming session with ExperLisp. Another file on the disk is automatically loaded and activated when Experlisp is booted. It is labeled ªlispinit. The contents of this file can be added to so that when one boots up ExperLisp a program can be automatically executed. It can also do automatic configurations. However the contents of ªlispinit should not be changed since it configures the Macintosh memory for ExperLisp.
Next month I'll discuss a few more predicate procedures. I also hope to start discussing iteration, recursion and conditionals. If there is enough room left over I might also begin discussing how to access the toolbox graphics. | <urn:uuid:9ed9dd59-1ad8-422d-be1a-e1d062d262b6> | 3.53125 | 2,807 | Documentation | Software Dev. | 54.579133 | 622 |
Interactive Java Tutorials
Refractive Index Determination
Oblique illumination is sometimes utilized as an alternative to the Becke line test to determine whether the refractive index of a specimen is higher or lower than that of the surrounding medium. This interactive tutorial explores how variations in the refractive index of a specimen and its surrounding medium alter visibility in the microscope when utilizing oblique illumination techniques.
The tutorial initializes with a specimen having a refractive index of 1.15 positioned in a surrounding medium of refractive index 1.33 (representing water or lightly buffered aqueous saline solution), and being illuminated with off-axis light rays originating from the lower left-hand side of the tutorial window. In order to operate the tutorial, translate the Specimen (refractive index value) slider between values of 1.0 and 1.5. As the slider is moved from right to left, the specimen appearance is altered in the Eyepiece View window. To change the refractive index of the surrounding medium, move the Surround (refractive index value) slider to the right or left (this slider also has a range of refractive indices between 1.0 and 1.5). Translating the Surround slider will also affect specimen appearance and visibility, as discussed below. The two sliders are interactive and positioning of one slider will affect the range of motion of the other.
In situations where the specimen is mounted in a medium of lower refractive index, shading that results from the anaxial illumination will appear on the side opposite to that from which the light enters the specimen, and vice versa, as illustrated in Figure 1. For both diagrams presented in Figure 1, two equal sized oblique light rays are depicted entering the specimen through the surrounding medium at the same angle of incidence. At point A on the left-hand diagram, the light is spread over a larger area of the specimen than at point B, so that the area near point A on the specimen appears darker than the area near point B. Under these conditions, one side of the specimen will appear shaded or somewhat darker than the other side when viewed through the microscope eyepieces (represented by points A' and B' in the upper left portion of Figure 1). This is the case when the specimen refractive index is higher than that of the surrounding medium.
The opposite effect occurs when the specimen has a lower refractive index than that of the surrounding medium (see the right-hand side of Figure 1). In this case, the shaded or darker side of the specimen will be on the side that is nearest to the oblique light sector stop. When the specimen and the surrounding medium have identical refractive indices, then the specimen will be transparent (or invisible) and will have no refractive effects on the oblique illumination.
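The rule can be restated compactly; the following Python sketch is an added illustration (not the tutorial's own code) and assumes, as in the tutorial, that the oblique light enters from the left:

def shaded_side(n_specimen, n_medium, light_from="left"):
    # Which side of the specimen appears darker under oblique illumination.
    opposite = {"left": "right", "right": "left"}[light_from]
    if n_specimen > n_medium:
        return opposite                  # shading appears opposite the side the light enters
    if n_specimen < n_medium:
        return light_from                # shading appears nearest the oblique light sector stop
    return "none (specimen is effectively invisible)"

print(shaded_side(1.15, 1.33))           # the tutorial's starting values give "left"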
The sensitivity of this refractive index determination technique is highly dependent on the condenser focal length, the iris diaphragm position, and the geometry of sector stops (if employed). In general, the best results are obtained when the condenser is carefully focused and an even field of illumination is achieved.
Matthew J. Parry-Hill and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
© 1998-2013 by
Michael W. Davidson and The Florida State University.
All Rights Reserved. No images, graphics, scripts, or applets may be reproduced or used in any manner without permission from the copyright holders. Use of this website means you agree to all of the Legal Terms and Conditions set forth by the owners.
For more information on microscope manufacturers,
use the buttons below to navigate to their websites: | <urn:uuid:34972ce5-3120-43af-a5f0-66855b128dd7> | 3.71875 | 813 | Tutorial | Science & Tech. | 37.144765 | 623 |
How do you picture climate change? Aside from the warming part, another problem lies in the future: rising sea levels, which means a higher risk of damaging floods in coastal communities.
According to Climate Central’s recent Surging Seas report, sea levels are rising fast. Since 1880, they have increased by about eight inches, and by 2030 they’re expected to increase as much as eight more inches.
Some people are already planning ahead for disaster. The president of Kiribati, a country in the South Pacific Islands, is planning to buy land in Fiji for the purpose of relocating the island’s entire population. This seems like a worst-case scenario, but it’s based on fact—flooding means displacement. Kiribati sits about two meters above sea level, and some of its flat coral reefs have already disappeared. The study estimates that 700 million people may become climate refugees by 2050. The countries most likely to be affected are Bangladesh, China, Vietnam, Thailand, and India, all with substantial low-lying coastal areas.
The threat of rising seas goes along with the recent findings of a panel of climate scientists, who confirmed that extreme weather disasters are imminent. In particular, parts of Mumbai in India could become uninhabitable due to floods and storms. The people most vulnerable to rising seas and extreme weather live in less developed regions of the world.
Here in the U.S., we won’t be spared. The report found that about five million people in the U.S. live in low-lying areas that are likely to be affected by 2050. Even land that sits four feet above the high tide line will be vulnerable as sea levels rise. And more than just people are threatened. Buildings, hospitals, military bases, agricultural lands, toxic waste dumps, and even some nuclear power plants sit in these areas that are at risk.
Florida tops the list of vulnerable states. It has the greatest population living less than four feet above high tide. Florida has already felt the effects of rising sea levels. Increased flooding has occurred in the southern tip of the state, especially in Miami-Dade and Broward counties. Freshwater aquifers serving southeastern Florida (Miami-Dade, Broward, and Palm Beach Counties) and the Florida Keys have been contaminated by salt water.
Climate Central has compiled a list of action plans and resources that states and organizations have developed. Some suggestions for how we can adapt to rising sea levels include engineered solutions, like seawalls, levees, and dikes, for the “impossible” places that need it most. High-risk states such as California and New Jersey have created resources of their own. | <urn:uuid:bb721fc4-ec4a-461b-8f94-a6d17ca13520> | 3.65625 | 550 | News Article | Science & Tech. | 47.16697 | 624 |
A tree-killing invasive insect, the hemlock woolly adelgid (HWA), was found for the first time in Indiana on a landscape tree in LaPorte County in mid-April.
Since its introduction to the Eastern United States in the mid-1920s, the HWA has infested about half the native range of Eastern hemlock. In certain areas of the Great Smoky Mountains, as many as 80 percent of the hemlocks have died due to infestation.
The finding of the tiny aphid-like insect, which destroys native hemlocks by feeding on the tree sap at the base of the needles, was confirmed by the USDA Animal Plant Health Inspection Service (APHIS). The insect was identified on a single hemlock as a result of a homeowner’s report. The infested tree may have originated from a landscape planting in Michigan and been brought into Indiana about five years ago. Preliminary searches have revealed no other infested trees in the area, but an extensive survey is underway.
“Fortunately, this find occurred outside of the native range of hemlock trees in Indiana, which greatly increases our chances of preventing spread to them,” said Phil Marshall, state entomologist for the DNR.
In Indiana, forests containing hemlocks are scattered throughout the west central and southern half of the state. Evergreen hemlock trees dot the steep slopes along Big Walnut Creek in Putnam County, relics of an earlier, cooler climate. The Nature Conservancy and the DNR Division of Nature Preserves own and manage over 2,000 acres along this creek to protect the hemlock trees, as well as the rest of the forested land. “It’s hard to imagine losing this species from Indiana’s forests”, said Chad Bladow, Director of Southern Indiana Stewardship. “There are already few places in the state where visitors can see hemlocks, and HWA could eliminate all of them”. Other Indiana sites which are well-known for having eastern hemlock include Turkey Run State Park and Shades State Park in Parke County and Hemlock Cliffs in Hoosier National Forest in Crawford County. The Conservancy has acquired lands to help expand each of these sites.
HWA is easily spread by wind, movement on birds and mammals such as deer, but most rapidly as a hitchhiker on infested horticultural material. The best way to protect hemlocks in Indiana from HWA is to simply not buy or plant hemlocks.
“Purchasing plant materials from areas of known HWA infestation are very likely to provide the source of any potential infestation in Indiana,” said Tom Swinford, regional ecologist for the DNR, noting that not every tree is inspected to guarantee it is not infected. “We should do everything we can to protect our unique and beautiful eastern Hemlock trees in Indiana. A visit to the Smoky Mountains shows just how sad and devastating this scourge can be.”
"HWA will be very destructive if it reaches our native hemlocks, but the more people who become aware of the dangers of moving plant material and firewood over long distances, the better chance we have at protecting our forests,” Marshall said.
The Conservancy works to prevent invasive species from taking hold in Indiana. “Prevention is the best medicine when it comes to invasive species,” notes Ellen Jacquart, Director of Northern Indiana Stewardship and coordinator for Invasive species issues for the Conservancy in Indiana. “Don’t buy hemlock for landscaping – choose another native tree instead, and help make sure our native hemlock stands survive.”
Named for the cottony covering over its body, HWA somewhat resembles a cotton swab attached to the underside of young hemlock twigs. Within two years, its feeding causes graying and thinning of needles. Highly infested trees will stop putting on new growth, and major branches die, beginning in the lower part of the tree. Eventually the whole tree is killed.
If you suspect an HWA infestation, call the Indiana DNR Invasive Species Hotline at 1-866-NO-EXOTIC.
The Nature Conservancy is a leading conservation organization working around the world to protect ecologically important lands and waters for nature and people. The Conservancy and its more than 1 million members have protected nearly 120 million acres worldwide. Visit The Nature Conservancy on the Web at www.nature.org. | <urn:uuid:41c9855e-64d1-4ba6-9dee-73b103d70bef> | 3.171875 | 947 | Knowledge Article | Science & Tech. | 40.973803 | 625 |
SAN FRANCISCO — A tsunami-producing fault in Lake Tahoe is overdue for another earthquake, scientists said here Tuesday at the annual meeting of the American Geophysical Union.
The West Tahoe Fault is capable of producing a magnitude-7.3 earthquake and tsunamis up to 30 feet (10 meters) high in the clear blue lake, where million-dollar homes line the shore, researchers said.
Earthquakes strike every 3,000 to 4,000 years on the fault, and the most recent shaker was 4,500 years ago, indicating the fault is overdue for another earthquake, said Jillian Maloney, a graduate student at the Scripps Institution of Oceanography in San Diego.
The West Tahoe fault defines the west shore of the lake, coming on shore at Baldwin Beach, passing through the southern third of Fallen Leaf Lake, and then descending into Christmas Valley near Echo Summit.
To trace the fault's history, Maloney and her colleagues examined data from a CHIRP seismic imaging system, which details underwater sediment layers at very high resolution. (CHIRP stands for compressed high intensity radar pulse.) The researchers correlated landslide deposits, which could be related to past earthquakes, throughout western Lake Tahoe and in small lakes immediately to the south with radiocarbon dates from the sediments.
The West Tahoe Fault has a complicated history, the analysis reveals. The fault appears to alternate between breaking all at once, in a 31-mile long (50 kilometer) fracture, and in smaller, shorter segments. The discovery has implications for the Tahoe's seismic hazard, because the size of an earthquake relates to the length of a fault rupture, Maloney said. The biggest earthquakes come from the longest fault fractures.
The correlations, while still at an early stage, indicate the last time the fault's entire length ruptured was 7,800 years ago, Maloney told OurAmazingPlanet. More recent quakes occurred on individual segments, she said.
Because the fault crosses the lake, scientists worry a future earthquake will cause a tsunami in Lake Tahoe. The monster waves could form in two ways: by the fault displacing ground under the lake, similar to Japan's Tohoku tsunami, or by causing landslides that displace the water. A combination of both could also create an even bigger wave.
Layers of sediment preserved in and around Lake Tahoe record evidence of past tsunamis, said Graham Kent, director of the Nevada Seismological Laboratory in Reno.
However, having smaller earthquakes on the West Tahoe Fault would be better for the ski town. "If it breaks up into multiple segments, it might not be as great a tsunami risk," Kent told OurAmazingPlanet.
The most recent earthquake in the Tahoe region was about 575 years ago, on the Incline Fault, which becomes active about every 10,000 to 15,000 years. Scientists estimate its earthquake size potential at magnitude 7.
At more than 1,645 feet (501 meters) deep, Lake Tahoe, which straddles the California and Nevada border in the seismically active Sierra Nevada region, is one of the world's deepest freshwater lakes.
© 2012 OurAmazingPlanet. All rights reserved. More from OurAmazingPlanet. | <urn:uuid:546ff675-1a22-4367-959b-435520e9d5ab> | 2.953125 | 719 | News Article | Science & Tech. | 43.044083 | 626 |
One of the most important information sources for stock assessments are living marine resource surveys. GIS is used for planning these surveys and for interpreting results. Surveys and biological studies provide data on species distribution, life history, migration patterns, diet and behavior that are fundamental to responding to all of NOAA Fisheries' mandates.
Example maps and data products: W. pollock habitat; 2004 Gulf Sturgeon Relocation; 3-D Mako Shark Track; Fishery Mapper Tool; Southeast Alaska Shallow Nearshore Waters Fish Atlas; Phytoplankton. | <urn:uuid:96e4b5bf-20bc-4068-9f0c-3a76b15d6e00> | 2.53125 | 113 | Knowledge Article | Science & Tech. | 22.802632 | 627 |
The Missing Key To Physics, ESP, Intelligence, Aging...
As you know the discoveries of Nikola Tesla have been purposefully minimized in the media, college curriculums, and in the history books. Well, there was another absolutely brilliant scientist whose discoveries are on a par with Tesla's, but his work was very diligently minimized before it ever received widespread attention, and that scientist's name was Albert Roy Davis. It may seem presumptuous to compare any scientist to Tesla, but if there is one that you can compare to him, in my opinion, it's Albert Roy Davis.
In 1936 Davis discovered that the North and South poles of magnetism are two separate energies with exact opposite effects on all matter. The North pole energy spins counterclockwise and causes matter to contract, and the South pole energy spins clockwise and causes matter to expand. Davis and his associate, Walter C. Rawls, Jr., found that this discovery had incredible implications in many areas of research.
It is the key to understanding the "new physics", the physics of UFO's. Modern physics considers the two poles to be a singular form of energy, not two separate energies. Magnetism is the foundation of physics. If you've ever heard Richard C. Hoagland talk about the "Russian physics", let me tell you this: the Russians knew nothing of this "new physics" until they adopted Davis' discovery.
It is the key to tapping into the true potential of the human mind. Davis and Rawls discovered that North pole magnetism can be used to dramatically increase our intelligence and our psychic abilities.
It is the key to understanding the legends of giants in ancient history, and the 2-3 foot beings as well. Davis and Rawls discovered that North pole exposed animals grew into much smaller, physically weaker adults. South pole exposed animals grew into much larger, physically stronger adults.
It is the key to aging. Davis and Rawls discovered that both North pole and South pole exposed animals lived much longer than animals with no magnetic exposure. | <urn:uuid:a6a92512-456a-44ea-9720-ba5675179fcd> | 2.625 | 411 | Comment Section | Science & Tech. | 46.155437 | 628 |
Solar power is the conversion of sunlight into electricity, either directly using photovoltaics (PV) or indirectly using concentrated solar power (CSP).
Commercial concentrated solar power plants were first developed in the 1980s. The 354 MW SEGS CSP installation is the largest solar power plant in the world, located in the Mojave Desert of California. Other large CSP plants include the Solnova Solar Power Station (150 MW) and the Andasol solar power station (150 MW), both in Spain. The 214 MW Gujarat Solar Park in India is the world's largest photovoltaic plant.
Solar power is the conversion of sunlight into electricity. Sunlight is converted directly into electricity using photovoltaics (PV), or indirectly with concentrated solar power (CSP), which typically concentrates the sun's energy to boil water that is then used to generate power. Other technologies also exist, such as Stirling engine dishes, which use a Stirling cycle engine to power a generator. Photovoltaics were initially used to power small and medium-sized applications, from calculators powered by a single solar cell to off-grid homes powered by a photovoltaic array.
A parabolic trough consists of a linear parabolic reflector which concentrates light onto a receiver placed along the reflector's focal line. The receiver is a tube positioned directly above the center of the parabolic mirror and filled with a working fluid. The reflector is built to follow the Sun during the daylight hours by tracking along a single axis. Parabolic trough systems provide the best land-use factor of any solar technology. The SEGS plants in California and Acciona's Nevada Solar One near Boulder City, Nevada are representatives of this technology. Compact Linear Fresnel Reflectors are CSP plants which use many thin mirror strips instead of parabolic mirrors to focus sunlight onto two tubes containing working fluid. This has the advantage that flat mirrors can be used, which are much cheaper than parabolic mirrors, and that more reflectors can be placed in the same amount of space, allowing more of the available sunlight to be used. Concentrating linear Fresnel reflectors can be used in either large or more compact plants.
The Stirling solar dish combines a parabolic concentrating dish with a Stirling engine that normally drives an electrical generator. The advantages of Stirling solar over photovoltaic cells are higher efficiency in converting sunlight into electricity and longer lifetime. Parabolic dish systems give the highest efficiency among CSP technologies. The 50 kW Big Dish in Canberra, Australia is an example of this technology.
Tag Words: None | <urn:uuid:219a7c65-e194-42a6-a8bf-51873343b562> | 3.671875 | 674 | Knowledge Article | Science & Tech. | 19.299651 | 629 |
Trends in Amphibian Occupancy in the United States
Michael J. Adams, David A. W. Miller, Erin Muths, Paul Stephen Corn, Evan H. Campbell Grant, Larissa L. Bailey, Gary M. Fellers, Robert N. Fisher, Walter J. Sadinski, Hardin Waddle, Susan C. Walls
Public Library of Science ONE
22 May 2012.
What we found
Based on sampling in protected areas from across the United States, including from the mid-Atlantic and from National Parks and Refuges across the northeast, ARMI has produced the first estimate of how fast we are losing amphibians.
Even though the declines seem small and negligible on the surface, they are not; small numbers build up to dramatic declines with time. For example, a species that disappears from 2.7 % of the places it is found per year will disappear from half of the places it occurs in 26 years if trends continue. More concerning is that even the species we thought were faring well – that is, fairly common and widespread -- are declining, on average. Fowler’s Toad (9 total years of data at 1 area: -0.06% annual trend) and Spring Peepers (26 total years at 5 areas: -0.06%) are examples of IUCN Least Concern Species for which we found a declining trend at the places we monitor. We also found evidence that amphibian declines are even taking place in protected areas like National Parks and National Wildlife Refuges. Check out the full publication here.
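That figure follows from simple compounding; a quick check in Python (added here for illustration, not taken from the paper):

import math

annual_loss = 0.027                                   # 2.7% of occupied sites lost per year
years_to_half = math.log(0.5) / math.log(1.0 - annual_loss)
print(round(years_to_half, 1))                        # about 25.3 years, i.e. roughly 26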
What we are doing
The Amphibian Research and Monitoring Initiative (ARMI) brings scientists and resource managers together to make real progress on a difficult problem. The ARMI program is a model for a productive program that links management and cutting edge science – since its inception in 2000, ARMI has produced over 430 publications on amphibian ecology, methodological advances for studying wildlife populations, and information useful to our DOI partners and beyond. We now have the first continental scale amphibian monitoring program at a point where broad-scale analyses can occur. This gives us new ways to study amphibian declines and look for ways to address the problem.
In the northeast, we are working with our resource management partners in NPS and FWS to identify and implement management strategies we think are optimal for maintaining populations - typically involving habitat manipulation. In addition, we will continue to monitor populations, and to develop novel research approaches to better understand what is causing declines, which will help to generate support for management options. | <urn:uuid:e2eb29c5-8df2-4b84-849c-9fb426183f45> | 2.71875 | 521 | News (Org.) | Science & Tech. | 46.174955 | 630 |
Photonics: Sensing on the way
Published online 01 August 2012
Hollow optical fibers containing light-emitting liquids hold big promises for biological sensing applications
Schematic illustration of a hollow fiber. The chemiluminescent liquid in the core (yellow) is guided through the fiber, also with help of further hole structures (dark blue).
Processing biological samples on a small substrate the size of a computer chip is becoming a common task for biotechnology applications. Given the small working area, however, probing samples on the substrate with light can be difficult. To address this issue, Xia Yu and co-workers at the A*STAR Singapore Institute of Manufacturing Technology have now developed an optical fiber system that is able to deliver light to microfluidic chips with high efficiency [1].
“Our compact optical fibers are designed for use with high-throughput detection systems,” says Yu. “They are ideal for use in space-restrictive locations.”
A common way of probing biological samples is by light. In this method, the sample is excited by an external light source and the light emitted in response is detected, which provides a unique fingerprint of the substance. Conventional techniques are able to deliver light to samples and probe the response, but they are not very efficient at probing a small sample volume. A solution to this is to use optical fibers that are able to guide light to small spaces. The drawback with this technique, however, has been that it can be difficult to insert the external probe light into the optical fiber with sufficient efficiencies.
Yu and her co-workers have now circumvented this problem by using optical fibers with a hollow core (see image). The empty hollow core can be filled with liquids — in this case, with chemiluminescent solutions. The liquid is important to promote the transport of light through the core. In addition, these solutions consist of two liquids that when brought together initiate a chemical reaction that emits light. If such a solution is placed directly within the hollow core the problem of coupling light into the fiber is circumvented. This not only avoids external light sources but also promotes an established technology.
“The use of chemical luminescence is a common technique for a variety of detection assays in biology,” says Yu. “By incorporating the emission mechanism into optical fibers, we can use it as a light source for sensing applications in microfluidics systems.”
First tests for such sensing applications are already underway, although some challenges remain. For example, there might be losses in the light emitted by the fluid if the emitted light is not perfectly confined within the fiber. Such problems can be solved through improved fiber designs and an appropriate choice of materials, and applications of these fibers for microfluidic systems are promising.
The A*STAR-affiliated researchers contributing to this research are from the Singapore Institute of Manufacturing Technology.
- Yu, X. et al. Chemiluminescence detection in liquid-core microstructured optical fibers. Sensors and Actuators B: Chemical 160, 800–803 (2011). | article | <urn:uuid:512b4ec2-ba1a-4c6f-8ec8-d09831c1eed6> | 3.125 | 637 | Academic Writing | Science & Tech. | 32.100344 | 631 |
Chemical technology news from across RSC Publishing.
Nanofactories monitor bacteria communication
03 March 2010
Scientists in the US have developed a microdevice that investigates how bacteria communicate with each other to enhance their resistance to drugs.
Bacteria communicate in a process called quorum sensing, in which they secrete small signalling molecules called autoinducers. When bacteria reach a quorum, their resistance to drugs is enhanced. William Bentley and co-workers from the University of Maryland have developed bio-inspired nanoscale factories that capture bacteria, deliver a drug right on the surface of the bacteria and test their responses.
'The overall goal is to understand how pathogens communicate with each other to make a more formidable team than each individual cell. We're trying to break down what exactly a quorum is and how it works', explains Bentley.
Microdevice could help develop the next generation of antimicrobials
The nanofactories assemble themselves on a chitosan coated electrode within a microfluidic device. They contain multiple modules that each perform a different function, including targeting and capturing bacteria cells, sensing raw materials in the vicinity and converting the raw materials into autoinducer molecules and transporting these back to the bacteria cell surface. Bentley used bacteria cells that were specially constructed to express a fluorescent protein in response to autoinducer signalling, which could be easily seen. The autoinducer molecules made by the nanofactories triggered the quorum sensing response of the bacteria, causing them to express the fluorescent protein.
'We're developing tools that enable rapid, cost-effective assembly of complex biological systems on devices so that the device can interrogate what the biology is doing', Bentley adds.
Michael Shuler, an expert in bioengineering at Cornell University, Ithaca, US, called the concept of nanofactories 'highly intriguing and novel'. He said that while applying the technique to the capture of quorum sensing bacteria was important for controlling some types of bacteria without antibiotics, the most exciting thing for him was the potential of the nanofactories to be integrated with microfluidics or other nanotechnologies.
In the future Bentley hopes that increasingly complex biological systems could be assembled to recreate the environment that bacteria see. He hopes to use the method to study other systems including epithelial and cancer cells.
Link to journal article
Biological nanofactories facilitate spatially selective capture and manipulation of quorum sensing bacteria in a bioMEMS device
Rohan Fernandes, Xiaolong Luo, Chen-Yu Tsao, Gregory F. Payne, Reza Ghodssi, Gary W. Rubloff and William E. Bentley, Lab Chip, 2010, 10, 1128 | <urn:uuid:eedbf89b-7026-4cd1-989f-3214da02a91c> | 3.265625 | 583 | News (Org.) | Science & Tech. | 19.472217 | 632 |
July 26, 2012: Excitation of neurons depends on the selective influx of certain ions, namely sodium, calcium and potassium, through specific channels. These channels were crucial for the evolution of nervous systems in animals. How such channels could have evolved their selectivity has been a puzzle until now.
Yehu Moran and Ulrich Technau from the University of Vienna together with Scientists from Tel Aviv University and the Woods Hole Oceanographic Institution (USA) have now revealed that voltage-gated sodium channels, which are responsible for neuronal signaling in the nerves of animals, evolved twice in higher and lower animals.
These results were published in Cell Reports.
The opening and closing of ion channels enables the flow of ions that constitutes the electrical signaling in all nervous systems. Every thought we have and every move we make is the result of the highly accurate opening and closing of numerous ion channels. Whereas the channels of most lower animals and their unicellular relatives cannot discriminate between sodium and calcium ions, those of higher animals are highly specific for sodium, a characteristic that is important for fast and accurate signaling in complex nervous systems.
Surprising results in sea anemones and jellyfish
However, the researchers found that a group of basal animals with simple nerve nets including sea anemones and jellyfish also possess voltage-gated sodium channels, which differ from those found in higher animals, yet show the same selectivity for sodium. Since cnidarians separated from the rest of the animals more than 600 million years ago, these findings suggest that the channels of both cnidarians and higher animals originated independently twice, from ancient non-selective channels which also transmit calcium.
Since many other processes of internal cell signaling are highly dependent on calcium ions, the use of non-selective ion channels in neurons would accidentally trigger various signaling systems inside the cells and cause damage. The evolution of selectivity for sodium ions is therefore considered an important step in the evolution of nervous systems with fast transmission. This study shows that different parts of the channel changed in a convergent manner during the evolution of cnidarians and higher animals in order to perform the same task, namely to select for sodium ions.
This demonstrates that important components for the functional nervous systems evolved twice in basal and higher animals, which suggests that more complex nervous systems that rely on such ion-selective channels could have also evolved twice independently.
Note: If no author is given, the source is cited instead. | <urn:uuid:67baeb57-a18c-4b01-8bb0-8558a00d3c52> | 3.609375 | 531 | News Article | Science & Tech. | 20.43233 | 633 |
By John Fleck
Web edition: February 12, 2010 Print edition: February 27, 2010; Vol.177 #5 (p. 30)
Young adults can learn how scientists use tree rings to document climate change. University of New Mexico Press, 2009, 91 p., $21.95.
© Society for Science & the Public 2000 - 2013 All rights reserved. | <urn:uuid:d86faf02-af22-4034-9716-90c2739df0e6> | 3.125 | 136 | Truncated | Science & Tech. | 80.627273 | 634 |
Hot Sites and Cool Books
Recommended Web sites:
Information about the 2006 dinosaur dig at the 5E Ranch can be found at www.montanadinosaurdigs.com/sauro.htm (Judith River Dinosaur Institute).
Perkins, Sid. 2006. Bone hunt. Science News 170(Aug. 26):138-140. Available at http://www.sciencenews.org/articles/20060826/bob10.asp.
Books recommended by SearchIt!Science:
The Fossil Factory: A Kid's Guide to Digging Up Dinosaurs, Exploring Evolution, and Finding Fossils Niles Eldredge
Published by Addison-Wesley Publishing Co., 1989.
If you think that fossils are dinosaur bones, you're partly right. There are fossils of lots of other things, too: grains of pollen, sea creatures, even human beings! How can you find fossils on your own? With black-and-white, cartoon-style drawings and a humorous writing style, a world-famous scientist and his teenage sons explain how fossils are formed, where you can find them, and how to take care of them. Along the way, they also offer a few chuckles as well as fascinating information about the history of life on Earth, the way rocks and continents formed, and what Earth was like during the age of the dinosaurs. Twelve activities, including instructions for making a plaster cast of your own footprint, are featured, too, along with step-by-step diagrams. At the end, a timeline shows how life forms evolved over millions of years.
Armored, Plated, and Bone-Headed Dinosaurs: The Ankylosaurs, Stegosaurs, and Pachycephalosaurs Thom Holmes
Published by Enslow Publishers, 2002.
What are the origins of these spiny, armor-plated dinosaurs? What were their feeding habits? How did they defend themselves? Explore the anatomy and physiology of these creatures that are now extinct.
From The American Heritage® Student Science Dictionary
and The American Heritage® Children's Science Dictionary
estuary The wide lower end of a river where it flows into the sea. The water in estuaries is a mixture of fresh water and salt water.
fossil The hardened remains of traces of plant or animal that lived long ago. Fossils are often found in sedimentary rocks.
paleontology The scientific study of life in the past, especially through the study of fossils.
sauropod One of the two types of saurischian dinosaurs, widespread during the Jurassic and Cretaceous Periods. Sauropods were plant-eaters and often grew to tremendous size, having a stout body with thick legs, a long slender neck with a small head, and a long tail. Sauropods included the apatosaurus (brontosaurus) and brachiosaurus.
sedimentary rock A rock that is formed when sediment, such as sand or mud, becomes hard. Sedimentary rocks form when sediments are collected in one place by the action of water, wind, glaciers, or other forces, and are then pressed together. Limestone and shale are sedimentary rocks.
stegosaurus or stegosaur Any of several plant-eating dinosaurs of the Jurassic and Cretaceous Periods. Stegosaurs had a spiked tail and an arched back with a double row of large, triangular, upright, bony plates. Although stegosaurs grew to 20 feet (6.1 meters) in length, they had tiny heads with brains the size of a walnut.
Copyright © 2002, 2003 Houghton-Mifflin Company. All rights reserved. Used with permission.
Hot Sites & Cool Books | <urn:uuid:2adbea19-6442-400d-895e-a52d103f9c60> | 2.953125 | 780 | Content Listing | Science & Tech. | 50.30494 | 635 |
In A.D. 79 Mount Vesuvius erupted, annihilating the cities of Pompeii and Herculaneum and killing thousands who did not evacuate in time. To avert a similar fate for present-day Naples, which lies six miles west of the still active Vesuvius, as well as for the cities near volatile Mount Etna in Sicily, a novel laser system could soon forecast volcanic eruptions up to months in advance.
Current methods to predict eruptions have downsides. Seismometers can monitor tremors and other ground activity that signal a volcano's awakening, but their readings can prove imprecise or complicated to interpret. Scanning for escaping gases can reveal whether magma is moving inside, but the instruments used to analyze such emissions are often too delicate and bulky for life outside a laboratory. "You have to collect samples from the volcano, bring them to a lab, and often wait through backlogs of weeks to months before analysis," explains Frank Tittel, an applied physicist at Rice University.
A more promising technique for early detection focuses on changes in carbon isotopes in carbon dioxide. The ratio between carbon 12 and carbon 13 is roughly 90 to one in the atmosphere, but it can differ appreciably in volcanic gases. A ratio change by as little as 0.1 part per million could signal an influx of carbon dioxide from magma either building under or rising up through the volcano.
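To make those numbers concrete, here is a minimal sketch of how a monitoring routine might compare a measured carbon-12 to carbon-13 ratio against the roughly 90-to-one atmospheric baseline quoted above; the sample concentrations and the way the deviation is expressed are illustrative assumptions, not measured values.

```python
# Illustrative sketch only: the ~90:1 atmospheric baseline comes from the article,
# but the sample concentrations below are invented for demonstration.
ATMOSPHERIC_RATIO = 90.0   # typical atmospheric carbon-12 : carbon-13 ratio

def ratio_anomaly(c12_ppm: float, c13_ppm: float) -> float:
    """Fractional deviation of a measured isotope ratio from the atmospheric baseline."""
    measured = c12_ppm / c13_ppm
    return (measured - ATMOSPHERIC_RATIO) / ATMOSPHERIC_RATIO

# Hypothetical gas reading near a vent in which the ratio has drifted from the baseline.
print(f"{ratio_anomaly(c12_ppm=375.7, c13_ppm=4.3):+.2%}")   # about -2.9%
```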
Lasers can help detect this change: carbon 12 and 13 absorb light at slightly different mid-infrared wavelengths. The lasers must continuously tune across these wavelengths. Previously investigators used lead-salt lasers, which require liquid-nitrogen cooling and thus are impractical in the field. Furthermore, they are low-power devices, generating less than millionths of a watt, and can emit frequencies in an unstable manner. Other isotope scanning techniques are similarly lab-bound.
Tittel and other scientists in the U.S. and Britain, in partnership with the Italian government, have devised a volcano-monitoring system around a quantum-cascade laser. Such semiconductor lasers can produce high power across a wide frequency range. Moreover, they are rugged and do not require liquid-nitrogen cooling, making them compact enough to fit inside a shoe box.
The researchers first tried out their device on gas emissions from Nicaraguan craters in 2000. The new field tests will check its performance and accuracy in harsh volcanic locales. Dirk Richter, a research engineer at the National Center for Atmospheric Research in Boulder, Colo., says it would prove difficult to design a system "to work in one of the worst and most challenging environments possible on earth," but "if there's one group in the world that dares to do this, that's Frank Tittel's group."
If the instrument works, the plan is to deploy early-warning systems of lasers around volcanoes, with each device transmitting data in real time. False alarms should not occur, because carbon isotope ratios in magma differ significantly from those in the crust. The changes that the laser helps to detect also take place over weeks to months, providing time to compare data from other instruments, as well as ample evacuation notice. "Our system aims at avoiding a catastrophe like the Vesuvius eruption," says team member Damien Weidmann, a physicist at the Rutherford Appleton Laboratory in Oxfordshire, England. Field tests for the prototype are planned for the spring of 2005 in the volcanic Alban Hills region southeast of Rome, near the summer home of Pope John Paul II, as well as for volcanic areas near Los Alamos, N.M.
This article was originally published with the title Volcanic Sniffing. | <urn:uuid:9309d171-e9f0-44a8-9de3-926e353cd4e3> | 3.90625 | 748 | Truncated | Science & Tech. | 40.471455 | 636 |
An instrument to measure the altitude of an object above a fixed level. Generally, mean sea level is used for the reference level.
Mid-level cloud (bases generally 2000 - 8000 m), made up of grey, puffy masses, sometimes appearing in parallel waves or bands. An indicator of mid-level instability. Altocumulus can take on various forms such as Ac Lenticularis, Ac Undulatus, Ac Castellanus, Altocumulus 'mackerel sky'.
A middle level cloud with vertical development that forms from altocumulus clouds. It is composed primarily of ice crystals in its higher portions and characterised by its turrets, protuberances or crenulated tops.
Mid-level cloud composed of water droplets and ice crystals. Usually gives the sun a watery or dimly visible appearance.
A local wind that flows up the side of valleys due to increased heating along the valley walls. Often the anabatic wind results in cumulus clouds along the ridges either side of the valley. See also Katabatic winds.
A device used to measure wind speed.
The departure of an element from its long-term average for the location concerned. For example, if the average maximum temperature for Melbourne in June is 14 degrees and on one particular day the temperature only reaches 10 degrees, then the anomaly for that day is -4.
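A minimal worked example of this definition, using the Melbourne figures quoted above (the helper function is illustrative, not part of the glossary):

```python
# Anomaly = observed value minus the long-term average for that location.
def anomaly(observed: float, long_term_average: float) -> float:
    return observed - long_term_average

june_average_max_c = 14.0   # long-term average maximum temperature, Melbourne in June
todays_max_c = 10.0         # maximum temperature reached on one particular day

print(anomaly(todays_max_c, june_average_max_c))   # -4.0 degrees
```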
A large scale atmospheric circulation system in which the winds rotate anticlockwise in the Southern Hemisphere (clockwise in the Northern Hemisphere). Anticyclones are areas of high atmospheric pressure and are generally associated with light winds and stable weather conditions. Interchangeable with High pressure system.
Rotation in the opposite sense as the Earth's rotation, i.e., anticlockwise in | <urn:uuid:1f2052ca-130e-43ae-ae93-421e6e25982d> | 3.15625 | 366 | Structured Data | Science & Tech. | 24.921433 | 637 |
Zooplankton community dynamics
Ballast water transport as a taxonomic and numeric 'filter'
Zooplankton data collected from ballast tanks at the beginning and end of 25 voyages showed an overall decline in total zooplankton abundance during a voyage. Mortality within tanks could be caused by a number of factors, including:
- Lack of settlement substrate
- Exposure to toxic substances
- Physiological stress caused by changes in physical conditions
In addition, the duration of a voyage or the age of the ballast water is also an important factor, as mortality within tanks appears to increase with time.
For example, our data indicated that:
• On short voyages (<10 days) survival of zooplankton is unpredictable, but typically high (both increases and decreases in abundance were recorded).
• On long voyages (16-22 days) there were large declines in survivorship (>95% decrease in abundance in all cases).
This is an important finding because, in order to survive, establish, and thus achieve a successful ballast-mediated invasion, an organism must be delivered at densities high enough to give it a chance of encountering a mate and reproducing. This means that shorter coastwise voyages, where survivorship is more variable and final densities are relatively higher, pose a greater 'invasion threat' than do longer voyages.
Though these data bring us closer to identifying predictors of overall zooplankton survivorship, identifying individual taxonomic groups that are more likely to become successful invaders remains elusive. Survivorship varied both between taxonomic groups and within taxonomic groups for different voyages. Thus, we still do not fully understand the extent to which particular taxonomic groups are better able to survive transport in ballast tanks than others.
These data strongly point to the need for rigorous ballast water management policies across the board to effectively handle the release of domestic ballast water, particularly that portion of it which has only been subject to short-term transport. | <urn:uuid:92e01543-dc7c-4c93-93c8-34dc4304db73> | 3.484375 | 412 | Academic Writing | Science & Tech. | 10.170118 | 638 |
High energy mystery lurks at the galactic centre
PARTICLE PHYSICS AND ASTRONOMY RESEARCH COUNCIL
Posted: September 22, 2004
A mystery lurking at the centre of our own Milky Way galaxy - an object radiating high-energy gamma rays - has been detected by a team of UK astronomers working with international partners. Their research, published today (September 22nd) in the Journal Astronomy and Astrophysics, was carried out using the High Energy Stereoscopic System (H.E.S.S.), an array of four telescopes, in Namibia, South-West Africa.
The Galactic Centre harbours a number of potential gamma-ray sources, including a supermassive black hole, remnants of supernova explosions and possibly an accumulation of exotic 'dark matter' particles, each of which should emit the radiation slightly differently. The radiation observed by the H.E.S.S. team comes from a region very near Sagittarius A*, the black hole at the centre of the galaxy. According to most theories of dark matter, the observed radiation is too energetic to have been created by the annihilation of dark matter particles. The observed energy spectrum best fits theories in which the source is a giant supernova explosion, which should produce a constant stream of radiation.
Dr. Paula Chadwick of the University of Durham said, "We know that a giant supernova exploded in this region 10,000 years ago. Such an explosion could accelerate cosmic gamma rays to the high energies we have seen - a billion times more energy than the radiation used for X-rays in hospitals. But further observations will be needed to determine the exact source."
Professor Ian Halliday, Chief Executive of the Particle Physics and Astronomy Research Council (PPARC) which funds UK involvement in H.E.S.S. said; "Science continues to throw out the unexpected as we push back the frontiers of knowledge." Halliday added "The centre of our Galaxy is a mysterious place, home to exotic phenomena such as a black hole and dark matter. Finding out which of these sources produced the gamma-rays will tell us a lot about the processes taking place in the very heart of the Milky Way."
However, the team's theory doesn't fit with earlier results obtained by the Japanese /Australian CANGAROO instrument or the US Whipple instrument. Both of these have detected high-energy gamma rays from the Galactic Centre in the past (observations from 1995-2002), though not with the same precision as H.E.S.S, and they were unable to pinpoint the exact location as H.E.S.S. has now done, making it harder to deduce the source. These previous results have different characteristics to the H.E.S.S. observations. It is possible that the gamma-ray source at the Galactic Centre varies over the timescale of a year, suggesting that the source is in fact a variable object, such as the central black hole.
The H.E.S.S. team hopes to unravel the mystery with further observations of the Galactic Centre over the next year or two. The full array of four telescopes will be inaugurated on September 29th 2004, see
The H.E.S.S. collaboration
The High Energy Stereoscopic System (H.E.S.S.) team consists of scientists from Germany, France, the UK, the Czech Republic, Ireland, Armenia, South Africa and Namibia.
The H.E.S.S. array
Over the last few years, the H.E.S.S. collaboration have been building a system of four telescopes in the Khomas Highland region of Namibia, to study very-high-energy gamma rays from cosmic particle accelerators. The telescopes, known as Cherenkov telescopes, image the light created when high-energy cosmic gamma rays are absorbed in the atmosphere, and have opened a new energy domain for astronomy. The H.E.S.S. telescopes each feature mirrors of area 107 square metres, and are equipped with highly sensitive and very fast 960-pixel light detectors in the focal planes. Construction of the telescope system started in 2001; the fourth telescope was commissioned in December 2003. Observations were being made even while the system was being built, first using a single telescope, then with two and three telescopes. While only the complete four-telescope system provides the full performance, the first H.E.S.S. telescope alone was already superior to any of the instruments operated previously in the southern hemisphere. Among the first targets to be observed with a two-telescope instrument was the Galactic Centre.
© 2012 Spaceflight Now Inc. | <urn:uuid:9273b4cc-460c-4f23-b257-27a7da9567ba> | 2.90625 | 1,086 | News Article | Science & Tech. | 54.462063 | 639 |
Home › SparkNotes › Chemistry Study Guides › Review of Gases › Gases Review Test
Which of the following is a correct interpretation of the ideal gas law?
What is the correct relationship between
An isolated container of gas doubles in pressure and triples in volume. By what factor does T change?
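A quick sanity check for ratio questions like the one above, applying the ideal gas law as a ratio (a minimal sketch; the helper function is illustrative):

```python
# PV = nRT for a fixed amount of gas, so T2/T1 = (P2*V2)/(P1*V1); n and R cancel.
def temperature_factor(pressure_factor: float, volume_factor: float) -> float:
    return pressure_factor * volume_factor

print(temperature_factor(2, 3))   # 6 -> the absolute temperature increases sixfold
```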
If the volume of a gas is doubled at constant temperature, the factor by which the pressure increases is:
A barometer filled with an unknown liquid has a height of 1 m at 1 atm. During stormy weather, the height of the column is observed to rise to 1.3 m. What is the atmospheric pressure?
Which of the following are possible units of R?
What are the conditions of STP?
A container contains 32 grams of
gas and 2 grams of
gas. If the total pressure of the vessel
is 16 atm, what is the partial pressure of the
As the average radius of a population of gas molecules increases, how does the factor b of van der Waals
All of the following are properties of an ideal gas except:
The ideal gas law is most valid under these conditions:
For the van der Waals equation:
For the equation PV = nRT, the value of T must be expressed in:
Which of the following is not an SI unit?
A sample of gas has a volume of 22.4 L at a temperature of 273 K. How many moles are in the sample?
The volume of a sample of gas expands five times at constant pressure. By what factor has the absolute temperature changed?
The following reaction produces
A sample of gas occupies 100 L at STP. If the absolute temperature is halved while all other conditions are constant, what will be the final volume?
of a sample of
at 300 K.
A closed jar contains 2 moles of
and 3 moles of
. What is the ratio of the partial pressure of
over the total pressure in the jar?
The rate of effusion of gas A is four times that of gas B. What is
The density of a certain gas at STP is 1.43 g/L. What is the identity of the gas?
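A worked check for the density questions in this set, assuming the 22.4 L/mol molar volume at STP used elsewhere in this review (the function name is illustrative):

```python
# Molar mass from gas density at STP: M = density * 22.4 L/mol.
MOLAR_VOLUME_STP = 22.4   # L/mol for an ideal gas at STP

def molar_mass_from_density(density_g_per_L: float) -> float:
    return density_g_per_L * MOLAR_VOLUME_STP

print(molar_mass_from_density(1.43))    # ~32 g/mol, consistent with O2
print(molar_mass_from_density(0.089))   # ~2 g/mol, consistent with H2
```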
One end of a mercury manometer is open to the atmosphere (
tm = 760mmHg
). The other end is
connected to a 1 mol sample of
that is at 273 K and occupies 22.4 L. What is the height of the
The Maxwell-Boltzmann distribution graph plots:
James the giant has big shoes to fill. His shoes have a total area of
in contact with the ground.
Unfortunately, James' feet are not so big. Barefoot, his weight is spread over
. What is the ratio of
the pressure he exerts on the ground barefoot over the pressure he exerts with his shoes on?
The "air" in airbags is generated via the decomposition of solid
A sample of an ideal gas is compressed at constant temperature. What happens to the average kinetic energy of the molecules?
A piston compresses a gas at constant temperature. Initially the gas occupied 1 L and was at a pressure of 1 atm. After compression, the gas occupies 0.1 L. What is the pressure of the compressed gas?
A collaborator from a foreign country reports that the value of
has probably used units of "woozle" for which of the following variables:
Avogadro's number is:
The following Maxwell-Boltzmann distribution plot was measured for two gases A and B at the same temperature:
A rigid container holds a mixture of gases. Within this mixture, the partial pressure of
is 400 torr.
If an additional quantity of
gas is injected into the container such that the total pressure of the
container rises by 760 torr, what is the change in the partial pressure of
? Assume that the
temperature of the container's contents stays constant.
If the pressure of a gas doubles and the temperature quadruples, by what factor does the volume change?
Which of the following are possible units for pressure?
The following Maxwell-Boltzmann distribution plot was measured for a gas at two temperatures A and B:
For the following calculation of
, the molar mass (MM) should be expressed in what units?
By what significant numerical value are Boltzmann's constant (k) and the gas constant (R) related?
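A one-line numerical check of that relationship, R = N_A * k, using standard CODATA values:

```python
# The gas constant equals Avogadro's number times Boltzmann's constant.
N_A = 6.02214076e23    # 1/mol
k_B = 1.380649e-23     # J/K
print(N_A * k_B)       # ~8.314 J/(mol K), the familiar value of R
```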
The pressure of a gas is tripled while the volume is halved. By what factor does the temperature increase?
The gas constant R:
One end of a manometer is sealed off to a vacuum. The other end of the manometer is connected to a pressurized gas. The height of the liquid column is indicative of:
A sample of
and a sample of
both have a temperature of 330 K. What is the ratio of the
average kinetic energy of the
over that of the
The density of a gas at STP is 0.089 g/L. What is the molar mass of the gas?
The following Maxwell-Boltzmann distribution plot was measured for two gases A and B at temperatures
Gaseous methane (CH4) burns completely in gaseous oxygen to produce carbon dioxide gas and water
©2013 SparkNotes LLC, All Rights Reserved | <urn:uuid:1475843b-afe3-4a21-b409-9b1d07231153> | 3.34375 | 1,320 | Content Listing | Science & Tech. | 65.666305 | 640 |
All procedures in the Verilog HDL are specified within one of the following four statements:
-- initial construct
-- always construct
-- task
-- function
The initial and always constructs are enabled at the beginning of a simulation. The initial construct shall execute only once and its activity shall cease when the statement has finished. In contrast, the always construct shall execute repeatedly. Its activity shall cease only when the simulation is terminated. There shall be no implied order of execution between initial and always constructs. The initial constructs need not be scheduled and executed before the always constructs. There shall be no limit to the number of initial and always constructs that can be defined in a module.
An initial block consists of a single statement, or a group of statements enclosed in begin...end, which will be executed only once at simulation time 0. If there is more than one initial block, they execute concurrently and independently. The initial block is normally used for initialisation, monitoring, generating waveforms (e.g., clock pulses) and other processes which are executed once in a simulation. An example of initialisation and waveform generation is given below.
initial
clock = 1'b0; // variable initialization (a single statement needs no begin...end)
initial
begin // multiple statements have to be grouped
alpha = 0;
#10 alpha = 1; // waveform generation
#20 alpha = 0;
#5 alpha = 1;
#7 alpha = 0;
#10 alpha = 1;
#20 alpha = 0;
end
An always block is similar to the initial block, but the statements inside an always block are repeated continuously, in a looping fashion, until the simulation is stopped by $finish or $stop.
NOTE: the $finish command actually terminates the simulation, whereas $stop merely pauses it and awaits further instructions. Thus $finish is the preferred command unless you are using an interactive version of the simulator.
One way to simulate a clock pulse is shown in the example below. Note, this is not the best way to simulate a clock. See the section on the forever loop for a better method.
initial clock = 1'b0; // start the clock at 0
always #10 clock = ~clock; // toggle every 10 time units
initial #5000 $finish; // end the simulation after 5000 time units
Tasks and functions can be used in much the same manner, but there are some important differences that must be noted.
A function is unable to enable a task; however, functions can enable other functions.
A function will carry out its required duty in zero simulation time.
Within a function, no event, delay or timing control statements are permitted.
In the invocation of a function there must be at least one argument to be passed.
Functions will only return a single value and cannot use either output or inout statements.
Functions are synthesisable.
Disable statements cannot be used.
Functions cannot have non-blocking statements.
module function_calling (a, b, c);
input a, b;
output c;
function myfunction; // function definition
input a, b;
myfunction = (a + b);
endfunction
assign c = myfunction(a, b); // function call
endmodule
Tasks are capable of enabling a function as well as enabling other tasks.
Tasks also run in zero simulation time; however, they can, if required, be executed in non-zero simulation time.
Tasks are allowed to contain event, delay or timing control statements.
A task is allowed to use zero or more arguments which are of type output, input or inout.
A Task is unable to return a value but has the facility to pass multiple values via the output and inout statements.
Tasks are not synthesisable.
Disable statements can be used.
reg clock, red, amber, green;
parameter on = 1, off = 0, red_tics = 350,
amber_tics = 30, green_tics = 200;
// initialize colors.
initial red = off;
initial amber = | <urn:uuid:2dc62b9f-398e-47e8-9d87-d0375daf0011> | 3 | 806 | Documentation | Software Dev. | 42.033475 | 641 |
SHEFFIELD, U.K. -- An international team of researchers, led by the University of Sheffield, has demonstrated how Atlantic cod responded to past natural climate extremes. The new research could help in determining cod's vulnerability to future global warming.
With fishing pressures high and stock size low, there is already major concern over the current sustainability of cod and other fisheries. The new findings, published in the journal, Proceedings of the Royal Society B, show that natural climate change has previously reduced the range of cod to around a fifth of what it is today, but despite this, cod continued to populate both sides of the North Atlantic.
The researchers used a computer model and DNA techniques to estimate where cod could be found in the ice age, when colder temperatures and lower sea-levels caused the extinction of some populations and the isolation of others.
The computer models used to estimate ice-age habitats suitable for cod were developed by Professor Grant Bigg, Head of the University of Sheffield’s Department of Geography. These climatic analyses were combined with genetic studies by US researchers at Duke University and the University of California, and ecological information prepared by colleagues at the University of East Anglia and the Institute of Marine Research in Norway.
On land, plants and animals (including humans) are known to have moved further south when the northern ice sheets reached their maximum extent around 20,000 years ago. Similar migrations must have happened for plankton and fish in the sea. But there were two added complications: firstly, greatly reduced sea levels meant that many shallow and highly productive marine habitats around Europe and North America ceased to exist. Secondly, the ice-age circulation patterns in the North Atlantic caused the temperature change between tropical and polar conditions to occur over a much shorter north-south distance, reducing the area suitable for temperate species – such as cod.
The new analyses included these effects, together with other environmental and ecological information, in order to estimate where it was possible for Atlantic cod to reproduce and survive.
The results indicated that the ice-age range of Atlantic cod extended as far south as northern Spain, but the total area of suitable habitat was much more restricted. Nevertheless, populations of cod continued to exist on both sides of the North Atlantic. These findings were confirmed by genetic data, based on over a thousand DNA analyses of present-day cod populations, from Canada, Greenland, Iceland and around Europe.
Professor Bigg said: “This research shows that cod populations have been able to survive in periods of extreme climatic change, demonstrating a considerable resilience. However this does not necessarily mean that cod will show the same resilience to the effects of future climatic changes due to global warming.”
Views expressed in this article do not necessarily reflect those of UnderwaterTimes.com, its staff or its advertisers. | <urn:uuid:610c68ce-47b9-4ec2-bacd-c261cc5ce47a> | 3.828125 | 572 | News Article | Science & Tech. | 22.125544 | 642 |
Wildlife you see in a national park or other reserved area don't know about the park boundary. Bobcats, martens, mink, and moose need different types of living space and habitat. Development outside the park affects their ability to inhabit the park.
Brief review of bat research in the San Francisco Bay area and southern California providing land managers with information on the occurrence and status of bat species with links to bat inventories for California and related material.
A literature synthesis and annotated bibliography focus on North America and on refereed journals. Additional references include a selection of citations on bat ecology, international research on bats and wind energy, and unpublished reports.
Population size, foaling, deaths, age structure, sex ratio, age-specific survival rates, and more over a 14 year time span. This information will help land and wildlife managers find the best maintenance and conservation strategies. | <urn:uuid:4448345c-0895-4322-b597-44994e0b8dfa> | 3.3125 | 182 | Content Listing | Science & Tech. | 24.002095 | 643 |
Previous | Session 117 | Next | Author Index | Block Schedule
S. J. Edberg (JPL/Caltech)
This poster serves to introduce a series of posters discussing Space Interferometry Mission PlanetQuest (SIM PlanetQuest) science prospects and plans across a wide range of astrophysics.
SIM is being designed and built for NASA's Navigator Program, an element of the Astronomical Search for Origins and Planetary Systems theme in the Science Mission Directorate. It will be the first optical interferometer in space dedicated to precision astrometry. Even though SIM PlanetQuest has undergone a significant redesign since last year, the principal parameters of the instrument and anticipated results from its flight have changed little. With astrometric modes yielding 1 microarcsecond and 4 microarcsecond measurements, SIM offers the opportunity to investigate a wide variety of phenomena. From effects due to planetary gravitation within the solar system to investigating the emission phenomena of quasars and AGNs, SIM will provide breakthrough science. SIM astrometry will provide positions, parallaxes (distances), and proper motions with unprecedented accuracies for thousands of stars.
Searches for Earth-like planets will be made. Investigations of other planetary systems are possible, including the masses and orbits of their planets. Characterizations of stellar masses, from brown dwarfs to stellar-mass black holes and across the H-R diagram are planned. Combined with ground-based observations, SIM observations of MACHOs should yield the masses of the microlensing objects for the first time. The ages of globular clusters will be determined and the Milky Way's mass and its distribution will benefit from the study of halo and tidal tail stars. SIM measurements of the motions of Local Group galaxies will enable tests of models of this system. Quasar jets will be investigated and quasars themselves can be used to tie down a significantly improved celestial reference frame.
This work was performed for the Jet Propulsion Laboratory, California Institute of Technology, sponsored by the National Aeronautics and Space Administration.
If you would like more information about this abstract, please follow the link to http://planetquest.jpl.nasa.gov/SIM/sim_index.cfm. This link was provided by the author. When you follow it, you will leave the Web site for this meeting; to return, you should use the Back command on your browser.
Previous | Session 117 | Next
Bulletin of the American Astronomical Society, 37 #4
© 2005. The American Astronomical Society. | <urn:uuid:25c234cb-e520-44ac-8580-df4a7639b976> | 2.6875 | 524 | News (Org.) | Science & Tech. | 34.123846 | 644 |
Manoj Nair of the National Oceanic and Atmospheric Administration has devised a possible new method of detecting a deadly tsunami long before the wave crests to dangerous heights. And, in a bit of good news, much of it is already in place.
In a new study in next month’s Earth, Planets, and Space, Nair modeled the massive 2004 tsunami in the Indian Ocean and found that a tsunami picking up steam as it moves across the ocean emits a tiny electromagnetic signature of about 500 millivolts. That’s enough to have an effect on the communication cables that stretch across the ocean floor, carrying internet messages and phone calls. The electromagnetic signal “is very small compared to a 9-volt battery, but still large enough to be distinguished from background noise on a magnetically quiet day,” said Nair [Daily Camera].
Nair says this kind of system could be a lower-cost alternative to the bottom pressure arrays that directly measure large movements of water. “What we argue is that this is such a simple system to set up and start measuring,” Nair says. “We have a system of submarine cables already existing. The only thing we probably need is a voltmeter, in theory” [Wired.com].
Oleg Godin, one of Nair’s research partners, said any small improvement could make a huge difference. “If you detect tsunamis in the deep ocean — and that’s what we’re working on — meaning far from shore, you have hours, certainly tens of minutes, to warn people,” he said. “If people are well educated, a 15-minute warning is enough to save everybody” [Daily Camera].
80beats: South Pacific Tsunami Kills More than 100 People
80beats: Geologists Find One Cataclysmic Tsunami in Every 600 Years of Thai Dirt
80beats: Haiti Earthquake May Have Released 250 Years of Seismic Stress
Image: flickr / epugachev | <urn:uuid:d92d22cd-5a95-40a9-8916-07fba0df37b8> | 3.515625 | 428 | News Article | Science & Tech. | 42.977567 | 645 |
The Lake Tahoe area on the California-Nevada border can be appreciated from a variety of perspectives: Some people focus on the stunningly beautiful alpine lake nestled in the Sierra Nevada range, while others see it as a mecca for skiers and winter sports enthusiasts. When climate scientists look around, though, they see change. Two recent studies suggest that global warming is already altering that beloved ecosystem.
The first report (pdf), produced by researchers at the UC Davis Tahoe Environmental Research Center, predicts that declining snowpack and earlier melt over the next century will have a drastic impact on both winter tourism and the water supply.
The average snowpack in the northern Sierra Nevada mountains that ring the lake on the California-Nevada border will decline by 40 to 60 percent by 2100 “under the most optimistic projections,” says the report from three researchers at the University of California, Davis.
Under less optimistic models, the melt-off could be accelerated. By the end of the century, precipitation in the region “could be all rain and no snow,” and peak snowmelt in the Upper Truckee River — which is the largest tributary flowing into Lake Tahoe — could occur four to six weeks earlier by 2100, the report says. [New York Times]
The changes to the region’s hydrology could lead to new problems with runoff, erosion, and overflowing stormwater basins. While the researchers note that there is always some uncertainty when predicting far into the future, they also point out that the computer models they used are based on 100 years of data describing the changes in temperature and precipitation that have already occurred in the Tahoe area.
The second study, published in the journal Geophysical Research Letters, used infrared (heat) measurements from satellites to examine the changes to the planet’s lakes.
Two NASA scientists used satellite data to look at 104 large inland lakes around the world. They found that on average they have warmed 2 degrees [Celsius] since 1985. That’s about two-and-a-half times the increase in global temperatures in the same time period. [AP]
Lakes in the Northern Hemisphere’s mid and upper latitudes showed the most warming. That includes Lake Tahoe, which has heated up by 3 degrees Celsius since 1985, putting it behind only Russia’s Lake Ladoga.
80beats: Water Maps Show Stress Spread Out Across the Planet
80beats: Water Woes: The Southwest’s Supply Dwindles; China’s Behemoth Plumbing Project Goes On
80beats: Arctic Report Card: Warm Weather and Melted Ice Are the New Normal
80beats: Aral Sea Shows Signs of Recovery, While the Dead Sea Needs a Lifeline
DISCOVER: 20 Things You Didn’t Know About… Water
Image: Wikimedia Commons | <urn:uuid:c3b8b37a-b1f0-4749-b688-1d7e5bfb16b9> | 3.859375 | 592 | Personal Blog | Science & Tech. | 34.263168 | 646 |
Last Wednesday the National Academy of Sciences held a press conference in Washington, DC, to introduce its newly completed report on priorities for the coming decade in solar and space physics. Daniel Baker of the University of Colorado chaired the committee that wrote the report. Thomas Zurbuchen of the University of Michigan was the vice chair. Together, they summarized the report’s highlights for the assembled reporters, scientists, and bureaucrats.
Like its counterparts in astronomy and planetary science, the latest solar and space physics decadal survey is more than just a shopping list of missions and facilities. Its authors begin by defining their field in a broad and inspiring way:
We live on a planet whose orbit traverses the tenuous outer atmosphere of a variable magnetic star, the Sun. This stellar atmosphere is a rapidly flowing plasma—the solar wind—that envelops Earth as it rushes outward, creating a cavity in the galaxy that extends to some 140 astronomical units (AU). There, the inward pressure from the interstellar medium balances the outward pressure of the solar plasma forming the heliopause, the boundary of our home in the universe. Earth and the other planets of our solar system are embedded deep in this extended stellar atmosphere or “heliosphere,” the domain of solar and space physics.
The report goes on to review past and present accomplishments in solar and space physics before defining the four overarching goals that guided the committee members as they drew up their final recommendations:
- Determine the origins of the Sun’s activity and predict the variations in the space environment.
- Determine the dynamics and coupling of Earth’s magnetosphere, ionosphere, and atmosphere and their response to solar and terrestrial inputs.
- Determine the interaction of the Sun with the solar system and the interstellar medium.
- Discover and characterize fundamental processes that occur both within the heliosphere and throughout the universe.
As I listened to Baker and Zurbuchen’s presentation, it became clear that two other overarching considerations informed the report. The first is a conceptual emphasis on viewing Earth’s aurorae, the solar wind, coronal mass ejections, and other heliospheric phenomena as part of a single system. It will be interesting to see whether this systemic view becomes manifest in journals, conferences, and courses. I, for one, have tended to think of solar physics as belonging more to astronomy than to heliospheric physics.
The second consideration is a realistic and—to use Baker’s word—responsible approach to costs. The committee retained Aerospace Corp, a nonprofit consultancy based in El Segundo, California, to carry out an independent cost appraisal and technical evaluation (CATE) of potential missions. For the most part, the total cost of the committee’s recommended suite of programs lies within the budget envelope that NASA provided the committee for the years 2013–22.
Physicists who remember chuckling when they first encountered the zeroth law of thermodynamics might be amused to learn that the committee’s first recommendation is also numbered zero—for good reason. As NASA and NSF, the other principal sponsor of heliospheric research, look to future missions and facilities, the committee recommends that they first complete their current program.
Among the lineup is Solar Probe Plus (shown here in an artist’s impression). The ambitious mission, whose price tag is $1.4 billion, aims to fly as close as possible to the Sun to determine how the solar corona is heated and how the solar wind is accelerated.
Diversify, realize, integrate, venture, educate
The committee’s second recommendation, numbered 1.0, is to implement an initiative that goes by the acronym DRIVE (for “diversify, realize, integrate, venture, educate”). As far as I can tell, DRIVE aims to reorganize and reinvigorate the way researchers and their students practice heliospheric science.
Surprisingly, given its high priority, DRIVE is not expensive. The committee projects that the initiative will cost at most about $50 million a year. To fulfill the goals embodied by its name, DRIVE seeks to make research opportunities more accessible to universities through small and mid-sized missions, including the shoebox-sized spacecraft called CubeSats.
Funding the analysis and interpretation of data adequately is a key element of DRIVE, as is fostering interdisciplinary approaches to heliospheric research. Indeed, the committee urges NASA and NSF to establish heliospheric science centers, where observers, theorists, and modelers can work together to solve the grand challenges of solar and space physics.
When Baker and Zurbuchen introduced DRIVE, it sounded somewhat woolly to me. Now, having read the DRIVE section of the report, I think it’s a bold and worthwhile model that could be profitably emulated in other fields, such as green energy or neuroscience. But to be effective, DRIVE will probably need a light administrative structure.
Accelerate and expand the Heliophysics Explorer program!
Recommendation 2.0 seeks to revitalize NASA’s Explorer program of modestly sized and priced spacecraft. Begun in 1958, the program, according to the committee, is “arguably the most storied scientific spaceflight program in NASA’s history.” Despite its success, which includes three Nobel prizes, funding for the Explorer program fell in 2004 and has languished since. To quote the report:
The medium-class (MIDEX) and small-class (SMEX) missions of the Explorer program are ideally suited to advancing heliophysics science and have a superb track record for cost-effectiveness. Since 2001, 15 heliophysics Explorer mission proposals have received the highest category of ranking in competition selection reviews, but only 5 have been selected for flight. Thus there is an extensive reservoir of excellent heliophysics science to be accomplished by Explorers.
Because MIDEX and SMEX missions are comparatively cheap, developing and launching more of them would not require a big outlay. The committee recommends that NASA augment the current Explorer program for solar and space physics by $70 million per year.
In addition to more money for the Explorer program, the committee also recommends establishing a faster, more nimble way of accommodating missions of opportunity—that is, missions that are conceived in response to new technologies, new scientific knowledge, or new partnership opportunities with other space agencies.
NASA: Let academia lead space science
Perhaps by coincidence, a commentary by Baker appeared in Nature two weeks before his committee released its report. Entitled “NASA: Let academia lead space science,” the commentary urged the space agency to fund more missions that are small enough in scope that university-based principal investigators (PIs) can develop and lead them.
Whether Baker’s fellow committee members endorsed his commentary is not clear. They do, however, evidently share his belief in the merits of PI-led missions. Recommendation 3.0 calls for NASA to transform its Solar Terrestrial Probes program from a large, centrally directed program to “a moderate-sized, competed, PI-led mission line that is cost-capped at approximately $520-million per mission.”
The STP program aims to elucidate the physics of the Sun’s influence on Earth, on the other bodies in the solar system, and on the interstellar medium. To avoid the risk that a competitive free-for-all would omit important aspects of STP science, the committee outlined three kinds of missions that it would like to see fly:
- IMAP (Interstellar Mapping and Acceleration Probe) to characterize the zone where the Sun’s magnetohydrodynamic influence ceases to prevail in the solar neighborhood.
- DYNAMIC (Dynamical Neutral Atmosphere) to study how Earth’s ionosphere and thermosphere influence, and are influenced by, processes that occur at lower and higher altitudes.
- MEDICI (Magnetosphere Energetics, Dynamics, and Ionospheric Coupling) to determine how the magnetosphere-ionosphere-thermosphere system responds to solar and magnetospheric forcing.
The committee’s enthusiasm for modest missions is not unbridled, however. In the committee’s view, tackling the problem of how and why the Sun varies is a job for large, integrated missions. NASA’s Living with a Star program already includes the Solar Probe Plus and the Radiation Belt Storm Probes missions. Recommendation 4.0 is for Geospace Dynamics Constellation, a set of six formation-flying spacecraft that will characterize how the energy of geomagnetic storms is deposited and transformed in Earth’s atmosphere.
Recharter the National Space Weather Program
In March 1989 a geomagnetic storm caused the collapse of Hydro-Québec’s electricity grid. Five months later another geomagnetic storm shut down electronic trading on Toronto’s stock exchange.
Anticipating such storms—or space weather—and predicting their effects is more important, now that the world’s electrical infrastructure has expanded, the number of Earth-orbiting satellites has increased, and telecommunications have become economically and socially more important.
The current solar cycle, the 24th since records began in 1755, is set to peak next year. To monitor the cycle’s activity, the US relies on a set of spacecraft, such as the Solar and Heliospheric Observatory, whose principal purpose is basic research and whose engineering lifetimes are coming to an end.
To avoid gaps in coverage, the committee recommends that NASA, the National Oceanic and Atmospheric Administration, and the Department of Defense should plan ahead and plan together. Of particular importance, the committee says, is maintaining a permanent monitoring capability at L1, the first Lagrange point of the Sun–Earth system. Lying between the two bodies 1.5 million km from Earth, L1 is an ideal vantage for tracking solar activity.
The US has a comprehensive plan, the National Space Weather Program, for dealing with space weather. The trouble is, as the committee puts it, “implementation of such a program would require funding well above what the survey committee assumes to be currently available.” Accordingly, the committee recommends that the NSWP
should be rechartered under the auspices of the National Science and Technology Council and should include the active participation of the Office of Science and Technology Policy and the Office of Management and Budget. The plan should build on current agency efforts, leverage the new capabilities and knowledge that will arise from implementation of the programs recommended in this report, and develop additional capabilities, on the ground and in space, that are specifically tailored to space weather monitoring and prediction.
I haven’t read all 455 pages of the committee’s report. In venturing to summarize it, I have no doubt missed some important points and emphases. But what I have read has impressed me. Here is a plan to study the heliosphere as a system in a comprehensive, multidisciplinary, and cost-effective way. I hope its recommendations are heeded. | <urn:uuid:dfeab920-d146-490d-b528-0a5dc3071873> | 2.96875 | 2,306 | Personal Blog | Science & Tech. | 27.110422 | 647 |
Climate Action for Nature
"It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change."
- Charles Darwin
ECCo scientists recognize that climate change has the potential to jeopardize much of the regional conservation and restoration work that has been done over the past 30 years. As a member of Chicago Wilderness (CW), an alliance of over 250 organizations dedicated to restoring biodiversity in the region, the team helped to develop and launch the CW Climate Action Plan for Nature (CAPN) in 2010. This Plan specifically addresses climate change impacts to the natural communities spanning across a four-state region and represents an ecosystem-based approach to responding to climate change. The CAPN complements the Chicago Climate Action Plan, focused on human health and the built and natural environments within the City of Chicago, and ECCo ecologists work collaboratively with Chicago's Department of Environment on urban ecosystem adaptation strategies.
The team serves on the CW Climate Change Task Force and is leading several of the efforts to implement the CAPN, including developing Climate Clinics aimed at building the capacity of CW members to put the Plan's actions into practice. In Spring 2011 ECCo ecologists completed a revision of CW's Biodiversity Recovery Plan, a road map to restoration and management in the region that incorporates climate change impacts on biodiversity. These “climate-smart” management practices are intended not only to protect current conservation investments, but also to increase the likelihood that natural resources can continue to provide the ecological services both human and natural communities rely upon as the environment continues to shift.
To subscribe to the bi-monthly Chicago Wilderness
Learn more about ECCo's work Engaging Chicago Communities in Climate Action | <urn:uuid:d4a62a0b-82b9-44ee-bcbf-5980863b4df5> | 2.890625 | 356 | About (Org.) | Science & Tech. | 11.40843 | 648 |
There are nearly two million known species on the planet. But many of those won't be around much longer; one out of every eight known bird species, one in four mammal species, and one in three amphibian species are at risk for extinction, according to the World Conservation Union (IUCN), which maintains the Red List, a catalog of the world's species classified according to their risk of extinction.
"It's supposed to inform conservation practice, to be a wake-up call for the extinctions that are happening," says Caroline Pollock, a program officer with the Red List unit. Animals that are classified as "critically endangered" are at the highest risk--their numbers in the wild may be extraordinarily low or their territories incredibly small. "It is possible to bring them back," Pollock says, "but it is quite work-intensive and financially expensive." Here, a look at five species on the brink.
Native to Spain and Portugal, there are fewer than 250 of these felines left in the wild. Habitat destruction has been a major cause of its decline as agriculture spreads through its homeland. Additionally, disease has claimed a large percentage of the region's rabbits, one of the lynx's primary food sources. Intensive captive breeding programs are currently underway to help save the lynx, Pollock says. If they do disappear, the lynx will be the first wild cat to go extinct in more than 2,000 years.
The wild population of these frogs has declined more than 80 percent in the last decade. The plummeting numbers of the frogs, which are endemic to Panama, is largely a result of chytridiomycosis, an infectious fungal disease that seems to be causing mass amphibian die-offs. The disease is still spreading, and deforestation is adding to the pressures faced by the frogs. Though there are captive-breeding programs in place for these amphibians, they will not be released into the wild until conditions improve.
Fewer than 100 of these birds, which are confined to one small island in Cape Verde, remain in the wild. The birds have been threatened by drought and increasing desertification on the island, conditions that may worsen as a result of global climate change. Because they build their nests on the ground, they also face risks from cats, dogs, and rats that have been introduced to the island.
Only 34 of these trees, native to Mexico, remain. The plants have a low rate of pollination--and don't reach maturity until they are approximately 25 years old--and are also profoundly threatened by agriculture. One tree was cut down in 2006 to expand farmland, and insecticides decrease the number of pollinators available to help the trees spread. Human-caused fires have also destroyed or damaged a number of these plants.
It could already be too late for the Yangtze River dolphin, or baiji. There has not been a documented sighting of these cetaceans, which lived in China's Yangtze River and nearby lakes, since 2002. A search for the dolphin--and the signature sounds that they make--was conducted in late 2006 but turned up no evidence of the mammals. However, further surveys are still needed to determine whether the dolphins truly have disappeared forever. The baiji's population decline is due, in large part, to the development of Chinese waterways and the expansion of commercial fishing.
Read more on helping endangered species by breeding captive animals in DISCOVER's Recall of the Wild
Sign up to get the latest science news delivered weekly right to your inbox! | <urn:uuid:246be445-3dd9-4c3d-a6e4-a6d193647b45> | 3.65625 | 741 | Content Listing | Science & Tech. | 43.885858 | 649 |
Light Scattering System
The NanoBiophysics Core Facility has a full set of light scattering equipment from Wyatt, including a multi-angle light scattering (MALS) device, a dynamic light scattering (DLS) device, and an HPLC system (Agilent) linked to the MALS.
Light scattering is a non-invasive technique for characterizing macromolecules and a wide range of particles in solution. In contrast to most methods for characterization, it does not require outside calibration standards. In this sense it is an absolute technique. Wyatt Technology instruments make two different types of light scattering measurements for absolute molecular characterization:
* Classical light scattering: Here, the intensity of the scattered light is measured as a function of angle. For the case of macromolecules, this is often called Rayleigh scattering and can yield the molar mass, rms radius, and second virial coefficient (A2). For certain classes of particles, classical light scattering can yield the size, shape, and structure.
* Quasi-elastic (QELS) or dynamic light scattering (DLS): In a QELS measurement, time-dependent fluctuations in the scattered light signal are measured using a fast photon counter. QELS measurements can determine the hydrodynamic radius of macromolecules or particles.
Light scattering is a technique that can be applied in either batch or chromatography mode. In either instance the sample may be recovered at the end of the measurement. Since light scattering provides the weight-averaged molar mass for all molecules in solution, it is generally more useful to utilize the chromatography mode, though each technique has its advantages.
Although absolute molecular weights can also be determined via mass spectrometry, membrane osmometry, and sedimentation equilibrium (analytical centrifugation), only light scattering covers so broad a range of macromolecules, including their oligomeric states. Most importantly, light scattering permits measurement of the solution properties of macromolecules. While a sedimentation equilibrium run may require 72 hours, a size exclusion chromatography/light scattering study may be completed in well under an hour, and a batch mode analysis in a few minutes. These comparatively short run times, coupled with the absolute determination of molar mass, size, and A2, make light scattering the method of choice for accurate and fast macromolecular characterization.
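As a rough illustration of the QELS/DLS size determination mentioned above (an editor's aside, not Wyatt documentation): the hydrodynamic radius follows from the measured diffusion coefficient through the Stokes-Einstein relation. The Python sketch below uses assumed, illustrative values for temperature, solvent viscosity, and the diffusion coefficient.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius_m(diffusion_m2_per_s, temperature_k=298.15,
                          viscosity_pa_s=0.00089):  # water near 25 C
    """Stokes-Einstein relation: R_h = k_B * T / (6 * pi * eta * D)."""
    return K_B * temperature_k / (6 * math.pi * viscosity_pa_s * diffusion_m2_per_s)

# A protein diffusing at about 6e-11 m^2/s comes out near 4 nm:
print(round(hydrodynamic_radius_m(6e-11) * 1e9, 1), "nm")
```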
For more information go to www.wyatt.com
Core Facility will help you with obtaining free downloads of manuals, tutorials and software. | <urn:uuid:383315da-e20e-4bfd-8cad-7c0c4da0230a> | 2.703125 | 507 | Product Page | Science & Tech. | 14.896778 | 650 |
A semi-arid climate or steppe climate describes climatic regions that receive precipitation below potential evapotranspiration, but not extremely so. A more precise definition is given by the Köppen climate classification, which treats steppe climates (BSk and BSh) as intermediates between desert climates (BW) and humid climates in ecological characteristics and agricultural potential. Semi-arid climates tend to support short or scrubby vegetation, with semi-arid areas usually dominated by either grasses or shrubs.
To determine if a location has a semi-arid climate, the precipitation threshold must first be determined. Finding the precipitation threshold (in millimeters) involves first multiplying the average annual temperature in °C by 20, then adding 280 if 70% or more of the total precipitation is in the high-sun half of the year (April through September in the Northern Hemisphere, or October through March in the Southern), or 140 if 30%–70% of the total precipitation is received during the applicable period, or 0 if less than 30% of the total precipitation is so received. If the area's annual precipitation is less than the threshold but more than half the threshold, it is classified as a BS (steppe climate).
Furthermore, to delineate "hot semi-arid climates" from "cold semi-arid climates", three isotherms are widely used: a mean annual temperature of 18°C, or a mean temperature of 0°C or −3°C in the coldest month. A location with a "BS" type climate whose temperature lies above whichever isotherm is being used is classified as "hot semi-arid" (BSh), and a location whose temperature lies below the given isotherm is classified as "cold semi-arid" (BSk).
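Because the threshold rule above is easy to misapply, here is a small Python helper (an editor's sketch, with invented function and parameter names, not an official Köppen implementation) that encodes the steps exactly as described, using the 18°C mean-annual-temperature isotherm for the hot/cold split:

```python
def koppen_steppe_class(mean_annual_temp_c, annual_precip_mm,
                        high_sun_precip_fraction):
    """Return 'BSh', 'BSk', or None if the location is not a steppe climate."""
    threshold_mm = mean_annual_temp_c * 20.0
    if high_sun_precip_fraction >= 0.70:
        threshold_mm += 280.0
    elif high_sun_precip_fraction >= 0.30:
        threshold_mm += 140.0
    # less than 30% of precipitation in the high-sun half-year: add nothing

    if not (threshold_mm / 2.0 < annual_precip_mm < threshold_mm):
        return None  # drier than half the threshold: desert (BW); wetter: a humid type

    # Hot/cold split via the 18 C annual-mean isotherm; the 0 C or -3 C
    # coldest-month isotherms are the other common conventions.
    return "BSh" if mean_annual_temp_c >= 18.0 else "BSk"

# Example: 19 C mean temperature, 420 mm of rain, 60% falling in the high-sun months
print(koppen_steppe_class(19.0, 420.0, 0.60))  # BSh (threshold works out to 520 mm)
```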
Hot semi-arid climates
Hot semi-arid climates (type "BSh") tend to be located in the tropics and subtropics. These climates tend to have hot, sometimes extremely hot, summers and mild to warm winters. Snow rarely (if ever) falls in these regions. Hot semi-arid climates are most commonly found around the fringes of subtropical deserts. The most common variant of a hot semi-arid climate, found in regions such as West Africa, India, parts of Mexico and small parts of Pakistan experiences the seasonal effects of monsoons and has a short but well-defined wet season, but is not sufficiently wet overall to qualify as a tropical savanna climate. In Australia, a large portion of the Outback, surrounding the central desert regions, lies within the hot semi-arid climate regime. Hot semi-arid climates can also be found in sections of South America such as the sertão and on the poleward side of the arid deserts where they typically feature a Mediterranean precipitation pattern, with generally rainless summers and wetter winters.
Cold semi-arid climates
Cold semi-arid climates (type "BSk") tend to be located in temperate zones. They are typically found in continental interiors some distance from large bodies of water. Cold semi-arid climates usually feature hot and dry (often exceptionally hot) summers, though their summers are typically not quite as hot as those of hot semi-arid climates. Unlike hot semi-arid climates, areas with cold semi-arid climates tend to have cold winters. These areas usually see some snowfall during the winter, though snowfall is much lower than at locations at similar latitudes with more humid climates. Areas featuring cold semi-arid climates tend to have higher elevations than areas with hot semi-arid climates, and are sometimes subject to major temperature swings between day and night, sometimes by as much as 15℃/27℉ or more in that time frame. These temperature swings are seldom seen in hot semi-arid climates. Cold semi-arid climates at higher latitudes tend to have dry winters and wetter summers, while cold semi-arid climates at lower latitudes tend to have precipitation patterns more akin to Mediterranean climates, with dry summers, relatively wet winters, and even wetter springs and autumns. Cold semi-arid climates are most commonly found in Asia and North America. However, it can also be found in Northern Africa, South Africa, Europe, (primarily in Spain) sections of South America and sections of interior southern Australia.
Regions of varying classification
Three isotherms are used to delineate between hot and cold semi-arid climates -- the 18°C average annual temperature, or a mean temperature of 0°C or −3°C in the coldest month -- with the warm side of the chosen isotherm defining a BSh climate and the cooler side a BSk climate. As a result of this, some areas can have climates that are classified as hot or cold semi-arid depending on the isotherm used. One such location is San Diego, California (at its main airport), which has cool summers for the latitude due to prevailing winds off the ocean (so the average annual temperature is below 18°C) but mild winters (average temperature in January, 14°C, and closer to the 18.0°C isotherm that separates tropical and subtropical climates than to the 0°C or −3°C isotherm for the coldest month that separates temperate and continental climates).
See also
- Continental climate
- Dust Bowl (an era of devastating dust storms, mostly in the 1930s, in semi-arid areas on the Great Plains of the United States and Canada)
- Goyder's Line (a boundary marking the limit of semi-arid climates in the Australian state of South Australia)
- Palliser's Triangle (semi-arid area of Canada)
- Köppen climate classification | <urn:uuid:85784f22-cf21-4032-ad5f-e40defbe8eb2> | 3.859375 | 1,215 | Knowledge Article | Science & Tech. | 22.964 | 651 |
Photo: Scott Zona
There is nothing more emblematic of spring and summer than flowers, but why do plants have flowers, and how did they evolve?
Botanists know that flowering plants, that is, plants whose seeds develop within the protective ovary of a flower, evolved from non-flowering plants. According to evolutionary theory, nature would have selected plants with flowering tendencies because flowers gave these plants a reproductive advantage. It's within the protective casing of flower petals, after all, that flowers are pollinated and make seeds. The strategy has been hugely successful. The vast majority of plants today are flowering plants.
The precise origin of flowering plants, though, is puzzling. In fact, exactly when, how, and why plants first developed flowers remains one of the biggest mysteries of evolutionary paleontology.
However, two discoveries have begun to unravel the mystery of how plants got flowers. Four years ago, scientists in China found a fossil of the oldest known flowering plant. The reed-like plants lived at least 125 million years ago in a lake, suggesting that flowering plants first evolved in water. The scientists speculate that the plant’s seeds floated along the shore and germinated near the banks.
More recently, scientist William Friedman of the University of Colorado found a clue in a plant called Amborella trichopoda, which grows in South Pacific rain forests. The plant’s female reproductive system has an extra, sterile egg cell. Friedman thinks that the extraneous part is a remnant from a more primitive reproductive apparatus and could link the plant to non-flowering plants like pines and firs.
The origin of flowers is still a difficult puzzle, of course, but with further discoveries and research, flowering plants will become a bit less mysterious. | <urn:uuid:8a9b6621-ec71-444c-9309-238303e6477a> | 4.28125 | 359 | News Article | Science & Tech. | 39.875088 | 652 |
More sharks on the Red List – Expert workshop releases findings on the status of North and Central American shark and ray populations
25 June 2004 | News story
Gland, Switzerland, 25 June 2004 (IUCN - The World Conservation Union). The number of species of sharks and rays on the IUCN Red List of Threatened Species is set to grow. This was the finding of a week-long expert workshop at Mote Marine Laboratory, Florida, to examine the conservation status of the species found in North and Central American waters.
Workshop findings confirm the widely-held belief that slow growing sharks and rays are exceptionally vulnerable to over-fishing, but also reveal that species can recover from depletion if strict management is imposed before populations reach critical levels. The results highlight how species can become endangered through incidental catch, without being the target of fisheries. In many cases, species of “Least Concern” in US waters still face serious threats from unregulated fishing off Mexico and Central America.
Nearly 200 species of sharks and rays in the region were evaluated using the IUCN Red List Categories and Criteria. Categories range from "Extinct" to "Least Concern" and "Data Deficient." Species classified as "Vulnerable," "Endangered" or "Critically Endangered," are considered threatened with extinction and are added to the global Red List. The Red List Categories and Criteria were also used to assess certain regional and specific populations, as well as global ones. The Shark Specialist Group of IUCN’s Species Survival Commission, which convened the meeting, will compile the assessments into a report that will include recommendations for conservation action.
Proposed additions to the Red List include the oceanic whitetip shark of the Gulf of Mexico and New England's thorny skate, both classified as "Critically Endangered," as well as two species of hammerhead sharks, now considered "Endangered." The demise of the oceanic whitetip is blamed on incidental catch (or "bycatch") in high seas tuna and swordfish fisheries combined with demand for their fins. Hammerhead populations have declined due to a combination of factors including recreational over-fishing, high commercial value of their fins and bycatch. Thorny skate was taken from US waters for a European market until last year, but is still caught incidentally in regional fisheries for cod, haddock and flounder.
Participants heightened the alarm over the US Atlantic sand tiger shark, which is proposed to move from a "Vulnerable" listing to the more serious "Endangered" classification. This species produces only two young every two years and is not recovering despite being protected since 1999. The group proposed to retain the 2000 "Vulnerable" classification for the protected Atlantic dusky shark, but stressed an urgent need for a more in-depth population assessment for this exceptionally slow-growing species.
The workshop did reveal some good news for sharks. Thanks to a decade of catch controls, the US population of commercially-important blacktip sharks has been rebuilding and its IUCN threat status was proposed this week as "Least Concern”. The species is still considered threatened off Central America due to the lack of fishing regulations and persistent fishing pressure outside the US. The threat status of barndoor skate off New England was proposed for downlisting from "Endangered" to "Near Threatened" based on a steady population increase over many years, while the Canadian population remains "Endangered."
More than 50 experts took part in the meeting, including scientists from government agencies, universities, private institutions and researchers from Central America. The workshop was the fifth in a global series to assess all the world's shark and ray species and develop regional conservation priorities. Resulting Red List proposals are preliminary until accepted by the global Shark Specialist Group network.
Anna Knee or Andrew McMullin, IUCN/SSC Communications Officers, email@example.com or firstname.lastname@example.org; Tel: +41 22 999 0153 | <urn:uuid:f3b6f9fe-b3b8-488f-9341-7eccf978e15d> | 3 | 821 | News (Org.) | Science & Tech. | 31.457595 | 653 |
Mediterranean Seagrass Meadows: Resilience and Contribution to Climate Change Mitigation
16 May 2012 | Media advisory
This new study will be presented in Málaga during the Seagrass meadows event in Spain and provides an insight into the meadows' potential for carbon sequestration at a time when carbon credit schemes are becoming increasingly important in combating climate change.
Published by IUCN and produced by the IUCN Centre for Mediterranean Cooperation, this document is a short summary of a technical report on the current state of affairs in the Mediterranean basin and a must-read for policy-makers.
The authors pay special attention to the impact of climate change on Mediterranean seagrass ecosystems and to the role these ecosystems play in mitigating its effects, with respect to extreme weather events and blue carbon sequestration.
• What are the impacts of climate change on Magnoliophyta in the Mediterranean?
“Mediterranean seagrass meadows reflect the history and biogeograhical diversity of this particular area”, says Alain Jeudy de Grissac, Coordinator of IUCN-Med Marine Programme. “Along with the disruptions brought about by many human pressures, climate change could lead to a general warming of the Mediterranean with ‘meridionalization’ or even ‘tropicalization’ depending on the sector, and to increasing frequency of the sea water events”.
• What is resilience? “This new concept represents an exercise in realism, aiming to accommodate the idea that ecosystems change within and between various stable states”, says Gérard Pergent, one of the study coordinators from Corse University (France). “Depending on the characteristics specific to the various species of Magnoliophyta found in the Mediterranean (physiological, biological and ecological), their resilience, adjustment stability and capacity to adapt may differ”
• How much can seagrasses contribute to climate change mitigation? “Seagrasses play a significant but quantitatively moderate role in carbon sequestration globally. They are estimated to account for 40% of the carbon stored each year by coastal vegetation”, says Miguel Ángel Mateo, Centre d’Estudis Avançats de Blanes (CSIC-Spain). “It is the large carbon stock accumulated during thousands of years that makes seagrasses potentially highly valuable in the context of global warming. Specifically, it is estimated that Posidonia oceanica is retaining up to 89% of the total CO2 emitted by all Mediterranean countries since the Industrial Revolution.”
Materials for the Media:
• Photos for download here
IMPORTANT: Please note that these images can only be used to promote this book.
Media Team: email@example.com | <urn:uuid:63b81f42-69f6-4037-a213-a273538def59> | 2.921875 | 583 | News (Org.) | Science & Tech. | 9.205126 | 654 |
Ratio and Proportion
Date: 7/4/96 at 19:21:20
From: Anonymous
Subject: Ratio and Proportion

Six men can complete a piece of work in one day, while 5 boys would take 2 days to complete the same piece of work. 44 men can build a tower in 5 days; how long will 40 men and 80 boys take to build the tower?

Date: 7/5/96 at 8:37:52
From: Doctor Anthony
Subject: Re: Ratio and Proportion

From the first statement, 3 men would take 2 days to do the job. It follows that 5 boys are equivalent to 3 men. Since we have 80 boys building the tower, these boys are equivalent to (3/5)*80 = 48 men. So the total workforce on the tower is equivalent to 40 + 48 = 88 men. If 44 men take 5 days, then 88 men would take 5/2 = 2 1/2 days. -Doctor Anthony, The Math Forum Check out our web site! http://mathforum.org/dr.math/
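A quick way to check Doctor Anthony's arithmetic is to convert everyone to a common work rate; the short Python check below is an editor's addition, not part of the original exchange.

```python
# Work rates implied by the problem statement (jobs per day per person).
man_rate = 1 / 6    # 6 men finish 1 job in 1 day
boy_rate = 1 / 10   # 5 boys finish 1 job in 2 days

tower_work_man_days = 44 * 5                          # 220 man-days for the tower
crew_in_man_units = 40 + 80 * (boy_rate / man_rate)   # 80 boys are worth 48 men
print(tower_work_man_days / crew_in_man_units)        # 2.5 days
```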
© 1994-2013 The Math Forum | <urn:uuid:8ac02b37-5315-4d51-885d-b7c4e97f75ac> | 2.765625 | 240 | Comment Section | Science & Tech. | 94.315957 | 655 |
Sunlight is Earth’s most abundant energy source and is delivered everywhere free of charge. Yet direct use of solar energy—that is, harnessing light’s energy content immediately rather than indirectly in fossil fuels or wind power—makes only a small contribution to humanity’s energy supply. In 2008, about 0.1% of the total energy supply in the United States came from solar sources. In theory, it could be much more. In practice, it will require considerable scientific and engineering progress in the two ways of converting the energy of sunlight into usable forms.
Photovoltaic (PV) systems exploit the photoelectric effect discovered more than a century ago. In certain materials, the energy of incoming light kicks electrons into motion, creating a current. Sheets of these materials are routinely employed to power a host of devices—from orbiting satellites to pocket calculators—and many companies make roof-sized units for homes and office buildings.
At the present time, however, the best commercial PV systems produce electricity at five to six times the cost of other generation methods, though if a system is installed at its point of use, which is often the case, its price may compete successfully at the retail level. PV is an intermittent source, meaning that it’s only available when the Sun is shining. Furthermore, unless PV energy is consumed immediately, it must be stored in batteries or by some other method. Adequate and cost-effective storage solutions await development. One factor favoring PV systems is that they produce maximum power close to the time of peak loads, which are driven by air-conditioning. Peak power is much more expensive than average power. With the advent of time-of-day pricing for power, PV power will grow more economical.
Sunlight can also be focused and concentrated by mirrors and the resulting energy employed to heat liquids that drive turbines to create electricity—a technique called solar thermal generation. Existing systems produce electricity at about twice the cost of fossil-fuel sources. Engineering advances will reduce the cost, but solar thermal generation is unlikely to be feasible outside regions such as the southwestern United States that receive substantial sunlight over long time periods.
Despite the challenges, the idea of drawing our energy from a source that is renewable and that does not emit greenhouse gases has powerful appeal. | <urn:uuid:515d3239-de8f-40e7-adc5-50e951db0245> | 4.09375 | 508 | Knowledge Article | Science & Tech. | 28.740823 | 656 |
NASA has released a computer visualization project called "Perpetual Ocean" that presents a data-created time lapse of the Earth's ocean and sea surface currents over a two-year period.
The animation (see below) shows the globe slowly spinning as white swirls curl and move in the water around landmasses. It looks as if Vincent van Gogh had painted into the oceans -- from the Gulf of Mexico to the Indian Ocean to the Black Sea.
Typically, NASA uses ECCO2 (Estimating the Circulation and Climate of the Ocean, Phase II) to model the global ocean and sea ice in order to better understand ocean eddies and other current systems that move heat and carbon in the oceans. The end goal is to study the ocean's role in future climate change scenarios. | <urn:uuid:2f1474e5-3ee2-4c33-a54a-89813f786382> | 4.03125 | 145 | News Article | Science & Tech. | 44.758672 | 657
Why 2 high tides?
Name: paul dickerson
Date: 1993 - 1999
It makes sense to me why there is a high tide at or about noon during new moon, but why is there a high tide at or about midnight as well?
The reason is very simple, though it takes a bit of thinking to hit the right direction. Consider the Earth-Moon system, and visualise Earth as a sphere with a layer of water all around it. The water closer to the Moon is attracted more strongly than the Earth as a whole, giving rise to the noon tide. But the Earth as a whole is attracted more strongly than the water on the far side, and that is what gives the midnight tide.
Jasjeet ( Jasjeet S Bagla )
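For readers who want the quantitative version of this argument, the standard leading-order estimate (an editor's addition, not part of the original reply) comes from expanding the Moon's pull at Earth's near and far surfaces:

$$ a_{\text{near}} - a_{\text{centre}} = \frac{GM_{\text{Moon}}}{(d-R)^2} - \frac{GM_{\text{Moon}}}{d^2} \approx +\frac{2GM_{\text{Moon}}R}{d^3}, \qquad a_{\text{far}} - a_{\text{centre}} \approx -\frac{2GM_{\text{Moon}}R}{d^3}, $$

where R is Earth's radius, d is the Earth-Moon distance, and the positive direction is taken toward the Moon. The two differences are equal in size and both point away from Earth's centre, which is why there are two tidal bulges and roughly two high tides a day.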
Update: June 2012 | <urn:uuid:179fc92f-8cea-49c6-b6b4-23ed568b253e> | 2.6875 | 174 | Q&A Forum | Science & Tech. | 64.555083 | 658 |
When experiencing alpha decay, atoms shed alpha particles made of 2 protons and 2 neutrons. Why can't we have other types of particles made of more or less protons?
The reason why alpha particles heavily dominate as the proton-neutron mix most likely to be emitted from most (not all!) radioactive nuclides is the extreme stability of this particular combination. That same stability is also why helium dominates after hydrogen as the most common element in the universe, and why other higher elements had to be forged in the hearts and shells of supernovas in order to come into existence at all.
Here's one way to think of it: You could in principle pop off something like helium-3 from an unstable nucleus - that's two protons and one neutron - and very likely give a net reduction in nuclear stress. But what would happen is this: The moment the trio started to depart, a neutron would come screaming in saying look how much better it would be if I joined you!! And the neutron would be correct: The total reduction in energy obtained by forming a helium-4 nucleus instead of helium-3 would in almost any instance be so superior that any self-respecting (and energy-respecting) nucleus would just have to go along with the idea.
Now all of what I just said can (and in the right circumstances should) be said far more precisely in terms of issues such as tunneling probabilities, but it would not really change the message much: Helium-4 nuclei pop off preferentially because they are so hugely stable that it just makes sense from a stability viewpoint for them to do so.
The next most likely candidates are isolated neutrons and protons, incidentally. Other mixed versions are rare until you get up into the fission range, in which case the whole nucleus is so unstable that it can rip apart in very creative ways (as aptly noted by the earlier comment).
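To put rough numbers on that stability (an editor's illustration, not part of either answer), the binding energy per nucleon of helium-4 can be compared with that of helium-3 using standard atomic masses; the mass values below are approximate.

```python
U_TO_MEV = 931.494   # energy equivalent of 1 atomic mass unit, in MeV
M_H1 = 1.007825      # hydrogen-1 atomic mass, u (proton plus electron)
M_N = 1.008665       # free neutron mass, u

def binding_energy_per_nucleon(atomic_mass_u, protons, neutrons):
    """Binding energy per nucleon in MeV; atomic masses let electron masses cancel."""
    mass_defect_u = protons * M_H1 + neutrons * M_N - atomic_mass_u
    return mass_defect_u * U_TO_MEV / (protons + neutrons)

print(round(binding_energy_per_nucleon(4.002602, 2, 2), 2))  # He-4: about 7.07 MeV per nucleon
print(round(binding_energy_per_nucleon(3.016029, 2, 1), 2))  # He-3: about 2.57 MeV per nucleon
```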
$\alpha$ particles are really $^{4}_{2}He$ nuclei, i.e. they are made up of 2 neutrons and 2 protons.
As a graph of binding energy per nucleon shows, the $^{4}_{2}He$ nucleus has a high binding energy per nucleon, i.e. it is highly stable compared with all the neighbouring nuclei. This makes such a cluster easy to hold together and makes it easier for a nucleus to emit one during radioactive decay, leaving the resulting nucleus much more stable than if a $^{3}_{2}He$ had escaped instead. | <urn:uuid:2d293bb8-356c-42d9-9b1f-f5c7779b9986> | 3.390625 | 505 | Q&A Forum | Science & Tech. | 48.759948 | 659
Mon March 24, 1969 02:32PM (PST)
This report supersedes any earlier report of this event
This event has been reviewed by a seismologist
Mon March 24, 1969 02:32PM (PST)
Mon March 24, 1969 22:32 (GMT)
30.1 km ( 18.7 mi) ENE ( 67. azimuth) from Hanford-300, WA
31.7 km ( 19.7 mi) SSE ( 151. azimuth) from Othello, WA
33.5 km ( 20.8 mi) NE ( 46. azimuth) from Hanford-400, WA
Depth: 7.34 km (4.48 miles)
Horizontal Uncertainty: 26.219 km
Depth Uncertainty: 25.22 km
Azimuthal Gap: 257.0 deg
Number of Phases: 7
Depth within the Earth where an earthquake rupture initiated. PNSN reports depths relative to sea level, so the elevation of the ground above sea level at the location of the epicenter must be added to estimate the depth beneath the Earth's surface.
A measure of how well network seismic stations surround the earthquake. Measured from the epicenter (in degrees), the largest azimuthal gap between azimuthally adjacent stations. The smaller this number, the more reliable the calculated horizontal position of the earthquake.
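The azimuthal gap is simple to compute from the station azimuths seen from the epicenter; the helper below is an editor's sketch (not PNSN code) with an invented function name.

```python
def azimuthal_gap(station_azimuths_deg):
    """Largest angular gap, in degrees, between azimuthally adjacent stations."""
    az = sorted(a % 360.0 for a in station_azimuths_deg)
    gaps = [b - a for a, b in zip(az, az[1:])]
    gaps.append(360.0 - az[-1] + az[0])  # gap that wraps through north
    return max(gaps)

print(azimuthal_gap([10, 40, 95, 200]))  # 170.0 -> fairly poor station coverage
```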
Number of Phases
The number of P- and S-wave arrival-time observations used to compute the earthquake location.
RMS Misfit
How well the given earthquake location predicts the observed phase arrivals (in seconds). Smaller misfits mean more precise locations. The best locations have RMS misfits smaller than 0.1 seconds.
Number of P First Motions
A P first motion is the direction in which the ground moves at the seismometer when the first P wave arrives. We distinguish between upward and downward first motions. This is the number of observations that were used to obtain the fault plane solution.
Orientation of first possible fault plane
The strike is the angle between the north direction and the direction of the fault trace on the surface, while keeping the dipping fault plane to your right.
The dip is the steepness of the fault plane measured as an angle between the fault plane and the surface. For example, 0 degrees is a horizontal fault and 90 degrees is a vertical fault.
Rake is the angle, measure in the fault plane, between the strike and the direction in which the material above the fault moved relative to the material on the bottom of the fault (slip direction).
Orientation of second possible fault plane
The orientation of the two possible fault planes is the best solution we can find to match the observed first motions at the seismometers using a grid search method. The uncertainty of the strike, dip, and rake indicate the number of degrees by which those values can vary and still match the observations satisfactorily.
Code, or name, to designate a particular seismic station
Network Code indicates the organization responsible for a particular station, the PNSN consists of UW=University of Washington, UO=University of Oregon, and CC=Cascade Volcano Observatory
The quality of an observed P arrival polarity indicates how well you can tell whether it is up or down and can range from 0 (poor) to 1 (good).
The channel name allows one to distinguish between data from different kinds of sensors. The first character indicates the sample rate of the data, examples are E=100Hz, B=40 or 50Hz, H=80 or 100 Hz. The second character indicates whether the channel is a high (H) gain or low (L) gain velocity channel or a strong-motion acceleration channel (N). The third character indicates the direction of motion measured, Z=up/down, E=east/west, N=north/south.
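The three-character convention just described can be expanded mechanically; the lookup tables below simply restate the mapping given in the text, wrapped in a hypothetical helper for illustration.

```python
SAMPLE_RATE = {"E": "100 Hz", "B": "40 or 50 Hz", "H": "80 or 100 Hz"}
INSTRUMENT = {"H": "high-gain velocity", "L": "low-gain velocity",
              "N": "strong-motion acceleration"}
COMPONENT = {"Z": "up/down", "E": "east/west", "N": "north/south"}

def describe_channel(code):
    """Expand a three-character channel code such as 'EHZ' or 'BHN'."""
    rate, inst, comp = code[0], code[1], code[2]
    return (f"{code}: {SAMPLE_RATE.get(rate, 'unknown')} sampling, "
            f"{INSTRUMENT.get(inst, 'unknown')} sensor, "
            f"{COMPONENT.get(comp, 'unknown')} component")

print(describe_channel("EHZ"))
```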
Polarity means the direction of motion, in this context it means whether it is up (U) or down (D). | <urn:uuid:04b85eb8-944d-4176-8e7a-b2e53c9f1916> | 2.890625 | 829 | Structured Data | Science & Tech. | 61.790159 | 660 |
Interpreter for Zoom Language
The ZOOM language is a new language developed at DePaul University by Dr. Jia. ZOOM stands for Z-based Object-Oriented Modeling notation. It is made up of three parts: the ZOOM-S specification notation, the ZOOM-D design notation, and the ZOOM-I implementation language. The syntax of ZOOM-I is closely based on the syntax of the Java language, with several extensions such as enumerations, set and list formations, relations, function mappings, and more. Programming language design is a challenging task. Developing and testing the first implementation of a language is much easier and more flexible with an interpreter, since changes to the static or dynamic semantics of a language are more easily made in an interpreter than in a compiler.
My project is to implement an interpreter for the ZOOM-I language. The interpreter will be a GUI application that supports easy development of ZOOM programs.
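To make the tree-walking approach concrete, here is a deliberately tiny evaluator written in Python rather than in the project's Java-based tooling; the node shapes and names are invented for illustration and are not ZOOM-I code. The point is that adding or changing a construct (such as the set formation) touches only one branch of the evaluator.

```python
def evaluate(node, env):
    """Evaluate a small expression tree of tuples against an environment."""
    kind = node[0]
    if kind == "num":   # ("num", 3)
        return node[1]
    if kind == "var":   # ("var", "x")
        return env[node[1]]
    if kind == "add":   # ("add", left, right)
        return evaluate(node[1], env) + evaluate(node[2], env)
    if kind == "set":   # ("set", e1, e2, ...) -- a ZOOM-like set formation
        return {evaluate(e, env) for e in node[1:]}
    raise ValueError(f"unknown node kind: {kind}")

expr = ("set", ("num", 1), ("add", ("var", "x"), ("num", 2)))
print(evaluate(expr, {"x": 4}))  # {1, 6}
```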
Working on basic statements and expressions of the ZOOM-I language.
- 5/11/03: basic java statements and expressions for primitive types
- 5/19/03: extended expressions for List declaration and manipulation
- 5/26/03: extended expressions for Set declaration and manipulation
- 5/31/03: Start work on Object oriented features
- 7/31/03: Object-oriented features finished
- 8/01/03: TBD
- Expected Completion: November 2003
- Initial Presentation - Power Point Slides
- David A. Watt & Deryck F. Brown. Programming Language Processors in Java. Prentice Hall, 2000.
- Alfred V. Aho, Ravi Sethi, Jeffrey D. Ullman. Compilers: Principles, Techniques and Tools. Addison-Wesley,1988.
- Ravi Sethi. Programming Languages, Concepts & Constructs. Addison-Wesley, 1996.
- Randy M. Kaplan. Constructing Language Processors for Little Languages. John Wiley & Sons, Inc., 1994. | <urn:uuid:6df337f0-b74a-4b93-a6ac-81d0729d798c> | 2.765625 | 433 | Personal Blog | Software Dev. | 46.991907 | 661 |
The scene: Scientist Jian Chen adjusts optics mounted for an experiment at one of several PULSE laser laboratories housed at SLAC. (PULSE is a joint SLAC/Stanford University laser science institute.)
In this experiment, a small fleck of sample material is held in a special “diamond anvil cell” and torqued to pressures up to 12 gigapascals—120,000 times greater than atmospheric pressure, similar to conditions deep inside the Earth. Chen and colleagues then use three separate, highly precise beams of pulsed laser light, bouncing variously through the specialized optics, to measure the behavior of electrons in the material under pressure. Experiments of this sort give scientists clues about the nature and dynamics of the atomic world that could aid in developing new materials with exotic properties.
The shot: Canon 5D Mk II, 17-35mm/f2.8L lens @ 17mm, f/7.1. ISO 200, 1/40 sec exposure. Three lights (all Speedlites), one triggered with a Pocket Wizard II, the others with optical slaves: one camera left (close, with a red gel), one camera right (at full power, to cast the hard shadows), and one camera left (farther from the camera, with grid, visible in frame) to illuminate Chen. Used a tripod and remote trigger for this one. (All while wearing the same goggles Chen is wearing… tough way to shoot!) | <urn:uuid:aa900d17-ee05-43a1-9828-f3db9c23769c> | 2.90625 | 301 | Personal Blog | Science & Tech. | 49.949323 | 662 |
The "Methane" experiment was proposed during the TransCom 2008 meeting in Utrecht. The first protocol was discussed during the post-ICDC8 TransCom meeting in Jena, followed by the final protocol in 2010. Since then 16 models or model variants have performed the simulations. Previous TransCom experiments focused on chemically non-reactive species (SF6, CO2, 222Rn). A CH4 intercomparison requires introduction of atmospheric chemistry, which means a significant new model development for the traditional TransCom participants. However, to focus on model transport properties, the CH4 chemistry is reduced to offline radical (OH, O1D, Cl) only, which means the full-chemistry modellers have to scale down chemistry. During discussion at Jena, methyl chloroform (CH3CCl3) was included for tracking tropospheric OH abundance in the models, as well as SF6 and 222Rn for model transport evaluations. Prescribed fluxes are input to a transport model and 20 years of simulation is run with meteorological forcing appropriate for 1988-2007. Hourly concentrations of all species are output for 280 locations. At 115 locations, species profiles, surface fluxes and meteorological variables are also output.
The protocol (version 7) details the input fluxes, regridding instructions, and lists of the output sites and required file formats (similar to the TransCom continuous experiment). Instructions are included for accessing the ftp site for downloading input files and uploading model submissions.
The model output is freely available for research purposes, but please note the "conditions of use". The data are available in two forms: the original model submissions, containing output for all sites, can be downloaded from ftp fxp.nies.go.jp (refer to the Protocol files for access information); and, for ease of access, time series at a subset of surface sites are archived at the WMO World Data Centre for Greenhouse Gases (http://gaw.kishou.go.jp/)
publications and presentations
Patra, P. K., S. Houweling, M. Krol, P. Bousquet, L. Bruhwiler, and D. Jacob (2010), Protocol for TransCom CH4 intercomparison, Version 7, April (available online at transcom.project.asu.edu/pdf/transcom/T4.methane.protocol_v7.pdf ).
Patra, P. K., S. Houweling, M. Krol, P. Bousquet, D. Belikov, D. Bergmann, H. Bian, P. Cameron-Smith, M. P. Chipperfield, K. Corbin, A. Fortems-Cheiney, A. Fraser, E. Gloor, P. Hess, A. Ito, S. R. Kawa, R. M. Law, Z. Loh, S. Maksyutov, L. Meng, P. I. Palmer, R. G. Prinn, M. Rigby, R. Saito, C. Wilson, TransCom model simulations of CH4 and related species: Linking transport, surface flux and chemical loss with CH4 variability in the troposphere and lower stratosphere, Atmos. Chem. Phys. Discuss., Submitted, 2011.
presentations at the 10th TransCom workshop, University of California, Berkeley, 2010, (Saturday Session) are available on the TransCom-CH4 FTP server at NIES.
for more information | <urn:uuid:fafff650-2a32-4b22-b6ba-3ad9c568a763> | 2.796875 | 742 | Knowledge Article | Science & Tech. | 56.851609 | 663 |
Tri States Public Radio Staff
Fri September 7, 2012
Volcano Shoots Geyser Of Water Up Into Space
Originally published on Tue October 9, 2012 10:53 am
What we have here is a moon — a small one (slightly wider than the state of Arizona) — circling Saturn.
If you look closely, you will see a small splay of light at its top, looking like a circular fountain.
That's because it is a fountain — of sorts. A bunch of volcano-like jets are sending fantastically high geysers of water vapor up into the sky, so high that you can see them in this remarkable print by Michael Benson, back lit by light bouncing off of Saturn.
It turns out this moon, called Enceladus, is a snowball containing what may be a sea of liquid water, warmed by the squishes and stretches of Saturn and other moons that pass nearby (plus it may have a hot, rocky core.) All that gravity pushing and pulling on this little ball squeezes the liquid inside so it shoots up through some fissures at the top.
Nobody knew these fountains were there until the Cassini spacecraft flew near enough to Enceladus to find them. But now comes the amazing part.
Water Hose In The Sky
Some of that water vapor turns into ice and the crystals fall like snow back onto the moon at a rate of 0.02 inches a year; but some ice is thrown so high, it joins a ring around Saturn, one of the outer rings, labeled "E."
Take a look at this image of the same moon, Enceladus — it's the dark spot inside the bright flare — getting real close to the E ring. According to Sascha Kempf of the Max Planck Institute for Nuclear Physics in Heidelberg, this moon is "feeding" water crystals into Saturn's ring.
Who knew that a moon could spray ice onto a planetary ring? Before these photos were taken, scientists thought teeny meteorites, called micrometeoroids, would slam into Saturn's moons kicking up dust (adding to dust from a long exploded moon) and that's how the rings were formed.
Nobody imagined that the rings would be fed by geysers. But that seems to be what's happening to the E ring. According to Kempf, the ring will carry those ice nuggets around Saturn for an orbit or two, until they meet the moon again and are recaptured. But some crystals just keep circling and circling for 50, maybe 400 years.
The E ring is astonishingly thin. Its debris extends thousands of miles across, but the ring is often only 3 meters (about 9.8 feet) high. A giraffe traveling on this ring would poke out like a giant.
Seeing "True Color"
Michael Benson just published his print of water shooting off Enceladus from a digital transmission sent by the Cassini probe. It appears in his about-to-be published book Planetfall: New Solar System Visions.
What Cassini saw came back as a batch of digital information — lots of ones and zeroes — which can be turned into black and white images. Working from a series of picture fragments that Cassini transmits in small batches, Michael put them together into a single shot, then chose the hues and levels of light based on what is called "true color," what a person would see if he happened on the scene.
"I believe I was the first to see this sight the way it would appear to an actual visitor, simply by virtue of having logged the time to create the composite image, which is made of 19 raw spacecraft frames, and took several days to composite," he wrote me.
This is the way I like to tour the solar system. Find a chair. Sit. Turn some pages. Gaze. Wonder. The price isn't bad, either. | <urn:uuid:36b9fd02-b922-4e8c-bf6d-7649f1bf6e32> | 3.46875 | 802 | News Article | Science & Tech. | 62.110823 | 664 |
Samarium: the essentials
Samarium has a bright silver lustre and is reasonably stable in air. It ignites in air at 150°C. It is a rare earth metal. It is found with other rare earth elements in minerals including monazite and bastnaesite and is used in electronics industries.
Samarium: historical information
Samarium was discovered spectroscopically by its sharp absorption lines in 1853 by Jean Charles Galissard de Marignac in an "earth" called didymia. The element was isolated in 1879 by Lecoq de Boisbaudran from the mineral samarskite, named in honour of a Russian mine official, Colonel Samarski, and which therefore gave samarium its name.
Samarium: physical properties
Samarium: orbital properties
Isolation: samarium metal is available commercially so it is not normally necessary to make it in the laboratory, which is just as well as it is difficult to isolate as the pure metal. This is largely because of the way it is found in nature. The lanthanoids are found in nature in a number of minerals. The most important are xenotime, monazite, and bastnaesite. The first two are orthophosphate minerals LnPO4 (Ln denotes a mixture of all the lanthanoids except promethium, which is vanishingly rare) and the third is a fluoride carbonate LnCO3F. Lanthanoids with even atomic numbers are more common. The most common lanthanoids in these minerals are, in order, cerium, lanthanum, neodymium, and praseodymium. Monazite also contains thorium and yttrium, which makes handling difficult since thorium and its decay products are radioactive.
For many purposes it is not particularly necessary to separate the metals, but if separation into individual metals is required, the process is complex. Initially, the metals are extracted as salts from the ores by extraction with sulphuric acid (H2SO4), hydrochloric acid (HCl), and sodium hydroxide (NaOH). Modern purification techniques for these lanthanoid salt mixtures are ingenious and involve selective complexation techniques, solvent extractions, and ion exchange chromatography.
Pure samarium is available through the electrolysis of a molten mixture of SmCl3 and NaCl (or CaCl2) in a graphite cell, which acts as the cathode, with graphite as the anode. The other product is chlorine gas.
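For reference, the half-reactions behind that electrolysis (an editor's summary of standard electrochemistry, not text from the original page) are:

cathode: Sm3+ + 3 e- → Sm
anode: 2 Cl- → Cl2 + 2 e-
overall: 2 SmCl3 → 2 Sm + 3 Cl2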
| <urn:uuid:03679806-1247-4d30-82a5-5ab86e77b708> | 3.34375 | 560 | Knowledge Article | Science & Tech. | 23.493616 | 665
Plants to be Studied:
Tomato, especially Solanum lycopersicum and Solanum pennellii.
- Project Objectives:
Plants acquire the bulk of their energy from light capture by leaves, and leaf shape has direct consequences on the efficiency of light capture and photosynthetic carbon fixation. As a result, leaf shape must be optimized in response to variation in light quality. To understand the genetic programs controlling fundamental developmental processes, genetic networks regulating both environmental response and morphological form must be integrated. This proposal uses a genomics approach to understand natural variation in leaf morphology and light response, and to investigate the mechanism by which these two genetic networks are integrated to ensure optimal developmental pattern.
- Experimental Approaches:
To elucidate developmental networks, we are using a "genetical genomics" approach, taking advantage of near isogenic mapping lines (NILs) where regions of the S. pennellii genome have been introgressed into S. lycopersicum. Importantly, the parental species vary significantly in both light response and leaf complexity. We are sequencing the parental line transcriptomes to deep coverage to acquire genome-wide mRNA sequence and SNP information. The resulting data will be used to expand the tomato UniGene set and to develop a dense genome-wide marker database for S. lycopersicum and S. pennellii. The NIL population will be phenotyped for leaf development and light-response traits and characterized for genome-wide transcript levels and genotypes by massively parallel short-read sequencing. Construction of genetic networks regulating leaf morphology and light development from this genotype, phenotype, and transcript profile data will be coupled with genetic and transgenic approaches to identify central regulators of development and developmental variation. The resulting network will then be used as a guide to survey natural variation found in additional wild tomato accessions. | <urn:uuid:9d8cac09-f3e0-48a5-841c-5bd3bb77ec82> | 3.15625 | 382 | Academic Writing | Science & Tech. | 6.753176 | 666
Early galaxies full of cosmic dust
Space dust: Astronomers have found that dusty giant galaxies were already in existence 13 billion years ago, far earlier than previously thought.
The discovery reported in the Astrophysical Journal, means planets, which are made from coalescing dust particles, may also have already formed that far back in time.
The study's lead author Assistant Professor Steven Finkelstein from the University of Texas, says the discovery came as a complete surprise.
"I don't think we really expected that," says Finkelstein.
"We thought that 13 billion years ago would have been so early in the universe, that dust really didn't have a chance to form."
"But we now know that's simply not the case, at least in the most massive galaxies."
Using NASA's Hubble Space Telescope, Finkelstein and colleagues found that on average, galaxies appear less dusty the further back in time they look.
"If you go far enough back, dust doesn't exist in galaxies," says Finkelstein.
"That's what you would expect, because only hydrogen and helium were made in the big bang, and dust is made up of heavier elements like carbon, silicon and magnesium produced by the first generations of stars."
Finkelstein hypothesises why dust is only found in these early massive galaxies.
"Galaxies have large outflows of gas and dust from their interstellar medium, and it's a lot easier for that outflow to occur in low mass galaxies, where there's less gravity," says Finkelstein.
"Dust may be forming in all early epoch galaxies, but it's only sticking around in big galaxies because they have enough mass to hang on to their dust."
The findings are based on data from CANDELS, the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey, a huge two-month study carried out by the Hubble Space Telescope.
Finkelstein and colleagues examined the colour of galaxies in the Hubble images to see how red they look.
"Dust makes things appear red, so the redder a galaxy looks, the more dust it has," says Finkelstein.
"As well as being important for planet formation, the dust also blocks out some of the light, making it more difficult to determine how luminous a galaxy is, and consequently how much star formation is taking place in it."
Finkelstein and colleagues have more work to do, including taking spectroscopy of these galaxies to work out what they're made of.
"We can do some of that now with today's ten-metre telescopes," says Finkelstein.
"But we're really waiting for the next generation of big telescopes, such as the 25-metre Giant Magellan Telescope, and the space-based James Webb Telescope which will let us look even farther back in space-time to see what's there." | <urn:uuid:d1f94744-89f3-420a-93aa-2d160fb48f7b> | 4.0625 | 588 | News Article | Science & Tech. | 44.205292 | 667 |
Science subject and location tags
Articles, documents and multimedia from ABC Science
Monday, 11 March 2013
The ability of ecosystems to adapt to climate change has been put under the microscope and the news is good for tuna and tropical rainforests.
Wednesday, 6 June 2012
Consumers in developed countries are adding significantly to threats to biodiversity in developing countries via international trade.
Thursday, 12 January 2012
With voices hardly louder than an insect's buzz, the tiniest frogs ever discovered are smaller than a coin.
Friday, 2 December 2011
A huge treasure trove of artefacts including thousands of fragments of pottery provides the first evidence that the sea-faring Lapita people settled in mainland Papua New Guinea.
Wednesday, 29 July 2009
The high rate of extinction in Australia, New Zealand and the Pacific Islands is set to get worse unless urgent action is taken to protect biodiversity in the area, warn scientists.
Wednesday, 17 December 2008
When it comes to stick insects it seems you really can't judge a tree lobster by its cover, a new study finds.
Tuesday, 2 December 2008
Australia's best defence against an outbreak of avian influenza is an invisible line passing through the Indonesian archipelago between Bali and Lombok that birds are reluctant to cross, a team of zoologists says.
Monday, 10 November 2008
A rock commonly found in southeast corner of the Arabian Peninsula could be used to soak up the main greenhouse gas carbon dioxide at a rate that could help slow global warming, scientists say.
Friday, 5 September 2008
More attention should be given to a potential environmental disaster in Papua New Guinea downstream from one of the world's largest copper mines, say some scientists.
Wednesday, 27 August 2008
The animated clownfish Nemo may have found his way home a lot sooner if he had trusted his nose, according to researchers.
Thursday, 7 August 2008
New Australian research has resolved an ongoing debate regarding the timing of a major tectonic plate collision in the South Pacific region, the effects of which we're still experiencing.
Monday, 7 July 2008
Indigenous Australian body art, such as tattoos and intentional scarring may help to unravel mysteries about where certain groups traveled in the past, what their values and rituals were, and how they related to other cultures, according to an Australian researcher.
Wednesday, 30 April 2008
Some of Australia's most important watery war graves could be located for about one-tenth of the cost of finding HMAS Sydney, the nation's most high-profile naval shipwreck, a researcher says.
Friday, 7 March 2008
Sea levels are set to fall over millions of years, making the current rise blamed on climate change a brief interruption of an ancient geological trend, scientists say.
Tuesday, 8 May 2007
Aboriginal Australians are descended from the same modern human ancestors who left Africa to populate other parts of the world, says a new genetic study. | <urn:uuid:a2c99965-ac1b-47b7-a078-ded093323589> | 2.578125 | 602 | Content Listing | Science & Tech. | 32.793435 | 668 |
Mlive: Ann Arbor could see a Geminid meteor shower Thursday night
A rare, clear night sky in the Ann Arbor area Thursday night may increase the visibility of the expected Geminid meteor shower, MLive.com reports.
The Geminid shower is expected to be visible between 11 p.m. and 2 a.m., according to NASA's website. MLive.com reports the meteor shower happens in December as a result of debris from the extinct comet 3200 Phaethon coming in contact with the Earth's atmosphere.
According to NASA, the week of Dec. 10-16 is a good window for seeing the shower, but Thursday night is expected to be the anticipated peak.
Michigan often misses the shower due to a layer of overnight lake-effect clouds; in December, cold air blowing over the warmer Great Lakes creates a lot of cloud cover. However, the mild air Thursday is expected to keep the lake clouds from forming.
NASA estimates there may be as many as 30 meteors per hour. For those who still might not be able to see the shower due to city lights, NASA's Marshall Space Flight Center in Huntsville, Ala., will have a live Ustream feed beginning at 11 p.m., of the meteor shower.
Read the full report here. | <urn:uuid:de541129-23b1-4631-93dd-33e420653c3e> | 2.59375 | 266 | News Article | Science & Tech. | 69.314308 | 669 |
Oxygen Fuels the Fires of Time
Scientists from The Field Museum in Chicago and Royal Holloway University of London, publishing their results this week in the journal Nature Geoscience, have shown that the amount of charcoal preserved in ancient peat bogs, now coal, gives a measure of how much oxygen there was in the past.
Until now scientists have relied on geochemical models to estimate atmospheric oxygen concentrations. However, a number of competing models exist, each with significant discrepancies and no clear way to resolve an answer. All models agree that around 300 million years ago, in the Late Paleozoic, atmospheric oxygen levels were much higher than today. These elevated concentrations have been linked to gigantism in some animal groups, in particular insects, the dragonfly Meganeura monyi with a wingspan of over two feet epitomizing this. Some scientists think these higher concentrations of atmospheric oxygen may also have allowed vertebrates to colonize the land.
These higher levels of oxygen were a direct consequence of the colonization of land by plants. When plants photosynthesize they evolve oxygen. However, when the carbon stored in plant tissues decays atmospheric oxygen is used up. To produce a net increase in atmospheric oxygen over time organic matter must be buried. The colonization of land by plants not only led to new plant growth but also a dramatic increase in the burial of carbon. This burial was particularly high during the Late Paleozoic when huge coal deposits accumulated.
Dr. Ian J. Glasspool from the Department of Geology at the Field Museum explained that: "Atmospheric oxygen concentration is strongly related to flammability. At levels below 15% wildfires could not have spread. However, at levels significantly above 25% even wet plants could have burned, while at levels around 30 to 35%, as have been proposed for the Late Paleozoic, wildfires would have been frequent and catastrophic".
However, there were periods in Earth's history when the charcoal percentage in the coals was as high as 70%. This indicates very high levels of atmospheric oxygen that would have promoted many frequent, large, and extremely hot fires. These intervals include the Carboniferous and Permian Periods from 320-250 million years ago and the Middle Cretaceous Period approximately 100 million years ago.
"It is interesting", Professor Scott points out, "that these were times of major change in the evolution of vegetation on land with the evolution and spread of new plant groups, the conifers in the late Carboniferous and flowering plants in the Cretaceous".
These periods of high fire resulting from elevated atmospheric oxygen concentration might have been self-perpetuating, with more fire meaning greater plant mortality, and in turn more erosion and therefore greater burial of organic carbon, which would have then promoted elevated atmospheric oxygen concentrations.
"The mystery to us", Scott states, "is why oxygen levels appear to have more or less stabilized about 50 million years ago". | <urn:uuid:b82f6dde-52a7-4c70-a146-bfaf0d7d1e3b> | 4 | 598 | Knowledge Article | Science & Tech. | 28.329925 | 670 |
If he casts the right fly, an angler can catch some really big fish. Scientists are the same way, needing the right type of microscope to visualize nature's smallest molecules and atoms. Now, researchers are redesigning their light microscopes to catch a glimpse of some of the most minuscule molecules, those that make proteins in bacteria and archaea.
A promising solution is the use of fluorescence in situ hybridization (FISH) and stochastical optical reconstruction microscopy (STORM). Together, these techniques are improving our understanding of how bacteria and archaea transcribe DNA to RNA and then translate RNA to proteins. In addition, they are re-shaping how cell biology studies relate to environmental microbes.
Luring and Lighting Biomolecules
"Light microscopy has been a workhorse in cell biological research," says Harvard biophysicist Xiaowei Zhuang. She says scientists want to use light microscopy to study cells, especially live ones, because it is non-invasive. The problem, however, with zooming in on biomolecules and their movements in bacteria and archaea is the small size of the individual cells.
At only about three micrometers long and a micrometer wide, bacterial and archaeal cells come into focus just around the diffraction limit of light, which is about 200 nanometers. With light microscopy, scientists can see a cell but not its nuclear and cellular machinery. Even though these cells are relatively simpler than mammalian cells and other eukaryotic ones, scientists still know little about them.
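For a rough sense of that scale, the conventional resolution limit quoted above follows from the Abbe criterion, d ≈ λ / (2·NA). The sketch below assumes typical values (green light and a high-end oil-immersion objective); these numbers are illustrative, not taken from the article.

```python
# Abbe's diffraction limit for a light microscope: d ~ wavelength / (2 * NA).
# Assumed, typical values: green light and NA = 1.4 (oil-immersion objective).
wavelength_nm = 550.0
numerical_aperture = 1.4

d_nm = wavelength_nm / (2.0 * numerical_aperture)
print(f"smallest resolvable separation ~ {d_nm:.0f} nm")   # roughly 200 nm

cell_length_nm = 3000.0  # a ~3 micrometer bacterium
print(f"only ~{cell_length_nm / d_nm:.0f} resolution elements fit along the cell")
```

With only a dozen or so resolvable elements across an entire bacterium, the cell's internal machinery is simply below what conventional optics can distinguish, which is why super-resolution tricks such as STORM are needed.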
To get a better look, Zhuang and her collaborators developed STORM in 2008 (1). Zhuang's group has used it to image individually labeled proteins in live cells, including bacteria and archaea. And, like pairing the right fly with a great bait, other researchers are using STORM with their own techniques to "look at the distribution and dynamics of nuclear targets at a resolution that is far from the reach of conventional microscopy," says Bakshi.
For example, Cristina Moraru of the Max Planck Institute for Marine Microbiology in Germany and colleagues wanted to know where ribosomes sit within the cell because those molecular machines interact with the nucleoid—the carrier of the genetic information in archaea and bacteria. Based on where ribosomes are located, there are different models of interactions, which can significantly shape regulation of transcription, translation, and other cellular processes.
In a paper recently published in Systematic and Applied Microbiology (2), Moraru’s group reported on a combined STORM and FISH approach to locate ribosomes in an Escherichia coli cell. Moraru’s team used FISH to label specific sequences of ribosomal RNA with fluorescent probes, and then imaged the samples with STORM.
"In the end, all these differences could reflect in the way the cell answer to environmental changes, and therefore, in the fitness and survival," says Moraru. In the near future, she adds, scientists could use STORM, FISH, and other super-resolution techniques to count of the number of ribosomes in a bacterium.
Ribosomal Catch and Release
Counting the number of ribosomes is essential to understanding how bacteria grow. Moraru explains that "the regulation of ribosome numbers in microbial cells is complex and, probably, there will not always be a direct correlation between ribosome numbers and metabolic activity." But it is likely that a cell with a high ribosome content will be more active compared with one with a low ribosome content. If scientists can count ribosomes, they could get a sense of the level of metabolic activity in microbial cells.
But scientists have not yet counted the exact numbers of ribosomes per cell; the FISH protocol and RNA probes need to be more efficient at hybridization. "Work in this direction is in progress, and we are confident that there is only a matter of time till ribosome quantification per cell will be achieved," says Moraru.
So far, prokaryotic cell biology studies have been limited because many methods are not compatible with uncultivated microorganisms. But because the FISH-STORM approach uses RNA probes that target different microbial taxa in environmental samples, scientists could study ribosome variation across bacterial species. "By looking at samples from different environmental conditions, from warm season versus cold season, or, from high salinity versus low salinity, the variation of ribosome number across environmental conditions could be assessed," says Moraru.
In structured environments, such as biofilms, activated sludge and tissue samples, FISH also preserves the spatial information and reveals potential interactions between different species and community members in a sample. "Targeting rRNA by super-resolution FISH is only the beginning. In the near future, we envision targeting the other nucleic acid components of microbial cells to reveal the sub-cellular localization and numbers of specific genes and mRNAs," says Moraru.
A Different Kettle
But the FISH-STORM approach isn't the only way to bait biomolecules in small cells. Bakshi, a graduate student in University of Wisconsin-Madison chemist James Weisshaar's lab, uses a technique called pointillism to do sub-diffraction-limit imaging. With this technique, he constructs an image of a cell by localizing a large number of single molecules iteratively. This requires labels that can be switched on and off, but generates resolution up to 20–30 nanometers. In contrast to FISH, Bakshi's approach can be used for live-cell imaging.
To truly understand the complexity and heterogeneity of the behavior of any biomolecule, says Bakshi, requires that scientists can probe one molecule at a time. His team's technique gives them the position and movement of a single object in a cell at a high spatio-temporal resolution. "When we are looking at a ribosome, it enables us to determine which molecules are involved in translation and where they are inside the cell," he says.
In a 2012 paper published in Molecular Microbiology (3), he and Weisshaar reported that most of E. coli's translation is not coupled with transcription—a discovery that runs counter to the common view in the scientific literature. Bakshi says that since bacteria lack a nuclear membrane—which separates the nucleoid from the rest of the cytoplasm—co-transcriptional translation is possible in the cells. To what extent the translation process is coupled to transcription, however, was not clear.
Electron microscope images of ribosomes in cell extract, published in the 1970s, suggested that all translating ribosomes are joined to the chromosome through transcriptional coupling. "When we found that our results suggest that most translation is actually happening without such coupling, we were very surprised," says Bakshi. The team eventually figured out that the lifetime of an mRNA in E. coli is much longer than the time taken for its transcription. The mRNA gets released from proteins associated with the nucleoid once transcription terminates and is then translated by ribosomes without being attached to DNA for the rest of its lifetime, he says.
The techniques—whether it's FISH, STORM, or something else—ultimately let biologists cast deeper lines into individual cells of bacteria and archaea, learning more about their molecular and metabolic dynamics.
1. Huang, B., Wang, W., Bates, M., and Zhuang, X. (2008). "Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy." Science 319(5864): 810-813.
2. Moraru, C. and Amann, R. (2012). "Crystal ball: Fluorescence in situ hybridization in the age of super-resolution microscopy." Systematic and Applied Microbiology. In press.
3. Bakshi, S. et al. (2012). "Super-resolution imaging of ribosomes and RNA polymerase in live Escherichia coli cells." Molecular Microbiology 85(1): 21-38.
4. Wang, W. et al. (2011). "Chromosome Organization by a Nucleoid-Associated Protein in Live Bacteria." Science 333: 1445 -1449. | <urn:uuid:321fbaa9-de0c-4c46-9752-a86ab7c290c2> | 3.53125 | 1,712 | Knowledge Article | Science & Tech. | 35.103599 | 671 |
Phytoplankton Under Ice
Beneath the Arctic ice—over 12 feet deep in some areas—lies a dark, cold and lifeless sea. Or so we thought.
“If someone had asked me before the expedition whether we would see under-ice blooms, I would have told them it was impossible,” says Arrigo. “This discovery was a complete surprise.”
The researchers discovered an abundance of phytoplankton—microscopic life that forms the base of the marine food chain. Phytoplankton require sunlight for photosynthesis, just like plants. And sunlight has a tough time penetrating thick sea ice.
But that thick sea ice is changing. Not only are warmer temperatures thinning the ice, but as the ice melts in summer, it forms pools of water that act like transient skylights and magnifying lenses. These pools focus sunlight through the ice and into the ocean, where currents steer nutrient-rich deep waters up toward the surface. Phytoplankton under the ice evolved to take advantage of this narrow window of light and nutrients.
The phytoplankton displayed extreme activity, doubling in number more than once a day. Blooms in open waters grow at a much slower rate, doubling in two to three days. These growth rates are among the highest ever measured for polar waters. Researchers estimate that phytoplankton production under the ice in parts of the Arctic could be up to 10 times higher than in the nearby open ocean.
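To see what those doubling times mean in practice, a quick back-of-the-envelope comparison helps. The sketch below uses the doubling times quoted above (under-ice blooms doubling about once a day, open-water blooms every two to three days); the seven-day window is an arbitrary illustration, not a figure from the study.

```python
import math

# Exponential growth: N(t) = N0 * exp(r * t), with r = ln(2) / doubling_time.
def growth_rate_per_day(doubling_time_days):
    return math.log(2.0) / doubling_time_days

under_ice = growth_rate_per_day(1.0)    # assumed: doubling about once per day
open_water = growth_rate_per_day(2.5)   # assumed: doubling every ~2-3 days

days = 7
print(f"under-ice bloom after {days} days:  x{math.exp(under_ice * days):.0f}")
print(f"open-water bloom after {days} days: x{math.exp(open_water * days):.1f}")
```

Over a single week the faster doubling time compounds into roughly a hundredfold increase versus less than tenfold, which is why a short window of light under melt ponds can produce such dense blooms.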
The phytoplankton bloom discovered by Arrigo and his colleagues in the Chukchi Sea (just north of Alaska) extends tens of meters deep in spots and about 100 kilometers (62 miles) across.
“At this point we don’t know whether these rich phytoplankton blooms have been happening in the Arctic for a long time and we just haven’t observed them before,” Arrigo says. “These blooms could become more widespread in the future, however, if the Arctic sea ice cover continues to thin.”
The discovery of these previously unknown under-ice blooms could have serious implications for the broader Arctic ecosystem, including migratory species such as whales and birds. Phytoplankton are eaten by small ocean animals, which are eaten by larger fish and ocean animals.
“It could make it harder and harder for migratory species to time their life cycles to be in the Arctic when the bloom is at its peak,” Arrigo says. “If their food supply is coming earlier, they might be missing the boat.”
The research is published this week in Science. | <urn:uuid:dda98bee-cbfe-4019-acc9-adace79b16b1> | 4.15625 | 565 | Knowledge Article | Science & Tech. | 47.431837 | 672 |
Per Square Meter
Warm-up: Relationships in Ecosystems (10 minutes)
1. Begin this lesson by presenting the powerpoint, “Per Square Meter”.
2. After the presentation, ask students to think of animal relationships that correspond to each of the following types: Competition, Predation, Parasitism, and Mutualism.
a. For example, two animals that compete for food are lions and cheetahs (they compete for zebras and antelopes)
3. Record the different types of relationships on the board.
Activity One: My Own Square Meter (30 minutes)
1. Have students go outside and pick a small area (about a square meter each) to explore. It is preferable that this area be grassy or ‘natural’. The school playground might be a good spot.
2. Each student should keep a list of both the living organisms and man-made products found in their area (e.g., grass, birds, insects, flowers, sidewalk, etc.). Students are allowed to collect a few specimens from this area to show to the class. If students do not have jars, they can draw their observations. *See Reproducible #1
Activity Two: Who lives in our playground? (10 minutes)
1. After listing, collecting, and drawing specimens, students should return to the classroom and present their findings.
a. Have the students sit in a circle. Each student should read his or her list of findings out loud. If they collected specimens or drew observations, have them present them to the class.
2. Make a list of these findings on the board. Only write repeated findings once (to avoid writing grass as many times as there are students). Keep one list of living organisms and one list of man-made products.
3. For now, focus on the list of living organisms. As a class, help students name possible relationships between the organisms. See if they can find one of each type of relationship. For example, a bee on a flower is an example of mutualism because the nectar from the flower nourishes the bee and in return, the bee pollinates the flower.
Activity Three: Humans and the Environment: Human Effect on one Square Meter (15 minutes)
1. Now that students have focused on the animal relationships of their square meter, it is time to examine the effect of humans on the natural environment. Focus on the human-made product list. Ask students to consider the possible relationships between the human-made products and the environment. Prompt a brief class discussion on the effects of man-made products on the environment. Use the following questions as guidelines.
a. What is the effect of an empty drink bottle (or any other piece of trash) in a grassy field? Will it decompose? Will it be used by an animal as a habitat or food?
Answer: Trash is an invasive man-made product. Most trash is non-biodegradable and is harmful to the environment and to ecosystem relations. Therefore, it is a harmful addition to the square meter.
b. Who left the bottle there? Do you think they are still thinking about it? Did they leave it there on purpose? Why did they leave it there?
Answer: Most people litter thoughtlessly. They are not thinking about their actions and how they may affect the environment or ecosystems. It is important that people recognize that litter has a major effect on the environment.
c. What about a bench? Does a park bench have the same effect on the environment as a piece of trash?
Answer: A park bench can be considered a positive human-made product. A park bench has little negative effect on the environment and even helps humans further appreciate ecosystems. The park bench may even provide shelter or a perch for the ecosystem's living organisms.
d. Is there a difference between positive human-made products and negative ones? What are some examples of each?
Answer: Yes, there is a difference between positive and negative human-made products. Positive products have minimal effect on the functioning of ecosystems, whereas negative products have major effects on them. An example of a positive human-made product would be a solar-powered house. An example of a negative human-made product would be a car that produces a lot of pollution.
Wrap Up: Our Classroom Eco-Web (20-30 minutes)
1. Have students create classroom artwork by illustrating the relationships between their eco-systems.
2. Each student should draw at least two components of his or her square meter.
3. After everyone has finished their illustrations, create a web relating the illustrations. Draw arrows between illustrated components with written indications of the type of relationship exemplified.
4. Post the finished product in the classroom so that students can see the interconnectedness of the earth’s eco-systems.
Extension: Exploring Aquatic Eco-Systems (On-going Activity)
Students can explore another type of eco-system by creating a classroom aquarium or terrarium. The supplies for both of these mini eco-systems can be found at your local pet store. Students should help set up and maintain the aquarium or terrarium throughout the year. Periodically, students should observe how the mini-ecosystem is progressing, note changes, and assess the relationships between the organisms of the eco-system. This way, students are able to directly participate in the functioning of a natural system.
Another related activity might be to take your students on a field trip to a different eco-system from that of your school. If you live near a river, lake, or ocean take them there to explore different ecological relations. If you live in a city, examples of diverse eco-systems can be found at the local zoo or aquarium. | <urn:uuid:c76adb43-fdc6-442d-882e-b7781f7e7d83> | 3.921875 | 1,207 | Tutorial | Science & Tech. | 52.925334 | 673 |
Dinosaurs' active lifestyles suggest they were warm-blooded
H. Pontzer, V. Allen, J.R. Hutchinson/PLoS ONE
Whether dinosaurs were warm-blooded or cold-blooded has been a long-standing question in paleobiology. Now, new research on how two-legged dinosaurs walked and ran adds new evidence to the argument for warm-bloodedness, and suggests that even the earliest dinosaurs may have been warm-blooded.
Warm-blooded (or endothermic) dinosaurs — able to regulate their own body temperatures — would have been more active and could have inhabited colder climates than cold-blooded (or ectothermic) dinos, which would have functioned more like modern reptiles — animals that become animated only as temperatures warm. Endothermic dinosaurs would have also required more energy to maintain their higher metabolic rates. Evidence such as rapidly growing bones, bird-like feathers and athletic builds have led most paleontologists to believe that dinosaurs were endothermic, says paleobiologist Greg Erickson of Florida State University in Tallahassee, Fla., who was not involved in the new research.
But many scientists are still averse to the idea of warm-blooded dinosaurs. For example, some researchers have suggested that larger, more massive dinosaurs may have radiated much less heat than smaller dinosaurs — and thus, they could have been cold-blooded while still able to maintain relatively high body temperatures.
In the new study, published today in PLoS ONE, biomechanist Herman Pontzer of Washington University in St. Louis, Mo., and colleagues sought to figure out whether the lower metabolism of an ectotherm would have afforded dinosaurs the energy they needed to walk and run. To test this possibility, the team looked at two factors thought to be linked with energy requirements in modern animals: hip height and the volume of muscle used to hold up and move an animal’s body forward. If the limb length and active muscle volumes of dinosaurs required more energy than an ectotherm’s metabolism would have been able to provide, Pontzer and colleagues reasoned, then the dinosaurs were likely endothermic.
The team studied 13 different two-legged dinosaur species, ranging in size from Tyrannosaurus to the tiny, bird-like Archaeopteryx, as well as one early dinosaur relative, Marasuchus. Based on hip height, the results showed that the five largest dinosaurs (including Tyrannosaurus) would have needed endothermic metabolisms just to have the energy to walk, and all of the dinosaurs would have required endothermy to run at a moderate speed. Results based on estimated active muscle volume revealed a similar pattern: The five largest dino species would have needed to be endothermic to walk or run, while smaller, very active dinosaurs such as Velociraptor, must have been endothermic to be able to run.
In addition, even the most ancient dinosaur-like relative, Marasuchus, may have been endothermic based on the data from the hip study, Pontzer says, suggesting that endothermy evolved very early on in the dinosaur lineage. Therefore, the results also suggest that all dinosaurs were endothermic, the team wrote.
“I think their study is pointing to what a lot of other studies are saying — that these animals were endothermic,” Erickson says. “It’s just, what grade of endothermy were we dealing with?” For example, modern marsupials, although endothermic, generally grow more slowly and have lower metabolic rates than other mammals, he says.
The study may not put the final "nail in the coffin" for the idea that large dinosaurs could have been ectothermic, but it does provide positive evidence for an alternative metabolic strategy, says Patrick O’Connor, a paleontologist at the Ohio University College of Osteopathic Medicine in Athens who was also not involved in the new research. "Studies like this add crucial new lines of evidence that help us refine existing hypotheses," O'Connor says.
Estimating dinosaur metabolisms based on modern animals can only go so far, according to Erickson. For example, Pontzer and colleagues focused on two-legged dinosaurs because if they had used four-legged dinosaurs, they would have also needed to estimate how the dinosaurs’ weight was distributed across all four legs.
But because all modern ectotherms, such as alligators, are four-legged, Pontzer and colleagues had to gauge the hypothetical ectothermic capacity for the two-legged dinosaurs against four-legged modern animals, Erickson notes. Moreover, even the largest modern ectotherms are much smaller than a 6-metric-ton Tyrannosaurus. “There are limitations from living organisms that make it so we may never be able to test all these ideas,” Erickson says.
Still, Erickson says he thinks scientists are “honing in on the real answer” on the question of when endothermy evolved in dinosaurs and other ancient vertebrates. Other evidence, such as rates of bone growth, suggests pterosaurs, or flying reptiles, were also endothermic. “When you have all these different lines of evidence kind of pointing towards [endothermy],” he says, “I think it’s fairly compelling collectively.” | <urn:uuid:a459a3fc-e2ca-4318-b2d2-359bd270ce2f> | 4.40625 | 1,101 | Truncated | Science & Tech. | 24.63094 | 674 |
A User-interface for Proofs and Certified Software
by Janet Bertot, Yves Bertot, Yann Coscoy, Healfdene Goguen and Francis Montagnac
By making it possible to express the properties of procedures and functions, proof assistants can be used to help develop certified software. However, these proof assistants are often complicated to use and need real user interfaces to make software development feasible. Since 1990, the CROAP team at INRIA Sophia-Antipolis has been studying the development of user interfaces for theorem provers to reduce this level of complication. We have implemented a powerful prototype, CtCoq, that has been used successfully in the development of certified algorithms for program manipulation or polynomial mathematics. The last version of this proof environment was released in February 1997.
The semantics of programs can be mathematically described using relations between inputs and outputs or using functions from the domain of inputs to the domain of outputs. When these relations and functions are formally described, it is possible to use a computer to check some of their properties mechanically. This leads to the prospect of checking that programs fulfil a formal specification and, ultimately, to zero-defect software. Since the correctness of a given program may rely on an arbitrarily complex corpus of mathematics, the system used for the verification needs very powerful proving capabilities. To date, only the systems known as theorem provers or proof checkers provide enough mathematical capabilities for this task.
The Coq proof assistant is one such proof checker (see previous article). It uses type theory to express the properties of functions and to encode powerful mathematical tools such as recursion and algebraic structures. Intuitively, the types used in a programming language like Pascal or C make it possible to verify simple consistency properties between the components of a piece of software. When using a language with more expressive types, the properties that can be expressed using types can actually cover the complete specification of a software system.
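To make the idea of types-as-specifications concrete, here is a minimal toy sketch written in Lean, a proof assistant whose type theory is similar in spirit to Coq's. This is not Coq or CtCoq syntax and is not taken from the CtCoq developments; it only illustrates how a return type can itself state a property that the definition must prove.

```lean
-- A toy "certified" function: the subtype { m : Nat // n < m } packages a
-- result m together with a machine-checked proof that n < m. The checker
-- rejects any definition that returns a value without a valid proof.
def boundedAbove (n : Nat) : { m : Nat // n < m } :=
  ⟨n + 1, Nat.lt_succ_self n⟩
```

In a richly typed setting like this, "the program type-checks" and "the program meets its specification" become the same statement, which is exactly what makes proof assistants attractive for certified software.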
The CtCoq user-interface is an independent front-end for the Coq proof assistant. It uses technologies from the domain of programming environments to help the proof developer in several ways.
The first element taken from programming environment technology is the use of syntax directed tools. These tools use a precise description of the proof assistant's syntax to help in the rapid construction of syntactically correct logical sentences, specifications, and proof commands. For instance, syntax directed menus make it possible to perform transformations on expressions or commands that respect the syntactic correctness of these expressions, thus reducing the time spent in correcting low-level errors. Syntax aware tools also make it easier to recognize usual mathematical notations and render them using multiple-font display, in a wysiwyg fashion.
These tools make semantic manipulation of data possible, with interpretation of the user's pointing or dragging gestures using the mouse. For instance, pointing at an expression can be interpreted as guiding the proof process towards this expression. In the same realm, dragging an expression can be used to rearrange data when its algebraic properties make this possible.
Other tools taken from programming environments use the analysis of dependence graphs between functions, mathematical objects, and proof commands. This analysis can lead to quicker tools to help find and correct errors in specifications, thus making the development of completely proved software faster.
Powerful analyses also make it possible to extract natural-language presentations from proof data structures, thus making the results of proof developments understandable by mathematicians and engineers outside the community of Coq and CtCoq users.
The CtCoq proof environment has been used successfully in the development of algorithms for symbolic computation, trajectory planning, and program partial evaluation.
Future research around this user interface aims, on the one hand, at better integration with symbolic computation and computer algebra systems and, on the other, at better use of dependency graphs to make the maintenance and re-engineering of large proofs feasible.
Publication references for this research can be found at: http://www.inria.fr/croap/publications.html
The CtCoq system can be retrieved by following the instructions found at: http://www.inria.fr/ctcoq/ctcoq-eng.html
Yves Bertot - INRIA
Tel: +33 4 9365 7739 | <urn:uuid:a67667a3-9a44-48a3-9476-dff878b94636> | 2.515625 | 870 | Academic Writing | Software Dev. | 23.299594 | 675 |
What are tachyons?
Tachyons are hypothetical particles that can only travel faster than the speed of light. As you probably know, objects with a real number for mass can never travel at the speed of light because of Einstein's theory of relativity. As a consequence of this theory, as an object's velocity increases, its mass increases, as can be seen from the following formula: mass = rest_mass / sqrt(1 - v^2/c^2). At the speed of light the mass becomes infinite, so it would take an infinite amount of energy for a massive particle to reach the speed of light. These objects are sometimes called tardyons. Photons can travel at the speed of light because they have no mass; their energy is E = h * nu, where h is Planck's constant and nu is the frequency of the photon.
In order for something to travel faster than the speed of light it would have to have an imaginary number for its mass. An imaginary number is a multiple of the square root of a negative number. For a particle traveling faster than the speed of light, the denominator of mass = rest_mass / sqrt(1 - v^2/c^2) becomes imaginary; an imaginary rest mass would counteract this, so we (in our rest frame) would see something with a real mass that always traveled faster than the speed of light.
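A short numerical sketch of the formula above shows how the mass factor blows up as v approaches c. The electron rest mass is used purely as an example; any rest mass gives the same ratio.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def relativistic_mass(rest_mass_kg, v):
    """m = m0 / sqrt(1 - v^2/c^2); real-valued only for v < c."""
    return rest_mass_kg / math.sqrt(1.0 - (v / C) ** 2)

m0 = 9.109e-31  # electron rest mass in kg, used only as an example
for fraction in [0.0, 0.5, 0.9, 0.99, 0.999, 0.9999]:
    m = relativistic_mass(m0, fraction * C)
    print(f"v = {fraction:.4f} c -> m/m0 = {m / m0:8.2f}")

# At v = c the denominator is zero (the mass diverges); for v > c the argument
# of the square root is negative, which is why a tachyon would need an
# imaginary rest mass to keep the observed mass real.
```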
There have been a few experiments to look for tachyons using a device called a Cherenkov detector. This detector is able to measure the speed of a particle traveling through a medium. Photons travel at a slower speed inside a medium. If a particle travels through a medium at a speed that is greater than the speed of light for that medium, Cherenkov radiation occurs. This is analogous to the sonic boom produced when an airplane travels faster than the speed of sound in air, or the shock wave at the bow of a ship.
If tachyons existed you would be able to see Cherenkov radiation in a vacuum. A few Cherenkov experiments were conducted in a vacuum and no radiation was found, so it is generally accepted that tachyons do not exist.
I hope this helped you.
Christina L. Hebert Graduate Student at Fermilab
|last modified 12/11/1999 firstname.lastname@example.org| | <urn:uuid:0a836341-2dd7-41a3-aa34-9109e795e68e> | 3.828125 | 482 | Q&A Forum | Science & Tech. | 58.802976 | 676 |
No one knows how much warming is "safe". What we do know is that climate change is already harming people and ecosystems. Its reality can be seen in melting glaciers, disintegrating polar ice, thawing permafrost, changing monsoon patterns, rising sea levels, changing ecosystems and fatal heat waves.
Scientists are not the only ones talking about these changes. From the apple growers in Himachal to the farmers in Vidharbha and those living in disappearing islands in the Sunderbans are already struggling with the impacts of climate change.
But this is just the beginning. We need to act to avoid catastrophic climate change. While not all regional effects are known yet, here are some likely future effects if we allow current trends to continue.
Relatively likely and early effects of small to moderate warming:
Natural systems, including glaciers, coral reefs, mangroves, Arctic ecosystems, alpine ecosystems, Boreal forests, tropical forests, prairie wetlands and native grasslands, will be severely threatened.
Longer term catastrophic effects if warming continues:
Greenland and Antarctic ice sheets are melting. Unless checked, warming from emissions may trigger the irreversible meltdown of the Greenland ice sheet in the coming decades, which would add up to a seven-meter rise in sea level over some centuries. New evidence showing the rate of ice discharge from parts of the Antarctic means that it is also facing a risk of meltdown.
Never before has humanity been forced to grapple with such an immense environmental crisis. If we do not take urgent and immediate action to stop global warming, the damage could become irreversible. | <urn:uuid:93f23c86-06b2-4c01-8d4e-f0341afe508c> | 3.75 | 324 | Knowledge Article | Science & Tech. | 31.269855 | 677 |
Do Radioisotope Clocks Need Repair? Testing the Assumptions of Isochron Dating Using K-Ar, Rb-Sr, Sm-Nd, and Pb-Pb Isotopes
by Steven A. Austin, Ph.D.
RATE II: Radioisotopes and the Age of The Earth: Results of a Young-Earth Creationist Research Initiative, (Volume II), L. Vardiman et al., eds. (San Diego, CA: Institute for Creation Research and the Creation Research Society, 2005)
The assumptions of conventional whole-rock and mineral isochron radioisotope dating were tested using a suite of radioisotopes from two Precambrian rocks. Amphibolite from the Beartooth Mountains of Wyoming shows evidence of thorough metamorphism by isochemical processes from andesite by an early Precambrian magma-intrusion event. A diabase sill, exposed within the wall of Grand Canyon at Bass Rapids, formed by a rapid intrusion event. The event segregated minerals gravitationally, apparently starting from an isotopically homogeneous magma. Although K-Ar, Rb-Sr, Sm-Nd, and Pb-Pb methods ought to yield concordant isochron dates for each of these magmatic events, these four radioisotope pairs gave significantly discordant ages. Special allowance was made for larger-than-conventional uncertainties expressed as 2σ errors associated with the calculated “ages.” Within a single Beartooth amphibolite sample, three discordant mineral isochron “ages” range from 2515±110 Ma (Rb-Sr mineral isochron) to 2886±190 Ma (Sm-Nd mineral isochron). The diabase sill in Grand Canyon displays discordant isochron “ages” ranging from 841.5±164 Ma (K-Ar whole-rock isochron) to 1379±140 Ma (Sm-Nd mineral isochron). Although significant discordance exists between the K-Ar, Rb-Sr, Sm-Nd, and Pb-Pb radioisotope methods, each radioisotope pair appears to yield concordant “ages” internally between whole-rocks and minerals. Internal concordance is best illustrated from the Bass Rapids diabase sill by the tightly constrained Rb-Sr whole-rock and mineral isochron “ages” of 1055±46 Ma and 1060±24 Ma, respectively. The most problematic discordance is the Sm-Nd and Pb-Pb whole-rock and mineral isochron “ages” that significantly exceed the robust Rb-Sr whole-rock and mineral isochron “ages.” It could be argued that the robust Rb-Sr whole-rock and mineral isochron “ages” are in error, but an adequate explanation for the error has not been offered. The geological context of these Precambrian rocks places severe limitations on possible explanations for isochron discordance. Inheritance of minerals, slow cooling, and post-magmatic loss of daughter radioisotopes are not supported as processes causing isochron discordance in Beartooth amphibolite or Bass Rapids diabase. Recently, geochronologists researching the Great Dyke, a Precambrian layered mafic and ultramafic intrusion of Zimbabwe in southeast Africa, have documented a similar pattern of radioisotope discordance. Alpha-emitting radioisotopes (147Sm, 235U, and 238U) give older “ages” than β-emitting radioisotopes (87Rb and 40K) when applied to the same rocks. Therefore, it can be argued that a change in radioisotope decay rates in the past could account for these discordant isochron “ages” for the same geologic event. Conventional radioisotope clocks need repair.
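For readers unfamiliar with the arithmetic behind isochron dating, the sketch below works through the Rb-Sr case: the slope of the best-fit line through (87Rb/86Sr, 87Sr/86Sr) measurements equals e^(λt) − 1, so the fitted slope can be inverted for an age. The data points are invented for illustration (chosen to give roughly the ~1060 Ma Rb-Sr figure quoted above) and are not measurements from the paper.

```python
import numpy as np

LAMBDA_RB87 = 1.42e-11  # commonly used decay constant of 87Rb, in 1/yr

# Hypothetical (87Rb/86Sr, 87Sr/86Sr) pairs for several minerals of one rock;
# illustrative values only, not data from this study.
rb_sr = np.array([0.05, 0.40, 1.10, 2.30])
sr_sr = np.array([0.7048, 0.7101, 0.7207, 0.7389])

# An isochron is a straight line: (87Sr/86Sr) = intercept + slope * (87Rb/86Sr),
# with slope = exp(lambda * t) - 1. Fit the line, then invert for t.
slope, intercept = np.polyfit(rb_sr, sr_sr, 1)
age_yr = np.log(1.0 + slope) / LAMBDA_RB87

print(f"initial 87Sr/86Sr ~ {intercept:.4f}")
print(f"isochron slope    ~ {slope:.5f}")
print(f"apparent age      ~ {age_yr / 1e6:.0f} Ma")
```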
radioisotope decay rates, isochron dating, RATE II
For Full Text
Please see the Download PDF link above for the entire article. | <urn:uuid:22201536-2f4b-440e-b7c3-52532dee3e4e> | 2.90625 | 877 | Academic Writing | Science & Tech. | 34.373456 | 678 |
The principal purpose of this project is to demonstrate the feasibility of instrumenting heavily icecovered fjords to obtain real-time data of the upper ocean.
Greenland's ice-covered fjords are the connections between the Greenland Ice Sheet and the open ocean. These dynamic environments enable access of warm ocean water to outlet glaciers, causing large amounts of melting under floating tongues (e.g. Rignot and Steffen, 2008; Motyka et al., in press). On the other hand, deep fjords also enable ice to break up mechanically through the process of calving. These icebergs are then transported away from the glaciers, where they eventually melt. The interactions between the ocean, its ice cover (the melange), the glacier ice, and the atmosphere remain poorly understood, mostly due to the extremely difficult conditions for direct observations (e.g. Amundson et al., 2010). Yet, it is increasingly clear that the dynamic behavior of the ice sheet is dominated by its interaction with the surrounding oceans (e.g. Rignot and Kanagaratnam, 2006; Joughin et al., 2008; Holland et al., 2008). It is therefore imperative to gain a better understanding of the physical processes that determine the heat and mass exchange between ocean and ice. This is an issue not only for Greenland, but at all the larger glaciated areas of the planet. The current inability of predicting changes at marine-terminating glaciers is responsible for the lack of a reliable estimate of the future cryospheric contribution to sea level rise (IPCC, 2007; Truffer and Fahnestock, 2008).
To make progress in the task of predicting the behavior of outlet glaciers, a better understanding of physical processes in glacier-fed fjords is necessary. This will require direct observations. The physical environment for this type of work is extremely challenging. The inner fjords are often covered in brash ice and large ice bergs, sometimes mixed with sea ice. Large ice bergs can roll, creating hazards to boats. The area very close to glaciers can have turbulent upwelling with fast currents, the proximity of the glacier is too dangerous to work in due to calving activity, and calving events can send meter-scale waves through the fjord. Moorings are difficult to deploy, and have to be deeply submerged to avoid interaction with the keels of the bigger ice bergs, making it impossible to measure processes and exchanges at the critical atmosphere-ice-ocean boundary. Here we propose to measure the properties of the upper water column using drifting buoys. The proposed experiment carries a certain risk, as the equipment could get destroyed. We will attempt to minimize this risk by letting the buoys drift, and by constructing them more solidly, so they are better able to absorb impacts. Also, they will be equipped with Iridium satellite modems, so that data can be uploaded on a regular basis and will not be lost should the buoys fail.
We expect to obtain a record of up to one year length of temperature, salinity and currents in the upper water column (down to ~30m) of the inner Godthabsfjord, near the main outlet glacier Kangiata Nunata Sermia (KNS). We propose to deploy four Lagrangian drifters; two on the glacier side and two on the outer side of a sill that was created by a previous glacier advance (Mortensen et al., subm. to JGR). The deployments in the heavily ice-covered inner fjord are considered higher risk. The deployments on either side of the sill balance the risk of deploying in heavy ice with the desire to obtain data at those locations. The other expected result is to gain experience with instrumenting these difficult areas, where many details of physical processes have remained elusive. For example, if drifting buoys prove to be successful, one could develop these further into profiling instruments that are capable of sampling the entire water column. Another possible application is to develop drifting depth sounders to obtain geometric observations where boats cannot penetrate. Before such plans are implemented, it is imperative to gain some experience with lower cost instruments. | <urn:uuid:7e5d03a3-1703-4c51-b973-7b9dbb1300f8> | 3.4375 | 852 | Academic Writing | Science & Tech. | 40.168238 | 679 |
A bundle of recent genetic studies have suggested modern humans had sex with Neanderthals thousands of years ago when the two populations roamed the planet alongside each other. However, the bones left behind by the two species don't bear any obvious traces of interbreeding, and a new study of monkeys in Mexico shows why we shouldn't expect them to.
Researchers examined blood samples, hair samples and measurements collected from mantled howler monkeys and black howler monkeys that were live-captured and released in Mexico and Guatemala between 1998 and 2008. The two monkey species splintered off from a common ancestor about 3 million years ago; today they live in mostly separate habitats, except for a "hybrid zone" in the state of Tabasco in southeastern Mexico, where they coexist and interbreed.
Through an analysis of genetic markers, from both mitochondrial DNA (the DNA in the cells' energy-making structures that gets passed down by mothers) and nuclear DNA, the researchers identified 128 hybrid individuals that were likely the product of several generations of interbreeding. Even so, these hybrids shared most of their genome with either one of the two species and were physically indistinguishable from the pure individuals of that species, the team found.
"The implications of these results are that physical features are not always reliable for identifying individuals of hybrid ancestry," Liliana Cortés-Ortiz, an evolutionary biologist and primatologist at the University of Michigan, said in a statement. "Therefore, it is possible that hybridization has been underestimated in the human fossil record."
The work on howler monkeys was part of the doctoral dissertation of Mary Kelaita, now a postdoctoral fellow at the University of Texas at San Antonio. Kelaita added that the study "suggests that the lack of strong evidence for hybridization in the fossil record does not negate the role it could have played in shaping early human lineage diversity."
When scientists finally finished sequencing the Neanderthal genome in 2010, they revealed that between 1 percent and 4 percent of some modern humans' DNA came from the stocky hominids. This suggested humans had sex with Neanderthals, picking up some genes, and possibly even an immunity boost, from Neanderthals before the population disappeared about 30,000 years ago. But not all scientists are convinced the genetic evidence alone proves ancient interbreeding and a study last year found that even if humans and Neanderthals did have sex, those encounters would have rarely produced offspring.
The scientists of the new study say more work is needed to learn about interbreeding and the factors governing the expression of physical characteristics in hybrid individuals.
The research was detailed online Friday in the American Journal of Physical Anthropology.
© 2012 LiveScience.com. All rights reserved. | <urn:uuid:490085d7-698f-48cf-9420-4e97837c7072> | 3.671875 | 593 | News Article | Science & Tech. | 26.23494 | 680 |
A glacier forms because accumulation exceeds ablation in a location. This accumulation zone, after it thickens to more than 30 m, begins to flow. For a glacier to survive it must have a consistent and persistent accumulation zone.

To diagnose a glacier that is disappearing, look for:

1) Emergence of rock outcrops in the upper region of the glacier.

2) Recession of the margin of the glacier in its upper reaches.

3) Lack of consistent snowcover at the end of the summer in the accumulation zone of the glacier.

Published Paper in The Cryosphere, 2010

Quaternary International Paper, 2011
Why these criteria?
Glaciers respond to climate in an attempt to achieve equilibrium. A glacier advances due to a climate cooling/snowfall increase that causes positive mass balance. A climate warming/snowfall decrease leads to negative mass balances and glacier retreat. To reestablish equilibrium a retreating glacier must lose enough of its highest-ablating sections, usually at the lowest elevations, so that accumulating snows near the head of the glacier once again are equivalent to overall ablation, and an equilibrium balance is approached. If a glacier cannot retreat to a point where equilibrium is established, it is in disequilibrium with the climate system. A glacier that is in disequilibrium with present climate will melt away with a continuation of this climate.
We often focus on terminus change of a glacier, which tells us how the glacier is currently responding to recent climate. A glacier can retreat rapidly and still survive if it has an accumulation zone. Thus, to forecast survival we need to focus on the accumulation zone, not just the terminus. If the accumulation zone no longer retains accumulation consistently, it will begin to thin. A glacier needs 50-70% of its surface area to be snow-covered even at the end of the summer to be healthy. A thinning accumulation zone is evident when the margins of the glacier in this accumulation zone, the upper portion of the glacier, recede. Also, new outcrops of rock may be exposed in the accumulation zone due to thinning. This has been observed both in the North Cascades and on Swiss glaciers.
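That 50-70% rule of thumb can be expressed as a simple accumulation-area-ratio (AAR) check, sketched below. The glacier areas and the 0.5 survival threshold are invented examples used only to show the arithmetic, not measurements or a formal classification scheme.

```python
# Accumulation-area ratio (AAR): the fraction of a glacier's area that is still
# snow-covered at the end of summer. Healthy glaciers typically sit near 0.5-0.7.
def aar(accumulation_area_km2, total_area_km2):
    return accumulation_area_km2 / total_area_km2

examples = {
    "glacier A": (1.8, 2.5),   # hypothetical: large, persistent accumulation zone
    "glacier B": (0.6, 2.0),   # hypothetical: accumulation zone nearly gone
}

for name, (acc, total) in examples.items():
    ratio = aar(acc, total)
    verdict = "can retreat to equilibrium" if ratio >= 0.5 else "forecast to melt away"
    print(f"{name}: AAR = {ratio:.2f} -> {verdict}")
```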
Below are examples of glaciers that have disappeared, that will disappear, and that can retreat to a new position of equilibrium with current climate. This is not to say that further warming will not eliminate many of the glaciers that have an accumulation zone today. Sometimes adjacent glaciers can have differing forecasts based on their varied response to recent climate. It is unusual for an entire mountain range to be inhospitable to glaciers today.
Reports of the rapid loss of all glaciers in Glacier National Park or the Nepal Himalaya are exaggerated. In each case the glaciers are retreating notably, but some of the glaciers still have persistent accumulation zones. In the Himalaya the most photographed glacier is probably the Khumbu Glacier on the south side of Mount Everest. Above the famed Khumbu Icefall there is a persistent accumulation zone, indicating it can retreat to a new point of equilibrium with current climate.
Above is Foss Glacier in 1985, still covering a large area of the east slope of Mount Hinman. | <urn:uuid:f43774c9-7747-41d3-8bfc-66df060ce147> | 4.09375 | 697 | Academic Writing | Science & Tech. | 35.405603 | 681 |
Silicon is a semiconducting element. It behaves physically and chemically as a non-metal but is able to conduct electricity although not as well as the metals.
Silicon is used to make the "chips" or tiny circuits found in everything from computers to VCRs. Scientists at the National Research Council Canada (NRC) have now developed a way to "wire" the surface of a silicon crystal with a single strand of molecules.
Their goal is to produce nano-structures, molecular electronic devices one thousand times smaller than a single bacterium. | <urn:uuid:58cd456e-130d-479b-b5fd-0302bc9fa692> | 3.515625 | 219 | Knowledge Article | Science & Tech. | 35.468882 | 682 |
Clouds and cosmos
I discovered by chance the work of Henrik Svensmark on the process of cloud formation.

His results called into question the theses defended by the proponents of global warming due to human activities. In doing some research on the Internet I could gauge the incredible misdirection of those who advise governments on the decisions to be taken. And not only misdirection, but also manipulation of information, disinformation and the hoodwinking of the public.

The latest work at CERN on the subject confirms Svensmark's conclusions.

A story to follow, because a reining-in of the IPCC/GIEC is proving necessary. Aug 30
"If it is an unusually warm winter in New York, it is probably also warm in Washington, D.C., for example," Hansen explained. "At high- and mid-latitudes Rossby Waves are the dominant cause of short-term temperature variations. And since those are fairly long waves we didn't think we needed a station at every one degree of separation."
5 October 2012 ESO celebrates its 50th anniversary
The Cosmics Leaving Outdoor Droplets (CLOUD) experiment uses a special cloud chamber to study the possible link between galactic cosmic rays and cloud formation. Based at the Proton Synchrotron (PS) at CERN, this is the first time a high-energy physics accelerator has been used to study atmospheric and climate science. The results should contribute much to our understanding of clouds and climate. Cosmic rays are charged particles that bombard the Earth's atmosphere from outer space. Studies suggest they may have an influence on the amount of cloud cover through the formation of new aerosols (tiny particles suspended in the air that seed cloud droplets). This is supported by satellite measurements, which show a possible correlation between cosmic-ray intensity and the amount of low cloud cover.
CERN Finds “Significant” Cosmic Ray Cloud Effect Best known for its studies of the fundamental constituents of matter, the CERN particle-physics laboratory in Geneva is now also being used to study the climate. Researchers in the CLOUD collaboration have released the first results from their experiment designed to mimic conditions in the Earth’s atmosphere. By firing beams of particles from the lab’s Proton Synchrotron accelerator into a gas-filled chamber, they have discovered that cosmic rays could have a role to play in climate by enhancing the production of potentially cloud-seeding aerosols. – Physics World, 24 August 2011 If Henrik Svensmark is right, then we are going down the wrong path of taking all these expensive measures to cut carbon emissions; if he is right, we could carry on with carbon emissions as normal.
Jasper Kirkby is a superb scientist, but he has been a lousy politician. In 1998, anticipating he'd be leading a path-breaking experiment into the sun's role in global warming, he made the mistake of stating that the sun and cosmic rays "will probably be able to account for somewhere between a half and the whole of the increase in the Earth's temperature that we have seen in the last century." Global warming, he theorized, may be part of a natural cycle in the Earth's temperature.
CHURCHVILLE, VA—Get ready for the next big bombshell in the man-made warming debate.
Climate Change: News and Comments
The Danish physicist Henrik Svensmark probably did not suspect, when providing his data and offering his remarks to those running the "CLOUD" experiment at CERN in Geneva, that the results of this experiment would raise significant political problems.
WUWT reader Max_B tips us off to this article and video. According to Nigel Calder’s Blog , CERN’s CLOUD experiment (testing Svensmarks’s cosmic-ray theory) shows a large enhancement of aerosol production and the results are due for release in 2 or 3 months’ time. There is a short Physics World interview with Jasper Kirkby which is worthwhile viewing and was published a couple of days ago…
J.A. performed the nucleation rate analysis. S.S. conducted the APi-TOF analysis.
Results paper published in GIGS (January 2013)
- Article posted 7 April 2010 - "The plurality of voices is not a proof of any worth. For when a truth is somewhat difficult to discover, it would be surprising that a whole people should have found it rather than a single man."
The Climatic Research Unit email controversy (also known as "Climategate" ) [ 2 ] [ 3 ] began in November 2009 with the hacking of a server at the Climatic Research Unit (CRU) at the University of East Anglia (UEA) by an external attacker. [ 4 ] [ 5 ] Several weeks before the Copenhagen Summit on climate change, an unknown individual or group breached CRU's server and copied thousands of emails and computer files to various locations on the Internet. | <urn:uuid:57ca599a-245e-40c8-95d4-d743e3a8f855> | 2.78125 | 1,298 | Personal Blog | Science & Tech. | 43.698981 | 683 |
Synthetic Biology: TUM Researchers Develop Novel Kind Of Fluorescent Protein
Proteins are the most important functional biomolecules in nature with numerous applications in life science research, biotechnology and medicine. So how can they be modified in the most effective way to attain certain desired properties? In the past, the modifications were usually carried out either chemically or via genetic engineering. The team of Professor Arne Skerra from the TUM Chair of Biological Chemistry has now developed a more elegant combined solution: By extending the otherwise universal genetic code, the scientists are able to coerce bacterial cells to produce tailored proteins with synthetic functional groups. To put their idea to the test, they set out to crack a particularly hard nut: The scientists wanted to incorporate a non-natural amino acid at a specific site into a widely used natural protein.
In bioresearch this protein is commonly known as "GFP" (= green fluorescent protein). It emits a bright green glow and stems originally from a jellyfish that uses the protein to make itself visible in the darkness of the deep sea. The team chose a pale lavender coumarin pigment, serving as side chain of a non-natural amino acid, as the synthetic group. The scientists "fed" this artificial amino acid to a laboratory culture of Escherichia coli bacteria ““ the microorganism workhorses of genetic engineering, whose natural siblings are also found in the human intestine. Since the team had transferred the modified genetic blueprints for the GFP to the bacteria ““ including the necessary biosynthesis machinery ““ it incorporated the coumarin amino acid at a very specific site into the fluorescent protein.
This spot in the GFP was carefully chosen, explains Professor Skerra: "We positioned the synthetic amino acid at a very close distance from the fluorescence center of the natural protein." The scientists employed the principle of the so-called Foerster resonance energy transfer, or FRET for short. Under favorable conditions, this process of physical energy transfer, named after the German physical chemist Theodor Foerster, allows energy to be conveyed from one stimulated pigment to another in a radiation-less manner.
It was precisely this FRET effect that the scientists implemented very elegantly in the new fluorescent protein. They defined the distance between the imported chemical pigment and the biological blue-green (cyan, to be more precise) pigment of the jellyfish protein in such a way that the interplay between the two dyes resulted in a completely novel kind of fluorescent chimeric biomolecule. Because of the extreme proximity of the two luminescent groups the pale lavender of the synthetic amino acid can no longer be detected; instead, the typical blue-green color of the fluorescent protein dominates. "What is special here, and different from the natural GFP, is that, thanks to the synthetically incorporated amino acid, the fluorescence can be excited with a commercially available black-light lamp in place of an expensive dedicated LASER apparatus," explains Sebastian Kuhn, who conducted these groundbreaking experiments as part of his doctoral thesis.
According to Skerra, the design principle of the novel bio-molecule, which is characterized by a particularly large and hard to achieve wavelength difference between excitation and emitted light, should open numerous interesting applications: "We have now demonstrated that the technology works. Our strategy will enable the preparation of customized fluorescent proteins in various colors for manifold future purposes." This research project was financially supported by the German Research Foundation (DFG) as part of the Excellence Cluster "Munich Center for Integrated Protein Science" (CIPS-M).
On the Net: | <urn:uuid:d3ef8be8-d68b-4d8f-bea4-d1248144d3a5> | 3.390625 | 746 | News Article | Science & Tech. | 13.356932 | 684 |
Bill Scanlon, NREL
January 02, 2013
It takes outside-the-box thinking to outsmart the solar spectrum and set a world record for solar cell efficiency. The solar spectrum has boundaries and immutable rules. No matter how much solar cell manufacturers want to bend those rules, they can't.
So how can we make a solar cell that has a higher efficiency than the rules allow?
That's the question scientists in the III-V Multijunction Photovoltaics Group at the U.S. Department of Energy's (DOE) National Renewable Energy Laboratory (NREL) faced 15 years ago as they searched for materials they could grow easily that also have the ideal combinations of band gaps for converting photons from the sun into electricity with unprecedented efficiency.
A band gap is an energy that characterizes how a semiconductor material absorbs photons, and how efficiently a solar cell made from that material can extract the useful energy from those photons.
"The ideal band gaps for a solar cell are determined by the solar spectrum," said Daniel Friedman, manager of the NREL III-V Multijunction Photovoltaics Group. "There's no way around that."
But this year, Friedman's team succeeded so spectacularly in bending the rules of the solar spectrum that NREL and its industry partner, Solar Junction, won a coveted R&D 100 award from R&D Magazine for a world-record multijunction solar cell. The three-layered cell, SJ3, converted 43.5% of the energy in sunlight into electrical energy — a rate that has stimulated demand for the cell to be used in concentrator photovoltaic (CPV) arrays for utility-scale energy production.
Last month, that record of 43.5% efficiency at 415 suns was eclipsed with a 44% efficiency at 947 suns. Both records were verified by NREL. This is NREL's third R&D 100 award for advances in ultra-high-efficiency multijunction cells. CPV technology gains efficiency by using low-cost lenses to multiply the sun's intensity, which scientists refer to as numbers of suns.
Friedman says earlier success with multijunction cells — layered semiconductors each optimized to capture different wavelengths of light at their junctions — gave NREL a head start.
The SJ3 cells fit into the market for utility-scale CPV projects. They're designed for application under sunlight concentrated to 1,000 times its normal intensity by low-cost lenses that gather the light and direct it at each cell. In regions of clear atmosphere and intense sunlight, such as the U.S. desert Southwest, CPV has outstanding potential for lowest-cost solar electricity. There is enough available sunlight in these areas to supply the electrical energy needs of the entire United States many times over.
Bending Material to the Band Gaps on the Solar Spectrum
Sunlight is made up of photons of a wide range of energies from roughly zero to four electron volts (eV). This broad range of energies presents a fundamental challenge to conventional solar cells, which have a single photovoltaic junction with a single characteristic band gap energy.
Conventional cells most efficiently convert those photons that very nearly match the band gap of the semiconductors in the cell. Higher-energy photons give up their excess energy to the solar cell as waste heat, while lower-energy photons are not collected by the solar cell, and their energy is completely lost.
This behavior sets a fundamental limit on the efficiency of a conventional solar cell. Scientists overcome this limitation by using multijunction solar cells. Using multiple layers of materials in the cells, they create multiple junctions, each with different band gap energies. Each converts a different energy range of the solar spectrum. An invention in the mid-1980s by NREL's Jerry Olson and Sarah Kurtz led to the first practical, commercial multijunction solar cell, a GaInP/GaAs two-junction cell with 1.85-eV and 1.4-eV bandgaps that was recognized with an R&D 100 award in 1990, and later to the three-junction commercial cell based on GaInP/GaAs/Ge that won an R&D 100 award in 2001.
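To make the band-gap argument concrete, here is a rough back-of-the-envelope sketch of why splitting the spectrum across several junctions harvests more energy than a single junction. The uniform 0-4 eV "spectrum" is a deliberate simplification rather than the real solar spectrum, so the printed percentages are only qualitative; the band-gap values are the ones quoted in this article.

```python
# Toy illustration: a photon is absorbed by the largest band gap it can excite
# and yields exactly that band-gap energy; excess energy becomes heat and
# sub-gap photons are lost entirely. Uniform 0-4 eV photon energies are a
# crude stand-in for the solar spectrum (qualitative only).
import random

random.seed(1)
photons = [random.uniform(0.0, 4.0) for _ in range(100_000)]  # photon energies in eV

def harvested_fraction(band_gaps, photons):
    """Fraction of incident photon energy converted under the toy model above."""
    gaps = sorted(band_gaps, reverse=True)
    useful = 0.0
    for energy in photons:
        for gap in gaps:            # try the largest gap first
            if energy >= gap:
                useful += gap
                break
    return useful / sum(photons)

print(f"one junction (1.4 eV):             {harvested_fraction([1.4], photons):.0%}")
print(f"three junctions (1.85/1.4/1.0 eV): {harvested_fraction([1.85, 1.4, 1.0], photons):.0%}")
```

The point of the comparison is simply that the stacked gaps leave far less photon energy unconverted than any single gap can.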
The researchers at NREL knew that if they could replace the 0.67-eV third junction with one better tuned to the solar spectrum, the resulting cell would capture more of the sun's light throughout the day. But they needed a material that had an atomic structure that matched the lattice of the layer above it — and that also had the ideal band gap.
"We knew from the shape of the solar spectrum and modeling solar cells that what we wanted was a third junction that has a band gap of about 1.0 electron volt, lattice-matched to gallium arsenide," Friedman said. "The lattice match makes materials easier to grow."
They concentrated on materials from the third and fifth columns of the periodic table because these so-called III-V semiconductors have similar crystal structures and ideal diffusion, absorption, and mobility properties for solar cells.
But there was seemingly no way to capture the benefits of the gallium arsenide material while matching the lattice of the layer below, because no known III-V material compatible with gallium arsenide growth had both the desired 1-eV band gap and the lattice-constant match to gallium arsenide.
That changed in the early 1990s, when a research group at NTT Laboratories in Tokyo working on an unrelated problem made an unexpected discovery. Even though gallium nitride has a higher band gap than gallium arsenide, when you add a bit of nitrogen to gallium arsenide, the band gap shrinks — exactly the opposite of what was expected to happen.
"That was very surprising, and it stimulated a great deal of work all over the world, including here at NREL," Friedman said. "It helped push us to start making solar cells with this new dilute nitride material."
Good Band Gaps, but Not So Good Solar Material
The new solar cells NREL developed had two things going for them — and one big issue.
"The good things were that we could make the material very easily, and we did get the band gap and the lattice match that we wanted," Friedman said. "The bad thing was that it wasn't a good solar cell material. It wasn't very good at converting absorbed photons into electrical energy. Materials quality is critical for high-performance solar cells, so this was a big problem."
Still, NREL continued to search for a solution.
"We worked on it for quite a while, and we got to a point where we realized we had to choose between two ways of collecting current from a solar cell," Friedman said. "One way is to let the electrical carriers just diffuse along without the aid of an electric field. That's what you do if you have good material." | <urn:uuid:27f26aae-a3e4-42ab-933d-39c486196c55> | 3.359375 | 1,441 | News Article | Science & Tech. | 51.338573 | 685 |
Deforestation monitoring needs better capacity and access to technologies
Most tropical developing countries are struggling to monitor and report their greenhouse gas emissions from forest loss, and will need international support to implement the UN REDD+ scheme, according to a study.
The Reducing Emissions from Deforestation and Degradation (REDD) scheme aims to reverse forest cover loss and curb related carbon emissions by putting a financial value on stored carbon.
Countries voluntarily report back on their implementation of REDD+, but many lack the capacity to monitor forest loss and carbon emissions using key technologies such as satellite remote sensing, according to a paper in the May–June issue of Environmental Science and Policy.
The study ranked tropical developing countries according to their ability to implement REDD+, and found that few such countries had improved their monitoring capacity between 2005 and 2010, with some even losing capacity, such as Burkina Faso and Mozambique.
African countries were of most concern, as poor Internet connections and satellite coverage limit access to data. Meanwhile, mountainous countries such as Ecuador and Peru face technical challenges in analysing satellite images in areas with significant variations in altitude.
Just four of the 99 analysed countries — Argentina, China, India and Mexico — had very small capacity gaps. These countries had also managed to increase their total forest cover between 2005 and 2010, unlike countries with larger gaps, where there was a net loss of forests in the same period.
The paper recommends that the former group of countries could serve as advisors in South-South capacity building activities and regional collaboration efforts that could reduce the cost of accessing, processing and analysing remote sensing data.
The international community should invest in better access to satellite data, especially for Central African and American countries, the study further recommended. Monitoring of forest fires and vulnerable high-carbon areas, such as tropical peatland systems in South-East Asia which are being lost to oil palm and pulpwood plantations, was also identified as a priority.
Louis Verchot, a co-author of the study from the Center for International Forestry Research in Bogor, Indonesia, called for swift efforts to close capacity gaps.
He told SciDev.Net that investment in countries suffering such gaps could yield high returns.
"We laid out the study on a country by country basis, so this should help investors to lay out priorities and help target different types of intervention," Verchot added.
The study provides useful insights on developing a steady emission reduction scheme for REDD+, said Nirarta Samadhi from Indonesia's REDD+ Task Force. He said it highlighted important details about capability gaps that would be valuable to global supporters.
Environmental Science and Policy doi: 10.1016/j.envsci.2012.01.005 (2012)
pdjmoo ( The Natural Eye Project | United States of America )
6 May 2012
There is no time left to be fooling about with more reports on greenhouse gas relative to forest loss and the REDD+ programs. We all know enough now to demand that we cease and desist from any further deforestation for many reasons, the least of which is climate change, not to mention all the life and ecosystems being devastated that ultimately impact we humans and indigenous peoples. Further deforestation is a no-win for life and the planet. The only win is for profits and we just have to find a biodegradable alternative to timber for consumer needs. The palm oil and agriculture can be addressed without destruction of forests. A better use of our time and money. We can continue to kick the bucket down the road with dates like 2020 or find a way to have a global moratorium on forest destruction NOW...and that will require courage and cooperation from all levels. The matter is urgent. Then you can do all the reports, analysis and studies you want, once the destruction has ceased. A lot of food for thought here and willingness to move beyond our vested interests and old positions for the betterment and good of all life on this planet.
Jorge Laine ( Venezuela )
8 May 2012
Tropical deforestation does not necessarily mean eventual greenhouse gas increment. Scientists must look for land use changes promoting atmospheric carbon capture and storage: for example greening of deserts constituting almost 1/3 of earth nonpermafrost land.
All SciDev.Net material is free to reproduce providing that the source and author are appropriately credited. For further details see Creative Commons. | <urn:uuid:a39a2d08-01a8-4cfd-a64b-30208373568d> | 3.15625 | 898 | Comment Section | Science & Tech. | 35.328865 | 686 |
Oct. 22, 1998
Oct. 21, 1998 -- A vibrant celestial photo album of some of NASA Hubble Space Telescope's most stunning views of the universe is being unveiled today on the Internet.
Called the Hubble Heritage Program, this Technicolor gallery is being assembled by a team of astronomers at Hubble's science operations center, the Space Telescope Science Institute (STScI) in Baltimore, MD.
The Hubble Heritage program is intended to provide the public with some of the very best celestial views the Space Telescope has to offer. A "newly processed" Hubble "picture of the month" will be shared with the public on an ongoing basis at a dedicated web site: http://heritage.stsci.edu. A new image will be posted on the first Thursday of every month.
The STScI team is sifting through Hubble telescope's treasure trove of space images to uncover some of the most striking pictures ever taken by the orbiting observatory.
The Hubble images were originally taken for astronomical research. The images are digitally stored on optical disks in the Hubble archives for other scientists to retrieve for further research.
Aside from scientific value, the images offer compelling views of the universe's infinite wonders. They include all types of astronomical phenomena, from nearby planets, to colorful nebulae, to remote galaxies.
The first batch of pictures released today includes a view into the star-studded hub of our galaxy; Saturn in "natural color"; a stellar-wind sculpted bubble carved by a massive hot star; and an overhead view of a magnificent spiral galaxy, dubbed "sunny side up."
Since its launch in 1990 the Hubble Space Telescope has taken pictures of over 10,000 celestial objects. The most scientifically interesting observations have been released to news organizations routinely. A large number of pictures have not previously been presented to the public.
The task of selecting images for the Hubble Heritage project involves more than just flipping through Hubble's 5.4-terabyte scrapbook of over 130,000 space pictures. Beautiful color pictures have been meticulously assembled by skilled image processing specialists at STScI.
The images selected from the archive are originally black and white and must be combined with other pictures of the same object, taken through different filters. Photographic film, home video cameras, and even the human eye reconstruct color views in a similar manner.
The Institute's image processing specialists carefully selected colors to bring out the most detail in the pictures. These aesthetic pictures can also yield new insights into the nature of a celestial object.
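The compositing step described above can be illustrated with a small sketch: three monochrome exposures taken through different filters are each rescaled and then stacked into the red, green, and blue channels of a single image. The random arrays and the simple percentile stretch below are placeholders for illustration only; this is not the Institute's actual processing pipeline.

```python
# Minimal sketch of filter-to-RGB compositing (illustrative, not STScI's pipeline).
import numpy as np

def stretch(img, lo_pct=1, hi_pct=99):
    """Rescale one exposure to the 0..1 range with a percentile stretch."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((img - lo) / (hi - lo + 1e-12), 0.0, 1.0)

# Stand-ins for calibrated exposures through long-, medium- and short-wavelength filters.
red_exposure = np.random.rand(512, 512)
green_exposure = np.random.rand(512, 512)
blue_exposure = np.random.rand(512, 512)

rgb = np.dstack([stretch(red_exposure),
                 stretch(green_exposure),
                 stretch(blue_exposure)])  # shape (512, 512, 3), ready to display
```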
The team continues working away on Hubble images, and assembling enticing new views of celestial wonders for the public.
"These images communicate, at a visceral level, the awe and excitement that we experience when exploring the universe with Hubble. It is our chance to repay the public that supports us," says Heritage program scientist Keith Noll.
-- end --
The Space Telescope Science Institute is operated by the Association of Universities for Research in Astronomy, Inc. (AURA) for NASA, under contract with the Goddard Space Flight Center, Greenbelt, MD. The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency (ESA).
EDITOR'S NOTE: Images and photo captions associated with this release are available on the Internet at: http://heritage.stsci.edu and http://oposite.stsci.edu/pubinfo/1998/28 or via links in http://oposite.stsci.edu/pubinfo/latest.html or http://oposite.stsci.edu/pubinfo/pictures.html.
The above story is reprinted from materials provided by Space Telescope Science Institute.
Note: Materials may be edited for content and length. For further information, please contact the source cited above.
Note: If no author is given, the source is cited instead. | <urn:uuid:053a2981-e5e8-4f85-aba2-445edb1c65a6> | 2.859375 | 795 | News Article | Science & Tech. | 36.245673 | 687 |
Gallery: Images of Mars from the Curiosity rover
Curiosity has stopped in its tracks on Mars as scientists investigate a shiny object on the planet surface that probably came from the rover itself.
Images from a sandy area called Rocknest, where Curiosity began scooping soil samples last weekend, show a small, oblong object.
"The rover team's assessment is that the bright object is something from the rover, not Martian material," JPL's Mars Science Laboratory team wrote in a status report on Tuesday. "It appears to be a shred of plastic material, likely benign, but it has not been definitively identified."
Scientists are now taking more photos in hopes of identifying the object.
In the meantime, the rover has stopped mid-action, with a soil sample still sitting in its scoop. The team suspended scientific activities on the 62nd Martian day of the mission (or sol 62), including use of the rover's robotic arm.
"To proceed cautiously, the team is continuing the investigation for another day before deciding whether to resume processing of the sample in the scoop. Plans include imaging of surroundings with the Mastcam," the JPL team wrote.
A press conference is planned Thursday.
Speculation about the object was rampant on Twitter and message boards.
A Twitter account dedicated to the object, @benignplastic, also popped up Tuesday:
"I'm a little insulted @NASA seems to think I am so benign...that's how all the SciFi movies start you know..."
Curiosity's soil sampling will likely resume once scientists are confident the object won't contaminate testing.
The rover had stopped at Rocknest to begin its first major analysis, determining the chemical and mineralogical composition of the sand.
But Curiosity never got to the first stage, depositing the sand into its internal mechanism and shaking it out in a cleaning process.
Once the soil sampling is completed, Curiosity will drive about 100 yards further to Glenelg, an area where three different types of terrain converge.
626-578-6300, ext. 4475 | <urn:uuid:a8b822e2-df2c-4df1-a699-aae55a3a25de> | 2.703125 | 422 | News Article | Science & Tech. | 47.834182 | 688 |
7. A point is on the perpendicular bisector of a line segment if and only if it lies the same distance from the two endpoints.
There are two things that need to be proved here. The first is that if a point is on the perpendicular bisector of a line segment, then it is equidistant from the two endpoints of the segment.
If we only use two column proofs, the student might get the idea that all proofs have to be two column proofs. This is not so. It is just that two column proofs work very well for congruent triangle proofs. In a congruent triangle proof, we first need to get the three parts of one triangle congruent to the corresponding three parts in the other triangle, note that we have congruent triangles, then conclude that the things we are trying to prove to be congruent will then be corresponding parts of the congruent triangles. That is a minimum of five steps, each step having a reason, which is a previously established statement. The two column format helps the student to keep all of these ideas straight and organized.
However, the fact of the matter is that when we get away from congruent triangle proofs, the two column format does not always work as well. This result is an example. While it is possible to devise a two column proof, a prose proof using the isosceles triangle theorems might prove to be simpler.
If the point is on the perpendicular bisector of the line segment, consider the triangle whose base is the segment and whose vertex is the point. The line from the vertex to the midpoint of the base is then perpendicular to the base, so the triangle is isosceles, and the point is therefore equidistant from the endpoints of the line segment.
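One way to see this first half concretely is with coordinates; the check below is an illustration added here, not part of the original exercise.

```latex
% Place the segment on the x-axis with its midpoint at the origin:
%   A = (-a, 0),  B = (a, 0),  so the perpendicular bisector is the y-axis.
% Any point P on that bisector has the form P = (0, y), and
\[
  PA = \sqrt{(0+a)^2 + y^2} = \sqrt{a^2 + y^2} = \sqrt{(0-a)^2 + y^2} = PB,
\]
% so P is equidistant from the two endpoints, as claimed.
```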
For the converse - if the point is equidistant from the endpoints of the line segment, then we again have an isosceles triangle, and the line from the vertex to the midpoint of the base will be perpendicular to the base, and thus be the perpendicular bisector of the base. | <urn:uuid:f8b49205-6972-49a9-83ab-a66cb2a926a1> | 3.78125 | 445 | Academic Writing | Science & Tech. | 56.653818 | 689 |
Introduction to Integrals
The Definite Integral
The definite integral is a convenient notation used to represent the left-hand and right-hand approximations discussed in the previous section. $\int_a^b f(x)\,dx$ means the area of the region bounded by $f$, the $x$-axis, and the lines $x = a$ and $x = b$. Writing $\int_a^b f(x)\,dx$ is equivalent to writing
$\lim_{n \to \infty} \sum_{i=1}^{n} f(x_i)\,\Delta x$
on the interval $[a, b]$, but it is a much more compact way of doing so. Note also the similarity between the two expressions. This should serve as a clear reminder that the definite integral is just the limit of right-hand and left-hand approximations.
Unlike the indefinite integral, which represents a function, the definite integral represents a number, and is simply the signed area under the curve of $f$. The area is considered "signed" because, according to the method of calculating the areas by subdivisions, the regions located below the $x$-axis will be counted as negative, and the regions above will be counted as positive. Negative regions cancel out positive regions, and the definite integral represents the total balance between the two over the given interval. For example, for a definite integral whose region below the $x$-axis is exactly the same size as its region above the $x$-axis, the answer is zero: the negative region exactly cancels the positive region.
Properties of the Definite Integral
The definite integral has certain properties that should be intuitive, given its definition as the signed area under the curve:
- $\int_a^b c\,f(x)\,dx = c\int_a^b f(x)\,dx$
- $\int_a^b \left[f(x)+g(x)\right]dx = \int_a^b f(x)\,dx + \int_a^b g(x)\,dx$
- If $c$ is on the interval $[a, b]$, then $\int_a^b f(x)\,dx = \int_a^c f(x)\,dx + \int_c^b f(x)\,dx$
This means that we can break up a graph into convenient units and find the definite integral of each section and then add the results to find the total signed area for the whole region.
The Fundamental Theorem of Calculus
The fundamental theorem of calculus, or "FTC", offers a quick and powerful method of evaluating definite integrals. It states: if $F$ is an antiderivative of $f$, then
$\int_a^b f(x)\,dx = F(b) - F(a)$
$\int_0^1 x^2\,dx = \frac{(1)^3}{3} - \frac{(0)^3}{3} = \frac{1}{3}$
Often, a shorthand is used that means the same as what is written above:
$\int_0^1 x^2\,dx = \left[\frac{x^3}{3}\right]_0^1 = \frac{1}{3}$
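As a quick numerical sanity check of this example (added here for illustration; it is not part of the original text), the left- and right-hand sums from the previous section should bracket the exact value 1/3:

```python
# Approximate the integral of x^2 from 0 to 1 with left- and right-hand sums
# and compare with the exact value given by the Fundamental Theorem of Calculus.
def left_sum(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

def right_sum(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + (i + 1) * dx) for i in range(n)) * dx

f = lambda x: x ** 2
print(left_sum(f, 0, 1, 1000))   # ~0.3328 (slight underestimate)
print(right_sum(f, 0, 1, 1000))  # ~0.3338 (slight overestimate)
print(1 / 3)                     # exact value from the FTC
```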
One interpretation of the FTC is that the area under the graph of the derivative is equal to the total change in the original function. For example, recall that velocity is the derivative of position. So,
$\int_a^b v(t)\,dt = s(b) - s(a)$
This means that the change in area under the velocity curve represents the total change in position. | <urn:uuid:f092ba97-0d03-497d-811a-07a6f4f4e39d> | 4.5 | 580 | Tutorial | Science & Tech. | 46.219777 | 690 |
From the time of Aristotle (384-322 BC) until the late 1500’s, gravity was believed to act differently on different objects.
- Drop a metal bar and a feather at the same time… which one hits the ground first?
- Obviously, common sense will tell you that the bar will hit first, while the feather slowly flutters to the ground.
- In Aristotle’s view, this was because the bar was being pulled harder (and faster) by gravity because of its physical properties.
- Because everyone sees this when they drop different objects, it wasn’t questioned for almost 2000 years.
Galileo Galilei was the first major scientist to refute (prove wrong) Aristotle’s theories.
- In his famous (at least to Physicists!) experiment, Galileo went to the top of the leaning tower of Pisa and dropped a wooden ball and a lead ball, both the same size, but different masses.
- They both hit the ground at the same time, even though Aristotle would say that the heavier metal ball should hit first.
- Galileo had shown that the different rates at which some objects fall are due to air resistance, a type of friction.
- Get rid of friction (air resistance) and all objects will fall at the same rate.
- Galileo said that the acceleration of any object (in the absence of air resistance) is the same.
- To this day we follow the model that Galileo created.
ag = g = 9.81m/s2
ag = g = acceleration due to gravity
Since gravity is just an acceleration like any other, it can be used in any of the formulas that we have used so far.
- Just be careful about using the correct sign (positive or negative) depending on the problem.
Examples of Calculations with Gravity
Example 1: A ball is thrown up into the air at an initial velocity of 56.3m/s. Determine its velocity after 4.52s have passed.
In the question the velocity upwards is positive, and I’ll keep it that way. That just means that I have to make sure that I use gravity as a negative number, since gravity always acts down.
vf = vi + at
= 56.3m/s + (-9.81m/s2)(4.52s)
vf = 12.0 m/s
This value is still positive, but smaller. The ball is slowing down as it rises into the air.
Example 2: I throw a ball down off the top of a cliff so that it leaves my hand at 12m/s. Determine how fast is it going 3.47 seconds later.
In this question I gave a downward velocity as positive. I might as well stick with this, but that means I have defined down as positive. That means gravity will be positive as well.
vf = vi + at
= 12m/s + (9.81m/s2)(3.47s)
vf = 46 m/s
Here the number is getting bigger. It’s positive, but in this question I’ve defined down as positive, so it’s speeding up in the positive direction.
Example 3: I throw up a ball at 56.3 m/s again. Determine how fast is it going after 8.0s.
We’re defining up as positive again.
vf = vi + at
= 56.3m/s + (-9.81m/s2)(8.0s)
vf = -22 m/s
Why did I get a negative answer?
- The ball reached its maximum height, where it stopped, and then started to fall down.
- Falling down means a negative velocity.
There’s a few rules that you have to keep track of. Let’s look at the way an object thrown up into the air moves.
As the ball is going up…
- It starts at the bottom at the maximum speed.
- As it rises, it slows down.
- It finally reaches its maximum height, where for a moment its velocity is zero.
- This is exactly halfway through the flight time.
As the ball is coming down…
- The ball begins to speed up, but downwards.
- When it reaches the same height that it started from, it will be going at the same speed as it was originally moving at.
- It takes just as long to go up as it takes to come down.
Example 4: I throw my ball up into the air (again) at a velocity of 56.3 m/s.
a) Determine how much time it takes to reach its maximum height.
- It reaches its maximum height when its velocity is zero. We’ll use that as the final velocity.
- Also, if we define up as positive, we need to remember to define down (like gravity) as negative.
a = (vf - vi) / t
t = (vf - vi) / a
= (0 - 56.3m/s) / -9.81m/s2
t = 5.74s
b) Determine how high it goes.
- It’s best to try to avoid using the number you calculated in part (a), since if you made a mistake, this answer will be wrong also.
- If you can’t avoid it, then go ahead and use it.
vf2 = vi2 + 2ad
d = (vf2 - vi2) / 2a
= (0 - 56.32) / 2(-9.81m/s2)
d = 1.62e2 m
c) Determine how fast is it going when it reaches my hand again.
- Ignoring air resistance, it will be going as fast coming down as it was going up.
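As a quick check of Examples 4(a)-(c), the same arithmetic can be done in a few lines of Python, using the same sign convention (up is positive, so g = -9.81 m/s2). This snippet is an illustration added here, not part of the original notes.

```python
# Numerical check of Example 4: time to the top, maximum height, and the
# velocity when the ball returns to the thrower's hand.
g = -9.81       # m/s^2, up defined as positive
vi = 56.3       # m/s, initial upward velocity

t_top = (0 - vi) / g                 # time to reach maximum height
h_max = (0**2 - vi**2) / (2 * g)     # from vf^2 = vi^2 + 2ad with vf = 0
v_back = vi + g * (2 * t_top)        # velocity on returning to the hand

print(f"time to top : {t_top:.2f} s")     # ~5.74 s
print(f"max height  : {h_max:.0f} m")     # ~162 m
print(f"return speed: {v_back:.1f} m/s")  # ~-56.3 m/s (same speed, downward)
```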
You might have heard people in movies say how many "gee’s" they were feeling.
- All this means is that they are comparing the acceleration they are feeling to regular gravity.
- So, right now, you are experiencing 1g… regular gravity.
- During lift-off the astronauts in the space shuttle experience about 4g’s.
- That works out to about 39m/s2.
- Gravity on the moon is about 1.7m/s2 = 0.17g | <urn:uuid:43ce7457-915e-4a8a-b78f-fca95b28656c> | 3.953125 | 1,359 | Tutorial | Science & Tech. | 81.255107 | 691 |
SEATTLE - Relocating is stressful, even for a microbe. Knowing how a microorganism can quickly adapt to challenges of a new habitat helps researchers better understand how commensals (good microbes) and pathogens colonize diverse environments including soil, plant roots, and the human gut. Institute for Systems Biology (ISB) researchers are the first to discover that a protein once thought to have no regulatory function in microbes actually helps them to rapidly adapt to new environments.
ISB scientists had previously discovered that the gene encoding this protein, called transcription factor B (TFB), is present in multiple copies in many microorganisms called archaea – especially those that are known to live in environments that are constantly changing. To understand why, the researchers used an interdisciplinary systems approach that systematically analyzed, across many environments, the consequences of deleting each copy of the gene or introducing a mutated copy on the health of one such organism, Halobacterium salinarum, which lives in saturated brine.
Simultaneously, they observed how all of the other genes and the complex molecular networks in H. salinarum responded to these genetic manipulations. They integrated millions of data points generated from thousands of such experiments and analyzed patterns in these data across evolutionary timescales by analyzing genome sequences of diverse organisms. They made the remarkable discovery that the microbe gained capability for acclimating to new environments by simply transferring genetic information from one copy of the TFB gene to another, akin to cutting and replacing text in one copy of a document from an edited version. Because it has seven variant copies of TFBs, H. salinarum could perform a large array of such mix-and-match experiments to explore new solutions for adaptation.
How does this work? Nitin Baliga, Professor and Director of ISB and senior author on the paper, explained that "TFBs bind to different locations in the genome and function like wires inside the cell to execute programs that determine which genes in the genome need to be turned on or off and when." In other words, by transferring information across TFBs, an organism can rapidly rewire its networks to generate new programs that enable new capabilities with the same set of genes.
"It´s astounding," remarked Dr. Baliga.
This discovery helps us to understand how archaea colonize diverse environments to give structure and function to microbial communities. This is important for two reasons: First, archaea make up 20 percent of biomass on Earth and serve important roles in biogeochemical cycles, which are similar to our circulatory systems and necessary to maintain a healthy planet. Second, understanding the mechanics of adaptation will help us better understand and predict how microbes and communities might respond to pollution or climate change due to anthropogenic activities. Furthermore, because they have similar functions in eukaryotic organisms, we can also begin to understand how duplicated copies of TFIIB proteins reorganize networks for development of body plans in animals.
Understanding that this family of proteins in archaea has regulatory consequences for adaptation into new environments is "knowledge that can be applied to understanding how the TFIIB proteins might have come to mediate the encoding and execution of regulatory programs in humans," said Serdar Turkarslan, the lead author of the paper, which was published on Nov. 22 in "Molecular Systems Biology."
This study was supported by the U.S. Department of Energy´s Genomic Science Funding, the National Institutes of Health, and the National Science Foundation.
About the Institute for Systems Biology
The Institute for Systems Biology (ISB) is an internationally renowned, non-profit research institute headquartered in Seattle and dedicated to the study and application of systems biology. Founded by Leroy Hood, Alan Aderem and Ruedi Aebersold, ISB seeks to unravel the mysteries of human biology and identify strategies for predicting and preventing diseases such as cancer, diabetes and AIDS. ISB's systems approach integrates biology, computation and technological development, enabling scientists to analyze all elements in a biological system rather than one gene or protein at a time. Founded in 2000, the Institute has grown to 13 faculty and more than 300 staff members; an annual budget of more than $50 million; and an extensive network of academic and industrial partners. For more information about ISB, visit www.systemsbiology.org | <urn:uuid:d7eb0b34-44df-4277-83dc-95246fd5100e> | 3.203125 | 895 | News (Org.) | Science & Tech. | 19.074564 | 692 |
Researchers in Kerala to use modern techniques for DNA analysis
Alison is not aware of her white cousin housed in a rescue centre at Puducherry. She, however, is on chattering terms with the grey striped version of her species scurrying about in a cage near hers in the laboratory of the Department of Zoology, University of Kerala, here.
Ever since she was captured from the outskirts of the city in 2008, the black squirrel, named after a United Kingdom-based scientist who helped identify the animal, has been the subject of intense scientific curiosity.
Following up on preliminary investigations that have confirmed the black and white animals to be variants of the Indian striped palm squirrel, researchers here have launched a mission to decipher the genetic causes of the colour change.
The multi-institutional project will use modern techniques for DNA analysis.
Efforts are focussed on identifying the chromosome responsible for the genetic mutation in the grey squirrel, and the resultant colour change.
“Apparently, one of the genes in the animal acted as a switch to activate the change in pigmentation,” explains Oommen V. Oommen, Council of Scientific and Industrial Research Emeritus Scientist, who heads the project.
While the research team has black and grey squirrels in its possession, the scientists have collected blood and hair samples of the white variant from Puducherry where the animal is kept in a rescue centre operated by the Forest Department.
Unlike the United Kingdom, the United States and Canada where the black squirrel has attained sizeable populations, there have been no reports of the mutant versions being sighted anywhere else in India.
“We had earlier carried out gene sequencing to establish that the black squirrel is a variant of Funambulus palmarum [Indian three-striped palm squirrel]. That work will have to be repeated to ascertain the mutation responsible for melanisation [black pigmentation],” says Dr. Oommen.
“Our attempt is to understand the basic science behind the colour change, what it is that throws the switch. The project could have far reaching implications for mankind. It would perhaps obviate the need to use bleaching creams for a fair skin or have a sunbath for a tan.”
The sequencing programmes are expected to generate meaningful data in the next two months.
The team includes Dr. Sanal George, Rajiv Gandhi Centre for Biotechnology; Dr. Dileep Kumar, Anaswara Krishnan and Dr. Achuth Sankar S. Nair; Department of Computational Biology and Bioinformatics; K. Ramachandran and A.S. Vijayasree, Department Of Zoology, University of Kerala; Dr. Divya, Central University, Kasaragode; Dr. Helen Mcrobie , East Anglia University, Cambridge; Dr. M.A. Akbarsha, Bharatidasan University; Dr. Anil Kumar, Deputy Conservator Of Forests, Puducherry; and Dr. Jacob Alexander, Veterinary surgeon, Thiruvananthapuram Zoo.
This is not the first time Dr. Oommen and his team have been on the trail of animals exhibiting abnormal colour characteristics. The scientists have already lined up their next project, to carry out gene sequencing of a hen that changes colour.
Belonging to a small-time farmer near here, the bird has acquired a celebrity status for its ability to change from black to white and back without shedding feathers. “Unlike the squirrel, the switch seems to be active throughout the life of the bird,” says Dr. Oommen. | <urn:uuid:7606c777-7c3b-439b-bce5-8cf1d65f8f89> | 3.25 | 745 | News Article | Science & Tech. | 39.304142 | 693 |
Boffins simulate plasma-eating dusty 'life-forms'
Dust to dust, etc
Physicists have discovered that charged particles of dust can form themselves into life-like structures that appear to be capable of reproducing and passing information along, behaviour reminiscent of life on Earth.
The researchers (led by V N Tsytovich of the General Physics Institute, Russian Academy of Science, in Moscow, along with boffins from the Max-Planck Institute for Extraterrestrial Physics in Germany, and the University of Sydney) have developed a computer model to help them understand "the behaviour of complex mixtures of inorganic materials in a plasma".
Although convention dictates that there would be very little organisation in a system of such particles, the researchers demonstrated that under the right conditions, order could emerge.
As the plasma becomes polarised, the model shows microscopic strands of particles twisting into helical, or corkscrew structures.
The simulation suggests that the dusty corkscrews have two stable configurations - a large spiral and a small spiral. Each helix could contain various sequences of these two states, the researchers say, which raises the possibility that they could store information.
The team reports that the structures can divide, form copies (transmit their stored information), interact with neighbouring spirals, and even induce changes in other spirals. More speculatively, they suggest these changes could evolve as less stable structures break down.
So, are there corkscrew-shaped dust-aliens floating about in interstellar space?
Gregor Morfill of the Max Planck Institute for Extraterrestrial Physics in Germany is not prepared to go quite that far. He told New Scientist: "It has a lot of the hallmarks for how we define life at present, but we have not simulated life. To us, they're just a special form of plasma crystal."
However, Tsytovich is prepared to be a bit more flexible on his definition of what might constitute life, saying that the spirals "exhibit all the necessary properties to qualify them as candidates for inorganic living matter. They are autonomous, they reproduce, and they evolve".
The next step is to go hunting for a real environment where such structures could have emerged. Morfill suggests that planetary rings would be the best place to start the search.
The research is reported in the 14 August edition of the New Journal of Physics, and New Scientist has a more extensive write up here. ®
2:7 And the LORD God formed man of the dust of the ground, and breathed into his nostrils the breath of life; and man became a living soul
Bit behind the times
Haven't they ever heard of Birkeland currents and read existing plasma research which already demonstrates this type of helical twisted threading in plasma at all scales from lab to space.
Re-inventing the research and calling it their own in a slightly different way.
dark satanic clouds
Now the phrase 'dust devil' seems so much more personal. | <urn:uuid:8e42c785-f1b0-4a69-b060-51bf90081460> | 3.015625 | 622 | News Article | Science & Tech. | 32.437895 | 694 |
Image of the Sun from SOHO
Courtesy of NASA
SOHO Catches Glimpse of the Sun's "Far Side"
News story originally written on June 23, 1999
The Solar and Heliospheric Observatory (SOHO) caught a rare view of the far side of the Sun. Scientists can now see if a solar storm is coming before it reaches Earth. This may save the satellite industry millions of dollars each year.
When the Sun releases large amounts of energy, the light makes patches of hydrogen gas glow. This glow is invisible to Earth, but not to SOHO. This new technology can give scientists a few days warning before the storm actually hits.
SOHO also captured the largest shadow ever seen. When Comet Hale-Bopp passed by in 1997, SOHO took a few photographs. Behind the comet was a shadow over 150 million kilometers long. When the comet came near the Sun, it developed a long tail made of hydrogen. This tail and the comet itself were projected onto the sky.
You might also be interested in:
Hale-Bopp continues to offer new surprises as two astronomers report of their study of the comet. Using the Hubble Space Telescope and the International Ultraviolet Explorer, the astronomers did a year-long...more
It was another exciting and frustrating year for the space science program. It seemed that every step forward led to one backwards. Either way, NASA led the way to a great century of discovery. Unfortunately,...more
The Space Shuttle Discovery lifted off from Kennedy Space Center on October 29th at 2:19 p.m. EST. The weather was great as Discovery took 8 1/2 minutes to reach orbit. This was the United States' 123rd...more
A moon was discovered orbiting the asteroid, Eugenia. This is only the second time in history that a satellite has been seen circling an asteroid. A special mirror allowed scientists to find the moon...more
Will Russia ever put the service module for the International Space Station in space? NASA officials want an answer from the Russian government. The necessary service module is currently waiting to be...more
A coronal mass ejection (CME) happened on the Sun early last month. The material that was thrown out from this explosion passed the ACE spacecraft. The SWICS instrument on ACE has produced a new and very...more
J.S. Maini of the Canadian Forest Service called forests the "heart and lungs of the world." This is because forests filter air and water pollution, absorb carbon dioxide, release oxygen, and maintain...more | <urn:uuid:db527059-363c-414b-b23c-d5c04a7be983> | 3.578125 | 570 | Content Listing | Science & Tech. | 56.929373 | 695 |
This weather balloon is full of helium gas. It is surrounded by Earth's atmosphere, which is mostly nitrogen and oxygen gasses. Helium is "lighter" (less dense) than nitrogen or oxygen, so the balloon will rise when the scientist lets go of it.
Image courtesy of the University Corporation for Atmospheric Research.
Gas is one of the four common states of matter. The three others are liquid, solid, and plasma. There are also some other exotic states of matter that have been discovered in recent years.
The air in Earth's atmosphere is mostly a mixture of different types of gases. A gas usually has much lower density than a solid or liquid. A quantity of gas doesn't have a specific shape; in this way it is like a liquid and different from a solid. If a gas is enclosed in a container, it will take on the shape of the container (a liquid will too).
The volume of a gas changes if the temperature or pressure changes. There are several scientific laws, called the "gas laws", that describe how the volume, temperature, and pressure of a gas are related.
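As one concrete illustration of those laws, the combined gas law relates two states of a fixed amount of gas. The balloon numbers in the sketch below are made up for the example and are not taken from this article.

```python
# Combined gas law: P1*V1/T1 = P2*V2/T2 (temperatures in kelvin).
def new_volume(p1, v1, t1, p2, t2):
    """Volume after a pressure/temperature change, from P1*V1/T1 = P2*V2/T2."""
    return v1 * (p1 / p2) * (t2 / t1)

# A weather balloon holding 5.0 m^3 of helium at ground level (101 kPa, 288 K)
# rising to where the pressure is 50 kPa and the temperature is 250 K:
print(new_volume(p1=101.0, v1=5.0, t1=288.0, p2=50.0, t2=250.0))  # ~8.8 m^3
```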
The molecules or atoms in a gas are much further apart than in a solid or a liquid. Gas molecules or atoms are usually flying around at very high speeds, occasionally bouncing off each other or the walls of the container the gas is in.
When a gas is cooled or placed under high pressure, it can condense and turn into a liquid. If a liquid boils or evaporates, it will become a gas. Under some circumstances, usually very low pressure, a solid can turn directly into a gas (without first melting and becoming a liquid). When a solid turns directly into a gas, it is called "sublimation".
Most of the air in Earth's atmosphere is either nitrogen or oxygen gas. Balloons are often filled with helium gas; since helium is lighter (less dense) than air, helium balloons "float" or rise up in air. When liquid water boils or evaporates, it turns into a gas called "water vapor". Most of the gas in the atmospheres of the giant planets Jupiter and Saturn is hydrogen gas. In recent years, carbon dioxide gas has become quite famous because of its role in the Greenhouse Effect and global warming.
You might also be interested in:
Solid is one of the four common states of matter. The three others are gas, liquid, and plasma. There are also some other exotic states of matter that have been discovered in recent years. Unlike liquids...more
Plasma is known as the fourth state of matter. The other three states are solid, liquid and gas.Almost everything is made up of atoms (your dog, your science book, this computer...). The atom has a nucleus...more
Density is a measure of how much mass is contained in a given unit volume (density = mass/volume). Put simply, if mass is a measure of how much ‘stuff’ there is in an object, density is a measure of how...more
Most things around us are made of groups of atoms connected together into packages called molecules. Molecules are made from atoms of one or more elements. Some molecules are made of only one type of...more
A snowman, glass of water and steam might look very different but they are made of the same stuff! Just like any substance, water has three different forms, called states: solid, liquid and gas. The state...more
Have you ever left a glass of water out for a long time? Did you notice that the water disappears after a few days? That's because it evaporated! Evaporation is when water passes from a liquid to a gas....more
There is more nitrogen gas in the air than any other kind of gas. About four out of five of the molecules in Earth's atmosphere is nitrogen gas! A molecule of nitrogen gas is made up of two nitrogen atoms....more | <urn:uuid:7bf1d597-0484-4471-8f33-5503ed8fe8ab> | 3.875 | 860 | Knowledge Article | Science & Tech. | 56.872293 | 696 |
Introduction to Enzymes
The following has been excerpted from a very popular Worthington publication which was originally published in 1972 as the Manual of Clinical Enzyme Measurements. While some of the presentation may seem somewhat dated, the basic concepts are still helpful for researchers who must use enzymes but who have little background in enzymology.
Early Enzyme Discoveries
The existence of enzymes has been known for well over a century. Some of the earliest studies were performed in 1835 by the Swedish chemist Jöns Jakob Berzelius, who termed their chemical action catalytic. It was not until 1926, however, that the first enzyme was obtained in pure form, a feat accomplished by James B. Sumner of Cornell University. Sumner was able to isolate and crystallize the enzyme urease from the jack bean. His work was to earn him the 1946 Nobel Prize.
John H. Northrop and Wendell M. Stanley of the Rockefeller Institute for Medical Research shared the 1946 Nobel Prize with Sumner. They discovered a complex procedure for isolating pepsin. This precipitation technique devised by Northrop and Stanley has been used to crystallize several enzymes. | <urn:uuid:b1f146ea-468c-4e1e-980b-c4c17efb5378> | 3.734375 | 235 | Knowledge Article | Science & Tech. | 36.452391 | 697 |
Introduction to Enzymes
The following has been excerpted from a very popular Worthington publication which was originally published in 1972 as the Manual of Clinical Enzyme Measurements. While some of the presentation may seem somewhat dated, the basic concepts are still helpful for researchers who must use enzymes but who have little background in enzymology.
Effects of pH
Enzymes are affected by changes in pH. The most favorable pH value - the point where the enzyme is most active - is known as the optimum pH. This is graphically illustrated in Figure 14.
Extremely high or low pH values generally result in complete loss of activity for most enzymes. pH is also a factor in the stability of enzymes. As with activity, for each enzyme there is also a region of pH optimal stability.
The optimum pH value will vary greatly from one enzyme to another, as Table II shows:
In addition to temperature and pH there are other factors, such as ionic strength, which can affect the enzymatic reaction. Each of these physical and chemical parameters must be considered and optimized in order for an enzymatic reaction to be accurate and reproducible. | <urn:uuid:950e10c6-23a1-4ac4-896e-da30b265af84> | 3.828125 | 234 | Knowledge Article | Science & Tech. | 32.865254 | 698 |
Ames Research Center, Calif.
Oct. 04, 2004
NASA Infrared Images May Provide Volcano Clues
NASA scientists took infrared (IR) digital images of Mount Saint Helens last week. The images revealed signs of heat below the surface one day before the volcano erupted last Friday in southern Washington. The images may provide valuable clues as to how the volcano erupted.
Scientists flew an IR imaging system aboard a small Cessna Caravan aircraft over the mountain to acquire the data. "Based on the IR signal, the team predicted an imminent eruption," said Steve Hipskind, acting chief of the Earth Science Division at NASA's Ames Research Center (ARC), Moffett Field, Calif.
"We were seeing some thermal artifacts in the floor of the Mount Saint Helens' crater in southern Washington," said Bruce Coffland, a member of the Airborne Sensor Facility at ARC. " We flew Thursday and used the 50-channel MODIS/ASTER Airborne Simulator (MASTER) digital imaging system. We are working to create images from the IR data that depict the thermal signatures on the dome," Coffland added.
MASTER is an airborne simulator instrument similar to the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) high-resolution infrared imager carried on NASA's Terra Earth observation satellite. Scientists plan to fly the MASTER instrument again over the volcano early this week.
The ARC airborne sensor team was in the area taking data for a United States Geological Survey (USGS) study examining some of the effects of the 1980 Mount Saint Helens' eruption. "This had been planned for some time, and we were there totally by coincidence," Coffland said. The science objectives for the USGS study were to outline the boundaries of the lava flows associated with Mt. St. Helens' previous eruptions in 1980.
"We flew four flight lines over the mountain," Coffland said. "It's a continuous scan image, eight miles long (13 kilometers) and about 2.3 miles (3.7 kilometers) wide." There were four adjoining flight lines flown for Joel Robinson, an investigator at USGS, Menlo Park, Calif.
After the plane landed, technicians downloaded data from a computer hard drive, and began to process the data to produce an image format for use by scientists. NASA will post the pre and post eruption infrared images on the Web.
Sky Research, based in Ashland, Ore. provided the Cessna Caravan, a propeller driven, single-engine airplane that carried the IR imager.
To access images on the Internet as they become available, visit: http://amesnews.arc.nasa.gov/releases/2004/helen/helen.html
- end -
Back to NASA Homepage | <urn:uuid:db720b17-d1f3-4e42-9053-ce04a92539e9> | 3.625 | 625 | News (Org.) | Science & Tech. | 43.268661 | 699 |