Session 40 - The Interstellar Medium.
Display session, Tuesday, June 09
Gamma Ray Burst (GRB) explosions can make kpc-size shells and holes in the interstellar media (ISM) of spiral galaxies if much of the energy heats the local gas to above 10^7 K. Disk blowout is probably the major cause for energy loss in this case, but the momentum acquired during the pressurized expansion phase can be large enough that the bubble still snowplows to a kpc diameter. This differs from the standard model for the origin of such shells by multiple supernovae, which may have problems with radiative cooling, evaporative losses, and disk blowout. Evidence for giant shells with energies of ~10^53 ergs is summarized. Some contain no obvious central star clusters and may be GRB remnants, although sufficiently old clusters would be hard to detect. The expected frequency of GRBs in normal galaxies can account for the number of such shells.
Wikipedia on particle physics
A quick one. I was told that Wikipedia's definition of particle physics was very bad. And indeed, here is what it said:
Particle physics is a branch of physics that studies the elementary subatomic constituents of matter and radiation, and their interactions. The field is also called high energy physics, because many elementary particles do not occur under ambient conditions on Earth. They can only be created artificially during high energy collisions with other particles in particle accelerators.
Particle physics has evolved out of its parent field of nuclear physics and is typically still taught in close association with it. Scientific research in this area has produced a long list of particles.
Wait, what? Particles that can only be created in accelerators? Particle physics taught together with nuclear physics? Research that produces particles (that one's a gem)?
What world does this person live in? I rewrote it:
Particle Physics is a branch of physics that studies the existence and interactions of particles, which are the constituents of what is usually referred to as matter or radiation. In our current understanding, particles are excitations of quantum fields and interact following their dynamics. Most of the interest in this area is in fundamental fields, those that cannot be described as a bound state of other fields. The set of fundamental fields and their dynamics are summarized in a model called the Standard Model and, therefore, Particle Physics is largely the study of the Standard Model particle content and its possible extensions.
I think it's much better now. Let's see how long it takes some hot-headed Wikipedia editor to revert it. These days, contributing to Wikipedia is a pain because of people like that.
Photo caption: Belgian physicist Francois Englert, left, speaks with British physicist… (Fabrice Coffrini / AFP/Getty)
For physicists, it was a moment like landing on the moon or the discovery of DNA.
The focus was the Higgs boson, a subatomic particle that exists for a mere fraction of a second. Long theorized but never glimpsed, the so-called God particle is thought to be key to understanding the existence of all mass in the universe. The revelation Wednesday that it -- or some version of it -- had almost certainly been detected amid hundreds of trillions of high-speed collisions in a 17-mile track near Geneva prompted a group of normally reserved scientists to erupt with joy.
For The Record
Los Angeles Times Friday, July 06, 2012 Home Edition Main News Part A Page 4 News Desk 1 inches; 48 words Type of Material: Correction
Large Hadron Collider: In some copies of the July 5 edition, an article in Section A about the machine used by physicists at the European Organization for Nuclear Research to search for the Higgs boson referred to the $5-billion Large Hadron Collider. The correct amount is $10 billion.
Peter Higgs, one of the scientists who first hypothesized the existence of the particle, reportedly shed tears as the data were presented in a jam-packed and applause-heavy seminar at CERN, the European Organization for Nuclear Research.
"It's a gigantic triumph for physics," said Frank Wilczek, an MIT physicist and Nobel laureate. "It's a tremendous demonstration of a community dedicated to understanding nature."
The achievement, nearly 50 years in the making, confirms physicists' understanding of how mass -- the stuff that makes stars, planets and even people -- arose in the universe, they said.
It also points the way toward a new path of scientific inquiry into the mass-generating mechanism that was never before possible, said UCLA physicist Robert Cousins, a member of one of the two research teams that have been chasing the Higgs boson at CERN.
"I compare it to turning the corner and walking around a building -- there's a whole new set of things you can look at," he said. "It is a beginning, not an end."
Leaders of the two teams reported independent results that suggested the existence of a previously unseen subatomic particle with a mass of about 125 to 126 billion electron volts. Both groups got results at a "five sigma" level of confidence -- the statistical requirement for declaring a scientific "discovery."
"The chance that either of the two experiments had seen a fluke is less than three parts in 10 million," said UC San Diego physicist Vivek Sharma, a former leader of one of the Higgs research groups. "There is no doubt that we have found something."
But he and others stopped just shy of saying that this new particle was indeed the long-sought Higgs boson. "All we can tell right now is that it quacks like a duck and it walks like a duck," Sharma said.
In this case, quacking was enough for most.
"If it looks like a duck and quacks like a duck, it's probably at least a bird," said Wilczek, who stayed up past 3 a.m. to watch the seminar live over the Web while vacationing in New Hampshire.
Certainly CERN leaders in Geneva, even as they referred to their discovery simply as "a new particle," didn't bother hiding their excitement.
The original plan had been to present the latest results on the Higgs search at the International Conference on High Energy Physics, a big scientific meeting that began Wednesday in Melbourne.
But as it dawned on CERN scientists that they were on the verge of "a big announcement," Cousins said, officials decided to honor tradition and instead present the results on CERN's turf.
The small number of scientists who theorized the existence of the Higgs boson in the 1960s -- including Higgs of the University of Edinburgh -- were invited to fly to Geneva.
For the non-VIP set, lines to get into the auditorium began forming late Tuesday. Many spent the night in sleeping bags.
All the hubbub was due to the fact that the discovery of the Higgs boson is the last piece of the puzzle needed to complete the so-called Standard Model of particle physics -- the big picture that describes the subatomic particles that make up everything in the universe, and the forces that work between them.
Over the course of the 20th century, as physicists learned more about the Standard Model, they struggled to answer one very basic question: Why does matter exist?
Higgs and others came up with a possible explanation: that particles gain mass by traveling through an energy field. One way to think about it is that the field sticks to the particles, slowing them down and imparting mass.
That energy field came to be known as the Higgs field. The particle associated with the field was dubbed the Higgs boson.
Higgs published his theory in 1964. In the 48 years since, physicists have eagerly chased the Higgs boson. Finding it would provide the experimental confirmation they needed to show that their current understanding of the Standard Model was correct.
On the other hand, ruling it out would mean a return to the drawing board to look for an alternative Higgs particle, or several alternative Higgs particles, or perhaps to rethink the Standard Model from the bottom up.
Either outcome would be monumental, scientists said.
By Jason Kohn, Contributing Columnist
Like many of us, scientific researchers tend to be creatures of habit. This includes research teams working for the National Oceanic and Atmospheric Administration (NOAA), the U.S. government agency charged with measuring the behavior of oceans, atmosphere, and weather.
Many of these climate scientists work with massive amounts of data – for example, the National Weather Service collecting up-to-the-minute temperature, humidity, and barometric readings from thousands of sites across the United States to help forecast weather. Research teams then rely on some of the largest, most powerful high-performance computing (HPC) systems in the world to run models, forecasts, and other research computations.
Given the reliance on HPC resources, NOAA climate researchers have traditionally worked onsite at major supercomputing facilities, such as Oak Ridge National Laboratory in Tennessee, where access to supercomputers is just steps away. As researchers create ever more sophisticated models of ocean and atmospheric behavior, however, the HPC requirements have become truly staggering.
Now, NOAA is using a super-high-speed network called “n-wave” to connect research sites across the United States with the computing resources they need. The network has been operating for several years, and today transports enough data to fill a 10-Gbps network to full capacity, all day, every day. NOAA is now upgrading this network to allow even more data traffic, with the goal of ultimately supporting 100-Gbps data rates.
“Our scientists were really used to having a computer in their basement,” says Jerry Janssen, manager, n-wave Network, NOAA, in a video about the project. “When that computer moved a couple thousand miles away, we had to give them a lot of assurances that, one, the data would actually move at the speed they needed it to move, but also that they could rely on it to be there. The amount of data that will be generated under this model will exceed 80-100 Terabits per day.”
The n-wave project means much more than just a massive new data pipe. It represents a fundamental shift in the way that scientists can conduct their research, allowing them to perform hugely demanding supercomputer runs of their data from dozens of remote locations. As a result, it gives NOAA climate scientists much more flexibility in where and how they work.
“For the first time, NOAA scientists and engineers in completely separate parts of the country, all the way to places like Alaska and Hawaii and Puerto Rico, will have the bandwidth they need, without restriction,” says Janssen. “NOAA will now be able to do things it never thought it could do before.”
In addition to providing fast, stable access to HPC resources, n-wave is also allowing NOAA climate scientists to share resources much more easily with scientists in the U.S. Department of Energy and other government agencies. Ideally, this level of collaboration and access to supercomputing resources will help climate scientists continue to develop more effective climate models, improve weather forecasts, and allow us to better understand our climate.
Powering Vital Climate Research
The high-speed nationwide HPC connectivity capability provided by n-wave is now enabling a broad range of NOAA basic science and research activities. Examples include:
- Basic data dissemination, allowing research teams to collect up-to-the-minute data on ocean, atmosphere, and weather from across the country, and make that data available to other research teams and agencies nationwide.
- Ensemble forecasting, where researchers run multiple HPC simulations using different initial conditions and modeling techniques, in order to refine their atmospheric forecasts and minimize errors.
- Severe weather modeling, where scientists draw on HPC simulations, real-time atmospheric data, and archived storm data to better understand and predict the behavior of storms.
- Advancing understanding of the environment to be able to better predict short-term and long-term environmental changes, mitigate threats, and provide the most accurate data to inform policy decisions.
All of this work is important, and will help advance our understanding of Earth’s climate. And it is all a testament to the amazing networking technologies and infrastructure that scientists now have at their disposal, which puts the most powerful supercomputing resources in the world at their fingertips – even when they are thousands of miles away.
Tornadoes are the most intense storms on the planet, and they’re never discussed without at least some mention of the term wind shear. Many of us sitting at home, though, have no idea what wind shear is, or if we do, how it affects tornado production.
What is Wind Shear
Wind shear, although it might sound complex, is a simple concept. Wind shear is merely the change in wind with height, in terms of wind direction and speed. I think that we all understand that the wind is generally stronger in the atmosphere over our heads than it is here on the ground, and if we think of the atmosphere in terms of the three dimensions that it has, it should not be surprising that the wind above us might also be blowing from a different direction than the wind at the ground. When that happens–the wind speed and direction vary with height–wind shear is occurring.
Wind Shear and Supercell Thunderstorms
This wind shear is an important part of the process in the development of a supercell thunderstorm, from which the vast majority of strong tornadoes form.
All thunderstorms are produced by a powerful updraft–a surge of air that rises from the ground into the upper levels of the atmosphere. When this updraft forms in an area where wind shear is present, the updraft is influenced by the different speed and direction of the wind above, which tilts the column of air in the updraft away from the vertical.
Rain’s Influence on Tornado Production
Needless to say, thunderstorms typically produce very heavy rain, and rain-cooled air is much heavier than the warm air of the updraft, so the rain-cooled air produces a compensating downdraft (what comes up must come down). This downdraft pushes the part of the rotating air that was forced in its direction by the stronger wind aloft downward, and the result is a horizontal column of rotating air.
That’s Not a Tornado!
I know what you're thinking: you've seen enough TLC or Discovery Channel shows to know that a horizontal column of air is NOT a tornado; you need a vertical column of air.
This Can Be a Tornado
You’re right, but remember the updraft that is driving the thunderstorm is still working, and it’s able to pull the horizontal, spinning column of air into the thunderstorm, resulting in a vertical column of spinning air.
(NOAA image showing vertical column of air in a supercell thunderstorm)
The result is a rotating thunderstorm capable of producing a tornado, and it would not be possible without wind shear.
(NOAA image showing tornado formation in supercell thunderstorm)
Using the Moon as a High-Fidelity Analogue Environment to Study Biological and Behavioural Effects of Long-Duration Space Exploration
Goswami, Nandu and Roma, Peter G. and De Boever, Patrick and Clément, Gilles and Hargens, Alan R. and Loeppky, Jack A. and Evans, Joyce M. and Stein, T. Peter and Blaber, Andrew P. and Van Loon, Jack J.W.A. and Mano, Tadaaki and Iwase, Satoshi and Reitz, Guenther and Hinghofer-Szalkay, Helmut G. (2012) Using the Moon as a High-Fidelity Analogue Environment to Study Biological and Behavioural Effects of Long-Duration Space Exploration. Planetary and Space Science, Epub ahead of print (in press). Elsevier. DOI: 10.1016/j.pss.2012.07.030.
Full text not available from this repository.
Due to its proximity to Earth, the Moon is a promising candidate for the location of an extra-terrestrial human colony. In addition to being a high-fidelity platform for research on reduced gravity, radiation risk, and circadian disruption, the Moon qualifies as an isolated, confined, and extreme (ICE) environment suitable as an analogue for studying the psychosocial effects of long-duration human space exploration missions and understanding these processes. In contrast, the various Antarctic research outposts such as Concordia and McMurdo serve as valuable platforms for studying biobehavioral adaptations to ICE environments, but are still Earth-bound, and thus lack the low-gravity and radiation risks of space. The International Space Station (ISS), itself now considered an analogue environment for long-duration missions, better approximates the habitable infrastructure limitations of a lunar colony than most Antarctic settlements in an altered gravity setting. However, the ISS is still protected against cosmic radiation by the Earth's magnetic field, which prevents high exposures due to solar particle events and reduces exposures to galactic cosmic radiation. On the Moon, the ICE conditions are intensified: radiation of all energies is present and capable of inducing performance degradation, alongside reduced gravity and lunar dust. The interaction of reduced gravity, radiation exposure, and ICE conditions may affect biology and behavior--and ultimately mission success--in ways the scientific and operational communities have yet to appreciate; therefore a long-term or permanent human presence on the Moon would ultimately provide invaluable high-fidelity opportunities for integrated multidisciplinary research and for preparing a manned mission to Mars.
Title: Using the Moon as a High-Fidelity Analogue Environment to Study Biological and Behavioural Effects of Long-Duration Space Exploration
Journal or Publication Title: Planetary and Space Science
In Open Access: No
In ISI Web of Science: Yes
Volume: Epub ahead of print (in press)
Keywords: Physiology, Orthostatic tolerance, Muscle deconditioning, Behavioural health, Psychosocial adaptation, Radiation, Lunar dust, Genes, Proteomics
HGF - Research field: Aeronautics, Space and Transport
HGF - Program: Space, Raumfahrt
HGF - Program Themes: W EW - Erforschung des Weltraums, R EW - Erforschung des Weltraums
DLR - Research area: Space, Raumfahrt
DLR - Program: W EW - Erforschung des Weltraums, R EW - Erforschung des Weltraums
DLR - Research theme (Project): W - Vorhaben MSL-Radiation (old), R - Vorhaben MSL-Radiation
Institutes and Institutions: Institute of Aerospace Medicine > Radiation Biology
Deposited By: Kerstin Kopp
Deposited On: 27 Aug 2012 08:05
Last Modified: 07 Feb 2013 20:40
Science -- Asher et al. 307 (5712): 1091:
We describe several fossils referable to Gomphos elkema from deposits close to the Paleocene-Eocene boundary at Tsagan Khushu, Mongolia. Gomphos shares a suite of cranioskeletal characters with extant rabbits, hares, and pikas but retains a primitive dentition and jaw compared to its modern relatives. Phylogenetic analysis supports the position of Gomphos as a stem lagomorph and excludes Cretaceous taxa from the crown radiation of placental mammals. Our results support the hypothesis that rodents and lagomorphs radiated during the Cenozoic and diverged from other placental mammals close to the Cretaceous-Tertiary boundary.
Lagomorphs are rabbits, hares, and pikas. This might be referred to as a "missing link" of the rodents. Why do we care? Most mammals are rodents, and this tells us about the evolution of the most successful group of mammals. Cool!
Basic Use

To make a new number, a simple initialization suffices:
var foo = 0; // or whatever number you want
foo = 1;  // foo = 1
foo += 2; // foo = 3 (the two gets added on)
foo -= 2; // foo = 1 (the two gets removed)
Number literals define the number value. In particular:

- They appear as a set of digits of varying length.
- Negative literal numbers have a minus sign before the set of digits.
- Floating point literal numbers contain one decimal point, and may optionally use the E notation with the character e.
- An integer literal may be prepended with "0" to indicate that a number is in base-8. (8 and 9 are not octal digits; if found, they cause the integer to be read in the normal base-10.)
- An integer literal may also be prefixed with "0x" to indicate a hexadecimal number.
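For instance, the following literals all define numbers (an illustrative sketch; the variable names are arbitrary):

var dec = 42;      // base-10 integer
var neg = -3.14;   // negative floating point
var avo = 6.02e23; // E notation: 6.02 x 10^23
var hex = 0x1F;    // hexadecimal, 31 in base-10
var oct = 017;     // legacy base-8, 15 in base-10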
The Math Object

Unlike strings, arrays, and dates, numbers aren't objects. The Math object provides numeric functions and constants as methods and properties, referenced using the dot operator in the usual way, for example:

var varOne = Math.ceil(8.5);
var varPi = Math.PI;
var sqrt3 = Math.sqrt(3);
Methods

random() Generates a pseudo-random number between 0 (inclusive) and 1 (exclusive).
var myInt = Math.random();
max(int1, int2) Returns the highest number from the two numbers passed as arguments.
var myInt = Math.max(8, 9); document.write(myInt); //9
min(int1, int2) Returns the lowest number from the two numbers passed as arguments.
var myInt = Math.min(8, 9); document.write(myInt); //8
floor(float) Returns the greatest integer less than or equal to the number passed as an argument.
var myInt = Math.floor(90.8); document.write(myInt); //90;
ceil(float) Returns the least integer greater than or equal to the number passed as an argument.
var myInt = Math.ceil(90.8); document.write(myInt); //91;
round(float) Returns the closest integer to the number passed as an argument.
var myInt = Math.round(90.8); document.write(myInt); //91;
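A common pattern combines random() with floor() to produce a pseudo-random integer in a range; a minimal sketch (the range 1..6 is just an example):

// Math.random() is in [0, 1), so scaling by 6 and flooring gives 0..5.
var roll = Math.floor(Math.random() * 6) + 1; // pseudo-random die roll, 1..6
document.write(roll);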
Data structures for manipulating (biological) sequences.
Generally supports both nucleotide and protein sequences; some functions, like revcompl, only make sense for nucleotides.
|A sequence is a header, sequence data itself, and optional quality data.
Sequences are type-tagged to identify them as nucleotide, amino acids,
or unknown type.
All items are lazy bytestrings. The Offset type can be used for indexing.
|A sequence consists of a header, the sequence data itself, and optional quality data.
The type parameter is a phantom type to separate nucleotide and amino acid sequences
|An offset, index, or length of a SeqData
|The basic data type used in Sequences
|Quality data is normally associated with nucleotide sequences
|Basic type for quality data. Range 0..255. Typical Phred output is in
the range 6..50, with 20 as the line in the sand separating good from bad.
|Quality data is a Qual vector, currently implemented as a ByteString.
|Read the character at the specified position in the sequence.
|Return sequence length.
|Return sequence label (first word of header)
|Return full header.
|Return the sequence data.
|Check whether the sequence has associated quality data.
|Return the quality data, or error if none exist. Use hasqual if in doubt.
|Adding information to header
|Modify the header by appending text, or by replacing
all but the sequence label (i.e. first word).
|Converting to and from [Char]
|Convert a String to SeqData
|Convert a SeqData to a String
Returns a sequence with all internal storage freshly copied and
with sequence and quality data present as a single chunk.
By freshly copying internal storage, defragSeq allows garbage
collection of the original data source whence the sequence was
read; otherwise, use of just a short sequence name can cause an
entire sequence file buffer to be retained.
By compacting sequence data into a single chunk, defragSeq avoids linear-time traversal of sequence chunks during random access into the sequence data.
|map over sequences, treating them as a sequence of (char,word8) pairs.
This will work on sequences without quality, as long as the function doesn't
try to examine it.
The current implementation is not very efficient.
|Phantom type functionality, unchecked conversion between sequence types
|Nucleotide sequences contain the alphabet [A,C,G,T].
IUPAC specifies an extended nucleotide alphabet with wildcards, but
it is not supported at this point.
|Complement a single character. I.e. identify the nucleotide it
can hybridize with. Note that for multiple nucleotides, you usually
want the reverse complement (see revcompl for that).
|Calculate the reverse complement.
This is only relevant for the nucleotide alphabet,
and it leaves other characters unmodified.
|Calculate the reverse complement for SeqData only.
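A minimal usage sketch (the module path Bio.Sequence.SeqData and the names fromStr, toStr, and revcompl' are assumptions inferred from this documentation, not guaranteed by it):

import Bio.Sequence.SeqData (fromStr, toStr, revcompl')

main :: IO ()
main = do
  -- Build SeqData from a plain String, reverse-complement it, convert back.
  let s = fromStr "GATTACA"
  putStrLn (toStr (revcompl' s)) -- prints "TGTAATC"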
|For type tagging sequences (protein sequences use Amino below)
|Proteins are chains of amino acids, represented by the IUPAC alphabet.
|Translate a nucleotide sequence into the corresponding protein
sequence. This works rather blindly, with no attempt to identify ORFs
or otherwise QA the result.
|Convert a sequence in IUPAC format to a list of amino acids.
|Convert a list of amino acids to a sequence in IUPAC format.
|Display a nicely formatted sequence.
|A simple function to display a sequence: we generate the sequence string and call putStrLn.
|Returns a properly formatted and probably highlighted string representation of a sequence. Highlighting is done using ANSI escape codes.
|Default type for sequences
The Javan rhinoceros is one of the most rare animals in the world and it was just spotted on video tape.
Seamen have long reported miraculous sightings of luminous, glowing seawater.
You know how animals are supposed to be able to sense disasters before they happen? Well, some believe it's a myth, though there are lots of reports of animals behaving strangely days before the tsunamis hit in Indonesia. Hundreds of thousands of ants were seen scurrying away from the beach. Elephants, dogs, and zoo animals were all reported to have been acting strangely. What can explain it? Learn more on this Moment of Science.
Chinese researchers have turned to the light absorbing properties of butterfly wings to significantly increase the efficiency of solar hydrogen cells, using biomimetics to copy the nanostructure that allows for incredible light and heat absorption.
Butterflies are known to use heat from the sun to warm themselves beyond what their bodies can provide, and this new research takes a page from their evolution to improve hydrogen fuel generation. Analyzing the wings of Papilio helenus, the researchers found scales that are described as having:
[...] Ridges running the length of the scale with very small holes on either side that opened up onto an underlying layer. The steep walls of the ridges help funnel light into the holes. The walls absorb longer wavelengths of light while allowing shorter wavelengths to reach a membrane below the scales. Using the images of the scales, the researchers created computer models to confirm this filtering effect. The nano-hole arrays change from wave guides for short wavelengths to barriers and absorbers for longer wavelengths, which act just like a high-pass filtering layer.
So, what does this have to do with fuel cells? Splitting water into hydrogen and oxygen takes energy, and is a drain on the amount you can get out of a cell. To split the water, the process uses a catalyst, and certain catalysts — say, titanium dioxide — function by exposure to light. The researchers synthesized a titanium dioxide catalyst using the pattern from the butterfly's wings, and paired it with platinum nanoparticles to make it more efficient at splitting water. The result? A 230% uptick in the amount of hydrogen produced. The structure of the butterfly's wing means that it's better at absorbing light — so who knows, you might also see the same technique on solar panels, too.
By Irene Klotz
CAPE CANAVERAL, Florida (Reuters) - Despite searing daytime temperatures, Mercury, the planet closest to the sun, has ice and frozen organic materials inside permanently shadowed craters in its north pole, NASA scientists said on Thursday.
Earth-based telescopes have been compiling evidence for ice on Mercury for 20 years, but the finding of organics was a surprise, say researchers with NASA's MESSENGER spacecraft, the first probe to orbit Mercury.
Both ice and organic materials, which are similar to tar or coal, were believed to have been delivered millions of years ago by comets and asteroids crashing into the planet.
"It's not something we expected to see, but then of course you realize it kind of makes sense because we see this in other places," such as icy bodies in the outer solar system and in the nuclei of comets, planetary scientist David Paige, with the University of California, Los Angeles, told Reuters.
Unlike NASA's Mars rover Curiosity, which will be sampling rocks and soils to look for organic materials directly, the MESSENGER probe bounces laser beams, counts particles, measures gamma rays and collects other data remotely from orbit.
The discoveries of ice and organics, painstakingly pieced together for more than a year, are based on computer models, laboratory experiments and deduction, not direct analysis.
"The explanation that seems to fit all the data is that it's organic material," said lead MESSENGER scientist Sean Solomon, with Columbia University in New York.
Added Paige, "It's not just a crazy hypothesis. No one has got anything else that seems to fit all the observations better."
Scientists believe the organic material, which is about twice as dark as most of Mercury's surface, was mixed in with comet- or asteroid-delivered ice eons ago.
The ice vaporized, then re-solidified where it was colder, leaving dark deposits on the surface. Radar imagery shows the dark patches subside at the coldest parts of the crater, where ice can exist on the surface.
The areas where the dark patches are seen are not cold enough for surface ice without the overlying layer of what is believed to be organics.
So remote was the idea of organics on Mercury that MESSENGER got a relatively easy pass by NASA's planetary protection protocols that were established to minimize the chance of contaminating any indigenous life-potential material with hitchhiking microbes from Earth.
Scientists don't believe Mercury is or was suitable for ancient life, but the discovery of organics on an inner planet of the solar system may shed light on how life got started on Earth and how life may evolve on planets beyond the solar system.
"Finding a place in the inner solar system where some of these same ingredients that may have led to life on Earth are preserved for us is really exciting," Paige said.
MESSENGER, which stands for Mercury Surface, Space Environment, Geochemistry and Ranging, is due to complete its two-year mission at Mercury in March.
Scientists are seeking NASA funding to continue operations for at least part of a third year. The probe will remain in Mercury's orbit until the planet's gravity eventually causes it to crash onto the surface.
Whether the discovery of organics now prompts NASA to select a crash zone rather than leave it up to chance remains to be seen. Microbes that may have hitched a ride on MESSENGER likely have been killed off by the harsh radiation environment at Mercury.
The research is published in this week's edition of the journal Science.
(Editing by Kevin Gray and Vicki Allen)
Jim Lake and Maria Rivera, at the University of California-Los Angeles (UCLA), report their finding in the Sept. 9 issue of the journal Nature.
Scientists refer to both bacteria and Archaea as "prokaryotes"--a cell type that has no distinct nucleus to contain the genetic material, DNA, and few other specialized components. More-complex cells, known as "eukaryotes," contain a well-defined nucleus as well as compartmentalized "organelles" that carry out metabolism and transport molecules throughout the cell. Yeast cells are some of the most-primitive eukaryotes, whereas the highly specialized cells of human beings and other mammals are among the most complex.
"A major unsolved question in biology has been where eukaryotes came from, where we came from," Lake said. "The answer is that we have two parents, and we now know who those parents were."
Further, he added, the results provide a new picture of evolutionary pathways. "At least 2 billion years ago, ancestors of these two diverse prokaryotic groups fused their genomes to form the first eukaryote, and in the processes two different branches of the tree of life were fused to form the ring of life," Lake said.
The work is part of an effort supported by the National Science Foundation--the federal agency that supports research and education across all disciplines of science and engineering--to re-examine historical schemes for classifying Earth's living creatures, a process that was once based on easily observable traits. Microbes, plants or animals were…
Contact: Leslie Fink
National Science Foundation
Refraction and Acceleration
Name: Christopher S.
Why is it that when light travels from a more dense to a less dense medium, its speed is higher? I've read answers to this question in your archives but, sadly, still don't get it. One answer (Jasjeet S Bagla) says that we must not ask the question, because light is massless and hence questions of acceleration don't make sense. It does, however, seem to be OK to talk about different speeds of light. If you start at one speed and end at a higher one, why is one not allowed to talk about acceleration? Bagla goes on to say that it depends on how the em fields behave in a given medium. It begs the question: what is it about, say, Perspex and air that makes light accelerate, oops, travel at different speeds? If you're dealing with the same ray of light, one is forced to speak of acceleration, no? What other explanation is there for final velocity > initial velocity? Arthur Smith mentioned a very small "evanescent" component that travels ahead at c. Where can I learn more about this? Sorry for the long question. I understand that F = ma, and if there is no m, you cannot talk about a; but, again, you have one velocity higher than another for the same thing. I need to know more than "that's just the way em fields are!"
An explanation that satisfies me relates to travel through an interactive medium. When light interacts with an atom, the photon of light is absorbed and then emitted. For a moment, the energy of the light is within the atom. This causes a slight delay. Light travels at the standard speed of light until interacting with another atom. It is absorbed and emitted, causing another slight delay. The average effect is taking more time to travel a meter through glass than through air. This works like a slower speed. An individual photon does not actually slow down. It gets delayed repeatedly by the atoms of the medium. A more dense medium has more atoms per meter to interact with.

Dr. Ken Mellendorf
Illinois Central College
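A compact way to state the net effect of these repeated delays (a standard textbook relation, added for reference rather than taken from the answer above): a medium of refractive index n has effective light speed

$$ v = \frac{c}{n}, \qquad n_{\text{glass}} \approx 1.5 \;\Rightarrow\; v \approx 2\times10^{8}\ \mathrm{m/s}, $$

even though each photon still travels at c between interactions.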
Congratulations on not being willing to accept "that is just the way em fields are!" The answer to your inquiry is not all that simple (my opinion), and I won't try to give it in the limited space allowed here, to say nothing of my own limitations of knowledge. Like so many "simple" physics questions, I find the most lucid, but accurate, explanation in Richard Feynman's "Lectures on Physics," which most libraries will have: Volume I, Chapters 31-1 through 31-6, which describe refraction, dispersion, and diffraction. The "answer" has to do with how matter alters the electric field of incident radiation, but I won't pretend to be able to do a better job than Feynman.
The answer is that you are not dealing with the same ray of light. In vacuum a photon just keeps going at the speed of light. In a medium, however, it interacts with the atoms, often being absorbed while bumping an atomic or molecular motion into a higher energy state. The excited atom/molecule then can jump to a lower energy state, emitting a photon while doing so. This can obviously make light appear to travel slower in a medium.

In detail, it is a very complicated question, requiring at least a graduate course in electromagnetism to begin to understand. Why, for example, do the emitted photons tend to travel in the same direction?

Best, Richard J. Plano
Update: June 2012
Attempts to relay mail by issuing a predefined combination of SMTP commands. The goal of this script is to tell if a SMTP server is vulnerable to mail relaying.
An SMTP server that works as an open relay, is a email server that does not verify if the user is authorised to send email from the specified email address. Therefore, users would be able to send email originating from any third-party email address that they want.
The checks are done based on combinations of MAIL FROM and RCPT TO commands. The list is hardcoded in the source file. The script will output all the working combinations that the server allows if nmap is in verbose mode; otherwise, the script will print the number of successful tests. The script will not output if the server requires authentication.
If debug is enabled and an error occurs while testing the target host, the error will be printed with the list of any combinations that were found prior to the error.
smtp-open-relay.ip: Use this to change the IP address to be used (default is the target IP address)
smtp-open-relay.to: Define the destination email address to be used (without the domain, default is relaytest)
smtp.domain or smtp-open-relay.domain: Define the domain to be used in the anti-spam tests and EHLO command (default is nmap.scanme.org)
smtp-open-relay.from: Define the source email address to be used (without the domain, default is antispam)
smbdomain, smbhash, smbnoguest, smbpassword, smbtype, smbusername: See the documentation for the smbauth library.
nmap --script smtp-open-relay.nse [--script-args smtp-open-relay.domain=<domain>,smtp-open-relay.ip=<address>,...] -p 25,465,587 <host>
Host script results:
| smtp-open-relay: Server is an open relay (1/16 tests)
|_MAIL FROM:<email@example.com> -> RCPT TO:<firstname.lastname@example.org>
Author: Arturo 'Buanzo' Busleiman
License: Same as Nmap--See http://nmap.org/book/man-legal.html
Giant Manta Ray
Giant Manta Ray Manta birostris
Divers often describe the experience of swimming beneath a manta ray as like being overtaken by a huge flying saucer. This ray is the biggest in the world, but like the biggest shark, the whale shark, it is a harmless consumer of plankton.
When feeding, it swims along with its cavernous mouth wide open, beating its huge triangular wings slowly up and down. On either side of the mouth, which is at the front of the head, there are two long paddles, called cephalic lobes. These lobes help funnel plankton into the mouth. A stingerless whiplike tail trails behind.
Giant manta rays tend to be found over high points like seamounts where currents bring plankton up to them. Small fish called remoras often travel attached to these giants, feeding on food scraps along the way. Giant mantas are ovoviviparous, so the eggs develop and hatch inside the mother. These rays can leap high out of the water, to escape predators, clean their skin of parasites or communicate.
Topics covered: Ideal solutions
Instructor/speaker: Moungi Bawendi, Keith Nelson
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: So. In the meantime, you've started looking at two phase equilibrium. So now we're starting to look at mixtures. And so now we have more than one constituent. And we have more than one phase present. Right? So you've started to look at things that look like this, where you've got, let's say, two components. Both in the gas phase. And now to try to figure out what the phase equilibria look like. Of course it's now a little bit more complicated than what you went through before, where you can get pressure temperature phase diagrams with just a single component. Now we want to worry about what's the composition. Of each of the components. In each of the phases. And what's the temperature and the pressure. Total and partial pressures and all of that. So you can really figure out everything about both phases. And there are all sorts of important reasons to do that, obviously lots of chemistry happens in liquid mixtures. Some in gas mixtures. Some where they're in equilibrium. All sorts of chemical processes. Distillation, for example, takes advantage of the properties of liquid and gas mixtures. Where one of them might be richer, will be richer, in the more volatile of the components. That can be used as a basis for purification. You mix ethanol and water together so you've got a liquid with a certain composition of each. The gas is going to be richer in the more volatile of the two, the ethanol. So in a distillation, where you put things up in the gas, more of the ethanol comes up. You could then collect that gas, right? And re-condense it, and make a new liquid. Which is much richer in ethanol than the original liquid was. Then you could put some of that up into the gas phase. Where it will be still richer in ethanol. And then you could collect that and repeat the process. So the point is that properties of liquid gas, two-component or multi-component mixtures like this can be exploited. Basically, the different volatilities of the different components can be exploited for things like purification.
Also if you want to calculate chemical equilibria in the liquid and gas phase, of course, now you've seen chemical equilibrium, so the amount of reaction depends on the composition. So of course if you want reactions to go, then this also can be exploited by looking at which phase might be richer in one reactant or another. And thereby pushing the equilibrium toward one direction or the other. OK. So. we've got some total temperature and pressure. And we have compositions. So in the gas phase, we've got mole fractions yA and yB. In the liquid phase we've got mole fractions xA and xB. So that's our system. One of the things that you established last time is that, so there are the total number of variables including the temperature and the pressure. And let's say the mole fraction of A in each of the liquid and gas phases, right? But then there are constraints. Because the chemical potentials have to be equal, right? Chemical potential of A has to be equal in the liquid and gas. Same with B. Those two constraints reduce the number of independent variables. So there'll be two in this case rather than four independent variables. If you control those, then everything else will follow. What that means is if you've got a, if you control, if you fix the temperature and the total pressure, everything else should be determinable. No more free variables.
And then, what you saw is that in simple or ideal liquid mixtures, a result called Raoult's law would hold. Which just says that the partial pressure of A is equal to the mole fraction of A in the liquid times the pressure of pure A over the liquid. And so what this gives you is a diagram that looks like this. If we plot this versus xB, this is mole fraction of B in the liquid going from zero to one. Then we could construct a diagram of this sort. So this is the total pressure of A and B. The partial pressures are given by these lines. So this is our pA star and pB star. The pressures over the pure liquid A and B at the limits of mole fraction of B being zero and one. So in this situation, for example, A is the more volatile of the components. So it's partial pressure over its pure liquid. At this temperature. Is higher than the partial pressure of B over its pure liquid. A would be the ethanol, for example and B the water in that mixture. OK. Then you started looking at both the gas and the liquid phase in the same diagram. So this is the mole fraction of the liquid. If you look and see, well, OK now we should be able to determine the mole fraction in the gas as well. Again, if we note total temperature and pressure, everything else must follow.
And so, you saw this worked out. Relation between p and yA, for example. The result was p equals pA star times pB star, over pA star plus the quantity pB star minus pA star, times yA. And the point here is that unlike this case, where you have a linear relationship, the relationship between the pressure and the gas mole fraction isn't linear. We can still plot it, of course. So if we do that, then we end up with a diagram that looks like the following. Now I'm going to keep both mole fractions, xB and yB, I've got some total pressure. I still have my linear relationship. And then I have a non-linear relationship between the pressure and the mole fraction in the gas phase. So let's just fill this in. Here is pA star still. Here's pB star. Of course, at the limits they're still, both mole fractions they're zero and one.
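Restating the two spoken formulas symbolically (the same content as above, just written out; pA* and pB* are the vapor pressures over the pure liquids):

$$ p = p_A^{*} + (p_B^{*} - p_A^{*})\,x_B \qquad \text{(liquid line: linear in } x_B\text{)} $$

$$ p = \frac{p_A^{*}\,p_B^{*}}{p_A^{*} + (p_B^{*} - p_A^{*})\,y_A} \qquad \text{(gas curve: nonlinear in } y_A\text{)} $$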
OK. I believe this is this is where you ended up at the end of the last lecture. But it's probably not so clear exactly how you read something like this. And use it. It's extremely useful. You just have to kind of learn how to follow what happens in a diagram like this. And that's what I want to spend some of today doing. Is just, walking through what's happening physically, with a container with a mixture of the two. And how does that correspond to what gets read off the diagram under different conditions. So. Let's just start somewhere on a phase diagram like this.
Let's start up here at some point one, so we're in the pure - well, not pure, you're in the all liquid phase. It's still a mixture. It's not a pure substance. pA star, pB star. There's the gas phase. So, if we start at one, and now there's some total pressure. And now we're going to reduce it. What happens? We start with a pure - with an all-liquid mixture. No gas. And now we're going to bring down the pressure. Allowing some of the liquid to go up into the gas phase. So, we can do that. And once we reach point two, then we find a coexistence curve. Now the liquid and gas are going to coexist. So this is the liquid phase. And that means that this must be xB. And it's xB at one, but it's also xB at two, and I want to emphasize that. So let's put our pressure for two. And if we go over here, this is telling us about the mole fraction in the gas phase. That's what these curves are, remember. So this is the one that's showing us the mole fraction in the liquid phase. This nonlinear one in the gas phase. So that means just reading off it, this is xB, that's the liquid mole fraction. Here's yB. The gas mole fraction. They're not the same, right, because of course the components have different volatility. A's more volatile.
So that means that the mole fraction of B in the liquid phase is higher than the mole fraction of B in the gas phase. Because A is the more volatile component. So more, relatively more, of A, the mole fraction of A is going to be higher up in the gas phase. Which means the mole fraction of B is lower in the gas phase. So, yB less than xB if A is more volatile. OK, so now what's happening physically? Well, we started at a point where we only had the liquid present. So at our initial pressure, we just have all liquid. There's some xB at one. That's all there is, there isn't any gas yet. Now, what happened here? Well, now we lowered the pressure. So you could imagine, well, we made the box bigger. Now, if the liquid was under pressure, being squeezed by the box, right then you could make the box a little bit bigger. And there's still no gas. That's moving down like this. But then you get to a point where there's just barely any pressure on top of the liquid. And then you keep expanding the box. Now some gas is going to form.
So now we're going to go to our case two. We've got a bigger box. And now, right around where this was, this is going to be liquid. And there's gas up here. So up here is yB at pressure two. Here's xB at pressure two. Liquid and gas. So that's where we are at point two here.
Now, what happens if we keep going? Let's lower the pressure some more. Well, we can lower it and do this. But really if we want to see what's happening in each of the phases, we have to stay on the coexistence curves. Those are what tell us what the pressures are. What the partial pressures are going to be in each of the phases. In each of the two, in the liquid and the gas phases. So let's say we lower the pressure a little more. What's going to happen is, then we'll end up somewhere over here. In the liquid, and that'll correspond to something over here in the gas. So here's three.
So now we're going to have, that's going to be xB at pressure three. And over here is going to be yB at pressure three. And all we've done, of course, is we've just expanded this further. So now we've got a still taller box. And the liquid is going to be a little lower because some of it has evaporated, formed the gas phase. So here's xB at three. Here's yB at three, here's our gas phase. Now we could decrease even further. And this is the sort of thing that you maybe can't do in real life. But I can do on a blackboard. I'm going to give myself more room on this curve, to finish this illustration. There. Beautiful. So now we can lower a little bit further, and what I want to illustrate is, if we keep going down, eventually we get to a pressure where now if we look over in the gas phase, we're at the same pressure, mole fraction that we had originally in the liquid phase. So let's make four even lower pressure. What does that mean? What it means is, we're running out of liquid. So what's supposed to happen is A is the more volatile component. So as we start opening up some room for gas to form, you get more of A in the gas phase. But of course, and the liquid is richer in B. But of course, eventually you run out of liquid. You make the box pretty big, and you run out, or you have the very last drop of liquid. So what's the mole fraction of B in the gas phase? It has to be the same as what it started in in the liquid phase. Because after all the total number of moles of A and B hasn't changed any. So if you take them all from the liquid and put them all up into the gas phase, it must be the same. So yB of four. Once you just have the last drop. So then yB of four is basically equal to xB of one. Because everything's now up in the gas phase. So in principle, there's still a tiny, tiny bit of xB at pressure four.
Well, we could keep lowering the pressure. We could make the box a little bigger. Then the very last of the liquid is going to be gone. And what'll happen then is, we're all here. There's no more liquid. We're not going down on the coexistence curve any more. We don't have a liquid gas coexistence any more. We just have a gas phase. Of course, we can continue to lower the pressure. And then what we're doing is just going down here. So there's five. And five is the same as this only bigger. And so forth.
OK, any questions about how this works? It's really important to just gain facility in reading these things and seeing, OK, what is it that this is telling you. And you can see it's not complicated to do it, but it takes a little bit of practice. OK.
Now, of course, we could do exactly the same thing starting from the gas phase. And raising the pressure. And although you may anticipate that it's kind of pedantic, I really do want to illustrate something by it. So let me just imagine that we're going to do that. Let's start all in the gas phase. Up here's the liquid. pA star, pB star. And now let's start somewhere here. So we're down somewhere in the gas phase with some composition. So it's the same story, except now we're starting here. It's all gas. And we're going to start squeezing. We're increasing the pressure. And eventually here's one, will reach two, so of course here's our yB. We started with all gas, no liquid. So this is yB of one. It's the same as yB of two, I'm just raising the pressure enough to just reach the coexistence curve. And of course, out here tells us xB of two, right? So what is it saying? We've squeezed and started to form some liquid. And the liquid is richer in component B. Maybe it's ethanol water again. And we squeeze, and now we've got more water in the liquid phase than in the gas phase. Because water's the less volatile component. It's what's going to condense first.
So the liquid is rich in the less volatile of the components. Now, obviously, we can continue in doing exactly the reverse of what I showed you. But all I want to really illustrate is, this is a strategy for purification of the less volatile component. Once you've done this, well now you've got some liquid. Now you could collect that liquid in a separate vessel.
So let's collect the liquid mixture with xB of two. So it's got some mole fraction of B. So we've purified that. But now we're going to start, we've got pure liquid. Now let's make the vessel big. So it all goes into the gas phase. Then lower p. All gas. So we start with yB of three, which equals xB of two. In other words, it's the same mole fraction. So let's reconstruct that. So here's p of two. And now we're going to go to some new pressure. And the point is, now we're going to start, since the mole fraction in the gas phase that we're starting from is the same number as this was. So it's around here somewhere. That's yB of three equals xB of two. And we're down here. In other words, all we've done is make the container big enough so the pressure's low and it's all in the gas phase. That's all we have, is the gas. But the composition is whatever the composition is that we extracted here from the liquid. So this xB, which is the liquid mole fraction, is now yB, the gas mole fraction. Of course, the pressure is different. Lower than it was before.
Great. Now let's increase. So here's three. And now let's increase the pressure to four. And of course what happens, now we've got coexistence. So here's liquid. Here's gas. So, now we're over here again. There's xB at pressure four. Purer still in component B. We can repeat the same procedure. Collect it. All liquid, put it in a new vessel. Expand it, lower the pressure, all goes back into the gas phase. Do it all again. And the point is, what you're doing is walking along here. Here to here. Then you start down here, and go from here to here. From here to here. And you can purify. Now, of course, the optimal procedure, you have to think a little bit. Because if you really do precisely what I said, you're going to have a mighty little bit of material each time you do that. So yes, the little bit you've gotten at the end is going to be really pure, but there's not a whole lot of it. Because, remember, what we said is let's raise the pressure until we just start being on the coexistence curve. So we've still got mostly gas. Little bit of liquid. Now, I could raise the pressure a bit higher. So that in the interest of having more of the liquid, when I do that, though, the liquid that I have at this higher pressure won't be as enriched as it was down here. Now, I could still do this procedure. I could just do more of them. So it takes a little bit of judiciousness to figure out how to optimize that. In the end, though, you can continue to walk your way down through these coexistence curves and purify repeatedly the component B, the less volatile of them, and end up with some amount of it. And there'll be some balance between the amount that you feel like you need to end up with and how pure you need it to be. Any questions about how this works?
So purification of less volatile components. Now, how much of each of these quantities in each of these phases? So, pertinent to this discussion, of course we need to know that. If you want to try to optimize a procedure like that, of course it's going to be crucial to be able to understand and calculate for any pressure that you decide to raise to, just how many moles do you have in each of the phases? So at the end of the day, you can figure out, OK, now when I reach a certain degree of purification, here's how much of the stuff I end up with. Well, that turns out to be reasonably straightforward to do. And so what I'll go through is a simple mathematical derivation. And it turns out that it allows you to just read right off the diagram how much of each material you're going to end up with.
So, here's what happens. This is something called the lever rule. How much of each component is there in each phase? So let's consider a case like this. Let me draw yet once again, just to get the numbering consistent. With how we'll treat this. So we're going to start here. And I want to draw it right in the middle, so I've got plenty of room. And we're going to go up to some pressure. And somewhere out there, now I can go to my coexistence curves. Liquid. And gas. And I can read off my values. So this is the liquid xB. So I'm going to go up to some point two, here's xB of two. Here's yB of two. Great. Now let's get these written in.
So let's just define terms a little bit. nA, nB. Or just our total number of moles. ng and n liquid, of course, total number of moles. In the gas and liquid phases. So let's just do the calculation for each of these two cases. We'll start with one. That's the easier case. Because then we have only the gas. So at one, all gas. It says pure gas in the notes, but of course that isn't the pure gas. It's the mixture of the two components. So. How many moles of A? Well it's the mole fraction of A in the gas. Times the total number of moles in the gas. Let me put one in here. Just to be clear. And since we have all gas, the number of moles in the gas is just the total number of moles. So this is just yA at one times n total. Let's just write that in. And of course n total is equal to nA plus nB.
So now let's look at condition two. Now we have to look a little more carefully. Because we have a liquid gas mixture. So nA is equal to yA at pressure two. Times the number of moles of gas at pressure two. Plus xA, at pressure two, times the number of moles of liquid at pressure two.
Now, of course, these things have to be equal. The total number of moles of A didn't change, right? So those are equal. Then yA of two times ng of two. Plus xA of two times n liquid of two, that's equal to yA of one times n total. Which is of course equal to yA of one times n gas at two plus n liquid at two. I suppose I could be, add that equality. Of course, it's an obvious one. But let me do it anyway. The total number of moles is equal to nA plus nB. But it's also equal to n liquid plus n gas. And that's all I'm taking advantage of here.
And now I'm just going to rearrange the terms. So I'm going to write yA at one minus yA at two, times ng at two, is equal to, and I'm going to take the other terms, the xA term: xA of two minus yA of one, times n liquid at two. So I've just rearranged the terms. I think I omitted something here. yA of one times ng. No, I forgot a bracket, is what I did. yA of one there. And I did this because now what I want to do is look at the ratio of liquid to gas at pressure two. So, the ratio of, I'll put it, gas to liquid, that's ng of two over n liquid at two. And that's just equal to xA of two minus yA at one, divided by yA at one minus yA at two.
So what does it mean? It's the ratio of these lever arms. That's what it's telling me. I can look, so I raise the pressure up to two. And so here's xB at two, here's yB at two. And I'm here somewhere. And this little amount and this little amount, that's that difference. And it's just telling me that ratio of those arms is the ratio of the total number of moles of gas to liquid. And that's great. Because now when I go back to the problem that we were just looking at, where I say, well I'm going to purify the less volatile component by raising the pressure until I'm at coexistence starting in the gas phase. Raise the pressure, I've got some liquid. But I also want some finite amount of liquid. But I don't want to just, when I get the very, very first drop of liquid now collected, of course it's enriched in the less volatile component. But there may be a minuscule amount, right? So I'll raise the pressure a bit more. I'll go up in pressure. And now, of course, when I do that the amount of enrichment of the liquid isn't as big as it was if I just raised it up enough to barely have any liquid. Then I'd be out here. But I've got more material in the liquid phase to collect. And that's what this allows me to calculate. Is how much do I get in the end. So it's very handy. You can also see, if I go all the way to the limit where the mole fraction in the liquid at the end is equal to what it was in the gas when I started, what that says is that there's no more gas left any more. In other words, these two things are equal. If I go all the way to the point where I've got all the, this is the amount I started with, in the pure gas phase, now I keep raising it all the way. Until I've got the same mole fraction in the liquid. Of course, we know what that really means. That means that I've gone all the way from pure gas to pure liquid. And the mole fraction in that case has to be the same. And what this is just telling us mathematically is, when that happens this is zero. That means I don't have any gas left. Yeah.
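As a numerical illustration of the lever rule (our own sketch, with made-up mole fractions):

# Lever rule: n_gas / n_liquid = (xA2 - yA1) / (yA1 - yA2), the ratio of
# the two lever arms read off the phase diagram at pressure p2.
yA_1 = 0.60    # overall composition (all gas at the starting pressure)
yA_2 = 0.70    # gas-phase mole fraction of A at pressure p2
xA_2 = 0.40    # liquid-phase mole fraction of A at pressure p2
n_total = 1.0  # total moles in the vessel

ratio = (xA_2 - yA_1) / (yA_1 - yA_2)   # = 2.0: twice as many moles of gas
n_gas = n_total * ratio / (1 + ratio)   # about 0.667 mol of gas
n_liquid = n_total / (1 + ratio)        # about 0.333 mol of liquid
print(n_gas, n_liquid)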
PROFESSOR: No. Because, so it's the mole fraction in the gas phase. But you've started with some amount that it's only going to go down from there.
PROFESSOR: Yeah. Yeah. Any other questions? OK.
Well, now what I want to do is just put up a slightly different kind of diagram, but different in an important way. Namely, instead of showing the mole fractions as a function of the pressure. And I haven't written it in, but all of these are at constant temperature, right? I've assumed the temperature is constant in all these things. Now let's consider the other possibility, the other simple possibility, which is, let's hold the pressure constant and vary the temperature. Of course, you know in the lab, that's usually what's easiest to do. Now, unfortunately, the arithmetic gets more complicated. It's not monumentally complicated, but here in this case, where you have one linear relationship, which is very convenient. From Raoult's law. And then you have one non-linear relationship there for the mole fraction of the gas. In the case of temperature, they're both, neither one is linear. Nevertheless, we can just sketch what the diagram looks like. And of course it's very useful to do that, and see how to read off it. And I should say the derivation of the curves isn't particularly complicated. It's not particularly more complicated than what I think you saw last time to derive this. There's no complicated math involved. But the point is, the derivation doesn't yield a linear relationship for either the gas or the liquid part of the coexistence curve.
OK, so we're going to look at temperature and mole fraction phase diagrams. Again, a little more complicated mathematically but more practical in real use. And this is T. And here is the, sort of, form that these things take. So again, neither one is linear. Up here, now, of course if you raise the temperature, that's where you end up with gas. If you lower the temperature, you condense and get the liquid. So, this is TA star. TB star. So now I want to stick with A as the more volatile component. At constant temperature, that meant that pA star is bigger than pB star. In other words, the vapor pressure over pure liquid A is higher than the vapor pressure over pure liquid B. Similarly, now I've got constant pressure and really what I'm looking at, let's say I'm at the limit where I've got the pure liquid. Or the pure A. And now I'm going to, let's say, raise the temperature until I'm at the liquid-gas equilibrium. That's just the boiling point. So if A is the more volatile component, it has the lower boiling point. And that's what this reflects. So higher pA star corresponds to lower TA star, which is just the boiling point of pure A.
So, this is called the bubble line. That's called the dew line. All that means is, let's say I'm at high temperature. I've got all gas. Right now, no coexistence, no liquid yet. And I start to cool things off. Just to where I just barely start to get liquid. What you see that as is, dew starts forming. A little bit of condensation. If you're outside, it means on the grass a little bit of dew is forming. Similarly, if I start at low temperature, all liquid, now I start raising the temperature until I just start to boil. I just start to see the first bubbles forming. And so that's why these things have those names.
So now let's just follow along what happens when I do the same sort of thing that I illustrated there. I want to start at one point in this phase diagram. And then start changing the conditions. So let's start here. So I'm going to start all in the liquid phase. That is, the temperature is low. Here's xB. And my original temperature. Now I'm going to raise it. So if I raise it a little bit, I reach a point at which I first start to boil. Start to find some gas above the liquid. And if I look right here, that'll be my composition. Let me raise it a little farther, now that we've already seen the lever rule and so forth. I'll raise it up to here. And that means that out here, I suppose I should do here.
So, here is the liquid mole fraction at temperature two. xB at temperature two. This is yB at temperature two. The gas mole fraction. So as you should expect, what's going to happen here is that the gas is going to be lower in B. And that means that the mole fraction of A must be higher in the gas phase. That is, yA, which is one minus yB, is higher in the gas phase than xA, which is one minus xB. In other words, the more volatile component is enriched in the gas phase.
Now, what does that mean? That means I could follow the same sort of procedure that I indicated before when we looked at the pressure mole fraction phase diagram. Namely, I could do this and now I could take the gas phase. Which has less of B. It has more of A. And I can collect it. And then I can reduce the temperature. So it liquefies. So I can condense it, in other words. So now I'm going to start with, let's say I lower the temperature enough so I've got basically pure liquid. But its composition is the same as the gas here. Because of course that's what that liquid is formed from. I collected the gas and separated it. So now I could start all over again. Except instead of being here, I'll be down here. And then I can raise the temperature again. To some place where I choose. I could choose here, and go all the way to here. A great amount of enrichment. But I know from the lever rule that if I do that, I'm going to have precious little material over here. So I might prefer to raise the temperature a little more. Still get a substantial amount of enrichment. And now I've got, in the gas phase, further enrichment in component A. And again I can collect the gas. Condense it. Now I'm out here somewhere, I've got all liquid and I'll raise the temperature again. And I can again keep walking my way over.
And that's what happens during an ordinary distillation. Each step of the distillation walks along in the phase diagram at some selected point. And of course what you're doing is, you're always condensing the gas. And starting with fresh liquid that now is enriched in the more volatile of the components. So of course if you're really purifying, say, ethanol from an ethanol-water mixture, that's how you do it. Ethanol is the more volatile component. So a still is set up. It will boil the stuff and collect the gas and condense it. And boil it again, and so forth. And the whole thing can be set up in a very efficient way. So you have essentially continuous distillation. Where you have a whole sequence of collection and condensation and reheating and so forth events. So then, in a practical way, it's possible to walk quite far along the distillation, the coexistence curve, and distill to really a high degree of purification. Any questions about how that works? OK.
I'll leave till next time the discussion of the chemical potentials. But what we'll do, just to foreshadow a little bit, what I'll do at the beginning of the next lecture is what's at the end of your notes here. Which is just to say OK, now if we look at Raoult's law, it's straightforward to say what is the chemical potential for each of the substances in the liquid and the gas phase. Of course, it has to be equal. Given that, that's for an ideal solution. We can gain some insight from that. And then look at real solutions, non-ideal solutions, and understand a lot of their behavior as well. Just from starting from our understanding of what the chemical potential does even in a simple ideal mixture. So we'll look at the chemical potentials. And then we'll look at non-ideal solution mixtures next time. See you then. | <urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857> | 3.921875 | 7,164 | Audio Transcript | Science & Tech. | 77.794819 | 16 |
We had a running joke in science ed that kids get so overexposed to discrepant events involving density and air pressure that they tend to try to explain anything and everything they don't understand with respect to science in terms of those two concepts. Why do we have seasons? Ummm... air pressure? Why did Dr. Smith use that particular research design? Ummm... density?
I think we need another catch-all explanation. I suggest index of refraction.
To simplify greatly, index of refraction describes the amount of bending a light ray will undergo as it passes from one medium to another (it's also related to the velocity of light in both media, but I do want to keep this simple). If the two media have significantly different indices, light passing from one to the other at an angle (not perpendicularly, in which case there is no bending) will be bent more than if indices of the two are similar. The first four data points are from Hyperphysics, the final one from Wikipedia... glass has a wide range of compositions and thus indices of refraction.
Water at 20 C: 1.33
Typical soda-lime glass: close to 1.5
Since glycerine and glass have similar IoR, light passing from one to the other isn't bent; as long as both are transparent and similarly colored, each will be effectively "invisible" against the other.
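That claim is easy to check with Snell's law; here is a small sketch (ours; the 1.47 index for glycerine is our added assumption, consistent with the similar-IoR point above):

import math

# Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
def refraction_angle(n1, n2, incidence_deg):
    """Angle (degrees) of the refracted ray entering medium 2 from medium 1."""
    return math.degrees(math.asin(n1 * math.sin(math.radians(incidence_deg)) / n2))

print(refraction_angle(1.5, 1.33, 30.0))  # glass -> water: ~34.3 deg, a visible bend
print(refraction_angle(1.5, 1.47, 30.0))  # glass -> glycerine: ~30.7 deg, barely bends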
So, why does it rain? Umm... index of refraction?
| <urn:uuid:7eeb7ef3-3122-42f0-86c8-01da8f3d7396> | 3.1875 | 313 | Comment Section | Science & Tech. | 62.924413 | 17 |
Gallium metal is silver-white and melts at approximately body temperature (Wikipedia image).

Atomic Number: 31          Atomic Radius: 187 pm (Van der Waals)
Atomic Symbol: Ga          Melting Point: 29.76 °C
Atomic Weight: 69.72       Boiling Point: 2204 °C
Electron Configuration: [Ar]4s²3d¹⁰4p¹          Oxidation States: 3
From the Latin word Gallia, France; also from Latin, gallus, a translation of "Lecoq," a cock. Predicted and described by Mendeleev as ekaaluminum, and discovered spectroscopically by Lecoq de Boisbaudran in 1875, who in the same year obtained the free metal by electrolysis of a solution of the hydroxide in KOH.
Gallium is often found as a trace element in diaspore, sphalerite, germanite, bauxite, and coal. Some flue dusts from burning coal have been shown to contain as much as 1.5 percent gallium.
It is one of four metals -- the others being mercury, cesium, and rubidium -- that can be liquid near room temperature and, thus, can be used in high-temperature thermometers. It has one of the longest liquid ranges of any metal and has a low vapor pressure even at high temperatures.
There is a strong tendency for gallium to supercool below its freezing point. Therefore, seeding may be necessary to initiate solidification.
Ultra-pure gallium has a beautiful, silvery appearance, and the solid metal exhibits a conchoidal fracture similar to glass. The metal expands 3.1 percent on solidifying; therefore, it should not be stored in glass or metal containers, because they may break as the metal solidifies.
High-purity gallium is attacked only slowly by mineral acids.
Gallium wets glass or porcelain and forms a brilliant mirror when it is painted on glass. It is widely used in doping semiconductors and producing solid-state devices such as transistors.
Magnesium gallate containing divalent impurities, such as Mn+2, is finding use in commercial ultraviolet-activated powder phosphors. Gallium arsenide is capable of converting electricity directly into coherent light. Gallium readily alloys with most metals, and has been used as a component in low-melting alloys.
Its toxicity appears to be of a low order, but it should be handled with care until more data are available. | <urn:uuid:317a0fc8-b8f1-4147-a9ac-f69a1f176048> | 3.46875 | 546 | Knowledge Article | Science & Tech. | 38.890701 | 18 |
If superparticles were to exist the decay would happen far more often. This test is one of the "golden" tests for supersymmetry and it is one that on the face of it this hugely popular theory among physicists has failed.
Prof Val Gibson, leader of the Cambridge LHCb team, said that the new result was "putting our supersymmetry theory colleagues in a spin".
The results are in fact completely in line with what one would expect from the Standard Model. There is already concern that LHCb's sister detectors might have been expected to detect superparticles by now, yet none have been found so far. This certainly does not rule out SUSY, but it is getting to the same level as cold fusion if a positive experimental result does not come soon. | <urn:uuid:72def0d3-296d-49d8-bdf5-73c351dd6672> | 2.6875 | 163 | Personal Blog | Science & Tech. | 46.709545 | 19 |
Major Section: BREAK-REWRITE
Example:
(brr@ :target)      ; the term being rewritten
(brr@ :unify-subst) ; the unifying substitution

General Form:
(brr@ :symbol)

where :symbol is one of the following keywords. Those marked with * probably require an implementor's knowledge of the system to use effectively. They are supported but not well documented. More is said on this topic following the table.
:symbol          (brr@ :symbol)
-------          --------------
:target the term to be rewritten. This term is an instantiation of the left-hand side of the conclusion of the rewrite-rule being broken. This term is in translated form! Thus, if you are expecting (equal x nil) -- and your expectation is almost right -- you will see (equal x 'nil); similarly, instead of (cadr a) you will see (car (cdr a)). In translated forms, all constants are quoted (even nil, t, strings and numbers) and all macros are expanded.
:unify-subst the substitution that, when applied to :target, produces the left-hand side of the rule being broken. This substitution is an alist pairing variable symbols to translated (!) terms.
:wonp t or nil indicating whether the rune was successfully applied. (brr@ :wonp) returns nil if evaluated before :EVALing the rule.
:rewritten-rhs the result of successfully applying the rule or else nil if (brr@ :wonp) is nil. The result of successfully applying the rule is always a translated (!) term and is never nil.
:failure-reason some non-nil lisp object indicating why the rule was not applied or else nil. Before the rule is :EVALed, (brr@ :failure-reason) is nil. After :EVALing the rule, (brr@ :failure-reason) is nil if (brr@ :wonp) is t. Rather than document the various non-nil objects returned as the failure reason, we encourage you simply to evaluate (brr@ :failure-reason) in the contexts of interest. Alternatively, study the ACL2 function tilde-@-failure-reason-phrase.
:lemma * the rewrite rule being broken. For example, (access rewrite-rule (brr@ :lemma) :lhs) will return the left-hand side of the conclusion of the rule.
:type-alist * a display of the type-alist governing :target. Elements on the displayed list are of the form (term type), where term is a term and type describes information about term assumed to hold in the current context. The type-alist may be used to determine the current assumptions, e.g., whether A is a CONSP.
:ancestors * a stack of frames indicating the backchain history of the current context. The theorem prover is in the process of trying to establish each hypothesis in this stack. Thus, the negation of each hypothesis can be assumed false. Each frame also records the rules on behalf of which this backchaining is being done and the weight (function symbol count) of the hypothesis. All three items are involved in the heuristic for preventing infinite backchaining. Exception: Some frames are ``binding hypotheses'' (equal var term) or (equiv var (double-rewrite term)) that bind variable var to the result of rewriting term.
:gstack * the current goal stack. The gstack is maintained by rewrite and is the data structure printed as the current ``path.'' Thus, any information derivable from the :path brr command is derivable from gstack. For example, from gstack one might determine that the current term is the second hypothesis of a certain rewrite rule.
In general, brr@-expressions are used in break conditions, the expressions that determine whether interactive breaks occur when monitored runes are applied. See monitor. For example, you might want to break only those attempts in which one particular term is being rewritten, or only those attempts in which the binding for the variable a is known to be a consp. Such conditions can be expressed using ACL2 system functions and the information provided by brr@. Unfortunately, digging some of this information out of the internal data structures may be awkward or may, at least, require intimate knowledge of the system functions. But since conditional expressions may employ arbitrary functions and macros, we anticipate that a set of convenient primitives will gradually evolve within the ACL2 community. It is to encourage this evolution that brr@ provides access to these internal data structures.
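For instance, a conditional break along these lines might look like the following sketch (the rule name is hypothetical, and ffnnamep is a system function that tests whether a function symbol occurs in a term):

ACL2 !>:monitor (:rewrite my-rule)
              '(ffnnamep 'foo (brr@ :target))

This arranges an interactive break only for those attempts to apply my-rule in which the term being rewritten mentions the function symbol foo.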
| <urn:uuid:460fe123-8906-4320-9cc8-f581b79ced1f> | 2.6875 | 976 | Documentation | Software Dev. | 47.978 | 20 |
May 16, 2011
If you fuel your truck with biodiesel made from palm oil grown on a patch of cleared rainforest, you could be putting into the atmosphere 10 times more greenhouse gases than if you’d used conventional fossil fuels. It’s a scenario so ugly that, in its worst case, it makes even diesel created from coal (the “coal to liquids” fuel dreaded by climate campaigners the world over) look “green.”
The biggest factor determining whether or not a biofuel ultimately leads to more greenhouse-gas emissions than conventional fossil fuels is the type of land used to grow it, says a new study from researchers at MIT. The carbon released when you clear a patch of rainforest is the reason that palm oil grown on that patch of land leads to 55 times the greenhouse-gas emissions of palm oil grown on land that had already been cleared or was not located in a rainforest, said the study’s lead author.
The solution to this biofuels dilemma is more research. Unlike solar and wind, it’s truly an area in which the world is desperate for scientific breakthroughs, such as biofuels from algae or salt-tolerant salicornia. | <urn:uuid:15d19448-aa73-495a-802e-5b1e68a460f3> | 3.484375 | 253 | News Article | Science & Tech. | 46.947667 | 21 |
This work is licensed under the GPLv2 license. See License.txt for details
Autobuild imports, configures, builds and installs various kinds of software packages. It can be used in software development to make sure that nothing is broken in the build process of a set of packages, or can be used as an automated installation tool.
Autobuild config files are Ruby scripts which configure rake to:

- import the package from an SCM or (optionally) update it
- configure it. This phase can handle code generation, configuration (for instance for autotools-based packages), …
- build and install it

It takes the dependencies between packages into account in its build process, and updates the needed environment variables. | <urn:uuid:d4c570b0-6a4e-47fd-afe7-15b6daac7169> | 2.84375 | 144 | Documentation | Software Dev. | 27.461 | 22 |
Let f and g be two differentiable functions. We will say that f and g are proportional if and only if there exists a constant C such that f(t) = C g(t) for all t. Clearly any function is proportional to the zero-function. If the constant C is not important in nature and we are only interested in the proportionality of the two functions, then we would like to come up with an equivalent criterion. Provided one of the two functions (say g) is nowhere zero, the following statements are equivalent:

- f and g are proportional;
- f(t)g'(t) - f'(t)g(t) = 0 for every t.

Therefore, we have the following:
Define the Wronskian of f and g to be W(f,g) = f g' - f' g, that is

W(f,g)(t) = f(t)g'(t) - f'(t)g(t),

the determinant of the 2x2 matrix with rows (f(t), g(t)) and (f'(t), g'(t)).
The following formula is very useful (see the reduction of order technique):

(g/f)'(t) = W(f,g)(t) / f(t)².
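As a quick check (our example, not from the original page): take f(t) = e^t and g(t) = e^(2t). Then

W(f,g)(t) = e^t · 2e^(2t) - e^t · e^(2t) = e^(3t),

which never vanishes, so the two exponentials are not proportional (they are linearly independent), as expected.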
Remark: Proportionality of two functions is equivalent to their linear dependence. Following the above discussion, we may use the Wronskian to determine the dependence or independence of two functions. In fact, the above discussion cannot be reproduced as is for more than two functions, while the Wronskian generalizes naturally to any number of functions. | <urn:uuid:b7bc34b8-0f1f-4df8-8e8d-e56fc9c8fec5> | 2.6875 | 180 | Knowledge Article | Science & Tech. | 38.502318 | 23 |
Forecast Texas Fire Danger (TFD)
The Texas Fire Danger (TFD) map is produced by the National Fire Danger Rating System (NFDRS). Weather information is provided by remote, automated weather stations and then used as an input to the Weather Information Management System (WIMS). The NFDRS processor in WIMS produces a fire danger rating based on fuels, weather, and topography. Fire danger maps are produced daily. In addition, the Texas A&M Forest Service, along with the SSL, has developed a five-day running average fire danger rating map.
Daily RAWS information is derived from an experimental project - DO NOT DISTRIBUTE | <urn:uuid:a789fd8d-b873-45cf-b01d-af6eca242a5d> | 3.015625 | 136 | Knowledge Article | Science & Tech. | 31.717 | 24 |
The Gram-Schmidt Process
Now that we have a real or complex inner product, we have notions of length and angle. This lets us define what it means for a collection of vectors to be “orthonormal”: each pair of distinct vectors is perpendicular, and each vector has unit length. In formulas, we say that the collection {e_1, ..., e_n} is orthonormal if ⟨e_i, e_j⟩ = δ_{ij}. These can be useful things to have, but how do we get our hands on them?
It turns out that if we have a linearly independent collection of vectors v_1, ..., v_n then we can come up with an orthonormal collection e_1, ..., e_n spanning the same subspace of V. Even better, we can pick it so that the first k vectors e_1, ..., e_k span the same subspace as v_1, ..., v_k. The method goes back to Laplace and Cauchy, but gets its name from Jørgen Gram and Erhard Schmidt.
We proceed by induction on the number of vectors in the collection. If n = 1, then we simply set

e_1 = v_1 / ‖v_1‖
This “normalizes” the vector to have unit length, but doesn’t change its direction. It spans the same one-dimensional subspace, and since it’s alone it forms an orthonormal collection.
Now, let's assume the procedure works for collections of size n - 1 and start out with a linearly independent collection of n vectors. First, we can orthonormalize the first n - 1 vectors using our inductive hypothesis. This gives a collection e_1, ..., e_{n-1} which spans the same subspace as v_1, ..., v_{n-1} (and so on down, as noted above). But v_n isn't in the subspace spanned by the first n - 1 vectors (or else the original collection wouldn't have been linearly independent). So it points at least somewhat in a new direction.
To find this new direction, we define

w = v_n - ⟨e_1, v_n⟩e_1 - ... - ⟨e_{n-1}, v_n⟩e_{n-1}
This vector will be orthogonal to all the vectors from e_1 to e_{n-1}, since for any such e_j we can check

⟨e_j, w⟩ = ⟨e_j, v_n⟩ - Σ_i ⟨e_i, v_n⟩⟨e_j, e_i⟩ = ⟨e_j, v_n⟩ - ⟨e_j, v_n⟩ = 0
where we use the orthonormality of the collection to show that most of these inner products come out to be zero.
So we’ve got a vector orthogonal to all the ones we collected so far, but it might not have unit length. So we normalize it:
and we’re done. | <urn:uuid:4a2ad899-7ba0-4bfc-9276-c5c5c0845fe6> | 3.625 | 447 | Tutorial | Science & Tech. | 55.786307 | 25 |
x^(2/3) + y^(2/3) = a^(2/3)

x = a cos^3(t), y = a sin^3(t)
The astroid only acquired its present name in 1836 in a book published in Vienna. It has been known by various names in the literature, even after 1836, including cubocycloid and paracycle.
The length of the astroid is 6a and its area is 3πa²/8.
The gradient of the tangent T from the point with parameter p is -tan(p). The equation of this tangent T is
x sin(p) + y cos(p) = a sin(2p)/2
Let T cut the x-axis and the y-axis at X and Y respectively. Then the length XY is a constant and is equal to a.
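A quick verification of this constant-length property (our own working, not part of the original page): setting y = 0 in the tangent equation gives X = (a cos(p), 0), and setting x = 0 gives Y = (0, a sin(p)), since a sin(2p)/2 = a sin(p)cos(p). Hence XY = √(a²cos²(p) + a²sin²(p)) = a.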
It can be formed by rolling a circle of radius a/4 on the inside of a circle of radius a.
It can also be formed as the envelope produced when a line segment is moved with each end on one of a pair of perpendicular axes. It is therefore a glissette.
| <urn:uuid:367a0525-d005-4467-93f1-a7ac123614d1> | 2.71875 | 409 | Knowledge Article | Science & Tech. | 54.846538 | 26 |
Arctic meltdown not caused by nature
Rapid loss of Arctic sea ice - 80 per cent has disappeared since 1980 - is not caused by natural cycles such as changes in the Earth's orbit around the Sun, says Dr Karl.
The situation is getting rather messy with regard to the ice melting in the Arctic. Now the volume of the ice varies throughout the year, rising to its peak after midwinter, and falling to its minimum after midsummer, usually in the month of September.
Over most of the last 1,400 years, the volume of ice remaining each September has stayed pretty constant. But since 1980, we have lost 80 per cent of that ice.
Now one thing to appreciate is that over the last 4.7 billion years, there have been many natural cycles in the climate — both heating and cooling. What's happening today in the Arctic is not a cycle caused by nature, but something that we humans did by burning fossil fuels and dumping slightly over one trillion tonnes of carbon into the atmosphere over the last century.
So what are these natural cycles? There are many many of them, but let's just look at the Milankovitch cycles. These cycles relate to the Earth and its orbit around the Sun. There are three main Milankovitch cycles. They each affect how much solar radiation lands on the Earth, and whether it lands on ice, land or water, and when it lands.
The first Milankovitch cycle is that the orbit of the Earth changes from mostly circular to slightly elliptical. It does this on a predominantly 100,000-year cycle. When the Earth is close to the Sun it receives more heat energy, and when it is further away it gets less. At the moment the orbit of the Earth is about halfway between "nearly circular" and "slightly elliptical". So the change in the distance to the Sun in each calendar year is currently about 5.1 million kilometres, which translates to about 6.8 per cent difference in incoming solar radiation. But when the orbit of the Earth is at its most elliptical, there will be a 23 per cent difference in how much solar radiation lands on the Earth.
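The 6.8 per cent figure is easy to sanity-check with the inverse-square law; here is a back-of-the-envelope sketch (ours, with approximate round-number distances):

# Solar flux scales as 1/r^2, so a ~5 million km swing between today's
# perihelion and aphelion gives roughly 7 per cent more flux at closest
# approach than at the farthest point.
perihelion_km = 147.1e6   # approximate current closest distance to the Sun
aphelion_km = 152.1e6     # approximate current farthest distance
print((aphelion_km / perihelion_km) ** 2 - 1)   # ~0.069, i.e. about 7 per cent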
The second Milankovitch cycle affecting the solar radiation landing on our planet is the tilt of the north-south spin axis compared to the plane of the orbit of the Earth around the Sun. This tilt rocks gently between 22.1 degrees and 24.5 degrees from the vertical. This cycle has a period of about 41,000 years. At the moment we are roughly halfway in the middle — we're about 23.44 degrees from the vertical and heading down to 22.1 degrees. As we head to the minimum around the year 11,800, the trend is that the summers in each hemisphere will get less solar radiation, while the winters will get more, and there will be a slight overall cooling.
The third Milankovitch cycle that affects how much solar radiation lands on our planet is a little more tricky to understand. It's called 'precession'. As our Earth orbits the Sun, the north-south spin axis does more than just rock gently between 22.1 degrees and 24.5 degrees. It also — very slowly, just like a giant spinning top — sweeps out a complete 360 degrees circle, and it takes about 26,000 years to do this. So on January 4, when the Earth is at its closest to the Sun, it's the South Pole (yep, the Antarctic) that points towards the Sun.
So at the moment, everything else being equal, it's the southern hemisphere that has a warmer summer because it's getting more solar radiation, but six months later it will have a colder winter. And correspondingly, the northern hemisphere will have a warmer winter and a cooler summer.
But of course, "everything else" is not equal. There's more land in the northern hemisphere but more ocean in a southern hemisphere. The Arctic is ice that is floating on water and surrounded by land. The Antarctic is the opposite — ice that is sitting on land and surrounded by water. You begin to see how complicated it all is.
We have had, in this current cycle, repeated ice ages on Earth over the last three-million years. During an ice age, the ice can be three kilometres thick and cover practically all of Canada. It can spread through most of Siberia and Europe and reach almost to where London is today. Of course, the water to make this ice comes out of the ocean, and so in the past, the ocean level has dropped by some 125 metres.
From three million years ago to one million years ago, the ice advanced and retreated on a 41,000-year cycle. But from one million years ago until the present, the ice has advanced and retreated on a 100,000-year cycle.
What we are seeing in the Arctic today — the 80 per cent loss in the volume of the ice since 1980 — is an amazingly huge change in an amazingly short period of time. But it seems as though the rate of climate change is accelerating, and I'll talk more about that, next time …
Published 27 November 2012
© 2013 Karl S. Kruszelnicki Pty Ltd | <urn:uuid:3a4ac59c-d59d-470b-adad-88e5e1c8a45a> | 3.5625 | 1,065 | News Article | Science & Tech. | 65.159255 | 27 |
Black holes growing faster than expected
Black hole find: Existing theories on the relationship between the size of a galaxy and its central black hole are wrong, according to a new Australian study.
The study by Dr Nicholas Scott and Professor Alister Graham, from Melbourne's Swinburne University of Technology, found that smaller galaxies have far smaller black holes than previously estimated.
Central black holes, millions to billions of times more massive than the Sun, reside in the core of most galaxies, and are thought to be integral to galactic formation and evolution.
However astronomers are still trying to understand this relationship.
Scott and Graham combined data from observatories in Chile, Hawaii and the Hubble Space Telescope, to develop a data base listing the masses of 77 galaxies and their central supermassive black holes.
The astronomers determined the mass of each central black hole by measuring how fast stars are orbiting it.
Existing theories suggest a direct ratio between the mass of a galaxy and that of its central black hole.
"This ratio worked for larger galaxies, but with improved technology we're now able to examine far smaller galaxies and the current theories don't hold up," says Scott.
In a paper to be published in the Astrophysical Journal, they found that for each ten-fold decrease in a galaxy's mass, there was a one hundred-fold decrease in its central black hole mass.
"That was a surprising result which we hadn't been anticipating," says Scott.
The study also found that smaller galaxies have far denser stellar populations near their centres than larger galaxies.
According to Scott, this also means the central black holes in smaller galaxies grow much faster than their larger counterparts.
Black holes grow by merging with other black holes when their galaxies collide.
"When large galaxies merge they double in size and so do their central black holes," says Scott.
"But when small galaxies merge their central black holes quadruple in size because of the greater densities of nearby stars to feed on."
Somewhere in between
The findings also solve the long standing problem of missing intermediate mass black holes.
For decades, scientists have been searching for something in between stellar mass black holes formed when the largest stars die, and supermassive black holes at the centre of galaxies.
"If the central black holes in smaller galaxies have lower mass than originally thought, they may represent the intermediate mass black hole population astronomers have been hunting for," says Graham.
"Intermediate sized black holes are between ten thousand and a few hundred thousand times the mass of the Sun, and we think we've found several good candidates."
"These may be big enough to be seen directly by the new generation of extremely large telescopes now being built," says Graham. | <urn:uuid:e617c5fd-d556-4d43-be1f-042e7e7f2c60> | 4.25 | 552 | News Article | Science & Tech. | 38.051734 | 28 |
Hoodoos may be seismic gurus
Hoodoo prediction: Towering chimney-like sedimentary rock spires known as hoodoos may provide an indication of an area's past earthquake activity.
The research by scientists including Dr Rasool Anooshehpoor, from the United States Nuclear Regulatory Commission, may provide scientists with a new tool to test the accuracy of current hazard models.
Hoodoo formations are often found in desert regions, and are common in North America, the Middle East and northern Africa.
They are caused by the uneven weathering of different layers of sedimentary rocks, that leave boulders or thin caps of hard rock perched on softer rock.
By knowing the strengths of different types of sedimentary layers, scientists can determine the amount of stress needed to cause those rocks to fracture.
The United States Geological Survey (USGS) uses seismic hazard models to predict the type of ground motion likely to occur in an area during a seismic event. But, according to Anooshehpoor, these models lack long-term data.
"Existing hazard maps use models based on scant data going back a hundred years or so," says Anooshehpoor. "But earthquakes have return periods lasting hundreds or thousands of years, so there is nothing to test these hazard models against."
The researchers examined two unfractured hoodoos within a few kilometres of the Garlock fault, which is an active strike-slip fault zone in California's Red Rock Canyon.
Their findings are reported in the Bulletin of the Seismological Society of America.
"Although we can't put a precise age on hoodoos because of their erosion characteristics, we can use them to provide physical limits on the level of ground shaking that could potentially have occurred in the area," says Anooshehpoor.
The researchers developed a three-dimensional model of each hoodoo and determined the most likely place where each spire would fail in an earthquake.
They then tested rock samples similar to the hoodoo pillars to measure their tensile strength and compared their results with previously published data.
USGS records suggest at least one large magnitude earthquake occurred along the fault in the last 550 years, resulting in seven metres of slip, yet the hoodoos are still standing.
This finding is consistent with a median level of ground motion associated with the large quakes in this region, says Anooshehpoor.
"If an earthquake occurred with a higher level of ground motion, the hoodoos would have collapsed," he says.
"Nobody can predict earthquakes, but this will help predict what ground motions are associated with these earthquakes when they happen."
Dr Juan Carlos Afonso from the Department of Earth and Planetary Sciences at Sydney's Macquarie University says it's an exciting development.
"In seismic hazard studies, it's not just difficult to cover the entire planet, it's hard to cover even small active regions near populated areas," says Afonso.
"You need lots of instruments, so it's great if you can rely on nature and natural objects to help you."
He says while the work is still very new and needs to be proven, the physics seems sound. | <urn:uuid:85a979cb-9571-4e06-b38a-2f79912abb44> | 4.3125 | 644 | News Article | Science & Tech. | 37.919371 | 29 |
Science Fair Project Encyclopedia
The chloride ion is formed when the element chlorine picks up one electron to form the anion (negatively charged ion) Cl−. The salts of hydrochloric acid HCl contain chloride ions and are also called chlorides. An example is table salt, which is sodium chloride with the chemical formula NaCl. In water, it dissolves into Na+ and Cl− ions.
The word chloride can also refer to a chemical compound in which one or more chlorine atoms are covalently bonded in the molecule. This means that chlorides can be either inorganic or organic compounds. The simplest example of an inorganic covalently bonded chloride is hydrogen chloride, HCl. A simple example of an organic covalently bonded chloride is chloromethane (CH3Cl), often called methyl chloride.
Other examples of inorganic covalently bonded chlorides which are used as reactants are:
- phosphorus trichloride, phosphorus pentachloride, and thionyl chloride - all three are reactive chlorinating reagents which have been used in a laboratory.
- Disulfur dichloride (S2Cl2) - used for vulcanization of rubber.
Chloride ions have important physiological roles. For instance, in the central nervous system the inhibitory action of glycine and some of the action of GABA relies on the entry of Cl− into specific neurons.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | <urn:uuid:4e76b8fd-c479-45d7-8ee7-faf61495aecb> | 4.59375 | 320 | Knowledge Article | Science & Tech. | 27.864975 | 30 |
Convective heat flux is a flux depending on the temperature difference between the body and the adjacent fluid (liquid or gas) and is triggered by the *FILM card. It takes the form

q = h (T - T_0),

where q is the flux normal to the surface, h is the film coefficient, T is the body temperature and T_0 is the environment fluid temperature (also called the sink temperature). Generally, the sink temperature is known. If it is not, it is an unknown in the system. Physically, the convection along the surface can be forced or free. Forced convection means that the mass flow rate of the adjacent fluid (gas or liquid) is known and its temperature is the result of heat exchange between body and fluid. This case can be simulated by CalculiX by defining network elements and using the *BOUNDARY card for the first degree of freedom in the midside node of the element. Free convection, for which the mass flow rate is an unknown too and a result of temperature differences, cannot be simulated.
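For orientation, a schematic use of the card might look like the sketch below (our illustration, not from the manual: the element number, face label, sink temperature and film coefficient are made-up values, and the field order should be checked against the *FILM card documentation):

*FILM
** element or element set, face label, sink temperature, film coefficient
20, F1, 273., 25.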
| <urn:uuid:47d24057-e332-41de-bbe6-0338e16b49a6> | 3.3125 | 249 | Tutorial | Science & Tech. | 41.094375 | 31 |
RR Lyrae star
RR Lyrae star, any of a group of old giant stars of the class called pulsating variables (see variable star) that pulsate with periods of about 0.2–1 day. They belong to the broad Population II class of stars (see Populations I and II) and are found mainly in the thick disk and halo of the Milky Way Galaxy and often in globular clusters. There are several subclasses—designated RRa, RRb, RRc, and RRd—based on the manner in which the light varies with time. The intrinsic luminosities of RR Lyrae stars are relatively well-determined, which makes them useful as distance indicators.
| <urn:uuid:ca821097-b750-4e33-85da-b6754420e0dc> | 2.921875 | 171 | Knowledge Article | Science & Tech. | 63.468978 | 32 |
NOAA scientists agree the risks are high, but say Hansen overstates what science can really say for sure
Jim Hansen at the University of Colorado’s World Affairs Conference (Photo: Tom Yulsman)
Speaking to a packed auditorium at the University of Colorado’s World Affairs Conference on Thursday, NASA climatologist James Hansen found a friendly audience for his argument that we face a planetary emergency thanks to global warming.
Despite the fact that the temperature rise has so far been relatively modest, “we do have a crisis,” he said.
With his characteristic under-stated manner, Hansen made a compelling case. But after speaking with two NOAA scientists today, I think Hansen put himself in a familiar position: out on a scientific limb. And after sifting through my many pages of notes from two days of immersion in climate issues, I’m as convinced as ever that journalists must be exceedingly careful not to overstate what we know for sure and what is still up for scientific debate.
Crawling out on the limb, Hansen argued that global warming has already caused the levels of water in Lake Powell and Lake Mead — the two giant reservoirs on the Colorado River than insure water supplies for tens of millions of Westerners — to fall to 50 percent of capacity. The reservoirs “probably will not be full again unless we decrease CO2 in the atmosphere,” he asserted.
Hansen is arguing that simply reducing our emissions and stabilizing CO2 at about 450 parts per million, as many scientists argue is necessary, is not nearly good enough. We must reduce the concentration from today's 387 ppm to below 350 ppm.
“We have already passed into the dangerous zone,” Hansen said. If we don’t reduce CO2 in the atmosphere, “we would be sending the planet toward an ice free state. We would have a chaotic journey to get there, but we would be creating a very different planet, and chaos for our children.” Hansen’s argument (see a paper on the subject here) is based on paleoclimate data which show that the last time atmospheric CO2 concentrations were this high, the Earth was ice free, and sea level was far higher than it is today.
“I agree with the sense of urgency,” said Peter Tans, a carbon cycle expert at the National Oceanic and Atmospheric Administration here in Boulder, in a meeting with our Ted Scripps Fellows in Environmental Journalism. “But I don’t agree with a lot of the specifics. I don’t agree with Jim Hansen’s naming of 350 ppm as a tipping point. Actually we may have already gone too far, except we just don’t know.”
A key factor, Tans said, is timing. “If it takes a million years for the ice caps to disappear, no problem. The issue is how fast? Nobody can give that answer.”
Martin Hoerling, a NOAA meteorologist who is working on ways to better determine the links between climate change and regional impacts, such as drought in the West, pointed out that the paleoclimate data Hansen bases his assertions on are coarse. They do not record year-to-year events, just big changes that took place over very long time periods. So that data give no indication just how long it takes to de-glaciate Antarctica and Greenland.
Hoerling also took issue with Hansen’s assertions about lakes Powell and Mead. While it is true that “the West has had the most radical change in temperature in the U.S.,” there is no evidence yet that this is a cause of increasing drought, he said.
Flows in the Colorado River have been averaging about 12 million acre feet each year, yet we are consuming 14 million acre feet. “Where are we getting the extra from? Well, we’re tapping into our 401K plan,” he said. That would be the two giant reservoirs, and that’s why their water levels have been declining.
“Why is there less flow in the river?” Hoerling said. “Low precipitation — not every year, but in many recent years, the snow pack has been lower.” And here’s his almost counter-intuitive point: science shows that the reduced precipitation “is due to natural climate variability . . . We see little indication that the warming trend is affecting the precipitation.”
In my conversation with Tans and Hoerling today, I saw a tension between what they believe and what they think they can demonstrate scientifically.
“I like to frame the issue differently,” Tans said. “Sure, we cannot predict what the climate is going to look like in a couple of decades. There are feedbacks in the system we don’t understand. In fact, we don’t even know all the feedbacks . . . To pick all this apart is extremely difficult — until things really happen. So I’m pessimistic.”
There is, Tans said, “a finite risk of catastrophic climate change. Maybe it is 1 in 6, or maybe 1 in 20 or 1 in 3. Yet if we had a risk like that of being hit by an asteroid, we’d know what to do. But the problem here is that we are the asteroid.”
Tans argues that whether or not we can pin down the degree of risk we are now facing, one thing is obvious: “We have a society based on ever increasing consumption and economic expectations. Three percent growth forever is considered ideal. But of course it’s a disaster.”
Hoerling says we are living like the Easter Islanders, who were faced with collapse from over consumption of resources but didn’t see it coming. Like them, he says, we are living in denial.
“I think we are in that type of risk,” Tans said. “But is that moving people? It moves me. But I was already convinced in 1972.” | <urn:uuid:f9441dcc-dc2a-4077-aac8-1b49394182e2> | 2.546875 | 1,273 | News Article | Science & Tech. | 58.102556 | 33 |
Study promoter activity using the Living Colors Fluorescent Timer, a fluorescent protein that shifts color from green to red over time (1). This color change provides a way to visualize the time frame of promoter activity, indicating where in an organism the promoter is active and also when it becomes inactive. Easily detect the red and green emissions indicating promoter activity with fluorescence microscopy or flow cytometry.
Easily Characterize Promoter Activity
The Fluorescent Timer is a mutant form of the DsRed fluorescent reporter, containing two amino acid substitutions which increase its fluorescence intensity and endow it with a distinct spectral property: as the Fluorescent Timer matures, it changes color—in a matter of hours, depending on the expression system used. Shortly after its synthesis, the Fluorescent Timer begins emitting green fluorescence but as time passes, the fluorophore undergoes additional changes that shift its fluorescence to longer wavelengths. When fully matured the protein is bright red. The protein’s color shift can be used to follow the on and off phases of gene expression (e.g., during embryogenesis and cell differentiation).
Fluorescent Timer under the control of the heat shock promoter hsp16-41 in a transgenic C. elegans embryo. The embryo was heat-shocked in a 33°C water bath. Promoter activity was studied during the heat shock recovery period. Green fluorescence was observed in the embryo as early as two hr into the recovery period. By 50 hr after heat shock, promoter activity had ceased, as indicated by the lack of green color.
pTimer (left) is primarily intended to serve as a convenient source of the Fluorescent Timer cDNA. Use pTimer-1 (right) to monitor transcription from different promoters and promoter/enhancer combinations inserted into the MCS located upstream of the Fluorescent Timer coding sequence. Without the addition of a functional promoter, this vector will not express the Fluorescent Timer.
Detecting Timer Fluorescent Protein
You can detect the Fluorescent Timer with the DsRed Polyclonal Antibody.
You can use the DsRed1-C Sequencing Primer to sequence wild-type DsRed1 C-terminal gene fusions, including Timer fusions.
Terskikh, A., et al. (2000) Science 290(5496):1585–1588. | <urn:uuid:fee85558-8ff7-41a4-9a52-a042d84e5f3a> | 2.6875 | 499 | Knowledge Article | Science & Tech. | 36.829775 | 34 |
Download source - 8 Kb
This tutorial is based off of the MSDN Article #ID: Q194873. But, for a beginner, following these MSDN articles can be intimidating to say the least. One of the most often asked questions I see as a Visual C++ and Visual Basic programmer is how to call a VB DLL from VC++. Well, I am hoping to show you exactly that today. I am not going to go over the basic details of COM as this would take too long, so I am assuming you have an understanding of VB, VC++ and a little COM knowledge. It's not too hard to learn; just takes a little time. So let's get started.
The first thing you need to do is fire up Visual Basic 6 (VB 5 should work as well). With VB running, create a new "ActiveX DLL" project. Rename the project to "vbTestCOM" and the class to "clsTestClass". You can do this by clicking in the VB Project Explorer Window on the Project1 item (Step 1), then clicking in the Properties window and selecting the name property (Step 2). Do the same for the Class. Click on the class (Step 3), then the name property and enter the name mentioned above (Step 4). Your project so far should look like the following right-hand side picture:
Ok, now we are ready to add some code to the VB Class. Click on the "Tools" menu, then select the "Add Procedure" menu item. The Add Procedure window will open up. In this window we need to add some information. First (Step 1) make sure the type is set to Function. Second (Step 2) enter a Function name called "CountStringLength". Finally hit the Ok button and VB will generate the new function in the class.
You should have an empty function with which to work. The first thing we will do is specify a return type and an input parameter. Edit your code to look like this:
Public Function CountStringLength(ByVal strValue As String) As Long
What are we doing here? We are taking one parameter, as a String type in this case, then returning the length through the return type, which is a Long. We specify the input parameter as ByVal, meaning VB will make a copy of this variable and use the copy in the function, rather than the default ByRef, which passes the variable by reference. This way we can be sure that we do not modify the string by accident that was passed to us by the calling program. Let's add the code now.
Public Function CountStringLength(ByVal strValue As String) As Long
    If strValue = vbNullString Then
        CountStringLength = 0
    Else
        CountStringLength = Len(strValue)
    End If
End Function
In the first line of code we are checking to see if the calling program passed us an empty, or NULL, string. If so we return 0 as the length. If the user did pass something other than an empty string, then we count it's length and return the length back to the calling program.
Now would be a good time to save your project. Accept the default names and put it in a safe directory. We need to compile this project now. Go to the File menu and select the "Make vbTestCOM.dll..." menu item.
The compiler will produce a file called, surprisingly enough, vbTestCOM.dll. The compiler will also do us the favor of entering this new DLL into the system registry. We have finished the VB side of this project, so let's start the VC++ side of it. Fire up a copy of VC++, then select from the menu, "New Project". The New Project window should appear. Select a "Win32 Console Application" (Step 1), then give it a name of "TestVBCOM" (Step 2). Finally, enter a directory you want to build this project in (Step 3 - your directory will vary from what I have entered).
Click on the "OK" button and the "Win32 Console Application - Step 1 of 1" window will appear. Leave everything on this page as the default, and click the "Finish" button. One final window will appear after this titled "New Project Information". Simply click the "Ok" button here. You should now have an empty Win32 Console project. Press the "Ctrl" and hit the "N" key. Another window titled "New" will appear. Select the "C++ Source File" (Step 1), then enter the new name for this file called, "TestVBCOM.cpp" (Step 2 - make sure the Add to Project checkbox is checked and the correct project name is in the drop down combo box), then click the "Ok" button to finish.
Now we are going to get fancy!
You need to go to your Start Menu in Windows and navigate to the "Visual Studio 6" menu and go into the "Microsoft Visual Studio 6.0 Tools" sub-menu. In here you will see an icon with the name "OLE View". Click on it.
The OLE View tool will open up. You will see a window similar to this one:
Collapse all the trees, if they are not already. This will make it easier to navigate to where we want to go. Highlight the "Type Libraries" (Step 1) and expand it. You should see a fairly massive listing. We need to locate our VB DLL. Now, remember what we named the project? Right, we need to look for vbTestCOM. Scroll down until you find this. Once you have found it, double click on it. A new window should appear - the "ITypeLib Viewer" window. We are only interested in the IDL (Interface Definition Language) code on the right side of the window. Select the entire IDL text and hit the "Ctrl" and "C" buttons to copy it to the clipboard.
You can close this window and the OLE View window now, as we are done with the tool. We need to add the contents of the IDL file into our VC++ project folder. Go to the folder you told VC++ to create your project in and create a new text file there (if you are in Windows Explorer, you can right-click in the directory, select "New", then follow the arrow and select "Text Document"). Rename the text document to "vbTestCOM.idl". Then double-click on the new IDL file (VC++ should open it if you named it correctly with an IDL extension). Now paste the code into the file by pressing the "Ctrl" and "V" keys. The IDL text should be pasted into the file. So far, so good. Now, this IDL file is not going to do us much good until we compile it. That way, VC++ can use the files it generates to talk to the VB DLL. Let's do that now. Open a DOS window and navigate to the directory you created your VC++ project in. Once in that directory, at the prompt you need to type the following to invoke the MIDL compiler:
E:\VCSource\TestVBCOM\TestVBCOM>midl vbTestCOM.idl /h vbTestCOM.h
Hit the "Enter" key and let MIDL do its magic. You should see results similar to the following:
Close the DOS window and head back into VC++. We need to add the newly generated vbTestCOM.h and vbTestCOM_i.c files to the project. You can do this by going to the "Project" menu, then selecting the "Add to Project" item, and scrolling over to the "Files" menu item and clicking on it. A window titled, "Insert Files into Project" will open. Select the two files highlighted in the next picture, then select the "Ok" button.
These two files were generated by MIDL for us, and VC++ needs them in order to talk to the VB DLL (actually VC++ does not need the "vbTestCOM_i.c" file in the project, but it is handy to have in the project to review). We are going to add the following code to the "TestVBCOM.cpp" file now, so navigate to that file in VC++ using the "Workspace" window. Open the file by double-clicking it and VC++ will display the empty file for editing.
Now add the following code to the "TestVBCOM.cpp" file. (The listing here is reconstructed; the _clsTestClass interface and the CLSID_/IID_ constants are the names MIDL generates in vbTestCOM.h for a VB class named clsTestClass.)
#include "vbTestCOM.h"   // MIDL-generated header from the step above
#include <comdef.h>      // _bstr_t support
#include <iostream.h>    // cout support
int main() {
    _clsTestClass *IVBTestClass = NULL;   // interface pointer to the VB class
    long ReturnValue = 0;
    HRESULT hr = CoInitialize(0);         // initialize COM
    hr = CoCreateInstance(CLSID_clsTestClass, NULL, CLSCTX_INPROC_SERVER,
                          IID__clsTestClass, (void **)&IVBTestClass);
    if (SUCCEEDED(hr)) {
        _bstr_t bstrValue("Hello World");
        hr = IVBTestClass->CountStringLength(bstrValue, &ReturnValue);
        cout << "The string is: " << ReturnValue << " characters in length." << endl;
        IVBTestClass->Release();          // release the VB object
    } else {
        cout << "CoCreateInstance Failed." << endl;
    }
    CoUninitialize();
    return 0;
}
If all the code is entered correctly, press the "F7" key to compile this project. Once the project has compiled cleanly, press the "Ctrl" and "F5" keys to run it. In the C++ code, we include the MIDL-created "vbTestCOM.h" file, the "Comdef.h" file for _bstr_t class support, and the "iostream.h" file for "cout" support. The rest of the comments should speak for themselves as to what's occurring. This simple tutorial shows how well a person can integrate VB and VC++ apps using COM. Not too tough, actually. | <urn:uuid:a5fbb498-1ce4-4861-9a4d-ac9d31394472> | 2.796875 | 2,064 | Tutorial | Software Dev. | 74.372644 | 35 |
Hold the salt: UCLA engineers develop revolutionary new desalination membrane
Process uses atmospheric pressure plasma to create filtering 'brush layer'
Desalination can become more economical and used as a viable alternate water resource.
By Wileen Wong Kromhout
Originally published in UCLA Newsroom
Researchers from the UCLA Henry Samueli School of Engineering and Applied Science have unveiled a new class of reverse-osmosis membranes for desalination that resist the clogging which typically occurs when seawater, brackish water and waste water are purified.
The highly permeable, surface-structured membrane can easily be incorporated into today's commercial production system, the researchers say, and could help to significantly reduce desalination operating costs. Their findings appear in the current issue of the Journal of Materials Chemistry.
Reverse-osmosis (RO) desalination uses high pressure to force polluted water through the pores of a membrane. While water molecules pass through the pores, mineral salt ions, bacteria and other impurities cannot. Over time, these particles build up on the membrane's surface, leading to clogging and membrane damage. This scaling and fouling places higher energy demands on the pumping system and necessitates costly cleanup and membrane replacement.
The new UCLA membrane's novel surface topography and chemistry allow it to avoid such drawbacks.
"Besides possessing high water permeability, the new membrane also shows high rejection characteristics and long-term stability," said Nancy H. Lin, a UCLA Engineering senior researcher and the study's lead author. "Structuring the membrane surface does not require a long reaction time, high reaction temperature or the use of a vacuum chamber. The anti-scaling property, which can increase membrane life and decrease operational costs, is superior to existing commercial membranes."
The new membrane was synthesized through a three-step process. First, researchers synthesized a polyamide thin-film composite membrane using conventional interfacial polymerization. Next, they activated the polyamide surface with atmospheric pressure plasma to create active sites on the surface. Finally, these active sites were used to initiate a graft polymerization reaction with a monomer solution to create a polymer "brush layer" on the polyamide surface. This graft polymerization is carried out for a specific period of time at a specific temperature in order to control the brush layer thickness and topography.
"In the early years, surface plasma treatment could only be accomplished in a vacuum chamber," said Yoram Cohen, UCLA professor of chemical and biomolecular engineering and a corresponding author of the study. "It wasn't practical for large-scale commercialization because thousands of meters of membranes could not be synthesized in a vacuum chamber. It's too costly. But now, with the advent of atmospheric pressure plasma, we don't even need to initiate the reaction chemically. It's as simple as brushing the surface with plasma, and it can be done for almost any surface."
In this new membrane, the polymer chains of the tethered brush layer are in constant motion. The chains are chemically anchored to the surface and are thus more thermally stable, relative to physically coated polymer films. Water flow also adds to the brush layer's movement, making it extremely difficult for bacteria and other colloidal matter to anchor to the surface of the membrane.
"If you've ever snorkeled, you'll know that sea kelp move back and forth with the current or water flow," Cohen said. "So imagine that you have this varied structure with continuous movement. Protein or bacteria need to be able to anchor to multiple spots on the membrane to attach themselves to the surface — a task which is extremely difficult to attain due to the constant motion of the brush layer. The polymer chains protect and screen the membrane surface underneath."
Another factor in preventing adhesion is the surface charge of the membrane. Cohen's team is able to choose the chemistry of the brush layer to impart the desired surface charge, enabling the membrane to repel molecules of an opposite charge.
The team's next step is to expand the membrane synthesis into a much larger, continuous process and to optimize the new membrane's performance for different water sources.
"We want to be able to narrow down and create a membrane selection system for different water sources that have different fouling tendencies," Lin said. "With such knowledge, one can optimize the membrane surface properties with different polymer brush layers to delay or prevent the onset of membrane fouling and scaling.
"The cost of desalination will therefore decrease when we reduce the cost of chemicals [used for membrane cleaning], as well as process operation [for membrane replacement]. Desalination can become more economical and used as a viable alternate water resource."
Cohen's team, in collaboration with the UCLA Water Technology Research (WaTeR) Center, is currently carrying out specific studies to test the performance of the new membrane's fouling properties under field conditions.
"We work directly with industry and water agencies on everything that we're doing here in water technology," Cohen said. "The reason for this is simple: If we are to accelerate the transfer of knowledge technology from the university to the real world, where those solutions are needed, we have to make sure we address the real issues. This also provides our students with a tremendous opportunity to work with industry, government and local agencies."
A paper providing a preliminary introduction to the new membrane also appeared in the Journal of Membrane Science last month.
Published: Thursday, April 08, 2010 | <urn:uuid:c0b175bb-65fb-420e-a881-a80b91d00ecd> | 2.8125 | 1,115 | News Article | Science & Tech. | 24.364388 | 36 |
Killing Emacs means ending the execution of the Emacs process. If you started Emacs from a terminal, the parent process normally resumes control. The low-level primitive for killing Emacs is kill-emacs.

Command: kill-emacs &optional exit-data
This command calls the hook kill-emacs-hook, then exits the Emacs process and kills it.

If exit-data is an integer, that is used as the exit status of the Emacs process. (This is useful primarily in batch operation; see Batch Mode.)

If exit-data is a string, its contents are stuffed into the terminal input buffer so that the shell (or whatever program next reads input) can read them.
The kill-emacs function is normally called via the higher-level command C-x C-c (save-buffers-kill-terminal). See Exiting. It is also called automatically if Emacs receives a SIGHUP operating system signal (e.g., when the controlling terminal is disconnected), or if it receives a SIGINT signal while running in batch mode (see Batch Mode).
Variable: kill-emacs-hook
This normal hook is run by kill-emacs, before it kills Emacs.

Because kill-emacs can be called in situations where user interaction is impossible (e.g., when the terminal is disconnected), functions on this hook should not attempt to interact with the user. If you want to interact with the user when Emacs is shutting down, use kill-emacs-query-functions, described below.
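For example, here is a minimal sketch of the two hooks in use (the log-file path is illustrative):

;; Silent cleanup -- kill-emacs-hook functions must not prompt the user.
(add-hook 'kill-emacs-hook
          (lambda ()
            (ignore-errors
              (append-to-file "Emacs exited\n" nil "~/.emacs-exit.log"))))

;; Interactive confirmation -- returning nil aborts the exit; this hook
;; is only consulted by save-buffers-kill-terminal (see below).
(add-hook 'kill-emacs-query-functions
          (lambda () (yes-or-no-p "Really kill Emacs? ")))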
When Emacs is killed, all the information in the Emacs process, aside from files that have been saved, is lost. Because killing Emacs inadvertently can lose a lot of work, the save-buffers-kill-terminal command queries for confirmation if you have buffers that need saving or subprocesses that are running. It also runs the abnormal hook kill-emacs-query-functions:

Variable: kill-emacs-query-functions
When save-buffers-kill-terminal is killing Emacs, it calls the functions in this hook, after asking the standard questions and before calling kill-emacs. The functions are called in order of appearance, with no arguments. Each function can ask for additional confirmation from the user. If any of them returns nil, save-buffers-kill-emacs does not kill Emacs, and does not run the remaining functions in this hook. Calling kill-emacs directly does not run this hook. | <urn:uuid:af93ad35-c5de-4297-a667-afc7347bbc6c> | 2.6875 | 488 | Documentation | Software Dev. | 51.422678 | 37 |
In fact, the United States apparently just emerged from the hottest spring on record.
The period between June 2011 and May of this year was the warmest on record since NOAA record-keeping began in 1895. Aside from Washington, every state experienced higher-than-average temperatures during that period, which also featured the second-warmest summer and fourth-warmest winter on record.
The nation's average temperature during those 12 months hovered at 56 degrees Fahrenheit, reportedly 3.2 degrees above the long-term average, surpassing the previous record, which was just set in April, in an analysis of temperatures between May 2011 and April 2012. The warmer-than-average conditions persisted through the winter and spring, resulting in a limited snowfall that the Rutgers Global Snow Lab reports was the third-smallest on record for the contiguous U.S.
The rising temperatures may have altered precipitation patterns as well, according to NOAA. While the country as a whole actually experienced a drier spring than usual, the West Coast, Northern Plains and Upper Midwest regions were simultaneously wetter than average.
On a more concerning note, the prevalence of natural disasters, such as the disastrous tornado in Joplin, Mo., and the massive, hurricane-caused flooding in Vermont, that plagued the country over the past year was also far from usual. The U.S. Climate Extreme Index, which tracks extremes in temperatures, precipitation, drought and tropical cyclones, reached 44 percent this past spring. That's twice the average value.
The NOAA report is not the only recent analysis to note the prevalence, and consequences, of rising temperatures. On Thursday, NASA reported that scientists have discovered unprecedented blooms of plant life beneath the waters of the Arctic Ocean. While that certainly does not seem like cause for concern, NASA noted it was likely caused by a thinning of the Arctic Ocean's three-foot thick layer of ice, allowing the sun to penetrate that ice to foster plant life under the sea.
A continuous rise in summer temperatures is expected to triple the number of heat-related deaths in the U.S. by the end of the century, the Natural Resources Defense Council reported last month. In an analysis of peer-reviewed data, the organization said summer temperatures could rise by 4 to 11 degrees Fahrenheit by that time due to human-induced climate change, which could increase the number of summer heat-related deaths from 1,300 to 4,600 a year. | <urn:uuid:628e935a-7678-4d56-8179-04a384233ade> | 3.625 | 499 | News Article | Science & Tech. | 42.651575 | 38 |
|Scientific Name:||Phoebastria albatrus|
|Species Authority:||(Pallas, 1769)|
|Red List Category & Criteria:||Vulnerable D2 ver 3.1|
|Reviewer/s:||Butchart, S. & Taylor, J.|
|Contributor/s:||Balogh, G., Chan, S., Hasegawa, H., Peet, N., Rivera, K. & Suryan, R.|
This species is listed as Vulnerable because, although conservation efforts have resulted in a steady population increase, it still has a very small breeding range, limited to Torishima and Minami-kojima (Senkaku Islands), rendering it susceptible to stochastic events and human impacts.
Phoebastria albatrus breeds on Torishima (Japan) and Minami-kojima (Senkaku Islands), which are claimed jointly by Japan, mainland China and Chinese Taipei. Historically there are believed to have been at least nine colonies south of Japan and in the East China Sea (Piatt et al. 2006). Its marine range covers most of the northern Pacific Ocean, but it occurs in highest densities in areas of upwelling along shelf waters of the Pacific Rim, particularly along the coasts of Japan, eastern Russia, the Aleutians and Alaska (Piatt et al. 2006, Suryan et al. 2007). During breeding (December - May) it is found in highest densities around Japan. Satellite tracking has indicated that during the post-breeding period, females spend more time offshore of Japan and Russia, while males and juveniles spend greater time around the Aleutian Islands, Bering Sea and off the coast of North America (Suryan et al. 2007). Juveniles have been shown to travel twice the distances per day and spend more time within continental shelf habitat than adult birds (Suryan et al. 2008). The species declined dramatically during the 19th and 20th centuries owing to exploitation for feathers, and was believed extinct in 1949, but was rediscovered in 1951. The current population is estimated, via direct counts and modelling based on productivity data, to be 2,364 individuals, with 1,922 birds on Torishima and 442 birds on Minami-kojima (G.R. Balogh in litt. 2008). In 1954, 25 birds (including at least six pairs) were present on Torishima. Given that there are now c.426 breeding pairs on Torishima (G.R. Balogh in litt. 2008), the species has undergone an enormous increase since its rediscovery and the onset of conservation efforts. In addition, in 2010, one nesting pair was observed on Kure Atoll (Hawaii, USA), but was probably female-female and unsuccessful, and one chick was produced on Midway Atoll (M. Naughton pers. comm. 2011). A tsunami which hit Midway Atoll in March 2011 did not impact the single pair nesting on Eastern Island (U.S. Fish & Wildlife Service 2008).
Native:Canada; China; Japan; Korea, Republic of; Mexico; Russian Federation; Taiwan, Province of China; United States; United States Minor Outlying Islands
Present - origin uncertain:Northern Mariana Islands; Philippines
|Range Map:||Click here to open the map viewer and explore range.|
|Population:||At the end of the 2006-2007 breeding season, the global population was estimated to be 2,364 individuals, with 1,922 birds on Torishima and 442 birds on Minami-kojima (Senkaku Islands). This estimate is based on: direct observation of breeding pairs on Torishima; an assumption on numbers of non-breeding birds; an estimate for the Minami-kojima population that is based upon a 2002 estimate and an assumption of population growth rate (which, together, puts the Minami-kojima population at about 15% of the global population [G.R. Balogh in litt. 2008]). More recently, Brazil (2009) estimates the population in Japan at c.100-10,000 breeding pairs and c.50-1,000 individuals on migration. The population is taken here as likely to number 2,200-2,500 individuals based on these estimates, roughly equating to 1,500-1,700 mature individuals.|
|Habitat and Ecology:||Behaviour Phoebastria albatrus is a colonial, annually breeding species, with each breeding cycle lasting about 8 months. Birds begin to arrive at the main colony on Torishima Island in early October. A single egg is laid in late October to late November and incubation lasts 64 to 65 days. Hatching occurs in late December through January. Chicks begin to fledge in late May into June. There is little information on timing of breeding on Minami-kojima. First breeding sometimes occurs when birds are five years old, but more commonly when birds are aged six. It forages diurnally and potentially nocturnally, either singly or in groups primarily taking prey by surface-seizing (ACAP 2009). During the breeding season, individuals nesting off Japan forage over the continental shelf (Kiyota and Minami 2008). Habitat Breeding Historically, it preferred level, open, areas adjacent to tall clumps of the grass Miscanthus sinensis for nesting. Diet It feeds mainly on squid, but also takes shrimp, fish, flying fish eggs and other crustaceans (ACAP 2009). It has been recorded following ships to feed on scraps and fish offal.|
|Major Threat(s):||Its historical decline was caused by exploitation. Today, the key threats are the instability of soil on its main breeding site (Torishima), the threat of mortality and habitat loss from the active volcano on Torishima, and mortality caused by fisheries. Torishima is also vulnerable to other natural disasters, such as typhoons. Introduced predators are a potential threat at colonies. Environmental contaminants at sea (oil based compounds) may also be a threat (G.R. Balogh in litt. 2008). Threats at sea (fisheries, oil pollution) are exacerbated by the fact that birds concentrate into predictable hotspots (Piatt et al. 2006). Modelling work has showed that even a small increase in low level chronic mortality (such as fisheries bycatch) has more of an impact on population growth rates than stochastic and theoretically catastrophic events, such as volcanic eruptions (Finkelstein et al. 2010). Phoebastria albatrus has the greatest potential overlap with fisheries that occur in the shallower waters along continental shelf break and slope regions, e.g., sablefish and Pacific halibut longline fisheries off the coasts of Alaska and British Columbia. Although, overlap between the distribution of birds and fishery effort does not mean that interactions between birds and boats necessarily occur, P. albatrus are known to have been killed in U.S. and Russian longline fisheries for Pacific cod and Pacific halibut. In addition, birds on Torishima have been observed with hooks in their mouths of the style used in Japanese fisheries near the island (ACAP 2009).|
Conservation Actions Underway
It is legally protected in Japan, Canada and the USA. A draft recovery plan has been developed (USFWS 2005). Mitigation measures have been established in the Alaska demersal longline fishery and in the Hawaii-based pelagic longline fishery (NOAA 2008). Streamer lines (both heavy weight lines for large boats and lightweight lines for smaller vessels) have been designed to keep birds from longline hooks as they are set, and these are being distributed free to the Alaskan longline fleet (USFWS 2005), though they are not deployed in near-shore waters. In 2006, the Western and Central Pacific Fisheries Commission passed a measure which requires large tuna and swordfish longline vessels (>24m long) to use a combination of two seabird bycatch mitigation measures when fishing north of 23 degrees North. Torishima has been established as a National Wildlife Protection Area. In 1981-1982, native plants were transplanted into the Torishima nesting colony in order to stabilise the nesting habitat and the nest structures. This has enhanced breeding success, with over 60% of eggs now resulting in fledged young. Decoys have been used to attract birds to nest at another site on Torishima since 1993 and the first pair started breeding at this new site in November 1995. The number of chicks fledged from this new colony has increased from one chick in 2004; four chicks in 2005; 13 chicks in 2006; 16 chicks in 2007. In October-November 2007, 35 eggs were laid at this new site (Sato 2009). In 2007, the Japanese government approved a project to translocate chicks from Torishima to Mukojima, 300 km away. All ten chicks of the first translocations in March 2008 fledged (Jacobs 2009). If successful, this project will translocate at least ten chicks per year for five years. Conservation Actions Proposed
Continue to promote measures designed to protect this species from becoming hooked or entangled by commercial fishing gear. Re-establish birds within historic range as insurance against natural disasters on primary breeding colony. Promote conservation measures for the Minami-kojima population. Continue research into the at-sea distribution and marine habitat use through satellite telemetry studies. Continue land-based management and population monitoring.
|Citation:||BirdLife International 2012. Phoebastria albatrus. In: IUCN 2012. IUCN Red List of Threatened Species. Version 2012.2. <www.iucnredlist.org>. Downloaded on 18 May 2013.|
|Feedback:||If you see any errors or have any questions or suggestions on what is shown on this page, please fill in the feedback form so that we can correct or extend the information provided| | <urn:uuid:573c77f2-d484-430d-94d7-05417faf55af> | 3.203125 | 2,094 | Knowledge Article | Science & Tech. | 46.976922 | 39 |
Boulder trails are common to the interior of Menelaus crater as materials erode from higher topography and roll toward the crater floor. Downhill is to the left, image width is 500 m, LROC NAC M139802338L [NASA/GSFC/Arizona State University].
Most boulder trails are relatively high reflectance, but running through the center of this image is a lower reflectance trail. This trail is smaller than the others, and its features may be influenced by factors such as mass of the boulder, boulder speed as it traveled downhill, and elevation from which the boulder originated. For example, is the boulder trail less distinct than the others because the boulder was smaller? What about the spacing of boulder tracks? The spacing of bounce-marks along boulder trails may say something about boulder mass and boulder speed. But why is this boulder trail low reflectance when all of the surrounding trails are higher reflectance? Perhaps this boulder trail is lower reflectance because the boulder gently bounced as it traveled downhill, and barely disturbed a thin layer of regolith? The contrast certainly appears similar to the astronauts' footprints and paths around the Apollo landing sites. Or, maybe the boulder fell apart during its downhill travel and the trail is simply made up of pieces of the boulder - we just don't know yet.
LROC WAC context of Menelaus crater at the boundary between Mare Serenitatis and the highlands (dotted line). The arrow marks the location of today's featured image at contact between the crater floor and NE crater wall [NASA/GSFC/Arizona State University].
What do you think? Why don't you follow the trail to its source in the full LROC NAC frame and see if you can find any other low reflectance trails. | <urn:uuid:ce50e516-2229-404a-b328-7d80cdfd0d33> | 3.25 | 362 | Comment Section | Science & Tech. | 50.615374 | 40 |
Ulva spp. on freshwater-influenced or unstable upper eulittoral rock
Ecological and functional relationships
The community predominantly consists of algae which cover the rock surface and create a patchy canopy. In doing so, the algae provide an amenable habitat in an otherwise hostile environment, exploitable on a temporary basis by other species. For instance, Ulva intestinalis provides shelter for the orange harpacticoid copepod Tigriopus brevicornis and the chironomid larva of Halocladius fucicola (McAllen, 1999). The copepod and chironomid species utilize the hollow thalli of Ulva intestinalis as a moist refuge from desiccation when rockpools completely dry. Several hundred individuals of Tigriopus brevicornis have been observed in a single thallus of Ulva intestinalis (McAllen, 1999). The occasional grazing gastropods that survive in this biotope no doubt graze Ulva.
Seasonal and longer term change
- During the winter, elevated levels of freshwater runoff would be expected owing to seasonal rainfall. Also, winter storm action may disturb the relatively soft substratum of chalk and firm mud, or boulders may be overturned.
- Seasonal fluctuation in the abundance of Ulva spp. would therefore be expected, with the biotope thriving in winter months. Porphyra also tends to be regarded as a winter seaweed, abundant from late autumn to the succeeding spring, owing to the fact that the blade-shaped fronds of the gametophyte develop in early autumn, whilst the microscopic filamentous stages of the spring and summer are less apparent (see recruitment process, below).
Habitat structure and complexity
Habitat complexity in this biotope is relatively limited in comparison to other biotopes. The upper shore substrata, consisting of chalk, firm mud, bedrock or boulders, will probably offer a variety of surfaces for colonization, whilst the patchy covering of ephemeral algae provides a refuge for faunal species and an additional substratum for colonization. However, species diversity in this biotope is poor owing to disturbance and changes in the prevailing environmental factors, e.g. desiccation, salinity and temperature. Only species able to tolerate changes/disturbance or those able to seek refuge will thrive.
The biotope is characterized by primary producers. Rocky shore communities are highly productive and are an important source of food and nutrients for neighbouring terrestrial and marine ecosystems (Hill et al., 1998). Macroalgae exude considerable amounts of dissolved organic carbon which is taken up readily by bacteria and may even be taken up directly by some larger invertebrates. Dissolved organic carbon, algal fragments and microbial film organisms are continually removed by the sea. This may enter the food chain of local, subtidal ecosystems, or be exported further offshore. Rocky shores make a contribution to the food of many marine species through the production of planktonic larvae and propagules which contribute to pelagic food chains.
The life histories of common algae on the shore are generally complex and varied, but follow a basic pattern, whereby there is an alternation of a haploid, gamete-producing phase (the gametophyte, producing eggs and sperm) and a diploid, spore-producing (sporophyte) phase. All have dispersive phases which are circulated around in the water column before settling on the rock and growing into a germling (Hawkins & Jones, 1992).
Ulva intestinalis is generally considered to be an opportunistic species, with an 'r-type' strategy for survival. The r-strategists have a high growth rate and high reproductive rate. For instance, the thalli of Ulva intestinalis, which arise from spores and zygotes, grow within a few weeks into thalli that reproduce again, and the majority of the cell contents are converted into reproductive cells. The species is also capable of dispersal over a considerable distance. For instance, Amsler & Searles (1980) showed that 'swarmers' of a coastal population of Ulva reached exposed artificial substrata on a submarine plateau 35 km away.
The life cycle of Porphyra involves a heteromorphic (of different form) alternation of generations, that are either blade shaped or filamentous. Two kinds of reproductive bodies (male and female (carpogonium)) are found on the blade shaped frond of Porphyra that is abundant during winter. On release these fuse and thereafter, division of the fertilized carpogonium is mitotic, and packets of diploid carpospores are formed. The released carpospores develop into the 'conchocelis' phase (the diploid sporophyte consisting of microscopic filaments), which bore into shells (and probably the chalk rock) and grow vegetatively. The conchocelis filaments reproduce asexually. In the presence of decreasing day length and falling temperatures, terminal cells of the conchocelis phase produce conchospores inside conchosporangia. Meiosis occurs during the germination of the conchospore and produces the macroscopic gametophyte (blade shaped phase) and the cycle is repeated (Cole & Conway, 1980).
Time for community to reach maturity
Disturbance is an important factor structuring the biotope, consequently the biotope is characterized by ephemeral algae able to rapidly exploit newly available substrata and that are tolerant of changes in the prevailing conditions, e.g. temperature, salinity and desiccation. For instance, following the Torrey Canyon tanker oil spill in mid March 1967, which bleached filamentous algae such as Ulva and adhered to the thin fronds of Porphyra, which after a few weeks became brittle and were washed away, regeneration of Porphyra and Ulva was noted by the end of April at Marazion, Cornwall. Similarly, at Sennen Cove where rocks had completely lost their cover of Porphyra and Ulva during April, by mid-May had occasional blade-shaped fronds of Porphyra sp. up to 15 cm long. These had either regenerated from basal parts of the 'Porphyra' phase or from the 'conchocelis' phase on the rocks (see recruitment processes). By mid-August these regenerated specimens were common and well grown but darkly pigmented and reproductively immature. Besides the Porphyra, a very thick coating of Ulva (as Enteromorpha) was recorded in mid-August (Smith 1968). Such evidence suggests that the community would reach maturity relatively rapidly and probably be considered mature in terms of the species present and ability to reproduce well within six months.
This review can be cited as follows:
Ulva spp. on freshwater-influenced or unstable upper eulittoral rock.
Marine Life Information Network: Biology and Sensitivity Key Information Sub-programme [on-line].
Plymouth: Marine Biological Association of the United Kingdom.
Available from: <http://www.marlin.ac.uk/habitatecology.php?habitatid=104&code=2004> | <urn:uuid:13da434f-f140-49e3-8fdb-67019653693a> | 3.625 | 1,520 | Knowledge Article | Science & Tech. | 19.802367 | 41 |
This is the beak of the Giant Squid, scientifically known as Architeuthis dux, the largest of all invertebrates. Scientists believe it can be as long as 18 metres (60 feet). This specimen was collected by Dr Gordon Williamson, who worked as the resident ship's biologist for the whaling company Salvesons. He examined the stomach contents of 250 Sperm Whales (Physeter macrocephalus), keeping the largest squid beak and discarding the smaller, until he ended up with this magnificent specimen. | <urn:uuid:03dc2cd4-80be-4c32-8ff8-4b196542656b> | 3.03125 | 105 | Knowledge Article | Science & Tech. | 43.41975 | 42 |
You're using more water than you think
A water footprint is the total volume of freshwater used to produce the goods and services consumed. Here are some ways to lighten your water footprint.
Fri, Aug 31 2012 at 11:28 AM
Prodded by environmental consciousness — or penny pinching — you installed low-flow showerheads and fixed all the dripping faucets. Knowing that your manicured lawn was sucking down an unnatural amount of water — nearly 7 billion gallons of water is used to irrigate home landscaping, according to the U.S. Environmental Protection Agency — you ripped up the turf and replaced it with native plants.
You’re still using a lot more water than you think.
The drought of 2012 has generated images of parched landscapes and sun-baked lakebeds. At least 36 states are projecting water shortages between now and 2013, according to a survey by the federal General Accounting Office. Water supplies are finite, and fickle.
Water, we all know, is essential to life. It is also essential to agriculture, industry, energy and the production of trendy T-shirts. We all use water in ways that go way beyond the kitchen and bathroom. The measure of both direct and indirect water use is known as the water footprint.
Your water footprint is the total volume of freshwater used to produce the goods and services consumed, according to the Water Footprint Network, an international nonprofit foundation based in the Netherlands. The Water Footprint Network has crunched the numbers and developed an online calculator to help you determine the size of your footprint.
You’ll be astonished to know how much water you’re using … once you’ve converted all those metric measurements into something you can understand.
The average American home uses about 260 gallons of water per day, according to the EPA.
That quarter-pound burger you just gobbled down? More than 600 gallons of water.
That Ramones T-shirt? More than 700 gallons.
So, adjustments to your diet and buying habits can have a much greater impact on the size of your water footprint than taking 40-second showers.
A pound of beef, for example, takes nearly 1,800 gallons of water to produce, with most of that going to irrigate the grains and grass used to feed the cattle. A pound of chicken demands just 468 gallons. If you really want to save water, eat more goat. A pound of goat requires 127 gallons of water.
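As a rough sanity check, assuming that 1,800-gallon-per-pound figure: a quarter-pound patty works out to 0.25 x 1,800 = 450 gallons, so the bun, cheese and other fixings plausibly account for the remaining 150-plus gallons of that 600-gallon burger.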
We’ve been told to cut down on our use of paper to save the forests, but going paperless also saves water. It takes more than 1,300 gallons of water to produce a ream of copy paper.
Even getting treated water to your house requires electricity. Letting your faucet run for five minutes, the EPA says, uses about as much energy as burning a 60-watt light bulb for 14 hours. Reducing your water footprint also reduces your carbon footprint, the amount of greenhouse gases your lifestyle contributes to the atmosphere and global warming.
So, you could say that conserving water is more than hot air. It’s connected to almost everything you do.
Related water stories on MNN: | <urn:uuid:cca5126a-d443-4b80-89a5-01bc0108a268> | 2.640625 | 660 | News Article | Science & Tech. | 54.947142 | 43 |
PHP, while originally designed and built to run on Unix, has had the ability since version 3 to run on Windows. That includes 9x, ME, NT, and 2000. In this article I'm going to go through the process of installing PHP on Windows and explain what you should look out for.
On Windows, as on Unix, you have two options for installing PHP: as a CGI or as an ISAPI module. The obvious benefit of the latter is speed. The downside is that this is still somewhat new and may not be as stable. But, before you do anything, you have to do some prep work, which is pretty simple. Once you've downloaded and unzipped the Windows binary version of PHP, you have to copy php4isapi.dll from the sapi/ directory to the WINNT/system or WINDOWS/system directory. You'll also probably want to move php.ini-dist from your installation directory to the WINDOWS/ directory and rename it to php.ini, if you plan on changing any of the precompiled defaults. Now you're ready to go, regardless of whether you use PHP as a CGI or ISAPI module.
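If you do end up editing php.ini, its entries are simple directive = value lines; a couple of illustrative defaults (the values shown here are just examples):

max_execution_time = 30  ; maximum time in seconds a script may run
display_errors = On      ; show errors in the browser while developing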
For NT/2000, you'll need to tell IIS how to recognize PHP. Thanks to the wonders of GUI, this can easily be accomplished with a few mouse clicks. First, fire up the Microsoft Management Console or Internet Service Manager, depending on whether you're using NT or 2000. Click on the Properties button of the web node you'll be working with. In this example we'll use Default Web Server. Click the ISAPI Filters tab and then click Add. Use php as the name, and in the path put the location of php4isapi.dll, which should be C:\WINNT\system\ in this example.
Configuring IIS to recognize PHP.
Under the Home Directory tab, click the Configuration button, then click Add for Application Mappings. Use the same location of php4isapi.dll as you did with ISAPI Filters and use .php as the extension. Here comes the caveat!! As a test, I tried using as my path the temporary location of php4isapi.dll, which was in the install directory on my desktop. Windows 2000 popped up a wonderfully annoying little box telling me I'm stupid and I should go dunk my head in the sand. OK, so it wasn't quite that wordy, but that's how I took it. Apparently Windows 2000 and IIS require Application Mappings to be under the WINDOWS directory, at least. So keep that in mind if you don't like sand up your nose.
Now, click OK on the Properties dialog. The next thing to do is stop and start IIS. This isn't the same as pushing the stop and start buttons on the Management Console. You should go to a command (or cmd, as it were) window and type net stop iisadmin. Wait for it to tell you what it's doing, as if you were nosy enough to care, and then type net start w3svc. Why they didn't make it net start iisadmin is beyond me. But I'm a Unix guy, and logic seems to be my downfall here. But I digress...
So now you have PHP installed as an ISAPI module on Windows! Aren't you happy? Probably not, because you probably don't believe me. Well, if you need proof just go to C:\inetpub\wwwroot\ and put a file called test.php in there. Inside that file put the following code:

<?php phpinfo(); ?>

Then pull up test.php in your browser. If you see the PHP Info page, it worked! Now this probably goes without saying, so I'll say it. You have to pull up test.php as a web page in the browser using http://. If you give the path for the file using file://, then you'll get a raw output of the code, which is no fun.
The PHP Info page indicates success.
On Windows 9x/ME, you're somewhat restricted to PWS (Personal Web Server). Of course there are alternatives such as OmniHTTPd, but we don't have all day here! Anyway, this is far simpler. It's mainly just a matter of a registry entry. After you've moved php4ts.dll to the appropriate directories, you should open up your favorite registry editor. At this point I note the obligatory warnings about fooling around with the registry, blah, blah, blah... In HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w3svc\parameters\Script Map, you'll want to add an entry with the name of .php and a value of the full path to your php.exe (for example, C:\php\php.exe, or wherever your install put it).
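If you'd rather not click through the editor, the same entry as a .reg file would look something like this (again, the php.exe path is whatever your install used):

REGEDIT4

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w3svc\parameters\Script Map]
".php"="C:\\php\\php.exe"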
Next, go start up the PWS Manager. For each of the directories that you want to make PHP-aware, you have to right-click on that directory and check the executable box.
Now you're ready. Perform the same test (test.php) as I described above, and have fun making all those ASP cronies jealous at the guru-like glow that surrounds you!
Darrell Brogdon is a web developer for SourceForge at VA Linux Systems and has been using PHP since 1996.
Read more PHP Admin Basics columns.
Discuss this article in the O'Reilly Network PHP Forum.
Return to the PHP DevCenter.
Copyright © 2009 O'Reilly Media, Inc. | <urn:uuid:26746c26-10ba-46d1-9738-1e79ec76d82b> | 2.609375 | 1,155 | Tutorial | Software Dev. | 75.591953 | 44 |
is a C-based interpreter (runloop) that executes what different compilers (like Mildew) produce.
If you want to help SMOP, you can just take on one of the low-level S1P implementations and write it. If you have any questions ask ruoso or pmurias at #perl6 @ irc.freenode.org.
The slides for the talk "Perl 6 is just a SMOP" are available; the talk introduces a bit of the reasoning behind SMOP. A newer version of the talk, presented at YAPC::EU 2008, is also available.
SMOP is an alternative implementation of a C engine to run Perl 6. It is focused on taking the most pragmatic approach possible while still being able to support all Perl 6 features. Its core resembles Perl 5 in some ways, and it differs from Parrot in many ways, including the fact that SMOP is not a Virtual Machine. SMOP is simply a runtime engine that happens to have an interpreter run loop.

The main difference between SMOP and Parrot (besides the not-being-a-vm thing) is that SMOP is, from the bottom up, an implementation of the Perl 6 OO features, in a way that SMOP should be able to do a full bootstrap of the Perl 6 type system. Parrot, on the other hand, has a much more static low-level implementation (the PMC).
The same way PGE is a project on top of Parrot, SMOP will need a grammar engine for itself.
SMOP is the implementation that is stressing the meta object protocol more than any other implementation, and so far that has been a very fruitful exercise, with Larry making many clarifications on the object system thanks to SMOP.
Important topics on SMOP
- SMOP doesn't recurse in the C stack, and it doesn't actually mandate a paradigm (stack-based or register-based). SMOP has a Polymorphic Eval that allows you to switch from one interpreter loop to another using Continuation Passing Style. See SMOP Stackless.
- SMOP doesn't define an object system of its own. The only thing it defines is the concept of the SMOP Responder Interface, which then encapsulates whatever object system is in use (see the sketch after this list). This feature is fundamental to implementing the SMOP Native Types.
- SMOP is intended to bootstrap itself from the low level to the high level. This is achieved by the fact that everything in SMOP is an Object. This way, even the low-level objects can be exposed to the high-level runtime. See SMOP OO Bootstrap.
- SMOP won't implement a parser of its own; it will use STD or whatever parser gets ported to its runtime first.
- In order to enable the bootstrap, the runtime has a set of SMOP Constant Identifiers that are available for the sub-language compilers to use.
- There are some special SMOP Values Not Subject to Garbage Collection.
- A new interpreter implementation, SMOP Mold, replaced SLIME.
- The "official" smop Perl 6 compiler is mildew - it lives in v6/mildew
- Currently there exists an old Elf backend which targets SMOP - it lives in misc/elfish/elfX
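A loose illustration of the Responder Interface idea mentioned above — wait, better said plainly: every value carries a pointer to a dispatcher that answers messages on its behalf, so different object systems can coexist. The names and layout below are illustrative only, not SMOP's actual definitions:

#include <stdio.h>

typedef struct SMOP_Object SMOP_Object;

typedef struct SMOP_RI {
    const char *name;
    /* Every interaction with a value goes through its responder. */
    SMOP_Object *(*message)(SMOP_Object *self, const char *identifier);
} SMOP_RI;

struct SMOP_Object {
    const SMOP_RI *ri;  /* first slot: who answers for this value */
    int payload;
};

static SMOP_Object *int_message(SMOP_Object *self, const char *id) {
    printf("native int %d got message '%s'\n", self->payload, id);
    return self;
}

static const SMOP_RI int_ri = { "native int", int_message };

int main(void) {
    SMOP_Object n = { &int_ri, 42 };
    n.ri->message(&n, "Str");  /* dispatch is always indirect */
    return 0;
}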
SMOP GSoC 2009
See the Old SMOP Changelog | <urn:uuid:9ef4d308-fa15-4196-86db-2db8b4c54358> | 2.875 | 694 | Knowledge Article | Software Dev. | 53.614756 | 45 |
The process of accretion is important in the formation of planets, stars, and black holes; it is also believed to power some of the most energetic phenomena in the universe. In an accretion disc, the accretion rate is controlled by the outward transport of angular momentum. Collisional processes like friction or viscosity are typically too small to account for the observed rates, and it is universally believed that astrophysical accretion discs are turbulent. However the origin of this turbulence is not clear since discs with velocity profiles close to Keplerian are stable to infinitesimal perturbations. In the early nineties it was realized that the stability picture changes dramatically if magnetic fields or nonlinear effects are present. In this talk I will describe how some of these issues can be discussed within the framework of the flow of a conducting fluid between coaxial rotating cylinders. I will also describe some of the experiments that are on the way to study these flows as well as the computational efforts to clarify the nature of the ensuing turbulent transport.
ANL Physics Division Colloquium Schedule | <urn:uuid:9bf53c84-d3fd-428b-bc4b-63892ad85de5> | 2.625 | 215 | Academic Writing | Science & Tech. | 18.868453 | 46 |
Titan's Ethane Lake
This artist concept shows a mirror-smooth lake on the surface of the smoggy moon Titan.
Cassini scientists have concluded that at least one of the large lakes observed on Saturn's moon Titan contains liquid hydrocarbons, and have positively identified ethane. This result makes Titan the only place in our solar system beyond Earth known to have liquid on its surface.
The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter was designed, developed and assembled at JPL.
For more information about the Cassini-Huygens mission visit http://saturn.jpl.nasa.gov. | <urn:uuid:36c0c102-e78a-494d-9b3b-78d6003c8994> | 3.34375 | 185 | Knowledge Article | Science & Tech. | 35.831515 | 47 |
Oct. 9, 1998 COLUMBIA, Mo.--Ducks, geese and bald eagles soaring over areas the size of small towns are envisioned when talking about federally protected wetlands, not areas that are maybe as big as a small swimming pool and apparently void of life. University of Missouri-Columbia Professor Ray Semlitsch is trying to change that view and explain the importance of smaller wetlands before they are managed out of existence.
"Large wetlands are beautiful and need to be protected, but for some animal species such as frogs, toads and salamanders, it is small wetlands that support greater species diversity," said Semlitsch, who along with his graduate research assistant, Russ Bodie, recently published their research in Conservation Biology. "These smaller, temporary wetlands--because they are dry at certain times during the year--are much harder to appreciate than vast marsh areas. But without these smaller wetlands, it is very possible that much of the animal and plant life that make wetlands rich, productive habitats would not survive. We need to worry about the conservation of smaller wetlands as well as the larger ones."
Small wetlands currently are defined as being less than 4 hectares, or about 10 acres. The majority of the nation's wetlands are much smaller than might be imagined, closer to 1 to 2 acres and sometimes as small as several square yards. These small wetlands may comprise the majority of wetlands in the United States and help support a vast diversity of wetland species. However, unlike the large wetlands, these smaller areas are not protected to the same extent.
Recently, the Army Corps of Engineers, which manages wetlands of all sizes throughout the United States, drafted regulations that will change the way wetlands are managed in the future. They have put off any change in management regulations until April, but the MU researchers argue that the changes in the regulations could manage these smaller wetlands out of existence.
"Right now we can't detect losses of small wetlands by satellite imagery, a technique used to assess environmental change," Bodie said. "We lose thousands of acres each year in wetlands and these smaller ones are not even taken into account. Yet, they play a vital role in the ecosystem and support a great variety of organisms."
Research done by Semlitsch and Bodie has indicated that when some individuals of a species move between wetlands, this increases their chances of survival. By populating many different wetlands, various species thrive, even during drought years when some wetlands are dry. When smaller wetlands are destroyed, the chances of survival for many species' populations may decrease dramatically because distances between individual wetlands become longer, making movement between wetlands more difficult. These small wetland breeding sites for amphibians are especially critical in light of purported world-wide declines, Semlitsch said.
Wetlands in general also have direct benefits to humans as they filter out chemicals and silt, buffer lands from flooding, and are a favorite of hunters and fishers. They also are very costly and difficult to develop for construction or other purposes.
The above story is reprinted from materials provided by University Of Missouri, Columbia.
| <urn:uuid:33275736-ac37-49fe-a13a-d130e6ad29c6> | 3.125 | 674 | Truncated | Science & Tech. | 33.419667 | 48 |
Nov. 27, 2009 Physicists from the Japanese-led multi-national T2K neutrino collaboration have just announced that over the weekend they detected the first neutrino events generated by their newly built neutrino beam at the J-PARC (Japan Proton Accelerator Research Complex) accelerator laboratory in Tokai, Japan.
Protons from the 30-GeV Main Ring synchrotron were directed onto a carbon target, where their collisions produced charged particles called pions. These pions travelled through a helium-filled volume where they decayed to produce a beam of the elusive particles called neutrinos. These neutrinos then flew 200 metres through the earth to a sophisticated detector system capable of making detailed measurements of their energy, direction, and type. The data from the complex detector system is still being analysed, but the physicists have seen at least 3 neutrino events, in line with the expectation based on the current beam and detector performance.
This detection therefore marks the beginning of the operational phase of the T2K experiment, a 474-physicist, 13-nation collaboration to measure new properties of the ghostly neutrino. Neutrinos interact only weakly with matter, and thus pass effortlessly through the earth (and mostly through the detectors!). Neutrinos exist in three types, called electron, muon, and tau; linked by particle interactions to their more familiar charged cousins like the electron. Measurements over the last few decades, notably by the Super Kamiokande and KamLAND neutrino experiments in western Japan, have shown that neutrinos possess the strange property of neutrino oscillations, whereby one type of neutrino will turn into another as they propagate through space. Neutrino oscillations, which require neutrinos to have mass and therefore were not allowed in our previous theoretical understanding of particle physics, probe new physical laws and are thus of great interest in the study of the fundamental constituents of matter.
They may even be related to the mystery of why there is more matter than anti-matter in the universe, and thus are the focus of intense study worldwide.
Precision measurements of neutrino oscillations can be made using artificial neutrino beams, as pioneered by the K2K neutrino experiment where neutrinos from the KEK laboratory were detected using the vast Super Kamiokande neutrino detector near Toyama. T2K is a more powerful and sophisticated version of the K2K experiment, with a more intense neutrino beam derived from the newly-built Main Ring synchrotron at the J-PARC accelerator laboratory.
The beam was built by physicists from KEK in cooperation with other Japanese institutions and with assistance from the US, Canadian, UK and French T2K institutes. Prof. Chang Kee Jung of Stony Brook University, Stony Brook, New York, leader of the US T2K project, said "I am somewhat stunned by this seemingly effortless achievement considering the complexity of the machinery, the operation and international nature of the project. This is a result of a strong support from the Japanese government for basic science, which I hope will continue, and hard work and ingenuity of all involved. I am excited about more ground breaking findings from this experiment in the near future."
The beam is aimed once again at Super-Kamiokande, which has been upgraded for this experiment with new electronics and software. Before the neutrinos leave the J-PARC facility their properties are determined by a sophisticated "near" detector, partly based on a huge magnet donated from CERN where it had earlier been used for neutrino experiments (and for the UA1 experiment, which won the Nobel Prize for the discovery of the W and Z bosons which are the basis of neutrino interactions), and it is this detector which caught the first events.
The first neutrino events were detected in a specialized detector, called the INGRID, whose purpose is to determine the neutrino beam's direction and profile. Further tests of the T2K neutrino beam are scheduled for December, and the experiment plans to begin production running in mid-January. Another major milestone should be observed soon after -- the first observation of a neutrino event from the T2K beam in the Super-Kamiokande experiment. Running will continue until the summer, by which time the experiment hopes to have made the most sensitive search yet achieved for a so-far unobserved critical neutrino oscillation mode dominated by oscillations between all three types of neutrinos.
In the coming years this search will be improved even further, with the hope that the 3-mode oscillation will be observed, allowing measurements to begin comparing the oscillations of neutrinos and anti-neutrinos, probing the physics of matter/ anti-matter asymmetry in the neutrino sector.
| <urn:uuid:73f94bf7-72a9-431b-90ac-37db05302858> | 3.34375 | 1,033 | News Article | Science & Tech. | 22.25657 | 49 |
June 22, 2010 Millions of years before humans began battling it out over beachfront property, a similar phenomenon was unfolding in a diverse group of island lizards.
Often mistaken for chameleons or geckos, Anolis lizards fight fiercely for resources, responding to rivals by doing push-ups and puffing out their throat pouches. But anoles also compete in ways that shape their bodies over evolutionary time, says a new study in the journal Evolution.
Anolis lizards colonized the Caribbean from South America some 40 million years ago and quickly evolved a wide range of shapes and sizes. "When anoles first arrived in the islands there were no other lizards quite like them, so there was abundant opportunity to diversify," said author Luke Mahler of Harvard University.
Free from rivals in their new island homes, Anolis lizards evolved differences in leg length, body size, and other characteristics as they adapted to different habitats. Today, the islands of Cuba, Hispaniola, Jamaica and Puerto Rico -- collectively known as the Greater Antilles -- are home to more than 100 Anolis species, ranging from lanky lizards that perch in bushes, to stocky, long-legged lizards that live on tree trunks, to foot-long 'giants' that roam the upper branches of trees.
"Each body type is specialized for using different parts of a tree or bush," said Mahler.
Alongside researchers from the University of Rochester, Harvard University, and the National Evolutionary Synthesis Center, Mahler wanted to understand how and when this wide range of shapes and sizes came to be.
To do that, the team used DNA and body measurements from species living today to reconstruct how they evolved in the past. In addition to measuring the head, limbs, and tail of over a thousand museum specimens representing nearly every Anolis species in the Greater Antilles -- including several Cuban species that were previously inaccessible to North American scientists -- they also used the Anolis family tree to infer what species lived on which islands, and when.
By doing so, they discovered that the widest variety of anole shapes and sizes arose among the evolutionary early-birds. Then as the number of anole species on each island increased, the range of new body types began to fizzle.
Late-comers in lizard evolution underwent finer and finer tinkering as time went on. As species proliferated on each island, their descendants were forced to partition the remaining real estate in increasingly subtle ways, said co-author Liam Revell of the National Evolutionary Synthesis Center in Durham, NC.
"Over time there were fewer distinct niches available on each island," said Revell. "Ancient evolutionary changes in body proportions were large, but more recent evolutionary changes have been more subtle."
The researchers saw the same trend on each island. "The islands are like Petri dishes where species diversification unfolded in similar ways," said Mahler. "The more species there were, the more they put the brakes on body evolution."
The study sheds new light on how biodiversity comes to be. "We're not just looking at species number, we're also looking at how the shape of life changes over time," said Mahler.
The team's findings are published in the journal Evolution.
Richard Glor of the University of Rochester and Jonathan Losos of Harvard University were also authors on this study.
- D. Luke Mahler, Liam J. Revell, Richard E. Glor, Jonathan B. Losos. Ecological opportunity and the rate of morphological evolution in the diversification of Greater Antillean anoles. Evolution, 2010; DOI: 10.1111/j.1558-5646.2010.01026.x
| <urn:uuid:d0b67315-27de-4788-9b3c-46132eef151f> | 3.640625 | 791 | News Article | Science & Tech. | 36.878159 | 50 |
The word vivisection was first coined in the 1800s to denote the experimental dissection of live animals - or humans. It was created by activists who opposed the practice of experimenting on animals. The Roman physician Celsus claimed that in Alexandria in the 3rd century BCE physicians had performed vivisections on sentenced criminals, but vivisection on humans was generally outlawed. Experimenters frequently used living animals. Most early modern researchers considered this practice acceptable, believing that animals felt no pain. Even those who opposed vivisection in the early modern period did not usually do so out of consideration for the animals, but because they thought that this practice would coarsen the experimenter, or because they were concerned that animals stressed under experimental conditions did not represent the normal state of the body.
Prompted by the rise of experimental physiology and the increasing use of animals, an anti-vivisection movement started in the 1860s. Its driving force, the British journalist Frances Power Cobbe (1822-1904), founded the British Victoria Street Society in 1875, which gave rise to the British government's Cruelty to Animals Act of 1876. This law regulated the use of live animals for experimental purposes.
| <urn:uuid:302a84f1-d0b1-4e14-8e71-b2ded9ee5190> | 3.71875 | 392 | Knowledge Article | Science & Tech. | 36.06538 | 51 |
by I. Peterson
Unlike an ordinary incandescent bulb, a laser produces light of a single wavelength. Moreover, the emitted light waves are coherent, meaning that all of the energy peaks and troughs are precisely in step.
Now, a team at the Massachusetts Institute of Technology has demonstrated experimentally that a cloud consisting of millions of atoms can also be made coherent. Instead of flying about and colliding randomly, the atoms display coordinated behavior, acting as if the entire assemblage were a single entity.
According to quantum mechanics, atoms can behave like waves. Thus, two overlapping clouds made up of atoms in coherent states should produce a zebra-striped interference pattern of dark and light fringes, just like those generated when two beams of ordinary laser light overlap.
By detecting such a pattern, the researchers proved that the clouds' atoms are coherent and constitute an "atom laser," says physicist Wolfgang Ketterle, who heads the MIT group. These matter waves, in principle, can be focused just like light.
Ketterle and his coworkers describe their observations in the Jan. 31 Science.
The demonstration of coherence involving large numbers of atoms is the latest step in a series of studies of a remarkable state of matter called a Bose-Einstein condensate. Theory predicted that atoms chilled to temperatures barely above absolute zero would collectively enter the same quantum state and behave like a single unit, or superparticle, with a specific wavelength.
First created in the laboratory in 1995 by Eric A. Cornell and his collaborators at the University of Colorado and the National Institute of Standards and Technology, both in Boulder, Bose-Einstein condensates have been the subject of intense investigation ever since (SN: 7/15/95, p. 36; 5/25/96, p. 327).
At MIT, Ketterle and his colleagues cool sodium atoms to temperatures below 2 microkelvins. The frigid atoms are then confined in a special magnetic trap inside a vacuum chamber.
To determine whether the atoms in the resulting condensate are indeed as coherent as photons in a laser beam, the researchers developed a novel method of extracting a clump of atoms from the trap.
In effect, they manipulate the magnetic states of the atoms to expel an adjustable fraction of the original cloud; under the influence of gravity, the released clump falls. The method can produce a sequence of descending clumps, with each containing 100,000 to several million coherent atoms.
The apparatus acts like a dripping faucet, Ketterle says. He and his colleagues describe the technique in the Jan. 27 Physical Review Letters.
To demonstrate interference, the MIT group created a double magnetic trap so that two pulses of coherent atoms could be released at the same time. As the two clumps fell, they started to spread and overlap. The researchers could then observe interference between the atomic waves of the droplets.
"The signal was almost too good to be true," Ketterle says. "We saw a high-contrast, very regular pattern."
"It's a beautiful result," Cornell remarks. "This work really shows that Bose-Einstein condensation is an atom laser."
From the pattern, the MIT researchers deduced that the condensate of sodium atoms has a wavelength of about 30 micrometers, considerably longer than the 0.04-nanometer wavelength typical of individual atoms at room temperature.
Ketterle and his colleagues are already planning several improvements to their primitive atom laser, including getting more atoms into the emitted pulses and going from pulses to a continuous beam.
Practical use of an atom laser for improving the precision of atomic clocks and for manipulating atoms is still distant, however, Cornell notes. | <urn:uuid:5a667bf7-c324-483a-8231-ce8448d754f3> | 4 | 769 | News Article | Science & Tech. | 35.487766 | 52 |
By J. Raloff
Canadian scientists have identified the likely culprit behind some historic, regional declines in Atlantic salmon. The researchers find that a near-ubiquitous water pollutant can render young, migrating fish unable to survive a life at sea.
Heavy, late-spring spraying of forests with a pesticide laced with nonylphenol during the 1970s and '80s was the clue that led the biologists to unmask that chemical's role in the transitory decline of salmon in eastern Canada. Though these sprays have ended, concentrations of nonylphenols in forest runoff then were comparable to those in the effluent of some pulp mills, industrial facilities, and sewage-treatment plants today. Downstream of such areas, the scientists argue, salmon and other migratory fish may still be at risk.
Nonylphenols are surfactants used in products from pesticides to dishwashing detergents, cosmetics, plastics, and spermicides. Because waste-treatment plants don't remove nonylphenols well, these chemicals can build up in downstream waters (SN: 1/8/94, p. 24).
When British studies linked ambient nonylphenol pollution to reproductive problems in fish (SN: 2/26/94, p. 142), Wayne L. Fairchild of Canada's Department of Fisheries and Oceans in Moncton, New Brunswick, became concerned. He recalled that an insecticide used on local forests for more than a decade had contained large amounts of nonylphenols. They helped aminocarb, the oily active ingredient in Matacil 1.8D, dissolve in water for easier spraying.
Runoff of the pesticide during rains loaded the spawning and nursery waters of Atlantic salmon with nonylphenols. Moreover, this aerial spraying had tended to coincide with the final stages of smoltification, the fish's transformation for life at sea.
To probe for effects of forest spraying, Fairchild and his colleagues surveyed more than a decade of river-by-river data on fish. They overlaid these numbers with archival data on local aerial spraying with Matacil 1.8D or either of two nonylphenol-free pesticides. One contained the same active ingredient, aminocarb, as Matacil 1.8D does.
Most of the lowest adult salmon counts between 1973 and 1990 occurred in rivers where smolts would earlier have encountered runoff of Matacil 1.8D, Fairchild's group found. In 9 of 19 cases of Matacil 1.8D spraying for which they had good data, salmon returns were lower than in the 5 years before and the 5 years after spraying, they report in the May Environmental Health Perspectives. No population declines were associated with the other two pesticides.
The researchers have now exposed smolts in the laboratory to various nonylphenol concentrations, including some typical of Canadian rivers during the 1970s. The fish remained healthy until they entered salt water, at which point they exhibited a failure-to-thrive syndrome.
"They looked like they were starving," Fairchild told Science News. Within 2 months, he notes, 20 to 30 percent died. Untreated smolts adjusted normally to salt water and fattened up.
Steffen S. Madsen, a fish ecophysiologist at Odense University in Denmark, is not surprised, based on his own experiments.
To move from fresh water to the sea, a fish must undergo major hormonal changes that adapt it for pumping out excess salt. A female preparing to spawn in fresh water must undergo the opposite change. Since estrogen triggers her adaptation, Madsen and a colleague decided to test how smolts would respond to estrogen or nonylphenol, an estrogen mimic.
In the lab, they periodically injected salmon smolts with estrogen or nonylphenol over 30 days, and at various points placed them in seawater for 24 hours. Salt in the fish's blood skyrocketed during the day-long trials, unlike salt in untreated smolts. "Our preliminary evidence indicates that natural and environmental estrogens screw up the pituitary," Madsen says. The gland responds by making prolactin, a hormone that drives freshwater adaptation.
Judging by Fairchild's data, Madsen now suspects that any fish that migrates between fresh and salt water may be similarly vulnerable to high concentrations of pollutants that mimic estrogen.
From Science News, Vol. 155, No. 19, May 8, 1999, p. 293. Copyright © 1999, Science Service.
| <urn:uuid:3ac50003-34df-4326-9ff5-f4278ff44a0b> | 3.109375 | 978 | Truncated | Science & Tech. | 47.450967 | 53 |
Gaia theory is a class of scientific models of the geo-biosphere in which life as a whole fosters and maintains suitable conditions for itself by helping to create an environment on Earth suitable for its continuity. The first such theory was created by the atmospheric scientist and chemist, Sir James Lovelock, who developed his hypotheses in the 1960s before formally publishing the concept, first in the New Scientist (February 13, 1975) and then in the 1979 book "Gaia: A New Look at Life on Earth". He hypothesized that the living matter of the planet functioned like a single organism and named this self-regulating living system after the Greek goddess, Gaia, using a suggestion of novelist William Golding.
Gaia "theories" have non-technical predecessors in the ideas of several cultures. Today, "Gaia theory" is sometimes used among non-scientists to refer to hypotheses of a self-regulating Earth that are non-technical but take inspiration from scientific models. Among some scientists, "Gaia" carries connotations of lack of scientific rigor, quasi-mystical thinking about the planet arth, and therefore Lovelock's hypothesis was received initially with much antagonism by much of the scientific community. No controversy exists, however, that life and the physical environment significantly influence one another.
Gaia theory today is a spectrum of hypotheses, ranging from the undeniable (Weak Gaia) to the radical (Strong Gaia).
At one end of this spectrum is the undeniable statement that the organisms on the Earth have radically altered its composition. A stronger position is that the Earth's biosphere effectively acts as if it is a self-organizing system, which works in such a way as to keep its systems in some kind of meta-equilibrium that is broadly conducive to life. The history of evolution, ecology and climate show that the exact characteristics of this equilibrium intermittently have undergone rapid changes, which are believed to have caused extinctions and felled civilisations.
Biologists and earth scientists usually view the factors that stabilize the characteristics of a period as an undirected emergent property or entelechy of the system; as each individual species pursues its own self-interest, for example, their combined actions tend to have counterbalancing effects on environmental change. Opponents of this view sometimes point to examples of life's actions that have resulted in dramatic change rather than stable equilibrium, such as the conversion of the Earth's atmosphere from a reducing environment to an oxygen-rich one. However, proponents will point out that those atmospheric composition changes created an environment even more suitable to life.
Some go a step further and hypothesize that all lifeforms are part of a single living planetary being called Gaia. In this view, the atmosphere, the seas and the terrestrial crust would be results of interventions carried out by Gaia through the coevolving diversity of living organisms. While it is arguable that the Earth as a unit does not match the generally accepted biological criteria for life itself (Gaia has not yet reproduced, for instance), many scientists would be comfortable characterising the earth as a single "system".
The most extreme form of Gaia theory is that the entire Earth is a single unified organism; in this view the Earth's biosphere is consciously manipulating the climate in order to make conditions more conducive to life. Scientists contend that there is no evidence at all to support this last point of view, and it has come about because many people do not understand the concept of homeostasis. Many non-scientists instinctively see homeostasis as an activity that requires conscious control, although this is not so.
Much more speculative versions of Gaia theory, including all versions in which it is held that the Earth is actually conscious or part of some universe-wide evolution, are currently held to be outside the bounds of science.
This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Gaia". | <urn:uuid:7a3fa081-9c60-42a7-8ec4-1d8c386b4009> | 3.4375 | 794 | Knowledge Article | Science & Tech. | 23.657602 | 54 |
Giant Water Scavenger Beetle
|Geographical Range||North America|
|Scientific Name||Hydrophilus triangularis|
|Conservation Status||Not listed by IUCN|
The name says it all. This large beetle lives in water, where it scavenges vegetation and insect parts. The insect can store a supply of air within its silvery belly, much like a deep-sea diver stores air in a tank. | <urn:uuid:469863a4-9f80-47c2-ad04-ee7f0adecfd5> | 3.078125 | 91 | Knowledge Article | Science & Tech. | 34.880113 | 55 |
WAKING the GIANT Bill McGuire
While we transmit more than two million tweets a day and nearly one hundred trillion emails each year, we're also emitting record amounts of carbon dioxide (CO2). Bill McGuire, professor of geophysical and climate hazards at University College London, expects our continued rise in greenhouse gas emissions to awaken a slumbering giant: the Earth's crust. In Waking the Giant: How a Changing Climate Triggers Earthquakes, Tsunamis and Volcanoes (Oxford University Press), he explains that when the Earth's crust (or geosphere) becomes disrupted from rising temperatures and a CO2-rich atmosphere, natural disasters strike more frequently and with catastrophic force.
Applying a "straightforward presentation of what we know about how climate and the geosphere interact," the book links previous warming periods 20,000 to 5,000 years ago with a greater abundance of tsunamis, landslides, seismic activity and volcanic eruptions. McGuire urgently warns of the "tempestuous future of our own making" as we progressively inch toward a similar climate.
Despite his scientific testimony to Congress stating "what is going on in the Arctic now is the biggest and fastest thing that Nature has ever done" and the "incontrovertible" data that the Earth's climate draws lively response from the geosphere, brutal weather events are still not widely seen as being connected to human influence. Is our global population sleepwalking toward imminent destruction, he asks, until "it is obvious, even to the most entrenched denier, that our climate is being transformed?" | <urn:uuid:46ed79e4-97dd-492f-bf29-99304e01f4ee> | 3.046875 | 330 | Nonfiction Writing | Science & Tech. | 28.729356 | 56 |
SEA level rises and climate change are linked, say top scientists as they prepare the next major global climate change update.
More than 250 experts from 39 countries are in Hobart this week to review the latest draft of the Intergovernmental Panel on Climate Change's report including a new chapter on sea level.
The co-ordinating lead author on the new chapter, CSIRO's Dr John Church, said sea level is clearly linked to climate change.
"The sea level is rising, the rate of the rise has increased and will continue to increase," he said.
He said the rate had increased from a few tenths of a millimetre a year before the 20th century to more than 3mm a year in the past 20 years.
"It's clear the rate of sea level rise has already increased," he said.
"Whether that 3mm is a further acceleration or not is yet unclear but we do expect a further acceleration during the 21st century and it's clearly linked to greater levels of greenhouse gases."
He said thermal expansion because of ocean warming and the melting of glaciers were two key causes of sea level rise.
CSIRO's Dr Steve Rintoul, who is involved with the report's ocean observations chapter, said oceans were very important for climate because of the amount of heat they absorbed and stored.
He said the temperature of the ocean surface had increased by 0.3-0.5C over the past 50 years.
"There's no disputing the oceans are warming," he said.
"It's clear from the published literature that greenhouse gases as well as natural variability have contributed to this observed warming of the ocean."
He said oceans around Tasmania were changing, with recordings showing that temperatures around Maria Island have increased by 1.5C over the past 60 years.
"It's a very large number compared with other parts of the ocean," he said.
The Hobart conference is the last meeting before scientists prepare the final draft of the IPCC report's physical science section. The final report will be submitted in September. | <urn:uuid:4b0a824e-2759-4652-bb9b-cadd0312b57c> | 2.953125 | 420 | News Article | Science & Tech. | 55.436489 | 57 |
Sidereal time is the time it takes for celestial bodies to ascend and descend in the night sky. We know that celestial bodies are, in reality, essentially fixed in their positions; the reason for their dramatic movement in the night is the rotation of the Earth. This is the same reason why the Sun and the Moon seem to rise and set. For the longest time, this motion caused many philosophers and astronomers to assume that the Earth was the center of the Universe. Fortunately, later astronomers like Copernicus were able to discern the true movements of the Earth, Moon, and Sun, helping to explain their apparent motion. The time that it takes for a star, planet or other fixed celestial body to ascend and descend in the night sky is also called its sidereal period. This time corresponds to the time it takes for the Earth to rotate one revolution, which is just under 24 hours.
Sidereal time is not like solar time, which is measured by the movement of the Sun, or the lunar cycle, which takes about 28 days. It is, in essence, an hour angle: the angle between a reference point on the sky (the vernal equinox) and the observer's meridian. If these terms are confusing, here is what they mean. In cartography, the Earth is bisected by two major reference lines of latitude and longitude; these lines are the 0 degree points on the globe. The 0 degree line of latitude is the Equator, the line along which the Earth is perfectly bisected; it cuts through South America and Africa. The 0 degree line of longitude is the prime meridian, which passes through Greenwich, UK. The equinoxes are essentially the times of the year when the Sun rises and sets at the exact same points on the horizon at the equator; these are the only times the solar day is equally divided into 12 hours of day and 12 hours of night. The hour angle of the vernal equinox relative to the local meridian is what we call sidereal time. This angle changes with the rotation of the Earth, creating the pattern of ascension and descent for celestial bodies in the Earth's sky.
With the knowledge of sidereal time, astronomers can predict the positions of stars. The values for the sidereal time of celestial objects are compiled in a table or star chart called an ephemeris. With this guide to sidereal time, astronomers can find a celestial object regardless of the change in its position over the year.
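As a rough illustration, local sidereal time can be approximated in a few lines. The Python sketch below uses a widely quoted approximation, good to a few tenths of a degree; the constants belong to that approximation, not to any official ephemeris.

# Approximate local sidereal time in degrees (a common approximation,
# good to roughly 0.3 degrees; not a substitute for a proper ephemeris).
def local_sidereal_time(days_since_j2000, ut_hours, longitude_deg):
    # 100.46 is the sidereal time (degrees) at Greenwich at 0h UT on J2000,
    # 0.985647 degrees/day reflects the Earth's mean motion around the Sun,
    # and 15 degrees/hour converts universal time into degrees of rotation.
    lst = 100.46 + 0.985647 * days_since_j2000 + longitude_deg + 15.0 * ut_hours
    return lst % 360.0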
There are also some great resources on the net. The U.S. Naval Observatory has an online clock to help you find out the sidereal time in your area. There is also a great explanation in the astronomy section of the Cornell University site. | <urn:uuid:678e8811-82bd-4c27-af17-f540e64bc52a> | 3.75 | 564 | Knowledge Article | Science & Tech. | 54.148823 | 58 |
French and American scientists won the Nobel Prize in physics Tuesday for their work with light and matter, which may lead the way to superfast computers and "the most precise clocks ever seen," the prize committee said.
Serge Haroche of France and David Wineland of the United States will share the $1.2 million prize, the second of six Nobel Prizes announced this month.
The award surprised those who expected the physics Nobel this year to be related to the discovery of the Higgs boson, considered one of the top scientific achievements of the past 50 years.
Wineland and Haroche work in the field of quantum optics, approaching the same principles from opposite directions. The American uses light particles to measure the properties of matter, whereas his French colleague focuses on tracking light particles by using atoms.
Both Nobel laureates have found ways to isolate the subatomic particles and keep their properties intact at the same time, scientists at the Royal Swedish Academy of Sciences said in Stockholm, Sweden.
Usually when these particles interact with the outside world, the properties that scientists would like to directly observe disappear, leaving researchers to speculate about what is going on with them.
The two have found a way around this, making direct observation possible. "The new methods allow them to examine, control and count the particles," the academy said.
Haroche is a professor at the College de France and Ecole Normale Superieure in Paris, and Wineland is group leader and NIST Fellow at the National Institute of Standards and Technology and the University of Colorado Boulder.
Their work has some potential side benefits to future technology.
"Their ground-breaking methods have enabled this field of research to take the very first steps towards building a new type of super fast computer based on quantum physics," the academy said. | <urn:uuid:5652996e-b25f-42c5-9282-ece62c5dfdc1> | 2.59375 | 365 | News Article | Science & Tech. | 33.936457 | 59 |
Here's the way the NWS defines it:
Forecasts issued by the National Weather Service routinely include a "PoP" (probability of precipitation) statement, which is often expressed as the "chance of rain" or "chance of precipitation". Source: http://www.srh.noaa.gov/ffc/?n=pop
ZONE FORECASTS FOR NORTH AND CENTRAL GEORGIA
NATIONAL WEATHER SERVICE PEACHTREE CITY GA
119 PM EDT THU MAY 8 2008
INCLUDING THE CITIES OF...ATLANTA...CONYERS...DECATUR...
119 PM EDT THU MAY 8 2008
.THIS AFTERNOON...MOSTLY CLOUDY WITH A 40 PERCENT CHANCE OF
SHOWERS AND THUNDERSTORMS. WINDY. HIGHS IN THE LOWER 80S. NEAR
STEADY TEMPERATURE IN THE LOWER 80S. SOUTH WINDS 15 TO 25 MPH.
.TONIGHT...MOSTLY CLOUDY WITH A CHANCE OF SHOWERS AND
THUNDERSTORMS IN THE EVENING...THEN A SLIGHT CHANCE OF SHOWERS
AND THUNDERSTORMS AFTER MIDNIGHT. LOWS IN THE MID 60S. SOUTHWEST
WINDS 5 TO 15 MPH. CHANCE OF RAIN 40 PERCENT.
What does this "40 percent" mean? ...will it rain 40 percent of of the time? ...will it rain over 40 percent of the area?
The "Probability of Precipitation" (PoP) describes the chance of precipitation occurring at any point you select in the area.
How do forecasters arrive at this value?
Mathematically, PoP is defined as follows:
PoP = C x A where "C" = the confidence that precipitation will occur somewhere in the forecast area, and where "A" = the percent of the area that will receive measurable precipitation, if it occurs at all.
So... in the case of the forecast above, if the forecaster knows precipitation is sure to occur ( confidence is 100% ), he/she is expressing how much of the area will receive measurable rain. ( PoP = "C" x "A" or "1" times ".4" which equals .4 or 40%.)
But, most of the time, the forecaster is expressing a combination of degree of confidence and areal coverage. If the forecaster is only 50% sure that precipitation will occur, and expects that, if it does occur, it will produce measurable rain over about 80 percent of the area, the PoP (chance of rain) is 40%. ( PoP = .5 x .8 which equals .4 or 40%. )
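A minimal sketch of this formula (in Python; the function name is ours, not NWS code):

def pop(confidence, areal_coverage):
    # confidence: chance precipitation occurs somewhere in the area (0-1)
    # areal_coverage: fraction of the area receiving measurable rain, if any (0-1)
    return confidence * areal_coverage

print(pop(1.0, 0.4))  # certain rain, 40 percent coverage -> 0.4 (40 percent)
print(pop(0.5, 0.8))  # 50 percent confidence, 80 percent coverage -> 0.4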
In either event, the correct way to interpret the forecast is: there is a 40 percent chance that rain will occur at any given point in the area. | <urn:uuid:64f70112-bac2-48dc-87e7-d1404797fade> | 3.421875 | 616 | Comment Section | Science & Tech. | 73.397381 | 60 |
Australian Museum Marine Invertebrate Collections
The Marine Invertebrate collection contains specimens from all invertebrate groups except molluscs, insects and spiders.
Crustaceans are animals that have:
- a segmented body with a hardened shell
- seven or more pairs of appendages for feeding, moving and reproduction
- limbs which generally have two branches
- two pairs of antennae
- gills for breathing
Polychaetes are animals that typically have:
- a long, basically cylindrical body
- a body segmented both internally and externally
- a pair of leg-like appendages (not jointed) attached to every body segment
About the collection
The current focus of the collection is on polychaetes (segmented worms) and crustaceans (lobsters, crayfish, prawns, crabs, seed shrimps, barnacles, slaters and pill bugs) which reflects the research interests of the marine invertebrate staff.
The Marine Invertebrate collections contain registered specimens, microscope slides, Scanning Electron Microscope (SEM) preparations and photographic images. They include various marine invertebrates, and all other invertebrates except molluscs, insects and spiders, including freshwater and terrestrial representatives. The specimens contained in the collections are predominantly from New South Wales, Australia and the Indo-Pacific.
The type collection comprises more than 9000 type lots, including more than 2000 primary types (types are the original specimens on which the first description of a species is based).
In addition to the registered collections there are also additional unsorted and unidentified collections, categorised by various taxonomic levels.
Combined with the Australian Museum Research Library the section also houses one of the largest collections of books and journal reprints in Australia providing taxonomic information on many invertebrate groups. This resource is available for use by scientists, students and the public by appointment. The reprint collection is currently being entered onto a computerised bibliographic database.
Marine Invertebrate Collections - Overview of taxonomic groups held
Dr Stephen Keable, Collection Manager, Marine Invertebrates | <urn:uuid:35678d88-fc35-4da5-83b8-3bf331289ba4> | 2.921875 | 446 | About (Org.) | Science & Tech. | -8.108485 | 61 |
Report Highlights Declining Health of Caribbean Corals
7 September 2012: A new International Union for Conservation of Nature (IUCN) report highlights that average live coral cover on Caribbean reefs has declined to just 8% of the reef today, compared with more than 50% in the 1970s. The report stems from a workshop held by the Global Coral Reef Monitoring Network (GCRMN) at the Smithsonian Tropical Research Institute in the Republic of Panama, from 29 April-5 May 2012.
According to the report, rates of decline on most reefs show no signs of slowing. However, many reefs in the Netherlands Antilles and Cayman Islands have 30% or more live coral cover. The causes of these regional differences in reef conditions are not well understood, beyond the role of human exploitation and disturbance.
Carl Gustaf Lundin, Director, IUCN Global Marine and Polar Programme, notes that the major causes of coral decline include overfishing, pollution, disease, and bleaching caused by rising temperatures resulting from the burning of fossil fuels.
IUCN has recommended local action to improve the health of corals, including limits on fishing through catch quotas, an extension of marine protected areas (MPAs), a halt to nutrient runoff from land, and a reduction on the global reliance on fossil fuels. [IUCN Press Release] [Publication: Tropical Americas Coral Reef Resilience Workshop: Executive Summary] | <urn:uuid:263ac1bf-33fa-4b18-8d48-715c94ebc9dd> | 3.515625 | 287 | News (Org.) | Science & Tech. | 24.193182 | 62 |
Combined Gas Law
The Combined Gas Law combines Charles's Law, Boyle's Law and Gay-Lussac's Law. The Combined Gas Law states that for a fixed amount of gas, pressure x volume / temperature = constant.
Alright. In class you should have learned about the three different gas laws. The first one being Boyle's law, and it talks about the relationship between pressure and volume of a particular gas. The next one should be Charles's law, which talks about the volume and temperature of a particular gas. And the last one should be Gay-Lussac's law, which talks about the relationship between pressure and temperature of a particular gas. Okay. But what happens when you have pressure, volume and temperature all changing? Well, we're actually going to combine these gas laws to form one giant gas law called the combined gas law. Okay.
If you notice, in these three gas laws the pressure and volume are always in the numerator. So we're going to keep them in the numerator: p1v1. And notice the temperature is in the denominator, so: over t1. So all these things are just squished into one, and then p2v2 over t2. Okay. So this is what we're going to call the combined gas law. So let's actually get an example and do one together.
Alright, so I have a problem up here that says a gas at 110 kilopascals and 30 degrees Celsius fills a flexible container with an initial volume of two litres, okay? If the temperature is raised to 80 degrees Celsius and the pressure is raised to 440 kilopascals, what is the new volume? Okay. So notice we have three variables. We're talking about pressure, temperature and volume. Okay, so now we're going to employ this combined gas law dealing with all three of these variables. So we're going to look at our first number, 110 kilopascals, and that is the unit of pressure. So we know that's p1. Our p1 is 110 kilopascals, at 30 degrees Celsius. I don't like things in Celsius so I'm going to change this to kelvin. So I'm going to add 273 to that, which makes it 303 kelvin. That's our temperature. And my initial volume is two litres, so I'm going to say v1=2 litres. Okay, then I continue reading. If the temperature is raised to 80 degrees Celsius, again we want it in kelvin, so we're going to add 273, making it 353. So our t2 is 353 kelvin. And the pressure increased to 440 kilopascals, so the pressure p2 is equal to 440 kilopascals, which I'm very happy is also in kilopascals. I've got to make sure these units are the same, because pressure can be measured in several different units. And what is the new volume? So our v2 is our variable, what we're trying to find. Okay.
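Before walking through the arithmetic, here is a quick check of the same numbers (a Python sketch, not part of the original lesson):

p1, v1, t1 = 110.0, 2.0, 303.0   # kPa, litres, kelvin
p2, t2 = 440.0, 353.0            # kPa, kelvin

# Combined gas law: p1*v1/t1 = p2*v2/t2, solved for v2.
v2 = p1 * v1 * t2 / (t1 * p2)
print(round(v2, 2))              # 0.58 litres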
So let's basically plug all these variables into our combined gas law to figure out what the new volume would be. Okay. So I'm going to erase this and say our pressure one is 110 kilopascals. Our volume one is two litres. Our temperature one is 303 kelvin. Our pressure two is 440 kilopascals. We don't know our volume, so we're just going to say v2 over 353 kelvin. Okay. When I'm looking for a variable I'm going to cross multiply these guys. So I'm going to say 353 times 110 times 2, and that should give me 77660, if you put that in a calculator. So I just cross multiplied these guys. And I cross multiply these guys: 303 times 440 times v2 gives me 133320v2. Okay, so then I want to isolate my variable, so I'm going to divide both sides by 133320. And I find that my new volume is 0.58 litres. And that is how you do the combined gas law. | <urn:uuid:5f1963a4-8da7-4d73-9dda-3c8691608115> | 4.09375 | 873 | Tutorial | Science & Tech. | 76.50661 | 63 |
A compiler is a computer program that takes source code and either generates object code or translates code in one language into another language. When it translates into another language, that target language is usually itself compiled (into object code), interpreted, or even compiled again into yet another language. Object code can be run on your computer as a regular program. In the days when compute time cost thousands of dollars, compilation was done by hand; now compilation is usually done by a program.
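As a toy illustration, here is a sketch in Python of a tiny "compiler" that translates infix arithmetic into postfix instructions for a stack machine (the instruction format is invented for this example, not taken from any real compiler):

# Toy compiler: "3 + 4 * 2" -> postfix instructions -> stack-machine result.
import operator

def compile_expr(tokens):
    # Shunting-yard algorithm: infix tokens -> postfix instruction list.
    prec = {"+": 1, "-": 1, "*": 2, "/": 2}
    output, ops = [], []
    for tok in tokens:
        if tok in prec:
            while ops and prec[ops[-1]] >= prec[tok]:
                output.append(ops.pop())   # pop higher-precedence operators
            ops.append(tok)
        else:
            output.append(("PUSH", float(tok)))
    while ops:
        output.append(ops.pop())
    return output

def run(program):
    # Interpret the compiled stack-machine program.
    fns = {"+": operator.add, "-": operator.sub,
           "*": operator.mul, "/": operator.truediv}
    stack = []
    for instr in program:
        if isinstance(instr, tuple):       # ("PUSH", value)
            stack.append(instr[1])
        else:                              # operator: pop two, push result
            b, a = stack.pop(), stack.pop()
            stack.append(fns[instr](a, b))
    return stack.pop()

print(run(compile_expr("3 + 4 * 2".split())))  # 11.0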
| <urn:uuid:880d3bad-144c-4602-89ac-2eec0a853e79> | 3.40625 | 102 | Knowledge Article | Software Dev. | 25.430852 | 64 |
GloMax®-Multi Jr Method for DNA Quantitation Using Hoechst 33258
Quantitation of DNA is an important step for many practices in molecular biology. Common techniques that use DNA, such as sequencing, cDNA synthesis and cloning, RNA transcription, transfection, nucleic acid labeling (e.g., random prime labeling), etc., all benefit from a defined template concentration. Failure to produce results from these techniques sometimes can be attributed to an incorrect estimate of the DNA template used. The concentration of a nucleic acid most commonly is measured by UV absorbance at 260nm (A260). Absorbance methods are limited in sensitivity, however, due to a high level of background interference. | <urn:uuid:8cdb1656-8511-466e-b3f6-681a7cf80615> | 2.734375 | 149 | Knowledge Article | Science & Tech. | 28.8033 | 65 |
As a policy, Python doesn't run user-specified code on startup of Python programs. (Only interactive sessions execute the script specified in the PYTHONSTARTUP environment variable if it exists).
However, some programs or sites may find it convenient to allow users to have a standard customization file, which gets run when a program requests it. This module implements such a mechanism. A program that wishes to use the mechanism must execute the statement import user.
The user module looks for a file .pythonrc.py in the user's home directory and if it can be opened, executes it (using execfile()) in its own (the module user's) global namespace. Errors during this phase are not caught; that's up to the program that imports the user module, if it wishes. The home directory is assumed to be named by the HOME environment variable; if this is not set, the current directory is used.
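The lookup-and-execute behavior can be sketched roughly as follows (an approximation of the mechanism described above, not the module's actual source; execfile() is the Python 2 built-in):

# Rough approximation of what 'import user' does (Python 2 era, hence execfile).
import os

home = os.environ.get("HOME", os.curdir)      # fall back to the current directory
pythonrc = os.path.join(home, ".pythonrc.py")
try:
    f = open(pythonrc)
except IOError:
    pass                                      # no customization file: do nothing
else:
    f.close()
    execfile(pythonrc)                        # errors propagate to the importer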
The user's .pythonrc.py could conceivably test for sys.version if it wishes to do different things depending on the Python version.
A warning to users: be very conservative in what you place in your .pythonrc.py file. Since you don't know which programs will use it, changing the behavior of standard modules or functions is generally not a good idea.
A suggestion for programmers who wish to use this mechanism: a simple way to let users specify options for your package is to have them define variables in their .pythonrc.py file that you test in your module. For example, a module spam that has a verbosity level can look for a variable user.spam_verbose, as follows:

import user
verbose = bool(getattr(user, "spam_verbose", 0))
(The three-argument form of getattr() is used in case the user has not defined spam_verbose in their .pythonrc.py file.)
Programs with extensive customization needs are better off reading a program-specific customization file.
Programs with security or privacy concerns should not import this module; a user can easily break into a program by placing arbitrary code in the .pythonrc.py file.
Modules for general use should not import this module; it may interfere with the operation of the importing program. | <urn:uuid:0d7a934e-59a5-4304-9f2f-f2ae12c2f02f> | 2.796875 | 469 | Documentation | Software Dev. | 46.007379 | 66 |
.NET Type Design Guidelines
|This tutorial—.NET Type Design Guidelines—is from Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries, by Krzysztof Cwalina, Brad Abrams. Copyright © 2006 Microsoft Corp.. All rights reserved. This article is reproduced by permission. This tutorial has been edited especially for C# Online.NET. Read the book review!|
(This article was written and annotated by members of the Microsoft Common Language Runtime (CLR) and .NET teams and other experts.)
Type Design Guidelines in .NET
From the CLR perspective, there are only two categories of types—reference types and value types—but for the purpose of framework design discussion we divide types into more logical groups, each with its own specific design rules. Figure 4-1 shows these logical groups.
Classes are the general case of reference types. They make up the bulk of types in the majority of frameworks. Classes owe their popularity to the rich set of object-oriented features they support and to their general applicability. Base classes and abstract classes are special logical groups related to extensibility. Extensibility and base classes are covered in Chapter 6.
Interfaces are types that can be implemented both by reference types and value types. This allows them to serve as roots of polymorphic hierarchies of reference types and value types. In addition, interfaces can be used to simulate multiple inheritance, which is not natively supported by the CLR.
Structs are the general case of value types and should be reserved for small, simple types, similar to language primitives.
Enums are a special case of value types used to define short sets of values, such as days of the week, console colors, and so on.
Static classes are types intended as containers for static members. They are commonly used to provide shortcuts to other operations.
Delegates, exceptions, attributes, arrays, and collections are all special cases of reference types intended for specific uses, and guidelines for their design and usage are discussed elsewhere in this book.
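To make the taxonomy concrete, here is a small illustrative C# sketch (hypothetical type names, not examples from the book), with one member of each logical group:

// One illustrative member of each logical group (hypothetical names).
public class Order { /* general-purpose reference type */ }

public abstract class OrderBase { /* base class: an extensibility root */ }

public interface IShippable { void Ship(); }    // implementable by classes and structs

public struct Money { public decimal Amount; }  // small, simple value type

public enum Weekday { Monday, Tuesday, Wednesday, Thursday, Friday }

public static class OrderHelper                 // container for static members
{
    public static bool Validate(Order order) { return order != null; }
}

public delegate void OrderShipped(Order order); // special-case reference type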
- DO ensure that each type is a well-defined set of related members, not just a random collection of unrelated functionality.
- It is important that a type can be described in one simple sentence. A good definition should also rule out functionality that is only tangentially related.
|If you have ever managed a team of people you know that they don't do well without a crisp set of responsibilities. Well, types work the same way. I have noticed that types without a firm and focused scope tend to be magnets for more random functionality, which, over time, make a small problem a lot worse. It becomes more difficult to justify why the next member with even more random functionality does not belong in the type. As the focus of the members in a type blurs, the developer's ability to predict where to find a given functionality is impaired, and therefore so is productivity.|
|Good types are like good diagrams: What has been omitted is as important to clarity and usability as what has been included. Every additional member you add to a type starts at a net negative value and only by proven usefulness does it go from there to positive. If you add too much in an attempt to make the type more useful to some, you are just as likely to make the type useless to everyone.|
| When I was learning OOP back in the early 1980s, I was taught a mantra that I still honor today: If things get too complicated, make more types. Sometimes, I find that I am thinking really hard trying to define a good set of methods for a type. When I start to feel that I'm spending too much time on this or when things just don't seem to fit together well, I remember my mantra and I define more, smaller types where each type has well-defined functionality. This has worked extremely well for me over the years. On the flip side, sometimes types do end up being dumping grounds for various loosely related functions. The .NET Framework offers several types like this, such as | | <urn:uuid:6c35af72-3e52-40ad-bf2e-d5f5676c535e> | 3 | 868 | Documentation | Software Dev. | 42.544528 | 67 |
Benefits of XQuery
The principal benefits of XQuery are:
- Expressiveness - XQuery can query many different data structures and its recursive nature makes it ideal for querying tree and graph structures
- Brevity - XQuery statements are shorter than similar SQL or XSLT programs
- Flexibility - XQuery can query both hierarchical and tabular data
- Consistency - XQuery has a consistent syntax and can be used with other XML standards such as XML Schema datatypes
XQuery is frequently compared with two other languages, SQL and XSLT, but has a number of advantages over these.
Advantages over SQL
Unlike SQL, XQuery returns not just tables but arbitrary tree structures. This allows XQuery to directly create XHTML structures that can be used in web pages. XQuery is for XML-based object databases, and object databases are much more flexible and powerful than databases which store in purely tabular format.
Unlike XSLT, XQuery can be learned by anyone familiar with SQL. Many of the constructs are very similar, such as:
- Ordering Results: Both XQuery and SQL add an order by clause to the query
- Selecting Distinct Values: Both XQuery and SQL have easy ways to select distinct values from a result set
- Restricting Rows: Both XQuery and SQL have a WHERE X=Y clause that can be added to a query (see the sketch after this list)
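For example, the parallels above can be seen in a short query sketch (against a hypothetical books.xml document, not a standard one):

let $books := doc("books.xml")//book
return (
  distinct-values($books/author),   (: distinct values, as in SQL DISTINCT :)
  for $b in $books
  where $b/price > 20               (: restrict rows :)
  order by $b/title                 (: order results :)
  return $b/title
)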
Another big advantage is that XQuery is essentially the native query language of the World Wide Web. One can query actual web pages with XQuery, but not SQL. Even if one uses SQL-based databases to store HTML/XHTML pages or fragments of such pages, one will miss many of the advantages of XQuery's simple tag/attribute search (which is akin to searching for column names within column names).
Advantages over XSLT
Unlike XSLT, XQuery can be quickly learned by anyone familiar with SQL. XSLT has many patterns that are unfamiliar to many procedural software developers. Also, whereas XSLT works well as a static means of converting one type of document to another, for example RSS to HTML, XQuery is a much more dynamic querying tool, useful for pulling out sections of data from large documents and/or a large number of documents.
The Debate about XQuery vs. XSLT for Document Transformation
There has been a debate of sorts about the merits of the two languages for transforming XML: XSLT and XQuery. A common misconception is that "XQuery is best for querying or selecting XML, and XSLT is best for transforming it." In reality, both methods are capable of transforming XML. Despite XSLT's longer history and larger install base, the "XQuery typeswitch" method of transforming XML provides numerous advantages.
Most people who need to transform XML hear that they need to learn a language called XSLT. XSLT, whose first version was published by the W3C in 1999, was a huge innovation for its time and, indeed, remains dominant. It was one of the very first languages dedicated to transforming XML documents, and it was the first domain-specific language (DSL) to use advanced theories from the world of functional programming to create very reliable, side-effect free transformations. Many XML developers still feel strongly indebted to this groundbreaking language, since it helped them see a new model of software development: one focused around the transformation of models, empowering them to fuse both the requirements and documentation of a transformation routine into a single, modular program.
On the other hand, learning XSLT requires overcoming a very substantial learning curve. XSLT's difficulty is due, in part, to one of the key design decisions by its architects: to express the transformation rules using XML itself, rather than creating a brand new syntax and grammar for storing the transformation rules. XSLT's unique approach to transformation rules also contributes to the steepness of the learning curve. The learning curve can be overcome, but it is fair to say that this learning curve has created an opening for an alternative approach.
XQuery has filled this demand for an alternative among a growing community of users: they find XQuery has a lower learning curve, it meets their needs for transforming XML, and, together with XQuery's other advantages, it has become a compelling "all-in-one" language. Like XSLT, XQuery was created by the W3C to handle XML. But instead of expressing the language in XML syntax, the architects of XQuery chose a new syntax that would be more familiar to users of server-side scripting languages such as PHP, Perl, or Python. XQuery was also designed to feel familiar to users of relational database query languages such as SQL, while still remaining true to functional programming practices. Despite its relative youth (XQuery 1.0 was only released in 2007, when XSLT had already reached its version 2.0), XQuery was born remarkably mature. XML servers like eXist-db and MarkLogic were already using XQuery as their language for querying XML and performing web server operations (obviating the need for learning PHP, Perl, or Python).
So, in the face of the XSLT community's contention that "XSLT is best for transforming documents and XQuery is best for querying databases", this community of users was surprised to find that XQuery has entirely replaced their need for XSLT. They have come to argue unabashedly that they prefer XQuery for this purpose.
How does XQuery accomplish the task of transforming XML? The primary technique in XQuery for transforming XML is a little-known expression added by the authors of XQuery, called "typeswitch." Although it is quite simple, typeswitch enables XQuery to perform nearly the full set of transformations that XSLT does. A typeswitch expression quickly looks at a node's type, and depending on the node's type, performs the operation you specify for that type of node. What this means is that each distinct element of a document can have its own rule, and these rules can be stored in modular XQuery functions. This humble addition to the XQuery language allows developers to transform documents with complex content and unpredictable order - something commonly believed to be best reserved for the domain of XSLT. Despite the differences in syntax and approach to transformation, a growing community has actually come to see the XQuery typeswitch expression as a valid, even superior, way to store their document transformation logic.
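The shape of such a typeswitch-based transform can be sketched as follows (hypothetical element names such as title and para; a minimal illustration, not a canonical implementation):

(: Recursively rewrite a tree: title -> h1, para -> p, copy everything else. :)
declare function local:transform($nodes as node()*) as item()* {
  for $node in $nodes
  return typeswitch ($node)
    case element(title) return <h1>{ local:transform($node/node()) }</h1>
    case element(para)  return <p>{ local:transform($node/node()) }</p>
    case text()         return $node
    case element()      return element { node-name($node) }
                               { $node/@*, local:transform($node/node()) }
    default             return ()
};
(: Apply with, e.g., local:transform(doc("input.xml")/node()) :)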
By structuring a set of XQuery functions around the typeswitch expression, you can achieve the same result as XSLT-style transforms while retaining the benefits of XQuery: ease of learning and integration with native XML databases. Even more important for those users of native XML databases, the availability of typeswitch means that they only need to learn a single language for their database queries, web server operations, and document transformations. These XQuery typeswitch routines have proved easy to build, test, and maintain - some believe easier than XSLT. XQuery typeswitch has given these users a high degree of agility, allowing them to master XQuery fully rather than splitting their time and attention between XQuery and XSLT.
That said, there is still a large body of legacy XSLT transforms that work well, and there are XSLT developers who see little benefit from transitioning to a typeswitch-style XQuery. Both are valid approaches to document transformation. A natural tension has arisen between the proponents of XQuery typeswitch and XSLT, each promoting what they are most comfortable with and believe to be superior. In practice you might be best served by trying both techniques and determining what style is right for you and your organization. Without presuming a background or interest in XSLT, this article and its companion article help you to understand the key patterns for using XQuery typeswitch for your XML transformation needs. | <urn:uuid:4e5da1e0-fbf3-4bf9-bfb9-40a6d6a93da8> | 2.75 | 1,630 | Knowledge Article | Software Dev. | 38.045051 | 68 |
In statistics, a confidence region is a multi-dimensional generalization of a confidence interval. It is a set of points in an n-dimensional space, often represented as an ellipsoid around a point which is an estimated solution to a problem, although other shapes can occur.
The confidence region is calculated in such a way that if a set of measurements were repeated many times and a confidence region calculated in the same way on each set of measurements, then a certain percentage of the time on average (e.g. 95%) the confidence region would include the point representing the "true" values of the set of variables being estimated. However, when one confidence region has been calculated, it does not mean that there is a 95% probability that the "true" values lie inside the region, since we do not assume any particular probability distribution of the "true" values and we may or may not have other information about where they are likely to lie. Certain assumptions about prior probabilities would be needed to make that claim.
The case of independent, identically normally-distributed errors
Suppose we have found a solution β̂ to the following overdetermined problem:

Y = Xβ + ε

where Y is an n-dimensional column vector containing observed values, X is an n-by-p matrix which can represent a physical model and which is assumed to be known exactly, β is a column vector containing the p parameters which are to be estimated, and ε is an n-dimensional column vector of errors which are assumed to be independently distributed with normal distributions with zero mean and each having the same unknown variance σ2.
A joint 100(1 − α)% confidence region for the elements of β is represented by the set of values of the vector b which satisfy the following inequality:

(β̂ − b)ᵀ XᵀX (β̂ − b) ≤ p s2 F1−α(p, n − p)

where the variable b represents any point in the confidence region, p is the number of parameters, i.e. the number of elements of the vector β, F1−α(p, n − p) is the 1 − α quantile of the F-distribution with p and n − p degrees of freedom, and s2 is an unbiased estimate of σ2 equal to

s2 = (Y − Xβ̂)ᵀ(Y − Xβ̂) / (n − p)
The above inequality defines an ellipsoidal region in the p-dimensional Cartesian parameter space Rp. The centre of the ellipsoid is at the solution β̂. According to Press et al., it is easier to plot the ellipsoid after doing singular value decomposition. The lengths of the axes of the ellipsoid are proportional to the reciprocals of the values on the diagonal of the diagonal matrix, and the directions of these axes are given by the rows of the third matrix of the decomposition.
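As a rough numerical illustration (a Python sketch assuming numpy and scipy; the function is ours, not from the references cited), a candidate vector b can be tested directly against the inequality above:

import numpy as np
from scipy import stats

def in_confidence_region(X, Y, b, alpha=0.05):
    # Test whether b lies in the joint 100*(1-alpha)% confidence region.
    n, p = X.shape
    beta_hat, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta_hat
    s2 = resid @ resid / (n - p)          # unbiased estimate of sigma^2
    d = beta_hat - b
    lhs = d @ (X.T @ X) @ d
    rhs = p * s2 * stats.f.ppf(1 - alpha, p, n - p)
    return lhs <= rhs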
Weighted and generalised least squares
Now let us consider the more general case where some distinct elements of ε have known nonzero covariance (in other words, the errors in the observations are not independently distributed), and/or the standard deviations of the errors are not all equal. Suppose the covariance matrix of ε is σ2V, where V is an n-by-n nonsingular matrix which was equal to the identity matrix I in the more specific case handled in the previous section, but here is allowed to have nonzero off-diagonal elements representing the covariance of pairs of individual observations, as well as not necessarily having all the diagonal elements equal.
It is possible to find a nonsingular symmetric matrix P such that

PᵀP = PP = V

In effect, P is a square root of the covariance matrix V.
The least-squares problem

Y = Xβ + ε

can then be transformed by left-multiplying each term by the inverse of P, forming the new problem formulation

P⁻¹Y = P⁻¹Xβ + P⁻¹ε

A joint confidence region for the parameters, i.e. for the elements of β, is then bounded by the ellipsoid given by:

(β̂ − b)ᵀ Xᵀ(P⁻¹)ᵀP⁻¹X (β̂ − b) ≤ p s2 F1−α(p, n − p)

where s2 is now the unbiased variance estimate computed from the transformed residuals P⁻¹(Y − Xβ̂).
Nonlinear problems
Confidence regions can be defined for any probability distribution. The experimenter can choose the significance level and the shape of the region, and then the size of the region is determined by the probability distribution. A natural choice is to use as a boundary a set of points with constant χ2 (chi-squared) values.
One approach is to use a linear approximation to the nonlinear model, which may be a close approximation in the vicinity of the solution, and then apply the analysis for a linear problem to find an approximate confidence region. This may be a reasonable approach if the confidence region is not very large and the second derivatives of the model are also not very large.
See Uncertainty Quantification#Methodologies for forward uncertainty propagation for related concepts.
See also
- Draper and Smith (1981, p. 94)
- Draper and Smith (1981, p. 108)
- Draper and Smith (1981, p. 109)
- Draper, N.R.; H. Smith (1981) . Applied Regression Analysis (2nd ed.). USA: John Wiley and Sons Ltd. ISBN 0-471-02995-5.
- Press, W.H.; S.A. Teukolsky, W.T. Vetterling, B.P. Flannery (1992) . Numerical Recipes in C: The Art of Scientific Computing (2nd ed.). Cambridge UK: Cambridge University Press. | <urn:uuid:e4fbedec-4c46-43aa-b850-373a3030b4cf> | 3.59375 | 1,054 | Knowledge Article | Science & Tech. | 42.304445 | 69 |
Real form (Lie theory)
In mathematics, the notion of a real form relates objects defined over the field of real and complex numbers. A real Lie algebra g0 is called a real form of a complex Lie algebra g if g is the complexification of g0:

g ≅ g0 ⊗R C
Real forms for Lie groups and algebraic groups
Using the Lie correspondence between Lie groups and Lie algebras, the notion of a real form can be defined for Lie groups. In the case of linear algebraic groups, the notions of complexification and real form have a natural description in the language of algebraic geometry.
Just as complex semisimple Lie algebras are classified by Dynkin diagrams, the real forms of a semisimple Lie algebra are classified by Satake diagrams, which are obtained from the Dynkin diagram of the complex form by labeling some vertices black (filled), and connecting some other vertices in pairs by arrows, according to certain rules.
It is a basic fact in the structure theory of complex semisimple Lie algebras that every such algebra has two special real forms: one is the compact real form and corresponds to a compact Lie group under the Lie correspondence (its Satake diagram has all vertices blackened), and the other is the split real form and corresponds to a Lie group that is as far as possible from being compact (its Satake diagram has no vertices blackened and no arrows). In the case of the complex special linear group SL(n,C), the compact real form is the special unitary group SU(n) and the split real form is the real special linear group SL(n,R). The classification of real forms of semisimple Lie algebras was accomplished by Élie Cartan in the context of Riemannian symmetric spaces. In general, there may be more than two real forms.
Suppose that g0 is a semisimple Lie algebra over the field of real numbers. By Cartan's criterion, the Killing form is nondegenerate, and can be diagonalized in a suitable basis with the diagonal entries +1 or -1. By Sylvester's law of inertia, the number of positive entries, or the positive index of inertia, is an invariant of the bilinear form, i.e. it does not depend on the choice of the diagonalizing basis. This is a number between 0 and the dimension of g which is an important invariant of the real Lie algebra, called its index.
Split real form
A real form g0 of a complex semisimple Lie algebra g is said to be split, or normal, if in each Cartan decomposition g0 = k0 ⊕ p0, the space p0 contains a maximal Abelian subalgebra of g0, i.e. its Cartan subalgebra. Élie Cartan proved that every complex semisimple Lie algebra g has a split real form, which is unique up to isomorphism. It has maximal index among all real forms.
The split form corresponds to the Satake diagram with no vertices blackened and no arrows.
Compact real form
A real Lie algebra g0 is called compact if the Killing form is negative definite, i.e. the index of g0 is zero. In this case g0 = k0 is a compact Lie algebra. It is known that under the Lie correspondence, compact Lie algebras correspond to compact Lie groups.
The compact form corresponds to the Satake diagram with all vertices blackened.
Construction of the compact real form
In general, the construction of the compact real form uses structure theory of semisimple Lie algebras. For classical Lie algebras there is a more explicit construction.
Let g0 be a real Lie algebra of matrices over R that is closed under the transpose map,

$$X \mapsto X^\mathsf{T}.$$

Then g0 decomposes into the direct sum of its skew-symmetric part k0 and its symmetric part p0, giving the Cartan decomposition g0 = k0 ⊕ p0. The complexification g of g0 decomposes into the direct sum of g0 and ig0. The real vector space of matrices

$$\mathfrak{u}_0 = \mathfrak{k}_0 \oplus i\,\mathfrak{p}_0$$

is a subspace of the complex Lie algebra g that is closed under the commutators and consists of skew-hermitian matrices. It follows that u0 is a real Lie subalgebra of g, that its Killing form is negative definite (making it a compact Lie algebra), and that the complexification of u0 is g. Therefore, u0 is a compact form of g.
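As a concrete instance of this construction (a standard example, not spelled out in the article): starting from g0 = sl(n,R), which is closed under transpose, one recovers the compact form su(n):

```latex
% k_0 = skew-symmetric part, p_0 = symmetric (traceless) part of sl(n,R)
\mathfrak{k}_0 = \mathfrak{so}(n), \qquad
\mathfrak{p}_0 = \{\, X \in \mathfrak{sl}(n,\mathbb{R}) : X^{\mathsf T} = X \,\},
\\[4pt]
\mathfrak{u}_0 \;=\; \mathfrak{k}_0 \oplus i\,\mathfrak{p}_0
  \;=\; \{\, Z \in \mathfrak{sl}(n,\mathbb{C}) : Z^{*} = -Z \,\}
  \;=\; \mathfrak{su}(n).
```

The skew-hermitian traceless matrices are exactly the Lie algebra of SU(n), matching the statement above that the compact real form of sl(n,C) is su(n).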
Notes
- Helgason 1978, p. 426 | <urn:uuid:307c1388-d49d-46d7-a722-4f52f24df709> | 2.875 | 938 | Knowledge Article | Science & Tech. | 52.949664 | 70 |
Molecular Biology and Genetics
Statistics of barcoding coverage
| Specimen Records: | 10 | Public Records: | 0 |
| Specimens with Sequences: | 8 | Public Species: | 0 |
| Specimens with Barcodes: | 0 | Public BINs: | 0 |
| Species With Barcodes: | 0 | | |
The Geastrales are an order of gasterocarpic basidiomycetes (fungi) related to the Cantharellales. The order contains the single family Geastraceae, commonly known as "earthstars". It includes the genera Geastrum and Myriostoma. About 64 species are classified in this family, divided among eight genera. Older classifications place this family in the order Lycoperdales; more recently it had been placed in the Phallales. As of 2010, the family is classified as the sole taxon in the order Geastrales.
One member of the Geastraceae, Sphaerobolus stellatus—a nuisance organism in landscapes known as "shotgun fungus" or "cannonball fungus"—colonizes wood-based mulches and may throw black, spore-containing globs onto nearby painted surfaces.
The fruit bodies of several earthstars are hygroscopic: in dry weather the "petals" will dry and curl up around the soft spore sac, protecting it. In this state, often the whole fungus becomes detached from the ground and may roll around as a tumbleweed does. When the weather dampens, the "petals" moisten and uncurl and some even curl backward lifting the spore sac up. This then allows rain or animal movement to hit the spore sac so it will puff out spores when enough moisture is present for them to germinate.
- Hosaka K, Bates ST, Beever RE, Castellano MA, Colgan W 3rd, Domínguez LS, Nouhra ER, Geml J, Giachini AJ, Kenney SR, Simpson NB, Spatafora JW, Trappe JM. (2006). "Molecular phylogenetics of the gomphoid-phalloid fungi with an establishment of the new subclass Phallomycetidae and two new orders". Mycologia 98 (6): 949–59. doi:10.3852/mycologia.98.6.949. PMID 17486971.
- Corda ACJ. (1842). Icones fungorum hucusque cognitorum (in Latin) 5. Prague: J.G. Calve. pp. 1–92 (see p. 25).
- Kirk et al. (2008), p. 648.
- Kirk PM, Cannon PF, David JC, Stalpers JA. (2001). Ainsworth & Bisby's Dictionary of the Fungi (9th ed.). Oxon, UK: CABI Bioscience. p. 205. ISBN 0-85199-377-X.
- Kirk PM, Cannon PF, Minter DW, Stalpers JA. (2008). Dictionary of the Fungi (10th ed.). Wallingford, UK: CAB International. p. 274. ISBN 9780851998268.
| <urn:uuid:23dbc58d-5aa3-4282-819f-daadba5a08c9> | 2.796875 | 725 | Knowledge Article | Science & Tech. | 65.125014 | 71 |
User-contributed note (jasen at treshna dot com): There are several other differences, including different meanings for the symbols ( and [, and different rules for which symbols need escaping (they cannot be the same as in both standard POSIX and extended POSIX). You should read the full documentation for PCRE before changing any POSIX regex to use PCRE.
Differences from POSIX regex
As of PHP 5.3.0, the POSIX Regex extension is deprecated. There are a number of differences between POSIX regex and PCRE regex. This page lists the most notable ones that are necessary to know when converting to PCRE.
- The PCRE functions require that the pattern is enclosed by delimiters.
- Unlike POSIX, the PCRE extension does not have dedicated functions for case-insensitive matching. Instead, this is supported using the i (PCRE_CASELESS) pattern modifier. Other pattern modifiers are also available for changing the matching strategy.
- The POSIX functions find the longest of the leftmost match, but PCRE stops on the first valid match. If the string doesn't match at all it makes no difference, but if it matches it may have dramatic effects on both the resulting match and the matching speed. To illustrate this difference, consider the following example from "Mastering Regular Expressions" by Jeffrey Friedl. Using the pattern one(self)?(selfsufficient)? on the string oneselfsufficient with PCRE will result in matching oneself, but using POSIX the result will be the full string oneselfsufficient. Both (sub)strings match the original string, but POSIX requires that the longest be the result.
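To make the first-match behaviour concrete, here is a quick check using Python's re module, whose semantics follow PCRE on this point (my own illustration, not part of the manual page):

```python
import re

# A PCRE-style engine reports the first match its backtracking search finds:
# (self)? greedily matches "self", then (selfsufficient)? matches the empty string.
m = re.match(r"one(self)?(selfsufficient)?", "oneselfsufficient")
print(m.group(0))  # -> "oneself"

# A POSIX leftmost-longest engine must instead report the longest match,
# which here is the entire string "oneselfsufficient".
```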
| <urn:uuid:31708733-dbee-47f1-88c5-054e01a19ca2> | 2.5625 | 362 | Documentation | Software Dev. | 45.265829 | 72 |
Playing with Equations to Solve Problems
Date: 09/16/2003 at 10:48:13
From: Mara
Subject: To state the geometric property of an equation

I need to give the geometric property common to all lines in the family x - ky = 1. I know that the answer is that all lines in this family have an x-intercept at x = 1, but I am totally clueless about showing why this is the case. At first I thought that using the double-intercept equation (x/a) + (y/b) = 1 would work, but I couldn't get it in the correct form. Then I tried to solve for x and y and got x = 1 + ky and y = (-1/k) + (-x/k), but now I do not know what to do with this. So I was wondering if you knew how to go about solving this?
Date: 09/16/2003 at 12:28:52
From: Doctor Peterson
Subject: Re: To state the geometric property of an equation

Hi, Mara.

I don't think there is any method you can use to solve this sort of problem without a lot of thinking and testing. Let's see how I personally would approach it (as well as I can construct it, considering that I know the answer already!). Then I'll look at some alternative approaches you might take.

We have x - ky = 1 and we want to know what property all these lines have in common. Probably, since this is an open-ended question and I don't expect it to be straightforward, I would start by just "playing" with the equation, getting a feel for how it works by trying a few special cases. I might take k = 0, 1, and -1 and graph those three lines,

  x = 1
  x - y = 1
  x + y = 1

I would find that they all intersect at (1,0), and my answer would be that all the lines seem to contain that point. (In a sense this is a more purely "geometric" property than the x-intercept, since it does not refer to the coordinates.)

Then I would want to prove that this is true for ALL k, to make sure I hadn't fooled myself by choosing three cases that happened to intersect. Thinking of this as a point shared by all lines in the family, I would prove it by simply substituting x=1, y=0 in the general equation:

  x - ky = 1
  1 - k*0 = 1 is true for all k

so (1,0) is indeed on all the lines, not just the three I tried.

Or, I might think of it as a common x-intercept, as you said; then I would do what you suggested and put the equation into two-intercept form

  x/a + y/b = 1

That's easy; all it takes is to interpret -ky as y divided by -1/k:

  x/1 + y/(-1/k) = 1

So the x-intercept is 1 for all these lines, and the y-intercept is -1/k.

So my approach is to experiment (the more adult word for "play"!) and make a conjecture (the more adult word for "guess"), and then prove that conjecture.

Now, is there any other way you might approach this? If you were really smart (and I might possibly have done this if I were faced with the equation afresh), you could just see that the equation looks like the two-intercept form, and gone directly to the proof. If you could do that, fine; but you can't depend on such insight!

You might instead just go through each form of the equation, starting probably with slope-intercept, and see whether any important feature (such as the slope or y-intercept) is constant. When that failed, it would be hard to move on to the point-slope or two-point form, because you would have to choose the point(s), and there is no obvious basis for that choice. So you would probably next try the two-intercept form (which many students never see, so you're lucky).

Your approach came close. When you solved for x, you just had to look and see that the x-intercept (the constant in that form) is always 1. But since that form, the slope-x-intercept form, is little-known, it's not surprising that you did not know what to do with it. But I really think that my approach is the most reasonable hope to find the answer quickly.

If you have any further questions, feel free to write back.

- Doctor Peterson, The Math Forum
http://mathforum.org/dr.math/
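A quick symbolic check of the substitution step above (my addition, not part of the original exchange):

```python
import sympy as sp

x, y, k = sp.symbols("x y k")
family = sp.Eq(x - k*y, 1)

# Substituting the candidate common point (1, 0) leaves no k dependence,
# so every line in the family passes through it.
print(family.subs({x: 1, y: 0}))  # -> True
```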
| <urn:uuid:69dca799-4921-4967-8dea-92828ca59f08> | 2.546875 | 1006 | Comment Section | Science & Tech. | 75.561952 | 73 |
Problem: An oil drilling rig located 14 miles off of a straight coastline is to be connected by a pipeline to a refinery 10 miles down the coast from the point directly opposite the drilling rig. Laying pipe under water costs 500,000 dollars per mile. Laying pipe on land costs 300,000 dollars per mile. What combination of underwater and land-based pipe will minimize the total cost of the pipeline?
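For reference, the picture is a right triangle: the rig sits 14 miles offshore from a point A on the coast, and the refinery is 10 miles down the coast from A. A minimal numerical sketch of the cost function (the landing-point variable x and the grid search are my own setup, not part of the problem):

```python
import numpy as np

# Pipe runs underwater from the rig to a landing point x miles down the coast
# from A, then overland the remaining (10 - x) miles to the refinery.
def cost(x):
    underwater = 500_000 * np.sqrt(14**2 + x**2)
    on_land = 300_000 * (10 - x)
    return underwater + on_land

xs = np.linspace(0, 10, 100_001)
best = xs[np.argmin(cost(xs))]
print(best, cost(best))
# Setting the derivative to zero gives x = 10.5, which lies outside [0, 10],
# so the constrained minimum sits at x = 10: run the whole line underwater
# (about $8.6 million).
```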
For now I'd just like help drawing the picture because in my eyes this is very poorly written. So can someone help me out with a picture for this? | <urn:uuid:7821765f-cc32-4d66-bc92-9a0826f8b290> | 2.625 | 116 | Q&A Forum | Science & Tech. | 63.509412 | 74 |
The Physics Help Forum not working today, at least not from my ISP, so this goes here. It's basically a math deal anyway:
The formula to calculate the force of a point of mass, let's call them planets, that results from its being gravitationally attracted by another point of mass is Newton's:

$$F = \frac{G\,m_1 m_2}{d^2}$$

Where $F$ is the force of the planet that results from the gravitational attraction exerted upon it by the other planet, $G$ is Newton's gravity constant, $m_1$ and $m_2$ are the respective masses of the planets, and $d$ is the distance between them.
For simplicity's sake let's say all the planets considered are of the same mass, so we can write $m^2$ instead of $m_1 m_2$.
Now, if I'm not mistaken, the formula for calculating the force of a planet resulting from the gravitational attraction of more than two planets is:

$$F_j = \sum_{k \neq j} \frac{G\,m^2}{d_{jk}^2}$$

Where $F_j$ is the force on the jth planet resulting from the gravitational attraction of the other planets, and $d_{jk}$ is the distance between the jth planet and the kth planet.
My question is "where is the vector addition?" That is, when considering the force on one planet that results from the gravitational attraction of many other planets, we have to take into account not only the distance of the other planets from planet j but also their position with respect to it (right?).
Take for example the simple case of three planets in the same plane. Planet j is at the origin. Planet k is one unit to the right of j on the x axis, while planet l is one unit up the y axis. If the masses all equal 1, then, by the formula above, the force on planet j would be:

$$F_j = \frac{G}{1^2} + \frac{G}{1^2} = 2G$$
But $F_j$ is a function of both the distance and the position, right? So we must consider not only the Gravitational Forces individually exerted upon j by k and l, but also the angle at which these forces are exerted. That is, we must add the vectors. To add vectors you just plug in the x value and y value sums of the added vectors into pythagoras' formula. The force on planet j should therefore be:

$$F_j = \sqrt{\left(\sum_x \frac{Gm^2}{d_{jx}^2}\cos\theta_x\right)^2 + \left(\sum_x \frac{Gm^2}{d_{jx}^2}\sin\theta_x\right)^2} = \sqrt{G^2 + G^2} = \sqrt{2}\,G$$

(Where $\theta_x$ is the angle subtended by a line drawn from planet j to planet x, i.e. $\theta_k = 0$ and $\theta_l = \pi/2$.)
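A quick numerical check of the three-planet example (my own sketch; units chosen so G = m = 1):

```python
import numpy as np

G, m = 1.0, 1.0
j = np.array([0.0, 0.0])              # planet j at the origin
neighbors = [np.array([1.0, 0.0]),    # planet k, one unit along x
             np.array([0.0, 1.0])]    # planet l, one unit along y

# Add the force *vectors*: each has magnitude G m^2 / d^2 and points from j
# toward the attracting planet (unit direction r / d).
F = np.zeros(2)
for p in neighbors:
    r = p - j
    d = np.linalg.norm(r)
    F += G * m * m * r / d**3

print(np.linalg.norm(F))  # -> 1.4142..., i.e. sqrt(2)*G, not the 2G scalar sum
```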
So, what am I missing here? I am fully aware that I, and not Newton, am missing something here. Someone please help point this out for me. | <urn:uuid:7d99e7e1-4e2a-4168-989f-9de25f473394> | 3.53125 | 480 | Q&A Forum | Science & Tech. | 58.705377 | 75 |
News tagged with renewable energy
Renewable energy is energy generated from natural resources—such as sunlight, wind, rain, tides, and geothermal heat—which are renewable (naturally replenished). In 2006, about 18% of global final energy consumption came from renewables, with 13% coming from traditional biomass, such as wood-burning. Hydroelectricity was the next largest renewable source, providing 3% of global energy consumption and 15% of global electricity generation.
Wind power is growing at the rate of 30 percent annually, with a worldwide installed capacity of 121,000 megawatts (MW) in 2008, and is widely used in European countries and the United States. The annual manufacturing output of the photovoltaics industry reached 6,900 MW in 2008, and photovoltaic (PV) power stations are popular in Germany and Spain. Solar thermal power stations operate in the USA and Spain, and the largest of these is the 354 MW SEGS power plant in the Mojave Desert. The world's largest geothermal power installation is The Geysers in California, with a rated capacity of 750 MW. Brazil has one of the largest renewable energy programs in the world, involving production of ethanol fuel from sugar cane, and ethanol now provides 18 percent of the country's automotive fuel. Ethanol fuel is also widely available in the USA. While most renewable energy projects and production are large-scale, renewable technologies are also suited to small off-grid applications, sometimes in rural and remote areas, where energy is often crucial in human development. Kenya has the world's highest household solar ownership rate with roughly 30,000 small (20–100 watt) solar power systems sold per year.
Some renewable energy technologies are criticised for being intermittent or unsightly, yet the renewable energy market continues to grow. Climate change concerns coupled with high oil prices, peak oil and increasing government support are driving increasing renewable energy legislation, incentives and commercialization. New government spending, regulation, and policies should help the industry weather the 2009 economic crisis better than many other sectors.
This text uses material from Wikipedia and is available under the GNU Free Documentation License. | <urn:uuid:1408e655-896d-4edf-80ab-9e169370da2f> | 2.6875 | 452 | Knowledge Article | Science & Tech. | 23.335473 | 76 |
New study challenges previous findings that humans are an altruistic anomaly, and positions chimpanzees as cooperative, especially when their partners are patient.
Researchers at the Yerkes National Primate Research Center have shown chimpanzees have a significant bias for prosocial behavior. This, the study authors report, is in contrast to previous studies that positioned chimpanzees as reluctant altruists and led to the widely held belief that human altruism evolved in the last six million years only after humans split from apes. The current study findings are available in the online edition of Proceedings of the National Academy of Sciences.
According to Yerkes researchers Victoria Horner, PhD, Frans de Waal, PhD, and their colleagues, chimpanzees may not have shown prosocial behaviors in other studies because of design issues, such as the complexity of the apparatus used to deliver rewards and the distance between the animals.
“I have always been skeptical of the previous negative findings and their over-interpretation,” says Dr. de Waal. “This study confirms the prosocial nature of chimpanzees with a different test, better adapted to the species,” he continues.
Like the Sound of Music‘s Von Trapp family hiding in the Alps, plants may find refuge from a warming climate in the mountains.
Research in the Swiss Alps suggests diverse mountain habitats could act as stepping stones to allow plants to escape into more hospitable hideaways as their usual homes heat up.
A large, flat savannah offers little variation in temperature. If the temperature warms up, the whole area warms up.
But Daniel Scherrer and Christian Körner from the University of Basel, Switzerland found a broad spectrum of habitats in the central Swiss Alps after studying an alpine meadow for two seasons. In the rugged mountain landscape, different conditions existed close together.
The plants growing in those varied conditions were adapted to the particular set of temperatures of the micro-climates, the scientists found. The research suggests that these plants could start growing in neighboring habitats as the temperature increases.
To test this, Scherrer and Körner used a computer model to simulate what would happen if the temperature went up 3.6 degrees Fahrenheit. They found that only 3 percent of all temperature conditions disappeared. Some of the cooler habitats shrank or shifted, but pockets remained. This suggests that plants have the opportunity to shift habitats, instead of just dying off.
Preserving mountain habitats is even more important now in light of this research. A diverse Alpine meadow could save many different habitats, compared to a single habitat in a grassland of equal size.
“It is known from earlier geological periods that mountains were always important for survival of species during periods of climatic change such as in glacial cycles, because of their ‘habitat diversity,’” concluded Körner.
“Mountains are therefore particularly important areas for the conservation of biodiversity in a given region under climatic change and thus deserve particular protection,” Körner said.
Photo: Different habitats exist close together in the Alps. Wikimedia Commons | <urn:uuid:e12fbfc8-76ea-4109-8fcc-20b3e9f9c58f> | 4.21875 | 401 | News Article | Science & Tech. | 35.181071 | 78 |
- It is not possible to clone Lonesome George now, but other endangered animals have been successfully cloned.
- In the future, cloning and further studying Lonesome George might be possible, so scientists are focusing on preserving his tissues now.
- Biobanks known as "frozen zoos" hold tissues and other remains of certain endangered animals.
The recent death of Lonesome George, the famed Galapagos tortoise believed to be the last representative of his subspecies, has many experts wondering how we should try to save other endangered and at-risk animals.
Cloning is one option. While cloning methods for reptiles are not as advanced as those for mammals, scientists also say they face other incredible obstacles.
"At the most, I could envision one male turtle of this subspecies cloned in future or maybe two males, but where are you going to get a female?" asked Martha Gomez, a senior scientist with the Audubon Nature Institute, which has one of the world's few "frozen zoos."
Frozen Zoos stockpile biological materials from a wide variety of rare and critically endangered species. The biological material is usually composed of gametes (sperm and egg cells), embryos, tissue samples, serum and other items. Together, they represent a bank vault of irreplaceable genetic information that can be preserved for possibly hundreds of years or more. In most cases, the materials are stored in holding tanks filled with liquid nitrogen.
Oliver Ryder, director of genetics at the San Diego Zoo, spoke to Discovery News as his team was racing to the Galapagos Islands to help preserve the tissues of Lonesome George. The San Diego Zoo operates one of the other few frozen zoos.
"This is an extremely urgent matter," Ryder said. "We had planned to meet in the Galapagos in two weeks to discuss preservation of the tortoises there. It is a bitter irony that Lonesome George died before we could even finish setting up the plans. It underscores the importance of preserving such animals."
"We are facing some logistical problems now, but we don't want to look back with 'what if's,'" he added. "This may be the only chance we'll have to preserve, document and study this tortoise subspecies."
Ryder believes discussions of cloning Lonesome George are premature at this point. Before that takes place, he thinks more must be learned about this particular tortoise's physiology and reproduction. Studying Lonesome George's remains may also help to reveal how tortoises often live to advanced ages, information that could one day lead to breakthroughs in extending human lifespans.
For cloning, researchers are focusing more on "species where we have detailed knowledge of their reproductive biology," Ryder said. That is one reason why cats, dogs and mice were among the first animals to be cloned. Scientists are now working to clone endangered relatives of such animals, in hopes of releasing those individuals into the wild to strengthen natural populations.
Earlier this year, Gomez and her colleagues successfully cloned endangered black-footed cats. An endangered wild ox, called a gaur, and a banteng (wild cattle) have also been successfully cloned. Work is underway to clone and otherwise increase the population of Sumatran rhinos, which presently number only about 200-300 in the wild.
While one healthy clone is an interesting novelty, clones must also be able to reproduce in order to be fully successful. Gomez said that kittens of cloned wild cat parents have died "due to problems with nuclear programing," but some normal kittens have resulted and continue to thrive.
Both she and Ryder say that there is no international policy calling for cloning and preservation of highly endangered species. Instead, isolated facilities and the work of dedicated individuals are responsible for the successes.
"The effort needs to be more widespread and organized," Gomez said.
Through published papers and talks, Ryder and his colleagues have repeatedly called for an organized global effort. It would need an "overarching international body" on par with UNESCO, he believes.
"The first step is saving tissue samples, as we're in the process of doing for Lonesome George," he said. "But we who are among the forefront would like to train others to establish frozen zoo biobanks in other countries."
"I am confident that one day such an international structure will come together, bringing in other conservation work, such as preserving habitat," Ryder concluded. "It's poignant to lose a subspecies like that of Lonesome George. People in the future will be looking back at us, wondering why we didn't act sooner." | <urn:uuid:c901aef6-8e50-46c9-81bb-1ba56593fbbb> | 3.296875 | 954 | News Article | Science & Tech. | 36.053291 | 79 |
for National Geographic News
Explosive population growth is driving human evolution to speed up around the world, according to a new study.
The pace of change accelerated about 40,000 years ago and then picked up even more with the advent of agriculture about 10,000 years ago, the study says.
And while humans are evolving quickly around the world, local cultural and environmental factors are shaping evolution differently on different continents.
"We're evolving away from each other. We're getting more and more different," said Henry Harpending, an anthropologist at the University of Utah in Salt Lake City who co-authored the study.
For example, in Europe natural selection has favored genes for pigmentation like light skin, blue eyes, and blond hair. Asians also have genes selected for light skin, but they are different from the European ones.
"Europeans and Asians are both bleached Africans, but the way they got bleached is different in the two areas," Harpending said.
He and colleagues report the finding this week in the journal Proceedings of the National Academy of Sciences.
Snips of DNA
The researchers analyzed the DNA from 270 people in the International HapMap Project, an effort to identify variation in human genes that cause disease and serve as targets for new medicines.
The study specifically looked for genetic variations called single nucleotide polymorphisms, or SNPs (pronounced "snips"), which are mutations at a single point on a chromosome.
"We look for parts of chromosomes that are common in the population but are new, and if they are common but recent, they must have gotten to high frequency by selection," Harpending explained.
| <urn:uuid:faa2c08a-b40c-4dea-b8ca-9c8852139641> | 3.625 | 363 | News Article | Science & Tech. | 31.980927 | 80 |
Washington, Aug 9 (IANS) The formation of sulphuric acid in the air is significantly impacting our climate and health, says a study.
The study led by Roy Lee Mauldin III, research associate at the University of Colorado-Boulder's atmospheric and oceanic sciences department, charts a previously unknown chemical pathway for the formation of sulphuric acid, which can trigger both increased acid rain and cloud formation as well as harmful respiratory effects on humans.
“Sulphuric acid plays an essential role in the Earth's atmosphere, from the ecological impacts of acid precipitation to the formation of new aerosol particles, which have significant climatic and health effects. Our findings demonstrate a newly observed connection between the biosphere and atmospheric chemistry,” Mauldin was quoted as saying in the journal Nature.
More than 90 percent of sulphur dioxide emissions are from fossil fuel combustion at power plants and other industrial facilities, says the US Environmental Protection Agency, according to a university statement.
Other sulphur sources include volcanoes and even ocean phytoplankton. Sulphur dioxide reacts with the hydroxyl radical to produce sulphuric acid, which can form acid rain that is harmful to terrestrial and aquatic life on Earth.
Airborne sulphuric acid particles, which form in a wide variety of sizes, play the main role in the formation of clouds, which can have a cooling effect on the atmosphere, Mauldin said.
Most of the lab experiments for the study were conducted at the Leibniz-Institute for Tropospheric Research in Leipzig, Germany. | <urn:uuid:597b3d2d-8580-4486-a845-d0f996cf51af> | 3.390625 | 329 | News Article | Science & Tech. | 20.054193 | 81 |
Warmer temperatures, variable monsoons, and other signs of climate change are a hot topic of conversation among many Himalayan villagers, according to scientific sampling of climate change perception among local peoples.
“This area is cold and it’s often raining. Even during the non-monsoon times there is mist and fog so inevitably conversations here turn to weather,” said Kamal Bawa, biologist at the University of Massachusetts, Boston (UMB), and president of Ashoka Trust for Research in Ecology and the Environment (ATREE) in Bangalore, India. “When you stop and have a cup of tea in someone’s kitchen, the conversation invariably turns to the weather. But then they soon start talking about how the weather has been changing.”
Bawa is also a member of the National Geographic Committee for Research and Exploration.
Bawa didn’t set out to study Himalayan perceptions of climate change. But after hearing the same themes repeated again and again during household conversations he decided to investigate. With UMB graduate student Pahupati Chaudhary, he surveyed some 500 homes spread across 18 villages in Darjeeling Hills, West Bengal, India and Nepal’s Ilam district. The pair found some surprisingly consistent observations.
Three-fourths of the people surveyed believe that their weather is getting warmer and two-thirds believe that summer and monsoon season have begun earlier over the past ten years. Seventy percent believe that water sources are drying up while forty-six percent said that they think there is less snow on the high mountains.
Many villagers also told Chaudhary they’d noticed shifts in some species ranges and earlier flowering and budding of plants. New pests have also arrived, villagers routinely reported, to plague crops and people—including mosquitoes where none had been before.
Most of these changes were reported by much higher percentages of people living at higher altitudes than by those at lower altitudes. “We’ve shown in earlier research that people at high altitudes seem to be more sensitive to climate change, and of course it’s known that climate change is more severe at higher altitudes, so that’s not a surprise,” Bawa said.
Many Himalayan peoples live in areas where predicted and observed impacts of climate change, like species migration, are more acute. Many of them also live “close to the land,” where agricultural-based livelihoods make them especially attuned to weather patterns.
Listening to Locals Can Help Climate Science
Scientific data on climate change have been hard to come by in the region, Bawa reported. Few weather stations dot remote and high-altitude locales and where they do exist their data are often incomplete.
But where data can be found they seem to corroborate local observations, Bawa said, citing his own research on temperature and rainfall records as well as the work of other scientists listed in recent Biology Letters and Current Science reports of Bawa and Chaudhary’s research.
“Governments in the region are now gearing up towards more research,” Bawa said. “But it will take time to gather this climate data.” That’s why local knowledge can be such valuable human intelligence, he added. It can be gathered quickly and widely and used to “jump start” scientific efforts.
“There seems to be quite a bit of knowledge residing with local communities, in the Himalaya and elsewhere, and we can really use that knowledge to formulate scientific questions for further research and make more rapid assessments of the impacts of climate change.”
Bawa said it’s hard to determine to what extent local peoples are familiar with the global dialogue on climate change, or how much that might have influenced their perceptions. But most of those he spoke with didn’t identify a clear cause for the changes they’d observed.
“We’re saying that people seem to be aware that the climate is changing, but they may not necessarily be aware of why it’s changing. I think when you come to that question people don’t have any ideas—or they may have some very different ideas.”
Bawa pointed to a recent study of this topic in Tibet, where many respondents believed that humans are causing climate change—but not by producing greenhouse gasses as most climate scientists believe. “They seem to think that the climate is changing because the Gods are not happy and perhaps the people in the younger generations are not praying enough.”
This research was supported in part by a Committee for Research and Exploration grant of the National Geographical Society. | <urn:uuid:666a34c3-a3b0-48d9-822b-04ba99c0d218> | 3.15625 | 968 | News Article | Science & Tech. | 35.832289 | 82 |
The Solar and Heliospheric Observatory (SOHO) spacecraft is expected to discover its 1,000th comet this summer.
The SOHO spacecraft is a joint effort between NASA and the European Space Agency. It has accounted for approximately one-half of all comet discoveries with computed orbits in the history of astronomy.
"Before SOHO was launched, only 16 sun grazing comets had been discovered by space observatories. Based on that experience, who could have predicted SOHO would discover more than 60 times that number, and in only nine years," said Dr. Chris St. Cyr. He is senior project scientist for NASA's Living With a Star program at the agency's Goddard Space Flight Center, Greenbelt, Md. "This is truly a remarkable achievement!"
About 85 percent of the comets SOHO discovered belongs to the Kreutz group of sun grazing comets, so named because their orbits take them very close to Earth's star. The Kreutz sun grazers pass within 500,000 miles of the star's visible surface. Mercury, the planet closest to the sun, is about 36 million miles from the solar surface.
SOHO has also been used to discover three other well-populated comet groups: the Meyer, with at least 55 members; Marsden, with at least 21 members; and the Kracht, with 24 members. These groups are named after the astronomers who suggested the comets are related, because they have similar orbits.
Many comet discoveries were made by amateurs using SOHO images on the Internet. SOHO comet hunters come from all over the world. The United States, United Kingdom, China, Japan, Taiwan, Russia, Ukraine, France, Germany, and Lithuania are among the many countries whose citizens have used SOHO to chase comets.
Almost all of SOHO's comets are discovered using images from its Large Angle and Spectrometric Coronagraph (LASCO) instrument. LASCO is used to observe the faint, multimillion-degree outer atmosphere of the sun, called the corona. A disk in the instrument is used to make an artificial eclipse, blocking direct light from the sun, so the much fainter corona can be seen. Sun grazing comets are discovered when they enter LASCO's field of view as they pass close by the star.
"Building coronagraphs like LASCO is still more art than science, because the light we are trying to detect is very faint," said Dr. Joe Gurman, U.S. project scientist for SOHO at Goddard. "Any imperfections in the optics or dust in the instrument will scatter the light, making the images too noisy to be useful. Discovering almost 1,000 comets since SOHO's launch on December 2, 1995 is a testament to the skill of the LASCO team."
SOHO successfully completed its primary mission in April 1998. It has enough fuel to remain on station to keep hunting comets for decades if the LASCO continues to function.
For information about SOHO on the Internet, visit:
| <urn:uuid:78cbe1bd-1849-4138-b59a-5521e93122a3> | 4 | 663 | News Article | Science & Tech. | 48.522799 | 83 |
You have to break a few (hundred) eggs to make a good crystal

Bell curve shape to crystal quality may point to best candidates for flight
Sept. 20, 1999: Did you ever ask the teacher to grade a tough test "on the curve"? What you were asking was that the grades be adjusted so that a "C" fell under the part of the curve where most of your classmates had scored. A few were to the left and got a D or F; a few were to the right and got a B or an A.
Right: To the crystallographer, this may not be a diamond but it's just as priceless. A lysozyme crystal grown in orbit looks great under a microscope, but the real test is X-ray crystallography. The colors are caused by polarizing filters. Credit: NASA/Marshall.
That's basically how the bell curve works. In nature, objects and events quite often can be grouped along a bell curve. In a population of adult animals, most will be around the same size. A few will be larger and a few will be smaller.
"If you talk to statisticians," noted Dr. Russell Judge of the University of Alabama in Huntsville, "variations within populations in nature can be described in terms of distributions."
The question now is whether scientists can use the microgravity of space to shift the curve to the right to grow the large, nearly perfect crystals they need for molecular lock-picking, the first step in designing drugs that can treat a broad range of diseases and disorders.
"We want to determine how the growth of crystals effect their quality," Judge said in May when NASA selected his investigation for development, "and then take that into space to see how microgravity is enhancing the growth characteristics that lead to good crystals. From this we want to develop techniques, so that by observing crystal growth on the ground, we can predict which proteins are likely to benefit the most from microgravity crystallization."
A protein's functions are a result not just of its chemical formula, but of structure, which can be quite large (on the atomic scale) and fragile. If the shape isn't right, the protein cannot match up with other proteins or chemicals to do its job, just as the wrong key won't unlock a door. Sickle cell anemia, for example, results from structural differences in the hemoglobin that carries oxygen in red blood cells. Designing new treatments means designing altered proteins or other chemicals that act as a skeleton key or as a sophisticated lock pick.
Proteins can form crystals, generated by rows and columns of molecules that form up like soldiers on a parade ground. Shining X-rays through a crystal will produce a pattern of dots that can be decoded to reveal the arrangement of the atoms in the molecules making up the crystal. Like the troops in formation, uniformity and order are everything in X-ray crystallography. X-rays have much shorter wavelengths than visible light, so the best looking crystals under the microscope won't necessarily pass muster under X-rays.
Left: Judge (left) and Dr. Edward Snell, a National Research Council fellow working at NASA/Marshall, inspect the sample holder in the X-ray crystallography unit. Credit: NASA/Marshall.
This has become an invaluable tool for understanding the structure and the function of dozens of proteins. But many proteins remain shrouded in mystery because on Earth crystal imperfections are introduced by fluid flows and the settling of the crystals to the bottom of the container. This leaves internal defects that distort or blur the view of the structure.
"In order to have crystals to use for X-ray diffraction studies," Judge said, "you need them to be fairly large and well ordered." Scientists also need lots of crystals since exposure to air, the process of X-raying them, and other factors destroy the crystals. Getting just one perfect specimen isn't enough. Dozens may be needed, and the quality might not be known until well into the analysis.
Growing protein crystals in the microgravity of space has yielded striking results, such as determining to a fine resolution how certain molecules of insulin join so scientists can improve injectable insulin needed by diabetics. There have also been disappointments when crystals in other experiments did not grow as expected.
Since the 1970s, scientists have used a variety of approaches in trying to determine what leads to the growth of a large, perfect crystal. Judge tried a different approach that built on results noted by researchers dating as far back as 1946.
He and his team looked at the effects of concentration, temperature, and pH (acid vs. base) on the growth of lysozyme, a common protein in chicken egg white. Lysozyme's structure is well known and it has become a standard in many crystallization studies on Earth and in space. Although lysozyme has a molecular mass of 14,300 Daltons - almost 92 times that of the ordinary sugar that many of us crystallized in elementary school science - it's a relative lightweight in the protein world.
To exclude impurities often found in commercial lysozyme preparations, Judge and his team purified lysozyme extracted from eggs obtained from a local egg farm. While one experiment run required only five dozen eggs, the full series of experiments consumed about 200 eggs.
Judge and his team grew the crystals in trays with small plastic wells filled with solutions containing a trace of salt to help stimulate crystal growth. Temperatures ranged from 4 to 18 deg C (39-64 deg F) and pH from 4.0 to 5.2 (slightly acidic; pure water theoretically has a pH of 7). Judge also varied the driving force behind the crystal growth process, called supersaturation, by varying the initial concentration of protein. Protein concentration must be set above a critical limit, the solubility, in order to form crystals (below this concentration the protein stays dissolved and never forms crystals).
Left: A bell curve for lysozyme crystals produced in Judge's experiments, and a possible shift in the curve that microgravity experiments might produce. Credit: NASA/Marshall.
The tough part was examining each of the over 2000 wells and counting the crystals. It turned out that the solution pH had the largest effect on the growth of the crystals, possibly due to changes in charges on the surface of the molecules.
When solution conditions had been optimized to give a small number of large crystals, a sample of 50 crystals was withdrawn for X-ray diffraction analysis.
Judge hoped that when the ideal conditions were found and then applied to subsequent batches, he would be able to grow consistently large, high quality crystals of lysozyme. The expectation was that with ideal conditions, quality crystals could be cranked out as if in a factory.
Instead, nature put him on the curve.
"Some variation is occurring there," Judge said, "but we haven't quite pinpointed the cause."
Judge got a bell curve when he measured the X-ray clarity, properly called the signal-to-noise ratio (a radio with static has a low signal-to-noise ratio). A graph of the number of crystals versus the signal-to-noise ratio forms a bell curve, albeit slightly skewed to one side.
Right: Distribution of diffraction characteristics - essentially a measure of quality - for a batch of crystals approximates a bell curve. Credit: NASA/Marshall.
"The distribution is saying a very few crystals form perfectly in solution," he continued, "and a small number are really poor. The majority of crystals are in-between."
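To illustrate the kind of batch statistics being described, here is a toy simulation with purely synthetic numbers (not Judge's data), drawing signal-to-noise values from a skewed bell curve:

```python
import numpy as np
from scipy.stats import skewnorm

# Synthetic signal-to-noise ratios for a batch of 50 crystals, drawn from a
# slightly skewed bell curve like the distribution reported for lysozyme.
snr = skewnorm.rvs(a=4, loc=5.0, scale=2.0, size=50, random_state=42)

print(f"best {snr.max():.1f}, worst {snr.min():.1f}, mean {snr.mean():.1f}")
# A few values land in each tail and most fall in between -- even though,
# as in the article, every "crystal" came from identical growth conditions.
```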
It's doubly puzzling because the crystals were grown from the same batch of lysozyme that was poured into 120 wells in the experiment tray and crystallized under the same conditions.
"We have some ideas," Judge said, "but we haven't tested them yet, so we're hesitant to say it might be this or that."
The research will continue with insulin, the crucial protein that conveys sugar from the blood stream into a body's cells, and with glucose isomerase, a larger (46,000 Daltons) protein used in industrial processes to convert glucose to a sweeter sugar called fructose.
Left: Crystals of insulin grown in space let scientists determine the vital enzyme's structure and linkages with much higher resolution than Earth-grown crystals had allowed. Credit: NASA/Marshall.
"In all of the proteins we're using the structure is pretty well known," Judge added.
In addition to ground-based experiments, Judge hopes to conduct flight experiments in the next year or so. He would use the Vapor Diffusion Apparatus, a device developed by the University of Alabama in Birmingham and well-proven in a number of Space Shuttle flights.
"Most researchers say that crystals grown in microgravity will be better than those on the ground," Judge said. And a number of experiments bear out that belief. "Somehow, microgravity pushes up the end of the distribution curve."
Right: Crystals of glucose isomerase, a larger molecular weight protein, will be grown to see if they, too, are graded "on the curve." Credit: NASA/Marshall.
With expected flight experiments on lysozyme, insulin and glucose isomerase, Judge will have crystals grown in conditions as close as possible to the ideal conditions he had determined so far. At the same time, he will grow crystals on Earth from the same mix as the flight batch and using identical hardware and conditions so that microgravity is the only variable.
Eventually, he hopes that his studies will lead to a tool for screening candidate proteins for flight.
The Effect of Temperature and Solution pH on the Nucleation of Tetragonal Lysozyme Crystals. Biophysical Journal, September 1999, p. 1585-1593, Vol. 77, No. 3
Russell A. Judge,*Randolph S. Jacobs,#Tyralynn Frazier, §Edward H. Snell, and ¶Marc L. Pusey
*Alliance for Microgravity Material Science and Applications, NASA/Marshall Space Flight Center, Huntsville, Alabama 35812; #Department of Chemical Engineering, University of Alabama in Huntsville, Huntsville, Alabama 35899; § Biochemistry Department, Michigan State University, East Lansing, Michigan 48825; and ¶Biophysics SD48, NASA/Marshall Space Flight Center, Huntsville, Alabama 35812 USA
Part of the challenge of macromolecular crystal growth for structure determination is obtaining crystals with a volume suitable for x-ray analysis. In this respect an understanding of the effect of solution conditions on macromolecule nucleation rates is advantageous. This study investigated the effects of supersaturation, temperature, and pH on the nucleation rate of tetragonal lysozyme crystals. Batch crystallization plates were prepared at given solution concentrations and incubated at set temperatures over 1 week. The number of crystals per well with their size and axial ratios were recorded and correlated with solution conditions. Crystal numbers were found to increase with increasing supersaturation and temperature. The most significant variable, however, was pH; crystal numbers changed by two orders of magnitude over the pH range 4.0-5.2. Crystal size also varied with solution conditions, with the largest crystals obtained at pH 5.2. Having optimized the crystallization conditions, we prepared a batch of crystals under the same initial conditions, and 50 of these crystals were analyzed by x-ray diffraction techniques. The results indicate that even under the same crystallization conditions, a marked variation in crystal properties exists.
| <urn:uuid:2a7ca019-7b31-4e9b-8c46-9219b443a12f> | 3.09375 | 2731 | News Article | Science & Tech. | 46.806197 | 84 |
How to do just that, particularly in the Virgin Islands, was on display Tuesday at Good Hope School as the St. Croix Environmental Association sponsored its second Environmental Science Career Expo in partnership with the V.I. Network of Environmental Educators.

The goal of the event was to enable middle school and high school students to learn about the career path choices, work experience and skills required to attain jobs in science and technological fields; representatives from private enterprises, government and nonprofit agencies participated.
“We want to get kids excited about science. We live on a small piece of land and are surrounded by the ocean and we have to take care of it,” said St. Croix Environmental Association spokeswoman Lynnea Roberts.
“Beyond that, we want kids to be excited about college and going to study science,” she said. “There is a lot of science going on around the island that is sort of undercover and a lot of people don’t know about it.”
Roberts said she hopes to put on two more of these career fairs later this year at both Central and Complex High Schools.
Marcia Taylor of the University of the Virgin Islands’ Center for Marine and Environmental Studies said she was there to recruit the future.
“We want more young people going into careers related to the marine environment,” Taylor said. “We want to get more local students trained so they can take the jobs here in their home.”
“There are a lot of opportunities down here and it would be great if people that lived here could benefit from that,” Taylor said.
She said graduates of the program, if they’ve wanted to stay and work in the territory, have had success in finding careers here.
She also noted the same marine issues Roberts mentioned — overfishing and ocean acidification. She said future job prospects in these fields would be plentiful for those interested in learning how to manage those problems and mitigate the effects of them, especially as they pertain to the V.I. coral reef systems.
“There are more federal dollars being spent down here because we realize we’re on the brink of losing an incredible resource. There’s more grants related to that, more people studying it and that’s an area where there’s more money being pumped in all the time.”
As students wandered from booth to booth learning about the work of those participating organizations, some teachers were even collaborating with agency representatives in the hopes of doing hands on work inside the classroom at a later date.
Leila Muller of the V.I. Energy Office and teacher Sarah Christiansen of AZ Academy were just one example as they were planning renewable energy demonstrations for Christiansen’s fifth- and eighth-grade science students.
“This is the future, right here,” Christiansen said, pointing at some solar energy demonstrations. “Kids need to know what the future is going to be like and need to prepare for the future and what types of jobs and careers will be available.”
Muller said, “We’re also going to be doing more sustainable buildings and green buildings,” adding that’s where the future is. “And we want the young minds to know what opportunities there are in the field of energy.”
AZ Academy sophomores Conrad Yanez and Rick Beggs said they came away from the event with more environmental knowledge of how to protect St. Croix and with a possible goal to attain in the future.
“I might be interested in the science part of it,” Beggs said. “Maybe one day I’ll come up with a new way to protect corals.”
“I think I might be a marine biologist, maybe mangroves or something like that,” Yanez added. “I like working in places like Salt River. It’s pretty interesting.” | <urn:uuid:856f7292-cbb4-4f99-891f-b70d8fb34f56> | 2.578125 | 828 | News Article | Science & Tech. | 49.777181 | 85 |
Elements | Blogs
Wednesday, September 7, 2011
Is There Oxygen in Space?
Yes, this summer astronomers using the Herschel Telescope identified oxygen molecules in space. They found these molecules in the Orion nebula, 1,344 light years away. Oxygen is the third most abundant element in the universe. Until now, scientists have only seen individual oxygen atoms in space. We do not breathe individual oxygen atoms, but rather oxygen molecules. (A molecule is a group of atoms bonded together, and it is the smallest unit of a chemical compound that can take part in a chemical reaction.) Oxygen molecules make up 20% of the air we breathe. Scientists theorize that the oxygen molecules were locked up in water ice that...
Thursday, March 10, 2011
I'm Atoms (Scientific Cover of Jason Mraz's I'm Yours)
Here in Chicago it has been gray for the last three weeks – no sun, just melting snow and rain. This song made our day. It has sunshine, great music and atoms! The lyrics include fabulous lines such as: “Atoms bond together to form molecules Most of what’s surrounding me and you…” This science verse has been set to the music of Jason Mraz’s “I’m Yours”. This is a must watch!
Saturday, February 26, 2011
The Deep Carbon Observatory
Here at SuperSmart Carbon, we love learning about carbon. Apparently, we are not alone. There is a project being launched called the Deep Carbon Observatory that is being funded by the Alfred P. Sloan Foundation. The purpose of this group is to study carbon deep inside the earth. Carbon makes up somewhere from 0.7% to 3.2% of the earth’s elements. We know that there is carbon trapped under the earth’s crust, but we don’t know how much. The Deep Carbon Observatory is going to study how much carbon there is in the earth and what happens to it. Another question is what form is the...
Friday, February 25, 2011
Where does gas come from?
Carbon! (We always love it when the answer is carbon.) The gas we use to power our cars comes from decomposing organic matter. What does that mean? All life has carbon in it -- this includes everything living from you and me to zebras, tapeworms, tulips and seaweed. Since all living things have carbon in them, they are referred to as organic matter. Non-organic matter includes things like rocks, water and metals. When something organic dies, it goes into the earth’s surface. For example, when a leaf falls off a tree, it settles on the ground. Over the next months, it slowly rots and...
Friday, February 11, 2011
How to Name an Element After Yourself
Here on the SuperSmart Carbon blog, I will talk about the elements a lot because "Carbon" is an element. SuperSmart Carbon is a blue guy with a green hat and in this blog, he looks like he is 1 1/2 inches high. He has two rings around him with six yellow spheres. Although cute, SuperSmart Carbon does not exactly look like elements in the real world. Elements are really, really, small. You cannot see them with the naked eye, or even with a microscope. Although you can't see elements, they are all around you. Everything is made up of elements: the computer you are reading this blog on, the table the computer sits on, the air you... | <urn:uuid:b5177112-be1e-4086-9d85-858522f9c4b9> | 2.921875 | 735 | Content Listing | Science & Tech. | 66.67267 | 86 |
Elementary Matrices Generate the General Linear Group
Okay, so we can use elementary row operations to put any matrix into its (unique) reduced row echelon form. As we stated last time, this consists of building up a basis for the image of the transformation the matrix describes by walking through a basis for the domain space and either adding a new, independent basis vector or writing the image of a domain basis vector in terms of the existing image basis vectors.
So let’s say we’ve got a transformation $T$ in $\mathrm{GL}(V)$. Given a basis, we get an invertible matrix (which we’ll also call $T$). Then we can use elementary row operations to put this matrix into its reduced row echelon form. But now every basis vector gets sent to a vector that’s linearly independent of all the others, or else the transformation wouldn’t be invertible! That is, the reduced row echelon form of the matrix must be the identity matrix.
But remember that every one of our elementary row operations is the result of multiplying on the left by an elementary matrix. So we can take the matrices corresponding to the list of all the elementary row operations and write

$$I = E_n \cdots E_2 E_1 T$$

which tells us that applying all these elementary row operations one after another leads us to the identity matrix. But this means that the product of all the elementary matrices on the right is $T^{-1}$. And since we can also apply this to the transformation $T^{-1}$, we can find a list of elementary matrices whose product is $T$. That is, any invertible linear transformation can be written as the product of a finite list of elementary matrices, and thus the elementary matrices generate the general linear group. | <urn:uuid:cf863937-0ee5-440d-baec-55365a75d0fc> | 3.234375 | 345 | Tutorial | Science & Tech. | 34.446265 | 87 |
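To see the elementary-matrix factorization above concretely, here is a small SymPy sketch (my illustration, not from the original post): reduce an invertible matrix to the identity while recording each row operation as an elementary matrix, then rebuild the matrix from their inverses.

```python
import sympy as sp
from functools import reduce

T = sp.Matrix([[2, 1], [1, 1]])        # an invertible matrix with nonzero pivots
A, elementary, n = T.copy(), [], T.rows

for col in range(n):
    # Scale the pivot row so the pivot becomes 1 (assumes no row swaps needed).
    E = sp.eye(n); E[col, col] = sp.S(1) / A[col, col]
    A, elementary = E @ A, elementary + [E]
    # Clear every other entry in this column.
    for row in range(n):
        if row != col and A[row, col] != 0:
            E = sp.eye(n); E[row, col] = -A[row, col]
            A, elementary = E @ A, elementary + [E]

assert A == sp.eye(n)                  # E_k ... E_1 T = I

# Inverses of elementary matrices are elementary, and their product is T.
T_again = reduce(lambda acc, E: acc @ E.inv(), elementary, sp.eye(n))
assert T_again == T
```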
Coral reefs aren't just pretty; they're also vital to marine species and island communities. But they face threats from warming seas. NBC's Anne Thompson reports.
More than half of 82 species of coral being evaluated for inclusion under the Endangered Species Act "more likely than not" would go extinct by 2100 if climate policies and technologies remain the same, federal scientists concluded.
The experts cited "anthropogenic," or manmade, releases of carbon dioxide as a key driver of warming seas and oceans absorbing more CO2, in turn making waters more acidic.
"The combined direct and indirect effects of rising temperature, including increased incidence of disease and ocean acidification, both resulting primarily from anthropogenic increases in atmospheric CO2, are likely to represent the greatest risks of extinction to all or most of the candidate coral species over the next century," the experts concluded in a report released Friday by the National Marine Fisheries Service.
The report was part of a process to determine which species, if any, merit protection. The Center for Biological Diversity in 2009 had petitioned for the review of 82 species it considered in jeopardy.
Of the 82 species, all of which are in U.S. waters, 46 are "more likely than not" to face extinction by 2100, while 10 are "likely," the report stated.
The authors did note that the limited science of corals meant that "the overall uncertainty was high."
The fisheries service will next seek public comment as it considers the petition for listing.
The Center for Biological Diversity, which in 2006 petitioned and got protection for staghorn and elkhorn corals, said conditions have only worsened for corals.
"Coral reefs are home to 25 percent of marine life and play a vital function in ocean ecosystems," the center said in a statement. "Since the 1990s, coral growth has grown sluggish in some areas due to ocean acidification, and mass bleaching events are increasingly frequent."
Air Mass: An extensive body of the atmosphere whose physical properties, particularly temperature and humidity, exhibit only small and continuous differences in the horizontal. It may extend over an area of several million square kilometres and over a depth of several kilometres.
Backing Wind: Counter-clockwise change of wind direction, in either hemisphere.
Beaufort Scale: Wind force scale, originally based on the state of the sea, expressed in numbers from 0 to 12.
Fetch: Distance along a large water surface trajectory over which a wind of almost uniform direction and speed blows.
Fog: Suspension of very small, usually microscopic water droplets in the air, generally reducing the horizontal visibility at the Earth's surface to less than 1 km.
Front: The interface or transition zone between air masses of different densities (temperature and humidity).
Gale Force Wind: Wind with a speed between 34 and 47 knots. Beaufort scale wind force 8 or 9.
Gust: Sudden, brief increase of the wind speed over its mean value.
Haze: Suspension in the atmosphere of extremely small, dry particles which are invisible to the naked eye but numerous enough to give the sky an opalescent appearance.
High: Region of the atmosphere where the pressures are high relative to those in the surrounding region at the same level.
Hurricane: Name given to a warm-core tropical cyclone with maximum surface winds of 118 km/h (64 knots) or greater in the North Atlantic, the Caribbean, the Gulf of Mexico and the Eastern North Pacific Ocean.
Knot: Unit of speed equal to one nautical mile per hour (1.852 km/h).
Land Breeze: Wind of coastal regions, blowing at night from the land towards a large water surface as a result of the nocturnal cooling of the land surface.
Line Squall: Squall which occurs along a line.
Low: Region of the atmosphere in which the pressures are lower than those of the surrounding regions at the same level.
Mist: Suspension in the air of microscopic water droplets which reduce the visibility at the Earth's surface.
Pressure: Force per unit area exerted by the atmosphere on any surface by virtue of its weight; it is equivalent to the weight of a vertical column of air extending above a surface of unit area to the outer limit of the atmosphere.
Ridge: Region of the atmosphere in which the pressure is high relative to the surrounding region at the same level.
Sea Breeze: Wind in coastal regions, blowing by day from a large water surface towards the land as a result of diurnal heating of the land surface.
Sea Fog: Fog which forms in the lower part of a moist air mass moving over a colder (water) surface.
Sea State: Local state of agitation of the sea due to the combined effects of wind and swell.
Squall: Atmospheric phenomenon characterized by an abrupt and large increase of wind speed, with a duration of the order of minutes, which diminishes suddenly. It is often accompanied by showers or thundershowers.
Storm Force Wind: Wind with a speed between 48 and 63 knots. Beaufort scale wind force 10 or 11.
Storm Surge: The difference between the actual water level under the influence of a meteorological disturbance (storm tide) and the level which would have been attained in the absence of the meteorological disturbance (i.e. the astronomical tide).
Swell: Any system of water waves which has left its generating area.
Thunderstorm: Sudden electrical discharge manifested by a flash of light and a sharp or rumbling sound. Thunderstorms are associated with convective clouds and are often accompanied by precipitation in the form of rain showers, hail, occasionally snow, snow pellets, or ice pellets.
Tropical Cyclone: Generic term for a non-frontal synoptic-scale cyclone originating over tropical or subtropical waters with organized convection and definite cyclonic surface wind circulation.
Tropical Depression: Wind speed up to 33 knots.
Tropical Disturbance: Light surface winds with indications of cyclonic circulation.
Tropical Storm: Maximum wind speed of 34 to 47 knots.
Trough: An elongated area of relatively low atmospheric pressure.
Veering: Clockwise change of wind direction, in either hemisphere.
Visibility: Greatest distance at which a black object of suitable dimensions can be seen and recognized against the horizon sky during daylight, or could be seen and recognized during the night if the general illumination were raised to the normal daylight level.
Waterspout: A phenomenon consisting of an often violent whirlwind, revealed by the presence of a cloud column or inverted cloud cone (funnel cloud) protruding from the base of a cumulonimbus, and of a bush composed of water droplets raised from the surface of the sea. Its behaviour is characterized by a tendency to dissipate upon reaching shore.
Wave Height: Vertical distance between the trough and crest of a wave.
Wave Period: Time between the passage of two successive wave crests past a fixed point.
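The wind-force entries above amount to a simple lookup table on wind speed in knots. A small illustrative sketch (the function name and code are mine; the thresholds are just the glossary definitions transcribed into Python):

```python
KNOT_IN_KMH = 1.852  # one nautical mile per hour, per the Knot entry

def wind_force_category(speed_knots: float) -> str:
    """Classify a sustained wind speed using the glossary's thresholds."""
    if speed_knots >= 64:
        return "hurricane force (>= 64 kt)"
    if speed_knots >= 48:
        return "storm force (Beaufort 10-11)"
    if speed_knots >= 34:
        return "gale force (Beaufort 8-9)"
    return "below gale force"

print(wind_force_category(40))   # gale force (Beaufort 8-9)
print(118 / KNOT_IN_KMH)         # ~63.7: the 118 km/h hurricane threshold in knots
```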
Water creatures caught stealing DNA
Tiny freshwater organisms that have a sex-free lifestyle may have survived so well because they steal genes from other creatures, US scientists report.
Researchers from Harvard University in Cambridge, Massachusetts, have found genes from bacteria, fungi and even plants incorporated into the DNA of bdelloid rotifers - minuscule animals that appear to have given up sex 40 million years ago.
Their report appears in this week's edition of Science.
Sex is used by most life forms as a way of coping with changing circumstances, by allowing organisms to develop useful new genes and ditch harmful, mutated ones.
The bdelloids' resilience and their sex-free lifestyle have stumped scientists.
The team, headed by Professor Matthew Meselson, looked at the DNA of bdelloid rotifers to see how they manage to survive and evolve.
It appears they overcome this hurdle by stealing DNA from other organisms.
"Our result shows that genes can enter the genomes of bdelloids in a manner fundamentally different from that which, in other animals, results from the mating of males and females," says Meselson.
"We found many genes that appear to have originated in bacteria, fungi, and plants."
The translucent, waterborne creatures, which range in size from 0.1 to 1 millimetres long, lay eggs, but all their offspring are female.
The researchers believe that when bdelloids dry out, they fracture their genetic material and rupture cellular membranes. When they rehydrate, they rebuild their genomes and their membranes, incorporating shreds of genetic material from other bdelloids and unrelated species in their vicinity.
"These fascinating animals not only have relaxed the barriers to incorporation of foreign genetic material, but, more surprisingly, they even managed to keep some of these alien genes functional," report co-author Dr Irina Arkhipova says.
According to the researchers, the next step is to determine whether bdelloid genomes also contain homologous genes imported from other bdelloids.
Meselson and his colleagues also hope to examine whether the animals actually use any of the hundreds of snippets of foreign DNA they appear to vacuum up.
Understanding how the animals acquire and make use of these new genes could have implications for medicine.
Genetic mutations, which occur constantly in any living organism, underlie cancer, heart disease and various other diseases. | <urn:uuid:7ccc0e8f-b8e3-42aa-8ed8-47b14f3a075c> | 3.328125 | 498 | News Article | Science & Tech. | 28.540656 | 90 |
Science Fair Project Encyclopedia
The sampling frequency or sampling rate defines the number of samples per second taken from a continuous signal to make a discrete signal. The inverse of the sampling frequency is the sampling period or sampling time, which is the time between samples.
A sampling frequency can only be defined for samplers in which each sample is taken periodically; there is no rule that prevents a sampler from taking samples at a non-periodic rate.
If a signal has a bandwidth of 100 Hz, then to avoid aliasing the sampling frequency must be greater than 200 Hz, twice the bandwidth (the Nyquist criterion).
In some cases, it is desirable to have a sampling frequency more than twice the bandwidth so that a digital filter can be used in exchange for a weaker analog anti-aliasing filter. This process is known as oversampling.
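As a small illustrative sketch (not part of the original article), the aliasing rule above can be checked numerically: sampling a 3 Hz tone at only 4 Hz yields exactly the same sample values as a 1 Hz tone, the |3 − 4| = 1 Hz alias.

```python
import numpy as np

f_signal = 3.0   # Hz: a tone of 3 Hz bandwidth needs a sampling rate above 6 Hz
f_slow = 4.0     # Hz: below that rate, so the tone aliases

t = np.arange(0, 2, 1 / f_slow)          # two seconds of samples
x = np.sin(2 * np.pi * f_signal * t)     # the 3 Hz tone, sampled too slowly

# The slow samples coincide (up to a sign flip) with a 1 Hz tone sampled
# on the same grid: the 3 Hz signal has aliased down to |3 - 4| = 1 Hz.
alias = -np.sin(2 * np.pi * 1.0 * t)
print(np.allclose(x, alias))             # True
```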
In digital audio, common sampling rates are:
- 8,000 Hz - telephone, adequate for human speech
- 11,025 Hz
- 22,050 Hz - radio
- 44,100 Hz - compact disc
- 48,000 Hz - digital sound used for films and professional audio
- 96,000 or 192,000 Hz - DVD-Audio, some LPCM DVD audio tracks, BD-ROM (Blu-ray Disc) audio tracks, and HD-DVD (High-Definition DVD) audio tracks
In digital video, which uses a CCD as the sensor, the sampling rate is defined by the frame/field rate rather than the notional pixel clock. All modern TV cameras use CCDs, and the image sampling frequency is the repetition rate of the CCD integration period.
- 13.5 MHz - CCIR 601, D1 video
- Continuous signal vs. Discrete signal
- Digital control
- Sample and hold
- Sample (signal)
- Sampling (information theory)
- Signal (information theory)
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
The life-giving ideas of chemistry are not reducible to physics. Or, if one tries to reduce them, they wilt at the edges, lose not only much of their meaning, but interest too. And, most importantly, they lose their chemical utility—their ability to relate seemingly disparate compounds to each other, their fecundity in inspiring new experiments. I'm thinking of concepts such as the chemical bond, a functional group and the logic of substitution, aromaticity, steric effects, acidity and basicity, electronegativity and oxidation-reduction. As well as some theoretical ideas I've been involved in personally—through-bond coupling, orbital symmetry control, the isolobal analogy.
Consider the notion of oxidation state. If you had to choose two words to epitomize the same-and-not-the-same nature of chemistry, would you not pick ferrous and ferric? The concept evolved at the end of the 19th century (not without confusion with "valency"), when the reality of ions in solution was established. As did a multiplicity of notations—ferrous iron is iron in an oxidation state of +2 (or is it 2+?) or Fe(II). Schemes for assigning oxidation states (sometimes called oxidation numbers) adorn every introductory chemistry text. They begin with the indisputable: In compounds, the oxidation states of the most electronegative elements (those that hold on most tightly to their valence electrons), oxygen and fluorine for example, are –2 and –1, respectively. After that the rules grow ornate, desperately struggling to balance wide applicability with simplicity.
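In the simplest cases those textbook rules reduce to arithmetic: fix the states of the most electronegative elements, then solve for the remaining atom so that everything sums to the overall charge. A toy sketch (the function and the abbreviated table are mine, not from the essay, and it ignores exactly the ligand subtleties discussed below):

```python
# Fixed oxidation states for a few unambiguous elements (toy table).
FIXED = {"O": -2, "F": -1, "H": +1, "K": +1, "Ba": +2}

def unknown_oxidation_state(composition, charge=0, unknown="Fe"):
    """composition maps element -> atom count, e.g. BaFeO4 is
    {"Ba": 1, "Fe": 1, "O": 4}; all states must sum to the overall charge."""
    known = sum(FIXED[el] * n for el, n in composition.items() if el != unknown)
    return (charge - known) / composition[unknown]

print(unknown_oxidation_state({"Ba": 1, "Fe": 1, "O": 4}, unknown="Fe"))  # 6.0
print(unknown_oxidation_state({"K": 1, "Ag": 1, "F": 4}, unknown="Ag"))   # 3.0
```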
The oxidation-state scheme had tremendous classificatory power (for inorganic compounds, not organic ones) from the beginning. Think of the sky blue color of chromium(II) versus the violet or green of chromium(III) salts, the four distinctly colored oxidation states of vanadium. Oliver Sacks writes beautifully of the attraction of these colors for a boy starting out in chemistry. And not only boys.
But there was more to oxidation states than just describing color. Or balancing equations. Chemistry is transformation. The utility of oxidation states dovetailed with the logic of oxidizing and reducing agents—molecules and ions that with ease removed or added electrons to other molecules. Between electron transfer and proton transfer you have much of reaction chemistry.
I want to tell you how this logic leads to quite incredible compounds, but first let's look for trouble. Not for molecules—only for the human beings thinking about them.
Those Charges are Real, Aren't They?
Iron is not only ferrous or ferric, but also comes in oxidation states ranging from +6 (in BaFeO4) to –2 (in the carbonylate Fe(CO)4^2–, a good organometallic reagent).
Is there really a charge of +6 on the iron in the first compound and a –2 charge in the carbonylate? Of course not, as Linus Pauling told us in one of his many correct (among some incorrect) intuitions. Such large charge separation in a molecule is unnatural. Those iron ions aren't bare—the metal center is surrounded by more or less tightly bound "ligands" of other simple ions (Cl– for instance) or molecular groupings (CN–, H2O, PH3, CO). The surrounding ligands act as sources or sinks of electrons, partly neutralizing the formal charge of the central metal atom. At the end, the net charge on a metal ion, regardless of its oxidation state, rarely lies outside the limits of +1 to –1.
Actually, my question should have been countered critically by another: How do you define the charge on an atom? A problem indeed. A Socratic dialogue on the concept would bring us to the unreality of dividing up electrons so they are all assigned to atoms and not partly to bonds. A kind of tortured pushing of quantum mechanical, delocalized reality into a classical, localized, electrostatic frame. In the course of that discussion it would become clear that the idea of a charge on an atom is a theoretical one, that it necessitates definition of regions of space and algorithms for divvying up electron density. And that discussion would devolve, no doubt acrimoniously, into a fight over the merits of uniquely defined but arbitrary protocols for assigning that density. People in the trade will recognize that I'm talking about "Mulliken population analysis" or "natural bond analysis" or Richard Bader's beautifully worked out scheme for dividing up space in a molecule.
What about experiment? Is there an observable that might gauge a charge on an atom? I think photoelectron spectroscopies (ESCA or Auger) come the closest. Here one measures the energy necessary to promote an inner-core electron to a higher level or to ionize it. Atoms in different oxidation states do tend to group themselves at certain energies. But the theoretical framework that relates these spectra to charges depends on the same assumptions that bedevil the definition of a charge on an atom.
An oxidation state bears little relation to the actual charge on the atom (except in the interior of the sun, where ligands are gone, there is plenty of energy, and you can have iron in oxidation states up to +26). This doesn't stop the occasional theoretician today from making a heap of a story when the copper in a formal Cu(III) complex comes out of a calculation bearing a charge of, say, +0.51.
Nor does it stop oxidation states from being just plain useful. Many chemical reactions involve electron transfer, with an attendant complex of changes in chemical, physical and biological properties. Oxidation state, a formalism and not a representation of the actual electron density at a metal center, is a wonderful way to "bookkeep" electrons in the course of a reaction. Even if that electron, whether added or removed, spends a good part of its time on the ligands.
But enough theory, or, as some of my colleagues would sigh, anthropomorphic platitudes. Let's look at some beautiful chemistry of extreme oxidation states.
Incredible, But True
Recently, a young Polish postdoctoral associate, Wojciech Grochala, led me to look with him at the chemical and theoretical design of novel high-temperature superconductors. We focused on silver (Ag) fluorides (F) with silver in oxidation states II and III. The reasoning that led us there is described in our forthcoming paper. For now let me tell you about some chemistry that I learned in the process. I can only characterize this chemistry as incredible but true. (Some will say that I should have known about it, since it was hardly hidden, but the fact is I didn't.)
Here is what Ag(II), unique to fluorides, can do. In anhydrous HF solutions it oxidizes Xe to Xe(II), generates C6F6+ salts from perfluorobenzene, takes perfluoropropylene to perfluoropropane, and liberates IrF6 from its stable anion. These reactions may seem abstruse to a nonchemist, but believe me, it's not easy to find a reagent that would accomplish them.
Ag(III) is an even stronger oxidizing agent. It oxidizes MF6– (where M=Pt or Ru) to MF6. Here is what Neil Bartlett at the University of California at Berkeley writes of one reaction: "Samples of AgF3 reacted incandescently with metal surfaces when frictional heat from scratching or grinding of the AgF3 occurred."
Ag(II), Ag(III) and F are all about equally hungry for electrons. Throw them one, and it's not at all a sure thing that the electron will wind up on the fluorine to produce fluoride (F–). It may go to the silver instead, in which case you may get some F2 from the recombination of F atoms.
Not that everyone can (or wants to) do chemistry in anhydrous HF, with F2 as a reagent or being produced as well. In a recent microreview, Thomas O'Donnell says (with some understatement), "... this solvent may seem to be an unlikely choice for a model solvent system, given its reactivity towards the usual materials of construction of scientific equipment." (And its reactivity with the "materials of construction" of human beings working with that equipment!) But, O'Donnell goes on to say, "... with the availability of spectroscopic and electrochemical equipment constructed from fluorocarbons such as Teflon and Kel-F, synthetic sapphire and platinum, manipulation of and physicochemical investigation of HF solutions in closed systems is now reasonably straightforward."
For this we must thank the pioneers in the field—generations of fluorine chemists, but especially Bartlett and Boris Zemva of the University of Ljubljana. Bartlett reports the oxidation of AgF2 to AgF4– (as KAgF4) using photochemical irradiation of F2 in anhydrous HF (made less acidic by adding KF to the HF). And Zemva used Kr2+ (in KrF2) to react with AgF2 in anhydrous HF in the presence of XeF6 to make XeF5+AgF4–. What a startling list of reagents!
To appreciate the difficulty and the inspiration of this chemistry, one must look at the original papers, or at the informal letters of the few who have tried it. You can find some of Neil Bartlett's commentary in the article that Wojciech and I wrote, and in an interview with him.
Charge It, Please
Chemists are always changing things. How to tune the propensity of a given oxidation state to oxidize or reduce? One way to do it is by changing the charge on the molecule that contains the oxidizing or reducing center. The syntheses of the silver fluorides cited above contain some splendid examples of this strategy. Let me use Bartlett's words again, just explaining that "electronegativity" gauges in some rough way the tendency of an atom to hold on to electrons. (High electronegativity means the electron is strongly held, low electronegativity that it is weakly held.)
It's easy to make a high oxidation state in an anion because an anion is electron-rich. The electronegativity is lower for a given oxidation state in an anion than it is in a neutral molecule. That, in turn, is lower than it is in a cation. If I take silver and I expose it to fluorine in the presence of fluoride ion, in HF, and expose it to light to break F2 into atoms, I convert the silver to silver(III), AgF4–. This is easy because the Ag(III) is in an anion. I can then pass in boron trifluoride and precipitate silver trifluoride, which is now a much more potent oxidizer than AgF4– because the electronegativity in the neutral AgF3 is much higher than it is in the anion. If I can now take away a fluoride ion, and make a cation, I drive the electronegativity even further up. With such a cation, for example AgF2+, I can steal the electron from PtF6– and make PtF6.... This is an oxidation that even Kr(II) is unable to bring about.
Simple, but powerful reasoning. And it works.
A World Record?
Finally, a recent oxidation-state curiosity: What is the highest oxidation state one could get in a neutral molecule? Pekka Pyykkö and coworkers suggest cautiously, but I think believably, that octahedral UO6, that is U(XII), may exist. There is evidence from other molecules that uranium 6p orbitals can get involved in bonding, which is what they would have to do in UO6.
What wonderful chemistry has come—and still promises to come—from the imperfect logic of oxidation states!
© Roald Hoffmann
I am grateful to Wojciech Grochala, Robert Fay and Debra Rolison for corrections and comments. Thanks to Stan Marcus for suggesting the title of this column. | <urn:uuid:17b06ea8-6a78-4eda-b899-ce63819d7113> | 3.046875 | 2,582 | Comment Section | Science & Tech. | 42.922943 | 92 |
Taking a sample is just the beginning and preserving and processing specimens requires more than just the e-word.
Having completed eight dives, at least sixteen shore excursions, one nightlighting session, six trips to the fish markets, several roadside purchases and a surprise swim up to a fishing boat, the scientists have well and truly justified that purchase of 90 litres of ethanol on Day 1.
On each trip the preserving began in the field and continued back at our accommodation. The specimens will soon be shipped to the Museum where a thorough analysis can be performed, possibly including SEM photography and DNA sequencing. After that, identifications can be confirmed if necessary and scientific papers can be published.
So how did the scientists start to process their samples here in Timor? I spoke with about half the team to find out (because talking to all of them would've made this post longer than a Dead Sea scroll).
Lauren studies amphipods, an order of crustaceans, and was picking samples from dive sites (and one nightlighting session) with these small (usually less than 10 mm) creatures in mind. She uses the freshwater dip method, which means placing her substrate samples in a bucket of tap water, as it encourages saltwater animals to release their holds.
Lauren then elutriated (swirled) the bucket, causing the heavier sediment to fall to the bottom and the lighter amphipods to rise to the top. The swirling water was slowly tipped out into a sieve. What was filtered out was then placed in a tray and examined for animals, which were picked up with forceps or pipettes and placed into jars of ethanol. After that, the habitat samples that remained in the bucket were also placed in the tray and similarly examined for fauna.
We had a team of five fish scientists (dubbed ‘the fishos’) on this trip who worked together to process their samples and make strange, bawdy jokes whenever possible. While they did collect samples at the fish markets, the vast majority of their specimens were taken on dives and kept in plastic bags of seawater (in eskies) until they returned to camp.
Hijacking the dinner table for hours at a time, their processing often looked and sounded like question time at Parliament, but on closer examination was actually a highly organised and efficient affair. Their processing started with placing the fishes into trays and tubs of ice, dividing the day’s catch roughly by type.
From there the team began to identify the fishes, with each member focussing on those species they specialise in: Barry covered the wrasses, for example, and Jeff the cardinal fishes. They would often check with each other, however, and consult the stack of reference books they brought with them.
Once a fish was named it would go to Mark for photographing, sometimes having their fins pinned out if they had distinctive colours. Sally was the chief scribe during all of this, recording names, sizes and number of specimens, as well as writing small labels for Mark to photograph with the fishes.
If a fish was identified as one that hadn't been previously collected on the trip, then it would go to Amanda who would extract a small piece of muscle tissue with a scalpel and place it into a vial of DMSO solution (which is conducive to DNA analysis). The rest of the fish would be placed in formalin.
Penny came on this trip to collect crustaceans and fishes and was involved in the processing of invertebrate samples. Like Lauren, Penny did most of her sorting in a tray, but her samples remained in saltwater until they were picked out and placed in jars of ethanol.
From experience she was able to remove large pieces of reef rock from the tray that were unlikely to contain animals (those that shake ‘clean’ in the water for example) and make use of chisels to break down the likely ones (those with cracks and crevices).
Nerida and Greg frequently collected sediment from the ocean floor, elutriating at the back of the boat and transferring the filtered portion to plastic bags. Back at their makeshift lab they would examine this remainder in dishes under the microscope, looking for sea slugs and similar animals not much bigger than the grains of sediment they move between.
The process involved multiple dishes and frequent changing and cleaning of the seawater to make it easier to observe these microscopic animals. At the end of the process they would have a collection of specimens and would decide then which ones to process further. These would be photographed while still alive and depending on their size, have either a subsample taken for DNA analysis (and placed in a vial of ethanol) or be placed whole in ethanol or formalin.
Rosemary searched for tiny sea snails in mangrove and intertidal zones in Timor, collecting samples of mud, leaf matter and debris which she would ‘coarsely wash’ in the field using a very fine sieve. Keeping these samples cool in a bucket, she would return to the lab and use a spoon to scoop this ‘washed off’ matter into a petri dish.
The contents of the dish were swirled around so that it would settle into one layer, making it easier to see crawling animals or shapes that she recognises. She placed her specimens in ethanol for photographing and DNA sequencing back at the Museum.
Mandy surveyed the dive sites, markets and random fishing boats for cephalopods (octopuses, squid and cuttlefish). Back at the hotel, she would photograph each specimen and take two tissue samples: one muscle and one from the gills.
These samples would be placed in separate vials of ethanol. Later, the ethanol will be poured off and the samples added to our frozen tissue collection at the Museum and made available for DNA analysis.
The remainder of the specimens she acquired were fixed in formalin, with the beaks and radulas of the squid and the cuttlebones of the cuttlefish being detached beforehand (yes, squid have beaks). These will eventually be transferred to 70% ethanol, given a number in our database and added to our collection for long term storage. | <urn:uuid:81d6fb10-6b67-4af1-9eb5-b3504291e57d> | 2.828125 | 1,274 | News (Org.) | Science & Tech. | 42.172172 | 93 |
"We believe this is the first time bacterial horizontal gene transfer has been observed in eukaryotes at such scale," says senior author Igor Grigoriev of DOE JGI. "This study gets us closer to explaining the dramatic diversity across the genera of diatoms, morphologically, behaviorally, but we still haven't yet explained all the differences conferred by the genes contributed by the other taxa."
From plants, the diatom inherited photosynthesis, and from animals the production of urea. Bowler speculates that the diatom uses urea to store nitrogen, not to eliminate it like animals do, because nitrogen is a precious nutrient in the ocean. What's more, the tiny alga draws on the best of both worlds: it can convert fat into sugar, as well as sugar into fat, which is extremely useful in times of nutrient shortage.
The team documented more than 300 genes sourced from bacteria and found in both types of diatoms, pointing to their ancient origin and suggesting novel mechanisms of managing nutrients (for example, utilization of organic carbon and nitrogen) and detecting cues from their environment.
Diatoms, encapsulated by elaborate lacework-like shells made of glass, are only about one-third of a strand of hair in diameter. "The diatom genomes will help us to understand how they can make these structures at ambient temperatures and pressures, something that humans are not able to do. If we can learn how they do it, we could open up all kinds of new nanotechnologies, like for building miniature silicon chips or for biomedical applications," says Bowler.
Diatoms reside in fresh or salt water and can be divided into two camps, centrics and pennates. The centric Thalassiosira resemble a round "Camembert" cheese box (only much smaller) and pennates like Phaeodactylum look more like a cross between a boomerang and a narrow three-cornered hathence the species name, tricornutum. Not only is their shape and habitat dive
The amount of nitrogen entering the Gulf each spring has increased about 300 percent since the 1960s, mainly due to increased agricultural runoff, Scavia said.
"Yes, the floodwaters really matter, but the fact that there's so much more nitrogen in the system now than there was back in the '60s is the real issue," he said. Scavia's computer model suggests that if today's floods contained the level of nitrogen from the last comparable flood, in 1973, the predicted dead zone would be 5,800 square miles rather than 8,500.
"The growth of these dead zones is an ecological time bomb," Scavia said. "Without determined local, regional and national efforts to control them, we are putting major fisheries at risk." The Gulf of Mexico/Mississippi River Watershed Nutrient Task Force has set the goal of reducing the size of the dead zone to about 1,900 square miles.
In 2009, the dockside value of commercial fisheries in the Gulf was $629 million. Nearly 3 million recreational fishers further contributed more than $1 billion to the Gulf economy, taking 22 million fishing trips.
The Gulf hypoxia research team is supported by NOAA's Center for Sponsored Coastal Ocean Research and includes scientists from the University of Michigan, Louisiana State University and the Louisiana Universities Marine Consortium. NOAA has funded investigations and forecast development for the dead zone in the Gulf of Mexico since 1990.
"While there is some uncertainty regarding the size, position and timing of this year's hypoxic zone in the Gulf, the forecast models are in overall agreement that hypoxia will be larger than we have typically seen in recent years," said NOAA Administrator Jane Lubchenco.
The actual size of the 2011 Gulf hypoxic zone will be announced
First ever direct measurement of the Earth’s rotation
Geodesists are pinpointing the orientation of the Earth’s axis using the world’s most stable ring laser
A group of researchers at the Technical University of Munich (TUM) and the Federal Agency for Cartography and Geodesy (BKG) are the first to plot changes in the Earth's axis through laboratory measurements. To do this, they constructed the world's most stable ring laser in an underground lab and used it to determine changes in the Earth's rotation. Previously, scientists were only able to track shifts in the polar axis indirectly by monitoring fixed objects in space. Capturing the tilt of the Earth's axis and its rotational velocity is crucial for precise positional information on Earth – and thus for the accurate functioning of modern navigation systems, for instance. The scientists' work has been recognized as an Exceptional Research Spotlight by the American Physical Society.
The Earth wobbles. Like a spinning top touched in mid-spin, its rotational axis fluctuates in relation to space. This is partly caused by gravitation from the sun and the moon. At the same time, the Earth’s rotational axis constantly changes relative to the Earth’s surface. On the one hand, this is caused by variation in atmospheric pressure, ocean loading and wind. These elements combine in an effect known as the Chandler wobble to create polar motion. Named after the scientist who discovered it, this phenomenon has a period of around 435 days. On the other hand, an event known as the “annual wobble” causes the rotational axis to move over a period of a year. This is due to the Earth’s elliptical orbit around the sun. These two effects cause the Earth’s axis to migrate irregularly along a circular path with a radius of up to six meters.
Capturing these movements is crucial to create a reliable coordinate system that can feed navigation systems or project trajectory paths in space travel. “Locating a point to the exact centimeter for global positioning is an extremely dynamic process – after all, at our latitude, we are moving at around 350 meters per second to the east,” explains Prof. Karl Ulrich Schreiber, now station director of the Geodetic Observatory Wettzell, where the ring laser is located. Schreiber had directed the project in TUM’s Research Section Satellite Geodesy. The Geodetic Observatory Wettzell is run jointly by TUM and BKG.
The researchers have succeeded in corroborating the Chandler and annual wobble measurements based on the data captured by radio telescopes. They now aim to make the apparatus more accurate, enabling them to determine changes in the Earth's rotational axis over a single day. The scientists also plan to make the ring laser capable of continuous operation, so that it can run for a period of years without any deviations. “In simple terms,” concludes Schreiber, “in future we want to be able to just pop down into the basement and find out exactly how fast the Earth is turning right now.”
For more information please visit the TU München homepage http://portal.mytum.de/pressestelle/pressemitteilungen/NewsArticle_20111220_100621/newsarticle_view?. | <urn:uuid:d4281798-7278-4727-a736-be4cecc072f8> | 3.921875 | 710 | News (Org.) | Science & Tech. | 40.519179 | 96 |
Computer Models How Buds Grow Into Leaves
Posted on March 02, 2012 at 08:24:51 am
"A bud does not grow in all directions at the same rate," said Samantha Fox from the John Innes Centre on Norwich Research Park. "Otherwise leaves would be domed like a bud, not flat with a pointed tip."
By creating a computer model to grow a virtual leaf, the BBSRC-funded scientists managed to discover simple rules of leaf growth.
Similar to the way a compass works, plant cells have an inbuilt orientation system. Instead of a magnetic field, the cells have molecular signals to guide the axis on which they grow. As plant tissues deform during growth, the orientation and axis change.
The molecular signals become patterned from an early stage within the bud, helping the leaf shape to emerge.
The researchers filmed a growing Arabidopsis leaf, a relative of oil seed rape, to help create a model which could simulate the growing process. They were able to film individual cells and track them as the plant grew.
It was also important to unpick the workings behind the visual changes and to test them in normal and mutant plants.
"The model is not just based on drawings of leaf shape at different stages," said Professor Enrico Coen. "To accurately recreate dynamic growth from bud to leaf, we had to establish the mathematical rules governing how leaf shapes are formed."
With this knowledge programmed into the model, developed in collaboration with Professor Andrew Bangham's team at the University of East Anglia, it can run independently to build a virtual but realistic leaf.
Professor Douglas Kell, Chief Executive of BBSRC said: "This exciting research highlights the potential of using computer and mathematical models for biological research to help us tackle complex questions and make predictions for the future. Computational modelling can give us a deeper and more rapid understanding of the biological systems that are vital to life on earth."
The model could now be used to help identify the genes that control leaf shape and whether different genes are behind different shapes.
"This simple model could account for the basic development and growth of all leaf shapes," said Fox. "The more we understand about how plants grow, the better we can prepare for our future -- providing food, fuel and preserving diversity." | <urn:uuid:c1dfdc96-199a-4526-ae01-3ec1190ffbcd> | 3.609375 | 471 | News Article | Science & Tech. | 43.409487 | 97 |
You have to like the attitude of Thomas Henning (Max-Planck-Institut für Astronomie). The scientist is a member of a team of astronomers whose recent work on planet formation around TW Hydrae was announced this afternoon. Their work used data from ESA’s Herschel space observatory, which has the sensitivity at the needed wavelengths for scanning TW Hydrae’s protoplanetary disk, along with the capability of taking spectra for the telltale molecules they were looking for. But getting observing time on a mission like Herschel is not easy and funding committees expect results, a fact that didn’t daunt the researcher. Says Henning, “If there’s no chance your project can fail, you’re probably not doing very interesting science. TW Hydrae is a good example of how a calculated scientific gamble can pay off.”
I would guess the relevant powers that be are happy with this team’s gamble. The situation is this: TW Hydrae is a young star of about 0.6 Solar masses some 176 light years away. The proximity is significant: This is the closest protoplanetary disk to Earth with strong gas emission lines, some two and a half times closer than the next possible subjects, and thus intensely studied for the insights it offers into planet formation. Out of the dense gas and dust here we can assume that tiny grains of ice and dust are aggregating into larger objects and one day planets.
Image: Artist’s impression of the gas and dust disk around the young star TW Hydrae. New measurements using the Herschel space telescope have shown that the mass of the disk is greater than previously thought. Credit: Axel M. Quetz (MPIA).
The challenge of TW Hydrae, though, has been that the total mass of the molecular hydrogen gas in its disk has remained unclear, leaving us without a good idea of the particulars of how this infant system might produce planets. Molecular hydrogen does not emit detectable radiation, while basing a mass estimate on carbon monoxide is hampered by the opacity of the disk. For that matter, basing a mass estimate on the thermal emissions of dust grains forces astronomers to make guesses about the opacity of the dust, so that we’re left with uncertainty — mass values have been estimated anywhere between 0.5 and 63 Jupiter masses, and that’s a lot of play.
Error bars like these have left us guessing about the properties of this disk. The new work takes a different tack. While hydrogen molecules don’t emit measurable radiation, those hydrogen molecules that contain a deuterium atom, in which the atomic nucleus contains not just a proton but an additional neutron, emit significant amounts of radiation, with an intensity that depends upon the temperature of the gas. Because the ratio of deuterium to hydrogen is relatively constant near the Sun, a detection of hydrogen deuteride can be multiplied out to produce a solid estimate of the amount of molecular hydrogen in the disk.
The Herschel data allow the astronomers to set a lower limit for the disk mass at 52 Jupiter masses, the most useful part of this being that this estimate has an uncertainty ten times lower than the previous results. A disk this massive should be able to produce a planetary system larger than the Solar System, which scientists believe was produced by a much lighter disk. When Henning spoke about taking risks, he doubtless referred to the fact that this was only the second time hydrogen deuteride has been detected outside the Solar System. The pitch to the Herschel committee had to be persuasive to get them to sign off on so tricky a detection.
But 36 Herschel observations (with a total exposure time of almost seven hours) allowed the team to find the hydrogen deuteride they were looking for in the far-infrared. Water vapor in the atmosphere absorbs this kind of radiation, which is why a space-based detection is the only reasonable choice, although the team evidently considered the flying observatory SOFIA, a platform on which they were unlikely to get approval given the problematic nature of the observation. Now we have much better insight into a budding planetary system that is taking the same route our own system did over four billion years ago. What further gains this will help us achieve in testing current models of planet formation will be played out in coming years.
The paper is Bergin et al., “An Old Disk That Can Still Form a Planetary System,” Nature 493 (31 January 2013), pp. 644–646 (preprint). Be aware as well of Hogerheijde et al., “Detection of the Water Reservoir in a Forming Planetary System,” Science 334 (2011), p. 338. The latter, many of whose co-authors also worked on the Bergin paper, used Herschel data to detect cold water vapor in the TW Hydrae disk, with this result:
Our Herschel detection of cold water vapor in the outer disk of TW Hya demonstrates the presence of a considerable reservoir of water ice in this protoplanetary disk, sufficient to form several thousand Earth oceans worth of icy bodies. Our observations only directly trace the tip of the iceberg of 0.005 Earth oceans in the form of water vapor.
Clearly, TW Hydrae has much to teach us.
Addendum: This JPL news release notes that although a young star, TW Hydrae had been thought to be past the stage of making giant planets:
“We didn’t expect to see so much gas around this star,” said Edwin Bergin of the University of Michigan in Ann Arbor. Bergin led the new study appearing in the journal Nature. “Typically stars of this age have cleared out their surrounding material, but this star still has enough mass to make the equivalent of 50 Jupiters,” Bergin said. | <urn:uuid:a225f201-6f03-4503-bb76-bd2fde1838a7> | 3.515625 | 1,210 | Knowledge Article | Science & Tech. | 46.712272 | 98 |
Consider four vectors F1, F2, F3, and F4, where their magnitudes are F1 = 43 N, F2 = 36 N, F3 = 19 N, and F4 = 54 N. Let θ1 = 120°, θ2 = −130°, θ3 = 200°, and θ4 = −67°, measured from the positive x axis with the counter-clockwise angular direction as positive.

What is the magnitude of the resultant vector F, where F = F1 + F2 + F3 + F4? Answer in units of N.

What is the direction of this resultant vector F? Note: give the angle in degrees, using counterclockwise as the positive angular direction, between the limits of −180° and +180° measured from the positive x axis. Answer in units of °.

I worked out the first part of the question by using trigonometric rules. My x value = −5.68671 and my y value = −33.5474. The magnitude came out to 34.026 N. I tried finding the direction by using θ = tan⁻¹(y/x) but I can't get the right answer.
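The stumbling block in that last step is the classic quadrant problem: with x and y both negative the resultant lies in the third quadrant, but a plain tan⁻¹(y/x) returns a first-quadrant angle, so 180° must be subtracted (with the poster's components, tan⁻¹(33.5474/5.68671) ≈ 80.4°, giving a direction of about −99.6°). Note that the four magnitudes and angles stated in the problem don't reproduce the poster's intermediate x and y, so their attempt likely used a different number set; the method is what matters. A short illustrative sketch of the whole computation, using atan2 to handle the quadrant automatically:

```python
import math

# (magnitude in N, angle in degrees from the +x axis, CCW positive)
forces = [(43, 120), (36, -130), (19, 200), (54, -67)]

x = sum(F * math.cos(math.radians(a)) for F, a in forces)
y = sum(F * math.sin(math.radians(a)) for F, a in forces)

magnitude = math.hypot(x, y)
# atan2 accounts for the signs of x and y, so the angle lands in the
# correct quadrant, within (-180, 180] degrees.
direction = math.degrees(math.atan2(y, x))

print(magnitude, direction)
```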