What on Earth is that? The Richat Structure in the Sahara Desert of Mauritania is easily visible from space because it is nearly 50 kilometers across. Once thought to be an impact crater, the flat middle and lack of shock-altered rock indicate otherwise. The possibility that the Richat Structure was formed by a volcanic eruption also seems improbable because of the lack of a dome of igneous or volcanic rock. Rather, the layered sedimentary rock of the Richat Structure is now thought by many to have been caused by uplifted rock sculpted by erosion. The above image was captured by an instrument onboard an orbiting satellite. Why the Richat Structure is nearly circular remains a mystery.
One way to explain the existence of the Moon is through a giant collision, one that tore off enough material to build a satellite in a planetary orbit. Can Pluto and its moon Charon be explained the same way? Robin Canup thinks so. Canup is assistant director of Southwest Research Institute’s Department of Space Studies; she argues the case in the January 28 issue of Science.
The Moon may seem large in our skies, but it makes up only about 1 percent of Earth’s mass. Charon, on the other hand, is 10 to 15 percent the mass of Pluto, which suggests to Canup that the corresponding collision must have been with an object almost as large as Pluto itself. She also believes that Charon probably formed intact as a result of the collision.
“This work suggests that despite their many differences, our Earth and the tiny, distant Pluto may share a key element in their formation histories. This provides further support for the emerging view that stochastic impact events may have played an important role in shaping final planetary properties in the early solar system,” said Canup.
While the collision theory for both the Moon and the Pluto/Charon pair is not new, Canup is the first scientist to model the scenario successfully. Her work on Earth’s Moon indicated that an impact from a Mars-sized object could have produced it. This work accounted for both the iron depletion of our satellite and its mass and angular momentum.
Image: Pluto and Charon as viewed by the Hubble Space Telescope. Note the darker tint of Charon, indicating differences in surface composition. Also, note what may be a surface feature at the center of the Pluto image. Credit: Space Telescope Science Institute.
You can see an animation of the proposed Pluto/Charon collision here.
Sources: Canup’s work on the Earth’s Moon and its formation can be found in Canup, Robin M. and Erik Asphaug, “Origin of the Moon in a giant impact near the end of the Earth’s formation,” Nature 412, 708-712 (16 August 2001). Her most recent article on the Pluto/Charon simulations will appear in the January 28 issue of Science. It follows a series of earlier studies building the case for the collision hypothesis.
Overview: This tutorial discusses
isotopes and isotopic abundance. Nuclear binding energy and mass defect are
presented and the technique of mass spectrometry is introduced.
The current system of atomic masses was instituted in 1961 and is based on
the mass of ¹²C (read "carbon-12"). By definition the atomic mass
of a single ¹²C atom is exactly 12 atomic mass units (denoted by
the abbreviation amu or u). The masses of all other elements are based on this
standard.
Note: These latter two processes only occur in nuclear reactions, not in normal
chemical reactions. Chemical reactions are processes in which the number of
electrons held or shared by an atom changes. Nuclear reactions are processes
that involve changing the number of neutrons or protons held in the nucleus
of an atom.
How can we determine isotopic masses? It seems we should be able to add together
the masses of the constituent subatomic particles to determine the isotopic mass.
In the following example we will see how accurate this approach is.
Estimate the atomic mass of ⁷Li based on the masses of the constituent subatomic particles.
⁷Li: Mass Number: A = 7 = # of protons + # of neutrons
Atomic Number: Z = 3 = # of protons
number of neutrons = A − Z = 4
number of electrons = 3 (this is a neutral atom)
Atomic mass ≈ (# p⁺)(mass of p⁺) + (# n⁰)(mass of n⁰) + (# e⁻)(mass of e⁻)
= (3)(1.00728 u) + (4)(1.00867 u) + (3)(5.5 × 10⁻⁴ u)
= 7.05817 u
The experimentally determined value, measured by a technique called mass spectrometry, is 7.016005 u.
The difference in mass is: Δm = 7.05817 u − 7.016005 u = 0.042165 u.
Approximately 0.6% of the mass is missing. This raises the question: what happened to this mass?
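The missing mass has been converted into the energy that binds the nucleus together, E = Δm·c². The sketch below redoes the whole example in Python; the only step not shown above is the unit conversion (1 u per atom corresponds to 1 g/mol):

```python
# Minimal sketch of the lithium-7 example, using only the particle
# masses quoted above.

M_PROTON, M_NEUTRON, M_ELECTRON = 1.00728, 1.00867, 5.5e-4  # u
C = 2.998e8                                                 # speed of light, m/s

estimated = 3 * M_PROTON + 4 * M_NEUTRON + 3 * M_ELECTRON   # 7.05817 u
measured = 7.016005                                         # u, from mass spectrometry
defect = estimated - measured                               # 0.042165 u

# Convert the defect to kg per mole of atoms, then apply E = (delta m) c^2.
energy_kj_per_mol = (defect * 1e-3) * C**2 / 1e3            # u/atom -> kg/mol, J -> kJ

print(f"estimated mass: {estimated:.5f} u")
print(f"mass defect:    {defect:.6f} u ({defect/estimated:.2%} of the total)")
print(f"binding energy: {energy_kj_per_mol:.3g} kJ/mol")    # ~3.79e9 kJ/mol
```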
Therefore 3.79 × 10⁹ kJ/mol is the amount of energy needed to break
the nucleus apart. This is much larger than the energy involved in normal chemical
reactions or processes. For instance, to remove an electron from an atom requires
only about 5 × 10² kJ/mol. Hence it takes ~10⁷ times more energy
to break apart the ⁷Li nucleus. This is a massive amount of energy.
The periodic table lists the atomic mass for each element. For instance, the
entry for copper (Cu) in the periodic table indicates an atomic mass of 63.546
u, but what does this really mean? In nature, Cu exists in two different isotopic
forms, ⁶³Cu and ⁶⁵Cu, and their natural abundances are
69% and 31%, respectively. We can use this data to solve for the elemental
atomic mass as a weighted average:

elemental atomic mass = Σᵢ (fᵢ × mᵢ)

where:
i = an index identifying each isotope for the element
fᵢ = fractional abundance of isotope i
mᵢ = mass of isotope i
The elemental atomic mass is the atomic mass that appears in the periodic table.
It is nothing more than a weighted average of the isotopic masses of all the
naturally occurring isotopes.
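As a quick check in code, here is the weighted average for copper; note that the isotopic masses of ⁶³Cu (62.9296 u) and ⁶⁵Cu (64.9278 u) are standard values not quoted above:

```python
# Weighted average of isotopic masses -> elemental atomic mass.

def elemental_atomic_mass(isotopes):
    """isotopes: iterable of (fractional_abundance, isotopic_mass_u) pairs."""
    return sum(f * m for f, m in isotopes)

copper = [(0.69, 62.9296), (0.31, 64.9278)]
print(f"Cu: {elemental_atomic_mass(copper):.2f} u")  # ~63.55 u, close to the tabulated 63.546 u
```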
We have been talking about isotopes for a while, but still have not formally
defined them. Isotopes are atoms of the same element that differ in the
number of neutrons in the nucleus and therefore they have different masses.
Nevertheless isotopes have practically identical chemical properties.

What is the elemental atomic mass of naturally occurring silicon? The naturally
occurring isotopes and their isotopic abundances are (standard values):

²⁸Si: 27.977 u, 92.23%
²⁹Si: 28.976 u, 4.68%
³⁰Si: 29.974 u, 3.09%

*Note that the natural abundances must add up to 100%!
The atomic mass of a specific atom or molecule is determined by using an experimental
technique called mass spectrometry. This technique separates the different isotopes
of atoms to allow determination of the percent abundance or isotopic composition
of the element in the given sample. Follow this link to learn the details of how a mass spectrometer works:
Each isotope appears as a peak in the mass spectrum. The intensity (height)
of each peak depends on the abundance of that isotope in the sample and the
unique location of the peak on the x-axis indicates the mass-to-charge ratio
(m/q) of the isotope.
Mass spectrometry is used in a diverse range of applications, such as accurate determinations of molecular masses, drug testing, determining the age of archaeological artifacts (¹⁴C dating) and for studying the chemistry of DNA (see the Advanced Application below).
Consider the mass spectrum of silicon, shown below. The abundances are the
same as those in Example 2. As you can see, there are three isotopes. Each peak
represents one of the isotopes. The most abundant isotope has the highest peak
intensity and the least abundant isotope has the smallest intensity. Since the
peak intensities (heights) are proportional to the isotopic abundances, analysis
of the data allows for the determination of the relative abundances of each
isotope in the sample.
There are only two naturally occurring isotopes of boron, ¹⁰B and
¹¹B. If ¹⁰B has a mass of 10.013 u and ¹¹B
has a mass of 11.006 u, what are the percent natural abundances of ¹⁰B
and ¹¹B? The elemental atomic mass of boron, from the periodic table, is 10.811 u, so:

10.811 u = f(¹⁰B)(10.013 u) + f(¹¹B)(11.006 u)
Now we have two unknowns and only one equation. We need another equation related
to the fractional abundances. We know that the fractional abundances must add
up to 1.00 (the percent abundances must add up to 100%), so:
Notice that there are no units associated with the fractional abundance. So
naturally occurring boron is 19.64% ¹⁰B and 80.36% ¹¹B.
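A small sketch of the same solution in Python; the 10.811 u elemental mass of boron is taken from the periodic table:

```python
# Two equations, two unknowns: f10 + f11 = 1 and
# f10*10.013 + f11*11.006 = 10.811.  Substituting f11 = 1 - f10
# reduces this to one equation in one unknown.

m10, m11 = 10.013, 11.006   # u
m_b = 10.811                # u, elemental atomic mass of boron

f10 = (m11 - m_b) / (m11 - m10)
f11 = 1 - f10
print(f"10B: {f10:.2%}, 11B: {f11:.2%}")   # ~19.64% and ~80.36%
```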
This is all you need to know about mass spectrometry at this point; however,
if you would like to know more about how a mass spectrometer works go to: http://www.chemguide.co.uk/analysis/masspec/howitworks.html
Now you should have a good understanding of atomic masses, isotopic masses, and the mass
defect ("missing mass"), along with nuclear binding energy. You should be comfortable performing calculations
to determine isotopic abundances, elemental atomic masses, and nuclear
binding energies. Additionally, you have been introduced to the method of mass
spectrometry, which is used to determine atomic mass.
How to Traverse Data with Apply Functions in R
R has a powerful suite of functions that allows you to apply a function repeatedly over the elements of a list. The interesting and crucial thing about this is that it happens without an explicit loop.
Because this is such a useful concept, you’ll come across quite a few different flavors of functions in the apply family of functions. The specific flavor of apply() depends on the structure of data that you want to traverse:
Array or matrix: Use the apply() function. This traverses either the rows or columns of a matrix, applies a function to each resulting vector, and returns a vector of summarized results.
List: Use the lapply() function to traverse a list, apply a function to each element, and return a list of the results. Sometimes it’s possible to simplify the resulting list into a matrix or vector. This is what the sapply() function does.
The ability to apply a function over the elements of a list is one of the distinguishing features of the functional programming style, as opposed to an imperative programming style. In the imperative style you use loops; in the functional programming style you apply functions. R has a variety of apply-type functions, including apply(), lapply(), and sapply().
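Here is a minimal sketch of the three flavors described above; the expected results are shown in comments:

```r
# Traverse a matrix and a list without an explicit loop.
m <- matrix(1:6, nrow = 2)   # 2 rows, 3 columns

apply(m, 1, sum)    # over rows:    c(9, 12)
apply(m, 2, sum)    # over columns: c(3, 7, 11)

l <- list(a = 1:3, b = c(10, 20))
lapply(l, mean)     # a list: list(a = 2, b = 15)
sapply(l, mean)     # simplified to a named vector: c(a = 2, b = 15)
```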
To construct an equation for the number of potentially detectable alien civilizations, we must first consider the factors that will affect this number.
Before I begin explaining how we construct the equation, I want to say that this is a slightly altered version of the Drake equation. It yields exactly the same results (comment for an explanation as to why), but I personally feel this version is more intuitive and far easier to grasp as a concept. I also need to say that although this is an equation of sorts, it will not give a single correct answer: we simply do not know enough about many of the variables to treat them as constants, so the result will change with each interpretation.
Let's start by thinking about what we need as the variables in the equation. The obvious first thing to consider is the number of star systems in our galaxy, which we will denote S. Our best estimate for this is anything from 200 billion to 600 billion stars, and as our telescopes grow more powerful the estimates keep getting more accurate.
A lot of these star systems are simply devoid of planets. So the next variable we need to consider is the fraction of stars with planets, which we will denote P. This really is a massive estimate; we could never know the exact fraction of stars that have planets, but we do have methods of checking whether a star has a planet, and currently it is thought that around 50% of stars have planets.
We now have an approximation of the number of stars with planets orbiting them. To narrow this down to the number of detectable alien civilizations, the next thing to ask is in what fraction of these star systems lies a planet capable of supporting life; moreover, we want a planet that could be Earth-like, since as far as we know life can only develop on Earth-like planets. (For all we know there may be incredibly intelligent gaseous beings on a distant planet who have developed a highly technical civilization; we just do not know.) We will denote this variable E, for Earth-like planets. To put a number on this I will pluck one completely out of the air and say only 10% of planets could be capable of sustaining life.
Just because the tools are there does not mean life will result. This variable is the fraction of Earth-like planets that do develop life, which we will denote L. I think a lot of Earth-like planets will evolve life in some form: there is life at the very deepest depths of the ocean, and there is life in places where a human would be completely obliterated within seconds. For this reason I think that if a planet has the means to sustain life, it often will, at least half of the time by my estimate, so to be conservative I will put this variable at 0.5.
Again, just because there is life, it doesn't mean that it will ever become 'intelligent' enough. Life may not have evolved as far on Earth had the dinosaurs not been wiped out; they may never have developed to a state intelligent enough to communicate using radio waves. This variable is the fraction of life that will develop into intelligent life, which we will denote I. This will be far rarer than life merely existing, and there is not even a rough number we can apply to it, but for the sake of this exercise let's say that 1% of life will at some stage become intelligent.
The next variable to consider is the fraction of these civilizations that communicate via a means we are able to detect, which we will denote C. For example, humans have been around and intelligent for thousands of years, but only for the last 80 or so have we been detectable, thanks to the discovery of radio waves; before that we were, as a civilization, undetectable. We may also discover in a few hundred years that there are far more efficient and productive methods of communication, which other alien civilizations may already be using. So let's say, again pulling a number completely at random, that 10% of intelligent civilizations develop a means of communication that we can detect.
The last variable to consider is the fraction of a star system's average lifetime during which a civilization is able to communicate; we will call this T. How long a civilization is able to communicate is something that, although we do not know it (as we still exist, just!), we can estimate. We have only been able to communicate via radio waves for 80 years, and every single one of those has been riddled with war. However, I remain optimistic that we, and all intelligent life, should be able to last about 10,000 years in the communicating state. The average lifetime of a star system is around 10 billion years, so T = 10,000/10,000,000,000, which as a decimal is 0.000001.
The equation that we now have, after considering all the things we need to look for is:
Number of Alien Civilizations = Number of Stars * Fraction of Stars with Planets * Fraction of Earth-like Planets * Fraction of Planets with Life * Fraction with Intelligent Life * Fraction that Communicate Detectably * (Communicating Lifetime)/(Lifetime of Star System)
This is a lot of text for maths, so to put it algebraically:
N = S*P*E*L*I*C*T
Now if we input the estimates that I designated earlier we get:
N = 200,000,000,000 * 0.5 * 0.1 * 0.5 * 0.01 * 0.1 * 0.000001
Now if we do this calculation we get that in this case N = 5. So there should be about 5 alien civilizations that are detectable, from my very rough estimates.
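Here is the same calculation as a minimal Python sketch, using the guesses above, so you can vary any factor and watch N respond:

```python
# The whole estimate in one place; all values are the guesses from the text.

S = 200e9      # star systems in the galaxy
P = 0.5        # fraction of stars with planets
E = 0.1        # fraction of those planets that are Earth-like
L = 0.5        # fraction of Earth-like planets that develop life
I = 0.01       # fraction of life that becomes intelligent
C = 0.1        # fraction of intelligent civilizations we could detect
T = 10_000 / 10_000_000_000   # communicating lifetime / star system lifetime

N = S * P * E * L * I * C * T
print(N)   # 5.0
```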
But the point of this equation is not to come up with a concrete number of civilizations we must be able to communicate with right now. Rather, it tells us what kinds of data to look for if we want to know the potential number of aliens in the galaxy, so we can home our efforts in on collecting that data. Also, it is pretty cool to be able to estimate how many detectable aliens are out there in our own galaxy, and to see what happens if we change certain variables.
If you fancy estimating how many detectable civilizations are out there using the original Drake equation, try it out for yourself at WolframAlpha.
SQL (Structured Query Language)
SQL is the language used to work with relational databases. SQL statements are referred to as queries, and typically they have the capability to INSERT, UPDATE, and DELETE database records, as well as to create and alter tables, among other things.
- User types their zip code into a web form and clicks a submit button.
- The zip code is sent to the web server and processed in PHP code.
- The PHP code forms an SQL query using the zip code. Maybe: SELECT * FROM cities WHERE cities.zip = '12345'
- The SQL query will return all the records, in this case, from the cities table that match the zip code the user provided.
- In PHP, we build a results page from the results of the SQL query and send it back to the user.
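As a rough sketch of steps 3 through 5, here is how the PHP side might look using PDO with a parameterized query. The connection details and `cities` schema are assumptions; binding the zip code is safer than splicing it into the SQL string:

```php
<?php
// Connect, run a parameterized query, and render the results.
$pdo = new PDO('mysql:host=localhost;dbname=demo', 'user', 'pass');

$stmt = $pdo->prepare('SELECT * FROM cities WHERE zip = :zip');
$stmt->execute([':zip' => $_POST['zip']]);

foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $city) {
    echo htmlspecialchars($city['name']), "<br>\n";
}
```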
Depending on the server side language, frameworks, and libraries being used, the developer may not need to write the SQL queries himself. Frameworks often take care of this automatically for the most common cases.
At Fieldstone Software, we use SQL in almost every project. We regularly work with database servers. Normally, we use MySQL and SQLite. At our clients' request, we also work with MSSQL, Oracle, and MS Access on some projects.
P.S. That cat picture has nothing to do with SQL but we couldn’t find a picture that did. Enjoy.
Physics Homework Help
PHYSICS AROUND THE WORLD - The world's most comprehensive index to on-line physics resources!
WHAT ARE ATOMS?-the building blocks of matter.
ATOM BUILDER - You try it! Lets users build a carbon atom particle by particle.
PHYSICS LAWS - Laws, rules, principles, effects, paradoxes, limits, constants, experiments, & thought-experiments in physics.
PHYSICS BIG BOOK-providing beginning high school science students with an accessible physical science resource.
PHYSICS 4KIDS- Excellent resource. Site has been remade with lots of new features.
STEPHEN HAWKING'S UNIVERSE - Where do we come from? How did the universe begin? Why is the universe the way it is? How will it end?
KEPLER'S LAWS - A proof of Kepler's laws using Newton's laws of motion and of universal gravitation.
SIR ISAAC NEWTON-his biography.
SIR ISAAC NEWTON- an entire site devoted to his life and his works, including photos.
THE FOUCAULT PENDULUM-the first terrestrial device to demonstrate the rotation of the earth. Interesting site from the California Academy of Sciences.
INTERNATIONAL SYSTEM OF UNITS- introduction, definitions, rules and style conventions.
INTRODUCTION TO QUANTUM MECHANICS-to give an ordinary person a brief overview of the importance and wonder of quantum mechanics.
THE QUANTUM PAGE-an index to what is to come.
LIST OF LAWS-a list of various laws, rules, principles, and other related topics in physics and astronomy.
SIMPLE MACHINES - Learn about pulleys, levers, wedges, screws, inclined planes, and the wheel and axle.
PHYSICS OF SOUND - explains the speed of sound in gases, liquids, and solids.
NATURE OF SOUND - investigate the nature, properties and behaviors of sound waves and apply basic wave principles.
PARTICLE PHYSICS-an explanation of particle physics,"the science of the fundamental nature of matter."
PARTICLE ADVENTURES - An interactive tour of quarks, neutrinos, antimatter, extra dimensions, dark matter, accelerators and particle detectors.
ORIGIN OF THE CELSIUS TEMPERATURE SCALE- an interesting piece of science history.
COLD FUSION - The latest on this emerging new technology.
PHYSICS CENTRAL-communicates the excitement and importance of physics to everyone, latest news from the American Physical Society.
BOYLE'S LAW CALCULATOR - Solve pressure-volume problems online.
VECTORS - examines some of the elementary ideas concerning vectors.
MORE ON VECTORS - Working with vectors - describes properties and physical applications of vectors. Excellent site.
MAKING WAVES - guide to sound and electromagnetic radiation created by the physics students at St. Mary's high school.
MICRO WORLDS - Exploring the Structure of Materials - for Grades 8-12. From Lawrence Berkeley Laboratory of the University of California.
NUCLEAR FUSION BASICS-the energy-producing process which takes place continuously in the sun and stars.
PHYSICAL CONSTANTS- CODATA Internationally recommended values of the fundamental physical constants.
HISTORY OF PHYSICS- from the American Institute of Physics.
HISTORY OF QUANTUM MECHANICS- from Kirchhoff to von Neumann.
BASIC QUANTUM MECHANICS - physics is dominated by its concepts; this page aims to introduce them.
PHYSICS DEMONSTRATIONS-From astronomy to magnetism to waves to optics, this site covers it all.
A CENTURY OF PHYSICS- physics history presented on an interactive timeline. Excellent site.
MAGNETISM - An elementary primer on magnetism and magnetic physics. Contains questions frequently asked about the science of magnetism.
LIGHT -This site gives an indepth look at the physics of light.
PHYSICS OF LIGHT - a series of pages on the natures of particles and waves and their similarities and differences
OPTICS AND YOU - Interactive Java Tutorials - incorporates interactive computer based instruction on optics, light and microscopy.
PHYSICS OF SNOW CRYSTALS - snow crystal growth is a fascinating and poorly understood process. Visit this site and learn more.
SIMPLE HARMONIC MOTION - an easy to understand interactive tutorial.
KINETIC MOLECULAR THEORY and GAS LAWS - Online Tutorial
ROTATIONAL MOTION - What is torque? Right-Hand Rule, units, etc.
FEAR OF PHYSICS - In its unique way, this site attempts to explain the unexplainable, simplify the complex, and generally make sense of this field of science.
Lightning is an atmospheric electrostatic spark accompanied by thunder, which typically occurs during thunderstorms. The leader of a lightning bolt can travel at speeds of 220,000 km/h (140,000 mph). Lightning is extremely hot: a flash can heat the air around it to temperatures of about 30,000 °C (54,000 °F). It is also dangerous: about 2,000 people are killed worldwide by lightning each year.
Moderate Geomagnetic Storm at Earth
Geomagnetic storms at Earth are currently at a rating of G2 (moderate) on a scale of G1 to G5. This is due to the arrival of a coronal mass ejection that began at 1 p.m. EST on March 10, 2012, in association with the two M-class flares that day. Storms at this rating may have an effect on high frequency radio communications at high latitudes and may cause increased aurora.
This image was captured by the Solar Dynamics Observatory (SDO) on March 10, 2012 at 12:29 PM EST in the 304 Angstrom wavelength. An active region on the sun, seen above as the bright spot to the right, has been moving across the face of the sun from left to right since March 2, 2012. Designated AR 1429, the spot has so far produced three X-class flares and numerous M-class flares. Credit: NASA/SDO/AIA
On March 10, 2012, the sun released another two M-class flares. One, rated as an M5.4, peaked at 12:27 AM EST. The second, rated as an M8.4, peaked at 12:44 PM EST. These two flares came from the same active region (AR) on the sun, designated number 1429, that has already produced three X-class and numerous M-class flares over the past week.
SOHO image caption: These three images show the evolution of the coronal mass ejection from March 8, 11:38 PM EST to March 9, 12:53 AM EST as captured by the Solar and Heliospheric Observatory (SOHO). The sun is obscured in this image, called a coronagraph, so that the dim atmosphere -- or corona -- around the sun can be better seen. The white speckles on the image are “noise” from solar particles hitting the instrument. Credit: SOHO/ESA & NASA
On March 8, 2012 at 10:53 PM EST the sun erupted with an M6.3 class flare, and about an hour later released a coronal mass ejection (CME). These eruptions came from active region 1429 that has so far produced two X class flares, and numerous M-class flares.
NASA's Space Weather Center models measure the CME traveling at speeds of over 700 miles per second. The CME should reach Earth's magnetosphere, the protective envelope of magnetic fields around the planet, early in the morning of March 11.
More news and media to come as it becomes available.
What is a solar flare? What is a coronal mass ejection?
For answers to these and other space weather questions, please visit the Spaceweather Frequently Asked Questions page.
Karen C. Fox
NASA Goddard Space Flight Center, Greenbelt, Md.
Klaus Jacob had been warning New York City for a decade that a hurricane like Sandy barreling in from the east could flood downtown Manhattan and coastal New Jersey. Most recently, he and other scientists published a big 2011 report detailing the threat and recommending ways of protecting the region against storm surges. Jacob felt Sandy's wrath firsthand, too—his house in Piermont, N.Y., on the enormous Hudson River about 25 kilometers north of Manhattan, was flooded with 60 centimeters (two feet) of water. He had already raised his home as much as local building codes would allow, but clearly that was not enough. Jacob, a seismologist and specialist in disaster risk management at Columbia University's Lamont–Doherty Earth Observatory in Palisades, N.Y., graciously took a brief break from his hectic day of shucking mud out of his ruined home (see video by his colleague) to explain to Scientific American what coastal cities and townships must do to prepare for more destruction from rising sea levels and storm surges.
[An edited transcript of the interview follows.]
City and state leaders on the U.S. east coast are suddenly talking about putting barriers outside of New York City and other places. Will those work?
Barriers are not sustainable structures for more than 100 years, so they will not be sufficient for, say, 500 years of sea-level rise. Barriers can work, but you should only build barriers if you have an exit strategy for them [a plan to update them]. Hurricane Katrina in New Orleans overcame man-made barriers because the city kept subsiding and the sea had risen after the levees and walls went up. You have to take action behind the barriers to prepare for their obsolescence—before you design and build them.
Is that what is happening in the Netherlands now?
The Dutch built their barriers [beginning in the 1950s] for a one-in-10,000-year flood. But now that is approaching one in 1,000 because of sea-level rise. By the end of this century the barriers will only be good for a one-in-100-year storm. At that point, with sea level at one or one and a half meters higher, an annual storm could equate to a thousand-year storm. They are rethinking what they must do.
Would it be better for cities to alter their building and transportation infrastructure instead?
They need to do both. But I think it is better to focus on land use and municipal planning. Most immediately, buildings on low ground should pull all their systems out of basements and put them on higher floors. Tall buildings should put their systems on the 10th floor. Let the lower level be a parking garage or something. Then waterproof the basement and low floors. In New York City transportation systems like subways have to close all ventilation grates at the street level and find other ways to vent. Gates are needed for subway entrances—or the entrances should be redesigned; in Taipei, for example, at some stations you have to walk up from street level to enter before you can walk down below street level into the subway.
What about retreating from the coast?
Yes, we should retreat in certain low-lying areas. Insurance companies will not insure any property that is at a dangerous elevation. National flood insurance should also be revised; it is almost a hoax right now.
It sounds like a number of issues have to be addressed at the same time. Can cities perhaps share solutions?
To understand how a pH amplifier works, we must learn a bit about what the pH electrode (probe) does. In the most basic sense a pH probe is a very simple single-cell battery, where the voltage produced is proportional to the logarithm of the hydrogen ion activity around the probe, which is exactly what pH expresses: pH = −log₁₀(aH).
All this means is that when the concentration is greater on the outside of the probe, the ion flow is inward, causing a slight positive voltage difference between the probe's electrodes. This tells us whether we are measuring a base or an acid: the magnitude of the voltage tells us the concentration of the test solution, and its sign tells us which side of the probe the higher concentration is on. Each pH step corresponds to a tenfold concentration change; for example, a pH of 8 has 1/10th the ion activity of a pH of 7.
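A quick numerical check of that tenfold rule:

```python
# pH = -log10(aH), so each pH unit is a factor of ten in ion activity.
def hydrogen_activity(ph):
    return 10.0 ** -ph

print(hydrogen_activity(7) / hydrogen_activity(8))   # 10.0: pH 7 has ten times the activity of pH 8
```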
The figure opposite represents a device called a "conical pendulum" ("pendule conique"). It is essentially a simple pendulum, except that the upper end of the wire is attached to a vertical axis of rotation (a rigid rod driven by a motor).

If the motor is started slowly and the rotation rate is increased gradually, one observes that beyond a certain speed the pendulum swings away from the axis at a definite angle. The mass of the pendulum then describes a circle, and the wire sweeps out a cone, hence the name. The angle by which the pendulum deviates grows as the rotation rate increases.

(Note that, strictly speaking, this is not a pendulum, since there is no oscillation.)

This circular motion is explained by the combined effect of the weight and the tension of the wire, which together provide the centripetal force keeping the mass in rotation around the axis.

If m denotes the mass, L the length of the wire fixed at O, and C the center of the circle described by the mass, then for a uniform angular velocity ω the half-angle θ at the apex of the cone satisfies:

cos θ = g / (ω²L)
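A short numerical illustration of this relation (the 0.5 m wire length is an arbitrary example value):

```python
# Below the critical rate sqrt(g/L) the pendulum hangs straight down;
# above it the cone angle grows with the rotation rate.
import math

def cone_angle(omega, length, g=9.81):
    c = g / (omega ** 2 * length)
    return math.acos(c) if c <= 1.0 else 0.0   # radians; 0 = hanging vertically

for rpm in (30, 45, 60):
    omega = rpm * 2 * math.pi / 60             # rev/min -> rad/s
    theta = math.degrees(cone_angle(omega, length=0.5))
    print(f"{rpm} rpm -> {theta:.1f} degrees")  # 0.0, ~27.9, ~60.2
```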
History of science

By studying this problem, Christiaan Huygens (1629-1695) discovered the acceleration of uniform circular motion (which, recall, Galileo (1568-1642) had described as amounting to NOTHING). Huygens corrected this somewhat by introducing the centrifugal force properly, but he remained attached to Galileo's way of thinking, which led him to his ill-fated theory of total relativity ("Relativité totale").

- Simple pendulum
- Huygens's spherical pendulum
In this video adapted from NASA, learn how lasers can be used to map the surfaces of planets. Animations illustrate how LIDAR—light detection and ranging—uses reflected laser pulses to measure the distance between the instrument onboard a satellite and the surface of the planet. Find out how scientists compile distance measurements to build a 3-D model of the planet's terrain. In addition, learn about other applications of LIDAR to study Earth.
Light detection and ranging (LIDAR) is a remote sensing technology that uses pulses of lasers to measure characteristics of objects and landscapes that are far away from the device. LIDAR can be used to measure distance, speed, and rotation as well as chemical composition and concentration. The technology is often used to measure the speed of vehicles to enforce speed limits and can also be used to measure distances (at a crime scene, for example). LIDAR is also useful for surveying and terrain-contour mapping; its wide range of applications include agriculture, forestry, oceanography, archaeology, geology, and meteorology.
Like radar, LIDAR sends out pulses of electromagnetic radiation and measures the radiation that is reflected back to detect objects. The relatively high-energy, short-wavelength light (ultraviolet, visible, or near-infrared light) that LIDAR uses resolves images more clearly than the weaker, long-wavelength radio waves used by radar. In addition, lasers produce a tight beam of light that does not spread out as it travels, so LIDAR is much better than radar at targeting specific areas. For example, LIDAR can create a high-resolution map or accurately measure the composition of a small region in the atmosphere.
A LIDAR system onboard a plane in flight or an orbiting satellite can be used to collect data about the surface of Earth. The basic components of such a system would include a laser emitter–receiver scanning unit, which would emit, receive, and then process the pulses of light; and a global positioning system (GPS) and inertial measurement unit, which determine the location of the laser scanner (as well as the movement of the aircraft) to give a precise position. The laser scanner sends pulses of light to the ground and measures the time it takes for each pulse to reflect back to it. Because the speed of light is known, the time measurements can then be used to calculate the distance that the pulse traveled. This information, combined with the information about the location of the laser scanner at the time of each measurement, provides researchers with the data to create a 3-D map of Earth's surface.
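The time-of-flight arithmetic itself is tiny; here is a minimal sketch (the 26.7-microsecond echo time is an invented example):

```python
# The pulse travels out and back, so the one-way range is c * t / 2.
C = 299_792_458.0   # speed of light, m/s

def range_from_echo(round_trip_seconds):
    return C * round_trip_seconds / 2.0

print(range_from_echo(26.7e-6))   # ~4002 m for a pulse returning after 26.7 microseconds
```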
In this case, the target object was the ground. However, the pulse of light may hit more than one surface on its way to the target, returning multiple data points (such as the height of a bridge or vegetation). There are methods to account for these multiple returns, such as choosing to use only the "last return," which would be the ground surface.
LIDAR can also be used to study the composition and structure of the atmosphere. By measuring changes in the wavelength of the light caused by phenomena such as scattering and absorption by molecules in the air, researchers can identify and measure concentrations of gases.
The Mars Polar Lander during tests. Courtesy of NASA
Mars Lander may be Alive
News story originally written on January 27, 2000
Just when everyone had given up hope, a faint signal was received at Stanford University. Scientists say it most likely came from Mars, although they won't know for sure until later this week. The signals came on December 18, and January 4, which are the two days the lander was told to send a signal to Earth.
"The circumstantial evidence indicates that the signals came from Mars, and if that is the case there is a good chance they came from the Lander," project manager Richard Cooke said. "The signals that were received were like a whisper among a lot of static."
Scientists say it is very unlikely that the Polar Lander could continue the mission, even if it is still alive. They are hoping the craft could at least give them some information about its landing. Investigators are trying to find out what caused the disaster.
The concept of a living lander puzzled and surprised the entire Mars Polar Lander team. "I was blown away," said flight operations manager Sam W. Thurman. "Imagine coming back from the funeral of a dear friend and getting a phone call saying ... he's not dead after all."
We have written about the solar control on climate many times in the past, and to say the least, the debate continues to rage regarding the solar influence on Earth’s climate. The IPCC has been lukewarm on the subject, stating in the Technical Summary that “Solar irradiance contributions to global average radiative forcing are considerably smaller than the contribution of increases in greenhouse gases over the industrial period.” Two articles have appeared recently that provide even more evidence that variations in solar output have a profound impact on regional, hemispheric, and global climatic variations.
May 26, 2010
May 20, 2010
Last week, we reported on a truly rare bird—that is, a story in which “global warming” was linked to something good happening to a “good” species, in that case, the grey whale. This week, we have found another rare bird, literally. We report on findings which suggest that global warming is benefitting another iconic, beloved species, the Trumpeter swan. Maybe there is a growing trend here.
Since it is apparently acceptable practice for Science magazine to accompany an article extolling the evils of anthropogenic climate change (and the need to take action) with a picture of a polar bear (or two) stranded on an ice floe (even though polar bears were not mentioned in the article), perhaps we’ll accompany all of our articles with a photo of a thriving Trumpeter swan. After all, what’s good for the goose (or, er, swan)…
Trumpeter swans thriving in a world of enriched CO2.
May 13, 2010
As Senators John Kerry and Joseph Leiberman begin to lay out the details of their American Power Act, one thing becomes immediately clear—whatever impacts the bill may have, they won’t be on the climate.
WCR staff wasted little time pointing this out.
May 11, 2010
A few years ago we identified what we termed the good for bad and bad for good paradigm of global warming impacts—that is, if some plant or animal species were generally regarded as being “good”—penguins, polar bears, butterflies, etc.—then global warming was supposed to do bad things to it. Conversely, if some type of plant or animal was generally viewed in a negative light—jellyfish, poison ivy, ragweed, etc.—then the publicized global warming impacts were, of course, positive.
Reporting anything to the contrary may have the unintended consequence of leading some people to think that global warming may not be so bad after all and may in fact have beneficial consequences. Which, of course, would violate rule No. 1 of the global warming alarmists’ playbook—human alteration to the global climate is B-A-D. Period.
Case in point: the Environmental Protection Agency, in justifying its finding that greenhouse gases (GHGs) endanger the public health and welfare, went to great pains to play up the negatives all the while downplaying the positive aspects of climate change. After all, you can’t very well justify regulating GHGs if they lead to benefits, now can you?
So, consequently, we rarely hear that something good comes about from climate change.
So, shiver me timbers, were we surprised to read this story from the wires:
One of the ongoing debates in the climate change world involves the popular prediction of more droughts, longer droughts, and droughts of greater intensity. The underpinnings of this prediction are easy to follow, so this is definitely a strong pillar in the climate alarmist camp. As the temperature increases, potential evapotranspiration (PET) will certainly increase. There are many equations describing the relationship between PET and temperature, and they all indeed show PET would increase should the temperature increase. The physics here is solid. So if PET increases, actual evaporation will increase in areas with even a small amount of soil moisture, and in the absence of some compensating increase in rainfall, soil moisture will be depleted. The combination of increasing temperatures and decreasing precipitation should all but guarantee the place will become drier, thereby yielding the increase in drought duration, intensity, and frequency. There is always a drought somewhere on the planet to point to as evidence that this is really happening, will likely get worse in the future, and all the rest. We’ve all heard it a million times … “If we don’t act now, ______ will happen” (fill in the blank, but today, we will focus on droughts).
May 3, 2010
Close your eyes. You are sitting by a fireplace, you’ve just opened a bottle of vintage Port wine your friend brought back from Portugal, you’ve cracked open Al Gore’s latest book … ouch, things were actually going pretty well with the fireplace and the vintage Port!
We bring this up given some really good news for Port lovers that appeared recently in the Journal of Agricultural and Food Chemistry. We realize you’ve probably read this piece already given the widespread circulation of the journal and the intense media coverage of the article? No?! Then we’ll fill you in.
WOW – looks good for sure!
(reprinted from icecap.us)
In a Geological Society of America abstract, Dr. Don Easterbrook, Professor of Geology at Western Washington University, presents data showing that the global warming cycle from 1977 to 1998 is now over and we have entered into a new global cooling period that should last for the next three decades. He also suggests that since the IPCC climate models are now so far off from what is actually happening that their projections for both this decade and century must be considered highly unreliable.
The Pacific Ocean has a warm temperature mode and a cool temperature mode, and in the past century has switched back and forth between these two modes every 25-30 years (known as the Pacific Decadal Oscillation or PDO). In 1977 the Pacific abruptly shifted from its cool mode (where it had been since about 1945) into its warm mode, and this initiated global warming from 1977 to 1998. The correlation between the PDO and global climate is well established. The announcement by NASA's Jet Propulsion Laboratory that the Pacific Decadal Oscillation (PDO) had shifted to its cool phase is right on schedule as predicted by past climate and PDO changes (Easterbrook, 2001, 2006, 2007). The PDO typically lasts 25-30 years and assures North America of cool, wetter climates during its cool phases and warmer, drier climates during its warm phases. The establishment of the cool PDO, together with similar cooling of the North Atlantic Oscillation (NAO), virtually assures several decades of global cooling and the end of the past 30-year warm phase. It also means that the IPCC predictions of catastrophic global warming this century were highly inaccurate.
As shown by the historic pattern of PDOs over the past century and by corresponding global warming and cooling, the pattern is part of ongoing warm/cool cycles that last 25-30 years. The global cooling phase from 1880 to 1910, characterized by advance of glaciers worldwide, was followed by a shift to the warm-phase PDO for 30 years, global warming and rapid glacier recession. The cool-phase PDO returned in ~1945 accompanied by global cooling and glacial advance for 30 years. Shift to the warm-phase PDO in 1977 initiated global warming and recession of glaciers that persisted until 1998. Recent establishment of the PDO cool phase appeared right on target and assuming that its effect will be similar to past history, global climates can be expected to cool over the next 25-30 years. The IPCC prediction of global temperatures 1 F warmer by 2011 and 2 F by 2038 stands little chance of being correct.
The global warming of this century is exactly in phase with the normal climatic pattern of cyclic warming and cooling, and we have now switched from a warm phase to a cool phase right at the predicted time.
Here are two facts you need to know about retinal ganglion cells to
make sense of Meister's experiment:
1) They have a receptive field which extends over a spatial region.
2) Most of them respond to stimulus with a large transient signal
followed by a smaller sustained signal.
The first thing Meister et al. did in their experiment was record the
activity of retinal ganglion cells while flashing stationary bars of
light. This allowed them to map out the spatial extent of each
ganglion cell's receptive field. It makes sense to say that the CENTER
of each cell's receptive field corresponds to the position in space
which the brain maps onto that cell's activity.
Now what happens if you move a bar of light across the ganglion cell's
receptive field? IF the ganglion cell was not "transient" -- that is,
if it always gave a constant, sustained signal proportional to the
amount of light falling onto its receptive field -- then the signal
from the ganglion cell would start to rise when the bar of light
entered its receptive field, reach a maximum when the bar of light was
over the center of its receptive field, and drop as the bar moved out
of its receptive field.
But because ganglion cells have a strong initial transient at stimulus
onset in addition to their smaller sustained signal, something
different happens. When the moving bar of light first enters the
cell's receptive field, the cell starts to give its big, transient
signal. While the bar is still moving through the cell's receptive
field, the signal from the ganglion cell starts to drop as it sends its
smaller, "sustained" signal. And of course the signal from the
ganglion cell drops to zero as the bar of light moves out of the
receptive field. The net result is -- depending upon the speed with
which the bar of light is moving -- the signal produced by the ganglion
cell reaches its peak intensity BEFORE the bar of light has reached the
center of the cell's receptive field. In that sense, you could say
that the ganglion cell was "predicting" the motion of the bar. NOT
because it was firing a signal before the bar reached its receptive
field; rather, because the signal from a moving bar reaches maximum
before the bar reaches the CENTER of its receptive field.
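If you want to see the effect for yourself, here's a toy model in Python
(all of the numbers are made up; it just combines a Gaussian
receptive-field profile with a transient-plus-sustained response):

```python
# A bar sweeps across a Gaussian receptive field centered at x = 0.
# The cell's output is the stimulus minus a slow adaptation term, so
# the response is transient-dominated and peaks early.
import math

def stimulus(t, speed=1.0, rf_width=0.3):
    x = -1.5 + speed * t                       # bar position at time t
    return math.exp(-x**2 / (2 * rf_width**2)) # overlap with the RF

response, adapt, dt = [], 0.0, 0.01
for step in range(300):
    s = stimulus(step * dt)
    adapt += dt * (s - adapt) / 0.2            # slow adaptation state
    response.append(max(s - 0.7 * adapt, 0.0)) # transient output

peak = response.index(max(response)) * dt
print(f"response peaks at t = {peak:.2f} s; "
      f"the bar crosses the RF center at t = 1.50 s")
```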
We spent some time over lunch discussing whether or not Meister got a
big publication out of saying something which everyone has known for
decades -- namely, that most ganglion cells have a strong transient
signal. We decided that Meister really did show something a bit more
than that. (His results showed that this "predictive coding" effect --
which was a well-known psychophysical phenomenon for quite a few years
and was widely assumed to happen entirely in the visual cortex --
actually happens (to at least some extent) in the retina before signals
even reach the visual cortex. His results also show that at least some
of the ganglion cell's "fast adaptation" (switching from transient to
sustained signal levels) happens over the ganglion cell's entire
dendritic field -- rather than happening entirely at earlier stages of
processing in the photoreceptors or bipolar cells.)
Now for a few other points.
The BBC Sci-Tech article also said:
> The finding revolutionises many previous models of the eye, which
> assumed that it acted simply as a camera - capturing the image presented
> directly in front of it.
They need better editing and less hype over at BBC Sci-Tech.
Nobody who is actually doing research in retinas these days thinks of
the eye as a "simple camera."
I completely agree with one point Steve Jones was trying to make: the
retina is complex. There's a whole lot of signal-processing that gets
done in the retina before the signals ever reach the brain. We already
understand quite a bit of that complex wiring, and there's still quite
a bit that we don't have figured out. The retina is amazing!
I also agree that the "bad design" claim -- that the vertebrate retina
has a huge design flaw -- is scientifically premature.
The vertebrate retina is arranged so that light entering the eye has to
pass through several layers of ganglion cells, amacrine cells, bipolar
cells, horizontal cells, and Mueller cells (and in some cases, blood
vessels) before hitting the photoreceptors. This causes some optical
blurring. Also, when all the axons from the ganglion cells bundle
together to form the optic nerve, this forms a "blind spot" in the
retina where there can't be any photoreceptors. The invertebrate
retina is arranged in the opposite order. Light hits the
photoreceptors first. No blurring; no blind spot.
Is the invertebrate eye designed better? Actually, that's not clear.
The "backwards" arrangements of the vertebrate retina allows the
photoreceptors to be in contact with the pigment epithelium. This
tissue not only blocks further transmission of light into the head, it
also helps recycle the photoreceptors' used photopigments. Recycling
used photopigments is a metabolically intensive process. So there's an
advantage to having the photoreceptors right next to the pigment epithelium.
This arrangement allows tight spatial packing of photoreceptors, and
allows rapid recycling of photopigments.
How do invertebrates deal with the fact that their photoreceptors are
more distant from the pigment epithelium? I don't know. We could
speculate that the invertebrate eye has to sacrifice either some
photopigment recycling speed or some tight spatial packing, or a bit of
both. Maybe someone who studies invertebrate eyes knows the answer,
but I don't.
The point here is that vertebrate retina construction is not
necessarily an inferior design to the invertebrate construction. That
claim is scientifically premature.
But of course, the real problem with the claim that "bad design of the
vertebrate eye 'proves' that they weren't created" is a theological
problem, not scientific.
Things don't have to arise _de_novo_ to be "created;" things like
retinas can also be created through process. In addition, things like
retinas can even be created with 'sub-optimal design' (given our
definitions of "optimal") and still be declared "good" by a good
Having said that, I do think that comparing retina organization across
different species makes a pretty good case for common ancestry. You
see patterns of similarity in retinal organization (and almost
certainly in patterns of similarity in developmental programs and
genetic sequences) which are nested at the levels of genus, family,
order, and class. The ancestry tree inferred from these nested
similarities of retinal organization matches the ancestry tree inferred
from other biological features and from the fossil record.
Now, that doesn't prove that the eye evolved without miraculous
intervention. Common ancestry is consistent both with evolutionary
creationism and with some versions of episodic creationism which would
argue that the eye is just too complex to have evolved without
miraculous intervention. Well, IS the eye too complex for evolution?
Answer: it's way too soon to give a solid, empirical answer. Popular
literature on evolution gives hand-wavy arguments about how the complex
eye could have evolved from simpler systems. Some proponents of
episodic creation give hand-wavy arguments about how the eye is too
complex to have evolved from simpler systems. Whose hand-wavy
arguments do you want to believe? Neither side has the genetic data
across multiple species *necessary* to make a strong, empirical case.
I'd say the evolutionary biologists do have some corroborating evidence
on their side in that we do see many different levels of eye complexity
in nature if we look at different species. But that's not going to
settle the case. We need more data to settle it. I have other reasons
for favoring evolutionary creation, but none related directly to the
eye, so this seems like a good place to stop.
Bleach has two main functions: sanitizing HTML based on a whitelist of tags and attributes, and turning URLs into links. It uses html5lib for both.
For more information on using Bleach, see the README included in the source. For more info on how Bleach works, follow below the jump.
The clean() function uses a slightly custom version of html5lib’s
HTMLSanitizer tokenizer that adds support for per-tag attribute whitelists. Any entity that is not part of a whitelisted tag or valid entity will be encoded. Legitimate entities and tags are allowed. The default whitelist is set up for AMO.
The linkify() function is a little more complicated. Naïve implementations usually rely on a simple regular expression to find URL-like strings, but this quickly becomes insufficient when you need to handle situations like these:
<em>http://example.com</em> (should be linkified)
<a href="http://example.com">test</a> (already linked, no need to linkify)
<a href="http://example.com">http://example.com</a> (really don’t need to linkify)
<em>http://xx.com <a href="http://example.com">http://example.com</a></em> (regular expression freak-out)
linkify() actually uses html5lib to build a document fragment and walks it, only applying the naïve regular expression in safe locations. In pseudocode:
tree = parseFragment(input)
for node in tree:
    if node is a text node:
        replace node with text nodes and links
    else if node is a link:
        set rel="nofollow" on node
This avoids attempting to apply the regular expression to things like tag attributes, the inside of
<a> tags, and other places it should generally be avoided. It also lets us do things like set the
rel attribute on links already in the text and pass the
href attribute through the same filter it would go through if we created the link. This filter lets us redirect links through an outbound redirect, so people know they’re leaving a Mozilla site. You could do other things with it, like rickroll your visitors. That’s up to you.
Since both clean() and linkify() use html5lib and construct document trees, using either will fix up code mistakes, like unclosed tags, and escape bare entities.
linkify() allows basically every tag and attribute, so if you need to limit the legal HTML to a subset, use clean() (or the shortcut bleach() to clean and then linkify).
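A minimal usage sketch (assuming the package's clean() and linkify() entry points; the whitelists shown are illustrative, not the AMO defaults, and exact escaping can vary by version):

    import bleach

    # Sanitize untrusted HTML against an explicit whitelist of tags/attributes.
    dirty = '<em>hi</em> <script>alert("xss")</script>'
    print(bleach.clean(dirty, tags=['em', 'a'], attributes={'a': ['href']}))
    # the <em> survives; the <script> tag is encoded as text, not executed

    # Turn bare URLs into links; linkify() sets rel="nofollow" on the links.
    print(bleach.linkify('see http://example.com'))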
Bleach is available on Github, or can be installed via
easy_install. Improvements and test cases are very welcome! Actually, there's one disabled test right now for a case that isn't yet supported. If you can make it work, that would be pretty great!
General Summary of Dynamic SQL Processing
The following statements are used when SQL statements are dynamically submitted:
- ALLOCATE CURSOR – Allocate extended cursor.
- ALLOCATE DESCRIPTOR – Allocate SQL descriptor area.
- CLOSE – Close an open cursor.
- DEALLOCATE DESCRIPTOR – Deallocate SQL descriptor area.
- DEALLOCATE PREPARE – Deallocate prepared SQL statement.
- DECLARE CURSOR – Declare a cursor for a statement which will be dynamically submitted.
- DESCRIBE – Examine the object form of the statement and assign values to the appropriate parameters in the SQL descriptor area.
- EXECUTE – Execute a prepared statement (except result set generating statements).
- EXECUTE IMMEDIATE – Shorthand form for PREPARE followed by EXECUTE. This form can only be used for fully-defined non-result set statements with no parameter markers.
- FETCH – Fetch rows for a dynamic cursor.
- GET DESCRIPTOR – Get values from the SQL descriptor area.
- OPEN – Open a prepared cursor.
- PREPARE – Compile an SQL source statement into an internal object form.
- SET DESCRIPTOR – Set values in the SQL descriptor area.
All statements submitted to dynamic SQL programs must be prepared.
Prepared non-result set statements, and singleton SELECT statements where the result set contains only one row, are executed with the EXECUTE statement. Result set SELECT statements and calls to result set procedures are executed using OPEN and FETCH for a cursor declared with the prepared statement.
The declaration of a cursor for a statement, DECLARE CURSOR, must always precede the PREPARE operation for the same statement in an application using dynamic SQL.
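A minimal sketch of the typical statement flow (embedded SQL pseudocode; the statement, cursor and host-variable names are hypothetical, and precompiler details vary):

    -- Declare the cursor first: DECLARE CURSOR must precede PREPARE
    -- for the same statement.
    EXEC SQL DECLARE c1 CURSOR FOR stmt1;

    -- Compile the source text held in the host variable :sql_text
    -- into its internal object form.
    EXEC SQL PREPARE stmt1 FROM :sql_text;

    -- Open the prepared cursor and fetch the rows it produces.
    EXEC SQL OPEN c1;
    EXEC SQL FETCH c1 INTO :col1, :col2;
    EXEC SQL CLOSE c1;

    -- Non-result set statements are executed directly once prepared:
    EXEC SQL PREPARE stmt2 FROM :update_text;
    EXEC SQL EXECUTE stmt2;
    EXEC SQL DEALLOCATE PREPARE stmt2;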
Ma-on formed over the western Pacific Ocean in early July, traveled west then north, and made a U-turn along the shore of Japan. En route to Japan, the storm strengthened into a Category 4 typhoon, and dropped heavy precipitation.
This color-coded image shows rainfall amounts from July 14 to 21, 2011. The lightest rainfall amounts (less than 50 millimeters or about 2 inches) appear in pale green. The heaviest amounts (more than 300 millimeters or about 12 inches) appear in dark blue. The heaviest rainfall occurs over the Pacific Ocean south of Japan, including an area immediately off the coast.
Superimposed on the rainfall amounts is a storm track for Ma-on. Darker shades of orange indicate greater storm strength. Dates on the storm track indicate Ma-on’s location as of midnight UTC on each date. Ma-on peaked late in the day on July 15, and weakened considerably on July 21.
Ma-on was downgraded to a tropical storm before it reached Japan, but the storm still brought heavy rains and strong winds to the country. On July 21, The Japan Times reported that Ma-on disrupted air and rail transportation. Meanwhile, officials blamed the storm for more than 50 injuries and at least one death. Authorities warned residents to remain alert for floods and landslides as Ma-on moved off.
This image is based on data from the Multisatellite Precipitation Analysis produced at Goddard Space Flight Center, which estimates rainfall by combining measurements from many satellites and calibrating them using rainfall measurements from the Tropical Rainfall Measuring Mission (TRMM) satellite.
- The Japan Times. (2011, July 21). Typhoon Ma-on eases, exits, leaves one dead. Accessed July 22, 2011.
- Unisys Weather. (2011, July 22). Typhoon Ma-on. Accessed July 22, 2011.
NASA Earth Observatory image by Jesse Allen, using near-real-time data provided courtesy of TRMM Science Data and Information System at Goddard Space Flight Center. Caption by Michon Scott.
- TRMM - MPA | <urn:uuid:51ac9189-6445-4854-83c0-4187d7458144> | 3.078125 | 429 | Knowledge Article | Science & Tech. | 53.463932 |
Distance: 5500 to 9000 light years
Image and Text by Robert Gendler
A gem of the summer sky, M20 allows us a view into the exciting science of star birth.
M20 is a young HII region (300,000 years old) about 30 light years across and is illuminated by the O-type supergiant HD 164492 at the center of its trilobed emission cloud. The ionizing star is about 30 times the mass of our sun and is the “A” component of a triple system ( A, B, and C components). In all there are seven members (HD 164492 A through G) of the small cluster packed within a half light year at the center of M20.
As an HII region M20 is similar to M42 in its complexity and relationship to its parent molecular cloud but is much younger. A large blue reflection cloud forms the northern border of M20 and is illuminated solely by the F-type supergiant HD 164514. Recent X-ray and infrared observations have discovered an amazing array of very early stars and protostars within M20 giving us a rare glimpse of the earliest stages of star birth. | <urn:uuid:0ada294c-ccda-45bf-b0c1-a62cfcd400c2> | 3.46875 | 247 | Knowledge Article | Science & Tech. | 62.648901 |
How do you solve this equation for t? 2 = e^(0.003t)
The answer is t = ln(2)/0.003.
In general, to "solve" an equation for t, you do, to both sides, the "opposite" of what is done to t, in the reverse order. Here, t appears in the expression e^(0.003t). To evaluate that, you would do two things: first multiply t by 0.003, then take the exponential. So we need to do the "opposite" (inverse) of the exponential, then the "opposite" of "multiply by 0.003". The inverse of the exponential function is the natural logarithm, and the inverse of "multiply by 0.003" is "divide by 0.003".
So we would start with 2 = e^(0.003t)
and first take the logarithm of both sides: ln(2) = 0.003t.
Now, divide both sides by 0.003: t = ln(2)/0.003 ≈ 231.05.
Galaxy interactions can be the most efficient way to produce strong torques and transfer angular momentum away. During the interaction period, strong bars are triggered in the galaxy disks, and through the same mechanisms as described before, gas is driven inwards. During the merger, a complete change of geometry also brings most of the gas to the center. The main consequence is the triggering of spectacular starbursts, in particular in major mergers, i.e. mergers of disk galaxies of roughly equal mass. The star-forming activity is concentrated in the nuclei, with some exceptions (such as the Antennae or Arp 299, for instance). The same gas inflow can also fuel an active nucleus, either directly, or indirectly through the coeval dense nuclear clusters formed in the starburst (cf. section 2.2).
In these huge starbursts discovered by IRAS, CO concentrations suggest that the cause of the starburst is the concentration of huge gas masses in the center; the molecular gas represents a significant fraction of the dynamical mass (Scoville et al. 1991). This gas must be brought to the center on a time-scale short enough with respect to the feedback time-scale of star formation, i.e. a few 10^7 yr (cf. Larson 1987). This requires very strong gravity torques.
What are we saving? Developing a standardised approach for conservation action
Sitas, N.; Baillie, J.; Isaac, N.J.B. 2009. What are we saving? Developing a standardised approach for conservation action. Animal Conservation, 12 (3), 231-237. doi:10.1111/j.1469-1795.2009.00244.x. Full text not available from this repository.
Are all species equal in terms of conservation attention? We developed a novel framework to assess the level of conservation attention given to 697 threatened mammals and 100 critically endangered amphibian species. Our index of conservation attention provides a quantitative framework for assessing how conservation resources are allocated, based on the degree to which conservation interventions have been proposed and implemented. Our results provide evidence of the strong biases in global conservation attention. We find that most threatened species receive little or no conservation, and that the small number receiving substantial attention is extremely biased. Species most likely to receive conservation attention are those which are well-studied, charismatic and that live in the developed world. Conservation status and evolutionary distinctiveness appear to have little importance in conservation decision-making at the global scale. Most species inhabit the tropics and are both poorly known and uncharismatic. Therefore, the majority of biodiversity is being ignored by current conservation action.
Programmes: CEH Programmes pre-2009 publications > Biodiversity
Additional Keywords: IUCN Red List, Action Plan, bias, mammals, amphibians, EDGE species
NORA Subject Terms: Zoology; Ecology and Environment
Date made live: 19 May 2009 14:22
Bruce Mate didn’t wait long. Within days of the April 20 Deepwater Horizon oil well blowout in the Gulf of Mexico, he was on the phone with officials from the U.S. Minerals Management Service. From 2001-2004, the agency had funded him to study the Gulf’s endangered sperm whales. Now, the director of Oregon State University’s Marine Mammal Institute had an idea: By tracking the sperm whales again, he could provide useful data to federal agencies and the well’s owner, British Petroleum, on the impact of spilled oil on the marine ecosystem.
Working through an emergency-response process known as Natural Resource Damage Assessment, Mate negotiated a contract with BP in which OSU would own the data. BP and the National Marine Fisheries Service would have access to determine damages for future settlements. By the end of May, Mate and institute staff members Craig Hayslip and Ladd Irvine were on the research ship Gordon Gunter (owned by the National Oceanic and Atmospheric Administration), which had been quickly re-tasked from the North Atlantic to support five spill-related science missions.
Mate wasn’t the only OSU researcher to respond as the world watched crude spew into what the Census of Marine Life has ranked as one of the globe’s most diverse marine systems. Professor Kim Anderson in the university’s Superfund Research Program marshaled a crew to track chemical contamination along the shore. At four sites from Pensacola, Florida, to Grand Isle, Louisiana, they deployed devices that essentially sniff the air and water for an oil component known as polycyclic aromatic hydrocarbons, or PAHs. And in September, a team led by Stephen Brandt, director of Oregon Sea Grant, conducted an acoustic survey of fish in an area northwest of the spill site.
Researchers are still analyzing data, and while images of oil-soaked pelicans, turtles and other animals are seared in the public mind, it will be a while before the broader biological significance of the spill is known.
Following the Whales
In late December, Mate was following six of the dozen whales that he had tagged in June near the damaged well. One of them was among 58 that he had tagged in the previous project. Data from that effort, he says, form a baseline, which can be used to compare whale behavior after the 2010 spill.
“I don’t expect to see sperm whales directly affected by oil,” Mate says, “but if oil or dispersants have dramatically affected the squid they eat, the secondary effect will likely influence the movements of the whales. They sort of vote with their flukes.”
A pioneer in satellite-based whale tracking, Mate says the whales that had initially traveled northeast from the well (in the direction of oil visible at the surface) had changed course and were in the western Gulf, some close to the Mexican coast. As his lab continues to monitor whale movements, researchers will use the data to analyze the size of the whales’ home ranges. They’ll also consider whether significant differences between 2010 and previous years suggest that whales avoided heavily oiled waters.
Pollutants on the Increase
While Mate was making his plans, Kim Anderson in OSU’s Department of Environmental and Molecular Toxicology was assembling sampling devices and personnel to track PAHs, a group of more than 100 compounds that the U.S. Environmental Protection Agency classifies as “highly potent carcinogens.”
Supported by OSU’s Environmental Health Sciences Center, Anderson and her team, including Ph.D. student Sarah Allan (see “After the Spill”), started deploying their equipment on May 9, before oil began washing ashore. As the oil slicks and tarballs hit beaches and wetlands through the summer, PAH concentrations rose to about 40 times over baseline levels, according to preliminary data.
“There are a range of health effects associated with PAHs,” says Anderson. “They are toxic by several different modes of action. We’re now using a technique that looks at the fraction of PAHs that are bioavailable — that have the potential to move into the food chain.”
Over the next two years, with support from a National Institute of Environmental Health Sciences grant, the lab will continue sampling in each location for more than 1,200 different compounds: PAHs, pesticides, PCBs and other industrial chemicals, many of which are known to disrupt hormone signaling.
For Stephen Brandt, oil is only one of the threats to fish habitat in the Gulf of Mexico. At least as significant is the persistent presence of a low-oxygen region west of the Mississippi River outlet, a.k.a., the “dead zone.” As part of a multi-institution project that began in 2003, Brandt has collected data on water quality and fish behavior in order to assess the dead zone’s impact on fisheries.
A pioneer in the use of acoustics to study fish, Brandt has led five sampling expeditions to the Gulf. His September cruise, with OSU faculty research assistants Sarah Kolesar and Cynthia Sellinger, was the first after a major oil spill, but it was not the first to reflect the presence of crude. Natural oil seeps pour an estimated 41 million gallons into the Gulf every year, he points out.
During eight days of sampling, Brandt and his team saw no oil, but they did see evidence for the first time of “a very intense double-layered dead zone” with low-oxygen patches near the bottom as well as higher in the water column. The location and severity of low-oxygen zones can shift from day to day. It will take additional data analysis to identify the factors behind the 2010 pattern.
Brandt knows it will take time for the Gulf’s rich marine life to respond. In 1979, the region received a large gush of crude from Mexico’s Ixtoc 1 well, which fouled beaches and estuaries from Texas to the Yucatán Peninsula. After that event, it took three to five years for fisheries to come back, he says. Some species, he adds, may never recover.
Article courtesy of Terra Magazine
Photo by Stephen Brandt | <urn:uuid:ce0d7ee9-c87e-4fab-adc5-080cef71892b> | 3.046875 | 1,306 | Knowledge Article | Science & Tech. | 44.567238 |
Complex specified information
From CreationWiki, the encyclopedia of creation science
Simply put, complex specified information is information that is both complex and specified, such that it is highly improbable and specific. The complexity of the information associated with event A is related to the number of bits I(A) associated with the probability P(A) of the event occurring, such that I(A) = -log2 P(A). The result is that the more complex information is, the more improbable it is.
One common example of complex specified information is a credit card number. Credit card numbers have 16 digits, giving a total of 10^16 possible permutations. Now there are about 7 × 10^9 people in the world; if everyone had 10 credit cards, that would be 7 × 10^10 active numbers. So the probability of hitting an active number is P = 7 × 10^10 / 10^16 = 7 × 10^-6, with I ≈ 17.124 bits. However, the odds of getting a given individual's credit card numbers would be P = 10/10^16 = 10^-15, with I ≈ 49.83 bits. So individual credit card numbers qualify as complex information, but the fact that each number is associated with a unique individual makes it complex specified information.
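The bit arithmetic above is easy to check directly (a Python sketch; the population and card-count figures are the article's own assumptions):

    from math import log2

    possible = 10**16        # 16-digit card numbers
    active = 7e9 * 10        # ~7e9 people x 10 cards each (assumed above)

    p_active = active / possible      # chance of hitting any active number
    p_individual = 10 / possible      # chance of hitting one person's 10 numbers

    print(p_active, -log2(p_active))            # 7e-06, ~17.12 bits
    print(p_individual, -log2(p_individual))    # 1e-15, ~49.83 bits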
The best example of complex specified information is DNA. The DNA of each organism on Earth is unique, because of mutations and other factors, making it the most specified form of information known. The human genome contains more than 30,000 genes, at an estimated 3,000 base pairs per gene, for a minimum of 90,000,000 base pairs, or 90,000,000 base-4 digits. This results in 4^90,000,000, or about 10^54,185,399, possible combinations, the overwhelming majority of which are not viable. So the odds of hitting any individual's DNA by chance are P = 10^-54,185,399, with I = 180,000,000 bits. So DNA is both incredibly complex and specific.
Now let's see what this means for a chance origin of life. First of all we need to estimate the number of planets where the conditions are right for life to get started. In this process we will try to be as generous as possible.
- There are an estimated 10^11 stars in an average galaxy and an estimated 10^11 galaxies, giving a total of 10^22 stars.
- Current evidence indicates that about 1/2 of all stars have planets, resulting in 5 × 10^21 planetary systems.
- Current evidence indicates that about 90% of planetary systems have gas giants in orbits that eliminate the possibility of terrestrial planets in the habitable zone of the star, so the maximum number of planetary systems with terrestrial planets is 5 × 10^20.
- Let's be generous and assume that 10% of these systems actually have terrestrial planets in the habitable zone, with an average of 2 per system, resulting in 10^20 terrestrial planets in habitable zones.
- Let's be generous again and assume that 10% of these planets have ideal conditions for life to form; this results in 10^19 potentially habitable planets.
Now, based on the average estimated time that a star is on the main sequence of 10^10 years, which is 3.156 × 10^17 seconds, and assuming one trial every nanosecond (an extremely generous assumption), that works out at 3.156 × 10^26 trials per planet, for a total of 3.156 × 10^45 trials.
Now let us make another extremely generous assumption: that each trial is fully functional except for needing to encode all of the 124 proteins that even the simplest living organisms need to live; if a trial lacks these, it fails and the process has to start from scratch. These proteins have an average length of 400 amino acids. Since there are 124 of them, it requires a total of 49,600 amino acids in sequence, requiring 148,800 base pairs, or 148,800 base-4 digits. This produces 4^148,800, or about 3.36 × 10^89,586, possibilities. It needs to be noted that this is taking the simplest possible case; as such, reality would be a far bigger problem.
Assuming that the 124 proteins can be in any order, there are 124! ≈ 1.5 × 10^207 possible successful combinations. So the odds of a successful trial are P = 1.5 × 10^207 / 3.36 × 10^89,586 ≈ 4.5 × 10^-89,380. Given a total of 3.156 × 10^45 trials for the entire universe, the odds of getting a successful trial are P ≈ 4.5 × 10^-89,380 × 3.156 × 10^45 ≈ 1.4 × 10^-89,334. There is a technical term in probability used to describe events with such small probabilities, and that term is impossible. So it is statistically impossible to get the proteins needed by even the simplest of living organisms.
The result is that information in DNA is so complex and specified that even making the most reasonable assumptions, it is impossible for the information in DNA to come about by chance. | <urn:uuid:ea4fd3c9-2758-40ef-a20a-0425ff296bd7> | 3.234375 | 1,012 | Knowledge Article | Science & Tech. | 64.945369 |
O'Reilly Book Excerpts: Java Programming with Oracle SQLJ
Contexts and Multithreading
This excerpt is Chapter 8 from Java Programming with Oracle SQLJ, published in August 2001 by O'Reilly.
There are two important objects used in SQLJ that affect the execution of database operations: connection contexts and execution contexts. Connection contexts are used to connect to a database. All embedded SQL statements within a SQLJ program run in a connection context. Connection contexts make it possible to create multiple connections to a database or to connect to more than one database at a time. An execution context is used to hold the number of rows affected by a SQL operation, along with any warnings generated by the database. Execution contexts are used to control certain aspects of how a SQL statement is executed. For example, you can use an execution context to control the timeout period after which a SQL operation is abandoned.
A multithreaded program is one that is able to carry out several tasks in parallel using Java threads. As you will see in this chapter, execution contexts are very important when writing a multithreaded SQLJ program. | <urn:uuid:362e0e5d-dcd1-4562-97d6-3231ff3d21b1> | 3.296875 | 228 | Truncated | Software Dev. | 35.715263 |
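A minimal sketch of an execution context in use (SQLJ source, not plain Java; the table, variable and context names here are hypothetical, and the exact runtime API may vary):

    import sqlj.runtime.ExecutionContext;

    public class RaiseSalaries {
        public static void raiseAll() throws java.sql.SQLException {
            ExecutionContext execCtx = new ExecutionContext();
            execCtx.setQueryTimeout(10);  // abandon the statement after 10 seconds

            // run the UPDATE under our execution context
            #sql [execCtx] { UPDATE emp SET sal = sal * 1.05 };

            // the execution context holds the number of rows affected
            System.out.println("Rows updated: " + execCtx.getUpdateCount());
        }
    }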
Cooking with Python
by Robin Parma, Alex Martelli, Scott David Daniels, Ben Wolfson, Nick Perkins, Anurag Uniya, Tim Keating, Rael Dornfest and Jeremy Hylton, authors of the Python Cookbook
1. Simple Tests Using Exceptions
Usually Python is straightforward, but there are a few unexpected exceptions.
Credits: Robin Parma
You want to know if the contents of a string represent an integer, which is not quite the same thing as checking whether the string contains only digits.
# try/except is generally the best approach to such problems:
def IsInt(str):
    """Is the given string an integer?"""
    try:
        int(str)
    except ValueError:
        return 0
    else:
        return 1
Use exceptions to perform simple tests that are otherwise laborious. If you want to know if the contents of a string represent an integer, just try to convert it. That's what
IsInt() does. The try/except mechanism catches the exception raised when the string cannot be converted to an int, turning it into a harmless return 0. The else clause, which runs only
when no exception is raised in the try clause, gives a return 1 when the string is OK.
Don't be misled by the word "exception," or by what is considered good style in different programming languages. Relying on exceptions and try/except is a useful Pythonic idiom.
2. Constants in Python
Liberal Python lets you rebind any variable; yet, there is an instance where you can protect your variables.
Credits: Alex Martelli
You need to define module-level variables that client-code cannot accidentally rebind, such as "named constants."
# Needs Python 2.1 or better. Put in const.py.
class _Constants:
    class ConstError(TypeError):
        pass
    def __setattr__(self, name, value):
        if self.__dict__.has_key(name):
            raise self.ConstError, "Can't rebind const(%s)" % name
        self.__dict__[name] = value
    def __delattr__(self, name):
        if self.__dict__.has_key(name):
            raise self.ConstError, "Can't unbind const(%s)" % name
        raise NameError, name

import sys
sys.modules[__name__] = _Constants()

# now any client-code can:
import const
# and bind an attribute ONCE:
const.magic = 23
# but NOT re-bind it:
# const.magic = 88    # would raise const.ConstError
In Python, any variable can be re-bound at will. Modules don't let you define special methods, such as an instance's
__setattr__, to stop attribute re-binding. An easy solution (in Python 2.1 and up) is to use an instance as "module."
Python 2.1 and up no longer forces entries in
sys.modules to be module objects. You can install an instance object there and take advantage of
its attribute-access special methods while still having client-code get at it with
import somename. You might see this as a more Pythonic "Singleton"-ish idiom (but also see "Singleton? We don't need no stinkin' Singleton: the Borg non-pattern"). Note that this recipe ensures a constant binding for a given name, not an object's immutability, which is quite a different issue. | <urn:uuid:089cbda8-efc5-4f95-9e06-da857b057655> | 3.34375 | 726 | Tutorial | Software Dev. | 60.586317 |
Four Portland Community College students are joining the international effort to explore Mars as a potentially safe, habitable place.
The students were chosen as National Community College Aerospace Scholars, a NASA educational program. From May 1-3, they will join dozens of other students across the country at the Jet Propulsion Lab in Pasadena, Calif. to design and build prototype Mars rovers.
The prototypes should be able to navigate a course, collect rocks and water and return to a home base. Students will receive briefings from NASA scientists and engineers.
A competitive application process required each student to first design their own mission to Mars.
Justin Martinez, a 29-year-old PCC student studying electrical engineering, proposed sending a "biosphere lander," a capsule holding different types of algae.
Martinez, a Cedar Hills resident, said he's transferring to Oregon State University in the fall and hopes to do more research about renewable energy. He explained his approach to his proposal and thoughts about Mars exploration:

The Oregonian: Why did you design your project around algae?
Justin Martinez: As the progression goes on with Mars, they've (NASA) sent a lot of rovers over as far as to see the temperature and what the planet's made out of and looking for samples of water...
We needed to see how we could transport any sort of life form to the planet, so I suggested biosphere canisters of different types of algae so that we can see a) if they can survive the trip over there, and b) how they would react with the planet, such as radiation, temperature...
A big, big factor is that it would have to be a deep-sea algae or Arctic algae, and then when it gets to the planet, we'll see how quickly it could build up in the atmosphere, how rapidly it could grow, that was the main focus.
(Martinez's project had a second mission of using radioactive power sources to possibly refuel rovers on Mars.)
Oregonian: What made you decide to apply for the program? How serious should we be about getting to Mars?
Martinez: The draw for me was... We're running on fossil fuels and we really don't have a plan for how we get from fossil fuels to another energy source. Everything from plastics, cars, travel, it's hitting a critical point on how we're going to thrive if we can't actually figure out a more efficient way to use energy and the natural energy out there. I'm really interested in solar energy, magnetic energy, and whatever we're already doing, make it cost a lot less.
Oregonian: President Obama has called for a target of sending astronauts into Mars obit by the mid-2030s. Do you think it's possible, and do you think it's necessary?
Martinez: I definitely think it's possible. As far as it's needed, we definitely need to figure out where our planet is heading. The amount that we consume and developing nations are starting to consume, we definitely need to look at how we're going to stabilize our own planet. We need to be able to find other planets or options to do things out in space.
There's a lot of things you can get from space, such as energy or natural resources that just aren't possible here. I think it's a wonderful idea to think about that in the future, what would be possible, what can we do. The push to get out there is definitely there. There's a lot of nations who really want to go into that.
I'm not sure if the technology is quite there. There's a lot of things that are biologically quite scary for us.
Search our database of handpicked sites
Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest.
You searched for
We found 14 results on physics.org and 40 results in our database of sites (34 are Websites, 0 are Videos, and 6 are Experiments).
Search results on physics.org
Search results from our links database
Simple explanation of how metal detectors work, allowing people to find hidden treasure or spot weapons at airport security.
A comprehensive explanation of how metal detectors work using magnetism.
Find out how the detectors at CERN tell what particle has been created from the intense collisions.
A description of the OPAL detector at CERN, and the links to the LEP accelerator. Description of components of the detector obtained by clicking on labels.
A carat weight value calculator and a precious metal value calculator use on-line gem/metal prices to predict a gem's or an alloy's value.
A brief description of how these devices detect smoke, exploring the two most common types used today: photoelectric and ionization detectors. Part of Marshall Brain's HowStuffWorks.com. Assumes ...
A good description of what happens to metal when you put it in the microwave, and why you shouldn't do it.
A comprehensive answer to this question.
Cool panoramic interactive allows you to look around the large hadron collider.
Java Applet simulating a photoelectric effect experiment, measuring current and stopping potential for a variety of metals and monochromatic wavelengths. A sophisticated simulated experiment.
Showing 1 - 10 of 40 | <urn:uuid:114a622b-5c36-4b04-99dc-4d4061bb56b9> | 3.15625 | 351 | Content Listing | Science & Tech. | 49.169684 |
Genetics is the study of genes, heredity and variation in living organisms. It involves research into the molecular structure and function of genes, particularly gene behavior in the context of a cell or organism. It also involves the study of patterns of inheritance and gene distribution, variation and change within populations.
Genetic testing, maintaining a healthy weight, and forgoing hormone replacement therapy at menopause may serve women better than annual mammograms.
3 | October 1, 2010 5:57am |
Researchers announce a breakthrough in materials science. Now, there's a commercially viable way to produce artificial spider silk proteins.
2 | September 30, 2010 11:51am |
Researchers report the first genetic link to ADHD.
5 | September 30, 2010 8:41am |
Researchers can shoot quantum dots into cells. It's like a flu shot to the cell.
2 | September 28, 2010 4:00pm |
British scientists have figured out the key mechanism behind the leading cause of blindness — opening up hope for new treatment options for age-related macular degeneration.
3 | September 27, 2010 8:50am |
Scientists are inspired by nature to construct smarter buildings. We want the buildings to act like skin.
4 | September 23, 2010 12:02pm |
The U.S. FDA is nearing approval of genetically modified salmon for consumption. The landmark decision has a big impact on the world's food supply and the ethics of tinkering with nature.
12 | September 21, 2010 2:37am |
Powering electronic devices like smart packaging and radio-frequency sensing devices could get a boost from paper-thin, rechargeable Li-ion batteries.
1 | September 20, 2010 8:34am |
The fast-growing GM salmon is safe to eat, apparently.
5 | September 10, 2010 10:48am |
Synthetic biology, the design and construction of new biological parts and systems, has the public's support, but many say that risks need to be evaluated, according to a survey.
2 | September 9, 2010 3:42am |
British scientists create liver cells from skin cells. Could this end the organ shortage and open up treatment options?
August 30, 2010 9:07am |
The study examined the genetic data of more than 50,000 people and could open the door for therapies to prevent migraine attacks, said researchers.
7 | August 30, 2010 7:53am |
Drought-tolerant crops are one step closer to becoming reality after scientists discover what makes the process tick at a molecular level.
August 27, 2010 7:55am |
Oxford researchers have shown how vitamin D interacts with our DNA and how a deficiency in it brings on a slew of serious diseases.
4 | August 27, 2010 7:44am |
Bet you didn't know that the immature eggs of the African Clawed Frog were so useful.
2 | August 27, 2010 12:11am |
In a city already plagued with high rates of childhood asthma, Hurricane Katrina changed the landscape of the disease in New Orleans. An asthma expert talks about managing the epidemic today.
3 | August 25, 2010 1:39am |
Did you know that 20 percent of your genes are patented? Are patents necessary for innovation?
23 | August 24, 2010 11:18am |
MIT researchers enlist a common virus to help recharge soldiers' gadgetry, through their uniforms. For the rest of the citizenry, could the tiny lithium-ion batteries give new meaning to the power...
3 | August 24, 2010 4:00am |
For the first time, researchers find that a dead gene can wake up and cause disease.
6 | August 20, 2010 1:34pm |
You can know your test results. You can Google them. You can connect with others who have similar conditions. You can push your doctor around.
August 13, 2010 8:42am | | <urn:uuid:df4cbb1b-5783-4862-ac0c-14c51b80457c> | 2.6875 | 823 | Content Listing | Science & Tech. | 63.582306 |
A rare encounter between two gas-rich galaxies spotted by ESA’s (European Space Agency) Herschel space observatory indicates a solution to an outstanding problem: how did massive, passive galaxies form in the early Universe?
ESA (European Space Agency) astronaut Luca Parmitano left for Baikonur, Kazakhstan today, his last stop before heading to the International Space Station on 28 May.
The European Space Agency's (ESA) Herschel space observatory has made detailed observations of surprisingly hot molecular gas that may be orbiting or falling towards the supermassive black hole lurking at the center of our Milky Way galaxy.
Dramatic underground explosions, perhaps involving ice, are responsible for the pits inside these two large martian impact craters, recently imaged by the European Space Agency's (ESA) Mars Express.
ESA’s Herschel space observatory has provided the first images of a dust belt – produced by colliding comets or asteroids – orbiting a subgiant star known to host a planetary system.
The European Space Agency's (ESA) Herschel observatory has captured a new view of a vast star-forming cloud called W3.
Astronomers have identified some of the youngest stars ever seen, thanks to the European Space Agency's (ESA) Herschel space observatory.
The European Space Agency (ESA) and the Russian federal space agency, Roscosmos, have signed a formal agreement to work in partnership on the ExoMars program towards the launch of two missions in 2016 and 2018.
ESA’s planning to crash a spacecraft into an asteroid called Didymos, to, well, see what happens.
An asteroid the size of a small office block is due to whizz past Earth on Friday, traveling at over 28,000 miles per hour.
A laser device originally designed to measure carbon on Mars could soon be used here on Earth to root out counterfeit foods, making sure that honey, olive oil and chocolate are what they claim.
Building a base on the moon could theoretically be made much simpler by using a 3D printer to construct it from local materials.
Astronomers using the European Space Agency's Herschel Space Telescope have spotted a star that appears to be making new planets, despite being well past the age at which it would be expected to do so.
NASA has signed up to join the European Space Agency's (ESA's) Euclid mission, a space telescope due to launch in 2020 and designed to investigate dark matter and dark energy.
The European Space Agency's (ESA) Herschel space observatory has identified multiple arcs around Betelgeuse, the nearest red supergiant star to Earth.
The European Space Agency's (ESA) Mars Express has captured a detailed image of the upper part of the Reull Vallis region of Mars.
The European Space Agency is teaming up with NASA for a mission that will take human beings beyond Earth orbit for the first time in 40 years - and eventually, it says, further than ever before.
The massive asteroid Apophis shot past Earth last night, on a trajectory that will bring it even closer during its next flyby in 2029.
Astronomers believe they've found a large asteroid belt around Vega, the second brightest star in northern night skies.
Large changes have been observed in the sulfur dioxide content of Venus’s atmosphere, and one intriguing possible explanation is volcanic eruptions.
NASA captures giant comet hitting sun
NASA has caught an astonishing image of a comet smashing into the sun, followed by an immediate coronal mass ejection that goes streaming out as if instigated by the comet, in a cause-effect relationship. Is this additional evidence of the Electric Universe model?
by Sterling D. Allan
Pure Energy Systems News
Here's an awesome video showing something you don't see every day.
Watch as this comet (shown in time lapse, recorded over last Tuesday and Wednesday) smashes into the sun, then a coronal mass ejection (CME) goes streaming out as if instigated by the comet, in a cause-effect relationship.
I may be wrong, but this seems to support the "electric universe" model.
"The Electric Universe theory highlights the importance of electricity throughout the Universe. It is based on the recognition of existing natural electrical phenomena (eg. lightning, St Elmo's Fire), and the known properties of plasmas (ionized "gases") which make up 99.999% of the visible universe, and react strongly to electro-magnetic fields." (Source)
Check out the video, as published by Russia Today on http://www.youtube.com/watch?v=nJlsp0BXBlE
Here's their description:
SOHO (NASA-ESA Solar & Heliospheric Observatory) watched as a fairly bright comet dove towards the Sun in a white streak and was not seen again after its close encounter (May 10-11, 2011). The comet, probably part of the Kreutz family of comets, was discovered by amateur astronomer Sergey Shurpakov. In this coronagraph the Sun (represented by a white circle) is blocked by the red occulting disk so that the faint structures in the Sun's corona can be discerned. Interestingly, a coronal mass ejection blasted out to the right just as the comet is approaching the Sun. Scientists, however, have yet to find a convincing physical connection between sun-grazing comets and coronal mass ejections. In fact, analysis of this CME using images from the Solar Dynamics Observatory shows that the CME erupted before the comet came close enough to the solar surface to interact with strong magnetic fields.
Gizmodo also ran something on this, stating:
Once in a while, a comet hits the Sun and our star goes all nomnomnom on it. SOHO—NASA's Solar & Heliospheric Observatory—has captured a few, but never so spectacularly as in this video...
These comets are called sungrazers, and 90% of them come from the Kreutz comet group, a family of comets that was detected by Sergey Shurpakov. The Kreutz was a giant comet that disintegrated many centuries ago. The pieces are still up there, way too close to the Sun's perihelion, and crash against its surface from time to time. This may be one of those pieces.
The first portion of this video is also posted at Flickr.
Maria Winkelmann Kirch
Maria Winkelmann Kirch was a German astronomer who lived between 1670-1720. She
discovered the comet of 1702. She also published a paper in 1712 on the
conjunction of Jupiter and Saturn.
Maria worked with her husband, Gottfried Kirch in making calendars and
ephemerides (tables showing positions of the planets, Sun, and Moon).
You might also be interested in:
How did life evolve on Earth? The answer to this question can help us understand our past and prepare for our future. Although evolution provides credible and reliable answers, polls show that many people turn away from science, seeking other explanations with which they are more comfortable....more
Not long ago, many people thought that comets were a portent that something bad was about to happen to them. Since people did not yet understand about the objects in the solar system and how they moved,...more
The Earth's one natural satellite, the Moon, is more than one quarter the size of Earth itself (3,474 km diameter), making the Earth-Moon system virtually a double-planet. Because of its smaller size,...more
Charles Darwin was an English Naturalist who lived between 1809-1882. In 1859, with the publication of The Origin of Species by Means of Natural Selection, he challenged existing views on the appearance...more
Christian Doppler was an Austrian mathematician who lived between 1803-1853. He is known for the principle he first proposed in Concerning the coloured light of double stars in 1842. This principle is...more
Ben Franklin was an American scientist and statesman who lived between 1706-1790. At a time when little was known about electricity, he carried out many experiments to learn of its dangers and possible...more
Edmond Halley was an English astronomer who lived between 1656-1742. Using historical records, his own observations, and Newton's universal law of gravitation, he reasoned that the comets which had appeared...more
William Herschel was born in Germany and lived in England while he worked as an astronomer. He lived between 1738-1822. He built reflecting telescopes of high magnification, that let him observe the universe...more | <urn:uuid:2077ff93-613a-40e3-8615-4425b3ee00bf> | 3.3125 | 498 | Content Listing | Science & Tech. | 55.861417 |
Darwin convinced us that all life on Earth is related by common descent. It was a stunning overturning of the conventional wisdom of simultaneous creation.
The task then was to discern the familial relationships of all species of life on Earth, including the many extinct organisms known through the fossil record. Biologists did this on the basis of morphological similarities. A zebra is presumably more closely related to a horse than to an elephant, and so on. By the middle of the last century a family tree of life on Earth had been established to almost everyone's satisfaction.
Then came the most astonishing surprise. In every cell of our bodies we carry a history of our species' past, written in the four-letter genomic code. Forget looking at the morphology of zebras, horses and elephants. Send their DNA unlabeled to the sequencing lab and the genes will tell which creatures are more closely related, and (with a bit of calibrating against the fossil record) how long ago any two species shared a common ancestor.
This is now being done with ever increasing rapidity, and -- voila! -- the tree of life revealed by the DNA is identical to a satisfying extent with the tree established by the generations of zoologists and botanists who followed Darwin. I share almost all of my genes with chimps, and some genes with bacteria. A fleck of my spittle contains the four-billion-year history of Homo sapiens.
This sort of thing sends shivers up my spine, and renews my spirit of relationship with the birds at the window feeder and the peas in my garden. I swat a fly and I'm smushing some of my own genes, genes for making haemoglobin, say -- or near enough to my own to indicate a common ancestor some hundreds of millions of years ago. Our cells sing the unity of creaturedom. | <urn:uuid:f61629d6-6045-41ef-8cfb-c08c1197542a> | 3.125 | 380 | Personal Blog | Science & Tech. | 48.408401 |
Arrays are a very simple data structure. Arrays use a single variable name to store multiple values. The different values are usually referenced by a number, often called an index. This is a very convenient thing to do, as it keeps us from creating variables such as a1, a2, a3, a4 and so on. The benefit is especially noticeable when you need several hundred variables that hold similar data.
The implementation almost always goes like this: A data block big enough to hold the requested number of elements is created, and when indexing the array the location of the indexed element is found by adding I (index) times the size of one element to the location of the start of the array.
Almost every programming language (except for some esoteric ones) offers arrays. Sometimes they are called vectors. Some even offer multi-dimensional arrays, but this can easily be simulated - for example in two dimensions (X x Y) you make your array size X * Y and index element (N, M) with index (N + M * X).
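A minimal sketch of that flat-index trick in Python (the sizes are chosen arbitrarily):

    # Simulate a 2D array (X columns x Y rows) with a single flat block.
    X, Y = 4, 3
    flat = [0] * (X * Y)          # one allocation big enough for every element

    def set2d(n, m, value):
        # element (n, m) lives at offset n + m*X, exactly as described above
        flat[n + m * X] = value

    def get2d(n, m):
        return flat[n + m * X]

    set2d(2, 1, 42)
    assert get2d(2, 1) == 42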
A useful extension to the idea of an array is the dynamic array, which can grow on its own and typically supports some kind of operation to add and remove elements at the end (so dynamic arrays can be used as stacks too!). They are implemented by reallocating a bigger block for the array whenever it gets too big and copying the old elements into that block. They come pre-implemented in many languages. All arrays in VB are dynamic, resizable with ReDim. C++ includes std::vector, which implements a dynamic array.
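A short Python sketch of this behavior (Python lists are dynamic arrays; the exact reallocation points printed are CPython implementation details):

    import sys

    # append/pop at the end make a list usable as a stack
    stack = []
    stack.append(1)               # push
    stack.append(2)
    assert stack.pop() == 2       # pop

    # Growth happens by reallocating a bigger block and copying; CPython
    # over-allocates, so the byte size jumps only occasionally as we append.
    xs, last = [], sys.getsizeof([])
    for i in range(32):
        xs.append(i)
        if sys.getsizeof(xs) != last:
            last = sys.getsizeof(xs)
            print("reallocated at len=%d: %d bytes" % (len(xs), last))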
The first example of laser cooling, and also still the most common method (so much so that it is still often referred to simply as 'laser cooling') is Doppler cooling. Other methods of laser cooling include:
- Sisyphus cooling
- Resolved sideband cooling
- Velocity selective coherent population trapping (VSCPT)
- Anti-Stokes inelastic light scattering (typically in the form of fluorescence or Raman scattering)
- Cavity mediated cooling
- Sympathetic cooling
- Use of a Zeeman slower
How it works
A laser photon hits the atom and causes it to emit photons of a higher average energy than the one it absorbed from the laser. The energy difference comes from thermal excitations within the atom, and this heat is converted into light which then leaves the atom as a photon. This can also be seen from the perspective of the law of conservation of momentum. When an atom is traveling towards a laser beam and a photon from the laser is absorbed by the atom, the momentum of the atom is reduced by the momentum of the photon it absorbed:

Δp/p = p_photon/(mv) = Δv/v

Δv = p_photon/m

The momentum of the photon is p = E/c = h/λ. If you were floating on a hovercraft, moving at a significant velocity in one direction, and repeatedly threw metallic balls off the front of the hovercraft, eventually your velocity would slow down and your movements would be entirely dictated by the recoil effect of throwing the balls. That is how laser cooling works.
The cooling efficiency is the ratio η = P_cooling/P_electric, where:

P_cooling = cooling power in the active material

P_electric = input electric power to the pump light source
The de Broglie relation connects momentum and wavelength: p = mv = h/λ, where:

h = Planck's constant (h = 6.626 × 10^-34 J·s)

λ = de Broglie wavelength

p = momentum of the atom

m = mass of the atom

v = velocity of the atom
Example: the momentum of the atom must be cancelled by x photon recoils of h/λ_photon each, so mv = x·h/λ_photon, giving x = m·v·λ_photon/h.

x = number of photons needed to stop the momentum of an atom with mass m and at velocity v

m_Na = 3.818 × 10^-26 kg/atom

v_Na ≈ 300 meters/second

λ_photon = 600 nm

x = m_Na·v_Na·λ_photon/h ⟹ x ≈ 1.04 × 10^4

Conclusion: A total of about 10^4 photons are needed to stop the momentum of one sodium atom with a velocity of about 300 m/s. Experiments in laser cooling have achieved scattering rates of about 10^7 photons per second, so this sodium atom could be stopped in about a millisecond.
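A quick check of this arithmetic (a Python sketch; the 10^7 per second scattering rate is the figure quoted above):

    # Photon-recoil budget for stopping a sodium atom (values from the text).
    h = 6.626e-34        # Planck's constant, J*s
    m_na = 3.818e-26     # mass of one sodium atom, kg
    v = 300.0            # initial atom speed, m/s
    lam = 600e-9         # photon wavelength, m

    p_atom = m_na * v               # atom momentum, kg*m/s
    p_photon = h / lam              # photon momentum, h/lambda
    n_photons = p_atom / p_photon   # recoils needed to cancel the momentum
    rate = 1e7                      # assumed scattering rate, photons/s

    print(round(n_photons))         # ~1.04e4 photons
    print(n_photons / rate)         # ~1.0e-3 s, about a millisecond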
Doppler cooling
Doppler cooling, which is usually accompanied by a magnetic trapping force to give a magneto-optical trap, is by far the most common method of laser cooling. It is used to cool low density gases down to the Doppler cooling limit, which for Rubidium 85 is around 150 microkelvin. As Doppler cooling requires a very particular energy level structure, known as a closed optical loop, the method is limited to a small handful of elements.
In Doppler cooling, the frequency of light is tuned slightly below an electronic transition in the atom. Because the light is detuned to the "red" (i.e., at lower frequency) of the transition, the atoms will absorb more photons if they move towards the light source, due to the Doppler effect. Thus if one applies light from two opposite directions, the atoms will always scatter more photons from the laser beam pointing opposite to their direction of motion. In each scattering event the atom loses a momentum equal to the momentum of the photon. If the atom, which is now in the excited state, then emits a photon spontaneously, it will be kicked by the same amount of momentum, but in a random direction. Since the initial momentum loss was opposite to the direction of motion, while the subsequent momentum gain was in a random direction, the overall result of the absorption and emission process is to reduce the speed of the atom (provided its initial speed was larger than the recoil speed from scattering a single photon). If the absorption and emission are repeated many times, the average speed, and therefore the kinetic energy of the atom will be reduced. Since the temperature of a group of atoms is a measure of the average random internal kinetic energy, this is equivalent to cooling the atoms.
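For a sense of scale, the Doppler limit quoted above can be estimated from T_D = ħΓ/(2k_B); a minimal sketch (the rubidium D2 linewidth Γ/2π ≈ 6.07 MHz is an assumed literature value, not given in this article):

    from math import pi

    hbar = 1.054571817e-34   # J*s
    k_B = 1.380649e-23       # J/K
    gamma = 2 * pi * 6.07e6  # natural linewidth, rad/s (assumed Rb D2 value)

    T_doppler = hbar * gamma / (2 * k_B)
    print("%.0f microkelvin" % (T_doppler * 1e6))   # ~146 uK, i.e. "around 150"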
Other methods of laser cooling
Several somewhat similar processes are also referred to as laser cooling, in which photons are used to pump heat away from a material and thus cool it. The phenomenon has been demonstrated via anti-Stokes fluorescence, and both electroluminescent upconversion and photoluminescent upconversion have been studied as means to achieve the same effects. In many of these, the coherence of the laser light is not essential to the process, but lasers are typically used to achieve a high irradiance.
Laser cooling is primarily used for experiments in Quantum Physics to achieve temperatures of near absolute zero (−273.15°C, −459.67°F). This is done to observe the unique quantum effects that can only occur at this heat level. Generally, laser cooling has been only used on the atomic level to cool down elements. This may soon change, as a new breakthrough in the technology has successfully cooled a macro-scale object to near absolute zero.
See also
- List of laser articles
- Optical tweezers
- Mössbauer effect
- Mössbauer spectroscopy
- Timeline of low-temperature technology
- Researchers in laser cooling
- Anissimov, Michael, and Bronwyn Harris. "What Is Laser Cooling?" WiseGeek. Retrieved April 11, 2013, from http://www.wisegeek.com/what-is-laser-cooling.htm
- Massachusetts Institute of Technology (2007, April 8). Laser-cooling Brings Large Object Near Absolute Zero. ScienceDaily. Retrieved January 14, 2011, from http://www.sciencedaily.com/releases/2007/04/070406171036.htm
- D.J. Wineland, R.E. Drullinger and F.L. Walls (1978). "Radiation-pressure cooling of bound resonant absorbers". Phys. Rev. Lett. 40 (25): 1639. Bibcode:1978PhRvL..40.1639W. doi:10.1103/PhysRevLett.40.1639.
- W. Neuhauser, M. Hohenstatt, P. Toschek and H. Dehmelt (1978). "Optical-sideband cooling of visible atom cloud confined in parabolic well". Phys. Rev. Lett. 41 (4): 233. Bibcode:1978PhRvL..41..233N. doi:10.1103/PhysRevLett.41.233.
- Nobel Lecture by William D. Phillips, Dec 8, 1997.
- Foot, C.J. Atomic Physics. Oxford University Press (2005).
- Cohen-Tanoudji, Claude (2011). Advances in Atomic Physics. World Scientific. p. 791. ISBN 978-981-277-496-5.
- Laser cooling of a semiconductor by 40 kelvin - Jun Zhang, Dehui Li, Renjie Chen & Qihua Xiong | <urn:uuid:5643b9ad-7255-48f6-a9b7-b342eaa98891> | 3.703125 | 1,599 | Knowledge Article | Science & Tech. | 60.782079 |
A quasiprobability distribution is a mathematical object similar to a probability distribution but which relaxes some of Kolmogorov's axioms of probability theory. Although quasiprobabilities share many of the same general features of ordinary probabilities such as the ability to take expectation values with respect to the weights of the distribution, they all violate the third probability axiom because regions integrated under them do not represent probabilities of mutually exclusive states. To compensate, some quasiprobability distributions also counterintuitively have regions of negative probability density, contradicting the first axiom. Quasiprobability distributions arise naturally in the study of quantum mechanics when treated in the phase space formulation, commonly used in quantum optics, time-frequency analysis, and elsewhere.
In the most general form, the dynamics of a quantum-mechanical system are determined by a master equation in Hilbert space: an equation of motion for the density operator (usually written $\hat{\rho}$) of the system. The density operator is defined with respect to a complete orthonormal basis. Although it is possible to directly integrate this equation for very small systems (i.e., systems with few particles or degrees of freedom), this quickly becomes intractable for larger systems. However, it is possible to prove that the density can always be written in a diagonal form, provided that it is with respect to an overcomplete basis. When the density operator is represented in such an overcomplete basis, then it can be written in a manner more like an ordinary function, at the expense that the function has the features of a quasiprobability distribution. The evolution of the system is then completely determined by the evolution of the quasiprobability distribution function.
The coherent states, i.e. right eigenstates of the annihilation operator $\hat{a}$, serve as the overcomplete basis in the construction described above. By definition, the coherent states have the following property:

$$\hat{a} |\alpha\rangle = \alpha |\alpha\rangle.$$
They also have some additional interesting properties. For example, no two coherent states are orthogonal. In fact, if $|\alpha\rangle$ and $|\beta\rangle$ are a pair of coherent states, then

$$|\langle \alpha | \beta \rangle|^2 = e^{-|\alpha - \beta|^2} \neq 0.$$
Note that these states are, however, correctly normalized with $\langle \alpha | \alpha \rangle = 1$. Owing to the completeness of the basis of Fock states, the choice of the basis of coherent states must be overcomplete. An informal proof follows.
Proof of the overcompleteness of the coherent states:
Clearly we can span the Hilbert space by writing a state as

$$|\psi\rangle = \frac{1}{\pi} \int d^2\alpha \, |\alpha\rangle \langle \alpha | \psi \rangle.$$
On the other hand, despite correct normalization of the states, the factor of π>1 proves that this basis is overcomplete.
In the coherent states basis, however, it is always possible to express the density operator in the diagonal form

$$\hat{\rho} = \int f(\alpha, \alpha^*) \, |\alpha\rangle \langle \alpha | \, d^2\alpha,$$
where f is a representation of the phase space distribution. This function f is considered a quasiprobability density because it has the following properties:
- f is normalized like a probability density: $\int f(\alpha, \alpha^*) \, d^2\alpha = \operatorname{Tr}(\hat{\rho}) = 1$.
- If $\hat{A}$ is an operator that can be expressed as a power series of the creation and annihilation operators in an ordering Ω, then its expectation value is

$$\langle \hat{A}_\Omega \rangle = \int f(\alpha, \alpha^*, \Omega) \, g_\Omega(\alpha, \alpha^*) \, d^2\alpha$$

(the optical equivalence theorem), where $g_\Omega$ is obtained from $\hat{A}$ by the substitutions $\hat{a} \to \alpha$, $\hat{a}^\dagger \to \alpha^*$.
The function f is not unique. There exists a family of different representations, each connected to a different ordering Ω. The most popular in the general physics literature and historically first of these is the Wigner quasiprobability distribution, which is related to symmetric operator ordering. In quantum optics specifically, often the operators of interest, especially the particle number operator, is naturally expressed in normal order. In that case, the corresponding representation of the phase space distribution is the Glauber–Sudarshan P representation. The quasiprobabilistic nature of these phase space distributions is best understood in the P representation because of the following key statement:
If the quantum system has a classical analog, e.g. a coherent state or thermal radiation, then P is non-negative everywhere like an ordinary probability distribution. If, however, the quantum system has no classical analog, e.g. an incoherent Fock state or entangled system, then P is negative somewhere or more singular than a delta function.
In addition to the representations defined above, there are many other quasiprobability distributions that arise in alternative representations of the phase space distribution. Another popular representation is the Husimi Q representation, which is useful when operators are in anti-normal order. More recently, the positive P representation and a wider class of generalized P representations have been used to solve complex problems in quantum optics. These are all equivalent and interconvertible to one another (cf. Cohen's class distribution function).
Characteristic functions
Analogous to probability theory, quantum quasiprobability distributions can be written in terms of characteristic functions, from which all operator expectation values can be derived. The characteristic functions for the Wigner, Glauber P and Q distributions of an N mode system are as follows:

$$\chi_W(\mathbf{z},\mathbf{z}^*) = \operatorname{tr}\!\left(\hat{\rho}\, e^{i\mathbf{z}\cdot\widehat{\mathbf{a}}^\dagger + i\mathbf{z}^*\cdot\widehat{\mathbf{a}}}\right)$$
$$\chi_P(\mathbf{z},\mathbf{z}^*) = \operatorname{tr}\!\left(\hat{\rho}\, e^{i\mathbf{z}\cdot\widehat{\mathbf{a}}^\dagger}\, e^{i\mathbf{z}^*\cdot\widehat{\mathbf{a}}}\right)$$
$$\chi_Q(\mathbf{z},\mathbf{z}^*) = \operatorname{tr}\!\left(\hat{\rho}\, e^{i\mathbf{z}^*\cdot\widehat{\mathbf{a}}}\, e^{i\mathbf{z}\cdot\widehat{\mathbf{a}}^\dagger}\right)$$
Here $\widehat{\mathbf{a}}$ and $\widehat{\mathbf{a}}^\dagger$ are vectors containing the annihilation and creation operators for each mode of the system. These characteristic functions can be used to directly evaluate expectation values of operator moments. The ordering of the annihilation and creation operators in these moments is specific to the particular characteristic function. For instance, normally ordered (creation operators preceding annihilation operators) moments can be evaluated in the following way from $\chi_P$:

$$\left\langle \hat{a}_j^{\dagger m}\,\hat{a}_k^{n}\right\rangle = \left.\frac{\partial^{m+n}}{\partial(i z_j)^m\,\partial(i z_k^*)^n}\,\chi_P(\mathbf{z},\mathbf{z}^*)\right|_{\mathbf{z}=\mathbf{z}^*=0}$$
In the same way, expectation values of anti-normally ordered and symmetrically ordered combinations of annihilation and creation operators can be evaluated from the characteristic functions for the Q and Wigner distributions, respectively. The quasiprobability functions themselves are defined as Fourier transforms of the above characteristic functions. That is,

$$f(\boldsymbol{\alpha},\boldsymbol{\alpha}^*) = \frac{1}{\pi^{2N}}\int \chi(\mathbf{z},\mathbf{z}^*)\, e^{-i\mathbf{z}\cdot\boldsymbol{\alpha}^*}\, e^{-i\mathbf{z}^*\cdot\boldsymbol{\alpha}}\, d^2\mathbf{z}$$
Here $\boldsymbol{\alpha}$ and $\boldsymbol{\alpha}^*$ may be identified as coherent state amplitudes in the case of the Glauber P and Q distributions, but simply as c-numbers for the Wigner function. Since differentiation in normal space becomes multiplication in Fourier space, moments can be calculated directly from these functions; for example, the normally ordered moments follow from the P function as

$$\left\langle \hat{a}^{\dagger m}\,\hat{a}^{n}\right\rangle = \int \alpha^{*m}\,\alpha^{n}\, f_P(\alpha,\alpha^*)\, d^2\alpha$$
or by using the property that convolution is associative to pass between the differently ordered distributions.
Time evolution and operator correspondences
Since each of the above transformations from $\hat{\rho}$ through to the distribution function is linear, the equation of motion for each distribution can be obtained by performing the same transformations to $\dot{\hat{\rho}}$. Furthermore, as any master equation which can be expressed in Lindblad form is completely described by the action of combinations of annihilation and creation operators on the density operator, it is useful to consider the effect such operations have on each of the quasiprobability functions.
For instance, consider the annihilation operator $\hat{a}$ acting on $\hat{\rho}$. For the characteristic function of the P distribution we have

$$\operatorname{tr}\!\left(\hat{a}\,\hat{\rho}\, e^{iz\hat{a}^\dagger} e^{iz^*\hat{a}}\right) = \frac{\partial}{\partial(iz^*)}\,\chi_P(z,z^*)$$
Taking the Fourier transform with respect to $z$ and $z^*$ to find the corresponding action on the Glauber P function, we find

$$\hat{a}\,\hat{\rho} \;\longrightarrow\; \alpha\, f_P(\alpha,\alpha^*)$$
By following this procedure for each of the above distributions, the following operator correspondences can be identified:

$$\hat{a}\,\hat{\rho} \longleftrightarrow \left(\alpha + \kappa\,\frac{\partial}{\partial\alpha^*}\right) f(\alpha,\alpha^*)$$
$$\hat{a}^\dagger\hat{\rho} \longleftrightarrow \left(\alpha^* - (1-\kappa)\,\frac{\partial}{\partial\alpha}\right) f(\alpha,\alpha^*)$$
$$\hat{\rho}\,\hat{a} \longleftrightarrow \left(\alpha - (1-\kappa)\,\frac{\partial}{\partial\alpha^*}\right) f(\alpha,\alpha^*)$$
$$\hat{\rho}\,\hat{a}^\dagger \longleftrightarrow \left(\alpha^* + \kappa\,\frac{\partial}{\partial\alpha}\right) f(\alpha,\alpha^*)$$
Here κ = 0, 1/2 or 1 for the P, Wigner and Q distributions, respectively. In this way, master equations can be expressed as equations of motion of quasiprobability functions.
Coherent state
By construction, P for a coherent state $|\alpha_0\rangle$ is simply a delta function:

$$P(\alpha,\alpha^*) = \delta^2(\alpha - \alpha_0)$$
The Wigner and Q representations follow immediately from the Gaussian convolution formulas above:

$$W(\alpha,\alpha^*) = \frac{2}{\pi}\, e^{-2|\alpha-\alpha_0|^2} \qquad\qquad Q(\alpha,\alpha^*) = \frac{1}{\pi}\, e^{-|\alpha-\alpha_0|^2}$$
The Husimi representation can also be found using the formula above for the inner product of two coherent states:

$$Q(\alpha,\alpha^*) = \frac{1}{\pi}\,\left|\langle\alpha|\alpha_0\rangle\right|^2 = \frac{1}{\pi}\, e^{-|\alpha-\alpha_0|^2}$$
Fock state
The P representation of a Fock state $|n\rangle$ is

$$P(\alpha,\alpha^*) = \frac{e^{|\alpha|^2}}{n!}\,\frac{\partial^{2n}}{\partial\alpha^{n}\,\partial\alpha^{*n}}\,\delta^2(\alpha)$$
Since for n>0 this is more singular than a delta function, a Fock state has no classical analog. The non-classicality is less transparent as one proceeds with the Gaussian convolutions. If $L_n$ is the nth Laguerre polynomial, W is

$$W(\alpha,\alpha^*) = \frac{2}{\pi}\,(-1)^n\, L_n\!\left(4|\alpha|^2\right)\, e^{-2|\alpha|^2}$$
which can go negative but is bounded. Q always remains positive and bounded:

$$Q(\alpha,\alpha^*) = \frac{|\alpha|^{2n}}{\pi\, n!}\, e^{-|\alpha|^2}$$
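To make the negativity concrete, here is a minimal numerical sketch (an illustration added to this article, not part of the original text) that evaluates the Fock-state Wigner function above using SciPy's Laguerre polynomials; for n = 1 it is negative at the origin:

import numpy as np
from scipy.special import eval_laguerre

def wigner_fock(n, alpha):
    # Wigner function of the Fock state |n> at the phase-space point alpha
    r2 = np.abs(alpha) ** 2
    return (2.0 / np.pi) * (-1) ** n * eval_laguerre(n, 4.0 * r2) * np.exp(-2.0 * r2)

print(wigner_fock(0, 0.0))   #  2/pi  > 0: the vacuum is a Gaussian
print(wigner_fock(1, 0.0))   # -2/pi  < 0: the Fock state has no classical analog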
Damped quantum harmonic oscillator
Consider the damped quantum harmonic oscillator with the following master equation:

$$\frac{d\hat{\rho}}{dt} = i\omega_0\,[\hat{\rho},\,\hat{a}^\dagger\hat{a}] + \frac{\gamma}{2}\left(2\,\hat{a}\hat{\rho}\hat{a}^\dagger - \hat{a}^\dagger\hat{a}\hat{\rho} - \hat{\rho}\hat{a}^\dagger\hat{a}\right)$$
This results in the Fokker–Planck equation

$$\frac{\partial f}{\partial t} = \left[\left(\frac{\gamma}{2}+i\omega_0\right)\frac{\partial}{\partial\alpha}\,\alpha + \left(\frac{\gamma}{2}-i\omega_0\right)\frac{\partial}{\partial\alpha^*}\,\alpha^* + \gamma\kappa\,\frac{\partial^2}{\partial\alpha\,\partial\alpha^*}\right] f$$
where κ = 0, 1/2, 1 for the P, W, and Q representations, respectively. If the system is initially in the coherent state $|\alpha_0\rangle$, then this has the solution

$$f(\alpha,\alpha^*,t) = \frac{1}{\pi\,\kappa\left(1-e^{-\gamma t}\right)}\,\exp\!\left(-\,\frac{\left|\alpha-\alpha_0\, e^{-\left(\gamma/2+i\omega_0\right)t}\right|^2}{\kappa\left(1-e^{-\gamma t}\right)}\right)$$

(for κ = 0 this degenerates to a drifting delta function, $\delta^2\!\left(\alpha-\alpha_0\, e^{-(\gamma/2+i\omega_0)t}\right)$).
- L. Cohen (1995), Time-frequency analysis: theory and applications, Prentice-Hall, Upper Saddle River, NJ, ISBN 0-13-594532-1
- E. C. G. Sudarshan "Equivalence of Semiclassical and Quantum Mechanical Descriptions of Statistical Light Beams", Phys. Rev. Lett.,10 (1963) pp. 277–279. doi:10.1103/PhysRevLett.10.277
- J. R. Klauder, The action option and a Feynman quantization of spinor fields in terms of ordinary c-numbers, Ann. Physics 11 (1960) 123–168. doi:10.1016/0003-4916(60)90131-7
- E.P. Wigner, "On the quantum correction for thermodynamic equilibrium", Phys. Rev. 40 (June 1932) 749–759. doi:10.1103/PhysRev.40.749
- R. J. Glauber "Coherent and Incoherent States of the Radiation Field", Phys. Rev.,131 (1963) pp. 2766–2788. doi:10.1103/PhysRev.131.2766
- Mandel, L.; Wolf, E. (1995), Optical Coherence and Quantum Optics, Cambridge UK: Cambridge University Press, ISBN 0-521-41711-2
- O. Cohen "Nonlocality of the original Einstein-Podolsky-Rosen state", Phys. Rev. A,56 (1997) pp. 3484–3492. doi:10.1103/PhysRevA.56.3484
- K. Banaszek and K. Wódkiewicz "Nonlocality of the Einstein-Podolsky-Rosen state in the Wigner representation", Phys. Rev. A,58 (1998) pp. 4345–4347. doi:10.1103/PhysRevA.58.4345 | <urn:uuid:80fcba9b-ea4e-41d6-b953-141adc5fe7fe> | 2.953125 | 2,047 | Knowledge Article | Science & Tech. | 31.912842 |
Near space is the region of Earth's atmosphere that lies between 65,000 and 325,000–350,000 feet (20 to 100 km) above sea level, encompassing the stratosphere, mesosphere, and the lower thermosphere. This is above where airliners fly but below orbiting satellites. The area is of interest for military surveillance purposes, as well as to commercial interests for communications. Craft that fly in near space include high altitude balloons, non-rigid airships and sounding rockets.
The terms "near space" and "upper atmosphere" are generally considered synonymous. However, some sources distinguish between the two. Where such a distinction is made, only the layers closest to the Karman line are called near space, while only the remaining layers between the lower atmosphere and near space are called the upper atmosphere.
Near space was first explored in the 1930s. The early flights flew to the edge of space without computers, spacesuits, and with only crude life support systems. Notable people who flew in near space were Jean Piccard and his wife Jeannette, on the nearcraft The Century of Progress. Later exploration was mainly carried out by unmanned nearcraft, although there have been skydiving attempts made from high altitude balloons.
Use in space travel
There has been a resurgence of interest in using near space to launch manned spacecraft. Groups like ARCASPACE, as well as the da Vinci Project, are planning to launch manned suborbital space vehicles from high altitude balloons.
Atmospheric phenomena in near space
See also
- Near Space as a Combat Effects Enabler
- United States Air Force
- Lack of Persistent Platforms Hurts US Military
- Near Space Systems
- American Digital Networks
- Space Data Corporation
- The B.H.A.L.D.I. Project
- Bloon Near-space flight | <urn:uuid:bb0475b9-2eeb-4b97-b662-ba7cf888bbe6> | 3.28125 | 405 | Knowledge Article | Science & Tech. | 43.235661 |
Forest Fertilization May be Beneficial
Professors and researchers are studying how fertilization of forests can increase productivity and carbon sequestration as part of the Pine Integrated Network Education, Mitigation and Adaptation Project (PineMap).
Fertilizing yards and crops is a common practice, and it makes one wonder what would happen to forests if they were fertilized. Researchers working on the Pine Integrated Network Education, Mitigation and Adaptation Project (PineMap) intend to find out the answer. Assistant Professor of forest ecosystem science at Texas A&M University, Dr. Jason Vogel, and several other professors and researchers are studying forests to see how fertilization may increase productivity and overall health of those forests.
The entire project is trying to prepare southern pine forest owners for potential climate change, Vogel said. The region in the study is from North Carolina to Oklahoma and Texas, plus everything south. The climate is expected to be warmer, which could induce drought stress on trees. In the southeastern U.S., forests are responsible for 5.5 percent of all the jobs and 7.5 percent of industrial output, he said.
Vogel's primary interest is in the below-ground processes of a forest; he wants to discover how much root mass the trees carry and how soil organisms respond to fertilization and climate. The larger goal is to find the best management scheme that maximizes a forest landowner's investment in a sustainable way.
“Trees are estimated to take up about 13 percent of the carbon dioxide emissions from a region. If they are fertilized, thus growing bigger faster, they can store more carbon in their tissue and in the soil beneath them,” Vogel said.
Through a modeling component of the combined study, Vogel will take what his study finds about the below-ground life of a forest and add it to the other researchers' findings. Part of the project is aimed at letting the smaller landowners with managed forest land know what changes they might make to improve their forest's productivity and resistance to change in climate.
Decisions by small landowners are critical because it is estimated that 65 percent of the forests in Texas are owned by small landowners. The PineMap study will give them the tools needed to help make decisions on the best future avenues to take. | <urn:uuid:04a240a1-efa6-44a1-a62b-30724fd91a0a> | 3.5 | 469 | Truncated | Science & Tech. | 38.561453 |
On August 1, 2008, a total solar eclipse occurred as the new moon moved directly between the sun and the earth. The moon's umbral shadow fell on parts of Canada, Greenland, the Arctic Ocean, Russia, Mongolia, and China. The Exploratorium's eclipse expedition team (our fifth!) Webcast the eclipse live from the remote Xinjiang Uygur Autonomous Region in northwestern China near the Mongolian border.
Project: Solar Eclipse: Stories from the Path of Totality | Browse All
Date: August 1, 2008
Category: Science in Action
Subject(s): Astronomy/Space Science | <urn:uuid:b6d74d1e-6055-437b-b279-3e645cd2ed56> | 2.921875 | 139 | Content Listing | Science & Tech. | 41.568295 |
On Fri, Nov 2, 2012 at 4:39 AM, Allen Rabinovich <allen...
> A beginner mpmath question here: how do I compute a min (or a max) of two
> mpf numbers? Is just the regular python min(a,b) the answer? Why does that
> work properly, given that mpf's are their own datatypes, and min relies on
> standard comparison operators?
Yes, min(a,b) works fine. It works because mpmath classes provide
comparison methods (__ge__, __gt__, etc.) so that the standard
comparison operators (>=, >, etc.) can be used. This is standard for
custom numeric types in Python. | <urn:uuid:f62b8a85-9b92-457b-b206-6ccd8cbcf7b4> | 3.140625 | 161 | Comment Section | Software Dev. | 73.720568 |
Probably the cutest satellite ever. (poster download available at World Space Week site.)
It’s Space Week! One of my favorite weeks!
What is World Space Week?
It is an international celebration of science and technology, and their contribution to the betterment of the human condition. The United Nations General Assembly declared in 1999 that World Space Week will be held each year from October 4-10. These dates commemorate two events:
- October 4, 1957: Launch of the first human-made Earth satellite, Sputnik 1, thus opening the way for space exploration
- October 10, 1967: The entry into force of the Treaty on Principles Governing the Activities of States in the Exploration and Peaceful Uses of Outer Space, including the Moon and Other Celestial Bodies.
Where and how is World Space Week celebrated?
It is open to all. Government agencies, industry, non-profit organizations, teachers and individuals can organize events to celebrate World Space Week. The week is coordinated by the United Nations with the support of the World Space Week Association (WSWA). The WSWA leads a global team of National Coordinators, who organize events within their own countries.
What are the goals of World Space Week?
- Educate people around the world about the benefits that they receive from space
- Encourage greater use of space for sustainable economic development
- Demonstrate public support for space programs
- Excite young people about science
- Foster international cooperation in space outreach and education
Now, for some more lighthearted space fun, a few DIYs that I am all over for Space Week:
Black Holes Get In Early
Artist's conception of galaxy 4C60.07
The Submillimeter Array atop Mauna Kea consists of eight radio dishes observing as one. The dishes can be moved among 24 different locations to give resolution as great as that of a single 500-meter dish. It was built as a partnership between the Smithsonian Astrophysical Observatory and the Institute of Astronomy and Astrophysics in Taiwan. New observations here suggest that Supermassive Black Holes were common in early galaxies.
Two such galaxies were observed in a collision 12 billion years in the past. Galaxy 4C60.07 was first observed because of its bright radio emission. The radio signal is one sign of a rapidly spinning Black Hole. The latest observations reveal a previously unknown companion galaxy preventing 4C60.07 from forming stars. At a time when the Universe was less than 2 billion years old, both galaxies contained Supermassive Black Holes. CfA Press Release
Every galaxy ever found contains at its centre an enormous Black Hole. Even The Farthest Galaxies, formed barely 700 million years after the Big Bang, contain singularities. This is far too early for the Black Holes to have formed from stellar collapse. These Black Holes are likely primordial, formed from quantum fluctuations at a time near the Big Bang. Rapid expansion of the Universe grew the Black Holes to their enormous size, which seeded the formation of galaxies. Once primordial Black Holes were thought to be tiny; their size would be limited by a "horizon distance" related to the speed of light. Discovery of primordial Supermassive Black Holes is one more sign of "c change" in physics.
The Topological Field of Real Numbers
We’ve defined the topological space we call the real number line $\mathbb{R}$ as the completion of the rational numbers $\mathbb{Q}$ as a uniform space. But we want to be able to do things like arithmetic on it. That is, we want to put the structure of a field on this set. And because we’ve also got the structure of a topological space, we want the field operations to be continuous maps. Then we’ll have a topological field, or a “field object” (analogous to a group object) in the category of topological spaces.
Not only do we want the field operations to be continuous, we want them to agree with those on the rational numbers. And since $\mathbb{Q}$ is dense in $\mathbb{R}$ (and similarly $\mathbb{Q}\times\mathbb{Q}$ is dense in $\mathbb{R}\times\mathbb{R}$), we will get unique continuous maps to extend our field operations. In fact the uniqueness is the easy part, due to the following general property of dense subsets.
Consider a topological space $X$ with a dense subset $D$. Then every point $x \in X$ has a sequence $x_n \in D$ with $\lim x_n = x$. Now if $f$ and $g$ are two continuous functions which agree for every point in $D$, then they agree for all points in $X$. Indeed, picking a sequence in $D$ converging to $x$ we have

$$f(x) = f\left(\lim x_n\right) = \lim f(x_n) = \lim g(x_n) = g\left(\lim x_n\right) = g(x)$$
So if we can show the existence of a continuous extension of, say, addition of rational numbers to all real numbers, then the extension is unique. In fact, the continuity will be enough to tell us what the extension should look like. Let’s take real numbers $x$ and $y$, and sequences of rational numbers $x_n$ and $y_n$ converging to $x$ and $y$, respectively. We should have

$$x + y = \lim x_n + \lim y_n = \lim\left(x_n + y_n\right)$$
but how do we know that the limit on the right exists? Well if we can show that the sequence $x_n + y_n$ is a Cauchy sequence of rational numbers, then it must converge because $\mathbb{R}$ is complete.
Given a rational number $\epsilon > 0$ we must show that there exists a natural number $N$ so that $\left|\left(x_m + y_m\right) - \left(x_n + y_n\right)\right| < \epsilon$ for all $m, n \geq N$. But we know that there’s a number $N_x$ so that $\left|x_m - x_n\right| < \frac{\epsilon}{2}$ for $m, n \geq N_x$, and a number $N_y$ so that $\left|y_m - y_n\right| < \frac{\epsilon}{2}$ for $m, n \geq N_y$. Then we can choose $N$ to be the larger of $N_x$ and $N_y$ and find

$$\left|\left(x_m + y_m\right) - \left(x_n + y_n\right)\right| \leq \left|x_m - x_n\right| + \left|y_m - y_n\right| < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$$
So the sequence of sums is Cauchy, and thus converges.
What if we chose different sequences $x'_n$ and $y'_n$ converging to $x$ and $y$? Then we get another Cauchy sequence $x'_n + y'_n$ of rational numbers. To show that addition of real numbers is well-defined, we need to show that it’s equivalent to the sequence $x_n + y_n$. So given a rational number $\epsilon > 0$ does there exist an $N$ so that $\left|\left(x_n + y_n\right) - \left(x'_n + y'_n\right)\right| < \epsilon$ for all $n \geq N$? This is almost exactly the same as the above argument that each sequence is Cauchy! As such, I’ll leave it to you.
So we’ve got a continuous function taking two real numbers and giving back another one, and which agrees with addition of rational numbers. Does it define an Abelian group? The uniqueness property for functions defined on dense subspaces will come to our rescue! We can write down two functions from $\mathbb{R}\times\mathbb{R}\times\mathbb{R}$ to $\mathbb{R}$ defined by $(x+y)+z$ and $x+(y+z)$. Since addition agrees with addition on rational numbers, and since triples of rational numbers are dense in the set of triples of real numbers, these two functions agree on a dense subset of their domains, and so must be equal. If we take the $0$ from $\mathbb{Q}$ as the additive identity we can also verify that it acts as an identity for real number addition. We can also find the negative of a real number $x$ by negating each term of a Cauchy sequence converging to $x$, and verify that this behaves as an additive inverse, and we can show this addition to be commutative, all using the same techniques as above. From here we’ll just write $x+y$ for the sum of real numbers $x$ and $y$.
What about the multiplication? Again, we’ll want to choose rational sequences $x_n$ and $y_n$ converging to $x$ and $y$, and define our function by

$$xy = \lim\left(x_n y_n\right)$$
so it will be continuous and agree with rational number multiplication. Now we must show that for every rational number $\epsilon > 0$ there is an $N$ so that $\left|x_m y_m - x_n y_n\right| < \epsilon$ for all $m, n \geq N$. This will be a bit clearer if we start by noting that for each rational $\eta > 0$ there is an $N_x$ so that $\left|x_m - x_n\right| < \eta$ for all $m, n \geq N_x$. In particular, for sufficiently large $n$ we have $\left|x_n\right| < \left|x_{N_x}\right| + \eta$, so the sequence is bounded above by some $B_x$. Similarly, given $\eta > 0$ we can pick $N_y$ so that $\left|y_m - y_n\right| < \eta$ for $m, n \geq N_y$ and get an upper bound $\left|y_n\right| \leq B_y$ for all $n$. Then choosing $N$ to be the larger of $N_x$ and $N_y$ we will have

$$\left|x_m y_m - x_n y_n\right| \leq \left|x_m\right|\left|y_m - y_n\right| + \left|x_m - x_n\right|\left|y_n\right| < \left(B_x + B_y\right)\eta$$
for $m, n \geq N$. Now given a rational $\epsilon$ we can (with a little work) find $\eta$ and the corresponding $N$ so that the expression on the right will be less than $\epsilon$, and so the sequence $x_n y_n$ is Cauchy, as desired.
Then, as for addition, it turns out that a similar proof will show that this definition doesn’t depend on the choice of sequences converging to $x$ and $y$, so we get a multiplication. Again, we can use the density of the rational numbers to show that it’s associative and commutative, that $1$ serves as its unit, and that multiplication distributes over addition. We’ll just write $xy$ for the product of real numbers $x$ and $y$ from here on.
To show that $\mathbb{R}$ is a field we need a multiplicative inverse for each nonzero real number. That is, for each Cauchy sequence of rational numbers $x_n$ that doesn’t converge to $0$, we would like to consider the sequence $\frac{1}{x_n}$, but some of the $x_n$ might equal zero and thus throw us off. However, there can only be a finite number of zeroes in the sequence or else $0$ would be an accumulation point of the sequence and it would either converge to $0$ or fail to be Cauchy. So we can just change each of those to some nonzero rational number without breaking the Cauchy property or changing the real number it converges to. Then another argument similar to that for multiplication shows that this defines a function from the nonzero reals to themselves which acts as a multiplicative inverse.
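To make the construction concrete, here is a small computational sketch (an illustration added to this post, not part of the original argument; the class name and the approximation indices are arbitrary choices). It represents a real number by a rule producing rational approximations, with sum and product defined termwise exactly as in the proofs above:

from fractions import Fraction

class CauchyReal:
    # A real number, represented by a Cauchy sequence: index n -> rational term
    def __init__(self, seq):
        self.seq = seq  # function from index n to a Fraction

    def __add__(self, other):
        # termwise sum of the two sequences, as in the proof above
        return CauchyReal(lambda n: self.seq(n) + other.seq(n))

    def __mul__(self, other):
        # termwise product of the two sequences
        return CauchyReal(lambda n: self.seq(n) * other.seq(n))

    def approx(self, n):
        return self.seq(n)

def sqrt2_term(n):
    # Newton's iteration on rationals: a Cauchy sequence converging to sqrt(2)
    x = Fraction(1)
    for _ in range(n):
        x = (x + 2 / x) / 2
    return x

root2 = CauchyReal(sqrt2_term)
print(float((root2 * root2).approx(6)))  # ~2.0: the termwise product converges to 2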
g++

How to compile a file? Say you have test.cc and test.hh. Now, to compile and link them:
> g++ -c test.cc
> g++ -o test test.o

Then you can run the executable "test" you got. GCC docs
- -o output file
- -c compile but not link
- -I, -L header include and library search paths
- -l link a lib
- -g insert debugging info in your executable
- -Wall turn on warnings
- -fPIC position independent code (doesn't always work)
- -O optimize
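For instance, a typical invocation combining several of these flags might look like this (an illustrative command added to these notes; the directory names are made up):

> g++ -Wall -O2 -g -Iinclude -Llib -o test test.cc -lm

This compiles test.cc with all warnings and optimization enabled, adds debugging info, searches include/ for headers and lib/ for libraries, links the math library, and names the executable "test".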
Makefile

The easiest one:
# this is an example of Makefile
test: test.o
        g++ -o test test.o

test.o: test.cc test.hh
        g++ -c test.cc

clean:
        rm test test.o

(Each command line under a target must be indented with a tab character.) Something that may be useful is to tell make how to compile all, say, .cc files:
%.o: %.cc
        g++ -c $< $(INCLUDES)

The variable "$<" refers to the dependency (the prerequisite), and "$@" refers to the target.
I stole this longer example from http://www.pma.caltech.edu/~physlab/ph21_winter06/make.html, to show that, besides compiling, a Makefile can also do automated work for you:
# Makefile
CPP = g++
CPPFLAGS = -g -Wall
LDFLAGS = -lm

# If output data is up to date, plot it.
plot: output-data
        xmgrace output-data

# If output data is not up to date, recreate it by running the program.
output-data: program input-data
        ./program

# Compile the program.
program: program.cpp
        $(CPP) $(CPPFLAGS) -o program program.cpp $(LDFLAGS)

# Remove generated files.
clean:
        rm -f program output-data
GNU Auto-tools

I would recommend this documentation: GNU Autobook. Below are some shallow examples I have tested.
GNU Autoconf and Automake etc.

Most of this part comes from http://www.seul.org/docs/autotut/
When your project grows large, it is almost impossible to write all the Makefiles by hand. Therefore we need some automatic tool to do the job with minimum input. The production line for automake and autoconf is as follows.
Basically, autoconf will use configure.in to generate the configure script; automake will use Makefile.am to generate Makefile.in. Then configure will check your environment and use Makefile.in to generate Makefile.
An example for autoconf and automake

In general, all you need to write is a configure.in in the top dir of your package, and a Makefile.am in each dir within the package. Alternatively, you can run the perl script "autoscan" to get a primitive version of configure.in, which you may then edit to best fit your actual needs.
Here is an example of a simple project; there is only one source file, and the structure looks like this:
.
|-- AUTHORS
|-- COPYING
|-- ChangeLog
|-- INSTALL
|-- Makefile.am
|-- NEWS
|-- README
|-- configure.in
`-- src
    |-- Makefile.am
    |-- test.cc
    `-- test.hh

The configure.in is shown below. The important lines are AC_INIT, AM_INIT_AUTOMAKE, AC_CONFIG_FILES, and AC_OUTPUT:
# Process this file with autoconf to produce a configure script.
AC_INIT(src/test.cc)
AM_INIT_AUTOMAKE(test, 0.0.1)
AM_CONFIG_HEADER(config.h)

# Checks for programs.
AC_PROG_AWK
AC_PROG_CXX
AC_PROG_CC
AC_PROG_INSTALL
AC_PROG_LN_S

AC_CONFIG_FILES([Makefile src/Makefile])
AC_OUTPUT

The Makefile.am looks like:
# used by automake
SUBDIRS = src

Then src/Makefile.am looks like:
# used by automake
bin_PROGRAMS = test
test_SOURCES = test.cc
noinst_HEADERS = test.hh

To generate the configure script and Makefile.in, you need to run the following commands from the "auto" family:
aclocal
autoheader
autoconf
automake --add-missing --copy

(The --add-missing --copy options belong to automake; they copy in any missing auxiliary files.) After this, you know how to install it:
./configure --prefix=<somewhere>
make
make install

There are some standard targets in the Makefile you get from automake; the useful ones include make clean, make dist (generates the tar-ball for your distribution), make distclean, etc.
Aclocal and m4 scripts

If you want more complicated features in the configure script, you can try aclocal. This tool generates the file aclocal.m4, an m4 script that will be seen by autoconf. For example, if you want to test for a special package or echo some information in configure, you can do that in an m4 script acinclude.m4, which will be included automatically by aclocal; or you can put your scripts in a dir and include them like this:
aclocal -I <dir>

aclocal will search for .m4 scripts in that dir.
libtoolize

to be continued ...
Compiling using MinGW under Windows Vista
1. MinGW must be located on the same partition as the files you want to compile.
2. The env variable %GCC_EXEC_PREFIX% needs to be set to the MinGW path.
Standard directories in automake
libtool and .la files
dump gcc predefined macros
gcc -dM -E - < /dev/null | <urn:uuid:e2039a6b-5d96-4df0-a1f2-04a61b5ee26b> | 3.21875 | 1,259 | Documentation | Software Dev. | 57.358227 |
In the following list of Scheme primitives, the argument name denotes the name of a troff request, macro, escape sequence etc. (without any initial period or escape character) and can be supplied in form of a Scheme string, a Scheme symbol, or a Scheme character:
(defrequest "ti" ...) (defrequest 'sp ...) (defescape #\h ...)
Associates the given handler with the given troff request. If handler is a procedure, it is passed the request's name and arguments as strings when called later. Passing the name of the request as the first argument aids in associating the same procedure with several different requests. unroff does not limit the number of arguments to requests; thus, an event handling procedure for a request that takes a variable number of arguments could be defined like this:
(defrequest 'rm (lambda (rm . args) ...))
If the request is invoked with fewer arguments than the procedure has formal arguments, the remaining arguments are bound to the empty string. If the request is invoked with more arguments than the procedure has formal arguments, the last lambda variable is assigned a string consisting of the (space-delimited) arguments left over after the other formal arguments have been bound to the other actual arguments. However, if handler has only one formal argument, an error message is displayed when the request is called with any arguments at all and the event is skipped. For example, consider the following handler for the (non-existing) request ``xx'':
(defrequest 'xx (lambda (name a b) ...))
.xx foo              name="xx"  a="foo"  b=""
.xx foo bar baz      name="xx"  a="foo"  b="bar baz"
Associates handler with the given troff macro, superseding any definition for this macro established by the ordinary ``.de'' request. The only difference between defrequest and defmacro is the way arguments are bound in case handler is a procedure (troff employs slightly different rules when parsing the call to a request and a macro invocation). The quote character can be used in the latter case to surround arguments containing spaces, while quote characters are treated as normal characters in requests, which allows for the following remarkable troff idiom:
.ds xy "hello
Associates handler with the special character whose name is name. The name must have a length of 2. In addition, an empty name can be specified to define a ``fallback'' handler that is called for special characters for which no handler exists. Like all event handler procedures, handler can have arbitrary side-effects in addition to returning a result; for example, the procedure may display a warning message if the special character cannot be represented in the target language and an approximation must be rendered instead.
Associates a handler with the specified troff string. As unroff provides a default handler for the request ``.ds'' to implement used-defined strings, defstring is primarily used to give definitions for strings exported by troff macro packages.
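For instance, a string exported by a macro package could be bound like this (a hypothetical binding made up for illustration, assuming string handlers receive the string's name just as request handlers do):

(defstring 'DY (lambda (name) "29 June 1994"))

Any later use of the string (e.g. \*(DY) would then be replaced by the procedure's result.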
This request behaves like defstring, except that it works on number registers. Note that the Scheme primitive number->string may have to be used by handler (if it is a procedure) to convert a numeric result into a string that can be returned from the handler.
In troff input, number registers as well as strings, special characters, and escape sequences can be denoted using the groff ``long name'' syntax, unless troff compatibility has been enabled:
\n[numreg] \*[string] \f[font] \[em] ...
Associates an event handler with an escape sequence. name must have a length of 1, unless the empty string is given to define a ``fallback'' event handler (as with defspecial). Handlers defined for certain escape sequences are passed a second argument in addition to the name of the escape sequence. This is true for all escape sequences that have an argument according to the troff specification:
\b \c \f \h \k \l \n \o \s \v \w \x \z \* \$ \"
\A \C \L \N \R \V \Y \Z
Handlers registered for the escape sequences `\n' and '\s' are passed an optional third argument, one of the Scheme characters #\+ and #\-, if the escape sequence argument begins with a sign. The sign is then stripped from the actual argument.
As `\n' and `\*' are treated as ordinary escape sequences, handlers can be defined for them to achieve some form of fallback for number register and strings. unroff provides suitable default handlers for `\n', `\*', and '\$' as part of the implementation of user-defined number registers, strings, and macros. These handlers can be overridden if desired.
Associates handler with a character. name must have a length of 1. Each time the specified character is encountered in the troff input, the result (or value) of handler is output in place of the character. Character translations are not applied to the result of event handlers; event procedures can use the Scheme primitive translate (as described below) to execute the character translations established by calls to defchar if desired.
defchar currently has a number of weaknesses. The argument cannot be a special character (that is, name must be a plain character), and the mechanism cannot be used to achieve true output translations as with the troff request ``.tr'' or the groff request ``.char''.
Defines a handler to be consulted on end of sentence. If handler is a procedure, it is passed the punctuation mark ending the sentence as its argument (in form of a Scheme character). In any case, if an event handler has been specified, its result (or value) is output in place of the end-of-sentence mark and the newline character following it.
Defines a handler for eqn inline equations. If handler is a procedure, it is passed the contents of the inline equation (with the delimiters stripped) as an argument. When an inline equation is encountered in the troff input and a handler has been defined for inline equations, the handler's result (or value) is output in place of the equation.
For inline equations to be recognized, delimiters must be defined first by passing eqn input that includes a ``delim'' directive to the Scheme primitive filter-eqn-line (explained below), as is usually done by the event handler associated with the request ``.EQ''. | <urn:uuid:3bc72565-3174-47b1-8024-9950b46ed68f> | 2.828125 | 1,374 | Documentation | Software Dev. | 46.919089 |
first radiation constant
What is the First Radiation Constant?
The first radiation constant, c1, is used in Planck's law. It has SI units of watt square metres (W·m²). It is given by the equation:
c1 = 2π·h·c²
where h is the Planck constant and c is the speed of light. Its value is 3.741771×10⁻¹⁶ W·m².
Planck's law gives the radiation intensity emitted by a black body for a given wavelength and temperature. It has the form:
u = c1·λ⁻⁵ / (e^(c2/(λT)) − 1)
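As a quick numerical check (an illustrative calculation added to this page):

import math

h = 6.62607015e-34   # Planck constant, J s
c = 299792458.0      # speed of light, m/s

c1 = 2 * math.pi * h * c ** 2
print(c1)  # ~3.7417718e-16 W m^2, matching the value quoted above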
As Dr. Heidi Cullen reports, the suffocating heat comes on the heels of the government's release of the new climate "normals". Every 10 years, scientists from the National Oceanic and Atmospheric Administration calculate the 30-year averages for temperature and precipitation from thousands of U.S. locations.
National Hurricane Center scientist Jack Beven explains the technology used to forecast and track hurricanes as they happen.
This year's Atlantic hurricane season is expected to be busier than usual and the 2011 summer forecast calls for some extreme weather. Heidi Cullen makes the climate connection.
With what feels like an especially long winter coming to an end, Dr. Heidi Cullen gives a climate outlook for spring 2011.
The currents around the equator in the Pacific Ocean are cooler than average this year, which means we are experiencing the phenomenon known as La Niña. This can bring good weather conditions, or poor ones, depending on where you live and your point of view. Dr. Heidi Cullen explains.
Dr. Heidi Cullen has a look behind the numbers of the just-released NOAA temperature analysis, and reports on the 2010 hurricane season as it passes the midway mark.
Take a look at the summer of 2010, with record breaking heat and severe weather.
Dr. Jack Beven explains his work at the National Hurricane Center, where he helps issue hurricane warnings that keep people out of harms way. | <urn:uuid:37b30292-0ac5-48a8-9bb8-29438f03b362> | 3.25 | 280 | Truncated | Science & Tech. | 52.249858 |
On 23 September 1999 the Mars Climate Orbiter, a space probe launched on 11 December 1998 and designed to go into orbit around Mars, was lost after its rockets were fired correctly but steered the probe too close to Mars, causing it to overheat.
A subsequent investigation revealed that the engineering team that designed the rocket system had used imperial (English) units - foot and pound - for its calculations, while the NASA engineers responsible for the programming had been working on the basis of SI units - metre and kilogram.
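The size of such a mix-up is easy to see numerically. The sketch below is an illustration added to this article (the mishap actually involved thruster impulse data produced in pound-force seconds and read as newton seconds; the specific numbers here are invented):

LBF_S_TO_N_S = 4.448222               # 1 pound-force second in newton seconds

impulse_produced = 100.0              # value computed in lbf*s
impulse_as_read = impulse_produced    # read as N*s -- no conversion applied!
impulse_correct = impulse_produced * LBF_S_TO_N_S

print(impulse_correct / impulse_as_read)  # ~4.45: every estimate off by this factor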
The $125 million Climate Orbiter was to relay data from an upcoming partner mission called Mars Polar Lander, which was scheduled to set down on Mars in December following Climate Orbiter's arrival. The two space probes were designed to help understand Mars' water history and the potential for life in the planet's past.
The USA are a member of the Metre Convention of 1875 and thus obliged to use the SI system in science. In 1975 Congress passed the Metric Conversion Act for the purpose of finally introducing the metric system into daily life and economic transactions. The Act set up a Metric Conversion Board but provided no binding timetable.
The argument of the business community in the USA against conversion to metric is twofold. It is argued that the great size of the internal market obviates the need for identical measures with other countries, and that the introduction of the metric system would open the USA market to foreign competition.
As a result the science industry - for example the national space organization NASA - does all its calculations in SI units but builds its hardware in imperial units. Occasional errors such as the one that led to the loss of the Mars Climate Orbiter will therefore occur again.
Image: public domain (US government) | <urn:uuid:cd9fd78b-5223-46ac-b6d8-01e14a3a5fc8> | 3.671875 | 354 | Knowledge Article | Science & Tech. | 32.571413 |
hydrogen peroxide, chemical compound, H2O2, a colorless, syrupy liquid that is a strong oxidizing agent and, in water solution, a weak acid. It is miscible with cold water and is soluble in alcohol and ether. Although pure hydrogen peroxide is fairly stable, it decomposes into water and oxygen when heated above about 80°C; it also decomposes in the presence of numerous catalysts, e.g., most metals, acids, or oxidizable organic materials. A small amount of stabilizer, usually acetanilide, is often added to it. Hydrogen peroxide has many uses. It is available for household use as a 3% (by weight) water solution; it is used as a mild bleaching agent and medicinally as an antiseptic. The 3% solution is sometimes called ten volume strength, since one volume of it releases ten volumes of oxygen when it decomposes. Hydrogen peroxide is available for commercial use in several concentrations. Highly concentrated solutions were first used in World War II by the military, e.g., in fuels for rockets and torpedoes. It is used as a bleaching agent for textiles, e.g., wool and silk, and in paper manufacture. It is also used in chemical manufacture. Hydrogen peroxide is prepared commercially by oxidation of alkylhydroanthraquinones and by electrolysis of ammonium bisulfate. It can also be prepared by reaction of barium peroxide with sulfuric acid and is prepared (with acetone) by oxidation of isopropanol. Hydrogen peroxide was discovered (1818) by L. J. Thenard.
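The "ten volume" figure can be checked with simple stoichiometry; hydrogen peroxide decomposes as 2 H2O2 → 2 H2O + O2. Below is a worked example added to this entry (it assumes a solution density of about 1.01 g/mL and the ideal-gas molar volume of 22.4 L/mol at 0°C):

grams_solution = 1010.0                 # one litre of solution at ~1.01 g/mL
grams_h2o2 = 0.03 * grams_solution      # 3% by weight
moles_h2o2 = grams_h2o2 / 34.01         # molar mass of H2O2, g/mol
moles_o2 = moles_h2o2 / 2               # 2 H2O2 -> 2 H2O + O2
litres_o2 = moles_o2 * 22.4             # ideal-gas molar volume, L/mol

print(litres_o2)   # ~10 L of O2 per litre of solution: 'ten volume strength'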
Explains the nuts and bolts of HTML (HyperText Markup Language), the programming language used to create web pages, and provides an introduction to HTML5 and CSS.
A complete tutorial and reference for C and C++, from variables to functions and loops, including the C Standard Library and the C++ Standard Template Library.
Looks at cross-document messaging, both within a single domain and across one or more domains, using the HTML5 Messaging API.
Describes how to create editable content on the web using the document-editing application programming interface (API) in HTML5.
Describes how to perform background processing using the Web Workers API in HTML5.
Shows how to architect PostgreSQL databases and integrate them into web applications using PHP.
Prevent page refreshes when updating parts of a page, and make navigation more efficient using the enhancements to the Session History API provided with HTML5.
Save application data such as preferences or form data in the client's browser and use it in applications, including those running offline.
Covers SQLite’s major features in the context of the PHP environment.
Demonstrates how to use Python 3 to create well-designed scripts and maintain existing projects.
Explains the fundamentals of simple and complex programming in Perl 5.
Demonstrates the tools to create and maintain a database in clear, concise tutorials for users new to the program. | <urn:uuid:3949a571-b081-4835-a75c-96f55d5f5277> | 3 | 290 | Content Listing | Software Dev. | 40.216012 |
WRITE down everything you know about Pluto, the ninth planet in our solar system. Odds are, you won't get very far. Even if you are S. Alan Stern, one of the world's foremost experts on planetary science.
"I can tell you everything we know for sure about Pluto on about three 3-by-5 file cards," says Stern, who heads the Southwest Research Institute's space studies department in Boulder, Colorado. "That leaves a lot of room for discovery."
Small wonder then that the excitement was palpable as Stern's team gathered at Cape Canaveral, Florida, in late October. It was the last formal meeting before next month's long-awaited launch of NASA's New Horizons, the first mission to explore Pluto and the outer solar system. The scientists were particularly jazzed because a separate team of astronomers led by Stern had reported just days earlier that in addition to its large moon, Charon, Pluto has ...
principles behind nucleic acid isolation - need this for additional lecture information (Aug/02/2005 )
I really need some information on the actual principles behind the following procedures used to isolate nucleic acids: alkaline lysis, phenol-chloroform extraction, cesium chloride and silicon dioxide (silica).
What actually happens? why it happens ...the chemistry details ...........
I am not going to answer all your questions, but the basis of CsCl centrifugation is differences in density between, RNA, DNA and protein. CsCl is used because it forms a spontaneous gradient when centrifuged of the right density to separate the above three fractions.
Try to ask specific questions... it would be easier to answer a particular point than to cover all the chemistry of alkaline lysis, for example...
Search at google | <urn:uuid:4903033b-d0f4-4ea0-b174-c420bbdb0750> | 3.40625 | 178 | Comment Section | Science & Tech. | 47.962257 |
The first atomic clock, Caesium I, was designed by Louis Essen and built at the National Physical Laboratory in Teddington in 1955. Although it was not the first machine to use atoms for timekeeping, it was the first to keep time better than the best pendulum or quartz clocks.
It was also the first clock whose timekeeping was significantly more constant than the rotation of the Earth. Modern atomic clocks are even more accurate than Caesium I and time is now defined in terms of atoms rather than the Earth's motion.
All mechanical clocks work by counting the vibrations of something which has a constant frequency such as a pendulum. Unfortunately, the frequency of a pendulum is not perfectly constant. It is affected by changes in temperature, air pressure and the strength of gravity. This causes the clock to run too quickly or too slowly. The frequencies measured by atomic clocks are much higher than those of a pendulum but vary much less, so atomic clocks keep time much better. Caesium I was so accurate that it would only gain or lose one second in three hundred years. Modern atomic clocks are even more accurate.
It is difficult to imagine that we need to measure time so accurately but it is essential for many aspects of modern life which we take for granted. Modern navigation systems, mobile telephones, digital television and many industrial and commercial activities need this level of accuracy.
Louis Essen was born in Nottingham in 1908. After graduating from Nottingham University in 1928 with a first class honours degree, he was invited to join the National Physical Laboratory (NPL). There, he worked on the development of quartz crystal oscillators, which could measure time as accurately as the best pendulum clocks. By 1938, he had developed the Essen ring. This referred to the shape of the piece of quartz used in his new clock, which was three times as accurate as the earlier versions.
During the Second World War, his work on high-frequency radar led him to develop the cavity resonance wavemeter, which he used from 1946 with Albert Gordon-Smith to measure the speed of light. Recent work has shown that Essen's measurement was much more accurate than earlier attempts had been.
In the early 1950s, Essen became interested in research being conducted in the USA, especially at the National Bureau of Standards (NBS). This was concerned with the possibility of producing a highly accurate clock based on the radiation emitted or absorbed by atoms. Essen realised that the ammonia molecule, which the NBS had used in a clock in 1948, was not ideally suited for this purpose and based his clocks instead on atoms such as hydrogen, rubidium and caesium.
In 1953, Louis Essen and Jack Parry were given approval to produce an atomic clock at the NPL. At the time, they had little experience of atomic clocks but Essen's knowledge of quartz oscillators and microwave resonators enabled them to have Caesium I running in 1955. Political difficulties in the USA had almost stopped research into atomic clocks at the NBS, so Caesium I became the world's first source of atomic time. By 1964, Essen had reduced the timekeeping errors from one second in 300 years to about one second in 2000 years. In 1967, atomic clocks had become so accurate that the second was redefined as the time taken for 9192631770 cycles of the radiation corresponding to the hyperfine transition of the ground state of caesium-133. The original definition of the second as 1/86400 of a mean solar day was abandoned because the Earth's motion was not reliable enough.
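To put these numbers in perspective, a quick illustrative calculation (added here, not part of the museum text):

cycles_per_second = 9192631770          # SI definition of the second (caesium-133)
print(1.0 / cycles_per_second)          # ~1.09e-10 s: each counted microwave cycle

seconds_in_300_years = 300 * 365.25 * 24 * 3600
print(1.0 / seconds_in_300_years)       # ~1.1e-10: Caesium I's fractional error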
All the gory details .....
First, two protons collide and form a deuterium nucleus (one proton + one neutron). This is not as simple as it sounds. Only one collision in ten trillion trillion actually produces deuterium. In fact, the average proton must wait some 10 billion years to take part in the proton-proton chain.
During this collision, a positron and a neutrino are released. A positron is the anti-matter equivalent of an electron (an electron with a positive charge). As soon as the positron encounters an electron, the two annihilate one another to form two gamma rays. The neutrino has virtually no mass and passes right through the sun and out into space.
The deuterium nucleus collides with a proton in less than 1 second. The two form a light helium nucleus which is made up of two protons and one neutron. Another gamma ray is released. The gamma rays are carrying away the energy that results from the conversion of mass to energy.
On average, about a million years elapses before two light helium nuclei collide to form a regular helium nucleus (made up of 2 protons and 2 neutrons) with the release of two protons.
Final tally: 4 protons --> 1 He nucleus + 6 gamma rays + 2 neutrinos | <urn:uuid:63b52246-6507-41e9-9ec9-660e5754ebea> | 3.75 | 278 | Knowledge Article | Science & Tech. | 58.213722 |
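The energy released follows from the mass deficit (a back-of-the-envelope check added to this page, using standard rest energies; the total includes the two positron annihilations mentioned above):

m_p = 938.272       # proton rest energy, MeV
m_he4 = 3727.379    # helium-4 nucleus rest energy, MeV
m_e = 0.511         # electron/positron rest energy, MeV

# 4 p -> He-4 + 2 e+ + 2 neutrinos ...
released = 4 * m_p - m_he4 - 2 * m_e    # energy freed by the fusion steps
released += 2 * 2 * m_e                 # ... plus two e+/e- annihilations
print(released)                          # ~26.7 MeV per helium nucleus formed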
Item: Snow cornice development and failure monitoring
Title: Snow cornice development and failure monitoring
Proceedings: Proceedings of the 2006 International Snow Science Workshop, Telluride, Colorado
Authors: Robert A. Burrows and David M. McClung, Avalanche Research Group, Department of Geography, University of British Columbia
Abstract: Snow cornices are significant winter alpine and snow avalanche hazards. A cornice is a leeward growing mass of snow overhanging from a ridge or sharp break in slope (perpendicular to the ridgeline) due to windblown snow (Seligman, 1936). A cornice’s usual topographic position and cantilevered slab structure, regularly above large avalanche prone slopes, makes cornices crucial to consider from a risk perspective. Although cornices are extensively controlled in avalanche operations, they have been given little attention in science. Only basic work has been done on their formation, development, structure and control, with limited focus and geographic extent (Kobayashi, 1988, McCarty et al., 1986, McClung and Schaerer, 1993, Montagne et al., 1968). Although significant work has been done on snow slab fracture mechanics, no work has been done on cornice failure and fracture mechanics. Three meteorologically-related triggers of cornice failure have been recognized: 1. Snow loading of the cornice overhang during storm and wind events; 2. abrupt temperature changes at the surfaces of the cornice due to a) abrupt warming or cooling air temperatures, b) rain-on-snow events, c) heating by solar radiation; and 3. seasonal warming/prolonged midwinter warm periods. Failure from snow loading results when newly fallen or wind-blown snow accumulates on the cornice overhang at a rate that induces stresses that exceed the strength/fracture toughness of the cornice. Creep fracture is thought to be the primary mechanism for failure of this type. Rain-on-snow events (and the concurrent abrupt warming) on dry snow slabs are shown to immediately increase the creep rate in the surface layer of the snow slab resulting in decreased slab stability (Conway, 1998). This immediate effect may similarly influence cornice stability, as well as the delayed time effects of increased loading from rainfall and weakening due to longer-term warming. Abrupt changes in air temperature alone are thought to be a trigger of dry snow slab avalanches (McClung, 1996). Further circumstantial and anecdotal evidence suggests that cooling and warming of the snow surface from abrupt changes in air temperature and heating from solar radiation initiate slab avalanches and cornice failures. I intend to monitor the development, physical properties (dimensions/geometry, structure, densities), fracture, and failure of a study cornice along with meteorological parameters on a ridge near Kootenay Pass, BC during two winter seasons. Monitoring the three-dimensional time-lapse development of the cornice will be achieved using periodic photography and terrestrial photogrammetry methods (Eos Systems Inc., 2004). Deformation and temperature inside the cornice will be measured using glide shoe-type instrument packages (Conway, 1998). These measurements will include linear displacement, tilt, and snow temperature. A nearby weather station will monitor air temperature, relative humidity, wind, snow depth, and radiation. The goal is to develop a numerical model based on these data in order to quantify and predict cornice deformation and failure.
Keywords: cornice failure, monitoring, slab fractures, solar radiation
Digital Abstract Not Available | <urn:uuid:9dd7c123-75a4-4dc4-bbd1-676cd1900881> | 3.203125 | 725 | Academic Writing | Science & Tech. | 20.584133 |
NASA Visualization Captures Record Year for Wildfires in the U.S.
The total affected area, which is depicted in a new NASA map, is already the third-largest since records were first kept in 1960, and will likely break previous records by year’s end. The most intense fires occurred in the western U.S., where several major fires during the early summer — sparked by a combination of drought, light winter snow pack, and the long-term effects of climate change — forced evacuations in some areas. In the visualization, which shows all fires that occurred between Jan. 1 and Oct. 31, areas of yellow and orange indicate larger and more intense fires, while many of the less intense fires, shown in red, represent prescribed burns started for brush clearing or agriculture and ecosystem management. The visualization was based on data collected by NASA satellites.
Article appearing courtesy Yale Environment 360.
Soon after forming, Hurricane Emilia strengthened over the eastern Pacific Ocean in early July 2012. At 2:00 p.m. Pacific Daylight Time on July 9, 2012, the U.S. National Hurricane Center (NHC) reported that Emilia was a strong Category 2 hurricane, with maximum sustained winds of 110 miles (175 kilometers) per hour. Twelve hours later, the NHC reported that Emilia was now a Category 4 hurricane with winds of 140 miles (220 kilometers) per hour.
The Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite captured this natural-color image at 11:25 a.m. PDT on July 9, when Emilia was a Category 2 storm. It was located roughly 680 miles (1,095 kilometers) south of the southern tip of Baja California.
Hurricane Emilia posed no hazards to land as it was moving westward over the open Pacific Ocean, the NHC stated. As Emilia strengthened, another storm in the eastern Pacific, Daniel, gradually weakened. The NHC forecast that Emilia would also slowly weaken, probably starting on the evening of July 10.
- National Hurricane Center. (2012, July 10) Hurricane Advisory Archive. Accessed July 10, 2012.
NASA image courtesy Jeff Schmaltz, LANCE MODIS Rapid Response Team, Goddard Space Flight Center. Caption by Michon Scott.
- Terra - MODIS | <urn:uuid:3b48a981-8bc6-4b92-b6f5-0870811ff599> | 2.828125 | 294 | Knowledge Article | Science & Tech. | 50.520536 |
Effect of Atmospheric Variability and Aircraft Flight Parameters on the Refraction of Sonic Booms
Kästner, Martina and Heimann, Dietrich (2010) Effect of Atmospheric Variability and Aircraft Flight Parameters on the Refraction of Sonic Booms. Acta acustica united with Acustica, 96, pp. 425-436. DOI: 10.3813/AAA.918295.
Full text not available from this repository.
This study deals with the potential audibility of sonic booms from supersonic aircraft at the ground as a consequence of the aircraft flight parameters and the atmospheric variability. A ray tracing model is used to decide whether a sonic boom emitted downwards by a high flying supersonic aircraft hits the ground or is refracted upwards before it reaches the ground. Aircraft altitude, speed, and flight direction are systematically changed within realistic ranges. The meteorological data rely on a homogeneous reanalysis dataset for eleven years over the domain of Europe with a very high vertical resolution. The cases of sonic booms hitting or not hitting the ground are identified for various situations and respective frequency distributions are derived. In addition, the angle of incidence of rays arriving at the ground and the turning-level height of rays not reaching the ground are studied as the intensity of sonic booms also depends on these parameters. It turned out that sonic-boom rays are refracted such that they do not reach the ground if the flight altitude is rather high (long propagation path) and the aircraft speed remains between Mach numbers of 1.05 and 1.25. Sonic-boom rays not reaching the ground are furthermore possible if the aircraft heading is mainly upwind, the temperature mostly increases along the propagation direction of the ray, and a wind speed maximum exists below flight level. The probability of sonic-boom rays not reaching the ground generally increases from north to south, but also depends on the season.
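The refraction mechanism described in the abstract can be sketched with a simple ray-acoustics rule (an illustrative model added to this record, not from the paper itself: it uses the standard Snell invariant cosθ/c = const along a ray in a stratified atmosphere, and the sound speeds below are assumed numbers):

import math

def ray_reaches_ground(c_flight, c_ground, theta_flight_deg):
    # Snell's law for a stratified atmosphere: cos(theta)/c is conserved.
    # A downward ray is refracted back upward (never reaching the ground)
    # if the conserved quantity would require cos(theta) > 1 at ground level,
    # i.e. if the effective sound speed at the ground is too high.
    invariant = math.cos(math.radians(theta_flight_deg)) / c_flight
    return invariant * c_ground <= 1.0

# Cool flight level (295 m/s) over a warmer surface (340 m/s):
print(ray_reaches_ground(295.0, 340.0, 40.0))  # True: a steep ray hits the ground
print(ray_reaches_ground(295.0, 340.0, 10.0))  # False: a shallow ray turns upward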
|Title:||Effect of Atmospheric Variability and Aircraft Flight Parameters on the Refraction of Sonic Booms|
|Journal or Publication Title:||Acta acustica united with Acustica|
|In ISI Web of Science:||Yes|
|Page Range:||pp. 425-436|
|Keywords:||sonic boom, propagation, meteorology|
|HGF - Research field:||Aeronautics, Space and Transport|
|HGF - Program:||Aeronautics|
|HGF - Program Themes:||L VU - Air Traffic and Environment|
|DLR - Research area:||Aeronautics|
|DLR - Program:||L VU - Air Traffic and Environment|
|DLR - Research theme (Project):||L - Quiet Air Traffic|
|Institutes and Institutions:||Institute of Atmospheric Physics > Atmospheric Dynamics|
|Deposited By:||Dr.rer.nat.hab. Dietrich Heimann|
|Deposited On:||01 Jun 2010 13:22|
|Last Modified:||04 Apr 2013 16:22|
Repository Staff Only: item control page | <urn:uuid:9a5a2740-f2a9-4b24-b404-607195a0920b> | 2.75 | 651 | Academic Writing | Science & Tech. | 39.367826 |
So the wave moves like a wave: it moves up and down, up and down. But how do photons move? Do they follow the same path, or do they just go straight forward without oscillating?
The question is a bit tricky. Actually, a photon is the electromagnetic wave. Photons are quanta of the field. You can imagine a photon as the fact that there is an oscillation in the field.
However, if you are in an approximation that allows you to treat photons as particles, then I would say that they don't oscillate; they just move on at light speed.
I know the answer is not complete; I should explain better how photons can be defined, but perhaps someone can do it better than me.
An Introduction to Bayes’ Theorem
Bayes’ Theorem is a theorem of probability theory originally stated by the Reverend Thomas Bayes. It can be seen as a way of understanding how the probability that a theory is true is affected by a new piece of evidence. It has been used in a wide variety of contexts, ranging from marine biology to the development of "Bayesian" spam blockers for email systems. In the philosophy of science, it has been used to try to clarify the relationship between theory and evidence. Many insights in the philosophy of science involving confirmation, falsification, the relation between science and pseudosience, and other topics can be made more precise, and sometimes extended or corrected, by using Bayes’ Theorem. These pages will introduce the theorem and its use in the philosophy of science.
Begin by having a look at the theorem, displayed below. Then we'll look at the notation and terminology involved.

P(T|E) = P(E|T)·P(T) / [ P(E|T)·P(T) + P(E|~T)·P(~T) ]
In this formula, T stands for a theory or hypothesis that we are interested in testing, and E represents a new piece of evidence that seems to confirm or disconfirm the theory. For any proposition S, we will use P(S) to stand for our degree of belief, or "subjective probability," that S is true. In particular, P(T) represents our best estimate of the probability of the theory we are considering, prior to consideration of the new piece of evidence. It is known as the prior probability of T.
What we want to discover is the probability that T is true supposing that our new piece of evidence is true. This is a conditional probability, the probability that one proposition is true provided that another proposition is true. For instance, suppose you draw a card from a deck of 52, without showing it to me. Assuming the deck has been well shuffled, I should believe that the probability that the card is a jack, P(J), is 4/52, or 1/13, since there are four jacks in the deck. But now suppose you tell me that the card is a face card. The probability that the card is a jack, given that it is a face card, is 4/12, or 1/3, since there are 12 face cards in the deck. We represent this conditional probability as P(J|F), meaning the probability that the card is a jack given that it is a face card.
(We don’t need to take conditional probability as a primitive notion; we can define it in terms of absolute probabilities: P(A|B) = P(A and B) / P(B), that is, the probability that A and B are both true divided by the probability that B is true.)
Using this idea of conditional probability to express what we want to use Bayes’ Theorem to discover, we say that P(T|E), the probability that T is true given that E is true, is the posterior probability of T. The idea is that P(T|E) represents the probability assigned to T after taking into account the new piece of evidence, E. To calculate this we need, in addition to the prior probability P(T), two further conditional probabilities indicating how probable our piece of evidence is depending on whether our theory is or is not true. We can represent these as P(E|T) and P(E|~T), where ~T is the negation of T, i.e. the proposition that T is false.
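Because every quantity in the formula is just a number between 0 and 1, the computation is easy to carry out directly. Here is a minimal Python sketch; the function name and the example numbers are ours, chosen purely for illustration:

def posterior(p_t, p_e_given_t, p_e_given_not_t):
    # P(T|E) from the prior P(T) and the likelihoods P(E|T) and P(E|~T)
    p_not_t = 1.0 - p_t
    p_e = p_e_given_t * p_t + p_e_given_not_t * p_not_t  # total probability of E
    return p_e_given_t * p_t / p_e

print(posterior(0.3, 0.8, 0.1))  # prints about 0.774

With a prior of 0.3 and evidence that is much more likely if the theory is true (0.8) than if it is false (0.1), the posterior rises to about 0.77.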
Bayes' Theorem: Let E1, E2, ..., En be mutually exclusive and exhaustive events associated with a random experiment, and let E be any event that occurs with some Ei. Then

P(Ei|E) = P(Ei) × P(E|Ei) / [ P(E1) × P(E|E1) + P(E2) × P(E|E2) + ... + P(En) × P(E|En) ]   for i = 1, 2, ..., n.
Let S be the sample space of the given random experiment. | <urn:uuid:b608a6f9-e3b7-40de-8544-ac7a8561d033> | 4.0625 | 791 | Truncated | Science & Tech. | 54.54697 |
A film presented by John Massaria reveals scientific research explaining why they are spraying the skies around the world. The UN Environment Programme: 200 Species Extinct Every Day, Unlike Anything Since Dinosaurs Disappeared 65 Million Years Ago. According to the UN Environment Programme, the Earth is in the midst of a mass extinction of life. Scientists estimate that 150-200 species of plant, insect, bird and mammal become extinct every 24 hours. This is nearly 1,000 times the "natural" or "background" rate and, say many biologists, is greater than anything the world has experienced since the vanishing of the dinosaurs nearly 65 million years ago.
The audio is part of a teleconference call between Russ Tanner (Global Skywatch) and Dane Wigington (WIWATS/Activist). | <urn:uuid:921cd857-954c-4d32-b7c7-f29ea5e0123a> | 3.171875 | 167 | Truncated | Science & Tech. | 43.948656 |
You are looking at historical revision 8806 of this page. It may differ significantly from its current revision.
The following page is an introduction to Chicken intended for PHP programmers.
This is a work in progress; I will try to work on it as frequently as possible (mostly on Sundays). In the meantime, feel free to help me by expanding it, correcting it (English is not my native language) and/or commenting on what I have done so far.
Chicken is an implementation of the Scheme programming language which, in turn, is a member of the Lisp family of languages. More information about this can be found on the http://schemers.org/ website.
Where PHP uses the general syntax f(args, ...); , Scheme uses (f args ...)
substr("abcdef", 0, 2);
is in Scheme:
(substring "abcdef" 0 2)
You will note that:
- All the expression is enclosed in the parenthesis, not just the arguments
- Arguments are separated by spaces, not by commas
- The expression is closed by the final closing parenthesis, so there is no need for a ;
All scheme expressions use this format, including arithmetic.
The PHP expression:
3 + 5
is in Scheme:
(+ 3 5)
This is called prefix notation. It may take some time to get used to it, but it has several advantages, mostly by avoiding any ambiguity in operator precedence.
A more complex example can be:
3 + 5 - 12 * 2
is represented in Scheme as:
(- (+ 3 5) (* 12 2))
Like PHP, Scheme does not statically type variables. If you assign $var to a string, it is a string; if you re-assign it to 3, it becomes a number.
TRUE, in Scheme, is noted #t while FALSE is noted #f. Unlike PHP, all Scheme values are considered to be true (#t) except #f itself, including zero (0), the empty string and other empty structures.
All types of numbers are supported by Chicken, including real, complex and rational, natively (without resorting to external libraries).
Numbers can be expressed in binary, octal, decimal or hexadecimal notation by using the prefixes #b for binary, #o for octal, #d for decimal and #x for hexadecimal. Unprefixed numbers are decimal by default (so 21 and #d21 are the same).
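For instance, the following literals (a quick sketch of our own; any recent Chicken REPL should print the same results) show the prefixes in action:

(display #b1010) (newline)  ; binary      -> prints 10
(display #o17)   (newline)  ; octal       -> prints 15
(display #d21)   (newline)  ; decimal     -> prints 21
(display #xff)   (newline)  ; hexadecimal -> prints 255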
Scheme supports a vast array of operations on strings, including changing case, splitting, extracting substrings and regular-expression-based searching and replacing.
The base data structure of PHP is the array. In Scheme it is the list. | <urn:uuid:aa39b6a5-c37c-47cc-b3d3-4c7c7bee2819> | 2.953125 | 563 | Knowledge Article | Software Dev. | 56.532868 |
Article: Zircons Recast Earth's Earliest Era
For geologists, the first 500 million years of Earth's history is particularly mysterious. How did the planet evolve after it formed 4.5 billion years ago? Until recently, the impressions geologists had of this early period, the Hadean eon, evoked its name: hell on Earth. Scientists believed early Earth was exceptionally hot, with a simmering sea of magma that covered much of the planet. Meteorites frequently battered the surface. Their perception was that early Earth was not a place where continents could form or life could survive.
Ten years ago, there was little evidence to challenge this impression of the early Earth. Indeed, no direct evidence of that time exists. Rocks older than 4 billion years are not available to study, because they have long since eroded away, have been transformed by geologic processes, or are too deep underground to access.
But tiny survivors of Earth's early era do persist: zircon crystals. A common mineral made of the elements zirconium, silicon, and oxygen, zircon has formed throughout geologic history and is exceedingly tough. "It's very, very difficult to destroy zircon," says Martin Whitehouse, a leading zircon researcher. "It is the oldest preserved material that we've got."
Cutting-edge techniques are allowing geologists to extract information from ancient zircons about the conditions in which they grew. These time capsules have brought the hellish scenario of early Earth into question. Zircons are also enabling scientists to formulate intriguing new hypotheses about when major planetary features, such as continents and oceans, formed. "If we didn't have zircon," says Whitehouse, "we'd understand the Earth a whole lot less."
To the Ends of the Earth
The tiny, treeless island of Akilia, located off Greenland's west coast, is a prime spot to find ancient zircons. Akilia is made of some of the oldest surviving rocks on Earth, and the lack of vegetation makes them easy to detect and collect. A fist-sized sample of rock from the island may contain dozens of zircon crystals. Whitehouse extracts the zircons in his lab at the Swedish Museum of Natural History by pulverizing and sifting rock samples with a variety of devices, including one called a jaw-crusher.
During deep burial, rocks are exposed to high temperatures and pressures, which may cause their constituent minerals to break down. Zircon crystals, however, usually survive and may in fact grow larger. These minerals not only survive geological processes such as melting, weathering, and chemical attack, which can destroy rocks and other minerals, but they record each event as a ring of new growth. Consequently, zircons can form a series of growth rings over time, much like tree rings.
In recent years, geologists have figured out how to determine the age of each microscopic growth ring in a zircon. To date a zircon, scientists take advantage of the fact that zircon usually incorporates a small amount of uranium as it grows. This uranium slowly decays to lead. Because zircons incorporate very little lead as they grow and because scientists know uranium's decay rate, the ratio of uranium to lead in the zircon can be used to calculate age.
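The arithmetic behind such a date can be sketched in a few lines of code. The snippet below is our own illustration (not the laboratory procedure), assuming the commonly cited decay constant for uranium-238 and treating all of the measured lead-206 as radiogenic:

import math

LAMBDA_U238 = 1.55125e-10   # decay constant of uranium-238, per year (commonly cited value)

def u_pb_age_years(pb206_per_u238):
    # age from the measured 206Pb/238U atomic ratio, assuming all the lead
    # was produced by in-place decay and none was present at crystallization
    return math.log(1.0 + pb206_per_u238) / LAMBDA_U238

print(u_pb_age_years(1.0) / 1e9)   # a ratio of 1.0 corresponds to about 4.47 billion years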
Whitehouse and his colleagues use a high-resolution device called a secondary ion mass spectrometer, or ion probe, to analyze zircons. The probe emits a beam of ions (charged particles) that strikes the zircon, causing it to release "secondary" ions such as uranium ions and lead ions. The beam can be focused on a very small spot, enough for the analysis of individual growth rings only 10 to 15 microns wide (1 micron is one-thousandth of a millimeter). The device then measures the concentrations of uranium and lead ions from the sample to yield an age.
Decoding Zircon's Dates
The most complex single grains of zircon from Akilia preserve several phases of growth over a billion years of history. It's not uncommon for Whitehouse to analyze specimens with cores aged 3.85 billion years. "One way to interpret these grains is that the rock [that contained the zircon cores] formed 3.85 billion years ago," he says. The zircon cores likely crystallized as the rock solidified from cooling magma.
Darker growth rings around the ancient cores are 3.65 billion years old, and hint at another significant event. The host rock, says Whitehouse, may have melted, or it may have experienced metamorphism: a change in its mineral structure from high-temperature, high-pressure conditions.
What Earth processes can bury rocks at great depths, where high pressures and temperatures can cause such changes and form new rings on zircons? Tectonic processes, those that result from the movement of individual plates of Earth's solid outer shell, called the lithosphere. One example is when colliding tectonic plates build mountain ranges. "In effect, you can look at a complicated zircon like this, and see a record of large-scale events on the planet, . . . the sort of things that nowadays are building the Himalaya or the Alps," says Whitehouse. "These sorts of events were happening when these rings were growing. In 3.7 billion years' time, a geologist will be able to sit here and look at a Himalayan zircon and see this event happening, also."
Hot Times on Early Earth?
So far, the oldest zircons come from rocks in the Jack Hills of central Australia. Using the uranium-lead decay system, scientists have determined that these zircons are 4.375 billion years old, which means that they formed during the Hadean eon. E. Bruce Watson, a professor at the Rensselaer Polytechnic Institute (RPI) in Troy, New York, leads a team that is developing new techniques to extract even more information from zircons to illuminate conditions during the earliest era of Earth.
Watson supposed that another zircon impurity, titanium, could be used as a thermometer to gauge ancient Earth temperatures. When a crystal grows, the amount of impurities it incorporates often depends upon temperature, explains Watson. "But we have no advanced knowledge of what that dependence on temperature is."
To determine this dependence for zircon, the RPI team remakes the conditions of early Earth in their laboratory. They manufacture zircons in the presence of titanium dioxide at various temperatures and pressures. Watson then uses an ion probe to measure the amount of titanium that entered the zircons' crystal structures and studies the relationship of temperature and pressure to titanium concentration. Watson has found that pressure does not affect the titanium content-but temperature does.
"The higher the temperature of crystallization, the more titanium you get in the structure," says Watson. "We can then go to an unknown zircon, analyze the titanium concentration, and thereby determine the temperature at which it formed."
Watson's lab has used this "titanium thermometer" technique to determine the temperatures of the 4.375-billion-year-old Hadean zircons from Australia. Most of them, his team has found, crystallized at temperatures around 680 degrees Celsius (1,256 degrees Fahrenheit). For the early Earth, that is actually quite cool.
The results are painting a very different picture of Earth's first 500 million years. "The picture coming out from the zircons is that the early Earth really wasn't a bubbling, boiling magma ocean," says Martin Whitehouse. About 4.4 billion years ago, says RPI's Watson, Earth was cool enough that it "had continents that were above sea level, that erosion of those continents was occurring, and sediments were forming. That necessitates the presence of oceans, so that means liquid water on the surface of the Earth. It was cool enough so that oceans didn't boil-potentially cool enough that living organisms could get a foothold."
"In some sense the physical conditions at the surface of the early Earth, as seen through the eyes of these time capsules from that period, was not that different from today," Watson says. "That is what is revolutionary about this idea."
More About This Resource...
Our innovative Science Bulletins are an online and exhibition program that offers the public a window into the excitement of scientific discovery. This essay was published in April 2010 as part of the Earth Feature called Zircons: Time Capsules from the Early Earth.
- The essay begins by explaining why the first 500 million years of Earth's history is particularly mysterious.
- It then introduces zircon crystals and the techniques that are allowing geologists to extract information from them.
- It concludes by detailing the evidence that has been found, which paints a very different picture of Earth's first 500 million years.
Supplement a study of geology with a classroom activity drawn from this Science Bulletin essay.
- Explain that Earth's first 500 million years as a planet are known as the Hadean eon, which roughly means "hell on Earth." Ask students what clues they think that name might offer about conditions on the planet.
- Have students read the essay (either online or a printed copy).
- Have them write a one-page reaction to the article, explaining the difference between what was believed about the Hadean eon until quite recently and what the study of zircon crystals has shown geologists.
Subtopic: Minerals and Resources | <urn:uuid:82efb908-ab3b-426f-9725-6f27391602db> | 4 | 1,979 | Knowledge Article | Science & Tech. | 45.291162 |
This sharp cosmic portrait features glowing gas and obscuring dust clouds in IC 1795, a star forming region in the northern constellation Cassiopeia. Also cataloged as NGC 896, the nebula's remarkable details, shown in its dominant red color, were captured using a sensitive camera and long exposures that include image data from a narrowband filter. The narrow filter transmits only the red light of hydrogen atoms. Ionized by ultraviolet light from energetic young stars, a hydrogen atom emits the characteristic H-alpha light as its single electron is recaptured and transitions to lower energy states. Not far on the sky from the famous Double Star Cluster in Perseus, IC 1795 is itself located next to IC 1805, the Heart Nebula, as part of a complex of star forming regions that lie at the edge of a large molecular cloud. Located just over 6,000 light-years away, the larger star forming complex sprawls along the Perseus spiral arm of our Milky Way Galaxy. At that distance, this picture would span about 70 light-years across IC 1795.
Bob and Janice Fera | <urn:uuid:646cc079-5f12-44ca-b779-23f3ab25b772> | 3.046875 | 240 | Knowledge Article | Science & Tech. | 35.074045 |
It’s OK to FRET
We all experience fret now and then, that is, worried, distressed, vexed, or troubled feelings and emotions. Not good.
FRET, by contrast, is not so bad when it stands for fluorescence resonance energy transfer. This occurs when a dye molecule absorbs light of a particular color, transfers the energy to a different dye molecule, which in turn emits light of an altogether new color.
And now thanks to work from the Chemistry Department at the University of Connecticut, we may have a way to use FRET to create something potentially practical: DNA lightbulbs.
Ner et al. produced nanofibers of DNA containing the two types of fluorescent dyes and showed that ultraviolet (i.e., colorless) excitation could produce blue, orange, or white emission, depending on the conditions (Angewandte Chemie 48: 28 (9 June 2009), 5134–5138). It turns out that nanofibers are much more efficient (brighter) than simpler DNA films containing the same ingredients. Structure matters.
Why should you care? Suppose all the bazillions of semiconductor LEDs (light emitting diodes) we encounter in everyday existence could be replaced with a purely organic material. Quoting the authors, this possibility “is appealing from the perspective of both environmental disposal and utilization of a renewable resource.”
Green chemistry inches ever closer to reality, one experiment at a time. | <urn:uuid:95da5eea-e0de-4204-8e0d-67c9cb9f2c63> | 3.1875 | 302 | Personal Blog | Science & Tech. | 38.794676 |
How to Calculate Power Based on Force and Speed
In physics, you can calculate power based on force and speed. Because work equals force times distance, you can write the equation for power the following way, assuming that the force acts along the direction of travel:

P = W/t = Fs/t

where s is the distance traveled. However, the object's speed, v, is just s divided by t, so the equation breaks down to

P = Fv

That's an interesting result — power equals force times speed? Yep, that's what it says. However, because you often have to account for acceleration when you apply a force, you usually write the equation in terms of average power and average speed:

P(avg) = F × v(avg)
Here’s an example. Suppose your brother got himself a snappy new car. You think it’s kind of small, but he claims it has over 100 horsepower. Okay, you say, getting out your clipboard. Let’s put it to the test.
Your brother’s car has a mass of
On the big Physics Test Track on the edge of town, you measure its acceleration as 4.60 meters/second2 over 5.00 seconds when the car started from rest. How much horsepower is that?
You know that

P(avg) = F × v(avg)

so all you need to calculate is the average speed and the net applied force. Take the net force first. You know that F = ma, so you can plug in the values to get

F = ma = (1,100 kg)(4.60 m/s²) = 5,060 N
Okay, so the force applied to accelerate the car steadily is 5,060 newtons. Now all you need is the average speed. Say the starting speed was vi and the ending speed vf . You know that vi = 0 m/s, so what is vf? Well, you also know that because the acceleration was constant, the following equation is true:
vf = vi + at
As it happens, you know the acceleration and the time the car was accelerated over:
vf = 0 m/s + (4.60 m/s²)(5.00 s) = 23.0 m/s
Because the acceleration was constant, the average speed is

v(avg) = (vi + vf)/2

Because vi = 0 m/s, this breaks down to

v(avg) = vf/2

Plugging in the numbers gives you the average velocity:

v(avg) = (23.0 m/s)/2 = 11.5 m/s
Great — now you know the force applied and the average speed. You can use the equation

P(avg) = F × v(avg)

to find the average power. In particular

P(avg) = (5,060 N)(11.5 m/s) = 58,190 W

You still need to convert to horsepower. One horsepower = 745.7 watts, so

58,190 W ÷ 745.7 W/hp ≈ 78.0 hp
Therefore, the car developed an average of 78.0 horsepower, not 100 horsepower. Rats, says your brother. I demand a recount.
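Before agreeing to the recount, you can let a computer redo the bookkeeping. The short Python sketch below (our own check, using the 1,100-kilogram mass implied by the 5,060-newton force) reproduces the numbers above:

mass = 1100.0          # kg
accel = 4.60           # m/s^2
time = 5.00            # s
WATTS_PER_HP = 745.7

force = mass * accel               # F = ma = 5,060 N
v_final = accel * time             # vf = vi + at, with vi = 0, so 23.0 m/s
v_avg = v_final / 2.0              # average of 0 and vf: 11.5 m/s
power_watts = force * v_avg        # P(avg) = F * v(avg) = 58,190 W
print(power_watts / WATTS_PER_HP)  # prints roughly 78.0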
Okay, so you agree to calculate power another way. You know you can also calculate average power as work divided by time:

P(avg) = W/t
And the work done by the car is the difference in the beginning and ending kinetic energies:
W = KEf – KEi
The car started at rest, so KEi = 0 J. That leaves only the final kinetic energy to calculate:

KEf = (1/2) m vf²

Plugging in the numbers gives you:

KEf = (1/2)(1,100 kg)(23.0 m/s)² = 290,950 J

and the work done was

W = KEf – KEi = 290,950 J

Dividing the work by the time, you get the following:

P(avg) = W/t = (290,950 J)/(5.00 s) = 58,190 W

And, as before

58,190 W ÷ 745.7 W/hp ≈ 78.0 hp
Double rats, your brother says. | <urn:uuid:3ceb2efc-1c45-4df0-86f6-3e9a7b97262d> | 4.125 | 648 | Tutorial | Science & Tech. | 69.193389 |
All About Coral Reefs
Some fun facts about coral reefs!
Coral reefs are found in warm, shallow and clear waters of tropical oceans all around the world. They are extremely beautiful, diverse and productive. Today coral reefs are highly endangered mostly due to human activities.
- Coral reefs are massive limestone structures that occupy only 0.7% of the ocean floor but provide shelter for over 25 percent of all marine life.
- Although coral is often mistaken for a rock or a plant, it is a living organism composed of tiny, fragile animals called coral polyps.
- Coral reefs provide protection for harbors and beaches from heavy wave action caused by coastal storms.
- They are responsible for building the largest biological structure on earth -- the Great Barrier Reef. | <urn:uuid:d91a1558-2b90-49d7-ac6f-f5ddf36fe97f> | 3.609375 | 155 | Knowledge Article | Science & Tech. | 43.567857 |
Protons in a magnetic field have a microscopic magnetization and act like tiny toy tops that wobble as they spin. The rate of the wobbling or precession is the resonance or Larmor frequency. In the magnetic field of an MRI scanner at room temperature, there is approximately the same number of proton nuclei aligned with the main magnetic field Bo as counter-aligned. The aligned position is slightly favored, as the nucleus is at a lower energy in this position. For every one million nuclei, there is about one extra aligned with the Bo field as opposed to counter-aligned against it. This results in a net or macroscopic magnetization pointing in the direction of the main magnetic field. Exposure of individual nuclei to RF radiation (B1 field) at the Larmor frequency causes nuclei in the lower energy state to jump into the higher energy state.
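The Larmor frequency itself is simply proportional to the field strength, f = (gamma-bar) × B0. The small sketch below is our own illustration, assuming the commonly quoted proton value of roughly 42.58 MHz per tesla:

GAMMA_BAR_MHZ_PER_T = 42.58   # approximate proton gyromagnetic ratio divided by 2*pi

def larmor_mhz(b0_tesla):
    # proton precession (resonance) frequency, in MHz, at a field of b0_tesla
    return GAMMA_BAR_MHZ_PER_T * b0_tesla

for b0 in (0.5, 1.5, 3.0):    # representative scanner field strengths
    print(b0, "T ->", round(larmor_mhz(b0), 1), "MHz")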
On a macroscopic level, exposure of an object or person to RF radiation at the Larmor frequency causes the net magnetization to spiral away from the Bo field. In the rotating frame of reference, the net magnetization vector rotates away from the longitudinal position by an amount proportional to the time length of the RF pulse. After a certain length of time, the net magnetization vector rotates 90 degrees and lies in the transverse or x-y plane. It is in this position that the net magnetization can be detected on MRI. The angle that the net magnetization vector rotates is commonly called the 'flip' or 'tip' angle. At angles greater than or less than 90 degrees there will still be a smaller component of the magnetization in the x-y plane, and it can therefore be detected. | <urn:uuid:bff22a07-0ec8-49df-9792-0731b2cbfbbd> | 3.71875 | 341 | Knowledge Article | Science & Tech. | 40.382929 |
David Shiga, reporter
"Has NASA discovered extraterrestrial life?" asked blogger Jason Kottke on Tuesday, in a post that spread like wildfire around the web.
The speculation was prompted by a cryptic notice on NASA's website about "an astrobiology finding that will impact the search for evidence of extraterrestrial life".
Physicist Michio Kaku mused about several possibilities, including that "NASA might announce they found some form of evidence of microbial life in the universe".
Washington Post bloggers had more fun with it:
Have we made contact with little green martians? E.T.? Tribbles? Body snatchers? Or, is this just NASA's splashy way of announcing the discovery of microscopic bacteria -- interstellar snot, if you will?
The truth, while intriguing, is less Earth-shattering. A paper in Science, discussed in a press conference today, announced the discovery of terrestrial microbes that can incorporate normally toxic arsenic into their DNA.
A similar series of events transpired a couple of years ago, when speculation abounded - including on NewScientist.com - about something that NASA's Phoenix Mars lander had found. The truth turned out to be a little deflating in that case too. Phoenix had not found signs of life, but did detect chemicals called perchlorates in the soil that could serve as food for potential microbes.
If nothing else, these episodes show how big the public appetite is for news about the search for alien life. If we do discover ET someday, maybe it won't be as shocking to society as movies like Contact would suggest. We seem quite ready to believe it. | <urn:uuid:da4f3224-4859-4877-ac77-0469d5ee6d44> | 3.015625 | 335 | Personal Blog | Science & Tech. | 44.485802 |
Mechanics: Vectors and Projectiles
Vectors and Projectiles: Audio Guided Solution
During the Vector Addition lab, Mac and Tosh start at the classroom door and walk 40.0 m, north, 32.5 m east, 15.5 m south, 68.5 m west, and 2.5 m, north. Determine the magnitude and direction of the resultant displacement of Mac and Tosh.
Audio Guided Solution
Answer:
45.0 m, 36.9° N of W (or 143.1° CCW)
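One way to check this result (a sketch of our own, not part of the original problem set) is to sum the east-west and north-south components separately and convert back to a magnitude and direction:

import math

legs = [(0.0, 40.0),    # 40.0 m north
        (32.5, 0.0),    # 32.5 m east
        (0.0, -15.5),   # 15.5 m south
        (-68.5, 0.0),   # 68.5 m west
        (0.0, 2.5)]     # 2.5 m north

east = sum(x for x, _ in legs)    # -36.0 m, i.e. 36.0 m west
north = sum(y for _, y in legs)   # 27.0 m north

magnitude = math.hypot(east, north)                     # 45.0 m
angle_n_of_w = math.degrees(math.atan2(north, -east))   # 36.9 degrees north of west
print(magnitude, angle_n_of_w)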
Habits of an Effective Problem Solver
- Read the problem carefully and develop a mental picture of the physical situation. If necessary, sketch a simple diagram of the physical situation to help you visualize it.
- Identify the known and unknown quantities in an organized manner. Equate given values to the symbols used to represent the corresponding quantity - e.g., vox = 12.4 m/s, voy = 0.0 m/s, dx = 32.7 m, dy = ???.
- Use physics formulas and conceptual reasoning to plot a strategy for solving for the unknown quantity.
- Identify the appropriate formula(s) to use.
- Perform substitutions and algebraic manipulations in order to solve for the unknown quantity.
Read About It!
Get more information on the topic of Vectors and Projectiles at The Physics Classroom Tutorial.
Return to Problem Set
Return to Overview | <urn:uuid:d7bb6246-44b8-42e2-80cf-8e451981b4ea> | 3.53125 | 315 | Tutorial | Science & Tech. | 71.157529 |
From the edifice of the American judicial system through to the liberal coven of the open source community, it's a widely accepted fact that Microsoft's business is to write software that's tied to its Windows operating system. Although Windows is often disparaged by unbelievers as being technically inferior, difficult to use, and expensive, by consistently coupling its software products to the operating system, Microsoft grew into one of the most successful business empires the world has ever seen.
Although Microsoft bet the company on .NET, they also had the foresight to present .NET's core technology as open standards through the international standards organizations ECMA and ISO. Apart from making .NET development easier on Windows, the standardization of the Common Language Infrastructure (the fundamental .NET technology) and C# (the primal .NET language) means that .NET is not just suitable for Windows development, but is also a practical solution for developing software that runs on a variety of operating systems, including GNU/Linux and Mac OS X.
Cross-Platform .NET Development
Since .NET is primarily seen as a Windows development tool, and most .NET developers are from a Windows background, it seemed only right that a book be written to investigate how .NET can be used as a cross-platform development tool. With this in mind, M.J. Easton and Jason King wrote Cross-Platform .NET Development (ISBN: 1-59059-330-8), which was published in September 2004 by Apress.
If you're interested in Windows development, .NET development or cross-platform development then the book serves as an exposition into the technicalities of Microsoft's .NET Framework and the open source implementations of .NET that position it as the ideal cross-platform tool for the new millennium. | <urn:uuid:f9b7e379-997a-4a82-97ee-8d21cb9d66b8> | 2.828125 | 357 | Knowledge Article | Software Dev. | 44.676144 |
View Full Version : Discussion: Melt Through the Ice to Find Life
2005-Jul-19, 06:23 PM
SUMMARY: Scientists can tell us what our climate on Earth was like in past by examining ice cores taken from glaciers. Tiny bubbles of air are trapped in the ice and maintain a historical record of ancient atmospheres. The effects of life make their mark in these ice samples as well. What if you examined the icecaps on Mars, or the layers of ice on Europa? NASA is considering a proposal for a small spacecraft that would land on Mars or Europa and melt its way throught the ice, collecting data as it descended, searching for clues about the presence of life.
View full article (http://www.universetoday.com/am/publish/dig_through_ice_life.html)
What do you think about this story? Post your comments below.
2005-Jul-19, 07:41 PM
They stole my idea! I tell you true, I thought about the concept some years ago when there was some hype about Europa. Granted, these guys have the details figured out much better than I do (directional manoeuvering, radiation shielding, etc.), but I still think I deserve royalties.
Suffice to say, I think this is a pretty neat idea.
2005-Jul-20, 11:02 AM
I think a mission to Europa should be at the top of NASA's list. I'd love to see what's lurking down there in that ocean. Probably not fish as this article suggests... but perhaps bacteria, or even small single- and multi-celled organisms, algae or something we simply haven't seen yet.
2005-Jul-20, 12:18 PM
Yes! This is what I think is exciting. These missions just take sooo looong from beginning to landing. Of course, I'm still waiting to see if there is anything in Lake Vostok right here on Earth.
2005-Jul-20, 01:55 PM
Just remember 2001 by Mr Clarke, if he gets the next bit right we will have a small and distant sun soon lol :lol: :D :P
2005-Jul-20, 02:11 PM
Originally posted by mark mclellan@Jul 20 2005, 01:55 PM
Just remember 2001 by Mr Clarke
I don't recall which book the Europa thing was in. I think it was in "2010", but it might have been in "2061".
There could be life on Europa. That might be one explanation for the cracks in the ice being brown. Or, they could simply be colored by sulfur compounds dredged up into the ice covered ocean by volcanic action. It will be interesting to find out.
2005-Jul-20, 06:24 PM
Or, they could simply be colored by sulfur compounds dredged up into the ice covered ocean by volcanic action.
Personally, I think this is the more likely explanation. To me this would be a VERY exciting finding, given what we have discovered around volcanic vents in the deep oceans of Earth.
Either way, I too would love to see a mission get a "go".
| <urn:uuid:d44eee14-eaf6-4cd4-850f-14e3b3282f9d> | 3.046875 | 691 | Comment Section | Science & Tech. | 75.862022 |
Until now, it was a rarely pondered question: Between the virtual bookends of someone searching for revealing pictures of Lindsay Lohan online and a search engine producing said pictures, how much energy is consumed?
Thanks to an Internet mini-controversy this week, inquisitive minds now have a pretty good approximation: 0.0003 kilowatt hours.
A recent story in The Sunday Times of London focused on the research of Harvard University physicist Alex Wissner-Gross, who studies the energy use associated with Internet search engines. Apparently taking some liberties with the scientist’s work, the story claimed that two Google searches produce roughly the same amount of carbon dioxide as boiling a kettle for a cup of tea.
Dr. Wissner-Gross quickly shot back this week, telling a technology website that his work has nothing to do with the ubiquitous search engine and that his findings instead showed that visiting a website generates an average of 20 milligrams of carbon dioxide a second – no mention of Google; no mention of kettles. By then, however, the blog mill had already caught on to the story, and the Web was abuzz with musings about the link between dead trees and search results.
However, the most interesting response to the story came from Google itself, which went about analyzing exactly how much energy a single search uses.
Urs Hölzle, Google’s senior vice-president of operations, countered on the company’s official blog that the average search time is about 0.2 seconds, meaning the servers that do the heavy lifting work on a query for only thousandths of a second. Mr. Hölzle said that in the time it takes to run a Google search, the user’s personal computer consumes more energy than the company does to answer the query.
In addition to the work performed before the search request, Mr. Hölzle produced an estimate of 0.0003 kilowatt hours of energy for each search, equivalent to about one kilojoule.
“For comparison, the average adult needs about 8,000 kJ a day of energy from food, so a Google search uses just about the same amount of energy that your body burns in 10 seconds,” Mr. Hölzle wrote.
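Those comparisons are easy to sanity-check. A back-of-the-envelope calculation in Python (our own arithmetic, not Google's):

kwh_per_search = 0.0003
joules_per_search = kwh_per_search * 3.6e6    # 1 kWh = 3.6 million joules
print(joules_per_search)                      # 1080 J, i.e. roughly 1 kJ

daily_food_kj = 8000.0
kj_per_second = daily_food_kj / (24 * 3600)   # about 0.093 kJ burned per second
print(joules_per_search / 1000 / kj_per_second)   # about 11.7 s, in line with "10 seconds"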
While the numbers make for interesting hypothetical arithmetic, it is the speed with which Google produced them that is perhaps more telling. Google’s energy consumption aside, the company’s co-founders, Larry Page and Sergey Brin, are known for their support of myriad environmental initiatives, and Google wasted little time responding to The Sunday Times’s article.
The company also provided an estimate of how much carbon dioxide a single search is equivalent to: 200 milligrams. Using tailpipe emission standards, Mr. Hölzle estimated that an average car driven one kilometre generates as many greenhouse gasses as 1,000 Google searches.
Mr. Hölzle did not mention that Google’s websites receive hundreds of millions of search requests a day. However, even those numbers don’t bring the company anywhere near the energy consumption of firms in other industries, such as automotive or manufacturing. And Mr. Hölzle does point out that, before the Internet age, recovering information would have involved travelling to the local library and looking it up. | <urn:uuid:541a0d44-34d5-44f5-a5eb-ffd1478604ea> | 2.703125 | 702 | Personal Blog | Science & Tech. | 43.942669 |
ANTLR (ANother Tool for Language Recognition) is a language tool that provides a framework for constructing recognizers, compilers, and translators from grammatical descriptions containing C++, Java, or Sather actions. It is similar to the popular compiler generator YACC, however ANTLR is much more powerful and easy to use. ANTLR-produced parsers are not only highly efficient, but are both human-readable and human-debuggable (especially with the interactive ParseView debugging tool). ANTLR can generate parsers, lexers, and tree-parsers in either C++, Java, or Sather. ANTLR is currently written in Java.
ICU provides a Unicode implementation, with functions for formatting numbers, dates, times, and currencies (according to locale conventions, transliteration, and parsing text in those formats). It provides flexible patterns for formatting messages, where the pattern determines the order of the variable parts of the messages, and the format for each of those variables. These patterns can be stored in resource files for translation to different languages. Included are more than 100 codepage converters for interaction with non-unicode systems.
Emdros is a corpus query system for storing and searching linguistically annotated text. It is very generic, supporting almost any kind of annotation from almost any linguistic theory. All linguistic levels of analysis are supported, including phonology, morphology, the lexical level, syntax, and discourse. The core libraries act as a middleware layer between a client and an underlying SQL database. MySQL, PostgreSQL, and SQLite are supported.
SILGraphite (formerly OpenGraphite) is a project within SIL's Non-Roman Script Initiative and Language Software Development groups to provide extensible cross-platform rendering capabilities for complex non-Roman writing systems. It consists of a rule-based programming language, Graphite Description Language (GDL), that can be used to describe the behavior of a writing system, a compiler for that language, and a rendering engine that can serve as the backend of a text processing application. SILGraphite renders TrueType fonts that have been extended by means of compiling a GDL program. It is currently being integrated into Gecko/Mozilla through the SILA project, a GNU/Linux port is also underway, and there are plans for OpenOffice.org and Abiword integration. | <urn:uuid:5b30fd85-9b4a-4eb3-885f-04a1051409d7> | 2.859375 | 491 | Content Listing | Software Dev. | 21.530245 |
You've probably never given much thought to the ice cube swirling amid the contents of your Cuba Libre. When watching Titanic, you probably don't ponder how the iceberg that did the ship in is actually a great example of why we have life on earth. We all know that water is kind of important. While not the most exciting of drinks, just try going more than a few hours without taking a sip. But there's something extra-special about water that makes it one of the strangest substances on earth.
To understand water's wackiness, you first need to understand how things are "supposed" to work. Think back, way back, to science class and you might remember talking about the three states of matter: Solids, liquids and gases. The molecules of a gas are very bouncy and far apart. As the gas cools, the molecules get closer together, condensing into a liquid. Cool that liquid even more and the molecules squeeze even more tightly together into a solid.
But water is weird. Unlike almost every other substance on earth, as it freezes it actually expands, making it lighter than its liquid counterpart. To get a really clear view of this, put on your science pants and try the following experiment: Add a few drops of food colouring to the bottom of a large glass. Next, fill the glass a little less than halfway with vegetable oil. Now slowly pour in some baby oil, leaving a few centimetres of space at the top of the glass. You can't really see it, but the baby oil will sit on top of the vegetable oil (for an explanation of why this happens, check out this column).
Drop an ice cube into the oil mix and it will settle in the middle. Now you have to wait. But your patience will be rewarded. Eventually, you will see a drop of water start to peel off the ice as it melts. Eventually it will be joined by more drops that will settle on the bottom of the glass, mixing with the food colouring and demonstrating that liquid water is more dense than ice.
This property is great news for us. If water behaved like other liquids, oceans and lakes would freeze solid, turning the earth into a giant ball of ice with no chance for life as we know it. There is, however, a downside. Living things tend to have cells containing lots of water. If cells freeze, the icy expansion means they get damaged, causing a potentially fatal case of frostbite.
But some critters that live in places like the Arctic have found a way around this by making stuff that acts like antifreeze in their blood. One day researchers may find a way to adapt this for use in humans. Who knows? This might be the ticket to sending sleeping astronauts to far-flung galactic destinations. And it gives you plenty to think about while you sip on that highball.
You Will Need:
- Large Glass or Beaker
- Baby Oil
- Vegetable Oil
- Ice Cube
Follow Maila on Twitter @mailarible for more interesting scientific facts. | <urn:uuid:1cbb678b-bce0-474a-bc78-548d97bf2535> | 3.296875 | 623 | Personal Blog | Science & Tech. | 61.538421 |
Manual Section... (3) - page: fflush
NAME
fflush - flush a stream
DESCRIPTION
For output streams, fflush() forces a write of all user-space buffered data for the given output or update stream via the stream's underlying write function. For input streams, fflush() discards any buffered data that has been fetched from the underlying file, but has not been consumed by the application. The open status of the stream is unaffected.
If the stream argument is NULL, fflush() flushes all open output streams.
For a nonlocking counterpart, see unlocked_stdio(3).
RETURN VALUE
Upon successful completion 0 is returned. Otherwise, EOF is returned and errno is set to indicate the error.
ERRORS
EBADF - Stream is not an open stream, or is not open for writing.
The function fflush() may also fail and set errno for any of the errors specified for write(2).
CONFORMING TO
C89, C99, POSIX.1-2001, POSIX.1-2008.
NOTES
Note that fflush() only flushes the user space buffers provided by the C library. To ensure that the data is physically stored on disk the kernel buffers must be flushed too, for example, with sync(2) or fsync(2).
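A short usage sketch (ours, not part of the original page) showing the distinction drawn above between flushing the C library's buffer and flushing the kernel's buffers:

#include <stdio.h>
#include <unistd.h>   /* fsync, fileno */

int main(void)
{
    FILE *fp = fopen("example.log", "w");
    if (fp == NULL)
        return 1;

    fprintf(fp, "hello\n");

    /* push the user-space (stdio) buffer down to the kernel */
    if (fflush(fp) == EOF)
        perror("fflush");

    /* ask the kernel to push its buffers out to the disk as well */
    if (fsync(fileno(fp)) == -1)
        perror("fsync");

    fclose(fp);
    return 0;
}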
SEE ALSO
fsync(2), sync(2), write(2), fclose(3), fopen(3), setbuf(3), unlocked_stdio(3)
COLOPHON
This page is part of release 3.24 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at http://www.kernel.org/doc/man-pages/.
| <urn:uuid:09645ffa-fd23-45a3-a748-97e2a8fc78df> | 3.1875 | 395 | Documentation | Software Dev. | 71.049098 |
"Virtualization" is the act of making a set of processes believe that it
has a dedicated system to itself. There are a number of approaches being
taken to the virtualization problem, with Xen, VMWare, and User-mode Linux
being some of the better-known options. Those are relatively heavy-weight
solutions, however, with a separate kernel being run for each virtual
machine. Often, that is exactly the right solution to the problem; running
independent kernels gives strong separation between environments and
enables the running of multiple operating systems on the same hardware.
Full virtualization and paravirtualization are not the only approaches
being taken, however. An alternative is lightweight virtualization,
generally based on some sort of container concept. With containers, a
group of processes still appears to have its own dedicated system, but it
is really running in a specially isolated environment. All containers run
on top of the same kernel. With containers, the ability to run different
operating systems is lost, as is the strong separation between virtual
systems. Thus, one might not want to give root access to processes running
within a container environment. On the other hand, containers can have
considerable performance advantages, enabling large numbers of them to run
on the same physical host.
There is no shortage of container-oriented projects. These include
relatively simple efforts like the BSD jail module through more
thorough efforts like Linux-VServer, OpenVZ, and the proprietary Virtuozzo (based on OpenVZ) offering. Many of these
projects would like to get at least some of their code into the kernel and
shed the load of carrying out-of-tree patches. There is little
interest, however, in merging code which only supports some of these
projects. The container people are going to have to get together and work
out some common solutions which they can all use.
It appears that this is exactly what the container developers are doing. A
loose agreement has been put in place
wherein developers from a few projects will discuss proposed changes and
jointly work them into a form where they meet everybody's needs. Once a
particular patch has reached a point where all of the developers are
willing to sign off on it, it can be forwarded for eventual merging into the mainline kernel.
The more complex and intrusive changes, such as PID virtualization, appear to be
on hold for now. Instead, it looks like the first jointly-agreed patch
might be the UTS namespace
virtualization patch. The aim of the patch is relatively straightforward:
it allows each container (as represented by a family tree of processes) to
have its own version of the utsname structure, which holds the
node name, domain name, operating system version, and a few other things.
In essence, it replaces a single global structure with multiple structures
attached at various places in the process tree. It still requires a
five-part patch, with every reference to the global system_utsname
structure replaced by a call to the new utsname() function.
Longer-range plans call for the virtualization of every global namespace in
the kernel, including SYSV IPC, process IDs, and even netfilter rules.
There was an interesting discussion on the virtualization of security
modules; some think that each container should be able to load its own
security policy, while others argue in favor of a single system security
policy which is aware of (and able to use) containers. Unsurprisingly,
SELinux is already equipped with a type hierarchy mechanism which can be
used with containers in the single-policy approach.
Containers might still prove to be a hard sell with some developers, who
will see them as complicating access to many internal kernel data structures
without adding a whole lot of value. It is clear, however, that there is a
demand for this sort of lightweight virtualization - OpenVZ, alone, claims to be running over 300,000 virtual
environments. So the pressure to standardize this code and move it into
the mainline will only grow over time. Once they are clean enough to
satisfy the development community, pieces of the container concept are
likely to be merged.
| <urn:uuid:ef63fac0-0e2e-4edb-8a2b-617378571981> | 2.71875 | 904 | Comment Section | Software Dev. | 31.124784 |
Science Fair Project Encyclopedia
- For Acoustic uses in spectrographs of sound waves, see below.
A spectrometer is an optical instrument for measuring properties of light over some portion of the electromagnetic spectrum. The measured variable is often the light intensity but could also be the polarization state, for instance. The independent variable is often the wavelength of the light, usually expressed as some fraction of a meter, but it is sometimes expressed as some unit directly proportional to the photon energy, such as wavenumber or electron volts, which has a reciprocal relationship to wavelength. A spectrometer is used in spectroscopy for producing spectral lines and measuring their wavelengths and intensities. Spectrometer is a term that is applied to instruments that operate over a very wide range of wavelengths, from gamma rays and x rays into the far infrared. In general any particular instrument will operate over a small portion of this total range because of the different techniques used to measure different portions of the spectrum. The radio electronics spectrum analyzer is a closely related device.
Spectrometers known as 'spectroscopes' are used in spectroscopic analysis to identify materials. Spectroscopes are used often in astronomy and some branches of chemistry. Early spectroscopes were simply a prism with graduations marking wavelengths. Modern spectroscopes generally use a diffraction grating, a movable slit and some kind of photodetector, all automated and controlled by a computer. The spectroscope was invented by Gustav Robert Georg Kirchhoff and Robert Wilhelm Bunsen.
When a material is heated to incandescence it emits light that is characteristic of the atomic makeup of the material. In the original spectroscope design in the early 19th century, light entered a slit and a collimating lens transformed the light into a thin beam of parallel rays. The light was then passed through a prism that refracted the beam into a spectrum because different wavelengths were refracted different amounts because of dispersion. This image is then viewed through a tube with a scale that was transposed upon the spectral image, enabling its direct measurement. Particular light frequencies give rise to sharply defined bands on the scale which can be thought of as fingerprints. The element sodium has a very characteristic double yellow band known as the Sodium D-lines at 588.9950 and 589.5924 nanometers.
With the development of photographic film, the more accurate spectrograph was created. It was based on the same principle as the spectroscope, but it had a camera in place of the viewing tube. In recent years the electronic circuits built around the photomultiplier tube have replaced the camera, allowing real-time spectrographic analysis with far greater accuracy. Arrays of photosensors are also used in place of film in spectrographic systems. Such spectral analysis, or spectroscopy, has become an important scientific tool for analyzing the composition of unknown material and for studying astronomical phenomena and testing astronomical theories.
A spectrograph is an instrument that transforms an incoming time-domain waveform into a frequency-domain spectrum, or generally a sequence of such spectra. There are several kinds of machines referred to as spectrographs, depending on the precise nature of the waves.
In optics, a spectrograph separates incoming light according to its wavelength and records the resulting spectrum in some detector. It is a type of spectrometer and superseded the spectroscope for scientific applications.
The first spectrographs used photographic paper as the detector. The star spectral classification and discovery of the main sequence, Hubble's law and the Hubble sequence were all made with spectrographs that used photographic paper.
In acoustics, a spectrograph converts a sound wave into a sound spectrogram. The first acoustic spectrograph was developed during World War II at Bell Telephone Laboratories, and was widely used in speech science, acoustic phonetics and audiology research, before eventually being superseded by digital signal processing techniques.
The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details | <urn:uuid:63e1ad7c-32ab-4b97-8201-9cfea6754ca8> | 3.734375 | 843 | Knowledge Article | Science & Tech. | 24.237714 |
This tutorial explains in full detail the nature and usage of links in HTML. It will not only show the basic syntax of links, but will also go through various characteristics that are usually not used and can provide a lot of hidden information for different interpreters (e.g., search engines).
Links are a vital part of HTML and the whole concept of the World Wide Web. The Internet is considered a web because of how links connect separate pieces or documents, based on the idea of a simple reference improved with interactivity. As a result, a simple click takes you from the referring document to the one mentioned in the link.
A link in HTML documents can basically be considered a reference to another resource. This reference establishes an implicit relationship between the referring document and the linked resource. Links can be classified as visual (placed in between the document's content) or hidden (defining general relational information).
The HTML a element can be used to insert links inside the content of a document. This may help authors recommend other resources related to the current topic, which users can usually access by simply clicking on the linked content. The linked content is just the content of the HTML a element (i.e., the piece of HTML code that's placed between the start and end tags). Note that links are usually rendered in a distinctive way by browsers to help users recognize them.
There are many ways to build a link, but a basic link must contain at least some content and the address of the resource (defined with the "href" attribute). In the example below we'll define a basic link:
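(The original example is not reproduced in this copy, so the snippet below is a representative reconstruction; the URL and the anchor text are illustrative only.)

<a href="http://www.example.com/tutorials/">Read our tutorials</a>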
Note: to learn more about how to locate and organize resources please refer to our "Organizing a website" tutorial.
In the example above we made a simple link using text as content, but you can actually link almost anything using the HTML a element. In the next example we'll make a link with a piece of document that includes an image and text.
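(Again, the snippet below is a representative reconstruction of the missing example; the file names are illustrative.)

<a href="http://www.example.com/gallery/">
  <img src="thumbnail.jpg" alt="A thumbnail from the gallery" />
  Visit the image gallery
</a>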
Note the closing " />" used in empty tags to achieve XHTML compatibility.
When using text as the content of a link (from now on, we'll refer to this text as "anchor text") there is one thing to take into account: the text must briefly describe (as the HTML title element does) the content of the linked resource. This is a thing left to each author's consideration, but using descriptive anchors may provide information to users and search engines (very important when promoting a site).
Another thing to consider is that when linking to a folder (not a file, e.g., http://www.htmlquick.com) you should always add a trailing slash to the URL (e.g., http://www.htmlquick.com/). Even though most browsers and servers automatically solve this problem, it is a good practice that will add to the properness of your coding.
The target attribute allows authors to decide where the resource linked through the HTML a element must be loaded (e.g., it can be loaded in a new browser window, in a specific frame if there is a frameset page, or simply in the same window). This may be useful sometimes, but it also makes your code incompatible with Strict XHTML. If you really want to use it you'll need to opt for a Transitional DTD (in the HTML !DOCTYPE tag).
In this simple example, we show the code for two links: the first is opened in a new window and the second in a frame named "main".
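(A representative reconstruction of the missing example; the addresses are illustrative, and the frame name "main" matches the one mentioned above.)

<a href="http://www.example.com/" target="_blank">Open in a new window</a>
<a href="intro.html" target="main">Open in the frame named "main"</a>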
| <urn:uuid:99b14616-632a-4d06-ace2-a697de637b03> | 4.25 | 742 | Tutorial | Software Dev. | 57.15658 |
5 Everyday Items Invented By NASA
Recently, NASA landed Curiosity on the surface of Mars. Curiosity will explore the red planet and send back pictures and data that NASA will analyze for years, but the mission wasn't cheap. The $2.5 billion price tag has left some Americans in sticker shock given the ballooning budget deficit, but NASA argues that this mission gives us a better understanding of Mars. Engineering missions of this complexity have a long history of producing technology that finds its way into our everyday lives. Here are just a few of those.
You've likely never heard of translucent polycrystalline alumina, but if you have invisible braces, you have a whole mouth full of it. NASA, in conjunction with another company, developed the material to protect infrared antennae and now dental companies use it for invisible braces.
Using a process called direct ion deposition, manufacturers apply a thin layer of diamond-like carbon (DLC) that makes glasses 10 times more scratch resistant than conventional lenses. In addition, this coating is more water resistant, allowing water to run off lenses without spotting.
NASA originally developed memory foam to lessen impact during landings, but the material is now used commercially in a variety of ways. Most know memory foam as the material used to make mattresses and pillows more comfortable. It's also used to lessen the friction between prosthetic limbs and as protection for racecar drivers.
Riblets are small grooves barely visible to the naked eye. No deeper than a scratch, they have a surprisingly large effect on the aerodynamics of aircraft wings. Now, riblets are used in pipes to reduce friction and on yachts used for racing. For a brief time, NASA partnered with Speedo to develop a competitive swimsuit using riblets, but it was banned from competition after the 2008 Beijing Olympic Games.
NASA knows how to take a temperature. It had been doing it for years by using infrared technology to take the temperature of stars. Diatek, a company that wanted to reduce the amount of time nurses were using to take patients' temperatures, used NASA infrared technology to develop the ear thermometer. | <urn:uuid:be03be05-b0ed-4a39-ac63-3da79f968292> | 2.96875 | 552 | Listicle | Science & Tech. | 36.509404 |
Climate influences the environment, natural resources, the economy, and other aspects of life in all parts of the world. Natural and human contributions to changes in climate may have substantial environmental, economic, and societal consequences. Decision-makers, resource managers, and other interested individuals need reliable science-based information to make informed decisions regarding policy and actions. In order to understand climate change and its impacts, we need to understand some of the range and complexity of the climate system itself.
Climate Change Information Highlights
1. What is the Difference between Weather and Climate?
2. What is Climate Change?
3. What is the Difference between Climate Change and Climate Variability?
4. The Greenhouse Effect and Climate Change
5. Climate Change Impacts on Agricultural Production
Conservation and Climate Change
Comet 2.0 is the latest NRCS greenhouse gas estimation tool available for farmers, landowners and agricultural producers. The on-line tool estimates greenhouse gas (GHG) emission changes in soil carbon sequestration, fuel-use and fertilizer use. Read more.
We have developed a qualitative ranking of NRCS Conservation Practice Standards which can be effectively applied to the Greenhouse Gas and Carbon Sequestration Resource Concern. Read more.
Images: Photos courtesy of USDA-NRCS | <urn:uuid:e5dc972e-ecf9-484b-ab87-ee3c7fff382b> | 3.96875 | 261 | Knowledge Article | Science & Tech. | 23.862857 |
Three specialists tell Zuhaila Sedek how geology can be employed to solve environmental issues
FOR centuries, geology has enlightened us about the Earth we live on. But this study of the Earth can also be a powerful “detective” tool, offering clues on how we can help solve environmental problems.
We talk to two geologists, Seet Chin Peng and Dr Saim Suratman, and a specialist in engineering hydrogeology, Dr Azuhan Mohamed, about the role of geology in environment conservation. They will be speaking at the Conference on Groundwater, which is part of the outreach programme Geology Made Simple, organised by Institute of Geology Malaysia (IGM), on Tuesday and Wednesday at One World Hotel in Kuala Lumpur.
“Geology allows us to strike a balance between getting what we want (from nature) and learning how to care for the environment,” says Seet, who is also the IGM coordinator.
The science of geology can be of help to, say, the property sector when it comes to building skyscrapers, says Saim. “By consulting a geologist (at the planning stage), one can determine whether the foundation is safe (for construction work) by looking at the type of rock, for example.” Failing which, he adds, the environment can be jeopardised and tragedies such as a landslide could happen.
Geology can also be used to help people find wealth, for example by determining the likelihood that a location contains gold or oil, as well as to fine-tune ways of extracting such natural resources.
“People often oppose mining work. But if there’s no mining, we won’t have things like gold, iron or copper. We need these minerals for our lives. Understanding geology can help one make an informed decision when it comes to mining them,” explains Seet.
Saim, who is also director of the Research Centre For Geohydrology at National Hydraulic Research Institute Of Malaysia (Nahrim), points out that geology can help in dealing with pollution by teaching one how to control the usage of natural resources.
“While geologists can help find natural resources, we have to keep in mind that mining work involves burning fossil fuels and too much of that can lead to global warming.”
Nahrim forecasts that the temperature in Malaysia will go up tremendously by 2100 in tandem with rising sea levels. If this happens, our country will have more water than land!
SOURCE FROM UNDER
Azuhan says Malaysia is blessed with having rain all year round and so it has plenty of underground water.
“Underground water is a source of water supply, which, if fully used, can help reduce the construction of dams,” explains the head of water resources at Erinco Sdn Bhd, a firm of consulting engineers and environmental consultants.
The construction of dams is viewed by environment lovers as an activity that brings harm to nature as it alters the flow of the rivers, which, in turn, can affect the lives of river creatures.
“People can use underground water by building wells. But to build a proper well, its geographical aspect has to be evaluated to determine whether the area is suitable for such a construction. Otherwise, the water in the well may get polluted by arsenic or the ground could gradually sink,” explains Azuhan, who is currently promoting the use of underground water in the country.
He adds: “The country, of course, needs to carry out a study on underground water before it can be practised on a mega scale.”
REAL-LIFE INDIANA JONES
Despite the important roles of geology, experts in this area are often overlooked. “They are the unsung heroes who work behind the scenes,” says Seet, adding that the profession would suit those who love the outdoors.
“I recall climbing Gunung Tahan and there was no place for the team to set up camp. We were all shivering cold and we even had to seek cover from the roaming elephants and tigers. Such experiences are hard to come by.”
Saim finds that geologists in the early days were more hands-on with their work than geologists are today.
“We now have the term desktop geologist, a person who doesn’t go for field work and does everything at the desk only.
“When I was younger, I recall having to go into the jungles and carrying a special hammer and a compass. Now, there’s GPS and if we’re lost in the jungles, we can use the handphone to call for help.”
Seet and Saim point out that their work as a geologist is not only limited to the study of rocks, but can also contribute to many aspects of life such as the economy, culture and environment. They also see geology as the study of the history of the planet so unfortunate events, man-made or otherwise, can be avoided.
The study of rocks never sounded so cool as when Seet and Saim described it. It’s apparent that every rock has a thousand stories to tell and it is through a geologist that these stories can be told.
Sooner or later, it's bound to happen.
Sooner or later, scientists who study Earth-crossing asteroids say, astronomers will find one that has a significant chance of striking the planet.
Unlike several recently discovered asteroids that were first given very long odds for a collision, this time more precise orbital calculations won't eliminate the possibility. This one will be an asteroid "with our name on it," in the words of David Morrison, a scientist at the NASA Ames Research Center and one member of a small community of astronomers, physicists, engineers and other scientists who think a lot about such an unthinkable event.
It is not clear what would happen then, though Dr. Morrison and others are trying to awaken governments and the public to the need to at least think about developing a way to respond. "Eventually we will discover something," Dr. Morrison said, though maybe not in this century or even this millennium. "Society should start planning for that unexpected but potentially tragic possibility."
But it is becoming clear that a longtime assumption of many scientists -- and of Hollywood filmmakers -- that a nuclear weapon is the best way to save the planet from a threatening asteroid is no longer in such favor. Increasingly, those scientists who study asteroid hazards say that a subtler, quieter, slower approach might be called for. These scientists are turning T. S. Eliot on his head: it's not that the world will end with a whimper rather than a bang, they say. It's that it may not end that way.
A nuclear detonation, some scientists say, could break the asteroid into several large pieces, increasing, rather than eliminating, the threat. And a blast some distance from an asteroid, designed to shove it into a slightly different orbit, might not work either; the asteroid might soak up the energy like a sponge. "I'd say forget that," said Dr. Keith A. Holsapple, a professor at the University of Washington who studies the effects of simulated nuclear explosions.
By contrast, most of the alternative approaches would build up force gradually, gently nudging, rather than shoving, the asteroid. They would rely on the same basic Newtonian principle -- that for every action there is an equal and opposite reaction -- only written small, with tiny actions creating tiny opposite reactions that, given enough time, could shift an asteroid's orbit enough to change a hit into a close call.
Among the approaches being talked about are some that have been the stuff of science fiction for years: a mass driver, a sort of electromagnetic conveyor belt that would be planted on an asteroid and hurl dirt from its surface, or a solar concentrator, a parabolic mirror that would orbit the body and heat up the surface, creating a plume of vaporized material.
Perhaps the most intriguing idea -- and one that may not be as far-fetched as it sounds -- has been put forth by Dr. Joseph Spitale, a scientist at the University of Arizona. To move an asteroid, he says, just change its color.
This "paint it black" approach would change how much sunlight it absorbs, and how hot it gets. Heat radiating from an asteroid (in the form of thermal photons) creates a small force in the opposite direction -- a phenomenon called the Yarkovsky effect, after I. O. Yarkovsky, a Russian engineer who first described it a century ago. Changing the amount of heat would change the force, affecting the orbit. The sun would move the asteroid, one photon at a time.
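To get a feel for how a photon-at-a-time push adds up, here is a rough order-of-magnitude sketch in Python. Every number in it is an illustrative assumption (a 100-metre asteroid at 1 AU, a guessed 10 percent thermal-thrust efficiency), not a figure from the article, and the constant-acceleration drift estimate ignores orbital dynamics entirely:

```python
import math

# --- illustrative assumptions, not values from the article ---
S = 1361.0        # solar constant at 1 AU, W/m^2
c = 2.998e8       # speed of light, m/s
R = 100.0         # asteroid radius, m
rho = 2000.0      # bulk density, kg/m^3
eff = 0.1         # assumed fraction of absorbed flux re-emitted asymmetrically

area = math.pi * R**2
mass = (4.0 / 3.0) * math.pi * R**3 * rho

thrust = eff * S * area / c       # photons carry momentum p = E/c
accel = thrust / mass             # m/s^2

years = 20.0
t = years * 3.156e7               # seconds
drift = 0.5 * accel * t**2        # crude constant-acceleration drift

print(f"thrust ~ {thrust:.2e} N, accel ~ {accel:.2e} m/s^2")
print(f"drift over {years:.0f} yr ~ {drift / 1e3:.0f} km")
```

Even this crude estimate gives a drift of a few hundred kilometres over two decades: tiny on orbital scales, but potentially the difference between a hit and a miss, which is exactly the leverage a colour change would exploit.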
The Search for Strange Matter; January 1994; Scientific American Magazine; by Crawford, Greiner; 6 Page(s)
For some years, physicists have enjoyed toying with a particularly intriguing puzzle. Protons and neutrons readily form either tiny clumps of matter (the various atomic nuclei) or very large clumps of matter (neutron stars). Yet between the invisible nucleus and the ultradense neutron star (really a vast nucleus some 11 kilometers or more in radius), no form of nuclear matter has been detected. What is going on here? Do the laws of physics as we know them forbid nuclear particles from assembling themselves into objects that could fill this "middle" range? Or is this nuclear desert actually filled with new forms of matter, different in structure from ordinary nuclear matter, that investigators have failed to find?
In fact, the theory that embodies our current understanding of physics, the Standard Model, seems to be consistent with the existence of new forms of nuclear matter that might populate the desert. And if the Standard Model is right, the detection of such matter could solve a major cosmological mystery: the nature of the "missing" matter, thought to account for 90 percent of the observable universe. This is a prize worth winning. So, in an experiment at Brookhaven National Laboratory, we, along with many collaborators from other research institutions, are searching for evidence of the existence of this form of nuclear matter that might fill the void. | <urn:uuid:1d18ed09-e602-4135-95cd-f1adf0f7c29c> | 3.484375 | 296 | Truncated | Science & Tech. | 35.990927 |
Title: Middle Rio Grande Basin Research Report 2008
Author: Finch, Deborah M.; Dold, Catherine, eds.
Source: Albuquerque, NM: U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station, Middle Rio Grande Ecosystem Management Research Unit. 20 p.
Description: An ecosystem is rarely static. A natural system composed of plants, animals, and microorganisms interacting with an area's physical factors, an ecosystem is always fluctuating and evolving. But sometimes, often at the hands of humans, ecosystems change too much. Such is the case with many of the ecosystems of the Middle Rio Grande Basin of New Mexico.
Keywords: Middle Rio Grande Basin, New Mexico, ecosystem, conservation, restoration, drought, bark beetle, grazing, fire, Valles Caldera National Preserve, climate change, bosque, Rio Grande silvery minnow, Hybognathus amarus
- This article was written and prepared by U.S. Government employees on official time, and is therefore in the public domain.
Finch, Deborah M.; Dold, Catherine, eds. 2008. Middle Rio Grande Basin Research Report 2008. Albuquerque, NM: U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station, Middle Rio Grande Ecosystem Management Research Unit. 20 p.. | <urn:uuid:f8598ded-d846-487e-8b48-70aa20fc348e> | 3.046875 | 331 | Truncated | Science & Tech. | 39.623261 |
Hydroids are actually colonies of individuals called polyps, which grow on a common stalk. Polyps in a typical colony each have specific jobs and look different from each other. These jobs include defense, feeding, and reproduction. In order to build up their own colony the hydroids reproduce by budding; all polyp types can bud. New colonies in Tubularia are formed when reproductive polyps make little polyps that stay attached until they develop into mature polyps. The mature polyps drop away from the "parent" and settle on the ground, then grow a new colony.
There are four major groups of cnidarians:
- Anthozoa, which includes true corals, anemones, and sea pens;
- Cubozoa, the amazing box jellies with complex eyes and potent toxins;
- Hydrozoa, the most diverse group with siphonophores, hydroids, fire corals, and many medusae; and
- Scyphozoa, the true jellyfish.
Hydras (class Hydrozoa) are a bit like elongated sea anemones with long, waving tentacles. They are quite common in fresh water, where they can be found clinging to plants and feeding on any small water animals that happen to blunder into their arms.
Hydroids are colonies of polyps. Each polyp is similar to a small sea anemone, i.e. it has its own jelly-like body and a mouth surrounded by tentacles. The difference is that the individuals are organised to benefit the whole colony. They do this by being interconnected via a common tube, the stolon, which can be tough and horny and which allows the transfer of food between the polyps.
Hydroids have interesting life cycles, which vary depending on the species, but generally they go through three stages.
1. A free-swimming planktonic larva that settles and forms sessile polyps.
2. The polyps produce free-swimming medusae (a small jellyfish-like stage).
3. The medusae in turn produce planktonic larvae.
The picture depicts an example of what a hydroid colony can look like, and it also shows the different stages of the hydroid life cycle. Notice how the colony consists of several genetically identical individuals called polyps. When the planula larvae have attached themselves to the substrate, the first individual grows out and starts catching food with its tentacles. The colony grows as new polyps bud off. The polyps that comprise the colony can share nutrition via the stems, and it is possible for different polyps to have different functions within the colony. In the picture it is possible to see the feeding polyps, which catch food with their poisonous tentacles, and the reproductive polyps, which produce the free-swimming medusae that carry either sperm or eggs. The colony depicted is about 3 cm high and the medusae are about 5 mm wide. The egg, sperm and planula larvae are smaller than 1 mm. Hydroid colonies can vary greatly in composition; many colonies, for example, contain defensive polyps. The type of life cycle can also vary between species: some hydroids lack the colony-building polyp stage, while in other species the medusa stage is lacking. The colony depicted above is the leptomedusan Obelia geniculata.
To find hydroids one has to look on the lower shore, particularly during spring tides when the kelp is exposed. They will be found attached to seaweeds and rocks, but they are not always easy to find, as many are small or form seaweed-like colonies, and many species do not have a common name. The species found in Cornwall include the Sea Fir Obelia geniculata, the Sea Oak Dynamena pumila, and Clava multicornis. Included within the hydroids are jellyfish-like creatures that are easily confused with true jellyfish.
Two species commonly confused are the Portuguese Man of War, Physalia physalis, and the By-the-wind Sailor, Velella velella. Neither is a true jellyfish; both are free-floating hydroid colonies. Both are regularly stranded on Cornish shores, particularly the By-the-wind Sailor, which can wash up in the thousands.
Only a week or so ago I was lamenting that few Americans were aware of the threat of avian flu. Now suddenly, the story is everywhere. No need to rehash here the potential catastrophe that has in recent days been splashed all over the media.
But here is another story, and an important one: Biologists have recreated the virus responsible for the 1918 flu pandemic that killed as many as 50 million people worldwide. Fragments of viral RNA were retrieved from a flu victim buried in Alaskan permafrost in 1918. The RNA was converted into DNA and sequenced. Overlapping sequences were pieced together to get the entire genome, and the viral DNA was synthesized in the lab. The synthesized DNA was injected into human kidney cells, which produced tens of viruses. These were isolated and used to infect mice. The mice all died within days of a virus that was indeed many times more virulent than the pathogens responsible for the pandemics of 1957 and 1968.
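The "overlapping sequences were pieced together" step is what is now called sequence assembly. The toy Python sketch below shows the idea with a greedy overlap merge; the reads are invented few-base fragments, and the real 1918 reconstruction, working from degraded RNA, used far more careful methods:

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a matching a prefix of b."""
    best = 0
    for k in range(min_len, min(len(a), len(b)) + 1):
        if a[-k:] == b[:k]:
            best = k
    return best

def greedy_assemble(frags):
    """Repeatedly merge the pair of fragments with the largest overlap."""
    frags = list(frags)
    while len(frags) > 1:
        best_k, best_i, best_j = 0, None, None
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    k = overlap(a, b)
                    if k > best_k:
                        best_k, best_i, best_j = k, i, j
        if best_k == 0:
            break  # no overlaps left; real assemblers must handle gaps
        merged = frags[best_i] + frags[best_j][best_k:]
        frags = [f for n, f in enumerate(frags)
                 if n not in (best_i, best_j)] + [merged]
    return frags[0]

# Invented toy reads covering one short stretch of sequence:
reads = ["ATGGCGTG", "GCGTGCAA", "TGCAATCC"]
print(greedy_assemble(reads))   # -> ATGGCGTGCAATCC
```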
A century-old flu virus has been recreated. Good or bad? Some say good. The team that synthesized the virus got permission from the Centers for Disease Control and the National Institute of Allergy and Infectious Diseases, the relevant national overseers. Apparently, they consider the benefits outweigh the risks. As it turns out, the 1918 virus was derived wholly from a bird virus, and thus may help understand the current threat and perhaps hasten the development of effective vaccines. Others say bad. What has in fact been recreated is a virulent killer that might escape the laboratory by accident or malfeasance. The DNA sequence of the 1918 virus has been published, available to any terrorist organization that might muster the technical requirements to recreate the virus. Not quite the Jurassic Park scenario, but not far from it.
Snakes have been around for nearly 100 million years and scientists have found many fossils of extinct species. But this astonishing specimen is different. This serpent is Sanajeh indicus. It is sitting in a dinosaur nest and its coils surround three eggs and the body of a hatchling.
There are many reasons to think that this prehistoric tableau represented a predator caught in the act of hunting, rather than a mash-up of unconnected players thrown together by chance. The snake is perfectly posed, with its head resting atop a coil and its body encircling a crushed egg. All the pieces are very well preserved and very little of the snake, the dinosaur or the crushed egg have been deformed. All of this suggests that the animals were caught unawares and quickly buried in sediment.
The hatchling in question is a baby sauropod, part of the dinosaur lineage that included the largest land animals of all time. It was probably a titanosaur, and being in India, that narrows things down to two known species – Isisaurus and Jainosaurus. The adults were formidable animals, 20-25 metres in length and protected by bony armour running down their backs. But even the largest dinosaurs must have hatched out of a small egg, and at that point, they were vulnerable. The hatchling that Sanajeh was about to dispatch was just 50 centimetres long, while the snake itself measured 3.5 metres.
Despite this size discrepancy, the hatchling would still have been a substantial mouthful. Most modern snakes wouldn’t have any problem with that. Their lower jaws can unhinge to give them a massive gape and their flexible skulls are made of bones that can move against each other.
Sanajeh was halfway towards developing these specialisations. It didn't have the fixed skulls and narrow gapes of the most primitive of modern snakes, nor could its maw open quite as wide as those of today's record-breakers. Nonetheless, it could certainly swallow a sauropod infant, and that ability earned Sanajeh indicus its name. The words are Sanskrit for "ancient gape from the Indus".
Sanajeh dates back to 67 million years ago but even after its bones were unearthed, it still took 26 years to reach the public eye. Dhananjay Mohabey first dug up the incredible specimen in 1984, near the Indian village of Dholi Dungri. He correctly identified the dinosaur baby and the remains of its egg but thought nothing more of it. The specimen’s true nature only became clear 17 years later, when Jeffrey A. Wilson from the University of Michigan visited Mohabey and re-examined the specimen. To his amazement, he spotted the distinctive backbones of a serpent, intertwined around the baby.
Peering through the Geological Society of India’s archives, the duo found a second block that had been collected at the same time but never been described. It snapped onto the first like pieces of a jigsaw, completing the loop of the snake’s coils around the crushed dinosaur egg. Even then, it took years of negotiations with the Government of India Ministry of Mines before Sanajeh could be taken to Michigan for careful preparation and study, and before the Mohabey and Wilson could return to the original dig site to find more specimens.
They did eventually find two Sanajeh individuals at the same site, both associated with sauropod clutches. This suggests that their original drama wasn’t a one-off production. At least in this area, this snake seems to have made a habit of feasting on would-be giants.
Perhaps the snakes are drawn to the nests by the presence of newly-hatched infants. After all, it’s one of the few moments when they would actually dwarf their prey, a size advantage that would disappear within months. Alternatively, Wilson suggests that Sanajeh may even have deliberately crushed the egg to free the meal within. Today, the Mexican burrowing snake (Loxocemus bicolor) certainly sets a precedent for this – it breaks the eggs of Olive Ridley turtles before eating the contents.
Reference: Wilson, J., Mohabey, D., Peters, S., & Head, J. (2010). Predation upon Hatchling Dinosaurs by a New Snake from the Late Cretaceous of India PLoS Biology, 8 (3) DOI: 10.1371/journal.pbio.1000322
Model by Tyler Keillor and photographed by Ximena Erickson
More on snakes:
- Titanoboa – thirteen metres, one tonne, largest snake ever.
- The tentacled snake turns a fish’s defence into a death march
- The snake that eats toads to steal their poison
- Big-headed tiger snakes support long-neglected theory of genetic assimilation
- Snake proteins have gone through massive evolutionary redesign
- Immune snakes outrun toxic newts in evolutionary arms races | <urn:uuid:e772a6b1-7f26-4ed5-a1fa-501764caeb1f> | 3.5625 | 1,034 | Personal Blog | Science & Tech. | 47.985587 |
Soon enough, seahorses may become a much more popular sight in our aquariums again, thanks to the Penghu Marine Biology Research Center of Taiwan.
Their research and process have allowed them to breed "tens of thousands" of seahorses in a single batch. The potential this offers the marine ornamental industry is large, since most wild-caught seahorses have a terrible survival rate.
Leave it to those enterprising Taiwanese to go ahead and make another breakthrough in breeding fish!
Source: Taiwan Today | <urn:uuid:96c515c8-cda8-4536-a32c-237a2807af29> | 2.84375 | 107 | Truncated | Science & Tech. | 37.962721 |
During the 20th century the search for a theory of how the physical world works at its most fundamental level went from one success to another. The earliest years of the century saw revolutionary new ideas including Einstein's special relativity and the beginnings of quantum theory, while the decades that followed each were times of surprising new insights. By the mid-1970s all the elements of what is now called the Standard Model were in place, and the final decades of the century were ones dominated by endless experimental results confirming this theory's predictions. By the end of the millennium, we were left in an uncomfortable state: the Standard Model was not fully satisfactory, leaving various important questions unanswered, but no experimental results disagreeing with it. Physicists had little to nothing in the way of hints as to how to proceed.
The LHC was supposed to be the answer to this problem. It could produce Higgs particles, allowing study of a crucial and less than satisfactory part of the Standard Model that had never been tested. A raft of heavily promoted speculative and unconvincing schemes for "Beyond Standard Model" physics all promised exciting new phenomena to be found at LHC-accessible energies.
Results from the LHC have now started to come in, and these are carrying disturbing implications. Unsurprisingly, none of the promised "Beyond Standard Model" particles have put in an appearance. More worrisome though is the big LHC success: the discovery of the Higgs. Within the still large experimental uncertainties, now that we've finally seen Higgs particles they look all too much as if they're behaving just the way the Standard Model predicted they would. What physicists are facing now is a possibility that they always knew was there, but couldn't believe would really come to pass: the "Nightmare Scenario" of the LHC finding a Standard Model Higgs and nothing more.
For the experimentalists, this leaves the way forward unclear. The case for the LHC was obvious: the technology was available, and the Higgs or something else had to be there for it to discover. Going to higher energies though is extremely difficult, and there's now no good reason to expect to find anything especially new. A lower energy "Higgs Factory" special purpose machine designed for detailed study of the Higgs may be the best bet. In the longer term, technological breakthroughs may be needed to allow studies of physics at higher energies at affordable cost.
Theorists in principle are immune to the constraints imposed by technology, but they face the challenge of dealing with the unprecedented collapse of decades of speculative work, and no help from the experiment on the question of where to turn to for new ideas. The sociological structure of the field is ill-equipped to handle this situation. Already we have seen a turn away from confronting difficult problems and towards promoting fatalistic arguments that nothing can be done. Arguments are being made that because of random fluctuations, we live in a corner of a "multiverse" of possibilities, with no hope of ever answering some basic questions about why things are the way they are.
These worries are in some sense just those of a narrow group of scientists, but I think they may have much wider implications. After centuries of great progress, moving towards ever deeper understanding of the universe we live in, we may be entering a new kind of era. Will intellectual progress become just a memory, with an important aspect of human civilization increasingly characterized by an unfamiliar and disturbing stasis? This unfortunately seems to becoming something worth worrying about. | <urn:uuid:bc8f605b-4ed9-4af5-bade-077577383c51> | 2.875 | 709 | Nonfiction Writing | Science & Tech. | 35.476702 |
Matter: Gases, Liquids, and Solids
These sites explain and define the three states (or phases) of matter: gas, liquid, and solid. There is also information about a fourth phase, plasma. Includes animations of particles, videos, illustrations, and activity ideas. There are links to eThemes resources on the properties of matter and on mass and weight.
Chem4Kids: Four Stages of Matter
Read descriptions of the properties and the relationships of gases, liquids, solids, and plasmas. The site is mostly text with some illustrations. NOTE: The site includes ads and a link to Google searching.
Why Files: What Are the States of Matter?
Read a short description of each state of matter. Click on the "More" link at the bottom for more fun facts.
Phases of Matter
This site includes a description of each state of matter and a diagram showing the phases of water at various temperatures and pressures. NOTE: This site contains ads.
Tech Topics: Matter
This site includes a description of the states of matter. Use the "Site Map" link to navigate.
Scholastic: Matter and Its Three Phases
This short explanation of the three stages of matter includes an experiment for students to try. NOTE: This site includes ads.
What is Matter?
Specifically designed for fourth grade students, this site has links to several chapters on matter. Includes some pictures.
A Matter of State
Here is a science lesson and experiment on how temperature changes particle movement. The site gives an in-depth description of how to conduct the experiment as well as an assessment. NOTE: This site includes ads.
View the Action
Wait for the page to finish loading, then click on the different stages of matter to see an animation of what the particles look like.
States of Matter
This lesson plan for grades 1-3 has in-class activities that can help kids understand matter and the different states of matter. These activities also introduce kids to the concept of molecules and molecular behavior in different states of matter.
The Three States of Matter
This activity is for younger students. It includes a recipe for no-bake cookies where kids can identify and count how many different states of matter are used in making the cookies.
Solids, Liquids, and Gases
These activities are designed to show kids the different states of matter and how matter can change between states.
EdHelper: States of Matter
This is a crossword puzzle for older kids to test their knowledge of states of matter. Includes answers.
BrainPOP: States of Matter
Watch the movie about the phases of matter, then try the quiz. NOTE: Site available by subscription only.
BBC: Science: Materials- Gases, Liquids, and Solids
Test your knowledge to see if you can classify items into the correct category: solids, liquids, or gases. Includes a revision and a quiz. NOTE: This site has a link to a discussion forum.
BBC: Changing State
Learn about changing state of matter through an interactive experiment. Includes a revision and a quiz. Note: This site has a link to a discussion forum.
SuccessLink: Changes That Matter
This lesson plan is designed for upper elementary students to learn about the concept of chemical interaction through an experiment.
SuccessLink: Dry-Ice Lab/Demonstration
In this lesson plan, students do an experiment with dry ice and observe its different stages.
eThemes Resource: Matter: Properties
Find out how properties of matter describe its various states. Discover the states of matter that exist on our planet and in the universe. Includes experiments with different states of matter and observing the changes as well as animated movies, lesson plans, worksheets, and online quizzes. There is a link to an eThemes on gases, liquids, and solids.
eThemes Resource: Matter: Mass and Weight
What is the difference between mass and weight? Find out by using these sites. Included are eThemes on metric measurement, gravity, and the properties of matter.
After this efficient formation, most of the gas is not in favorable conditions for star formation. Indeed, larger potential wells have not formed yet, and the density threshold is not reached. It is therefore likely that the gas will slowly accumulate in bigger potentials to form galaxies, as sketched in fig. 3. In the inner parts of these early galaxies, star formation can then begin. Then galaxy interactions will stir and heat the gas through shocks and gravitational perturbations.
Figure 3. Schematic view of the evolution of the baryonic dark matter, under the form of cold H2 gas: formation of clumpuscules at a redshift around 100, which are progressively involved in the formation of larger structures and proto-galaxies. A typical early galaxy is shown at z = 6, with the cold gas settling into a flaring disk, and the star formation beginning in the center, where the surface density is above threshold. Later on, when groups and clusters virialise, the gas is stripped and heated to contribute to the hot X-ray gas.
Besides, interactions accelerate the angular momentum transfer: part of the HI gas is dragged outwards in tails. Most of the gas is driven inwards, giving rise to huge nuclear starbursts (and maybe AGN). Galaxy evolution is highly accelerated. The cold gas that was settled around each galaxy is heated and virialised in the new common potential and might be visible in X-rays.
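A standard way to make the density threshold explicit is a Toomre-style criterion for a rotating gas disk, with star formation allowed where the gas surface density exceeds Σ_crit = κσ/(πGQ_crit). The sketch below uses entirely illustrative disk parameters (a flat 200 km/s rotation curve and an assumed exponential gas disk; none of these numbers come from this work) to show how such a threshold confines star formation to the inner disk:

```python
import numpy as np

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
PC = 3.086e16                 # metres per parsec
MSUN = 1.989e30               # kg

# --- illustrative disk model (assumed, not from this work) ---
V = 200e3                     # flat rotation speed, m/s
sigma_v = 6e3                 # gas velocity dispersion, m/s
Q_CRIT = 1.0                  # Toomre stability threshold

def sigma_crit(r_kpc):
    """Critical gas surface density (kg/m^2) for a flat rotation curve."""
    r = r_kpc * 1e3 * PC
    kappa = np.sqrt(2.0) * V / r          # epicyclic frequency for V(r) = const
    return kappa * sigma_v / (np.pi * G * Q_CRIT)

def sigma_gas(r_kpc, s0=200.0, rd=4.0):
    """Assumed exponential gas disk: central 200 Msun/pc^2, 4 kpc scale length."""
    return s0 * np.exp(-r_kpc / rd) * MSUN / PC**2

for r in (1, 2, 4, 8, 16):
    on = sigma_gas(r) > sigma_crit(r)
    print(f"r = {r:2d} kpc : star formation {'on' if on else 'off'}")
```

With these made-up numbers the threshold is met only inside roughly 10 kpc, echoing the picture in fig. 3 of star formation beginning in the center while the outer gas stays quiescent.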
Action/reaction pairs describe momentum flow. Momentum is a vector, which makes it harder to explain intuitively, so you should start with money, which is a scalar. Let me call a "payment" money that enters your possession. A payment can be negative, in which case you lose money, like when you buy a hat.
Newton's third law of finance says: for any payment, there is an equal and opposite payment to someone else (if you aren't a central bank!). So if you have a payment of -100 dollars, someone else got 100 dollars. This should be completely intuitive, because, outside of banking, on the personal level, money is a conserved quantity.
Newton's law is the same: the conserved quantity is momentum, and the momentum is flowing between objects. The flow is called the force, and the force is the "payment": it tells you how many units of momentum are incoming per unit time. The third law says that every payment is associated with a reverse payment going the other way (just like money, except the quantity is a vector).
So when the Earth pulls on you, it is paying you downward momentum, which means that you are paying the Earth upward momentum. That's the action-reaction pair. If you are on a scale, the scale pays you up-momentum (it pushes you up), and you pay the scale down-momentum (you push the scale down). The end result is that the force from the Earth and the scale cancel out, and the gravitational force on the Earth from you plus the downward force you exert on the Earth through the scale cancel out, and nothing ends up moving.
This is like a closed circuit of momentum, and elucidating the way in which momentum is flowing, even though the objects don't move, is the subject of statics. Newton's laws add to this the interpretation of momentum as a dynamical quantity, mass times velocity, so when an object accumulates momentum, you know how fast it is going.
This point of view is very useful, but it is not often explicitly taught. | <urn:uuid:5edd2d2d-568b-436d-baa5-b48a409c7a06> | 3.171875 | 438 | Q&A Forum | Science & Tech. | 56.274397 |
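A quick numerical sketch of this bookkeeping (made-up masses and an arbitrary force profile, just to check the accounting): treat the force as a momentum payment rate, credit one body exactly what the other is debited, and the total never changes no matter what the force does.

```python
# Two bodies exchanging momentum through an action/reaction pair.
# The force is a "payment rate": per step, A is credited f*dt and B is
# debited the same amount, so momentum can only ever be redistributed.

m_a, m_b = 1.0, 5.0          # arbitrary masses, kg
p_a, p_b = 0.0, 0.0          # momenta, kg*m/s
dt = 1e-3

def force_on_a(t):
    """Arbitrary made-up interaction force on A, in newtons."""
    return 3.0 if t < 1.0 else -1.5

t = 0.0
while t < 2.0:
    f = force_on_a(t)
    p_a += f * dt            # A receives the payment...
    p_b -= f * dt            # ...and B makes it (Newton's third law)
    t += dt

print(f"p_a = {p_a:+.3f}, p_b = {p_b:+.3f}, total = {p_a + p_b:+.6f}")
print(f"v_a = {p_a / m_a:+.3f} m/s, v_b = {p_b / m_b:+.3f} m/s")
```

The total stays at zero to rounding error, and dividing each accumulated momentum by its mass recovers the "dynamical quantity" reading: once you know how much momentum a body has collected, you know how fast it is going.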
More news from Spaceweather.com:
COMPLEX ERUPTION ON THE SUN: On August 1st, the entire Earth-facing side of the sun erupted in a tumult of activity. There was a C3-class solar flare, a solar tsunami, multiple filaments of magnetism lifting off the stellar surface, large-scale shaking of the solar corona, radio bursts, a coronal mass ejection and more.
The movie recorded by extreme UV cameras onboard the Solar Dynamics Observatory shows an enormous magnetic filament breaking away from the sun. Some of the breakaway material is now en route to Earth in the form of a coronal mass ejection (CME).
Seeing the sun erupt on such a global scale has galvanized the international community of solar physicists. Researchers are still sorting out the complex sequence of events and trying to understand why they all happened at once. Stay tuned for more movies and analyses in the days ahead.
Plate tectonics: A deep time and planetary perspective
John Weber, Geology Department, Grand Valley State University
James Lawrence Powell, in his poignant and insightful popular science book, Mysteries of Terra Firma, lists and discusses the three big ideas (paradigms) in Earth science: time (age of the Earth), drift (plate tectonics), and chance (impact). Perhaps a fourth, ice ages and global climate change, could be added.
Much to our surprise, and after much exploration and expectation (we had expected, for example, to find evidence for plate tectonics on Venus), it turns out that Earth is the only planet in the solar system on which plate tectonics occurs today. Some interesting questions arise: When did plate tectonics get started on planet Earth, and how long has it been operating? If not always plate tectonics, what other kinds of tectonics operated on Earth? Why do all of the other terrestrial planets have single plates and what has been called "stagnant lid" tectonics? Could these neighbors have had plate tectonics in the past? What about geology and planetary history might help us address these questions?
Small objects (< 200 km in diameter) in the solar system (e.g., Phobos, Gaspra) are lumpy and have irregular shapes. The nearly perfectly spherical shapes of planets and other objects in the solar system that are larger than ~200 km indicate that they must have initially formed as completely molten drops of liquid.
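A back-of-the-envelope way to see where that ~200 km scale comes from: a body settles toward a sphere once its internal self-gravitational pressure overwhelms the strength of its rock. Setting the central pressure of a uniform sphere, P = (2π/3)Gρ²R², equal to an assumed material strength gives a threshold radius. The numbers below (rock strength and density) are illustrative guesses, not measured values:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
rho = 3000.0         # assumed rock density, kg/m^3
strength = 4e7       # assumed crushing strength of rock, Pa (illustrative)

# Central pressure of a uniform sphere: P = (2*pi/3) * G * rho^2 * R^2.
# Gravity starts to win, and the body rounds off, roughly where P > strength.
R = math.sqrt(3.0 * strength / (2.0 * math.pi * G * rho * rho))
print(f"threshold radius ~ {R / 1e3:.0f} km (diameter ~ {2 * R / 1e3:.0f} km)")
```

This crude estimate lands at a few hundred kilometres, the right range; the actual transition depends on composition and thermal history, and weaker icy bodies round off at smaller sizes than rocky ones.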
An early Earth (Hadean; 4.6-3.8 billion years ago; no record preserved) with a magma ocean is thus inferred, on which the presence of plates and plate tectonics would not have been possible. The ubiquitous record of sunken greenstone belts and diapirically emplaced TTG (tonalite-trondhjemite-granodiorite intrusive igneous rock) suites in rocks 3.8-2.5 billion years old can be taken to infer that the Archean Earth may have supported a largely vertical style of tectonics. Mesoproterozoic (1.6 billion years ago) rocks, including passive margin sequences, blueschists, UHP (ultra-high pressure) rocks, and ophiolites, provide the first convincing record that can easily be related to modern plate tectonic-like processes. Thus, Proterozoic and Phanerozoic (< 1.6 billion years ago) Earth tectonics were probably much like modern plate tectonics.
As originally formulated in 1968, Earth's rigid plate motion is measured using transform fault azimuths, seafloor magnetic "zebra" stripes, and earthquake slip vectors. These features average relative plate motion across plate boundaries over the past 1-2 million years. With few exceptions, plate motions measured using space geodetic techniques, like GPS (the Global Positioning System), average plate motion over just the past several decades, and match those (at > 95% confidence) measured using the longer-term geological data.
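In the rigid-plate description, each plate's motion is a rotation about an Euler pole, so the surface velocity of any site is v = ω × r. The Python sketch below uses rough illustrative values for Pacific-North America motion (a pole near 48.7°N, 78.2°W rotating at 0.75°/Myr, in the spirit of NUVEL-1A but not quoted from any published model):

```python
import numpy as np

R_EARTH = 6.371e6                              # m
YEAR = 3.156e7                                 # s

def unit_vec(lat_deg, lon_deg):
    """Unit position vector of a point on a spherical Earth."""
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

# Rough illustrative Euler vector, Pacific relative to North America
# (approximate values only; not a quoted plate-motion model).
pole = unit_vec(48.7, -78.2)
rate = np.radians(0.75) / (1e6 * YEAR)         # 0.75 deg/Myr -> rad/s
omega = rate * pole

# Site near the plate boundary (roughly central California).
site = R_EARTH * unit_vec(36.0, -120.6)
v = np.cross(omega, site)                      # v = omega x r

print(f"relative plate speed ~ {np.linalg.norm(v) * 1e3 * YEAR:.0f} mm/yr")
```

The result, a few tens of millimetres per year at a site near the San Andreas, is the kind of prediction that both the million-year geological averages and the decades-long GPS averages can be tested against.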
Earth's plates are probably driven by subducted oceanic slabs that cool, sink, and pull the plates around. The presence of water on Earth may be the key ingredient that caused plate tectonics here. | <urn:uuid:79dde4e8-8453-482b-9493-3245d1910e09> | 3.484375 | 686 | Academic Writing | Science & Tech. | 40.651211 |
The fpio library contains routines for converting floating-point numbers between IEEE binary representation and decimal string representation. It is based on the gdtoa library by David M. Gay.
There are two routines included with the library, string->fp and fp->string.
fp->string:: NUMBER [* NDIGITS] -> STRING
Converts the given floating-point number to decimal string representation. If optional argument NDIGITS is positive, the conversion is done to the specified number of decimal places. If NDIGITS is zero (the default) or negative, the conversion is done to the shortest decimal string that rounds to the given floating point value.
string->fp:: STRING [* ROUNDING] -> NUMBER
Converts the given decimal string to binary IEEE floating-point representation. Optional argument ROUNDING is a symbol that can be one of:
- indicates rounding-towards-zero mode (the default)
- indicates rounding-towards-nearest mode
- indicates rounding-towards +Inf mode
- indicates rounding-towards -Inf mode
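A hypothetical usage sketch (assuming the egg is loaded with Chicken's `use` form; the results mentioned in the comments are what one would expect from shortest-round-trip conversion, not verified output):

```scheme
(use fpio)

;; Shortest decimal string that rounds back to the same IEEE double;
;; expect "0.1" rather than a 17-digit expansion.
(fp->string 0.1)

;; Conversion to a fixed number of decimal places (NDIGITS = 3);
;; expect something like "3.142".
(fp->string 3.14159265 3)

;; Parse a decimal string back to a float with the default
;; rounding-towards-zero mode.
(string->fp "6.02e23")
```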
- Initial version
Copyright 2008 Ivan Raikov.
Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appear in all copies and that both that the copyright notice and this permission notice appear in supporting documentation. | <urn:uuid:72126a9c-652b-4790-a82d-d3dd9f294cf4> | 2.859375 | 312 | Documentation | Software Dev. | 26.830174 |
Science Fair Project Encyclopedia
Lynn Margulis (born 1938) is a biologist and a professor at the University of Massachusetts Amherst. In 1967 she proposed a contentious new hypothesis that became her most important scientific contribution: the endosymbiotic theory, which holds that mitochondria originated as separate organisms that long ago entered a symbiotic relationship with eukaryotic cells.
- "She is best known for her theory of symbiogenesis, which challenges a central tenet of neodarwinism. She argues that inherited variation, significant in evolution, does not come mainly from random mutations. Rather new tissues, organs, and even new species evolve primarily through the long-lasting intimacy of strangers. The fusion of genomes in symbioses followed by natural selection, she suggests, leads to increasingly complex levels of individuality."
- " After the proposal of the endosymbiotic theory, Margulis predicted that if organelles were prokaryotic symbionts, then the organelles will have their own DNA that would be different from the DNA of the cell. This prediction was actually proven in the 1980's in mitochondria, centrioles, and chloroplasts."
She was criticized as a radical and her scientific work was rejected by mainstream biology for many years. Her work has more recently received widespread support and acclaim. Margulis was inducted into the World Academy of Art and Science, the Russian Academy of Natural Sciences, and the American Academy of Arts and Sciences between 1995 and 1998.
Publications - books
- Margulis, Lynn, 1970, Origin of Eukaryotic Cells, Yale University Press, ISBN 0300013531
- Margulis, Lynn, 1982, Early Life, Science Books International, ISBN 0867200057
- Margulis, Lynn and Dorion Sagan, 1986, Origins of Sex : Three Billion Years of Genetic Recombination, Yale University Press, ISBN 0300033400
- Margulis, Lynn and Dorion Sagan, 1987, Microcosmos: Four Billion Years of Evolution from Our Microbial Ancestors, HarperCollins, ISBN 004570015X
- Margulis, Lynn and Dorion Sagan, 1991, Mystery Dance: On the Evolution of Human Sexuality, Summit Books, ISBN 0671633414
- Margulis, Lynn, ed, 1991, Symbiosis as a Source of Evolutionary Innovation: Speciation and Morphogenesis, The MIT Press, ISBN 0262132699
- Margulis, Lynn, 1992, Symbiosis in Cell Evolution: Microbial Communities in the Archean and Proterozoic Eons, W.H. Freeman, ISBN 0716770288
- Margulis, Lynn and Dorion Sagan, 1997, Slanted Truths: Essays on Gaia, Symbiosis, and Evolution, Copernicus Books, ISBN 0387949275
- Margulis, Lynn and Karlene V. Schwartz, 1997, Five Kingdoms: An Illustrated Guide to the Phyla of Life on Earth, W.H. Freeman & Company, ISBN 0613923383
- Margulis, Lynn, 1998, Symbiotic Planet : A New Look at Evolution, Basic Books, ISBN 0465072712
- Margulis, Lynn and Dorion Sagan, 2002, Acquiring Genomes: A Theory of the Origins of Species, Perseus Books Group, ISBN 0465043917
- Margulis, Lynn, et al., 2002, The Ice Chronicles: The Quest to Understand Global Climate Change, University of New Hampshire, ISBN 1584650621
- UMass Bio Dept. (includes a partial list of technical publications) Accessed 3/11/05.
- UMass Geo Dept. Accessed 3/11/05.
- www.immaculata.edu Accessed 3/11/05.
- San Jose Science, Technology and Society, 2004-2005 Linus Pauling Memorial Lectures Accessed 3/11/05.
- The Endosymbiotic Theory Accessed 3/11/05.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Life After Death in the Deep Sea
Following immolation by volcanic eruption, the community around a hydrothermal vent recovers spectacularly
The first examples of life forms not dependent on solar energy were discovered by scientists using towed cameras and the submersible Alvin in 1977 along hydrothermal vents of the Galapagos Rift. Since then, investigators have made hundreds of dives aboard Alvin to learn more about these unusual ecological communities. In the spring of 1991, Alvin and its tender, Atlantis II, happened to be on station above the East Pacific Rise between 9 and 10 degrees north latitude only a few days after the axial summit trough 2,550 meters below the surface erupted, obliterating a thriving vent community. The authors made numerous dives on the 9N Biotransect over the ensuing 10 years. Their article describes the return of life to the vents and the ecological succession they witnessed.