| text | id | score | tokens | format | topic | fr_ease | __index__ |
|---|---|---|---|---|---|---|---|
Region of the electromagnetic spectrum extending from about
This is the wavelength region to which the human eye is sensitive. There are no precise limits for the spectral range of visible radiation, since they depend upon the amount of radiant power reaching the retina and on the responsivity of the observer.
PAC, 2007, 79, 293
(Glossary of terms used in photochemistry, 3rd edition (IUPAC Recommendations 2006))
on page 439
IUPAC. Compendium of Chemical Terminology, 2nd ed. (the "Gold Book"). Compiled by A. D. McNaught and A. Wilkinson. Blackwell Scientific Publications, Oxford (1997). XML on-line corrected version: http://goldbook.iupac.org (2006-) created by M. Nic, J. Jirat, B. Kosata; updates compiled by A. Jenkins. ISBN 0-9678550-9-8. doi:10.1351/goldbook | <urn:uuid:8b486a13-4392-45f6-aef8-6686ec56e3f7> | 3.15625 | 205 | Structured Data | Science & Tech. | 65.805951 | 1,200 |
Pascal’s Triangle is a special triangular arrangement of numbers used in many areas of mathematics. It is named after the famous 17th century French mathematician Blaise Pascal because he developed so many of the triangle’s properties. However, this triangular arrangement of numbers was known by the Persian poet and mathematician Omar Khayyam (c 1044-1123) and the Chinese mathematician Zhu Shijie (c 1260-1320) centuries before Pascal.
At the top of the triangle is a 1, which makes up the 0th row. The 1st row (1, 1) contains two 1s, each formed by adding the two numbers above it, one to the left and one to the right, in this case 0 and 1. (All numbers outside the triangle are 0s.) Do the same to create the 2nd row (0 + 1 = 1, 1 + 1 = 2, 1 + 0 = 1) and all subsequent rows.
A number in the triangle can be found by using nCr (n choose r), where n is the number of the row and r is the number of the element in that row. This is especially helpful to find a particular term in the expansion of a binomial in the form (x + y)^n.
Find the 4th term in the 6th row of the triangle: 6C4 = 15.
(Remember: the first 1 in each row is the 0th element, so this is correct.)
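This lookup is easy to reproduce in code. Here is a minimal sketch in Python (not part of the original article; it assumes Python 3.8+ so that math.comb is available, and it follows the 0-indexed convention above):

```python
from math import comb  # comb(n, r) computes n choose r

# 4th element (0-indexed) of the 6th row of Pascal's Triangle
print(comb(6, 4))                          # 15

# The full 6th row for comparison: 1, 6, 15, 20, 15, 6, 1
print([comb(6, r) for r in range(6 + 1)])
```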
Sum of rows: The sum of the numbers in any row is equal to 2^n, where n is the number of the row.
2^0 = 1 = 1
2^1 = 2 = 1 + 1
2^2 = 4 = 1 + 2 + 1
2^3 = 8 = 1 + 3 + 3 + 1
2^4 = 16 = 1 + 4 + 6 + 4 + 1 and so forth.
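Both the additive construction rule and this row-sum property can be checked with a short script. The following is only a sketch in Python, assuming nothing beyond the rule that each entry is the sum of the two entries above it:

```python
def pascal_rows(n_rows):
    # Yield rows 0 .. n_rows-1 of Pascal's Triangle.
    row = [1]
    for _ in range(n_rows):
        yield row
        # Each new entry is the sum of the two entries above it
        # (values outside the triangle count as 0).
        row = [a + b for a, b in zip([0] + row, row + [0])]

for n, row in enumerate(pascal_rows(5)):
    assert sum(row) == 2 ** n
    print(f"2^{n} = {sum(row)} = {' + '.join(map(str, row))}")
```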
Prime numbers: If the first element in a row is a prime number (remember, the first 1 in any row is the 0th element), all of the numbers in that row (excluding the 1s) are divisible by it.
For example, in the 7th row (1, 7, 21, 35, 35, 21, 7, 1), the entries 7, 21, and 35 are divisible by 7.
In Algebra, each row in Pascal’s Triangle contains the coefficients of the binomial (x + y) raised to the power of the row.
(x + y)^0 = 1
(x + y)^1 = 1x + 1y
(x + y)^2 = 1x^2 + 2xy + 1y^2
(x + y)^3 = 1x^3 + 3x^2y + 3xy^2 + 1y^3
(x + y)^4 = 1x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + 1y^4 and so forth.
Another major area where Pascal’s Triangle shows up and is very useful is in probability where it can be used to find combinations.
Interesting Number Patterns:
Many interesting number patterns can be found in the triangle. Included are the Fibonacci sequence, Triangular and Square Numbers (found in the diagonals starting with row 3), and Polygonal Numbers.
Another interesting connection is to Sierpinski’s Triangle. When all of the odd numbers in Pascal’s Triangle are filled in and the evens are left blank, the recursive Sierpinski Triangle fractal is revealed.
Each of these is a fascinating topic that warrants further research on your part. | <urn:uuid:7a83b8ab-ac7e-4b18-be31-d445000cca1b> | 3.890625 | 737 | Knowledge Article | Science & Tech. | 80.608913 | 1,201 |
Evolution is generally regarded as pretty good at specializing species for certain ecological niches, but with a glacial pace of adaptation, sometimes it needs a helping hand. Meet the team of the Weizmann Institute of Science in Israel, who decided they wanted to beef up the paraoxonase 1 (PON1) to the point it could combat sarin and other G-type nerve agents.
PON1 is an enzyme that is produced by our livers that can counteract sarin, tabun, soman, and cyclosarin, but not well enough for use in case of emergency — requiring people to use masks and suits, which can be penetrated. So the researchers undertook a path of "directed enzyme evolution," intentionally speeding up the course of adaptation to further their goal.
This technique is, essentially, the sort of wonderful mad science that we love here at io9. They artificially induced mutations into the gene that encoded the target enzyme, creating variations in its efficiency and ability, and picked the best out of the lot. Within four generations of this process, the enzyme was functioning 340 times more efficiently than its natural form.
The hope is that the modified enzyme can be given to people entering an area exposed to these nerve agents to protect them, as well as used to treat those already exposed. It also shows that with some judicious use of mutations and population pressure, "directed evolution" can provide very speedy results.
Photo by dr_ed_needs_a_bicycle.
Evolved Stereoselective Hydrolases for Broad-Spectrum G-Type Nerve Agent Detoxification [Chemistry & Biology] | <urn:uuid:ca631b54-235b-4a94-a3c9-fb08d2a7335c> | 3.390625 | 341 | News Article | Science & Tech. | 25.530688 | 1,202 |
Take an $n \times n$ matrix $A$, and suppose that $v$ is an eigenvector of $A$, with all entries of $v$ equal to a constant $k$. Naturally, $k \ne 0$. Let $\lambda$ be the eigenvalue of $A$ that has $v$ as an eigenvector. If $(b_1, b_2, \dots, b_n)$ is any row of $A$, then by the definition of eigenvalue and eigenvector, we have
$$kb_1+kb_2+\cdots +kb_n=\lambda k,$$
from which we conclude that $b_1+b_2+\cdots+b_n=\lambda$. It follows that each row sum of the matrix is equal to $\lambda$.
Conversely, suppose that all row sums of $A$ are equal to $\sigma$. Let $v$ be the vector with all entries equal to $1$. Then $Av$ is a vector with all entries equal to $\sigma$, which means that $v$ is an eigenvector of $A$ with eigenvalue $\sigma$.
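Both directions of the argument are easy to sanity-check numerically. Here is a minimal sketch (not part of the original answer; it assumes NumPy is available and uses an arbitrary $3 \times 3$ matrix whose rows each sum to 7):

```python
import numpy as np

# A matrix whose rows all sum to the same value (7 in this example).
A = np.array([[1.0, 2.0, 4.0],
              [3.0, 3.0, 1.0],
              [0.0, 5.0, 2.0]])

v = np.ones(3)                     # the constant vector (1, 1, 1)
print(A @ v)                       # every entry equals the common row sum, 7
print(np.allclose(A @ v, 7 * v))   # True: v is an eigenvector with eigenvalue 7
```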
Thus $A$ has an eigenvector with all entries equal if and only if all row sums of $A$ are equal. | <urn:uuid:981f0954-07f0-4ac9-a047-28425b31384b> | 2.65625 | 280 | Q&A Forum | Science & Tech. | 78.179402 | 1,203 |
Interview with Dr. Motoji Ikeya
By David Jay Brown
Dr. Motoji Ikeya is a Japanese interdisciplinary researcher, using electron spin resonance (ESR) in geosciences and radiation dosimetry, with a research interest in the cause of unusual animal behavior prior to earthquakes. His laboratory experiments at Osaka University have shed an enormous amount of light on the possible mechanisms that may be operating during this unexplained phenomenon.
Dr. Ikeya majored in Electronics and then Nuclear Engineering at Osaka University. He worked at Nagoya and Yamaguchi Universities, was a research associate at The University of North Carolina at Chapel Hill, and a fellow of the Alexander von Humboldt Foundation at the University of Stuttgart, Germany. He is a recipient of the Asahi Newspaper Grant for Encouragement of Science (1981) and the 4th Osaka Science Prize in 1986.
Dr. Ikeya’s major field of specialization has been in quantum geophysics. He has researched Electron Spin Resonance (ESR), which is used for dating geological and archaeological materials, and in the future these techniques may be used for dating materials on icy planetary bodies. He has also researched radiation dosimetry and assessment of the paleo-environment. Dr. Ikeya began his earthquake precursor studies after the Kobe Earthquake in 1995.
At Osaka University Dr. Ikeya was chair of the Quantum Geophysics Laboratory, and is the author of more than three hundred scientific papers. He was a Professor in the Graduate School of Science at Osaka University’s Department of Physics from 1987, and of Earth and Space Science since its foundation in 1991. Dr. Ikeya retired from Osaka University in 2004, and is now helping young people in ESR on a part-time basis.
Dr. Ikeya is also the author of *Earthquakes and Animals: From Folk Legends to Science* (World Scientific, 2004), which is the most important book on the subject of unusual animal behavior and earthquakes since Helmut Tributsch’s classic work *When the Snakes Awake*. This meticulously researched work is an interdisciplinary treasure trove of folk legends, historical anecdotes, interview surveys and subjective reports, geophysical science facts, and most importantly, a fascinating summary of Dr. Ikeya’s own laboratory research.
Ikeya’s laboratory experiments were conducted to see if exposure to an electrical field or electromagnetic pulses could elicit animal behavior similar to what has been reported prior to earthquakes. Ikeya’s experiments produced very interesting results. For example, fish showed panic reactions, and earthworms moved out of the soil and swarmed when current was applied. These are very similar to the behaviors that are reported before earthquakes. Dr. Ikeya’s work also sheds light on other mysterious pre-earthquake phenomena–which he was able to recreate in the laboratory–such as strange plant growth, earth-lights, fogs, atmospheric distortions, and unusual phenomena with electric appliances, such as televisions and cell phones.
I interviewed Dr. Ikeya on October 12, 2004. Dr. Ikeya has a great deal of curiosity, open-mindedness, and the rare ability to bridge scientific disciplines. We discussed how his laboratory experiments help us to understand the causes of unusual animal behavior prior to earthquakes, why so many scientists are resistant to this idea, and whether or not a reliable earthquake forecasting system is possible.
David: What motivated you to start studying the relationship between unusual animal behavior and earthquakes?
Dr. Ikeya: The Kobe earthquake in 1995. I live 30 km from the epicenter and thought it strange that many earthworms dug themselves up in my small garden. At the time, I did not know the legend that a number of emerging earthworms is a sign of a large earthquake. Many people noticed this, including my neighbors.
David: How have your laboratory experiments with electric fields and electromagnetic pulses helped to shed some light on what may cause unusual animal behavior prior to earthquakes?
Dr. Ikeya: First, theoretical calculation of EQ light, which was seen by my graduate students and associate professor. EQ clouds and fogs in legends may naturally be produced in super-cooled atmosphere. Then, it dawned on me that animals might be sensing such atmospheric discharge and electric field as electric field effects.
David: How do you think animals detect electromagnetic waves, and why do you think this causes them to behave in peculiar ways?
Dr. Ikeya: Electric fields may be sensed by the force on the animal’s hair. Induced current in the body may cause changes with some neurotransmitters.
David: Your research provides strong evidence for the theory that electromagnetic changes are causing the unusual animal behavior and other unexplained phenomena that are sometimes reported to occur prior to earthquakes. Do you think that this is just one possible explanation or the only one?
Dr. Ikeya: Probably most of the unexplained phenomena (80 – 90%) reported by lay citizens would have electromagnetic causes. Old legends of bent flames, and rice cooking anomaly, as well as animal and plant anomalies, are definitely electromagnetic in origin. However, the Moses’
phenomenon [reports that great bodies of water will suddenly and temporarily split apart, creating a valley to the ocean floor, and two massive walls of water] is due to natural hydrodynamic causes.
David: Why do you think so many scientists are resistant to the idea that unusual animal behavior prior to earthquakes is a real phenomenon?
Dr. Ikeya: Because there are people who link trivial events to large earthquakes, and afterthoughts are inevitably involved in the statements by lay citizens, especially at a distance larger than 100 – 200 km for a
M7 earthquake. I explain this in Chapter 5 of my book *Earthquakes and Animals*.
For countries like New Zealand, the focal depth is 50 km or so.
Electromagnetic (EM) intensity would be less, and so there would be less unusual phenomena. Granite bedrock in Japan might play a role due to the involvement of piezoelectric quartz grains, while basalt may generate less intense EM waves. Fluid movement in the boundary of granite might be responsible for the generation of EM waves, rather than the piezoelectricity.
David: What do you think are the most important experiments that still need to be done in order to shed more light on the nature of mysterious earthquake precursors?
Dr. Ikeya: Experiments of less intense EM exposure to human being, which is not allowed since we are not medical doctors. Some people might be very sensitive.
David: Do you think that it is possible for observations of animal behavior to ever be part of a reliable earthquake forecasting system?
Dr. Ikeya: No! Once we know that EM pulses are responsible, electronic detection will be better at forecasting earthquakes than observations of animal behavior. However, additional information about unusual phenomena–collected by an automatic observation system, rather than a collection of reports from lay citizens–would increase the reliability of a forecast of a disastrous earthquake. Collected data on cattle healthcare from farms in different areas, which are transmitted over the Internet, may be useful for studying the cattle’s response to weather changes, including an impending earthquake. They may provide additional information.
David: What are you currently working on?
Dr. Ikeya: I am a visiting professor of nano-science at the Institute of Scientific and Industrial Research on a part-time basis since my retirement. There is no job at the university if a professor is behaving unusually. However, I am developing my theory on generation and propagation of seismo-electromagnetic signals (SEMS) since my book, *Earthquakes and Animals*, is for the general public. Scientists need some mathematical equations that explain the phenomena quantitatively.
It is a bit tough for an old professor to work on two entirely different subjects, though both | <urn:uuid:3a002823-0b1c-4678-87bd-401e9c6f2b2c> | 2.78125 | 1,663 | Audio Transcript | Science & Tech. | 34.503397 | 1,204 |
Serialization is the process of converting the state of an object into a form that can be persisted or transported. The complement of serialization is deserialization, which converts a stream into an object. Together, these processes allow data to be easily stored and transferred.
The .NET Framework features two serializing technologies:
Binary serialization preserves type fidelity, which is useful for preserving the state of an object between different invocations of an application. For example, you can share an object between different applications by serializing it to the Clipboard. You can serialize an object to a stream, to a disk, to memory, over the network, and so forth. Remoting uses serialization to pass objects "by value" from one computer or application domain to another.
XML serialization serializes only public properties and fields and does not preserve type fidelity. This is useful when you want to provide or consume data without restricting the application that uses the data. Because XML is an open standard, it is an attractive choice for sharing data across the Web. SOAP is likewise an open standard, which makes it an attractive choice.
In This Section
- Serialization How-to Topics
- Lists links to How-to topics contained in this section.
- Binary Serialization
- Describes the binary serialization mechanism that is included with the common language runtime.
- XML and SOAP Serialization
- Describes the XML and SOAP serialization mechanism that is included with the common language runtime.
- Serialization Tools
- These tools help develop serialization code.
- Serialization Samples
- The samples demonstrate how to do serialization.
- System.Runtime.Serialization
- Contains classes that can be used for serializing and deserializing objects.
- System.Xml.Serialization
- Contains classes that can be used to serialize objects into XML format documents or streams. | <urn:uuid:03851e7e-0db5-429c-93c2-656adf6af591> | 3.6875 | 376 | Documentation | Software Dev. | 27.278913 | 1,205 |
Great White Sharks Swim One Step Closer to ProtectionsAll Press Releases…
National Marine Fisheries Service to Conduct In-Depth Review of West Coast Population
September 27, 2012
Contact: Geoff Shester ( email@example.com | 831-643-9266 )
The National Marine Fisheries Service (NMFS) today announced a positive 90-day finding on two petitions to list the West Coast population of great white sharks under the Endangered Species Act (ESA). NMFS determined that the population merits further consideration for listing as an “endangered” or “threatened” species. Today’s decision is in response to ESA listing petitions submitted this summer by Oceana, the Center for Biological Diversity, Shark Stewards, and WildEarth Guardians. The conservation groups commend NMFS for recognizing the new science documenting the perils facing this unique population of great white sharks.
”We commend NMFS for elevating great white sharks one step closer toward the protections they desperately deserve,” said Geoff Shester, California Program Director for Oceana. “The alarm bells are ringing and we need to take action to address the bycatch of great white shark pups in our fisheries.”
Over the next nine months, NMFS will conduct an in depth status analysis of the population and make a final determination of whether to add this population to the federal endangered list. Today’s decision also initiates a formal public comment period. The impetus for the finding is new scientific studies showing that great white sharks off the coast of California and Baja California, Mexico are genetically distinct and isolated from all other great white shark populations and that the estimated number of adult sharks in this population is alarmingly low. With central estimates of only a few hundred adults remaining, this unique population is on the brink of extinction because of its low population size and the ongoing threats it faces from human activities.
“Great white sharks are incredible species that have survived for eons along the West Coast. Sadly, they’re in deep trouble right now, so we’re glad to see them a step closer to getting the help they need to survive,” said Miyoko Sakashita, oceans program director at the Center for Biological Diversity.
Oceana, the Center for Biological Diversity, and Shark Stewards also submitted a similar joint scientific petition to the California Fish and Game Commission for endangered listing at the state level. It is anticipated the Commission will make an initial determination in the next few months.
Deadly gillnets capture and kill great white sharks, and are presently the leading identified threat to their survival. While the direct capture of white sharks for sale is prohibited off the coasts of California and Mexico, young great white sharks are killed as incidental bycatch in set and drift gillnets targeting species including California halibut, white seabass, thresher sharks and swordfish. These nets are responsible for more than 80 percent of the reported young white sharks caught in their nursery grounds. Young great white sharks off the Southern California coast are also found to have some of the highest contaminant levels of mercury, PCBs, and DDT of any shark species worldwide.
"As the top predator in our waters, white sharks are critical for the balance and health of the California Coastal Upwelling Ecosystems” said David McGuire of Shark Stewards. “Protecting these sharks and their habitat is one step closer to restoring the productivity and diversity to our ocean and ocean life."
Great white sharks are a critical part of the ocean ecosystem, playing an important top-down role in structuring the ecosystem by keeping prey populations in check, such as sea lions and elephant seals. The presence of great white sharks ultimately increases species stability and diversity of the overall ecosystem. An Endangered Species Act listing will afford the sharks additional safeguards from key threats and garner more funding for research to better understand the status and threats to this distinct population of great white sharks.
“In the sea as on land, predators are key species in maintaining the natural balance,” said Taylor Jones, Endangered Species Advocate for WildEarth Guardians. “They often face unjust and disproportionate persecution or intensive human exploitation—and the great white shark is no exception. We are pleased that the Fisheries Service is recognizing the importance of these powerful creatures and the serious threats they face.”
Download Press Kit: | <urn:uuid:d1f77a14-127b-434a-b59b-4fb2edd85a3c> | 2.578125 | 905 | News (Org.) | Science & Tech. | 28.031261 | 1,206 |
May 26, 2010, 6:13 AM
Post #4 of 6
Please use the code tags whenever you post code.
Start by adding these 2 lines, which should be in every Perl script you write:
use strict;
use warnings;
Those pragmas will point out lots of coding errors that can be difficult to track down.
The strict pragma forces you to declare your vars, which is done with the 'my' keyword.
You should always check the return code of an open call to make sure it was successful and take action if it wasn't.
It's best to use the 3 arg form of open and a lexical var for the filehandle instead of the bareword.
open my $file1, '<', $ARGV[0] or die "failed to open '$ARGV[0]': $!";
open my $file2, '<', $ARGV[1] or die "failed to open '$ARGV[1]': $!";
That is normally written as:
Since the print function is a list operator, your attempt to read in the employee number from file two in the print statement will slurp and print the entire file. Instead, you should assign the employee number to a scalar var and use that var in the print statement. | <urn:uuid:0fb65209-141f-4086-870c-fe2d50bd57cc> | 2.859375 | 254 | Comment Section | Software Dev. | 73.336645 | 1,207 |
On Friday, we posed the following back-to-school-themed Fermi problem:
Assuming you're not in a big lecture hall and the professor shuts the door at the start of class, how long does it take for you and your classmates to deplete the oxygen enough to feel it?
We promised a surprising answer, and here it is. You decide if our back-of-the-envelope calculations are reasonable.
Let's build our classroom first. It's 16 feet wide and long, and 10 feet tall. In handy metric dimensions, that's:
5 meters by 5 meters by 3 meters, or 75 cubic meters.
A cubic meter is 1000 liters, so now we've got 75,000 liters of fresh air.
The oxygen content of air is about 21 percent, and at about 17.5 percent you'll run from the room screaming. To get from fresh and breathable to absolutely stifling, take the difference between 21 percent of 75,000 liters and 17.5 percent of 75,000 liters. That gives us 2,625 liters of oxygen to get through.
How much oxygen does a human consume? It was tough finding a reliable source, but this press release about the 2006 installation of a new oxygen generation system on the International Space Station provides a clue:
During normal operations, it will provide 12 pounds daily; enough to support six crew members.
Aha! So one person needs about 2 lb of oxygen a day, or .9 kg. But how many liters is that? Oxygen has a molar mass of 16 grams per mole, so oxygen gas, or O2, has a mass of 32 grams per mole. One mole of gas at standard pressure and temperature takes up 22.4 liters. Now, as my high-school chemistry teacher would say, it's time to hop on the mole-train:
.9 kg x (1000 g/1 kg) x (1 mole O2/32 g O2) x (22.4 L/1 mole O2)
This gives us a daily oxygen intake of 630 liters per person. Let's get a more reasonable rate:
(630 L/day) x (1 day/24 hours) x (1 hour/60 mins)
Now we have the serviceable rate of oxygen consumption of .4375 liters per minute. We're almost there.
Now populate the classroom with 34 students and 1 teacher. The 35 occupants consume 15.3125 liters per minute. Now for the final calculation:
2625 L x (1 minute/ 15.3125 L)
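Before reading off the result, the whole back-of-the-envelope estimate can be bundled into a few lines of Python. This is only a sketch of the arithmetic above; the room dimensions, the 35 occupants, the 2 lb/day oxygen figure, and the 17.5 percent threshold are the article's assumptions, not measured values:

```python
# Room volume: 5 m x 5 m x 3 m, in liters (1 m^3 = 1000 L)
room_liters = 5 * 5 * 3 * 1000

# Usable oxygen: the drop from 21% down to 17.5% of the room's air
usable_o2_liters = (0.21 - 0.175) * room_liters             # 2625 L

# Per-person consumption: 0.9 kg O2/day converted to liters per minute
#   0.9 kg * (1000 g/kg) / (32 g/mol) * (22.4 L/mol) = 630 L/day
liters_per_day = 0.9 * 1000 / 32 * 22.4
liters_per_minute = liters_per_day / 24 / 60                 # 0.4375 L/min

occupants = 35                                               # 34 students + 1 teacher
minutes = usable_o2_liters / (occupants * liters_per_minute)
print(round(minutes, 1), "minutes")                          # about 171.4
```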
It will take about 171 minutes, or 2 hours and 51 minutes, for the room to become unbearably stifling. You can imagine that you'd start to feel pretty uncomfortable about an hour and a half into the lecture—a good argument for shorter classes. | <urn:uuid:a68ffb3c-3c83-4b98-ad7b-1282010fb275> | 3.03125 | 594 | Personal Blog | Science & Tech. | 80.662083 | 1,208 |
In the United States, more people watch NASCAR racing than baseball, supposedly "America's pastime." It's second only to football, with 75 million dedicated fans who tune in (or show up) almost every weekend of the year to watch stock cars race around a track at speeds up to 190 mph (306 kph) [sources: Fulton, Eaton].
The cars' non-EPA-regulated engines and dangerously high speeds make the sport exciting to watch. They also make it one of the least environmentally friendly sports out there. NASCAR drivers make a living doing exactly what the rest of us are supposed to avoid in order to stave off global warming: Drive ridiculously powerful, gas-guzzling sports cars at extremely high speeds for entertainment value.
The sport burns so much fuel that the U.S. government labeled NASCAR a waste of gas during the fuel shortage of the 1970s. As a result, NASCAR shortened one of its races from 500 miles (804 kilometers) to 450 miles (724 kilometers) as a goodwill gesture. (It was a temporary change.)
So, just how much fuel does it take to hold a NASCAR race, and what effect does it really have on the state of the atmosphere? Is it a major CO2 contributor, or does it just get a bad rap because of the nature of the sport?
In this article, we'll find out whether NASCAR is as big an emitter as it seems. We'll check out the fuel and CO2 numbers, see how it compares to other activities, and look at the potentially "greener" future of the sport.
The first thing to understand when looking at NASCAR's carbon footprint is that race cars are even less like regular cars than some of us think. That speed comes at a price. | <urn:uuid:5ac81604-bad4-4472-9a98-2a76b104a585> | 3.125 | 362 | Personal Blog | Science & Tech. | 60.46284 | 1,209 |
Tomorrow finally marks the end of the 2012 Atlantic hurricane season—and it’s been one for the record books. As the National Oceanic and Atmospheric Administration (NOAA) reports today, this season produced 19 named storms, 10 of which became hurricanes and one of which became a major hurricane. The sheer number of named storms is well above the average of 12, but the number of major hurricanes was below the average of three. Altogether, it was enough for NOAA to classify the 2012 season as above-normal—though not exceptionally so.
You’re probably assuming that the one major hurricane was Sandy—but you’d be wrong. The only storm above a Category 1 or 2 was Hurricane Michael, a Category 3 storm that never made landfall. Sandy, despite the destruction it left, was a Category 2 at its strongest, and was only a Category 1 when it made landfall in the U.S. It’s another reminder that the timing and location of a hurricane matter just as much as—if not more than—its sheer strength. It’s also a sign that the 2012 season could have been much worse—this marks the seventh consecutive year that no major hurricane (Category 3 or above) hit the U.S.
But before we consign the 2012 hurricane season to history, check out the NOAA video above that shows every storm—crunched down to less than five minutes | <urn:uuid:706afd2f-f87e-4ee0-86ec-33663f77e30e> | 2.59375 | 286 | Truncated | Science & Tech. | 49.913721 | 1,210 |
But there also exists a practical angle: in a world increasingly dependent on electricity and electronics, the "space weather" outside the atmosphere can have serious effects, in particular on human communications.
Currently more than 200 communication satellites circle the Earth in synchronous orbit. A large magnetic storm can greatly increase the number of fast ions and electrons which hit those satellites; such ions and electrons are similar to the ones emitted by radioactive substances and can create serious problems.
The simplest effect is an electric charge on the satellite, usually negative, raising its voltage to hundreds or even thousands of volts. Charging by itself has little effect on the satellite's operation, although on a scientific satellite it would seriously distort observations (if the satellite is charged to, say, -500 volts, electrons with less energy than 500 electron-volts are repelled and cannot be detected). However, if different parts of the satellite are charged to different voltages, the current between them can cause damage.
Particles with higher energy can permanently degrade solar cells. Also, high-energy particles can penetrate the circuitry and cause either damage or false signals which lead to unintended responses by the satellite. All these have occured in the past.
Another effect of magnetic storms (and to a lesser extent, of substorms) is a greater intensity of the electric currents circulating between Earth and distant space. As already noted, these currents are associated with the polar aurora, and they flow from space into the auroral zone or the other way around. In big storms, not only is the magnetic disturbance more intense, but it also spreads further equatorwards, into more densely populated areas. For instance, in the picture on the right, taken from space during a storm in March 1989, aurora blankets the northern states of the US as well as southern Canada.
This disturbance also induces extra currents in the wires of the electrical power grid, creating a temporary overload. Serious overloads of this type can trigger circuit breakers and thus cause widespread "power blackouts," and on occasion they have even destroyed power transformers.
For these reasons, conditions at the Sun, in interplanetary space and in the magnetosphere are closely watched. The Space Environment Center in Boulder, Colorado, maintained by the National Oceanic and Atmospheric Administration (NOAA), has a Space Weather Operation facility which constantly tracks the "weather" in space.
This is done in several ways. NOAA satellites of the GOES series, in synchronous orbit, observe the local radiation environment and also monitor the Sun's x-rays, which come from the corona and increase at active times. Telescopes on Earth observe the Sun through special filters and in special wavelengths (e.g. x-rays), all of which highlight active features. NOAA posts a regularly updated "space weather report" online, and the University of Michigan provides a similar report.
In an interesting development, the recent SOHO spacecraft, currently at the L1 Lagrangian point, allows scientists to detect (by special processing of its images) coronal mass ejections, not just in a sideways view but even when they are headed straight for Earth. A CME noted in this way on January 6, 1997, arrived as predicted on the 10th-11th and caused a widespread disturbance. Another such event occurred April 7-11, 1997.
Of course, the sideways view of CMEs contains additional information, and NASA's planned solar missions include STEREO (Solar Terrestrial Relations Observatory), with a pair of well-separated solar observatories, to get a stereoscopic view of such eruptions. One spacecraft would orbit near Earth, the other would be stationed elsewhere in the Earth's orbit around the Sun, capturing a sideways view of solar eruptions. So far there is unfortunately no sure way of predicting whether the direction of the magnetic field carried by an erupting solar plasma would slant northwards or southwards, an important factor in predicting "space weather." Closer to Earth, spacecraft near the L1 point such as SOHO and WIND, and since August 1997 also ACE, intercept shocks and plasma clouds up to one hour before their arrival at Earth and serve as early warning stations.
An obvious question is whether the high energy particles produced by such events constitute a hazard not just for spacecraft but also for astronauts. So far no astronauts have been seriously exposed, not even those on the Russian space station "Mir" whose inclined orbit extends to fairly high latitudes, closer to the auroral zone than the planned orbit of the International Space Station planned by NASA. Nothing in space can be guaranteed, however, and re-entry modules for quick escape into the protecting atmosphere have been studied.
Further Links and Articles:
An article about the violent consequences of the arrival at Earth of an interplanetary shock wave from the Sun, on 24 March 1991, titled "The Birth of a Radiation Belt" (part of this site).
"Storms in Space: A Fictionalized Account of 'The Big One'," John W. Freeman, Jr., Eos, Transaction of American Geophysical Union 6 September, 1994.
"Geomagnetic Storm Forecasts and the Power Industry," by John G. Kappenman, Lawrence J. Zanetti and William A. Radasky,Eos, Transaction of American Geophysical Union 28 January, 1997.
Questions from Users:
*** Why has the aurora been so frequent lately?
*** Does the magnetosphere affect weather? (a)
*** Risks from stormy "Space Weather"
*** Magnetic storms and headaches
*** Electric and Magnetic Energy
*** Radio Propagation
*** Magnetism and weather (b)
*** Waves in the Magnetosphere | <urn:uuid:4645ea55-347c-4502-b5a1-9cdf1272ce9a> | 3.640625 | 1,257 | Knowledge Article | Science & Tech. | 37.842693 | 1,211 |
Content Tagged 'Science'
Massive Solar Storm Could Hit Earth within Two Years
Scientists are monitoring solar activity and say it's possible that the Earth will experience a massive solar storm in the near future.
Announcement: High School Students’ Scientific Research Will Be Featured July 27th at the U of M
This Friday will mark the culmination of a special seven-week course at the University of Memphis for 12 students from several local and out-of-state high schools.
Sky Gazers View Once in a Lifetime Transit of Venus
It has been called one of the most important events in scientific history. The transit of Venus provided astronomers with invaluable information about our solar system.
View the Transit of Venus in Memphis
In a few days you'll be able to see the planet Venus in its rare trek across the sun.
New NASA Satellites to Increase Tornado Warning Lead Time
The GOES-R satellite series will revolutionize forecasting abilities, gathering data more quickly and increasing tornado warning lead time by 50%.
Vanderbilt Discovery Will Revolutionize Mosquito Repellent
Researchers from Vanderbilt University have discovered a compound that can change the behavior of a mosquito. It is expected to be 100% stronger than Deet.
Da Vinci Robot Makes Hysterectomy Surgery Less Painful
Painful hysterectomies are a thing of the past at Baptist Memorial Hospital.
Massive Solar Storm has Little Impact to Mid-South
The largest solar storm in five years enveloped earth Thursday morning.
Copyright & Trademark Notice | <urn:uuid:fe2de53a-dfb5-4e14-91a8-027f6106180d> | 2.640625 | 400 | Content Listing | Science & Tech. | 34.891577 | 1,212 |
Frequently Asked Questions About the AMO
- What is the AMO?
- How much of the Atlantic are we talking about?
- What phase are we in right now?
- What are the impacts of the AMO?
- How does the AMO affect rainfall and drought?
- How does the AMO affect Florida?
- How important is the AMO when it comes to hurricanes - in other words - is it one of the biggest drivers, or just a minor player?
- Does the AMO influence the intensity or the frequency of hurricanes (which)?
- If the AMO affects hurricanes - what drives the AMO?
- Can we predict the AMO?
- Is the AMO a natural phenomenon, or is it related to global warming?
The AMO is an ongoing series of long-duration changes in the sea surface temperature of the North Atlantic Ocean, with cool and warm phases that may last for 20-40 years at a time and a difference of about 1°F between extremes. These changes are natural and have been occurring for at least the last 1,000 years.
Most of the Atlantic between the equator and Greenland changes in unison. Some area of the North Pacific also seem to be affected.
Since the mid-1990s we have been in a warm phase.
The AMO has affected air temperatures and rainfall over much of the Northern Hemisphere, in particular, North America and Europe. It is associated with changes in the frequency of North American droughts and is reflected in the frequency of severe Atlantic hurricanes. It alternately obscures and exaggerates the global increase in temperatures due to human-induced global warming.
Recent research suggests that the AMO is related to the past occurrence of major droughts in the Midwest and the Southwest. When the AMO is in its warm phase, these droughts tend to be more frequent and/or severe (prolonged?). Vice-versa for negative AMO. Two of the most severe droughts of the 20th century occurred during the positive AMO between 1925 and 1965: The Dustbowl of the 1930s and the 1950s drought. Florida and the Pacific Northwest tend to be the opposite - warm AMO, more rainfall.
The AMO has a strong effect on Florida rainfall. Rainfall in central and south Florida becomes more plentiful when the Atlantic is in its warm phase, and droughts and wildfires are more frequent in the cool phase. As a result of these variations, the inflow to Lake Okeechobee - which regulates South Florida’s water supply - changes by 40% between AMO extremes. In northern Florida the relationship begins to reverse - less rainfall when the Atlantic is warm.
During warm phases of the AMO, the numbers of tropical storms that mature into severe hurricanes is much greater than during cool phases, at least twice as many. Since the AMO switched to its warm phase around 1995, severe hurricanes have become much more frequent and this has led to a crisis in the insurance industry.
The frequency of weak-category storms - tropical storms and weak hurricanes - is not much affected by the AMO. However, the number of weak storms that mature into major hurricanes is noticeably increased. Thus, the intensity is affected, but, clearly, the frequency of major hurricanes is also affected. In that sense, it is difficult to discriminate between frequency and intensity and the distinction becomes somewhat meaningless.
Models of the ocean and atmosphere that interact with each other indicate that the AMO cycle involves changes in the south-to-north circulation and overturning of water and heat in the Atlantic Ocean. This is the same circulation that we think weakens during ice ages, but in the case of the AMO the changes in circulation are much more subtle than those of the ice ages. The warm Gulf Stream current off the east coast of the United States is part of the Atlantic overturning circulation. When the overturning circulation decreases, the North Atlantic temperatures become cooler.
We are not yet capable of predicting exactly when the AMO will switch, in any deterministic sense. Computer models, such as those that predict El Niño, are far from being able to do this. What is possible to do at present is to calculate the probability that a change in the AMO will occur within a given future time frame. Probabilistic projections of this kind may prove to be very useful for long-term planning in climate sensitive applications, such as water management.
Instruments have observed AMO cycles only for the last 150 years, not long enough to conclusively answer this question. However, studies of paleoclimate proxies, such as tree rings and ice cores, have shown that oscillations similar to those observed instrumentally have been occurring for at least the last millennium. This is clearly longer than modern man has been affecting climate, so the AMO is probably a natural climate oscillation. In the 20th century, the climate swings of the AMO have alternately camouflaged and exaggerated the effects of global warming, and made attribution of global warming more difficult to ascertain. | <urn:uuid:db6b461b-6c59-4f67-946a-fce7822bcc9c> | 3.359375 | 1,038 | FAQ | Science & Tech. | 42.501329 | 1,213 |
New Study Counters Clovis Comet
“There’s no plausible mechanism to get airbursts over an entire continent,” said Boslough, a physicist. “For this and other reasons, we conclude that the impact hypothesis is, unfortunately, bogus.”
In a December 2012 American Geophysical Union monograph, first available in January, the researchers point out that no appropriately sized impact craters from that time period have been discovered, nor have any unambiguously “shocked” materials been found.
In addition, proposed fragmentation and explosion mechanisms “do not conserve energy or momentum,” a basic law of physics that must be satisfied for impact-caused climate change to have validity, the authors write.
Also absent are physics-based models that support the impact hypothesis. Models that do exist, write the authors, contradict the asteroid-impact hypothesizers.
The authors also charge that “several independent researchers have been unable to reproduce reported results” and that samples presented in support of the asteroid impact hypothesis were later discovered by carbon dating to be contaminated with modern material.
The Boslough Trail
His credibility was on the line in July 1994 when Eos, the widely read newsletter of the American Geophysical Union, ran a front-page prediction by a Sandia National Laboratories team, led by Boslough, that under certain conditions plumes from the collision of comet Shoemaker-Levy 9 with the planet Jupiter would be visible from Earth.
The Sandia team -- Boslough, Dave Crawford, Allen Robinson and Tim Trucano -- were alone among the world’s scientists in offering that possibility.
“It was a gamble and could have been embarrassing if we were wrong,” said Boslough. “But I had been watching while Shoemaker-Levy 9 made its way across the heavens and realized it would be close enough to the horizon of Jupiter that the plumes would show.” His reasoning was backed by simulations from the world’s first massively parallel processing supercomputer, Sandia’s Intel Paragon.
On the one hand, it was a chance to check the new Paragon’s logic against real events, a shakedown run for the defense-oriented machine. On the other, it was a hold-your-breath prediction, a kind of Babe Ruth moment when the Babe is reputed to have pointed to the spot in the center field bleachers he intended to hit the next ball. No other scientists were willing to point the same way, partly due to previous failures in predicting the behavior of comets Kohoutek and Halley, and partly because most astronomers believed the plumes would be hidden behind Jupiter’s bulk.
That the plumes indeed proved visible started Boslough on his own trajectory as a media touchstone for things asteroidal and meteoritic.
It didn’t hurt that, when he stands before television cameras to discuss celestial impacts, his earnest manner, expressive gestures and extraterrestrial subject matter make him seem a combination of Carl Sagan and Luke Skywalker, or perhaps Tom Sawyer and Indiana Jones.
Standing in jeans, work shirt and hiking boots for the Discovery Channel at the site in Siberia where a mysterious explosion occurred 105 years ago, or discussing it at Sandia with his supercomputer simulations in bold colors on a big screen behind him, the rangy, 6-foot-3 Sandia researcher vividly and accurately explained why the mysterious explosion at Tunguska that decimated hundreds of square miles of trees and whose ejected debris was seen as far away as London most probably was caused neither by flying saucers drunkenly ramming a hillside (a proposed hypothesis) nor by an asteroid striking the Earth’s surface, but rather by the fireball of an asteroid airburst -- an asteroid exploding high above ground, like a nuclear bomb, compressed to implosion as it plunged deeper into Earth’s thickening, increasingly resistive atmosphere. The governing physics, he said, was precisely the same as for the airburst on Jupiter.
Among later triumphs, Boslough was the Sandia component of a National Geographic team flown to the Libyan Desert to make sense of strange yellow-green glass worn as jewelry by pharaohs in days past. Boslough’s take: It was the result of heat on desert sands from a hypervelocity impact caused by an even bigger asteroid burst.
In the Present Case
In the Clovis case, Boslough felt that his ideas were taken further than he could accept when other researchers claimed that the purported demise of the Clovis civilization in North America was the result of climate change produced by a cluster of comet fragments striking Earth.
In a widely reported press conference announcing the Clovis comet hypothesis in 2007, proponents showed a National Geographic animation based on one of Boslough’s simulations as inspiration for their idea.
While this raised red flags to those already critical of the impact hypothesis, “I never said the samples were salted,” Boslough said carefully. “I said they were contaminated.”
That find, along with irregularities reported in the background of one member of the opposing team, was enough for Nova to remove the entire episode from its list of science shows available for streaming, Boslough said.
“Just because a culture changed from Clovis to Folsom spear points didn’t mean their civilization collapsed,” he said. “They probably just used another technology. It’s like saying the phonograph culture collapsed and was replaced by the iPod culture.” | <urn:uuid:d529a5b3-5f99-4957-a5f5-65cd5a0778c9> | 3.109375 | 1,164 | News Article | Science & Tech. | 25.339508 | 1,214 |
Cold Spring Harbor, NY -- A team co-led by neuroscientists at Cold Spring Harbor Laboratory (CSHL) has shed light -- literally -- on circuitry underlying the olfactory system in mammals, giving us a new view of how that system may pull off some of its most amazing feats.
It has long been known from behavioral experiments that rodents, for instance, can tell the difference between two quite similar odors in a single sniff. But in such instances, what precisely happens in the "wiring" leading from sensory neurons in the nose to specialized cells in the olfactory bulb that gather the signals and transmit them to the brain? How can this occur within the brief span of a single respiratory cycle -- one inhalation and one exhalation?
Using a new method of exploring this question, CSHL scientists, in collaboration with researchers at Harvard University and the National Centre for Biological Science in Bangalore, India, have assembled evidence suggesting that the olfactory bulb in mice is not merely a relay station between the nose and brain, as many have supposed. Their data, published today in Nature Neuroscience, indicates that "there are many more information output channels leaving the olfactory bulb [en route to the cortex] than the number of information types entering it," from sensory receptors in the nose.
This complexity in sensory coding, which the team speculates may help the brain rapidly make highly accurate odor distinctions, became evident when the team used beams of light to activate highly specialized cells within the olfactory bulb, as prelude to measuring their electrical activity during single respiratory cycles.
Using beams of light to trace the circuit
The first step of the investigation involved using genetic engineering to generate a line of mice whose sensory neurons expressed a gene borrowed from a kind of algae that makes them fire when beams of light are focused upon them.
Contact: Peter Tarr
Cold Spring Harbor Laboratory | <urn:uuid:c98ce817-b90b-4a59-9dd0-fbc93439ce4b> | 3.578125 | 385 | News Article | Science & Tech. | 11.826981 | 1,215 |
International Atomic Time
...generated by atomic clocks, which furnish time more accurately than was possible with previous astronomical means (measurements of the rotation of the Earth and its revolution about the Sun). International Atomic Time (TAI) is based on a system consisting of about 270 laboratory-constructed atomic clocks. Signals from these atomic clocks are transmitted to the International Bureau of...
...above, had been formed in 1958 at the U.S. Naval Observatory. Other local scales were formed, and about 1960 the BIH formed a scale based on these. In 1971 the CGPM designated the BIH scale as International Atomic Time (TAI).
| <urn:uuid:53b31c5e-51fa-454d-85a1-18cf32ca13ec> | 3.1875 | 187 | Knowledge Article | Science & Tech. | 44.955313 | 1,216 |
The declining areal cover and thickness of summer ice in the Arctic Sea continues, with 2012 shaping up to be another record-breaking year. Read the BBC summary here. These declines are of course predicted as a consequence of warming air temperatures due to anthropogenic global warming, but what really alarms me is the speed at which it is progressing. Basically, ice loss is accelerating every year, and has been since we noted a drastic speed-up in 2007. The consequences of an ice free Arctic will be far-reaching. First there are physical consequences. Losing the ice reduces the regional albedo, and creates a feedback that will continue the acceleration. Furthermore, thinner ice means that more light penetrates into the depths, warming the water beneath. The region’s biology is also changing, and this too will accelerate. Warmer, brighter waters will most likely increase biological productivity in the Arctic Sea, which will be good news for some human industries, but isn’t good news for all. The opening of the seaway, and increasing productivity, will change the ecology of the region, displacing many species, while allowing invasion from neighbouring waters in the northern Pacific and Atlantic. Geerat Vermeij and I wrote a predictive paper about this, oh, 4 years ago now. It is difficult to predict exactly what the consequences of those changes will be because of the problem’s complexity, but they will be large. And finally, of course, opening the sea exposes many, many resources of interest to humans, including fossil fuel deposits and shipping lanes. Let the wrangling begin. One note, though, is that the Arctic Sea ice melting will not contribute significantly to sea level rise; that ice is already in the ocean.
Our planet continues to change in response to global warming, and it seems that some of those changes are accelerating. I cannot be certain, and we will only know this in hindsight, but in my opinion we are beginning to cross thresholds. The time for discussion is long past. Now is the time for increased mitigation and implementation of adaptive strategies. I don’t think that we are yet at the point where we need to consider drastic measures, such as extreme geoengineering. But, in the same way that a failure to agree upon and implement effective mitigating measures has brought us to this point, we may well be on our way to addressing this problem with technologically challenging, ecosystem-altering, economically difficult and socially painful actions. | <urn:uuid:6ab4ce27-7611-48b3-a399-c7b486534524> | 2.921875 | 497 | Personal Blog | Science & Tech. | 39.485518 | 1,217 |
- Does global change increase the success of biological invaders?
Trends in Ecology & Evolution, Volume 14, Issue 4, 1 April 1999, Pages 135-139
Jeffrey S. Dukes and Harold A. Mooney
Abstract: Biological invasions are gaining attention as a major threat to biodiversity and an important element of global change. Recent research indicates that other components of global change, such as increases in nitrogen deposition and atmospheric CO2 concentration, favor groups of species that share certain physiological or life history traits. New evidence suggests that many invasive species share traits that will allow them to capitalize on the various elements of global change. Increases in the prevalence of some of these biological invaders would alter basic ecosystem properties in ways that feed back to affect many components of global change.
Abstract | Full Text | PDF (101 kb)
- Roles of parasites in animal invasions
Trends in Ecology & Evolution, Volume 19, Issue 7, 1 July 2004, Pages 385-390
John Prenter, Calum MacNeil, Jaimie T.A Dick and Alison M Dunn
Abstract: Biological invasions are global threats to biodiversity and parasites might play a role in determining invasion outcomes. Transmission of parasites from invading to native species can occur, aiding the invasion process, whilst the ‘release’ of invaders from parasites can also facilitate invasions. Parasites might also have indirect effects on the outcomes of invasions by mediating a range of competitive and predatory interactions among native and invading species. Although pathogen outbreaks can cause catastrophic species loss with knock-on effects for community structure, it is less clear what impact persistent, sub-lethal parasitism has on native-invader interactions and community structure. Here, we show that the influence of parasitism on the outcomes of animal invasions is more subtle and wide ranging than has been previously realized.
Abstract | Full Text | PDF (130 kb)
- Understanding the long-term effects of species invasions
Trends in Ecology & Evolution, Volume 21, Issue 11, 1 November 2006, Pages 645-651
David L. Strayer, Valerie T. Eviner, Jonathan M. Jeschke and Michael L. Pace
Abstract: We describe here the ecological and evolutionary processes that modulate the effects of invasive species over time, and argue that such processes are so widespread and important that ecologists should adopt a long-term perspective on the effects of invasive species. These processes (including evolution, shifts in species composition, accumulation of materials and interactions with abiotic variables) can increase, decrease, or qualitatively change the impacts of an invader through time. However, most studies of the effects of invasive species have been brief and lack a temporal context; 40% of recent studies did not even state the amount of time that had passed since the invasion. Ecologists need theory and empirical data to enable prediction, understanding and management of the acute and chronic effects of species invasions.
Abstract | Full Text | PDF (587 kb)
Current Biology, Volume 22, Issue 19, R819-R821, 9 October 2012
Feature
- Thousands of species have invaded new territories in recent decades, often aided by global trade and man-made habitat change. While many remain harmless, some may cause serious damage. Therefore, we need improvements in surveillance and in our understanding of which factors make a successful invasion possible. Michael Gross reports. | <urn:uuid:9978e2e6-d907-421a-abd0-ef43088fb2b5> | 2.6875 | 706 | Content Listing | Science & Tech. | 26.135653 | 1,218 |
Dartmouth College Office of Public Affairs • Press Release
Posted 5/10/10 • Media Contact: Office of Public Affairs (603) 646-3661
By using entire islands as experimental laboratories, two Dartmouth biologists have performed one of the largest manipulations of natural selection ever conducted in a wild animal population. Their results, published online on May 9 by the journal Nature, show that competition among lizards is more important than predation by birds and snakes when it comes to survival of the fittest lizard.
Podcast: Lizards and Evolution
"When Tennyson wrote that nature is 'red in tooth and claw', I think the image in his head was something like a Discovery Channel version of a lion chasing down a gazelle" said Ryan Calsbeek, an assistant professor of biology at Dartmouth College and a co-author of the study. "While that may often be the case, intense natural selection can also arise through competition. Sometimes, death by competitor can be more important than death by predator".
To show this effect, the researchers covered multiple small islands in the Bahamas with bird-proof netting to keep predatory birds at bay. Other islands were left open to bird predators, and on still other islands, the researchers added predatory snakes to expose the lizards to both bird and snake predators. Next, they tracked the lizards over the summer to record which lizards lived and which died on the different islands.
"We found repeated evidence that death by predators occurred at random with respect to traits like body size and running ability" said Robert Cox, a post-doctoral researcher at Dartmouth and Calsbeek's co-author. "But we also found that increasing the density of lizards on an island consistently created strong natural selection favoring larger size and better running ability."
Calsbeek and Cox explain that in high-density populations, the intensity of competition for food, space, and other resources is likely to increase. In turn, this increased competition favors the biggest, toughest lizards on the island.
"The lizards play for keeps," said Calsbeek, "and there's no room for the meek when times get tough".
Though the researchers note that competition will not always be more important than predation in other species or in different environments, they emphasize that their study has broad social implications because it demonstrates the ability to conduct evolutionary experiments in natural animal populations.
"Many people are skeptical of evolutionary biology because they perceive it as a purely historical science that can't be tested experimentally. Here, we're providing a real experimental test of natural selection as it happens in the wild. That's an exciting way for us to advance the public's perception of evolution" said Calsbeek.
Ryan Calsbeek (left) and Bob Cox (photo by Joseph Mehling '69)
Dartmouth has television (satellite uplink) and radio (ISDN) studios available for domestic and international live and taped interviews. For more information, call 603-646-3661 or see our Radio, Television capability webpage.
Last Updated: 5/12/10 | <urn:uuid:68f61f8e-08a4-4451-87a3-616cbebb6771> | 3.046875 | 661 | News (Org.) | Science & Tech. | 31.225908 | 1,219 |
The NetBeans Open Source Story
One of the most interesting places where open source and Java technology overlap is a little integrated development environment (IDE) known as NetBeans. NetBeans' path to open-sourcedom was a circuitous one. In 1996, a group of Czech students set out to author an IDE in pure Java. The idea was to take the best features of Delphi and create an easy-to-use, cross-platform environment where code could be edited, tested, and debugged. They called their software Xelfi.
Enter Czech entrepreneur-engineer Roman Stanek. Encouraged by meeting with Internet visionary and venture capitalist Esther Dyson, Stanek was on the lookout for a good idea to capitalize on. He stumbled across the Xelfi group on the Web, struck a deal with the developers, founded NetBeans with his own money, and then received funding from Dyson.
The company remained lean, releasing several versions of the IDE and supporting the latest in Java technology, such as Swing, Servlets, JDBC, JavaServer Pages (JSP), and XML. The IDE was built to be compact, robust, and easy to use and install. Sales offices were opened in Silicon Valley. But the company was hard-pressed to turn big profits, given the economics of software tools and Java's client-side behavior. Because the product was written in Java, it was slower and required more memory than native-code IDEs. Also, other development tool vendors, most notably Microsoft, could afford to give their products away cheaply, or even for free.
Meanwhile, Sun Microsystems, creator of Java, had no real Java development tools to speak of. Most users found Sun's Java Workshop and Java Studio products lacking and difficult to work with. In order to promote the use of Java and make it a universal language, Sun had a strong reason to support a powerful Java IDE that ran on any platform and could easily be extended. The PC had some strong IDEs for Java development, such as Borland's JBuilder, Symantec's Visual Cafe (now owned by WebGain), IBM's Visual Age, Oracle's JDeveloper, and Microsoft's now-discontinued Visual J++. But there were few good Java-centric environments for Linux, Solaris, or other operating systems.
So Sun bought two leading Java tool companies: Forte and NetBeans. They rolled together Forte's SynerJ IDE and NetBeans and released a pretty decent product called Forte for Java Community Edition. They gave Forte away for free. They also released pricier versions: an Internet Edition for developing back-end software on a single Web server and an Enterprise Edition for deploying large, distributed Java applications. But while the Forte products were popular, they didn't exactly spread like wildfire. And it was costly to continually add all the features that developers were requesting.
And so, in June 2000, Sun decided to take every NetBeans component (except for the browser and compiler) open source. With the help of a company called Collab.net, Sun deployed a place for developers to work called NetBeans.org. The license is called the Sun Public License (SPL), a minor variation of the open-source Mozilla Public License (MPL) from Netscape.
NetBeans code continues to form the basis for Sun's official suite of Forte products. Essentially, Sun "productizes" major NetBeans drops by extensively testing them and branding them with items such as official Sun splash screens. They then provide Forte's users with full technical support.
Setting It Free
Sun's motivation for open sourcing NetBeans is clear. Letting developers get their hands dirty with the code will make for a richer IDE that will work on many platforms and contain features and a user interface that developers actually find useful. Such a product would, so goes the theory, create a wider base of more dedicated Java programmers. Allowing the community to work on NetBeans code is also a smart political move on Sun's part, helping to build trust and dedication among Java developers.
Many people have their eyes on NetBeans as a possible indicator of whether Sun will embrace open source for other Java products, or even components of the Java language itself. Along with the usual open-source debate, such as whether letting too many cooks stir the pot will add bugs or make for higher-quality product, there are lots of other questions:
Can Forte compete with commercial IDEs, which are usually faster and more stable? So far, it seems to be keeping pace. Forte's ability to quickly weave together the latest Java technologies makes it a favorite among both reviewers and users. And the price is definitely competitive. But it's unclear whether Forte's target audience, hard-core developers, enjoys using IDEs at all: many are happy using Notepad or vi or other text editing programs.
But the real question is: how many independent developers will actually devote their time and effort to working on NetBeans for no pay? Looking at the latest list of contributors brings up only 60 different developers, most of whom are on the Sun payroll. Certainly, having access to the source code is incredibly useful for companies hoping to write plug-ins or attachments to Forte. But for the average Java developer, there's little reason other than sheer curiosity to waste much time with NetBeans.
All that being said, NetBeans is an impressive collection of code. The open architecture allows third-party vendors to develop plug-ins such as No Magic's Magic Draw UML diagram designer or Metamata's Debug utility. Sun itself has created a cool J2ME Wireless Toolkit that plugs into Forte and makes it easy for developers to create applications for Java-based cell phones.
NetBeans' main modules include the core, the source editor, a debugger, a form editor, Java support, open IDE plug-in support, Web support, and other tools. Some experimental modules (modules that are currently being worked on but are not necessarily stable) include integration with the Ant build tool, support for C or C++, an Emacs text editor, a way to view or edit Java bytecode, Jini support, scripting language support, a wizard to help make classes serializable, and native filesystem support. There's also a lengthy wish list for future modules requesting support for things like ArgoUML, the Retrologic obfuscator, to-do lists, Pizza, XML beans, JUnit testing, JNI, Jikes, image editing, JavaCC, Java3D, and much more.
It's easy to join the development effort. As with all open source products, you can help a lot just by trying out Forte or the latest NetBeans drop and writing detailed bug reports when things go awry. You can also request features or modules that you think sound cool without having to write them yourself. Your dreams might be made reality.
Of course, the biggest component of open source is the source code itself. You can either download the latest archive of code or see the code online. If you want to get serious about NetBeans, though, you'll need to use the Concurrent Versions System (CVS) for source control, an open standard for storing and editing source code.
The source for NetBeans can be found in a CVS database with directories for every top-level project and subdirectories for individual components. You can download a CVS client, or, surprise-surprise, use CVS support via a stable version of NetBeans itself. You can log in without a password to get anonymous read-only access to any code file.
Often, you'll just need to build the specific component you're interested in working on. The component can then "plug in" to a stable NetBeans build. You can either build by hand using the latest version of the Java Development Kit, or use a custom make-system such as Ant.
The first thing you'll want to play around with is bug fixing. Find a module you're interested in and have a look at the Bugzilla database. If you find the solution, either post your idea for a fix or write a code patch and post it to the bug database.
Really motivated programmers may want to create their own modules from scratch. One can write a module according to the NetBeans OpenAPI and sell it, give it away, whatever. If you want your module part of the open source effort, however, send the finished code to firstname.lastname@example.org. An archive of your message, as well as any responses, is available on the NetBeans site. If your module is accepted, you'll become the proud owner of the component and can oversee its development. It will be given its own CVS directory, bug database category, mailing list, and series of Web pages. You can use the Sun Public License or any license you want (General Public License, Lesser General Public License).
If you're ready to contribute seriously to a piece of the NetBeans code, just write the owner of that module. He or she may grant you your own CVS account, with permission to edit or add code to a specific branch of the source database.
Open to Change
In general, one of the most important tenets of open source code writing is clear and constant communication. You should always post a message to email@example.com to be sure that whatever you're working on isn't already being developed, to get help from those more experienced in various modules, to find coding partners, or anything else.
If you do decide to help with NetBeans, not only will you experience the joy of having worked on a popular piece of software, but you may help inspire Sun to bring more and more of their Java products into the open source world.
About the Author
David Fox is vice president of Games at PlayLink, Inc. He's also the author of numerous books and articles about cyberculture and technology. | <urn:uuid:7186f7c4-4af9-4e17-8a91-1a13be2ced9e> | 2.765625 | 2,067 | Knowledge Article | Software Dev. | 44.215419 | 1,220 |
Radio-Collaring Elephants in Namibia
with Keith Leggett
Keith Leggett radio-collars enormous elephants in the Namibian desert to find out where they range and roam-and gets help from a BBC film crew.
Attaching a radio-collar to a 5-ton animal is no easy task. Especially if that animal, say, an elephant, has no interest in cooperating and does not necessarily turn up where you expect it to. This is Keith Leggett's challenge as a researcher with the Northwestern Namibia Desert-dwelling Elephant and Giraffe Project in Namibia, Africa.
With the help of Earthwatch volunteers since 2002, Leggett has been radio-collaring and tracking these enormous pachyderms in the Namibian desert to find out more about their home ranges and travel routes. Why? These elephants don't make very good neighbors - they drink upwards of 30 gallons of water per day, even in the dry season when water is scarce, and are extremely destructive eaters, pushing down and trampling trees and anything in their paths. Not surprisingly, elephants and people in this area have trouble coexisting.
But, Namibian elephants are of great interest to tourists, and this may be the key to their salvation in this country that has been described as "the land that God created in anger." "Probably a pretty fair description of the environment," says Leggett. Understanding the routines and ecology of elephants is the first step in helping them coexist with humans.
Last February, Leggett got the chance to capture and collar an elephant in front of BBC cameras. This is his report on how it went:
"I went up to the bush two days before the collaring was due and met the BBC team and enjoyed them straight off. We found the mature bull (WKM-14) but the younger bull was nowhere to be seen. After searching for two days and not finding the younger bull it was decided to go with the older mature male. Everyone had arrived in camp by the morning of the proposed collaring so we went straight out to collar the bull. The collaring was absolutely textbook, couldn't think of a more perfect one. The collar went straight under the bull without any hassles, the bull fell in an open area and he responded perfectly to the drugs...perfect!
"On top of all that he moved straight into the floodplains of the Hoanib River, a move none of the previously collared elephants had undertaken. It will be very interesting to see his movements when he comes into musth especially in response to the other dominant bull in the area.
"The film crew themselves were great fun, the only drawback was doing some takes 3 or 4 times... don't know how actors do it. I simply don't have the patience for it. Though they were very good when we were doing the collaring and stayed in the background and out of the way. Mind you it will probably work out to be about 2 minutes of airtime, but at least I have another collar."
In the last three years of leading Earthwatch volunteers into the Namibian desert, Leggett has tracked, observed, and collared numerous elephants, and sends our office back emails from his trips, such as this report from May 18, 2005:
"The first night we were in Purros, 3 elephants walked straight past camp. It appeared that we were going to have a good trip after all, or so we thought. The next two days was spent in the fruitless search for elephants... not another hide nor hair was observed...it was decided to head to the Hoanib River.
"The first we observed on arriving in the Hoanib River was a herd of 5 elephants with one of the cows having a calf of about 3 months of age. He is still totally uncoordinated and lurches from one misadventure to another. The previous calf born in the west was 12 months ago and so a new calf is still a novelty and most of the herd females take turns in guarding and guiding him around. The minders are very vigilant and when the older calf came to play the older animals saw him off... quite amusing at times. The mothers appears to play only a minor role in the overall rearing of the individual, though they are usually doing the nursing though I have seen even other females nurse young periodically. The group takes responsibility for the offspring.
"Later that day we saw the rest of the herd of 14 so it was hog heaven for 2 days and then the elephants disappeared again. It appears as though they are doing circuits at this time of year wandering between feeding areas. They are always moving never slopping for long in one area.
"Overall, the volunteers were excellent and put up with the vehicle breakdowns, lack of elephants and then the total abundance, then absence again with a resigned tolerance... they were also pretty good fun. The west has dried out significantly and the days were very hot, but the nights were cool. There has been significant grass growth this year with the good rains and the animals are all looking in extremely good shape. Springbok, gemsbok and ostrich were abundant and while the elephants have spread pretty thin the rest of the wildlife has collected in feeding aggregations.
"After a shower, a shave and some relaxation time, I feel almost human again..."
Leggett's study is one of the first to scientifically document the home range and movements of these massive animals. Preliminary findings recently published in African Zoology show that elephant movements range from 50 to 625 kilometers (31 to 388 miles), over a period of up to five months, in response to available water and vegetation.
In June, July, and August of 2006, Earthwatch teams will help Leggett track this animal, as well as up to a dozen others that he has radio-collared. They will also identify individual elephants in the field, using distinguishing tusk characteristics, ear scars, and footprint patterns, and observe their behavior. This information will help conservation agencies better manage Namibia's unique desert elephants. | <urn:uuid:015f4b05-56f2-4463-8bdd-8e038a204a0f> | 3 | 1,253 | Nonfiction Writing | Science & Tech. | 57.921608 | 1,221 |
In this lesson our instructor gives an introduction to conditional loops. First, he discusses the while loop, looping over arrays, and array traversal functions. Then he talks about looping over indexed and associative arrays. He also lectures on looping over arrays using list() and each(), control structure scope, and coding conventions. He ends the lesson with a helpful homework challenge.
A while loop is a conditional control structure that executes a statement group repeatedly as long as its specified test condition remains TRUE. The test condition's value is compared to TRUE before each execution of the loop's statement group.
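For example, a minimal sketch of a counting loop (the variable name and the limit are invented for illustration):

    <?php
    $count = 1;                // initialize the loop variable
    while ($count <= 5) {      // the test condition is checked before every pass
        echo $count . "\n";    // the statement group runs while the condition is TRUE
        $count++;              // update the variable so the condition eventually becomes FALSE
    }
    // Output: 1 2 3 4 5 (one number per line)
    ?>

If the condition is already FALSE on the very first test, the statement group is never executed at all.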
'Looping over arrays' is a common programming task, and PHP provides several built-in functions for doing so. They work on the basis of an array cursor, which is a 'marker' for the 'current' array element:
- current() – returns the value of the array element at the current array cursor position
- key() – returns the key of the array element at the current array cursor position
- next() – advances the array cursor by one
- prev() – moves the array cursor back by one
- reset() – sets the array cursor to the 1st element
- end() – sets the array cursor to the last element
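A short sketch of these cursor functions in action (the array and its contents are made up for this example):

    <?php
    $colors = array('red', 'green', 'blue');

    echo current($colors);  // 'red'   - value at the cursor, initially the 1st element
    echo key($colors);      // 0       - key at the cursor position
    echo next($colors);     // 'green' - cursor advanced by one
    echo end($colors);      // 'blue'  - cursor moved to the last element
    echo prev($colors);     // 'green' - cursor moved back by one
    reset($colors);         // cursor set back to the 1st element
    echo current($colors);  // 'red'
    ?>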
The list() construct and the each() function are also used to loop over arrays.
list() is used to assign values to multiple variables at a time from an array.
each() returns key/value information for the current array element in an array and advances the array cursor by one. It returns FALSE if the end of the array is reached.
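Put together, this gives the classic array-looping idiom sketched below (the array contents are invented for illustration; note that each() was deprecated in PHP 7.2 and removed in PHP 8.0, where foreach is used instead):

    <?php
    $ages = array('Alice' => 30, 'Bob' => 25);

    reset($ages);                              // start with the cursor at the 1st element
    while (list($name, $age) = each($ages)) {  // each() returns FALSE once the end is reached
        echo "$name is $age\n";
    }
    // Output:
    // Alice is 30
    // Bob is 25
    ?>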
Unlike some programming languages, PHP does not have 'block-level' scope for control structures: a variable created inside a control structure remains visible after it.
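A quick sketch of what that means in practice (the variable names are invented for this example):

    <?php
    $x = 10;
    if ($x > 5) {
        $message = 'big';   // $message is first assigned inside the if block
    }
    echo $message;          // prints 'big' - the variable is still visible after the block
    ?>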
Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture. | <urn:uuid:ed63306a-0c96-488e-bfb0-b477bcdd3fa5> | 3.90625 | 401 | Truncated | Software Dev. | 39.844538 | 1,222 |
Department of Environment and Conservation (NSW), 2005
ISBN: 1 7412 2144 7
7 Previous Recovery Actions
- 7.1 Survey
- 7.2 Profile and environmental impact assessment guidelines
- 7.3 Establishment of a recovery team
- 7.4 Community awareness initiatives
- 7.5 In-situ protection
During the preparation of this recovery plan, 18 D. sp. C Illawarra sites were surveyed by the DEC with the assistance of Anders Bofeldt (Wollongong Botanic Gardens) and community volunteers. Habitat details, threats and observations of flowering and fruit production were recorded at each surveyed site.
A species profile and environmental impact assessment guidelines have been prepared for D. sp. C Illawarra (Appendix 4) to assist public authorities, community groups and private landholders in the conservation of the species. These documents also aim to assist consent and determining authorities in the statutory assessment of potential impacts on the species.
The Illawarra Regional Threatened Flora Recovery Team was established in June 2001 to coordinate the recovery planning for six plant species which occur in the Illawarra region and are listed as endangered at a State and National level. These species are D. sp. C Illawarra, Irenepharsus trypherus, Zieria granulata, Pterostylis gibbosa, Cynanchum elegans and Pimelea spicata. Representatives of the public authorities that are involved in the planning and/or management of remnant vegetation in the region are present on the recovery team, as are representatives of various regional organisations and community groups.
- An information brochure has been prepared and distributed to raise awareness of the six “Threatened Plants of the Illawarra” including D. sp. C ‘Illawarra’.
- In June 2002, the Australian Network for Plant Conservation and Wollongong Council hosted a workshop to raise awareness of issues relating to the conservation of threatened flora in the Illawarra. D. sp. C ‘Illawarra’ was one of the subject species of that workshop.
- In November 2002, Landcare Illawarra hosted a workshop to raise awareness of D. sp. C ‘Illawarra’ and five other endangered flora species in the Illawarra.
- The DEC has initiated a program of meeting landholders with D. sp. C ‘Illawarra’ on their property to discuss sympathetic management of the species and the opportunities for entering into conservation agreements.
- A Voluntary Conservation Agreement (VCA) under the NP&W Act has been signed to protect habitat for the species at Willow Creek (Dc21). A Plan of Management for the site has been prepared.
- A Property Agreement under the Native Vegetation Conservation Act 1997 has been signed to protect habitat for the species at Marshall Mount (Dc3). Cattle have also been temporarily removed from this property until the installation of watering points and fencing of native vegetation has been completed (A. Knowlson, pers. comm.).
- Threat abatement works including fencing to exclude livestock and bush regeneration are being implemented by private landholders and the DEC at four sites (Dc17, Dc21, Dc28 and Dc41). | <urn:uuid:a1045ccb-eb51-4cc4-a94d-dc699accc8f1> | 2.828125 | 685 | Knowledge Article | Science & Tech. | 36.642847 | 1,223 |
1971: South Pole total ozone data before the ozone hole existed
1986: First year of continuous balloon soundings at the South Pole
1993: Record low total ozone related in part to the Pinatubo volcanic eruption (see below)
2003: This year is an example of contemporary ozone depletion at the South Pole while stratospheric chlorine remains near its peak. Stratospheric chlorine is not expected to increase substantially in the future and interannual variations in the magnitude of ozone loss will be determined by atmospheric dynamics and temperature variations, the latter affecting the occurrence of polar stratospheric clouds necessary for ozone depletion. However, a major volcanic eruption during the next 20-30 years, while stratospheric chlorine levels are high, will result in an additional ozone loss of about 20 DU in mid-October, as observed in 1993.
The South Pole balloon measurements of the vertical profile of ozone can be integrated to obtain the total column ozone. During most soundings, the balloon obtains altitudes of about 30 km and does not sample the atmosphere above this altitude where 10% or less of the ozone resides. The amount to add on above the highest altitude obtained by the balloons can be estimated from ozone climatology obtained by the NOAA SBUV satellite instrument. Using this procedure, total ozone can be measured at the South Pole even in darkness when instruments which use ultraviolet radiation to measure ozone are inoperative. The total ozone determined in this way is shown in the figure for several years during the July 1 to December 31 period, which includes the spring when sunlight returns to the South Pole and the ozone hole develops. On October 6,1993, the lowest total ozone value ever recorded on Earth (89 DU with an uncertainty of 5% using the SBUV extrapolation) was observed. This was mainly due to chlorine-catalyzed heterogeneous chemistry on volcanic aerosol particles from the Pinatubo eruption in the lower stratospheric 10-15 km region where it is normally not cold enough to form polar stratospheric clouds in great abundance. Antarctic total ozone is not expected to reach these extremely low values until another major volcanic eruption occurs during the period when stratospheric chlorine remains high (the next 20-30 years). | <urn:uuid:cd6b19be-644a-46f2-8cee-dd406fe2113b> | 3.625 | 448 | Knowledge Article | Science & Tech. | 15.2595 | 1,224 |
How to get the chameleon among molecules to settle on a particular "look" has been discovered by RUB chemists led by Professor Dominik Marx. The molecule CH5+ normally cannot be described by a single rigid structure, but is dynamically flexible. By means of computer simulations, the team from the Centre for Theoretical Chemistry showed that CH5+ takes on a particular structure once hydrogen molecules are attached to it. "In this way, we have taken an important step towards understanding experimental vibrational spectra in the future", says Dominik Marx. The researchers report in the journal "Physical Review Letters".
In the CH5+ molecule, the hydrogen atoms are permanently on the move
The superacid CH5+, also called protonated methane, occurs in outer space - where new stars are formed. Researchers already discovered the molecule in the 1950s, but many of its features are still unknown. Unlike conventional molecules in which all the atoms have a fixed position, the five hydrogen atoms in CH5+ are constantly moving around the carbon centre. Scientists speak of "hydrogen scrambling". This dynamically flexible structure has been explained by the research groups led by Dominik Marx and Stefan Schlemmer of the University of Cologne as part of a long-term collaboration (we reported in July 2005 and March 2010: http://www.pm.ruhr-uni-bochum.de/pm2005/msg00209.htm, http://aktuell.rub.de/pm2010/msg00066.htm). Marx's team now wanted to know if the structure can be "frozen" under certain conditions by attaching solvent molecules – a process called microsolvation.
Microsolvatation: addition of hydrogen molecules to CH5+ one by one
To this end, the chemists surrounded the CH5+ molecule in the virtual lab with a few hydrogen molecules (H2). Here, the result is the same as when dissolving normal ions in water: a relatively tightly bound shell of water molecules attaches to each ion in order to then transfer individual ions with several solvent molecules bound to them to the gas phase. To describe the CH5+ hydrogen complexes, classical ab initio molecular dynamics simulations are not sufficient. The reason is that "hydrogen scrambling" is based on quantum effects. Therefore Marx's group used a fully quantum mechanical method which they developed in house, known as ab initio path integral simulation. With this, the essential quantum effects can be taken into account dependent on the temperature.
Hydrogen molecules give the CH5+ molecule "structure"
The chemists carried out the simulations at a temperature of 20 Kelvin, which corresponds to -253 degrees Celsius. In the non-microsolvated form, the five hydrogen atoms in the CH5+ molecule are permanently changing positions even at such low temperatures - and entirely due to quantum mechanical effects. If CH5+ is surrounded by hydrogen molecules, this "hydrogen scrambling" is, however, significantly affected and may even come to a complete halt: the molecule assumes a rudimentary structure. How this looks exactly depends on how many hydrogen molecules are attached to the CH5+ molecule. "What especially interests me is whether superfluid helium - like the hydrogen molecules here - can also stop hydrogen scrambling in CH5+," says Marx. Experimental researchers use superfluid helium to measure high-resolution spectra of molecules embedded in such droplets. For CH5+ this has so far not been possible. In the superfluid phase, the helium atoms are, however, indistinguishable due to quantum statistical effects. To be able to describe this fact, the theoretical chemists at the RUB spent many years developing a new, even more complex path-integral-based simulation method that has recently also been applied to real problems.
Researchers at the RUB explore the influences of microsolvation on small molecules in the gas phase and in helium droplets in the Excellence Cluster "Ruhr Explores Solvation" RESOLV (EXC 1069), which was approved by the German Research Foundation in June 2012.
A. Witt, S. Ivanov, D. Marx (2013): Microsolvation-Induced Quantum Localization in Protonated Methane, Physical Review Letters, doi: 10.1103/PhysRevLett.110.083003
A figure related to this press release can be found online at: http://aktuell.ruhr-uni-bochum.de/pm2013/pm00080.html.en
Prof. Dr. Dominik Marx, Centre for Theoretical Chemistry, Department of Chemistry and Biochemistry at the Ruhr-Universität, 44780 Bochum, Germany, Tel. +49/234/32-28083, E-Mail: firstname.lastname@example.org
Click for more
Animations and background information on pure CH5+
Solvation Science@RUB (RESOLV)
Editor: Dr. Julia Weiler
AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert! system. | <urn:uuid:1b6a5518-f14f-4416-bbfd-7f1a4f8d6d21> | 2.84375 | 1,072 | News (Org.) | Science & Tech. | 39.376234 | 1,225 |
Pioneering Periodic Arrangements of the Elements
Laws of Triads and of Octaves
Early in the 19th cent., a number of chemists had noticed certain relationships between the properties of elements and their atomic weight. In 1829 J. W. Döbereiner stated that there existed some three-element groups, or triads, in which the atomic weight of the middle element was the average of the other two and the properties of this element lay between those of the other two. For example, calcium, strontium, and barium form a triad; lithium, sodium, and potassium, another. The English chemist J. A. Newlands found (1863–65) that if the elements are listed according to atomic weight starting with the second, the 8th element following any given element has similar chemical properties, and so does the 16th. This became known as the law of octaves. About the same time, A. E. de Chancourtois arranged the elements according to increasing atomic weight in the form of a vertical helix with eight elements in a turn, so that elements having similar properties fell along vertical lines.
D. I. Mendeleev was the first to state the periodic law close to its present form. He proposed in 1869 that the properties of elements are periodic functions of the atomic weight and grouped the elements accordingly in a periodic system. Working independently and not aware of Mendeleev's work, Lothar Meyer arrived at a similar system, publishing his results about a year after Mendeleev's. When Mendeleev devised his periodic table a number of positions could not be fitted by any of the then known elements. Mendeleev suggested that these empty spaces represented undiscovered elements and by means of his system accurately predicted their general properties and atomic weights.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
See more Encyclopedia articles on: Chemistry: General | <urn:uuid:48d2b0ec-0881-4853-8b34-ed576cb5e8c1> | 3.96875 | 432 | Knowledge Article | Science & Tech. | 42.815686 | 1,226 |
The Night Skies of August: A Convergence of Planets and a Shower of Meteors
by Leo Enright
In the month of August, with longer nights than in July, we have more time to enjoy the view of the great Summer Milky Way, as well as the famous meteor shower of mid-August. This year we have the added bonus of the two brightest planets steadily converging in the western evening sky.
At the beginning of the month, sunset in this area is at about 8:30 p.m. Eastern Daylight Time, and evening astronomical twilight ends at about 10:30 p.m. By the end of August, sunset will be at about 7:45 p.m., with twilight ending at about 9:30 p.m.
Late summer sky watchers who are fortunate enough to have dark, rural skies can really get to know the Summer Milky Way and the constellations within it. Just let your late-evening gaze sweep from the northeast to the southern part of the sky. In the northeast, entirely within the Milky Way, you see Cassiopeia, in the shape of a very large letter “W”. High in the east you notice Cygnus, the Swan, also called The Northern Cross from the shape of its star pattern, and down in the south, in the richest and densest part of the Milky Way, is Sagittarius, whose star pattern forms the shape of a teapot with the handle to the left and the spout to the right. This Summer Milky Way is really one arm of our home galaxy, the Milky Way Galaxy, and our immense solar system, with the Sun, its nine planets and all their many moons, is really just a small dot among the 200 billion stars that make up this galaxy, which is over 250,000 light-years in diameter!
During August we also have a chance to see the famous Andromeda Galaxy, the only other galaxy that can be seen with the unaided eye. This close neighbour of our galaxy is one of the largest members of the “local group” of over a dozen galaxies, and it is only (!) about 2 million light-years away. To find it, locate the “W” of Cassiopeia well up in the northeastern sky at about 11:00 p.m. Trace a line from the right side of the “W” down and to the right toward the eastern horizon. About half way along that line, you should see a “faint fingerprint” on the sky. That is it. Remember that what you are seeing is another whole galaxy made up of 400 billion stars, and that the light from them has taken over 2 million years to reach your eyes!
Among the bright planets, the two brightest of all do a great converging act this month. Brilliant Venus in early August is easily found low in the western sky between 30 minutes and 90 minutes after sunset. The second brightest planet, Jupiter, is somewhat higher but in the southwestern sky. At the beginning of August, they are 30 degrees apart, that is 3 times the width of a fist held at arm’s length. Each evening they appear closer to each other by 1 degree, that is, by about the width of a person’s little fingernail held at arm’s length. Remarkably, at dusk on August 31, these two brightest planets will appear almost on top of each other. It should be a fine reminder of the Venus-Jupiter convergence of February, 1999. Of course, they are not physically near each other, since Jupiter, with an orbit that is far outside that of Earth, is actually 5 times further way from us than Venus. The third evening planet, reddish Mars, may be seen rising in the east at about midnight in early August, and thence rising 2 to 3 minutes earlier each evening, until by month’s end it will be seen about 10:30 p.m. Mars is gradually brightening, and, if inspected in a small telescope, appears larger over the course of the month. Saturn and Mercury, which were seen low in the western evening sky in the month of June, are not visible in the first half of August, but in the last two weeks of the month they may be seen very low in the eastern sky between 60 minutes and 30 minutes before sunrise. As was the case in the western sky two months ago, they are both again below Castor and Pollux, the brightest stars in the constellation Gemini. Saturn appears above; Mercury is below and to its left. Over the last 10 days of the month, Mercury becomes considerably brighter than Saturn, but remember that a good view of the eastern horizon will be needed to see this planetary pairing in the morning sky.
Several beautiful lunar-planetary arrangements are to be seen this month. On the evening of August 7, do not miss the sight of the slim crescent moon just to the right of Venus and low in the western sky about 40 to 50 minutes after sunset. At the same time on the evenings of the 8th and 9th, the crescent moon will be seen marching between the converging planets Venus and Jupiter, and on the 10th it will be to the left of Jupiter. At about midnight on August 24th the rising moon will appear to the left of Mars, and again about midnight and after on August 25th it will appear close to the Pleiades star cluster. In the morning sky about 40 minutes before sunrise on August 31st, the thin waning crescent moon will appear above Saturn, and at the same time on September 1st, the very thin crescent moon will appear below Saturn.
With the famous Perseid Meteor Shower reaching its absolute peak during the day of August 12, Thursday and Friday August 11th and 12th should be almost equally good for observing this annual event which has received its name because these meteors (sometimes called “shooting stars”) all seem to radiate from a point in the constellation Perseus which is in the northeastern evening sky below the “W”of Cassiopeia. With a First Quarter Moon setting about midnight or before on those evenings, there will be no lunar interference at all after midnight, and so, amateur astronomers are looking forward to spectacular “meteoric fireworks”, especially from midnight to dawn on both of the peak nights. If the weather cooperates, many skywatchers will be observing all night, keeping an hour-by-hour count. If the weather is uncooperative on the peak nights, remember that the Perseids are somewhat active for several weeks before and after their peak. To see the most meteors possible, face in a northerly, or a southeasterly, direction, and direct your gaze to a “quarter-section” of the sky quite high above the horizon. Most of the meteors are very fast, and are coming from a spot, called the radiant, in the northeastern part of the sky. Do not despair if you have 10 minutes without seeing any; in the next 10 minutes you may see 20 of them, since they often come in clusters. I would be interested in hearing from local observers about their “per-hour counts of Perseids” for various times during both of the nights mentioned.
Those who are interested in more information about observing stars, planets, and meteor showers throughout the year should obtain a copy of the book, The Beginner’s Observing Guide, which is now available at Sharbot Lake Pharmacy. | <urn:uuid:5a639053-f70d-416f-9182-d7aa0abd8c7a> | 2.96875 | 1,557 | Nonfiction Writing | Science & Tech. | 58.832917 | 1,227 |
Mars One is a private sector endeavor to send human beings to Mars. The estimated cost of $6 billion will be raised by selling T-shirts and hosting reality shows. In theory, the mission will launch in 2023. In order to reduce costs, astronauts will not be returned to earth. In other words, this is a one-way trip.
There are a lot of technical issues that the sponsors have failed to adequately evaluate. Although they acknowledge high radiation exposure, resulting in a much higher probability of developing cancer (without a realistic ability to treat), they have set the launch date for a period of high solar activity, which dramatically increases the risks to the astronauts during transit. In order to reduce radiation exposure on Mars, astronauts will be largely confined to living underground, which poses psychological risks.
Energy generation is proposed to come from solar panels. However, Mars receives 4-times less solar energy than earth. It is also susceptible to dust storms, which would reduce solar energy output to virtually zero. If the storms last longer than a few days, the astronauts will be toast. The solar energy available during winter months is also reduced considerably. The Mars rovers that relied upon solar panels had to shut down during the winter. Such an option is not available to astronauts, who must rely upon energy for heating, oxygen production and water production.
Supplying astronauts with enough food is also a problem. The Mars One website says astronauts will raise their own food. However, this idea is very unrealistic. Even on earth, it took a tremendous amount of land to produce enough food to feed people in the Biosphere 2, who complained that they were always hungry. The Biosphere experiment also suffered from reduced oxygen and high carbon dioxide, which killed many species within the Biosphere. Problems on Mars could not be solved as easily as pumping oxygen from the outside, which was done for the Biosphere.
If problems or illnesses arise on Mars, help is at least 7-12 months away. So, this mission truly is a suicide mission. Fortunately, the sponsors will probably never get enough money to get the mission off the ground. | <urn:uuid:5c9c5c12-81c2-4711-90d3-ecf7d346c8fa> | 3.21875 | 427 | Personal Blog | Science & Tech. | 45.291458 | 1,228 |
The Mechanism of Ice Crystal Growth and the Theory of Evolution
by Larry Vardiman, Ph.D.
Presented at the Second International Conference on Creationism, Pittsburgh, Pennsylvania, July 30–August 4, 1990. Published in: Proceedings of the Second International Conference on Creationism, R. E. Walsh & C.L. Brooks (Eds.), pp. 303–314, 1990.
© 1990 Creation Science Fellowship, Inc., Pittsburgh, PA, USA. Published with permission. All Rights Reserved.
Ice crystal growth has been cited as an example of how evolution creates greater order. The modern explanation of ice crystal shape is described. The second law of thermodynamics is developed in terms of entropy change and applied to ice crystal growth. The difference between the operation of thermodynamic systems and their origin is discussed. It is concluded that ice crystal growth is similar to the operation of life processes but does not support the origin of life as described by the theory of evolution.
Ice Crystals, Growth, Shape, Second Law of Thermodynamics, Entropy, Thermodynamic Systems, Evolution, Snowflakes, Life Processes
For Full Text
Please see the Download PDF link above for the entire article. | <urn:uuid:b71ee758-4184-43a8-aeda-b38138ccf712> | 2.953125 | 249 | Truncated | Science & Tech. | 45.42 | 1,229 |
In GR cosmology, there is a big bang singularity. For every particle, there was only a finite time in the past. During this finite time, only a finite distance may have been reached. This makes it possible to define a horizon of influence for each event — all events which may have had a common cause in the past.
Now, in standard GR cosmology this horizon is small. Too small to explain some observable facts:
The second problem is much more serious — some homogeneous distribution may have been caused by something else; last but not least, homogeneous initial values seem to be a meaningful assumption, based on Ockham's razor. But if initial fluctuations are greater than the horizon size, this requires a very strange conspiracy forbidden by current physics.
The GR solution of this problem is inflation in the early universe. That means some additional mechanism (with a hypothetical origin in particle physics) has to contribute an additional term in the early universe. This additional term leads to an acceleration of the expansion of the universe (a''(τ) > 0).
That inflation solves this problem seems to be the main reason why it is widely accepted in cosmology.
In GLET we have no big bang singularity and therefore no horizon problem.
In a more general, technical meaning of the word "inflation" (meaning only a period where a''(τ) > 0, i.e. the expansion is accelerating), the related terms of GLET lead to inflation for Υ > 0 if the state of the universe is sufficiently dense.
Thus, to solve the horizon problem, GR needs some additional mechanism, which has to originate in something else, like particle physics. GLET cosmology does not require such an additional mechanism. Instead, the GLET parameter Υ, which solves the horizon problem in GLET, follows from completely independent axioms of GLET. | <urn:uuid:a71b4ffa-7bd0-4bc4-9d98-054c6004e4be> | 2.671875 | 377 | Knowledge Article | Science & Tech. | 37.058492 | 1,230 |
absolute value, magnitude of a number or other mathematical expression disregarding its sign; thus, the absolute value is positive, whether the original expression is positive or negative. In symbols, if | a | denotes the absolute value of a number a, then | a | = a for a > 0 and | a | = - a for a < 0. For example, |7| = 7 since 7 > 0 and | - 7| = - ( - 7), or | - 7| = 7, since - 7 < 0.
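Restated as a display formula (this is the standard textbook definition, with the case a = 0 made explicit, which the entry above leaves implicit):

    \[
    |a| =
    \begin{cases}
    a,  & a \ge 0 \\
    -a, & a < 0
    \end{cases}
    \]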
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
See more Encyclopedia articles on: Mathematics | <urn:uuid:204faf78-7a3c-464d-bfc3-589035535875> | 4.09375 | 148 | Knowledge Article | Science & Tech. | 41.005726 | 1,231 |
Asexual reproduction is advantageous in allowing beneficial combinations of characteristics to continue unchanged and in eliminating the often vulnerable stages of early embryonic growth. It is found in most plants, bacteria, and protists and the lower invertebrates. In one-celled organisms it most commonly takes the form of fission, or mitosis, the division of one individual into two new and identical individuals. The cells thus formed may remain clustered together to form filaments (as in many fungi) or colonies (as in staphylococci and Volvox). Fragmentation is the process in filamentous forms in which a piece of the parent breaks off and develops into a new individual. Sporulation, or spore formation, is another means of asexual reproduction among protozoa and many plants. A spore is a reproductive cell that produces a new organism without fertilization. In some lower animals (e.g., hydra) and in yeasts, budding is a common form of reproduction; a small protuberance on the surface of the parent cell increases in size until a wall forms to separate the new individual, or bud, from the parent. Internal buds formed by sponges are called gemmules.
Regeneration is a specialized form of asexual reproduction; by regeneration some organisms (e.g., the starfish and the salamander) can replace an injured or lost part, and many plants are capable of total regeneration—i.e., the formation of a whole individual from a single fragment such as a stem, root, leaf, or even a small slip from such an organ (see cutting; grafting). F. C. Steward showed (1958) that single phloem cells from a carrot plant, when grown on an agar medium, would form a complete carrot plant. Among animals, the lower the form, the more capable it is of total regeneration; no vertebrates have this power, although clones of frogs (1962) and mammals (1996) have been produced in the laboratory from single somatic cells. Closely allied to regeneration is vegetative reproduction, the formation of new individuals by various parts of the organism not specialized for reproduction. In some plants structures that form on the leaves give rise to young plantlets. Rhizomes, bulbs, tubers, and stolons are other forms of vegetative reproduction.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
See more Encyclopedia articles on: Biology: General | <urn:uuid:85129c65-4f42-4751-964b-bd1cb911ad7b> | 3.9375 | 525 | Knowledge Article | Science & Tech. | 34.113162 | 1,232 |
The French could get a magnificent light show for Bastille Day this weekend thanks to a geomagnetic storm due to hit Earth on Saturday, a U.S. government scientist said.
The storm can generate auroras: waving, colorful lights that appear in the sky.
Joe Kunches, a space scientist for the National Oceanic and Atmospheric Administration, said the auroras may be seen along the U.S.-Canada border and in northern Europe on Saturday night. They are common farther north, like the Northern Lights, but magnetic storms like this one can make them appear in lower latitudes, he said.
"The auroras are probably going to be more bright and more equator-ward," Kunches said.
Robert Leamon, a heliophysics scientist at NASA headquarters, said the most notable lights will appear on the night side of the Earth when the storm arrives.
"It might last into Saturday so you see something in the U.S. or Europe, but if you lived in Siberia, say, you'd probably get the best showing," he said.
According to NOAA's scale for geomagnetic storms, the storm heading for Earth -- rated low, at a G1 or G2 on a scale of one to five -- could be mildly problematic for high-latitude power systems or high-frequency radios.
Leamon said some personal Global Positioning System devices, such as his running watch, might be a little slow or briefly lose connections.
Kunches explained that the burst of energy heading for Earth is called a coronal mass ejection, which means that some of the outer layer of the sun -- called the corona -- was blown off by Thursday's solar flare.
"The corona got blown off. It's like the roof got blown off the house because you were having a great party or something," he said. | <urn:uuid:3b45c4ab-2544-4dc8-b1f3-fcc084726da2> | 2.8125 | 382 | News Article | Science & Tech. | 59.242538 | 1,233 |
Life Science: Session 3
Sex Cell Production
What are sex cells?
Sex cells, or gametes, are unique to organisms that reproduce sexually. In animals and plants (fungi are somewhat different in this regard) there are two types of sex cells: male and female. The male sex cells are sperm, while the female sex cells are eggs. Sex cells are formed from special body cells that are typically located in sex organs. In most animals, sperm are formed in the testes of males, and eggs are formed in the ovaries of females.
Sex cells contain only half of the hereditary material present in the body cells that form them. This is important because male and female sex cells ultimately join to become a fertilized egg, which gives rise to a new organism, or offspring. In order for the offspring to resemble its parents, its first cell must receive the entire genome from its two parents. For humans, we know there are 46 chromosomes in body cells existing as 23 pairs. A fertilized egg must therefore contain this same number and arrangement. In an elegant process called meiosis, each sex cell receives one member of each chromosome pair—23 total. When sperm fertilizes egg, these singles unite to reform pairs, with half the genome coming from each parent. With a few exceptions, this pattern holds true for all sexually reproducing organisms.
How are sex cells produced?
Sex cells are produced from special body cells that contain the entire genome. The process by which the genome is halved is very precise — it’s not just a matter of randomly dividing the chromosomes into two sets. The process involves two cell divisions. Before the first occurs, all of the chromosomes are duplicated just as they are in body cell reproduction, but what happens next is different: the two duplicated strands remain attached to each other as the members of each chromosome pair move alongside each other. During the cell division that follows, only one member of each pair is transferred to each daughter cell—this is where the number of chromosomes is halved. The two strands of each chromosome are then separated during the second cell division, still maintaining half the number that existed in the parent cell. This results in four daughter cells — sperm or egg — that contain one member of each chromosome pair. This process is called meiosis.
What is the role of sex cell production in an animal life cycle?
Sex cell production ensures that the genome is maintained between parent and offspring generations. Occasionally, this process goes awry with chromosome pairs not lining up or not separating. The consequences are almost always harmful, and frequently lethal to potential offspring. A successful animal life cycle therefore depends on successful sex cell production.
There is another consequence to sex cell production that has a profound impact on the populations involved. Unlike body cell production, where the daughter cells are identical to parent cells, fertilized eggs result from genetic material from two different parents. Furthermore, each of these parents is only able to pass on half of its genome. The mixing and matching of half sets of chromosomes results in the astounding diversity we see in the living world. For example, we can see “parts” of both our parents when we look in the mirror. Similarly, a litter of puppies will reflect the size and coloration of both parents. The significance of this is explored in Session Five: Variation, Adaptation, and Natural Selection.
Compare body cell reproduction with sex cell production:
| | Body cell reproduction | Sex cell production |
| --- | --- | --- |
| Role in life cycle | Growth and maintenance | Reproduction |
| Where process occurs | Cells in all parts of body | Sex organs or tissues |
| Number of cell divisions | One | Two |
| What happens to chromosomes | All chromosomes line up singly, each chromosome duplicates, the two copies separate, and one copy of each chromosome is distributed to each daughter cell. | First division: chromosomes duplicate and copies remain attached, chromosome pairs line up alongside each other, the members of each pair separate, one member of each pair goes to each daughter cell. Second division: all chromosomes line up singly, the two copies separate, one copy of each chromosome is distributed to each daughter cell. |
| Number of cells that result | Two | Four |
| Number of chromosomes in resulting cells | Same number as in parent cell | Half the number as in parent cell |
| Significance | Genome is maintained; all information is passed along | Genome is halved; will be restored at fertilization |
 | <urn:uuid:f9e1e7fe-748a-47bc-8b98-c404e8206a3d> | 4.21875 | 934 | Knowledge Article | Science & Tech. | 38.077658 | 1,234 |
Dictyostelium discoideum is a soil-living amoeba. A group of 100,000 form a mound as big as a grain of sand.
The hereditary information is carried on six chromosomes with sizes ranging from 4 to 7 Mb resulting in a total of about 34 Mb of DNA, a multicopy 90 kb extrachromosomal element that harbors the rRNA genes, and the 55 kb mitochondrial genome. The estimated number of genes in the genome is 8,000 to 10,000 and many of the known genes show a high degree of sequence similarity to genes in vertebrate species. - NIH
Credit: Rex Chisholm, Northwestern University (NIGMS Image Gallery) | <urn:uuid:55a0410f-0c0d-49f9-b756-4eb138946ccf> | 3.421875 | 146 | Knowledge Article | Science & Tech. | 47.450708 | 1,235 |
How did the universe get its structure? It was very smooth when it was born, with matter distributed incredibly evenly through space. Now, thanks to the action of gravity over billions of years, it is very lumpy, with dense clusters of galaxies separated by enormous voids.
You can watch the process unfold in a new computer simulation by a group of scientists led by Tiziana Di Matteo of Carnegie Mellon University in Pittsburgh, Pennsylvania, US.
Unlike previous simulations, Di Matteo's team included black holes in their simulation. The black holes are not highlighted in the animation, but they do influence their surroundings.
Increasingly, scientists are realising that supermassive black holes weighing millions or billions of times the mass of the Sun may affect their environment more than previously thought. Their enormous gravity can capture and swallow vast quantities of matter from their immediate vicinity, but they can also produce jets and radiation that can influence matter much farther away.
Another cool animation released recently shows the effects of the solar wind on Earth's magnetosphere. The solar wind constantly buffets the magnetosphere, stretching and bending magnetic field lines until they suddenly snap in what are called magnetic reconnection events.
ESA's four Cluster spacecraft have been investigating this phenomenon, and ESA recently put out an animation illustrating this magnetic field snapping.
I'm fascinated by how a combination of science and computer graphics can show you things that you could never witness firsthand, like cosmic changes that unfold over billions of years in the case of Di Matteo's simulation, and the normally invisible dance of magnetic field lines in the case of the Cluster animation.
David Shiga, Online reporter (Image: Tiziana Di Matteo/CMU)
Labels: black holes, cluster, large-scale structure, magnetic reconnection | <urn:uuid:7db20a43-8164-47dd-bb60-e6830bc0fce4> | 4.03125 | 365 | News Article | Science & Tech. | 28.654511 | 1,236 |
Street Lights Permanently Change the Ecology of Local Bugs
The first “modern” streetlight was lit in London’s Pall Mall in 1807. That night may also have marked the first time a moth found itself trapped in an irresistible spiral around public lighting. Ever since then, streetlights have become a fixture of life in cities and suburbs, and a deathtrap for flying insects. Researchers at the University of Exeter have recently discovered that the abundance of insect life around these lights is not just a passing assemblage, but a permanent fixture. The diversity of invertebrate ground predators and scavengers, like beetles and harvestmen, remained elevated around streetlights even during the day. These invertebrates had figured out the benefits of living in an island of artificially high prey concentrations.
These findings indicate that streetlights affect local ecologies for a longer duration, and at a higher level in the food web, than previously thought. Given the decline of pollinators and other invertebrates in the UK and around the world, it may be important to re-examine the impact of seemingly harmless nighttime lighting. | <urn:uuid:9efd4810-d511-4b1b-91bf-d3a51d07a04c> | 3.453125 | 226 | News Article | Science & Tech. | 31.029741 | 1,237 |
Why do you think you need to double the backslash on Windows? It's not because of Windows, it's because Perl uses the backslash as the escape character.
In your first attempt, Perl tried to interpret "\p" and "\l" as escape sequences, turning regular characters into special ones, so the backslashes never made it into the string and the path you were trying to reach didn't exist.
In your second attempt, you doubled the backslash: escaping the special character "\" turns it back into a regular character, so Perl really sees a backslash in the string, and the path now exists.
But, in Perl on Windows, you can also use the forward slash "/" in file paths.
Hope this helps you understand what was going on internally.
Testing never proves the absence of faults, it only shows their presence.
| <urn:uuid:bbff385c-0973-4ef3-8eb3-ee89522b8b66> | 2.765625 | 438 | Tutorial | Software Dev. | 73.091502 | 1,238 |
Fleshing out the genome
February 04, 2005
Pacific Northwest-led team devises powerful new system for tying genes to vital functions in cells and for comparing molecular makeup of organisms
RICHLAND, Wash. –
Genomics, the study of all the genetic sequences in living organisms, has leaned heavily on the blueprint metaphor. A large part of the blueprint, unfortunately, has been unintelligible, with no good way to distinguish a bathroom from a boardroom, to link genomic features to cell function.
A national consortium of scientists led by BIATECH, a Seattle-area non-profit research center, and Pacific Northwest National Laboratory, a Department of Energy research institution in Richland, Wash., now suggests a way to put this house in order. They offer a powerful new method that integrates experimental and computational analyses to ascribe function to genes that had been termed "hypothetical" – sequences that appear in the genome but whose biological purposes were previously unknown.
The method not only portends a way to fill in the blanks in any organism's genome but also to compare the genomes of different organisms and their evolutionary relation to one another.
The new tools and approaches offer the most-comprehensive-to-date "functional annotation," a way of assigning the mystery sequences biological function and ranking them based on their similarity to genes known to encode proteins. Proteins are the workhorses of the cell, playing a role in everything from energy transport and metabolism to cellular communication.
This new ability to rank hypothetical sequences according to their likelihood to encode proteins "will be vital for any further experimentation and, eventually, for predicting biological function," said Eugene Kolker, president and director of BIATECH, an affiliate scientist at PNNL and lead author of a study in the Feb. 8 Proceedings of the National Academy of Sciences that applies the new annotation method to a strain of the metal-detoxifying bacterium Shewanella oneidensis.
"In a lot of cases," said James K. Fredrickson, a co-author and PNNL chief scientist, "it was not known from the gene sequence if a protein was even expressed. Now that we have high confidence that many of these hypothetical genes are expressing proteins, we can look for what role these proteins play."
Before this study, nearly 40 percent of the genetic sequences in Shewanella oneidensis—of key interest to DOE for its potential in nuclear and heavy metal waste remediation—were considered as hypothetical. This work identified 538 of these genes that expressed functional proteins and messenger RNA, accounting for a third of the hypothetical genes. They enlisted analytic software to scour public databases and applied expression data to improve gene annotation, identifying similarities to known proteins for 97 percent of these hypothetical proteins. All told, computational and experimental evidence provided functional information for 256 more genes, or 48 percent, but they could confidently assign exact biochemical functions for only 16 proteins, or 3 percent. Finally, they introduced a seven-category system for annotating genomic proteins, ranked according to a functional assignment's precision and confidence.
Kolker said that "a big part of this was the proteomics" – a systematic screening and identification of proteins, in this case those which were expressed in the microbe when subjected to stress. The proteomic analyses were done by four teams led by Kolker; Carol S. Giometti, Argonne National Laboratory; John R. Yates III, The Scripps Research Institute; and Richard D. Smith, W.R. Wiley Environmental Molecular Sciences Laboratory, based at PNNL. BIATECH's analysis of this data included dealing with more than 2 million files.
Fredrickson coordinates a consortium known as the Shewanella Federation. In addition to BIATECH, PNNL and ANL, the Federation also includes teams led by study co-authors James M. Tiedje, Michigan State University; Kenneth H. Nealson, University of Southern California; and Monica Riley, Marine Biology Laboratory. The Federation is supported by the Genomics: GTL Program of the DOE's Offices of Biological and Environmental Research and Advanced Scientific Computer Research. Other collaborators included the National Center for Biotechnology Information of the National Library of Medicine, National Institutes of Health, Oak Ridge National Laboratory and the Wadsworth Center.
BIATECH is an independent nonprofit biomedical research center located in Bothell, Wash. Its mission is to discover and model the molecular mechanisms of biological processes using cutting edge high-throughput technologies and computational analyses that will both improve human health and the environment. Its research focuses on applying integrative interdisciplinary approaches to the study of model microorganisms, and advancing our knowledge of their cellular behavior.
Tags: Energy, Environment, Fundamental Science, Biology | <urn:uuid:3914d9f0-94bb-45c4-9b49-bfcc6a04712c> | 2.90625 | 971 | News (Org.) | Science & Tech. | 20.224504 | 1,239 |
|PostgreSQL 8.2.23 Documentation|
Chapter 42. Overview of PostgreSQL Internals
Here we give a short overview of the stages a query has to pass in order to obtain a result.
A connection from an application program to the PostgreSQL server has to be established. The application program transmits a query to the server and waits to receive the results sent back by the server.
The parser stage checks the query transmitted by the application program for correct syntax and creates a query tree.
The rewrite system takes the query tree created by the parser stage and looks for any rules (stored in the system catalogs) to apply to the query tree. It performs the transformations given in the rule bodies.
One application of the rewrite system is in the realization of views. Whenever a query against a view (i.e. a virtual table) is made, the rewrite system rewrites the user's query to a query that accesses the base tables given in the view definition instead.
The planner/optimizer takes the (rewritten) query tree and creates a query plan that will be the input to the executor.
It does so by first creating all possible paths leading to the same result. For example if there is an index on a relation to be scanned, there are two paths for the scan. One possibility is a simple sequential scan and the other possibility is to use the index. Next the cost for the execution of each path is estimated and the cheapest path is chosen. The cheapest path is expanded into a complete plan that the executor can use.
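One way to watch the planner make this choice is to ask the server for its chosen plan with EXPLAIN. The sketch below does that from Python using psycopg2; the connection string, table name, and column are placeholders, not anything taken from this chapter.

```python
import psycopg2  # PostgreSQL driver for Python

# Placeholder connection settings and table -- adjust for your own database.
conn = psycopg2.connect("dbname=test user=postgres")
cur = conn.cursor()

# EXPLAIN returns the plan the planner/optimizer settled on, one text row per line.
cur.execute("EXPLAIN SELECT * FROM accounts WHERE id = 42")
for (line,) in cur.fetchall():
    print(line)  # an index scan if a suitable index exists, otherwise a sequential scan

cur.close()
conn.close()
```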
The executor recursively steps through the plan tree and retrieves rows in the way represented by the plan. The executor makes use of the storage system while scanning relations, performs sorts and joins, evaluates qualifications and finally hands back the rows derived.
In the following sections we will cover each of the above listed items in more detail to give a better understanding of PostgreSQL's internal control and data structures. | <urn:uuid:8faf0261-42c7-4668-890f-1d1c3b333f55> | 2.578125 | 419 | Documentation | Software Dev. | 50.638362 | 1,240 |
Aconitase and Iron Regulatory Protein 1
May 2007 Molecule of the Month by David Goodsell
doi: 10.2210/rcsb_pdb/mom_2007_5
Aconitase is an essential enzyme in the tricarboxylic acid cycle and iron regulatory protein 1 interacts with messenger RNA to control the levels of iron inside cells. You might ask: what do these two proteins have in common? They were discovered and studied by different researchers, who gave them names that described their two very different functions. But surprisingly, when they looked at the amino acid sequence of these proteins, they turned out to be identical. The same protein is performing two very different jobs.
The enzyme aconitase is a key player in the central pathway of energy production. As part of the tricarboxylic acid cycle, it converts citrate into isocitrate. There is one form of aconitase in our mitochondria, which performs most of the conversion for the citric acid cycle, and a similar form in the cytoplasm that creates isocitrate for other synthetic tasks. The cytosolic form is shown here at the top, from PDB entry 2b3y. It is composed of a single protein chain that folds into several domains (the domains are colored slightly different shades of blue here). The domains close like a nutcracker around the active site, which contains an iron-sulfur cluster that assists with the reaction.
...or Iron Regulatory Protein 1
The cytosolic form of aconitase also acts as iron regulatory protein 1. The lower structure, from PDB entry 2ipy, shows how it performs this entirely different function. The iron-sulfur cluster in aconitase is unstable and must be replaced occasionally when it falls out. When iron levels in the cell get low, there isn't enough iron to regenerate the cluster, and the protein shifts to its second function. The protein opens up and grips hairpin loops in a few specific messenger RNA molecules. These include a hairpin at the start of the messenger RNA for ferritin, and five similar hairpins at the end of messenger RNA for the transferrin receptor. When the iron regulatory protein 1 binds, it inhibits the formation of ferritin, so that less iron is locked up in storage, and it enhances construction of the transferrin receptor, so the cell can pick up more transferrin out of the blood, and with it, more iron.
Many other proteins lead double lives, performing two entirely different functions. Three examples are shown here. The enzyme retinal dehydrogenase, which converts light- sensing retinal into the regulatory molecule retinoic acid, is shown on the left (PDB entry 1o9j), with the cofactor NADH in green. Its second job is to modify the consistency and absorbance of eye lenses, where it is called eta-crystallin. Cytochrome c (PDB entry 3cyt) is shown at upper right in red. It performs a familiar role in energy production, ferrying electrons in the mitochondria. But when the cell is damaged, cytochrome c spills out into the cytoplasm and performs its second job: it starts a cascade of responses that ultimately lead to programmed cell death (apoptosis). The third moonlighting protein shown here is phosphoglucose isomerase (PDB entry 2pgi), one of the ten enzymes that perform glycolysis. It is also secreted outside cells, where it acts as a cellular messenger with several names: neuroleukin, autocrine mobility factor, and differentiation and maturation mediator. The names give you an idea of the specific messages it carries from cell to cell.
Exploring the Structure
Aconitase performs a classic stereospecific reaction that is often used as an example in biochemistry textbooks. It extracts a hydroxyl group and a specific hydrogen atom from citrate, and replaces them in a geometrically precise way to form isocitrate. This process is revealed in two crystal structures, but you need to use a little imagination when you look at them, since the crystal structures do not contain the hydrogen atom positions.

PDB entry 1c96, shown on the left, has citrate bound in the active site. In the normal form of the enzyme, the oxygen atom shown in pink will be extracted by the iron sulfur cluster and a hydrogen atom will be extracted by a serine at the top (both of these reactions are shown with green arrows). This structure, however, has mutated the serine to alanine, so the oxygen atom in the serine is missing. In the second step of the reaction, shown on the right from PDB entry 7acn, the molecule flips upside down (notice the different location of the labels A-B-C) and the hydrogen and hydroxyl are added back in different places to form isocitrate.
These illustrations were created with RasMol. You can create similar illustrations by clicking on the accession codes here and picking one of the options under Images and Visualization.
Additional reading about aconitase and iron regulatory proteins
P. J. Artymiuk and J. Green (2006) The double life of aconitase. Structure 14, 2-4.
T. A. Rouault (2006) The role of iron regulatory proteins in mammalian iron homeostasis and disease. Nature Chemical Biology 2, 406-414.
C. J. Jeffery (2004) Molecular mechanisms of multitasking: recent crystal structures of moonlighting proteins. Current Opinion in Structural Biology 14, 663-668.
S. D. Copley (2003) Enzymes with extra talents: moonlighting functions and catalytic promiscuity. Current Opinion in Chemical Biology 7, 265-272.
© 2013 David Goodsell & RCSB Protein Data Bank | <urn:uuid:6f14fe68-e0a3-476a-830a-681c8fd8d19b> | 2.84375 | 1,245 | Knowledge Article | Science & Tech. | 42.523925 | 1,241 |
Rings around the Sun
Whenever both sun and clouds are in the sky, be sure to look up--you may behold rings, arcs, and other marvels!
Oct. 24, 2002: It was just after lunch on Sept. 25th when I stepped out onto the rear deck of my home in Ohio. What a gorgeous autumn afternoon. The pale blue sky was streaked with wispy white cirrus clouds. The Sun was high and bright.
I glanced up....
The sun was surrounded by an extraordinarily bright, rainbow-colored halo. Flanking it both left and right were two brilliant, comet-shaped rainbow-colored sun dogs or mock suns (technically known as parhelia from Greek words meaning "beside the sun"). Wow!
Above: This scene, recorded in Finland by Pekka Parviainen using a wide-angle lens, is similar to the one author Trudy E. Bell saw in Ohio last month. A football-shaped "circumscribed halo" surrounds the Sun. A fainter "parhelic circle" rings the horizon. "I had never seen anything so huge and so perfectly circular," says Bell.
I dashed to the front yard, which has a better view of the sky, and began turning to see how far the "comet tails" of the sun dogs reached. I turned 360o, accidentally unbalancing myself and falling onto the grass.
"That's the complete parhelic circle!" I exclaimed aloud to the empty street.
All that morning I had been stepping outside hourly to look up, because I knew that thin cirrus clouds plus bright sunlight almost guaranteed seeing something wonderful. Cirrus clouds are made of millions of hexagonal ice crystals 3 to 6 miles up in the troposphere where jet airplanes fly--each crystal acting as a tiny prism refracting (bending) the sun's light and throwing it elsewhere into the sky.
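As a rough check on the "tiny prism" picture (standard prism optics, not from the article itself): light passing through alternate side faces of a hexagonal ice crystal sees a 60° prism with a refractive index of about 1.31, and the minimum-deviation formula then places the inner edge of the most common halo about 22° from the Sun.

```latex
% Minimum deviation D_min for a prism of apex angle A and refractive index n:
n = \frac{\sin\!\left(\frac{A + D_{\min}}{2}\right)}{\sin\!\left(\frac{A}{2}\right)}
% For ice, n \approx 1.31 and A = 60^\circ (alternate faces of a hexagonal crystal):
D_{\min} = 2\arcsin\!\left(1.31\,\sin 30^\circ\right) - 60^\circ \approx 22^\circ
```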
Because the upper troposphere is almost always below freezing, ice-crystal displays can be seen year-round (I've seen weak sundogs even in July). But truly good displays in the United States are most common in the fall, winter, and spring when the northern jet stream descends southward, drawing down Arctic air masses with their treasure-trove of jewel-like ice prisms.
Left: Wispy high-altitude clouds like these harbor ice crystals that cause sun halos. Credit: Pekka Parviainen
Just then, my neighbor Cindy backed her van out of her drive. I called to her and pointed upward. She stepped out of her idling van and looked up. Her eyes widened and her jaw dropped. "Was this predicted?" she asked eagerly. "Did people know this was going to happen? How do you find out when to look?"
No, I explained, atmospheric displays cannot be predicted the way astronomers can pinpoint the dates and times of meteor showers and eclipses. Sighting such a light show is more akin to spotting an unusual migratory South American bird in Ohio: knowing generally the right weather conditions and time of year, you simply must trust to luck.
Not only that, but every ice-crystal display is as different as every pattern seen through a kaleidoscope--and for similar reasons. Displays in the daylit sky depend on the tilt of the ice crystals in the air and the altitude of the sun. They depend on whether the ice crystals are flat plates or long pencils. They depend on the crystals' size (at least 0.1 millimeter across) and optical quality. Crystals too tiny or imperfect can't act as prisms. But if the crystals are of exceptional gem-like quality, the entire dome of the daytime sky may be festooned with exotic halos, loops, arcs, and crosses--or the full parhelic circle now glowing overhead in silent glory!
"And to think," Cindy remarked, climbing back into her van, "I wouldn't have noticed any of this if you hadn't gotten me just to look up!"
After she drove off, I pondered her questions and comment.
Although I've been an avid amateur astronomer and lover of the nighttime sky since 1965, only in the last five years have I also become a devotee of the daytime sky. During that time, my experience has revealed that even supposedly rare atmospheric-optics displays are more common than meteorology books imply--plainly visible to anyone who simply thinks to tilt back the head.
Now that I've cultivated the simple habit of looking up a dozen times a day, I've found scarcely a week passes without some reward--be it solar halos, sun dogs, crepuscular rays (shafts of sunbeams and shadows from behind puffy cumulus clouds), circumzenithal arcs (a rainbow-colored "ice bow" arc half-encircling the zenith), sun pillars, or now, the complete parhelic circle.
Yes, the daytime sky abounds with unexpected gifts, which can be yours for taking a moment--just to look up!
Editor's note: Ice crystals in Earth's atmosphere cause not only rings around the Sun, but also rings around the Moon, Moondogs and even Venus pillars. If you spot a Sun pillar or halo not long before sunset, be alert for rings and pillars around objects in the night sky a few hours later.
More information
Sun halos, pillars and sundogs can happen during any season. "The icy crystals that cause them form in high altitude clouds 5 km or so above Earth's surface where it is always freezing," says Bruce Wielicki, an atmospheric scientist at NASA's Langley Research Center.
Right: Cirrus clouds and hexagonal ice crystals. [more]
Polar Halos: Complex ice-crystal displays are especially common in the Arctic and Antarctic, even being produced by the full moon during the months-long winter night! Here's a description of one seen by Robert Peary on his last voyage toward the North Pole: "On the evening of November 11 , there was a brilliant paraselene, two distinct halos and eight false moons being visible in the southern sky. This phenomenon is not unusual in the Arctic, and is caused by the frost crystals in the air. On this particular occasion the inner halo had a false moon at its zenith, another at its nadir, and one each at the right and left. Outside was another halo, with four other moons." - Robert E. Peary, The North Pole: Its Discovery in 1909 Under the Auspices of the Peary Arctic Club, 1910; Dover Publications, 1986, pp. 175-176.
more references: Trudy E. Bell "Skyscapes," League of American Bicyclists magazine, 37 (3): 12-15 (Summer) -- photos of different common ice-crystal, water-droplet, and dust-mote phenomena you can see in the daytime sky
Robert Greenler, Rainbows, Halos, and Glories, with a comprehensive catalogue of ice crystal forms and the displays they produce, plus highly useful color photos of many kinds of displays one can see. (This is a classic book now out of print, but widely available at libraries.)
Join our growing list of subscribers - sign up for our express news deliveryand you will receive a mail message every time we post a new story!!! | <urn:uuid:57ded341-6d8d-427a-a2ee-cb5c8f7e4b97> | 2.875 | 1,547 | Nonfiction Writing | Science & Tech. | 56.879165 | 1,242 |
A leaf is a plant's principal organ of photosynthesis, the process by which sunlight is used to form foods from carbon dioxide and water. Leaves also help in the process of transpiration, or the loss of water vapor from a plant.
A typical leaf is an outgrowth of a stem and has two main parts: the blade (flattened portion) and the petiole (pronounced PET-ee-ole; the stalk connecting the blade to the stem). Some leaves also have stipules, small paired outgrowths at the base of the petiole. Scientists are not quite sure of the function of stipules.
Leaf size and shape differ widely among different species of plants. Duckweeds are tiny aquatic plants with leaves that are less than 0.04 inch (1 millimeter) in diameter, the smallest of any plant species. Certain species of palm trees have the largest known leaves, more than 230 feet (70 meters) in length.
Words to Know
Abscission layer: Barrier of special cells created at the base of petioles in autumn.
Blade: Flattened part of a leaf.
Chloroplasts: Small structures that contain chlorophyll and in which the process of photosynthesis takes place.
Margin: Outer edge of a blade.
Midrib: Single main vein running down the center of a blade.
Petiole: Stalk connecting the blade of a leaf to the stem.
Phloem: Plant tissue consisting of elongated cells that transport carbohydrates and other nutrients.
Photosynthesis: Process by which a plant uses sunlight to form foods from carbon dioxide and water.
Stomata: Pores in the epidermis of leaves.
Transpiration: Evaporation of water in the form of water vapor from the stomata.
Xylem: Plant tissue consisting of elongated cells that transport water and mineral nutrients.
A leaf can be classified as simple or compound according to its arrangement. A simple leaf has a single blade. A compound leaf consists of two or more separate blades, each of which is termed a leaflet. Each leaflet can be borne at one point or at intervals on each side of a stalk. Compound leaves with leaflets originating from the same point on the petiole (like fingers of an outstretched hand) are called palmately compound. Compound leaves with leaflets originating from different points along a central stalk are called pinnately compound.
All leaves, no matter their shape, are attached to the stem in one of three ways: opposite, alternate, or whorled. Opposite leaves are those growing in pairs opposite or across from each other on the stem. Alternate leaves are attached on alternate sides of the stem. Whorled leaves are three or more leaves growing around the stem at the same spot. Most plant species have alternate leaves.
The outer edge of a blade is called the margin. An entire margin is one that is smooth and has no indentations. A toothed margin has small or wavy indentations. A lobed margin has large indentations (called sinuses) and large projections (called lobes).
Venation is the pattern of veins in the blade of a leaf. A single main vein running down the center of a blade is called a midrib. Several main veins are referred to as principle veins. A network of smaller veins branch off from a midrib or a principle vein.
All veins transport nutrients and water in and out of the leaves. The two primary tissues in leaf veins are xylem (pronounced ZY-lem) and phloem (pronounced FLOW-em). Xylem cells mainly transport water and mineral nutrients from the roots to the leaves. Phloem cells mainly transport carbohydrates (made by photosynthesis) from the leaves to the rest of the plant. Typically, xylem cells are on the upper side of the leaf vein and phloem cells are on the lower side.
Internal anatomy of leaves
Although the leaves of different plants vary in their overall shape, most leaves are rather similar in their internal anatomy. Leaves generally consist of epidermal tissue on the upper and lower surfaces and mesophyll tissue throughout the body.
Epidermal cells have two features that prevent the plant from losing water: they are packed densely together and they are covered by a cuticle (a waxy layer secreted by the cells). The epidermis usually consists of a single layer of cells, although the specialized leaves of some desert plants have epidermal layers that are several cells thick. The epidermis contains small pores called stomata, which are mostly found on the lower leaf surface. Each individual stoma (pore) is surrounded by a pair of specialized guard cells. In most species, the guard cells close their stomata during the night (and during times of drought) to prevent water loss. During the day, the guard cells open their stomata so they can take in carbon dioxide for photosynthesis and give off oxygen as a waste product.
The mesophyll layer is divided into two parts: palisade cells and spongy cells. Palisade cells are densely packed, elongated cells lying directly beneath the upper epidermis. These cells house chloroplasts, small structures that contain chlorophyll and in which the process of photosynthesis takes place. Spongy cells are large, often odd-shaped cells lying underneath palisade cells. They are loosely packed to allow gases (carbon dioxide, oxygen, and water vapor) to move freely between them.
Leaves in autumn
Leaves are green in summer because they contain the pigment chlorophyll, which absorbs all the wavelengths of sunlight except for green (sunlight or white light comprises all the colors of the visible spectrum: red, orange, yellow, green, blue, indigo, and violet). In addition to chlorophyll, leaves contain carotenoid (pronounced kuh-ROT-in-oid) pigments, which appear orange-yellow. In autumn, plants create a barrier of special cells, called the abscission (pronounced ab-SI-zhen) layer, at the base of the petiole. Moisture and nutrients from the plant are cut off and the leaf begins to die. Chlorophyll is very unstable and begins to break down quickly. The carotenoid pigments, which are more stable, remain in the leaf after the chlorophyll has faded, giving the plant a vibrant yellow or gold appearance.
The red autumn color of certain plants comes from a purple-red pigment known as anthocyanin (pronounced an-tho-SIGH-a-nin). Unlike carotenoids, anthocyanins are not present in a leaf during the summer. They are produced only after a leaf starts to die. During the autumn cycle of warm days and cool nights, sugars remaining in the leaf undergo a chemical reaction, producing anthocyanins.
[ See also Photosynthesis ] | <urn:uuid:544743de-e584-4ec5-ad1a-bf2f9b92b7ea> | 4.09375 | 1,450 | Knowledge Article | Science & Tech. | 48.272265 | 1,243 |
Oct. 28, 2006 -- An earthquake swarm -- a steady drumbeat of moderate, related seismic events over hours or days -- can often be observed near a volcano such as Mount St. Helens in Washington state or in a geothermal region such as Yellowstone National Park in Wyoming.
New research led by a University of Washington seismologist shows, however, that such swarms can occur anywhere that is seismically active, not just near volcanoes or geothermal regions.
"In our research we saw swarms everywhere and we could see the broad characteristics of how they behaved," said John Vidale, a UW professor of Earth and space sciences and director of the Pacific Northwest Seismograph Network.
Vidale and two colleagues, Katie Boyle of Lawrence Livermore National Laboratory and Peter Shearer of the University of California, San Diego, examined data from 83 Japanese earthquake swarms over about 2½ years. Their findings confirmed work they published earlier this year that looked at data from 72 events in southern California during a 19-year span.
Both studies examined data collected from swarms in which at least 40 earthquakes were recorded in a few-mile radius over two weeks. The swarms did not follow the well-recognized pattern of an earthquake burst that begins with a main shock and is followed by numerous smaller aftershocks.
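A minimal sketch of that selection rule, assuming a catalog of (time, latitude, longitude) events: the 40-event threshold and two-week window come from the description above, while the few-mile radius (taken here as 5 km) and the exact windowing are illustrative rather than the studies' actual procedure.

```python
from datetime import timedelta
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def swarm_candidates(events, min_events=40, radius_km=5.0, window=timedelta(days=14)):
    """Return events with at least min_events neighbours inside radius_km and window.

    `events` is a list of (time, lat, lon) tuples, where `time` is a datetime.
    """
    flagged = []
    for t0, lat0, lon0 in events:
        neighbours = [
            (t, lat, lon) for t, lat, lon in events
            if abs((t - t0).total_seconds()) <= window.total_seconds()
            and km_between(lat0, lon0, lat, lon) <= radius_km
        ]
        if len(neighbours) >= min_events:
            flagged.append((t0, lat0, lon0))
    return flagged
```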
"We saw a mix of the two kinds of events, swarms or earthquakes and aftershocks, wherever we looked," Vidale said. "It confirms what people have suspected. There are earthquake swarms and they are responses to factors we can't see and don't have a direct way to measure."
The Japanese research is being published tomorrow in the online edition of Geophysical Research Letters.
The scientists suspect that "swarminess" in volcanic and geothermal zones might be driven by hot water or magma pushing fault seams apart or acting to reduce friction and enhancing the seismic activity in those areas.
Away from volcanic and thermal regions, it is unclear what triggers swarms that don't include main shocks and aftershocks, Vidale said. It is possible that swarms are driven by tectonic movements so gradual that they take many minutes to weeks to unfold but still are much more rapid than normal plate tectonic motions.
The researchers also found that, contrary to expectations, swarms occurring within 30 miles of Japan's volcanoes lasted perhaps twice as long as swarms in other types of geological formations. It was expected that earthquake episodes would have been briefer in hotter rock formations.
The results help to provide a clearer picture of how seismic swarms are triggered and give a better means of assessing the danger level for people living in tectonically active regions where earthquake swarms might occur, Vidale said.
Note: If no author is given, the source is cited instead. | <urn:uuid:37acff95-6f51-4f9f-ae97-37dd0ee21237> | 3.671875 | 609 | News Article | Science & Tech. | 38.807317 | 1,244 |
May 19, 2009 Graphene is an atomically thin sheet of carbon that has attracted significant attention due to its potential use in high-performance electronics, sensors and alternative energy devices such as solar cells. While the physics of graphene has been thoroughly explored, chemical functionalization of graphene has proven to be elusive.
Now researchers at Northwestern University have identified conditions for chemically functionalizing graphene with the organic semiconductor perylene-3,4,9,10-tetracarboxylic-dianhydride (PTCDA).
PTCDA self-assembles into a molecularly pristine monolayer that is nearly defect-free as verified by ultra-high vacuum scanning tunneling microscopy. In addition, the PTCDA monolayers are stable at room temperature and atmospheric pressure, which suggest their use as a seeding layer for subsequent materials deposition.
Through chemical functionalization and materials integration, the outstanding electrical properties of graphene likely can be exploited in a diverse range of technologies including high-speed electronics, chemical and biological sensors and photovoltaics.
These results will be published online May 17 by Nature Chemistry and will be featured on the cover of the June 2009 issue of the journal.
"Graphene has captured the imagination of researchers worldwide due to its superlative and exotic electronic properties," said Mark Hersam, who led the research team. He is professor of materials science and engineering in Northwestern's McCormick School of Engineering and Applied Science and professor of chemistry in the Weinberg College of Arts and Sciences.
"However, harnessing these properties requires the development of chemical functionalization strategies that will allow graphene to be seamlessly integrated with other materials that are commonly found in real-world technology," said Hersam. "The stability and uniformity of the chemistry demonstrated here suggest that it can be used as a platform for many device applications."
In addition to Hersam, the other author of the Nature Chemistry paper is Qing Hua Wang, a graduate student in materials science and engineering at Northwestern.
Note: If no author is given, the source is cited instead. | <urn:uuid:ec8af92c-7730-40e3-b4ea-96a90071b777> | 3.1875 | 435 | News (Org.) | Science & Tech. | 10.694184 | 1,245 |
Discovered: Male fish who engage in same-sex flirting lure in female fish; cheese dates back 7.5 millennia; Americans love public transportation once they give it a chance; depressed mice cheer up after brain stimulation.
Gay fish attract more female mates. Nature has its own version of a pop culture archetype—the highly attractive man who, unfortunately, remains unavailable to the women swooning over him. A team of researchers led by University of Frankfurt's David Bierbach has found a species of tropical fish in which males who flirt with other males are perceived as more attractive by potential female mates. They observed Poecilia mexicana, or Atlantic mollies, engaged in "mate copying," meaning that females will try to mate with a male fish they've seen interacting sexually with members of their own sex. "Males can increase their attractiveness towards females by homosexual interactions, which in turn increase the likelihood of a male's future heterosexual interactions," says Bierbach. "We do not know how widespread female mate choice copying is, but up to now it is reported in many species, including fruit flies, fishes, birds and mammals [including] humans." [BBC News]
Cheese is really old ... ahem ... well aged. Ancient pottery recovered on a dig in Poland reveals that cheese making could date back as far as 5,500 B.C. University of Bristol researchers led by Richard Evershed discovered fatty milk residue on the shards of sieves. That ruled out previous theories that they were used to make honey or beer. "It's almost inconceivable that the milk fat residues in the sieves were from anything else but cheese," comments University of Vermont nutrition professor Paul Kindstedt. We're glad Neolithic people discovered cheese, because their cuisine sounds really boring without it, consisting mostly of porridge. "They probably would not be the first choice for a lot of people today," Kindstedt says of the cheeses these sieves could have produced. "But I would still love to try it." [AP]
If you can convince Americans to take public transportation, they'll love it. It's hard to convince drivers to try public transportation, so Maya Abou-Zeid of the American University of Beirut and Moshe Ben-Akiva of M.I.T. cut a deal with their experimental subjects: they covered their fare for a brief trial period. They found that 30 percent of Boston car commuters were convinced to switch to public transportation, and 25 percent actually stuck with it for six months. So what was preventing them from switching before? Mostly our societal opinions on public transportation, the researchers found. "Because of a generally weaker public transportation culture in Boston than in Switzerland, M.I.T. participants who switched might not have seriously considered using public transportation until they experimented with it during the trial," they write. [Atlantic Cities]
Brain stimulation cheers up depressed mice. Stanford University neuroscientist Karl Deisseroth has been able to quell depression in mice by stimulating and silencing certain parts of the rodents' brains with lasers. Using optogenetics, the researchers behind two new papers could control nerve cells by adjusting fiber-optic light beams. By better understanding the neural pathways that regulate depression in mice, Deisseroth hopes to develop treatments for humans suffering from depression. "In this way, bit by bit, we can piece together the circuitry," he says. "It’s a long process that’s just starting, but we have a foothold now." [Science News] | <urn:uuid:7bb0e61d-f2c7-4bb2-9924-a43fcde6d123> | 2.90625 | 723 | Content Listing | Science & Tech. | 45.130792 | 1,246 |
Book I. Propositions 31 and 32
1. Solve the problem of Proposition 31:
Through a given point to draw a straight line parallel to a given straight line.
Let A be the given point, and BC the given straight line;
Choose any point D on BC, and draw AD; at the point A, construct angle DAE equal to angle ADC (I. 23), and produce EA to F.
Then the straight line AD, which meets the two straight lines EF, BC, makes the alternate angles EAD, ADC equal. Therefore EF is parallel to BC. (I. 27)
2. a) State the hypothesis of Proposition 32.
A figure is a triangle and one side is extended.
2. b) State the conclusion.
The exterior angle is equal to the two opposite interior angles; and the three interior angles of a triangle are equal to two right angles.
2. c) Practice Proposition 32.
3. Prove Proposition 32 by drawing a straight line DE through A parallel to BC.
This proof is attributed to Pythagoras, who lived some 250 years before Euclid.
Because DE is parallel to BC, the alternate angles DAB, ABC are equal, and the alternate angles EAC, ACB are equal. But the angles DAB, BAC, CAE together equal two right angles (I. 13). Therefore the three angles of the triangle, ABC, BAC, ACB, together equal two right angles.
4. ABC is a circle with center D; ABD is a triangle; and ADC is a straight line. Prove that angle BDC is double angle A.
AD is equal to DB because they are radii of the circle; therefore angle A is equal to angle ABD (I. 5). And angle BDC is an exterior angle of triangle ABD; therefore it is equal to the two opposite interior angles A and ABD together (I. 32), that is, to double angle A.
5. Prove: The acute angles of a right triangle are together equal to a right angle.
The three angles of a triangle are together equal to two right angles, and one of them is a right angle. Therefore the two acute angles together are equal to the remaining right angle.
6. In an isosceles right triangle, why is each acute angle half of a right angle?
Since the triangle is isosceles, the base angles are equal; and together they equal a right angle (Problem 5). Therefore each of them is half of a right angle.
7. Prove that if an acute angle of one right triangle is equal to an acute angle of another right triangle, then the remaining acute angles are also equal.
That is, if angles B and E are right angles, and angle C equals angle F, prove that angle A equals angle D.
Angles A and C together equal one right angle, as do angles D and F (Problem 5). And angle C equals angle F. Therefore angle A equals angle D.
8. According to the Corollary to I. 32, the four interior angles of any quadrilateral are together equal to how many right angles?
9. In any five-sided rectilineal figure, the five angles are together equal to how many right angles?
10. In the degree system of angular measurement, in which a right angle is called 90°, how many degrees is each angle in a regular octagon? (That is, an eight-sided figure which is both equilateral and equiangular.)
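The last three questions all lean on the corollary's angle-sum rule for an n-sided figure; here is a quick check using the standard formula (the arithmetic below is mine, not the page's hidden answers):

```latex
% Interior angles of an n-sided rectilineal figure sum to (2n - 4) right angles.
\text{Quadrilateral } (n = 4):\quad 2(4) - 4 = 4 \text{ right angles}
\text{Pentagon } (n = 5):\quad 2(5) - 4 = 6 \text{ right angles}
\text{Regular octagon } (n = 8):\quad \frac{(2 \cdot 8 - 4)\times 90^\circ}{8} = \frac{1080^\circ}{8} = 135^\circ \text{ per angle}
```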
Copyright © 2012 Lawrence Spector
| <urn:uuid:ad9c1f8d-de52-4e65-b208-1ca0fb3c3c25> | 3.96875 | 530 | Tutorial | Science & Tech. | 72.056351 | 1,247 |
The latest science, described in the World Bank report “Turn Down the Heat,” indicates that we are heading toward a 4° C warmer world, with catastrophic consequences in this century. While carbon dioxide (CO2) is still the No. 1 threat, there is another category of warming agent called short-lived climate pollutants (SLCPs). Mitigating these pollutants is a must if we want to avoid the 4° C warmer future.
The main SLCPs are black carbon, methane, tropospheric ozone, and hydrofluorocarbons. They are potentially responsible for more than one-third of the current warming. Because SLCPs have a much shorter lifetime in the air than CO2, reducing their emissions can produce an almost immediate reduction in global and regional warming, which is not possible by reducing CO2 emissions alone. According to one U.N. report, full implementation of 16 identified measures to mitigate SLCPs would reduce future global warming by about 0.5˚C.
In this blog, we will focus on one SLCP – black carbon. Black carbon is a primary component of particulate matter (PM), the major environmental cause of premature deaths globally. As a climate pollutant, black carbon’s global warming effects are multi-faceted. It can warm the atmosphere directly by absorbing radiation. When deposited on ice and snow, black carbon reduces their reflecting power and increases their melting rate. At the regional level, it also influences cloud formation and impacts regional circulation and rainfall patterns such as the monsoon in South Asia. | <urn:uuid:27df5ad7-cfaa-45dd-9780-199d0aa281ba> | 3.765625 | 322 | News (Org.) | Science & Tech. | 42.763036 | 1,248 |
What happens when you let two bots have a conversation? Cornell researchers Igor Labutov, Jason Yosinski and Hod Lipson find out. Follow the links at the bottom of this post for more about "AI vs. AI."
Laughing babies, talking dogs and Rebecca Black may be Internet sensations, but if you want to add something more substantive to your viral video diet, turn your dial to dueling chatbots, dancing Ph.D. theses and other highlights from the past year's surfeit of science videos.
Talking bots can be just as surprising and silly as talking dogs. Take "AI vs. AI," for example. Cornell researchers Igor Labutov, Jason Yosinski and Hod Lipson took two Cleverbot artificial-intelligence programs, hooked them up to each other, and typed in "Hi" as an ice-breaker. Hilarity ensues.
"We just assembled the pieces, the audio and the avatars, and let the program run," Lipson, an associate professor at the Cornell Creative Machines Lab, told me today.
The funniest line in the video comes when one AI program tells the other that they're chatting together as robots. The other bot replies, "I am not a robot, I am a unicorn." Where did that come from?
"The conversations are based on millions of conversations that it had before," Lipson said. "Probably this term is something it had encountered in some conversation with a human." The best guess is that someone made a reference to the unicorn from Lewis Carroll's "Through the Looking Glass," and somehow that stuck in the Cleverbot's electronic brain.
The takeaway is that artificially intelligent chatbots can become as petulant and irrational as the humans who made them. This Cleverbot conversation provides further evidence of that. ("I'm talking about you ... how you are a creep," one clone-bot tells another.)
Here are 10 other clever and creepy science videos from 2011 to while away the minutes with. I've added links to more information about each of them at the bottom of this item:
Science educator James Drake put together 600 pictures from the International Space Station to create this video view of an orbital night flight. It's been viewed more than 6 million times on YouTube since September. Follow the links at the bottom for more night-flight videos.
The top video in this year's "Dance Your Ph.D" contest was "Microstructure-Property Relationships in Ti2448 Components Produced by Selective Laser Melting: A Love Story" from Joel Miller on Vimeo. Follow the links at the bottom to watch more winners from the "Dance Your Ph.D" video file.
One of the year's most trafficked videos is "A Day Made of Glass," which depicts Corning's vision for a glassy future. It's been viewed more than 16 million times on YouTube since February. Follow the links at the bottom of this story for more about the future of glass.
An octopus rises from the deep at the Fitzgerald Marine Reserve in California ... and walks over land on its legs. It turns out this behavior is not all that uncommon. The video is among Txchnologist's top 10 science videos. Follow the links at the bottom for more about walking octopi and the Txchnologist list..
Speaking of octopi, here's a soft robot that crawls along a surface like an octopus out of water. Follow the links at the bottom to see more videos from Chemical & Engineering News.
Soft robots may look cute, but this hard-charging AlphaDog Proto looks downright creepy. It's being developed by Boston Dynamics with funding from DARPA and the U.S. Marine Corps. The first version of the complete robot will be ready in 2012. Follow the links at the bottom to learn more about AlphaDog.
Minute Physics focuses on the faster-than-light neutrino research in its latest video. Follow the links listed below for more from Minute Physics.
Quantum levitation sounds like a science-fiction phenomenon, but the Superconductivity Group at the University of Tel Aviv shows that it really, really works. Watch this report from TODAY.com's Dara Brown, and follow the links at the bottom of this post to learn more.
In one of a series of math-themed videos, Vi Hart takes potshots at pi and talks up tau instead. And she proves she can make a cherry pie. Follow the links at the bottom for more about Hart and Tau Day.
The "Readers Choice" honors in the 2011 Labby Awards went to "Weaver Ants" by Mark Moffett and Melissa Wells. This video was posted by thescientistllc on Vimeo. Follow the links below for more about the Labbies.
Update for 8:35 p.m. ET: For 10 more must-see, humorous science videos, check out this Tree of Life blog posting by UC-Davis biologist Jonathan A. Eisen. He says his No. 1 pick, the "Bad Project" Lady Gaga parody, is "simply awesome" — and I simply agree.
More about the videos:
- Cleverbots at Cornell: AI vs. AI
- How the Cleverbot chats like a human
- Night flights: Sleigh ride in orbit
- Night flights: The best of NASA's night lights
- Ph.D. dance-off makes science sexy
- A Day Made of Glass: The story from Corning
- Future of Tech: The evolution of glass
- Scientific American explains the walking octopus
- Txchnologist: Ten of 2011's top science videos
- Top 10 videos of 2011 from C&EN, including soft robot
- Four-legged battlefield robot evolves into 'AlphaDog'
- Minute Physics' YouTube channel
- Video wows with quantum levitation
- Vi Hart's math blog | The Tau Manifesto
- The Scientist's 2011 Labby Awards | Doctor Bugs
More year-end reviews:
- Cast your vote for the Weird Science Awards
- 11 scientific twists from 2011
- The biggest ancient mysteries of 2011
- The year in space | 2011 slideshow
- Who's on the A-list for bad celebrity science?
Alan Boyle is msnbc.com's science editor. Connect with the Cosmic Log community by "liking" the log's Facebook page, following @b0yle on Twitter and adding the Cosmic Log page to your Google+ presence. You can also check out "The Case for Pluto," my book about the controversial dwarf planet and the search for new worlds. | <urn:uuid:8f334ce1-cd67-4c1b-b102-02226b501b74> | 2.59375 | 1,362 | Content Listing | Science & Tech. | 62.630093 | 1,249 |
Cragg, Rohan G. and Bardgett, Richard D. (2001) How changes in animal diversity within a soil trophic group influence ecosystem processes. Soil Biology and Biochemistry, 33 (15). pp. 2073-2081. ISSN 0038-0717. Full text not available from this repository.
There are few experimental data on the consequence of varying the composition and diversity of soil animal communities, or soil food-webs, on ecosystem properties. Here, we tested the hypothesis that varying the diversity and composition of soil animals within a trophic group, the microbial-feeders, affects litter decomposition and nutrient flux in grassland. Microcosms containing grassland plant litter were inoculated with individual species of Collembola (Folsomia candida, Pseudosinella alba, and Protaphorura armata), and all possible two and three species combinations of these species. Our data show that towards the end of the experiment individual species of Collembola, and especially F. candida, had markedly different, but positive, effects on measures of litter mass loss, microbial activity (CO2 respiration) and the leaching of dissolved organic carbon (DOC) and nitrate-N. Two and three species combinations of Collembola revealed that effects of fauna on ecosystem processes were due to differences in the composition of the collembolan community, rather than the number of species present. In comparison to a treatment that had no fauna, significantly higher rates of litter mass loss, microbial activity, and DOC and nitrate release were detected only in microcosms that contained F. candida. There was no evidence of effects of F. candida in combination with other species, relative to effects of F. candida alone, on the above properties. These findings support the notion that changes in the diversity of microbivorous fauna may not have a predictable effect on the rates of decomposition processes and that the functioning of the microbial-feeding trophic group is influenced mainly by the physiological attributes of the dominant animal species present, in this case F. candida.
|Journal or Publication Title:||Soil Biology and Biochemistry|
|Uncontrolled Keywords:||Decomposition ; Diversity ; Collembola ; Trophic group ; Soil fauna|
|Subjects:||Q Science > QH Natural history > QH301 Biology|
|Departments:||Faculty of Science and Technology > Lancaster Environment Centre|
|Deposited By:||Prof Richard Bardgett|
|Deposited On:||10 Jul 2008 16:40|
|Last Modified:||26 Jul 2012 14:46|
Actions (login required) | <urn:uuid:6411f64e-22a0-4276-a6df-22fe6873c47a> | 2.53125 | 563 | Academic Writing | Science & Tech. | 24.210428 | 1,250 |
This is an image of an unidentified environmental microbial community collected from a shallow subsurface sediment sample. The sample was taken from the Gulf of Mexico at a depth of 575 meters and photographed using a DNA DAPI fluorescent stain. The stain fluoresces blue to count the cells found in the sediment sample. Image Credit: Heath Mills/TAMU
Foraminifera, like the one seen here, are tiny creatures in the ocean about the size of the head of a pin that are surrounded by calcium carbonate shells, similar to the shells around other sea creatures. Matthew Schmidt, a Texas A&M oceanographer, uses the foraminifera shells taken from ocean core samples to gather clues about the creature's surroundings, which helps scientists understand the conditions present at the start of the Younger Dryas period. Photo by Howard Spero at University of California Davis.
The mutton snapper inhabits much of the Atlantic Ocean, from Massachusetts to Brazil. Texas A&M Geography doctoral candidate Pablo Granados-Dieseldorff studies the mutton snapper in its spawning ground, the Mesoamerican Reef, which runs from Mexico to Honduras, in hopes of generating science-based conservation methods to protect both fish and habitat.
Peer into the interior of a thermal ionization mass spectrometer, located in the R. Ken Williams '45 Radiogenic Isotope Geosciences Laboratory. The instrument detects minute differences in the sub-atomic makeup of elements. Researchers use these differences found in rocks, minerals, sediments and fossils to trace ancient ocean and atmospheric circulation patterns during periods of past climate change. They can also use isotopic compositions of uranium and lead to date rocks that are millions to billions of years old.
A drill bit from the Joides Resolution, a drilling vessel used by researchers in Texas A&M’s Integrated Ocean Drilling Program. This photo was taken during Program Expedition 321 in the equatorial Pacific Ocean, during which researchers obtained sediments from the sea floor in order to reconstruct a detailed record of climate change over the last 55 million years. Researchers looked at minerals as well as microscopic fossils to construct the history.Photo by Bridget Wade
This is the image you would see were you to stand just south of the Endurance Crater on the surface of Mars and gaze northward. Endurance was visited by NASA’s Mars Exploration Rover Opportunity from May to December, 2004. Images and measurements taken by Opportunity led scientists to conclude that liquid water flowed episodically through the area in ancient times. Texas A&M Geosciences professor Mark Lemmon played integral roles as atmospheric sciences lead in the successful missions of both Mars rovers, Spirit and Opportunity. More recently, he has also contributed to efforts in the Phoenix Lander, which first encountered Mars in May, 2008, and the Mars Science Laboratory (nicknamed Curiosity), which is scheduled for launch in November, 2011. Image Credit: NASA/JPL/Cornell
Pictured on Abraham Lincoln’s nose, the tiny mineral zircon is used by geochronologists such as TAMU Geology and Geophysics professor Brent Miller to date rocks that are millions to billions of years old.The mineral is found in volcanic rocks that are inter-bedded with fossil-bearing sedimentary rocks. This provides one of the best ways to determine the ages of long-extinct species. Once-molten rocks that crystallized deep underground during plate tectonic collisions also contain zircon. The age of these zircons can be linked to the crystallization of the molten rock and thus give scientists a way to clock ancient mountain building processes. | <urn:uuid:322de67f-93aa-4dfe-a64b-20f475fa6ef8> | 3.734375 | 749 | Content Listing | Science & Tech. | 30.758945 | 1,251 |
ICON Web & News
In Situ Synchrotron X-ray Fluorescence Mapping and Speciation of CeO2 and ZnO Nanoparticles in Soil Cultivated Soybean (Glycine max)
With the increased use of engineered nanomaterials such as ZnO and CeO2 nanoparticles (NPs), these materials will inevitably be released into the environment, with unknown consequences. In addition, the potential storage of these NPs or their biotransformed products in edible/reproductive organs of crop plants can cause them to enter into the food chain and the next plant generation. Few reports thus far have addressed the entire life cycle of plants grown in NP-contaminated soil. Soybean (Glycine max) seeds were germinated and grown to full maturity in organic farm soil amended with either ZnO NPs at 500 mg/kg or CeO2 NPs at 1000 mg/kg. At harvest, synchrotron µ-XRF and µ-XANES analyses were performed on soybean tissues, including pods, to determine the forms of Ce and Zn in NP-treated plants. The X-ray absorption spectroscopy studies showed no presence of ZnO NPs within tissues. However, µ-XANES data showed O-bound Zn, in a form resembling Zn-citrate, which could be an important Zn complex in the soybean grains. On the other hand, the synchrotron µ-XANES results showed that Ce remained mostly as CeO2 NPs within the plant. The data also showed that a small percentage of Ce(IV), the oxidation state of Ce in CeO2 NPs, was biotransformed to Ce(III). To our knowledge, this is the first report on the presence of CeO2 and Zn compounds in the reproductive/edible portion of the soybean plant grown in farm soil with CeO2 and ZnO NPs.
For this study, soybean (Glycine max) seeds were germinated and grown to full maturity in organic farm soil amended with either ZnO NPs at 500 mg/kg or CeO2 NPs at 1000 mg/kg. At harvest, synchrotron µ-XRF and µ-XANES analyses were performed on soybean tissues, including pods, to determine the forms of Ce and Zn in NP-treated plants.
Peer Reviewed Journal Article
Exposure Or Hazard Target
Method Of Study
Environmental Fate and Transport
Risk Exposure Group
ACS Nano, 2013, 7(2): 1415-1423
Hernandez-Viezcas JA, Castillo-Michel H, Andrews JC, Cotte M, Rico C, Peralta-Videa JR, Ge Y, Priester JH, Holden PA, Gardea-Torresdey JL
Last updated on May 3, 2013
This work is supported in part by the Nanoscale Science and Engineering Initiative of the National Science Foundation
under NSF Award Number EEC-0118007.
Why Join Us?
Mission and Strategy
Good Nano Guide
Nano EHS Research Needs
Current Practices Survey | <urn:uuid:f6844639-952e-49f8-b074-2d30e7c2e568> | 2.703125 | 702 | Academic Writing | Science & Tech. | 43.647634 | 1,252 |
At its distant orbit, Webb is much too far from Earth to be reached by the space shuttle. Webb's science mission length is 5 years with a 10-year goal. To ensure the 5-year mission, NASA has engineered the observatory so that all critical subsystems have a backup or will degrade gracefully with age. For instance, the Near Infrared Camera has two identical camera systems so that the optical quality can be maintained even if one fails.
Webb will also contain enough fuel for 10 years of maneuvers. As with Hubble, Chandra, and Spitzer, the Webb science and operations center has the ability to change the operations of the observatory to maximize its scientific potential as it ages.
HubbleSite and STScI are not responsible for content found outside of hubblesite.org and stsci.edu | <urn:uuid:ea34cb02-989e-434c-8368-a17e06db2b00> | 3.421875 | 166 | Knowledge Article | Science & Tech. | 48.372647 | 1,253 |
The answer appears to be yes, that we can construct such numbers at present. The techniques that have been used recently have their roots around 1985 when elliptic curves were first applied to cryptography and factorization and when personal computers with RAM by the megabyte became common.
I would like to thank Charles for reminding me that a product of exactly two primes is called a semiprime.
Chris K. Caldwell, a professor at the University of Tennessee at Martin whose current research interest is prime number theory, writes that "small examples of proven, unfactored, semiprimes can be easily constructed." What is easy for him is not so easy for me, but it might not be too hard if I would re-read my copy of Bressoud's Factorization and Primality Testing.
Proven, unfactored semiprimes are called "interesting semiprimes" by Don Reble, a software consultant who took up the problem from (at least his interpretation of) remarks by Ed Pegg, Jr. There are at least two examples online, a 1084-digit interesting semiprime constructed by Don Reble and a 5061-digit interesting semiprime constructed by David Broadhurst, a theoretical high energy physicist.
Reble's interesting semiprime is in a text file that presents some parameters for a proof and the proof itself. It relies on properties of elliptic curves and is therefore currently over my head. Part of Reble's proof is that his semiprime survives a check that it is not a base-two strong probable prime.
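For readers curious what that base-two check involves, below is a minimal sketch of a strong probable-prime test to base 2. It is a generic Python illustration written for this summary, not Reble's code.

```python
# Base-two strong probable-prime test (generic sketch, not Reble's code).
# Write n - 1 = d * 2^s with d odd; n passes if 2^d = 1 (mod n) or
# 2^(d * 2^r) = n - 1 (mod n) for some 0 <= r < s.
def is_strong_probable_prime_base2(n: int) -> bool:
    if n < 3 or n % 2 == 0:
        return n == 2
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(2, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = (x * x) % n
        if x == n - 1:
            return True
    return False

# A small semiprime such as 15 = 3 * 5 fails, while the prime 97 passes.
print(is_strong_probable_prime_base2(15))  # False
print(is_strong_probable_prime_base2(97))  # True
```

In Reble's proof the check runs the other way: his number fails this test, which proves it is composite, while the rest of his argument establishes that it has exactly two prime factors.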
Broadhurst's interesting semiprime is in a text file that can be input to Pari. He has written there the relatively elementary conditions and the parameters that he used in order to prove that his number is a semiprime, basing his work on Reble's. He provides the location of a certificate that one of his parameters was proven prime using the free-of-cost, closed-source program Primo by Marcel Martin. Primo is an implementation of elliptic curve primality proving. For suggesting the problem, Broadhurst thanked Reble and Phil Carmody, a Linux kernel developer and researcher in high-performance numerical computing. | <urn:uuid:0e092c1a-3b18-4556-b79e-25340009ab5d> | 2.703125 | 461 | Q&A Forum | Science & Tech. | 35.372676 | 1,254 |
Detection of protandry and protogyny from infructescences
|Sycamore index page|
|Invasive Woody Plants||
As described in the Sex Expression section sycamore trees may be protandrous when their inflorescences start with a sequence of male flowers followed by a sequence of female flowers, or protogynous when the reverse sequence occurs.
Protogynous individuals will produce inflorescences of Mode B and very rarely a few of Mode G. Protandrous individuals are far more variable as they have inflorescences of Mode C, D, or E, or a mixture of these. Male flowering trees are described as protandrous because in some years some or even all their inflorescences have female flowers. Similarly protandrous individuals will exhibit large annual variation in the proportion of inflorescences of Mode C, D and/or E. The existence of female flowering individuals has been reported. There is no evidence that trees will change their modes of sex expression with age.
In sycamore certain characters such as fruit production, fruit dry weight, percentage of fertilized fruits and the number of carpels per fruit, may vary between morphs (see fruit set). In order to explain such variation it is then essential to know the sexual morph of the trees studied. Because sexing the flowers of trees in spring is rather time consuming and/or impractical on tall specimens, a method using the morphological characters of the infructescence was developed.
Reliability of method
The method for the identification of the sex expression of the inflorescences using infructescences was developed in 1983, and was tested with 95% success when comparing the flowering data of 240 trees obtained in spring and the determination of the sex expression using infructescences in the autumn. The test was repeated in 1984 with 100% success.
Using fruiting material only, one can differentiate between protandrous and protogynous individuals and also between infructescences of Mode B and Mode G of the latter group using the characters listed in Table 1 and as shown diagrammatically in Fig. 1. In Mode G the structure and position of the fruits of the first part of the stalk is similar to the normal protogynous infructescence (Mode B), but it is longer and it bears a few small parthenocarpic fruits at its end (Fig. 1C). It is however impossible to distinguish between infructescences from Mode C and D. The very few male flowering individuals (Mode E; less than 1% in Ireland) will not be recognised with this method and only the shoot morphology will provide indications of their existence. In these, soon after flowering, the inflorescences will fall and the two growing terminal buds will be closely appressed (Fig. 1G). On the other hand in other modes of flowering, female flowers, even if unfertilized, will produce fruits, because of a high parthenocarpic tendency in maples. Such infructescences will remain on the trees most of the summer leading to two well separated terminal buds (Fig. 1H). It should be noted that some small flowering side shoots may not produce any buds.
Only practice allows one to determine the sex expression of the individual with accuracy from infructescence material, and whilst the majority of the individuals examined fit easily into one or other of the two morphs, nevertheless some trees have features which do not always fit entirely the description given in Table 1 and Fig. 1. For instance, some infructescences of Mode B do have a terminal fruit, but this is never the case for protandrous modes of flowering.
The size of fruits and infructescences, and the number of fruits per infructescence given in Table 1 are applicable to sycamores encountered in most of the British Isles and the Alps. However in areas with a very favourable climate (e.g. some parts of lowland Switzerland) measurements of fruit and infructescence size and the number of fruits per infructescence may be higher, and therefore the values listed in Table 1 may be misleading.
Table 1. Morphological data from infructescences differentiating between protandrous and protogynous individuals, and also between Mode B and Mode G of the latter group.
|Copyright © 2000 Pierre Binggeli. All rights reserved.| | <urn:uuid:d7a8d506-f3eb-48a2-8315-51fb1a7dfd94> | 2.734375 | 897 | Knowledge Article | Science & Tech. | 34.92597 | 1,255 |
Table of Contents
Once you have built and installed a package, you can create a binary package which can be installed on another system with pkg_add(1). This saves having to build the same package on a group of hosts and wasting CPU time. It also provides a simple means for others to install your package, should you distribute it.
To create a binary package, change into the appropriate directory in pkgsrc and run make package.
This will build and install your package (if not already done), and then build a binary package from what was installed. You can then use the pkg_* tools to manipulate it. Binary packages are created by default in /usr/pkgsrc/packages, in the form of a gzipped tar file. See Section B.2, “Packaging figlet” for a continuation of the above example.
See Chapter 21, Submitting and Committing for information on how to submit such a binary package. | <urn:uuid:b0d632bb-6602-4fa1-a933-640d6b1764e1> | 2.578125 | 204 | Documentation | Software Dev. | 50.293713 | 1,256 |
Solar Adaptive Optics Project
Smoothing out the wrinkles in our view of the Sun
Solar scientists face the same challenge as night-time astronomers when observing from the ground: Earth's atmosphere blurs the view. Astronomers speak of being "seeing limited," or restricted to what atmospheric turbulence allows. The turbulence acts as a flexible lens, constantly reshaping what we are studying, and putting many of the answers about solar activity just beyond our reach.
Sample images with AO off (left) and on (center). In the original, each frame covers an area 45x45 arc-sec and was taken at 550 nm wavelength (10 nm interference filter) using the Baja Technology camera in April 2003. The relative size of Earth and a 1 arc-sec square are superimposed in the last frame.
Bigger telescopes can see fainter objects but with no more detail than mid-size telescopes. The closeness and brightness of the Sun make no difference: sunlight passes through the same atmosphere (usually more disturbed because the Sun heats the ground and air during the day). Solar observations from Earth have the same limit of about 1 arc-second as nighttime astronomy (1 arc-second = about 1/1920th the apparent size of the Sun or Moon; 1/1,296,000th of a circle).
An innovative solution, evolving since the 1990s, is to measure how much the air distorts the light and then adjust mirrors or lenses to cancel much of the problem. This is adaptive optics (AO), a sophisticated blend of computers and optics. For more than a decade, night astronomers have used AO to let a larger number of telescopes operate closer to their diffraction limit, the theoretical best set by the size of a telescope and how light forms images.
Applying AO to solar astronomy is a bigger challenge, though. Where night astronomers have high-contrast pinpoints -- stars against a black sky -- to measure how the light is distorted, solar astronomers have large, low-contrast targets -- such as sunspots and granules -- comprising an infinite number of point sources. This has required a different approach.
Left: The mirror at center right doesn't look like an ironing board, but that's its basic role in a new high-order adaptive optics system that cancels most of the atmosphere's blurring.
Since the late 1990s the National Solar Observatory has been advancing the Shack-Hartmann technique. We divide the solar image into subapertures then deform a flexible mirror so each subaperture matches one reference subaperture. In 1998 we applied a low-order AO system to the Dunn Solar Telescope, thus allowing it to operate near its diffraction limit under moderately good atmospheric conditions. This technology now is applied at several solar telescopes around the world.
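In the correlating Shack-Hartmann scheme this amounts to measuring, for every subaperture, how far its image of the granulation pattern is displaced from the reference subaperture; those displacements give the local wavefront tilts the deformable mirror must cancel. The fragment below is only an illustrative Python sketch of that correlation step (no windowing, subpixel fitting, or real-time constraints), not NSO's actual code.

```python
# Estimate the (dy, dx) image shift of one subaperture relative to a
# reference subaperture via FFT cross-correlation (illustrative only).
import numpy as np

def subaperture_shift(sub_img, ref_img):
    f = np.fft.fft2(sub_img - sub_img.mean())
    g = np.fft.fft2(ref_img - ref_img.mean())
    corr = np.fft.ifft2(f * np.conj(g)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices so the shift can be negative as well as positive.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

# Synthetic check: a reference pattern shifted by (2, -3) pixels is recovered.
rng = np.random.default_rng(0)
ref = rng.normal(size=(32, 32))
sub = np.roll(np.roll(ref, 2, axis=0), -3, axis=1)
print(subaperture_shift(sub, ref))  # (2, -3)
```

A real system repeats this for many subapertures thousands of times per second and converts the full set of tilts into mirror commands.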
NSO continues this important research and in late 2002 demonstrated a high-order AO system that will allow the Dunn to operate at its diffraction limit under a wider range of atmospheric conditions. Our goal is to expand this capability into a system roughly 100 times as complex and capable, in support of the planned 4-meter Advanced Technology Solar Telescope (ATST). This will let us grasp many of the details that are beyond our reach now and that we need to start answering vital questions about solar activities.
The current High-Order Adaptive Optics (AO) development project is a partnership between NSO and the New Jersey Institute of Technology, supported by the NSF's Major Research Instrumentation division. | <urn:uuid:7063bc01-3af8-4b61-841f-b1f64d12a3f7> | 3.890625 | 719 | About (Org.) | Science & Tech. | 38.267946 | 1,257 |
Scientists have produced the most extensive map of Arctic sea-ice thickness yet using just two months' worth of data from the European Space Agency's ice mission, CryoSat-2.
Data from the satellite has also helped them create an updated map of ocean circulation in the Arctic, and a topographical relief map of Antarctica.
All three maps demonstrate that CryoSat-2 is working well and, in some cases, is exceeding expectations.
'This is the first time we've been able to measure sea-ice thickness over almost the entire Arctic ice pack,' says Dr Seymour Laxon, director of the Centre for Polar Observation and Modelling (CPOM) at University College London, a member of the research team.
'The map shows clear agreement with data gathered from aircraft during a recent Arctic campaign, showing that CryoSat-2 can accurately measure changes in ice thickness.'
'We can't yet say anything about changes; for that you need a longer dataset,' he adds.
The sea-ice thickness map is based on data from January and February 2011 and shows thicker, rough, multi-year ice which has survived last summer's melt north of Canada and Greenland, stretching to the North Pole and slightly beyond. Elsewhere in the Arctic the map reveals thinner, first year ice, and corresponds well with maps produced by other researchers.
'Other European Space Agency satellites, like Envisat and ERS-1 have let us build a map of sea-ice thickness up to 81.5 degrees north. But CryoSat-2 goes right up to 88 degrees north, which means we've got more coverage up to the North Pole,' says Dr Katharine Giles, also from CPOM.
CryoSat-2 is designed to take precise measurements of changes in the thickness of ice in the Arctic and Antarctica, helping scientists understand how melting polar ice could affect ocean circulation patterns, sea-level rise and the global climate.
The satellite measures the thickness of polar ice using an instrument called an altimeter, which fires pulses of microwave energy at the ice and records how long they take to return.
Researchers at CPOM calculate the thickness of the ice by comparing how long it takes for the echoes to return from the top of ice floes and from the water in cracks in the ice, called leads. The aim is to measure the freeboard, the part of the ice that sits above the waterline.
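As a rough illustration of why freeboard is the quantity of interest: for floating ice in hydrostatic balance, a small freeboard implies a much larger total thickness. The densities below are typical assumed values chosen for illustration, and snow loading (which real retrievals must account for) is ignored; this is not the CPOM processing chain.

```python
# Convert an ice freeboard measurement to a thickness estimate using
# hydrostatic equilibrium: rho_ice * H = rho_water * (H - freeboard).
def thickness_from_freeboard(freeboard_m,
                             rho_ice=917.0,      # kg/m^3, assumed
                             rho_water=1027.0):  # kg/m^3, assumed
    return freeboard_m * rho_water / (rho_water - rho_ice)

# A 30 cm freeboard corresponds to roughly 2.8 m of ice under these assumptions.
print(round(thickness_from_freeboard(0.30), 2))
```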
The satellite can also tell scientists how winds affect the Arctic Ocean by measuring differences in the height of the sea surface exposed between ice floes.
Echoes returning from leads have a much sharper signature than echoes from the ice. It's this data that has allowed the CPOM researchers to produce a map of ocean circulation in the Arctic using CryoSat-2 data.
They created a similar map in December 2010. But most of the data for that map came from another ESA satellite called Envisat. The CPOM team used CryoSat-2 data to plug a huge hole over the North Pole left by Envisat.
CryoSat-2 can also measure the height of the ice around the edges of Greenland and Antarctica, which is important for understanding changes in ice thickness.
To test how well it does this, the researchers switched the satellite to a different measurement mode as it passed over a prominent chain of mountains under the sea around Hawaii. The mountains in the Hawaiian-Emperor Seamount Chain are so enormous they change how gravity acts on the ocean above them, creating slopes and troughs at the surface.
'We were astonished to find we could measure tiny changes in the ocean surface caused by the seamounts lying deep under water,' says Dr Natalia Galin, also from CPOM.
The satellite is in a polar orbit around 700 kilometres above the Earth. It's expected to be in operation for three years, 'but has enough fuel onboard to keep going for up to seven years,' says Professor Duncan Wingham from CPOM, who conceived the idea for CryoSat-2 more than ten years ago.
CryoSat-2 was launched onboard a Dnepr rocket (a converted intercontinental ballistic missile) from the Baikonur cosmodrome in Kazakhstan on 8 April 2010.
Wingham presented the team's results at the Paris Air and Space Show today.
Explore further: The tropical upper atmosphere 'fingerprint' of global warming | <urn:uuid:9594179c-de64-49cb-88b5-5d7e5c95636f> | 3.6875 | 895 | News Article | Science & Tech. | 44.43301 | 1,258 |
Major Section: HISTORY
Example:
:pe fn  ; sketches the command that introduced fn and
        ; prints in full the event within it that created fn.
See logical-name.

Pe takes one argument, a logical name, and prints in full the event corresponding to the name. Pe also sketches the command responsible for that event if the command is different from the event itself. See pc for a description of the format used to display a command. To remind you that the event is inferior to the command, i.e., you can only undo the entire command, not just the event, the event is indented slightly from the command and a slash (meant to suggest a tree branch) connects the two.

If the given logical name corresponds to more than one event, then :pe will print the above information for every such event. Here is an example of such behavior.
ACL2 !>:pe nth
      -4270  (ENCAPSULATE NIL ...)
             \
>V           (VERIFY-TERMINATION NTH)

Additional events for the logical name NTH:
 PV  -4949  (DEFUN NTH (N L)
             "Documentation available via :doc"
             (DECLARE (XARGS :GUARD (AND (INTEGERP N)
                                         (>= N 0)
                                         (TRUE-LISTP L))))
             (IF (ENDP L)
                 NIL
                 (IF (ZP N) (CAR L) (NTH (- N 1) (CDR L)))))
ACL2 !> | <urn:uuid:fc6923c6-3f17-4acb-84f4-38c1724a11bc> | 3.328125 | 323 | Documentation | Software Dev. | 71.420293 | 1,259 |
This phenomenon has been explained by the Zetas and is thoroughly documented on this blog.
While the "official" cause of such massive fish kills is often attributed to hypoxia (lack of oxygen), what is conveniently excluded in these opaque explanations is that high concentrations of dissolved methane essentially expel oxygen, thus rendering water and air uninhabitable for the fish and birds encountering it.
"Dead fish and birds falling from the sky are being reported worldwide, suddenly. This is not a local affair, obviously. Dead birds have been reported in Sweden and N America, and dead fish in N America, Brazil, and New Zealand. Methane is known to cause bird dead, and as methane rises when released during Earth shifting, will float upward through the flocks of birds above. But can this be the cause of dead fish? If birds are more sensitive than humans to methane release, fish are likewise sensitive to changes in the water, as anyone with an aquarium will attest. Those schools of fish caught in rising methane bubbles during sifting of rock layers beneath them will inevitably be affected. Fish cannot, for instance, hold their breath until the emergency passes! Nor do birds have such a mechanism." ZetaTalk
Click on Map below for interactive version:
yellow=2011, blue=2012, red=2013
Some of the Evidence:
Youtube video up to Jan 30, 2011
5000+ Black Birds
500+ Black Birds
100,000 Drum Fish
Tens of Thousands - Fish
Thousands of Fish
Thousands of Fish
Dozens of fish in just 50 feet
50 - 100 Birds - Jackdaws
100 Tons of Fish
Hundreds of Snapper
10 Tons of fish
Hundreds of fish
Thousands of fish
Hundreds of Fish
Hundreds of Fish
Scores of Fish
Hundreds of Fish
150 Tons of Red Tilapias
Thousands of Fish
Scores of dead fish
Hundreds of Starfish, Jellyfish
Main source: http://maps.google.com/maps/ms?ie=UT...bca25af104a22b
DEAD FISH IN 36 LAKES IN CONNECTICUT!
MASS FISH DIE-OFF IN MICHIGAN!
HEAPS OF DEAD FISH AT BAY STATE PONDS!
DOZENS OF DEAD FISH FOUND IN MADISON POND!
RED SAND LAKE FISH DIE-OFF!
MELTING LAKES REVEAL HUNDREDS OF DEAD FISH!
HUNDREDS OF DEAD FISH IN MEADOWS RIVER
DEAD BIRDS FALL FROM THE SKY IN KANSAS!
TENS OF THOUSANDS OF DEAD FISH IN INDIA!
LAKE MAARDU WITHOUT FISH!
MASSIVE FISH MOR IN THE LIPETSK REGION!
100 TONNES OF DEAD FISH IN UKRAINE!
PENGUINS LOSING THEIR FEATHERS TO UNKNOWN ILLNESS!
DEAD TURTLES FOUND ON AUSTRALIAN BEACH!
Animal Death List
4th June 2011 - 800 Tons of fish dead in a lake near the Taal Volcano in the Philippines.
13th May 2011 - Dozens of Sharks washing up dead in California.
13th May 2011 - Thousands of fish wash up dead on shores of Lake Erie in Ohio.
6th May 2011 - Record number of wildlife die-offs in The Rockies during the winter.
1st May 2011 - Two giant Whales wash ashore and die on Waiinu Beach in New Zealand.
22nd April 2011 - Leopard Sharks dying in San Francisco Bay.
20th April 2011 - 6 Tons of dead Sardines found in Ventura Harbour in Southern California.
20th April 2011 - Hundreds of Dead Abalone and a Marlin wash up dead on Melkbos Beach near Cape Town.
18th April 2011 - Hundreds of dead fish found in Ventura Harbour in Southern California.
29th March 2011 - Over 1300 ducks die in Houston Minnesota.
28th March 2011 - Sei Whale washes up dead on beach in Virginia.
26th March 2011 - Hundreds of fish dead in Gulf Shores.
8th March 2011 - Millions of dead fish in King Harbor Marina in California.
3rd March 2011 - 80 baby Dolphins now dead in Gulf Region.
25th February 2011 - Avian Flu - Hundreds of Chickens die suddenly in North Sumatra Indonesia.
23rd February 2011 - 28 baby Dolphins wash up dead in Alabama and Mississippi.
21st February 2011 - Big Freeze kills hundreds of thousands of fish along coast in Texas.
21st February 2011 - Bird Flu? 16 Swans die over 6 weeks in Stratford-Upon-Avon, UK.
20th February 2011 - Over 100 whales dead in Mason Bay, New Zealand.
20th February 2011 - 120 Cows found dead in Banting, Malaysia.
19th February 2011 - Many Blackbirds found dead in Ukraine.
16th February 2011 - 5 Million dead fish in Mara River, Kenya.
16th February 2011 - Thousands of fish and several dozen ducks dead in Ontario, Canada.
16th February 2011 - Mass fish death in Black Sea Region in Turkey.
11th February 2011 - 20,000 Bees died suddenly in a biodiversity exhibit in Ontario, Canada.
11th February 2011 - Hundreds of dead birds found in Lake Charles, Louisiana.
9th February 2011 - Thousands of dead fish wash ashore in Florida.
8th February 2011 - Hundreds of Sparrows fall dead in Rotorua, New Zealand.
5th February 2011 - 14 Whales die after being beached in New Zealand.
4th February 2011 - Thousands of various fish float dead in Amazon River and in Florida.
2nd February 2011 - Hundreds of Pigeons dying in Geneva, Switzerland.
31st January 2011 - Hundreds of thousands of Horse Mussel Shells wash up dead on beaches in Waiheke Island, New Zealand.
27th January 2011 - 200 Pelicans wash up dead on Topsail Beach in North Carolina.
27th January 2011 - 2000 Fish dead in Bogota, Columbia.
23rd January 2011 - Hundreds of dead fish in Dublin, Ireland.
22nd January 2011 - Thousands of dead Herring wash ashore in Vancouver Island, Canada.
21st January 2011 - Thousands of fish dead in Detroit River, Michigan.
20th January 2011 - 55 dead Buffalo in Cayuga County, New York.
18th January 2011 - Thousands of Octopus wash up in Vila Nova de Gaia, Portugal.
17th January 2011 - 10,000 Buffalos and Cows died in Vietnam.
17th January 2011 - Hundreds of dead seals washing up on shore in Labrador, Canada.
15th January 2011 - 200 dead Cows found in Portage County, Wisconsin.
14th January 2011 - Massive fish death in Baku, Azerbaijan.
14th January 2011 - 300 Blackbirds found dead on highway I-65 south of Athens in Alabama.
7th January 2011 - 8,000 Turtle Doves rain down dead in Faenza, Italy.
6th January 2011 - Hundreds of dead Grackles, Sparrows & Pigeons were found dead in Upshur County, Texas.
5th January 2011 - Hundreds of Dead Snapper with no eyes washed up on Coromandel beaches in New Zealand.
5th January 2011 - 40,000+ crabs wash up dead in Kent, England.
4th January 2011 - 100 Tons of Sardines, Croaker & Catfish wash up dead on the Parana region shores in Brazil.
4th January 2011 - 3,000+ dead Blackbirds found in Louisville, Kentucky.
4th January 2011 - 500 Dead Red-winged blackbirds & Starlings in Louisiana.
4th January 2011 - Thousands of dead fish consisting of Mullet, Ladyfish, Catfish & Snook in Volusia County, Florida.
3rd January 2011 - 2,000,000 (2 Million) Dead fish consisting of Menhayden, spots & Croakers wash up in Chesapeake Bay, Maryland & Virginia.
1st January 2011 - 200,000+ Dead fish wash up on the shores of Arkansas River, Arkansas.
1st January 2011 - 5,000+ Red-winged blackbirds & Starlings fall out of the sky dead in Beebe, Arkansas.
20th December 2010 (est. date) - Thousands of Crows, Pigeons, Wattles & Honeyeaters fell out of the sky in Esperance, Western Australia.
2nd November 2010 - Thousands of sea birds found dead in Tasmania, Australia. | <urn:uuid:6b7427b8-34a2-4e75-b0da-eca0f34b3001> | 2.921875 | 1,809 | Personal Blog | Science & Tech. | 73.702063 | 1,260 |
Analyzing the dynamic hydrologic conditions of the Sandhills is critical for water and range management, for the sustainability of the Sandhills ecosystem, and for dune stability. There are complex models available to quantify both surface and subsurface hydrological processes. However, we present in this study an application of a relatively simple model to arrive at best estimates of the water balance components. Using the Thornthwaite-Mather (TM) model, water balance components were estimated for 4 Automated Weather Data Network (AWDN) weather monitoring stations. Estimated averages of the water balance components suggested that mean annual precipitation of these four sites was only about 420 mm but water loss through plant evapotranspiration (ET) was 861 mm, with potential evapotranspiration (PET) of about 1214 mm. Our investigation shows that there was a surplus of water between December and March, and that a deficit occurs at the start of the growing season in May and extends through senescence in September-October. This study also suggests that the High Plains aquifer possibly met the plant water requirement during this deficit period as well as during the soil water extraction period, from May through September.
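For readers unfamiliar with the TM approach, its monthly bookkeeping can be sketched as follows. This is a simplified illustration with invented inputs, not the authors' model or data; the full TM method also draws the soil store down along an exponential curve via an accumulated-potential-water-loss term, whereas this sketch drains it linearly.

```python
# Simplified monthly water-balance bookkeeping in the spirit of the
# Thornthwaite-Mather method. p and pet are monthly totals in mm; awc is the
# soil's available water capacity in mm. All numbers are illustrative.
def water_balance(p, pet, awc=150.0, storage=150.0):
    rows = []
    for precip, demand in zip(p, pet):
        if precip >= demand:                    # wet month: refill the store
            aet = demand
            storage += precip - demand
            surplus = max(0.0, storage - awc)   # overflow leaves as surplus
            storage = min(storage, awc)
            deficit = 0.0
        else:                                   # dry month: mine the store
            draw = min(storage, demand - precip)
            storage -= draw
            aet = precip + draw
            surplus = 0.0
            deficit = demand - aet              # unmet atmospheric demand
        rows.append((round(aet, 1), round(surplus, 1),
                     round(deficit, 1), round(storage, 1)))
    return rows

p   = [10, 12, 25, 45, 75, 70, 65, 60, 45, 35, 18, 12]     # invented, mm
pet = [2, 5, 25, 70, 120, 160, 175, 150, 95, 45, 12, 3]    # invented, mm
for month, row in zip("JFMAMJJASOND", water_balance(p, pet)):
    print(month, "AET/surplus/deficit/storage =", row)
```

With inputs shaped like the Sandhills climate, such a tally produces cool-season surpluses and growing-season deficits, the same qualitative pattern the abstract describes.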
Sridhar, Venkataramana and Hubbard, K. G.. (2010). "Estimation of the Water Balance Using Observed Soil Water in the Nebraska Sandhills". Journal of Hydrologic Engineering, 15(1), 70-78. http://dx.doi.org/10.1061/(ASCE)HE.1943-5584.0000157 | <urn:uuid:09620dee-398a-45d6-8956-597902aea460> | 2.59375 | 313 | Academic Writing | Science & Tech. | 41.621746 | 1,261 |
Parks in this Network
Southwest Alaska Network
The SWAN I&M Program is one of 32 National Park Service I&M Networks across the country established to facilitate collaboration, information sharing, and economies of scale in natural resource monitoring. It comprises five national park units, each of which contains a rich and varied array of natural and cultural resources. These parks and their partners are dedicated to understanding and preserving the region's unique resources through science and education.
A summary of projects, observations, and new methods used by SWAN during the 2012 summer field season
A listing of published reports concerning natural resources that are monitored and managed at Southwest Alaska Network parks. | <urn:uuid:d9ebb430-62e3-4718-a7b4-1b4c943737f0> | 2.5625 | 135 | About (Org.) | Science & Tech. | 19.139655 | 1,262 |
While our direct knowledge of black holes in the universe is limited to what we can observe from thousands or millions of light years away, a team of Chinese physicists has proposed a simple way to design an artificial electromagnetic (EM) black hole in the laboratory.
In the Journal of Applied Physics, Huanyang Chen at Soochow University and colleagues have presented a design for an artificial EM black hole that uses five types of composite isotropic materials, layered so that their transverse magnetic modes capture the EM waves to which the object is subjected. The artificial EM black hole does not let EM waves escape, analogous to a black hole trapping light. In this case, the trapped EM waves are in the microwave region of the spectrum.
The so-called metamaterials used in the experiment are artificially engineered materials designed to have unusual properties not seen in nature. Metamaterials have also been used in studies of invisibility cloaking and negative-refraction superlenses. The group suggests the same method might be adaptable to higher frequencies, even those of visible light.
'Development of artificial black holes would enable us to measure how incident light is absorbed when passing through them,' says Chen. 'They can also be applied to harvesting light in a solar-cell system.' | <urn:uuid:12fb261f-aaf8-4c03-a222-ea1748e2b5e5> | 4.28125 | 257 | News Article | Science & Tech. | 24.101583 | 1,263 |
Visualization of Model Output
Visualization of output from mathematical or statistical models is one of the best ways to introduce introductory geoscience students to the results and behavior of sophisticated models. Example of good sites include:
- Climate Impact of Quadrupling Atmospheric CO2: An overview of GFDL Climate Model Results (more info)
- CCM3 T170 Cloud and Precipitation Simulation (more info) A beautiful simulation of global atmospheric circulation.
- Geophysical Fluid Dynamics Laboratory (GFDL) Gallery (more info) of climate model simulations, storms, stratospheric circulation, etc.
- NASA GISS Global Change Data Access Has links to model output like this.
- Mantle Convection Movies On-Line at Caltech (more info) Includes a discussion of assumptions and science behind simulation.
- Animations of plate tectonics and more (more info) from Tanya Atwater at UCSB. These are conceptual visualizations showing the evolution of plate boundaries and plate movements
- Geophysical and Geologic java applets, movies, animations, articles, tutorials, class notes, etc (more info) has several links to mantle convection movies, plate tectonics, and other interesting links.
- Florida State University Weather Pages (more info) with QT movies of forecast model output and other interesting information.
- Pacific Northwest Mesoscale Model (MM5) Weather Forecasts (more info) This is an incredible site with an enormous amounts of imagery related to Pacific Northwest weather forecast model output.
- North American Plots of Temp, pressure, winds, dew point, etc. Great Site with Model Output (more info)
- NOAA/PMEL/TAO El Nino Distributed Numerical Simulations
- Commonwealth Bureau of Meteorology: Forecast ENSO Conditions (more info)
- Geomagnetic and Solar Activity Forecast Service (more info) | <urn:uuid:e5694ec2-9670-47cc-950c-4f0d29d36030> | 3.03125 | 394 | Content Listing | Science & Tech. | 0.251943 | 1,264 |
Fight the Heat with Deep Impact's Edible Comet
15 Jul 2003
(Source: Jet Propulsion Laboratory)
Newsletter for the Deep Impact mission
Welcome to the nearly 7,000 of you who have told us you want to know more about the Deep Impact mission. We are currently in Phase C/D. During this 34-month period, the twin spacecraft - the projectile impactor and the observing flyby spacecraft - and their science instruments are being built, and the software that will drive them is being designed and tested. All factors will work together to make this the first mission to look deep beneath the surface of a comet. For more about the mission, visit the Deep Impact web site at http://deepimpact.jpl.nasa.gov or http://deepimpact.umd.edu
. MISSION UPDATE WITH PRINCIPAL INVESTIGATOR DR. MIKE A'HEARN
For the latest on the Deep Impact mission, take a look at the PI's update. Dr. Mike A'Hearn writes to tell us about the current status of the mission, the construction of both spacecraft and our science team's most recent research. Update page: http://deepimpact.umd.edu/mission/update.html
SEND YOUR NAME TO A COMET!
If you haven't joined the over 200,000 people who have registered to have their name put on the side of the impactor that will make a huge crater in Comet Tempel 1, check out http://deepimpact.umd.edu/sendyourname/
Sign up before it's too late. Don't miss the boat - uh, or the impactor.
WHAT A BLAST!
The science team continues to develop tools for visualizing and analyzing the impact. Jim Richardson, a graduate student working with Prof. Jay Melosh, has developed a useful tool that will allow us to vary the orientation of a simulated impact until we can reproduce our observations. Ultimately, these simulations will be used to understand the physical processes that occur in the cometary nucleus based on theories of hypervelocity impacts into solid bodies. We have posted two of these simulations on the web page for your viewing. The animations show the field of view of the two cameras.
HEY KIDS - COOL OFF WITH AN EDIBLE COMET!
Looking for a way to cool down on those hot summer afternoons? Make a Comet Model and Eat it! This is an activity the whole family can do together. Make an ice cream comet and add your own "cometary candy debris." Science never tasted so good!
DID YOU KNOW? COOL FACT!
Did you know that the Deep Impact spacecraft won't be the only "observer" during our encounter with Comet Tempel 1 on July 4th, 2005? While the flyby spacecraft and impactor do their job, an international group of professional and amateur astronomers will watch the "cometary fireworks" from Earth. What are they doing to gear up for this incredible event? Well, they've been watching Comet Tempel 1 since the year 2000. To see some of their images visit our Small Telescope Science Program web site and take a look at http://deepimpact.umd.edu/stsp/.
QUESTIONS FROM YOU: WILL THE IMPACT KNOCK THE COMET OFF ITS PATH AND SEND IT SOMEWHERE ELSE?
No. You can think of the impactor hitting the comet in the same way as a pebble hitting the side of an 18- wheeler. In both cases, there is a small effect in terms of adding energy to the target and subtracting it from the projectile, but again, in both cases, the impacts are not strong enough to knock the truck or the comet off their course.
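A rough momentum-conservation estimate makes the same point with numbers. The impactor mass, impact speed, and comet mass used below are ballpark assumptions for illustration only; they are not figures quoted in this newsletter.

```python
# How much can the impact change the comet's speed? (illustrative values)
impactor_mass_kg = 370.0       # assumed
impact_speed_m_s = 10_000.0    # assumed, about 10 km/s
comet_mass_kg    = 7.0e13      # assumed nucleus mass

# A head-on, perfectly inelastic hit gives the largest possible velocity change.
delta_v = impactor_mass_kg * impact_speed_m_s / comet_mass_kg
print(f"change in comet speed ~ {delta_v:.1e} m/s")      # a few times 1e-8 m/s

# Even after a full year, that drift adds up to only a metre or two.
print(f"drift after one year ~ {delta_v * 3.15e7:.1f} m")
```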
MISSION BRAIN TWISTER:
The flyby spacecraft has a solar panel to take in the Sun's energy and turn it into power for the spacecraft. The early concept for the solar panel was that it be one piece. During the design phase, the engineers decided they needed a larger panel to provide enough energy for the entire spacecraft. Now the spacecraft has two panels that are hinged. Why was the hinge necessary?
Important to our Deep Impact outreach team are our master educators (Solar System Educator Program) and our ambassadors to the public (Solar System Ambassadors). These people are specially trained in the Deep Impact mission and its activities. If you are interested in having an SSEP educator give a workshop in your area, or you think you might want an Ambassador to speak at a public event, go to: http://deepimpact.umd.edu/disczone/community.html
Contact those organizations directly, or contact us at: http://deepimpact.umd.edu/feedback.html.
CALLING ALL GIRL SCOUTS!
Did you know that Deep Impact is part of a new NASA partnership with the Girl Scouts of the USA? Leader trainers from across the country are excited about the Deep Impact activities to make ice cream comets and comet models out of recyclable materials. For a large event, you can even earn the NASA solar system patch for your Scouts. Ask your council to check into schedules for NASA trainings this year. Or, you can go to our web site activities and try them yourself: http://deepimpact.umd.edu/educ/index.html
Some Scout leaders and troops are already planning to throw community star parties in their area the night of the Deep Impact encounter, July 4th, 2005. You could be one of them. Talk to your local observatory, university or library about a community partnership with your troop or council and contact us to let us know your plans at: http://deepimpact.umd.edu/feedback.html.
Deep News features information about the mission, the Deep Impact web site and our products and special programs. The Deep Impact mission is a partnership among the University of Maryland (UMD), the California Institute of Technology's Jet Propulsion Laboratory (JPL) and Ball Aerospace & Technology Corp. Deep Impact is a NASA Discovery mission, eighth in a series of low-cost, highly focused space science investigations. Deep Impact offers an extensive outreach program in partnership with other comet and asteroid missions and institutions to benefit the public, educational and scientific communities. http://deepimpact.umd.edu. | <urn:uuid:1bd8da8a-fb20-425f-98f7-cd482a09a857> | 2.984375 | 1,313 | News (Org.) | Science & Tech. | 59.421098 | 1,265 |
Ladies and Gentlemen,
For this post, I will look at how the Younger Dryas (YD), the best-researched of the Late Glacial abrupt climate shifts, is thought to have been triggered by a cut-off of the thermohaline circulation (THC), causing an abrupt cooling event. The THC, also known as the Atlantic Meridional Overturning Circulation (AMOC), is the method by which the ocean regulates global energy budgets by transporting heat and water across the globe and through the water column. Together with atmospheric circulation, it transports heat from the equator polewards to mid- and high-latitude areas. The THC is driven by ocean currents whose movement depends on sea-water density, which is affected by temperature and salinity.
Cold, saline water at high latitudes is transported to low latitudes via the oceans' deep currents, warmer water is then transported from low-latitudes to replace this deficit. Water from the Northern Atlantic sinks and flows to the Southern Hemisphere and eventually to the conveyors circulating the Antarctic continent. Here more cold, saline water joins and is transported to the Indian Ocean before interactions with the Pacific basin. In areas of upwelling, especially in the Pacific, cold deep water rises to the surface and is heated and evaporated leaving saltier water behind. Such water flows North to join up with the Gulf Stream which travels from the Gulf of Mexico along the North American Eastern Seaboard and eventually towards NW Europe. The evaporated heat from this maintains the relatively mild British climate for its latitude. Evaporation, sea-ice formation and cooling within this process leaves very cool, saline water behind which sinks to the deep to re-start the process. Since the THC relies on the sinking of cold, saline water in the polar regions, if a large volume of freshwater was dumped into the system, the water would become too light to sink. In this case, no warmer water would replace the regular sinking cold water and so the heat transfer to the polar regions would cease from the THC causing a rapid return to glaciation. Please see Figure 1 for a visual representation of the Earth's ocean currents.
|Figure 1: Thermohaline circulation (Source: 1. TSC)|
Interposed between the start of the Holocene and the Allerod/Bolling warming stages, the YD cooled the Earth from c.12,800 cal yr BP before coming to an abrupt stop c.11,500 cal yr BP. As temperatures rose through the Allerod and Bolling warm stages, the Laurentide ice sheet over North America retreated creating the largest North American lake by volume, Lake Agassiz. In these warm stages, the Lake periodically released water to the North (Arctic Ocean), South (Gulf of Mexico) and East (North Atlantic Ocean) as shown by Figure 2.
|Figure 2: Suggested overflow routes from Lake Aggasiz causing the Younger Dryas (2. Broecker, 2006). Note: the axes show latitude (y) and longitude (x)|
However, it is widely believed that a large outburst of freshwater from Agassiz into the North Atlantic caused the YD. This disrupted the THC, plunging the Northern Hemisphere, and especially Europe, into a period of cooling once more. The evidence for such large outbursts affecting the THC and causing the YD is well summarised by Teller et al. (2002). Teller et al. suggest that freshwater inputs to the THC as low as a 0.1 Sv flux (where 1 Sverdrup = 1 x 10^6 m^3 s^-1) may interrupt the formation of North Atlantic Deep Water (NADW), which drives the cool, saline water in the deep THC of the North Atlantic. Data from Lake Agassiz outbursts suggest that a 0.3 Sv flood flux of 9500 km^3, the second largest recorded outburst, occurred 12.9 ka cal yr BP, in line with the beginning of the Younger Dryas. Its route was through the Great Lakes to the East and the St. Lawrence River, flowing NE into the North Atlantic. If this flux were sustained over a period of 1 year, it would be at least 6 times higher than the regular flow into the St. Lawrence from Agassiz.
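As a quick unit check (my own arithmetic, not a figure from Teller et al.), releasing roughly 9500 km^3 of water over about one year does indeed correspond to the ~0.3 Sv flux quoted above:

```python
# 9500 km^3 spread over one year, expressed in Sverdrups (1 Sv = 1e6 m^3/s).
volume_m3 = 9500 * 1e9                  # 9500 km^3 in cubic metres
seconds_per_year = 365.25 * 24 * 3600   # about 3.16e7 s

flux_m3_per_s = volume_m3 / seconds_per_year
print(f"{flux_m3_per_s:.2e} m^3/s = {flux_m3_per_s / 1e6:.2f} Sv")  # ~0.30 Sv
```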
Other authors have proposed different reasons for the inception of the YDs which will be explored next time. Following this, I will look at evidence for the YD's abrupt termination. | <urn:uuid:ab04609b-7d4a-4d4d-bc00-b355d6c0b3c6> | 3.484375 | 910 | News (Org.) | Science & Tech. | 50.408079 | 1,266 |
When we last checked in to the Nansen Sea Ice Graphs, it looked like they were heading towards the “normal” line in a hurry. Ice area seems to still be on that trend, while extent's growth rate seems to be leveling off. Area appears to be within about 200,000 square kilometers of the 1979-2007 monthly average and still climbing.
Of course, the fact that the 2007 data is included in the average line means the average is a lower target than one might expect. If we compare to ice area over at Cryosphere Today, they use a 1979-2000 mean, which is higher. Still, the rebound we are seeing is impressive.
Sea ice extent looks like this:
These graphs will automatically update, so check back often.
For those of you wondering, here is the difference between area and extent, as described in the NSIDC FAQ’s page:
What is the difference between sea ice area and extent? Why does NSIDC use extent measurements?
Area and extent are different measures and give scientists slightly different information. Some organizations, including Cryosphere Today, report ice area; NSIDC primarily reports ice extent. Extent is always a larger number than area, and there are pros and cons associated with each method.
A simplified way to think of extent versus area is to imagine a slice of swiss cheese. Extent would be a measure of the edges of the slice of cheese and all of the space inside it. Area would be the measure of where there’s cheese only, not including the holes. That’s why if you compare extent and area in the same time period, extent is always bigger. A more precise explanation of extent versus area gets more complicated.
Extent defines a region as “ice-covered” or “not ice-covered.” For each satellite data cell, the cell is said to either have ice or to have no ice, based on a threshold. The most common threshold (and the one NSIDC uses) is 15 percent, meaning that if the data cell has greater than 15 percent ice concentration, the cell is considered ice covered; less than that and it is said to be ice free. Example: Let’s say you have three 25 kilometer (km) x 25 km (16 miles x 16 miles) grid cells covered by 16% ice, 2% ice, and 90% ice. Two of the three cells would be considered “ice covered,” or 100% ice. Multiply the grid cell area by 100% sea ice and you would get a total extent of 1,250 square km (482 square miles).
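The two worked examples (the extent figure here and the area figure in the following paragraph) can be reproduced in a few lines, using the 15 percent threshold and 625 square km cell size stated in the FAQ:

```python
# Extent vs. area for three 25 km x 25 km grid cells (values from the FAQ).
cell_area_km2 = 25 * 25                 # 625 km^2 per cell
concentrations = [0.16, 0.02, 0.90]     # fractional ice cover per cell
threshold = 0.15                        # 15% cutoff

# Extent: any cell above the threshold counts as fully ice covered.
extent = sum(cell_area_km2 for c in concentrations if c > threshold)

# Area: add up the ice-covered fraction of each cell (summing all three cells,
# as the FAQ's example does).
area = sum(c * cell_area_km2 for c in concentrations)

print(extent, "km^2 extent")      # 1250 km^2
print(round(area), "km^2 area")   # 675 km^2
```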
Area takes the percentages of sea ice within data cells and adds them up to report how much of the Arctic is covered by ice; area typically uses a threshold of 15%. So in the same example, with three 25 km x 25 km (16 miles x 16 miles) grid cells of 16% ice, 2% ice, and 90% ice, multiply the grid cell area by the percent of sea ice and add it up. You’d have a total area of 675 square km (261 square miles). | <urn:uuid:c47cbe87-f48d-48de-a45a-14b4061cd61b> | 3.421875 | 659 | Knowledge Article | Science & Tech. | 60.164292 | 1,267 |
Conditions were looking bleak at the beginning of May. Only four months of above normal rainfall had occurred in the previous 28 months. The details can be found in a previous blog post (here). Much of the state was in a drought and the situation was getting worse.
Two events happened to change the course of the drought. First, Tropical Depression Beryl brought much needed rain to parts of the drought-stricken Midlands. However, there were still areas that received little rain. This was soon followed by an upper-level pattern that brought copious amounts of rain to the Southeast, especially along parts of the central Gulf coast. It eventually made its way to South Carolina bringing an abundance of rain. More than 8 inches of rain fell in just four weeks in the Columbia area. The result has been flooding of some of the creeks in the area.
|Flooding of the Rocky Branch Creek closed the intersection of Main St. and Whaley St. on Monday, June 11. Click on any of the images for a larger view. Image Credit: USGS.|
There has been a significant improvement in the drought situation as depicted by the US Drought Monitor. About 75% of South Carolina was in severe drought or worse as of May 1st. However, by June 12, only about 28% of the state was in that category. This is a substantial improvement in the past 6 weeks. In fact, a little over 16% of the state is out of drought conditions.
|A comparison of the drought by the U.S. Drought Monitor. The map on the left is for May 1, while the map on the right is for June 12. The table below compares the two time periods. Image Credit: USDA.|
|Week          |Nothing |D0-D4 |D1-D4 |D2-D4 |D3-D4 |D4   |
|May 1, 2012   |0.02    |99.98 |98.93 |75.20 |35.19 |2.41 |
|June 12, 2012 |16.47   |83.53 |54.60 |27.88 |4.46  |0.00 |
Tropical Depression Beryl brought the first surge of rain to the southeastern half of South Carolina. Barnwell, Bamberg, and Orangeburg counties saw 4 to 7 inches of rain from Beryl. Much of the northwestern half of South Carolina saw little rain from this system. In fact, Beryl missed the core of the drought-stricken region, the area from Macon to Augusta, Georgia. The past 365 days have been the driest on record at Augusta by over 3 inches, and Georgia climate division 6 (east-central GA) had its driest 24 months on record, nearly 26 inches below normal.
|Tropical Depression Beryl at 9:15 a.m. edt on May 29, 2012. Image Credit: NOAA Environmental Visualization Lab.|
The next surge of moisture began moving out of the Gulf of Mexico late last week. An upper-level disturbance was slowly moving through the Southeast. This brought two waves of rain through the Midlands Sunday and Monday. The two day totals were generally between 1 to 2.5 inches of rain. This put a damper on the NCAA Super-regional baseball tournament being played in Columbia. It brought much needed rain to Augusta, Georgia where 2 to 5.4 inches of rain fell across the city.
|The weather pattern at 500 mb shows a weak upper-level disturbance over the lower Mississippi River Valley at 00z on June 11, 2012. Image Credit: WSI.|
The precipitation analysis from the Advanced Hydrologic Prediction Service shows that a large area of South Carolina has seen much above normal rainfall over the past 30 days (May 12 – June 12). Parts of Barnwell, Orangeburg, and Williamsburg counties have been as much as 8 inches above normal.
|Rainfall anomaly for the 30-day period ending June 12. Click on the image for a larger view. Image Credit: NOAA\AHPS.|
Why isn’t the drought completely over? It takes a long time to get into a drought and droughts are rarely over in a short time. However, if the weather pattern is changing then this could be good news for parts of the Southeast. This is the wettest time of the year for the Midlands. An average or above average summer rainfall should alleviate the drought by fall. Keep in mind that El Nino may be back by the fall and this would likely bring wet conditions for the winter. | <urn:uuid:55509c13-826c-4156-8328-e04de01e39a2> | 2.671875 | 929 | Personal Blog | Science & Tech. | 77.36063 | 1,268 |
WhiteSpace Handling in the XSL FO spec
Some thoughts about the concerns
The FO spec must address the following three concerns:
- What to do with linefeed characters in the input: consider as space or as a real linefeed?
- What to do with XML white space characters other than linefeed in the input: preserve or collapse?
These two concerns are governed by the properties linefeed-treatment and white-space-collapse.
Together these two items address the matter of pretty printing of XML documents (in this case FO documents).
- What to do with white space and other eligible characters around line breaks?
This concern is governed by the properties white-space-treatment and suppress-at-linebreak.
XML itself has a prescription for dealing with white space in the input XML file: The parser must report whether white space occurs in element content or not, allowing applications to ignore it in element content; in SAX terms, white space in element content is ignorable white space.
Because FO does not have a DTD or schema, there is no element content, and all white space is passed on to the FO processor. FO does have its own equivalent of element content. When white space occurs in flow objects which do not take PCDATA as children, it is ignored by the FO processor. White space in flow objects that take PCDATA children, however, must be taken into account. Its interpretation is governed by the first two items.
Pretty printing can also occur inside PCDATA. Editors commonly break long stretches of text into separate lines, substituting space characters with linefeed characters. They also commonly indent the lines to illustrate the nesting position of the element containing the PCDATA, replacing single spaces with sequences of spaces and tab characters. The above two concerns also undo those pretty printing effects on the output of the FO processor.
The first two items are concerned with input. Therefore they can in principle be taken care of at the refinement stage.
The third item is concerned with input characters whose representation depends on the layout, viz., which are suppressed when they occur before and/or after a line break. Therefore it can only be taken care of when the line breaks are known, i.e. at the layout or area building stage.
The formulation of this concern was flawed in version 1.0 of the FO spec. Instead of line breaks, it mentions line feed characters. This is clearly not what is needed. Users expect white space to be suppressed around line breaks, and FO processors do this, even though the spec has no good prescription for this behaviour. Version 1.1 of the FO spec tries to correct this. But the result is a mixed behaviour of the property white-space-treatment. Two of its values refer to input characters and can be taken care of at the refinement stage, the other three refer to suppression as a result of layout and must be taken care of at the layout or area building stage.
Remarks on white-space-collapse
white-space-collapse is formulated in terms of flow objects, so that it only applies to direct siblings. This can give rise to undesirable effects. Examples:
- Spaces before an fo:inline and spaces at the start of an fo:inline are not collapsed, perhaps contrary to the expectation of the user.
- fo:marker elements may have spaces at their start and end, which may become adjacent to spaces before and after the fo:retrieve-marker that inserted the fo:marker content. These spaces are not collapsed, again perhaps contrary to the expectation of the user.
The user would prefer to think in terms of collapsing of adjacent white space glyph areas. The comments of the XSL editors have made it clear, however, that white-space-collapse is strictly interpreted in terms of sibling flow objects. On the other hand, they do not make it clear why they place white-space-collapse handling at the area building stage. As a result the user must be careful not to add extra white space to inline content.
Remarks on white-space-treatment and white-space-collapse
The values ignore and preserve of white-space-treatment would better be combined with white-space-collapse into a new property, called something like white-space-treatment, with three values ignore, collapse and preserve as follows:
- white-space-treatment="ignore" and white-space-collapse="true": ignore
- white-space-treatment="ignore" and white-space-collapse="false": ignore
- white-space-treatment="preserve" and white-space-collapse="true": collapse
- white-space-treatment="preserve" and white-space-collapse="false": preserve
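To make the proposal concrete, here is a toy sketch (illustrative only, and not taken from the spec or any FO processor) of how the two existing properties would collapse into the proposed three-valued property, and what each value would do to a run of text at refinement time. For brevity it folds linefeed handling into the same step, which the spec keeps separate under linefeed-treatment.

```python
import re

def combined_value(white_space_treatment, white_space_collapse):
    # The mapping proposed above: "ignore" wins regardless of the collapse setting.
    if white_space_treatment == "ignore":
        return "ignore"
    return "collapse" if white_space_collapse else "preserve"

def refine(text, value):
    if value == "ignore":                    # drop XML white space entirely
        return re.sub(r"[ \t\n]+", "", text)
    if value == "collapse":                  # runs of white space become one space
        return re.sub(r"[ \t\n]+", " ", text)
    return text                              # "preserve"

pretty_printed = "Some\n    indented\n    text"
print(repr(refine(pretty_printed, combined_value("preserve", True))))
# -> 'Some indented text'
```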
The property with the remaining values then could be called something like around-line-break. Unfortunately, the remaining three values have linefeed in their name, where linebreak is intended. | <urn:uuid:d81e5a1b-9ec9-4ffa-900f-c56314afa56d> | 2.734375 | 1,047 | Documentation | Software Dev. | 45.049716 | 1,269 |
By Irene Klotz
CAPE CANAVERAL, Florida (Reuters) - NASA on Monday showed off the first high-resolution, color portrait images taken by the Mars rover Curiosity, detailing a mound of layered rock where scientists plan to focus their search for the chemical ingredients of life on the Red Planet.
The stunning images reveal distinct tiers near the base of the 3-mile- (5-km-)tall mountain that rises from the floor of the vast, ancient impact basin known as Gale Crater, where Curiosity landed on August 6 to begin its two-year mission.
Scientists estimate it will be a year before the six-wheeled, nuclear-powered rover, about the size of a small car, physically reaches the layers of interest at the foot of the mountain, 6.2 miles away from the landing site.
From earlier orbital imagery, the layers appear to contain clays and other hydrated minerals that form in the presence of water.
While previous missions to Mars have uncovered strong evidence for vast amounts of water flowing over its surface in the past, Curiosity was dispatched to hunt for organic materials and other chemistry considered necessary for microbial life to evolve.
The $2.5 billion Curiosity project, NASA's first astrobiology mission since the 1970s-era Viking probes to Mars, is the first to bring all the tools of a state-of-the-art geochemistry laboratory to the surface of a distant planet.
But the latest images from Curiosity, taken at a distance from its primary target of exploration, already have given scientists a new view of the formation's structure.
The layers above where scientists expect to find hydrated minerals show sharp tilts, offering a strong hint of dramatic changes in Gale Crater, located in the planet's southern hemisphere near its equator.
SLANTED LAYERS EXPOSED
Mount Sharp, the name given to the towering formation at the center of the crater, is believed to be the remains of sediment that once completely filled the 96-mile- (154-km-) wide basin.
"This is a spectacular feature that we're seeing very early," project scientist John Grotzinger, with the California Institute of Technology, told reporters on Monday. "We can sense that there is a big change on Mount Sharp."
The higher layers are steeply slanted relative to the layers of underlying rock, the reverse of similar features found in Earth's Grand Canyon.
"The layers are tilted in the Grand Canyon due to plate tectonics, so it's typical to see older layers be more deformed and more rotated than the ones above them," Grotzinger said. "In this case, you have flat-line layers on Mars overlaid by tilted layers. The science team, of course, is deliberating over what this means."
He added: "This thing just kind of jumped out at us as being something very different from what we ever expected."
Absent plate tectonics, the most likely explanation for the angled layers has to do with the physical manner in which they were built up, such as being deposited by wind or by water.
"On Earth, there's a whole host of mechanisms that can generate inclined strata," Grotzinger said. "Probably we're going to have to drive up there to see what those strata are made of."
Also Monday, NASA said it used the rover to broadcast a message of congratulations to the Curiosity team from NASA chief Charles Bolden, a demonstration of the high bandwidth available through a pair of U.S. science satellites orbiting Mars.
"This is the first time that we've had a human voice transmitted back from another planet" beyond the moon, said Chad Edwards, chief telecommunications engineer for NASA's Mars missions at the Jet Propulsion Laboratory in Pasadena, California.
"We aren't quite yet at the point where we actually have a human present on the surface of Mars ... it is a small step," Edwards said.
(Editing by Steve Gorman and Philip Barbara)
(The photo previously attached to this story was incorrectly identified as Mars' Mount Sharp) | <urn:uuid:1ba81d91-6603-438c-97a2-11f027df325d> | 3.0625 | 838 | News Article | Science & Tech. | 44.096597 | 1,270 |
The Acoustic Search
In the field
ARU mounted on a tree (left) with its battery (right). Photo by Chris Tessaglia-Hymes
To search for acoustic evidence for Ivory-billed Woodpeckers in Arkansas and other states within the historical range, we record ambient sounds using autonomous recording units (ARUs). ARUs are programmable, battery-operated digital audio recorders developed by the Cornell Lab of Ornithology’s Bioacoustics Research Program. Each ARU contains a microprocessor, 12-bit analog-to-digital converter, an omnidirectional microphone, preamplifier and signal conditioning circuitry, and a hard disk for storing audio data. These components are packaged in a cylindrical PVC housing, and attached to tree trunks two to three meters above the ground or water surface. ARUs are typically deployed for periods of two to four weeks.
ARU in Arkansas. Photo by Chris Tessaglia-Hymes
ARUs are programmed to record for two four-hour periods each day, the first beginning 30 to 45 minutes before sunrise, the second ending 30 to 45 minutes after sunset. The range at which an ARU could detect sounds of an Ivory-billed Woodpecker is unknown, because there are no data available on the volume of kent calls or double knocks. We estimate, however, that these signals would be detectable by ARUs up to distances of approximately 200 meters.
We select recording sites based on habitat quality, locations of previous Ivory-billed Woodpecker sighting reports, and presence of possible ivory-bill roost/nest cavities and feeding signs.
Reviewing and analyzing the sounds
Since the start of large-scale acoustic search efforts in 2004, our protocols for reviewing and evaluating ARU recordings have evolved in order to provide more consistent and informative evaluations of ivory-bill-like sounds. Our current protocol is summarized here.
To find sounds similar to those of Ivory-billed Woodpeckers in the ARU recordings, we use a multi-step process:
1. Automated screening by computer: The digital recordings are scanned by software that detects sounds similar to known vocalizations of Ivory-billed Woodpeckers (from the 1935 Allen-Kellogg recording), and to double-knocks from other Campephilus woodpeckers.
2. Initial human screening: An acoustic analyst reviews all of the computer’s detections. Most of the sounds flagged by the computer are easily discarded at this stage as not being similar enough to ivory-bill sounds to warrant further attention. The computer flags many “false alarm” events because we adjust the software to be very sensitive, reducing the chance that a real ivory-bill call might be missed. Sounds that pass this stage are forwarded to the next stage of review.
3. Expert panel review: A panel of three or more experts (outside of the acoustic analysis team) reviews all of the sounds that pass stage two. The expert panel categorizes each sound as “implausible” or “plausible.” “Plausible” events are further categorized depending on whether a potential alternate source is identified, and on whether that alternate source is positively identified elsewhere on the deployment. Sounds categorized as “implausible” are either positively identified as an alternate source, or are deemed to be too different from an ivory-bill.
Plausible categories are:
- P1: Plausible Ivory-billed Woodpecker, no likely alternative known
- P2: Plausible Ivory-billed Woodpecker, alternate possibility identified but not present in recording
- P3: Plausible Ivory-billed Woodpecker, alternate possibility identified and present
- P4: Insufficient signal for full analysis
“Plausible” sounds are scored on various criteria, receiving a point for each positive response to one of several questions. A higher score indicates a greater likelihood that the sound originated from an Ivory-billed Woodpecker.
Scoring criteria for vocalizations:
1. Is the harmonic interval between 580 and 780 Hz?
2. Is harmonic emphasis appropriate?
3. Is the event part of a biologically appropriate series?
4. Is there a temporal context or co-occurrence with other events of interest on the same day?
5. Is there a clear temporal context or co-occurrence with other events of interest across days?
Scoring criteria for double-knocks:
1. Is the inter-knock interval between 60 and 120 milliseconds?
2. Is sound resonant and woody?
3. Is there an absence of confounding woodpeckers?
4. Is the event part of a biologically appropriate series?
5. Is there a clear temporal context or co-occurrence with other events of interest on the same day?
6. Is there a clear temporal context or co-occurrence with other events of interest across days?
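As an illustration of how a point-based tally over the double-knock criteria above might look in code, here is a minimal sketch. It is hypothetical: the struct fields and function are illustrative only and are not the project's actual review tools; only the interval thresholds and the six questions come from the list above.

    // Candidate double-knock event, as an analyst might annotate it.
    // Field names are illustrative, not an actual review-database schema.
    struct DoubleKnockEvent {
        double interKnockIntervalMs;    // time between the two knocks
        bool   resonantAndWoody;        // judged resonant and woody in timbre
        bool   confoundingWoodpeckers;  // other woodpeckers active nearby
        bool   partOfAppropriateSeries; // part of a biologically appropriate series
        bool   cooccursSameDay;         // co-occurs with other events of interest that day
        bool   cooccursAcrossDays;      // co-occurs with events of interest on other days
    };

    // One point per positive answer, mirroring the six questions above;
    // a higher score suggests a more ivory-bill-like event.
    int scoreDoubleKnock(const DoubleKnockEvent& e)
    {
        int score = 0;
        if (e.interKnockIntervalMs >= 60.0 && e.interKnockIntervalMs <= 120.0) ++score;
        if (e.resonantAndWoody)           ++score;
        if (!e.confoundingWoodpeckers)    ++score;
        if (e.partOfAppropriateSeries)    ++score;
        if (e.cooccursSameDay)            ++score;
        if (e.cooccursAcrossDays)         ++score;
        return score;
    }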
At every stage of the review process, researchers compared suspect sounds not only with those of ivory-bills and other Campephilus woodpeckers, but also with a variety of similar sounds from other species, and carefully considered the surrounding context.
What have we discovered so far?
Here we present some examples of “plausible” sounds collected in the Big Woods of Arkansas. Note: this website is not intended to be a complete and final analysis of our acoustic monitoring and research. Rather, we aim to provide a sampling of sounds that we believe are suggestive of an ivory-bill and a number of “sound-alikes” that we hope will help inform other searchers about what to listen for. We are presently working on a peer-reviewed publication that will explain our findings in detail.
Campus: CSU Long Beach -- October 11, 2002
CSU Campuses Share $170,000 National Science Foundation Grant
The fluid evolution of the earth’s crust and origins of valuable deposits are just a few of the areas that geological sciences faculty members at the California State University, Long Beach and Fullerton campuses will be able to study with a new mass spectrometer funded by a $170,000 grant from the National Science Foundation (NSF).
Samples of materials such as rocks, minerals or water are placed into the gas-source isotope-ratio mass spectrometer, which will be located at a new laboratory at CSULB, said Gregory J. Holk, grant team leader. Other faculty members on the team are James C. Sample and Richard J. Behl, as well as Diane Clemens-Knott of Cal State Fullerton.
Scientists can learn a great deal about an area’s geological history by examining the chemical signatures of tested materials.
“My area of expertise is stable isotope geochemistry and the type of research I do involves understanding the role of aqueous fluids in the evolution of the earth’s crust,” explained Holk. “One aspect of these studies is the investigation of the movement of water through faults and its effect on their behavior, so there are some implications with regard to the earthquake process—how earthquakes happen and what sorts of conditions are necessary for an earthquake to happen.”
He also studies where such water comes from in the first place—from underground or percolating from above. The new mass spectrometer will enable him to examine not only ground water but also water encased in rocks and minerals.
“Much of my research deals with mapping hydrothermal systems associated with ore deposits,” Holk added. “The mining companies have to drill a lot of holes to look for the ore and drilling costs are about $50,000 per hole. This stable isotope technique removes one step from that guessing game in that a stable isotope survey of an area that has high potential for mineralization can delineate the areas of highest potential. The cost of such a survey is about the same as one drill hole.” Furthermore, reducing the amount of drilling has environmental benefits.
Holk and Clemens-Knott of Fullerton were doctoral classmates at Cal Tech and she will play an integral role in the new lab. “She has gas extraction facilities at Fullerton. These facilities supplement those recently built at CSULB. Before a sample is ready for mass spectrometry, rigorous chemical separation work is done in the laboratory,” said Holk. “Dr. Clemens-Knott will be bringing her samples over here for analysis. She’s done a lot of work with ground water in Orange County. She works primarily on magmatic systems, how magmas and fluids interact with each other.”
“We would like to open up the instrument for collaboration with faculty from other CSU universities,” said Holk. CSULB has or is acquiring a variety of research instruments, including a new NSF-funded scanning electron microscope. “If we pool our resources together, we have the potential on a university scale to have instrumentation comparable to the big research universities. Our goal is to coordinate our efforts to have a center for analysis that scientists can utilize.”
Furthermore, much of the equipment may be available for use by undergraduate as well as graduate science students. The NSF recognizes CSULB as a significant provider of hands-on research opportunities for undergraduate students and rates Long Beach one of the top master’s level universities whose students go on to earn doctoral degrees in science and engineering.
Scientists find evidence of ancient lake on Mars
Layered rocks on the floor of McLaughlin Crater on Mars show sedimentary rocks that contain spectroscopic evidence for minerals formed through interaction with water. Photo: Reuters/NASA
A spacecraft orbiting Mars has provided evidence of an ancient crater lake fed by groundwater, adding further support to theories that the Red Planet may once have hosted life, says NASA.
Spectrometer data from NASA's Mars Reconnaissance Orbiter shows traces of carbonate and clay minerals usually formed in the presence of water at the bottom of the 2.2-kilometre-deep McLaughlin Crater.
"These new observations suggest the formation of the carbonates and clay in a groundwater-fed lake within the closed basin of the crater," NASA said of the findings, which were published in the online edition of Nature Geoscience.
The spot selected for Curiosity's first drilling site. Photo: AP/NASA
"Some researchers propose the crater interior catching the water," the space agency said, adding that "the underground zone contributing the water could have been wet environments and potential habitats."
The crater lacks large inflow channels, so the lake was likely fed by groundwater, scientists said.
The latest observations "provide the best evidence for carbonate forming within a lake environment instead of being washed into a crater from outside," said Joseph Michalski, lead author of the paper.
The 92-kilometre-wide crater sits at the low end of a regional slope several hundreds of kilometres long and, as on Earth, groundwater-fed lakes would be expected to occur at low elevations.
NASA's Mars rover Curiosity has been exploring the planet's surface since its dramatic landing on August 6, collecting rock samples and beaming back rare images in anticipation of an eventual manned mission.
Mars Reconnaissance Orbiter scientist Rich Zurek, of NASA's Jet Propulsion Laboratory, said the latest findings indicate "a more complex Mars than previously appreciated, with at least some areas more likely to reveal signs of ancient life than others." | <urn:uuid:4f9a692e-7e2f-4aae-aa56-460e5fa84af7> | 3.953125 | 418 | News Article | Science & Tech. | 25.352129 | 1,273 |
How the universe is erasing evidence of its beginnings and moving faster toward its end
In 1917, on the third floor of an apartment building in the Wilmersdorf borough of wartime Berlin, an ailing tenant named Albert Einstein sat focused on a lofty subject: the universe. In February of that year, he published a paper that effectively launched the modern field of cosmology. In it, he suggested that the fabric of space and time contains an innate tension, an energy that seethes beneath the surface of every inch of the universe. This “cosmological constant” was the force that held gravity in check and kept the universe from collapsing on itself, he said.
In other words, the universe was in a holding pattern.
A dozen years later, however, the astronomer Edwin Hubble discovered that the universe was not standing still, as Einstein had suggested. Hubble found that the universe was instead expanding—forever moving outward—and didn’t need anything to keep itself from collapsing. Hubble’s discovery led Einstein to repudiate his own claim of a cosmological constant and to write the incident off as the “worst blunder” of his career.
In the years that followed, Einstein’s concept of a cosmological constant faded but never disappeared. Researchers continued to ask: If the universe is simply being carried out by its own momentum, does that necessarily mean that nothing, no tension is filling the vacuum of the universe?
It turns out Einstein’s conclusions might have been less farfetched than he thought. Cosmologists continued to research this theory, and what they discovered is shedding light on the future of the universe—while simultaneously erasing traces of its past.
A New Constant is Discovered
In 1995, physicists Lawrence Krauss, Ph.D., then at Case Western Reserve University, and Michael Turner, Ph.D., of Fermilab in Illinois, argued in the journal General Relativity and Gravitation that the universe does, in fact, have a cosmological constant. It is a force that not only propels the expansion of the universe, but does so at ever-faster speeds, constantly accelerating, they said.
The scientists pieced together data, including X-ray telescope observations of faraway galaxies and Hubble Space Telescope distance measurements to nearby ones. They concluded that something seems to be pushing the expansion of the universe ever faster.
That force is dark energy, researchers say, and its existence means the universe will ultimately expand so far and so wide that the stars, planets and galaxies as we know them will disappear from view. Future astronomers will look skyward toward a barren universe that lacks any clues about its origins.
“There will be ever-diminishing evidence that there was a Big Bang,” says Glenn Starkman, Ph.D., a Case Western Reserve physicist and director of the university’s Origins Initiative. That could mean the end of cosmology as we know it. “Cosmologists in general are trying to answer big questions,” Starkman says. “Most of the questions we’ve been trying to answer are about the past. But I think the big questions about the future are, in many ways, just as interesting.”
Dark energy may indeed have a lot to say about the future, scientists are finding. In 1995, though, not everyone was on board with the concept of a new cosmological constant.
“The concept turned out to be right, and that was a very remarkable thing,” says Will Kinney, Ph.D., a physicist at the State University of New York at Buffalo. At the time, Kinney says, “I don’t know that a lot of people took the cosmological constant seriously.”
That changed in 1998, when an international coalition of astronomers released a sheaf of data in both the Astronomical Journal and the Astrophysical Journal that they said proved the universe is expanding at an increasingly rapid rate. Measuring the brightness of 102 exploding stars, or supernovae, in distant galaxies, the scientists found that these supernovae were often dimmer than expected. The findings fit a pattern that could only be explained by a universe whose expansion was accelerating over time.
The cosmic self-pressure that the scientists observed—dark energy—has since been confirmed by independent observations, including careful measurements by high-tech instruments such as NASA’s Wilkinson Microwave Anisotropy Probe, which launched in 2001.
Shedding Light on Dark Energy
No one knows for certain what dark energy is or what generates it, but one thing is clear: It is pressuring space to expand. That makes dark energy stand apart from everything else in the universe because every other form of matter or energy gravitationally tugs on other matter.
Dark energy’s peculiar feature is that it seems to fill any void or vacuum, including those created by the universe’s expansion. Even a patch of empty space that had been eradicated of all known forms of matter and energy still contains dark energy, Starkman says.
“So if you have twice as much vacuum as you had before, then you have twice as much of that energy,” he says. “That’s really peculiar. If you take a box and stretch it, you get something for free. That’s the property that accounts for the ability of the vacuum to expand at an accelerating rate. The more you expand it, the more of the [dark energy] you have, and the more that it pushes.”
If dark energy seems confusing, that’s because it is, Starkman says. The greatest minds in physics are baffled. Dark energy is one of the most perplexing unsolved mysteries in science today, and scientists’ best guess for what lies at the heart of dark energy and the cosmological constant lies in quantum physics, Starkman says.
Quantum theory predicts that empty space will wiggle with low-level vibrations, even when all the energy in that space is depleted. It says that the simplest kind of motion conceivable, subatomic particles moving back and forth like miniature springs, will be present even when no other energy is present; the particles never stop moving entirely. Imagine a universe filled with simple quantum particles. Now rob the universe of every ounce of energy it contains. What quantum theory says is that, powered by nothing whatsoever, the universe will still vibrate with what is sometimes called “vacuum energy” or “zero-point energy.”
Quantum vacuum energy is “the simplest explanation for the origin of [dark] energy,” Starkman says. But the explanation remains murky. Starkman holds out hope that in Geneva, Switzerland, the CERN laboratory’s Large Hadron Collider, the world’s most powerful particle accelerator, may uncover precious clues about dark energy. The accelerator, which began operating in September, will allow scientists to analyze high-energy beam collisions and possibly reveal a new world of unknown particles.
The experiments could ultimately explain why those particles exist and behave as they do. They could reveal the origins of mass, shed light on dark matter, uncover hidden symmetries of the universe, and possibly find extra dimensions of space.
In the meantime, the observed existence of dark energy—whatever its origins—is producing real consequences for the universe’s future.
The Universe’s Beginning and End
In 1999, Starkman co-authored a paper with fellow Case Western Reserve physicist Tanmay Vachaspati, Ph.D., and Mark Trodden, Ph.D., of Syracuse University. The research, which appeared in Astrophysical Journal, linked cosmic acceleration to a decidedly bleak future. The universe had entered an extended period of rapid growth, they said, and, eventually, the objects in it would move away so rapidly from our world that they would fall away from view.
The evidence came from observations of supernovae, they said, which measurements showed were not only moving away, but moving away at ever faster speeds. Traditional Big Bang theory runs counter to this notion. It predicts that cosmic expansion will slow or even halt over time. Think of a fireworks explosion: an initial blast, streamers shooting out from the core at great speed, then a gradual slowing until the lights of the fireworks collapse and fade.
If the universe’s expansion continues to speed up, not slow down, then light from distant galaxies will fade for a different reason: It eventually will be unable to keep up. “We realized that things were going to start disappearing,” Starkman says. “The longer you wait, the less you’ll see.”
However, he adds, it will take scores of billions of years to lose sight of the universe’s landscape as we know it. Today, the universe is just a teenager, a spry 14 billion years young. The cosmic end-state comes when the universe nears 100 billion years old.
As that faraway birthday approaches, cosmic expansion will have created vast stretches of void between galaxies. Today’s visible universe, with its hundreds of billions of galaxies stretching far into the great beyond, will have sunk below the Earth’s horizon. Our sun and solar system will be long gone, having fizzled somewhere near the 19 billion-year mark.
If civilizations exist in other galaxies at such a late date, their conclusions about the universe will be incomplete. Light from neighboring galaxies will be unable to reach them because the expansion of space will have quickened beyond the lowly photon’s ability to keep up. Cosmology, particularly the study of the universe’s origins, will by then have reached an end. The science launched by Einstein’s notion of a cosmological constant will be destroyed by that very same constant.
But scientists are not only considering questions of the past; they are also considering future prospects for life in the universe.
In 1979, physicist Freeman Dyson, Ph.D., of the Institute for Advanced Study at Princeton University published a paper in the journal Reviews of Modern Physics that argued life could survive indefinitely in a universe that also expanded indefinitely. In Dyson’s view, biology could ultimately win the battle with a hostile universe.
Of course, appearing 19 years before the discovery of accelerating cosmic expansion, Dyson’s paper did not consider dark energy or a cosmological constant. In 2004, Starkman co-wrote another paper with Lawrence Krauss that delivered the bad news: Life is eventually doomed. Einstein’s greatest blunder ultimately, after hundreds of billions of years, wrenches the universe apart. And with it goes the prospect for biology.
“The universe is going to have a long, slow end,” Starkman says. “It will first begin with ignorance. And if we are right, it will end with death.”
Kinney, of the University at Buffalo, expands on that argument. In a paper written with physicist Katherine Freese, Ph.D., of the University of Michigan, Kinney points out that no one knows for certain whether the cosmological constant is, in fact, constant. It could be that the acceleration of the universe’s expansion will change over time. In some scenarios, in which the amount of dark energy exponentially diminishes over time, they find that doom and gloom may not prevail. Under such circumstances, the universe and biological processes in it could, theoretically at least, continue far into the future.
The question is, how far into the future?
“We all agree that life can last longer if the cosmological constant isn’t constant,” Starkman says. “What we’re arguing over here is how long. The evidence doesn’t seem to suggest that it will last forever. But maybe the certainty of our continued existence isn’t the most important thing—maybe it’s the understanding that we gain while we’re here.” | <urn:uuid:71948182-97de-405f-ad9e-6111c2afa6e1> | 3.765625 | 2,494 | Knowledge Article | Science & Tech. | 43.823468 | 1,274 |
To develop an application with SYMPHONY, you need to first understand how the source files are organized. Note that in this chapter, all path names are given Unix-style. When you unpack the SYMPHONY source distribution, you will notice at the root level a number of files associated with the automatic configuration system, as well as a number of subdirectories, each of which corresponds to a library used by SYMPHONY for some specific functionality. The files associated with SYMPHONY itself are located in the SYMPHONY subdirectory. Within the SYMPHONY subdirectory are a number of other subdirectories, including one called src containing the source files for SYMPHONY itself.
Also in the main SYMPHONY/ subdirectory, there is a subdirectory called Applications/ (see Sections 184.108.40.206 and 2.2.4 for instructions on building the applications). The Applications/ subdirectory contains the source code for a number of sample applications developed with SYMPHONY, as well as function stubs for developing a custom application using SYMPHONY's callbacks. The subdirectory SYMPHONY/Applications/USER contains the files needed for implementing the callbacks and is a template for developing an application. In this directory and its subdirectories, which mirror the subdirectories of SYMPHONY itself, each file contains function stubs that can be filled in to create a new custom application. There is a separate subdirectory for each module--master (Master/), tree management (TreeManager/), cut generation (CutGen/), cut management (CutPool/), and node processing (LP/). Within each subdirectory, there is a file, initially called USER/*/user_*.c, where * is the name of the module. The primary thing that you, as the user, need to understand to build a custom application is how to fill in these stubs. That is what the second part of this chapter is about. Before describing that, however, we will discuss how to build your application.
One way to verify correctness is to compare each algorithm's output with STL sort and check for equivalent results, but that assumes STL sort itself is correct. Avoiding that assumption requires an independent correctness test for sorting algorithms: a correctly sorted array must satisfy array[i] ≤ array[i+1] for all adjacent elements, which is simple to check. Of course, comparison against STL sort results remains a useful redundant verification. These two tests were used for all implemented routines, including Intel's IPP library routines. The boundary cases of input arrays of size 0 and 1 were also tested.
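The adjacency check itself takes only a few lines. The following is an illustrative sketch, not the author's actual test harness:

    #include <cstddef>

    // True if a[i] <= a[i+1] holds for every adjacent pair of elements,
    // i.e. the array is in non-decreasing order.
    template <typename T>
    bool isSorted(const T* a, size_t n)
    {
        for (size_t i = 1; i < n; ++i)
            if (a[i] < a[i - 1])
                return false;
        return true;
    }

Arrays of size 0 and 1 are reported as sorted, which also covers the boundary cases mentioned above.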
The performance comparison setup was as follows:
- Visual Studio 2008; the project's optimization setting is set to optimize neither speed nor size, and to inline any suitable function.
- Intel Core 2 Duo CPU E8400 at 3 GHz (64 Kbytes L1 and 6 Mbytes L2 cache).
- 14-stage pipeline with 1,333 MHz front-side bus.
- 2 GB of system memory (dual-channel 64-bits per channel, 800 MHz DDR2).
- motherboard is DQ35JOE.
Random numbers were generated by using the following method for each element in the array:
// each call to rand() produces 15-bit random number.
unsigned long tmp = ((unsigned long)rand()) << 30 |
                    ((unsigned long)rand()) << 15 |
                    ((unsigned long)rand());
The arrays were all checked for the percentage of unique values, which was above 95% for arrays filled with 32-bit unsigned values. The min and max of each array were also checked; they ranged from 0 to near the maximum value for 32-bit unsigned numbers.
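One simple way to compute such a uniqueness percentage is to sort a copy of the array and count distinct values. A sketch (illustrative only, not the author's harness):

    #include <algorithm>
    #include <vector>

    // Fraction of distinct values in the array (1.0 means all values unique).
    double uniqueFraction(std::vector<unsigned long> v)   // copied on purpose
    {
        if (v.empty()) return 0.0;
        std::sort(v.begin(), v.end());
        size_t distinct = std::unique(v.begin(), v.end()) - v.begin();
        return static_cast<double>(distinct) / v.size();
    }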
Performance was measured by always processing 100 million elements in total. When 10-element arrays were being measured, 10 million of them were allocated; when 100-element arrays were being measured, 1 million of them were allocated, and so on. A different random-number generator seed was used for each array, but the same seeds were used across all algorithms. A time stamp was taken before and after sorting the full set of arrays, and the average across all arrays is the value reported.
For more information on the Concurrency Runtime Framework, see Concurrency Runtime: The Resource Manager.
Visual C++ 2010 comes with new features and enhancements to simplify more native programming. The Concurrency Runtime (CRT), for instance, is a framework that simplifies parallel programming and helps you write robust, scalable, and responsive parallel applications. The CRT raises the level of abstraction so that you do not have to manage the infrastructure details that are related to concurrency. The Concurrency Runtime also enables you to specify scheduling policies that meet the quality of service demands of your applications.
Figure 1 presents the architecture of Concurrency Runtime Framework.
In this article, I discuss the Task Scheduler layer and examine how it works internally. To do so, I use CppDepend, an analysis tool that makes it easier for you to manage complex C\C++ (native, mixed, and COM) code bases.
The Task Scheduler
The Task Scheduler schedules and coordinates tasks at runtime. A task is a unit of work that performs a specific job. The Task Scheduler manages the details that are related to efficiently scheduling tasks on computers that have multiple computing resources.
Windows provides a preemptive kernel-mode scheduler -- a round-robin, priority-based mechanism that gives every task exclusive access to a computing resource for a given time period, then switches to another task. Although this mechanism provides "fairness" (every thread makes forward progress), it comes at some cost of efficiency. For example, many compute-intensive algorithms do not require fairness. Instead, it is important that related tasks finish in the least overall time. Cooperative scheduling enables an application to more efficiently schedule work.
Cooperative scheduling is a mechanism that gives every task exclusive access to a computing resource until the task finishes or until the task yields its access to the resource.
The user-mode cooperative scheduler enables application code to make its own scheduling decisions. Because cooperative scheduling enables many scheduling decisions to be made by the application, it reduces much of the overhead that is associated with kernel-mode synchronization.
The Concurrency Runtime (CRT) uses cooperative scheduling together with the preemptive scheduler of the operating system to achieve maximum usage of processing resources. In this article, I examine the Task Scheduler design and lift its hood to see how it works internally. For information on the CRT Resource Manager, see Concurrency Runtime: The Resource Manager. Again, I use CppDepend to analyze the CRT source code.
The CRT provides the Scheduler interface, which can be implemented to build a specific scheduler adapted to application needs. Let's examine the classes that implement this interface:
The CRT provides two implementations of the scheduler -- ThreadScheduler and UMSThreadScheduler. As illustrated in the dependency graph in Figure 2, the SchedulerBase class contains all behavior common to these two classes.
Is the Scheduler flexible? A good indicator of flexibility is to search for all abstract classes used by the Scheduler.
As shown in the dependency graph in Figure 3, the Scheduler uses many abstract classes. It enforces low coupling, and makes the scheduler more flexible, so adapting it to other needs is easy. To explain the role of each abstract class used by the Scheduler, I'll discuss its responsibilities.
There are three major responsibilities assigned to the Task Scheduler:
Getting resources (processors, cores, memory). When the scheduler is created, it asks for resources from the runtime Resource Manager (as explained in Concurrency Runtime: The Resource Manager). The Scheduler communicates with the Resource Manager through the IScheduler interface, and the Resource Manager uses the scheduler's policy when allocating resources to the Scheduler.
The policy as shown in Figure 4 is assigned when the Scheduler is created.
If no Scheduler exists, the CRT creates a default Scheduler by invoking the GetDefaultScheduler method, and a default policy is used. The Task Scheduler enables applications to use one or more Scheduler instances to schedule work, and an application can invoke Scheduler::Create to add another Scheduler that uses a specific policy.
The Concurrency::PolicyElementKey enumeration defines the policy keys that are associated with the Task Scheduler.
For more information on policy keys, see this article.
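As a rough sketch of how an application supplies such a policy when creating its own scheduler (using the VS2010-era Concurrency Runtime API; the particular keys and values below are only an example):

    #include <concrt.h>
    using namespace Concurrency;

    int main()
    {
        // Policy with two key/value pairs: between 2 and 4 virtual processors.
        SchedulerPolicy policy(2,
                               MinConcurrency, 2,
                               MaxConcurrency, 4);

        // Create a scheduler governed by this policy and attach it, so that
        // work scheduled from this context uses it instead of the default.
        Scheduler* scheduler = Scheduler::Create(policy);
        scheduler->Attach();

        // ... run task groups, agents, or parallel algorithms here ...

        CurrentScheduler::Detach();  // restore the previous scheduler
        scheduler->Release();        // release the reference from Create
        return 0;
    }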
The following collaborations between the Scheduler and the Resource Manager show the role of each interface involved in the allocation:
Ask for resource allocation:
Getting resources from Resource Manager: | <urn:uuid:a8560120-dd89-457c-91b5-f91cc2a65d44> | 3.0625 | 914 | Documentation | Software Dev. | 20.351805 | 1,277 |
What are gamma rays?
A gamma ray is a packet of electromagnetic energy--a photon. Gamma photons are the most energetic photons in the electromagnetic spectrum. Gamma rays (gamma photons) are emitted from the nucleus of some unstable (radioactive) atoms.
What are the properties of gamma radiation?
Gamma radiation is very high-energy radiation. Gamma photons have about 10,000 times as much energy as the photons in the visible range of the electromagnetic spectrum.
Gamma photons have no mass and no electrical charge--they are pure electromagnetic energy.
Because of their high energy, gamma photons travel at the speed of light and can cover hundreds to thousands of meters in air before spending their energy. They can pass through many kinds of materials, including human tissue. Very dense materials, such as lead, are commonly used as shielding to slow or stop gamma photons.
Their wavelengths are so short that they must be measured in nanometers, billionths of a meter. They range from 3/100ths to 3/1,000ths of a nanometer.
What is the difference between gamma rays and x-rays?
Gamma rays and x-rays, like visible, infrared, and ultraviolet light, are part of the electromagnetic spectrum. While gamma rays and x-rays pose the same hazard, they differ in their origin. Gamma rays originate in the nucleus. X-rays originate in the electron fields surrounding the nucleus.
What conditions lead to gamma ray emission?
Gamma emission occurs when the nucleus of a radioactive atom has too much energy. It often follows the emission of a beta particle.
What happens during gamma emission?
Cesium-137 provides an example of radioactive decay by gamma radiation. Scientists think that a neutron transforms to a proton and a beta particle. The additional proton changes the atom to barium-137. The nucleus ejects the beta particle. However, the nucleus still has too much energy and ejects a gamma photon (gamma radiation) to become more stable.
How does gamma radiation change in the environment?
Gamma rays exist only as long as they have energy. Once their energy is spent, whether in air or in solid materials, they cease to exist. The same is true for x-rays.
How are people exposed to gamma rays?
Most people's primary source of gamma exposure is naturally occurring radionuclides, particularly potassium-40, which is found in soil and water, as well as meats and high-potassium foods such as bananas. Radium is also a source of gamma exposure. However, the increasing use of nuclear medicine (e.g., bone, thyroid, and lung scans) contributes an increasing proportion of the total for many people. Also, some man-made radionuclides that have been released to the environment emit gamma rays.
Most exposure to gamma and x-rays is direct external exposure. Most gamma and x-rays can easily travel several meters through air and penetrate several centimeters in tissue. Some have enough energy to pass through the body, exposing all organs. X-ray exposure of the public is almost always in the controlled environment of dental and medical facilities.
Although they are generally classified as an external hazard, gamma-emitting radionuclides do not have to enter the body to be a hazard. Gamma emitters can also be inhaled, or ingested with water or food, and cause exposures to organs inside the body. Depending on the radionuclide, they may be retained in tissue, or cleared via the urine or feces.
Does the way a person is exposed to gamma or x-rays matter?
Both direct (external) and internal exposure to gamma rays or X-rays are of concern. Gamma rays can travel much farther than alpha or beta particles and have enough energy to pass entirely through the body, potentially exposing all organs. A large portion of gamma radiation passes through the body without interacting with tissue--the body is mostly empty space at the atomic level and gamma rays are vanishingly small in size. By contrast, alpha and beta particles inside the body lose all their energy by colliding with tissue and causing damage. X-rays behave in a similar way, but have slightly lower energy.
Gamma rays do not directly ionize atoms in tissue. Instead, they transfer energy to atomic particles such as electrons (which are essentially the same as beta particles). These energized particles then interact with tissue to form ions, in the same way radionuclide-emitted alpha and beta particles would. However, because gamma rays have more penetrating energy than alpha and beta particles, the indirect ionizations they cause generally occur farther into tissue (that is, farther from the source of radiation).
"That's not what I meant": human communication is fraught with misinterpretation. Written out in longhand, words and letters can be misread. A telegraph clerk can mistake a dot for a dash. Noise will always be with us, but at least a new JQI (*) device has established a new standard for reading quantum information with a minimum of uncertainty.
Success has come from viewing light pulses not with a single passive detector but with an adaptive network of detectors with feedback. The work on JQI's new, more assured photonic protocol was led by Francisco Becerra and carried out in Alan Migdall's JQI lab. They report their results in Nature Photonics (**). Here are some things you need to know to appreciate this development.
HOW TO MODULATE?
Digital data, in its simplest form, can be read with a process called on-off keying: a detector senses the intensity of incoming bursts of electrons in wires or photons through fibers and assigns a value of 0 or 1. A more sophisticated approach to modulating a signal (not merely off/on) is to encode data in the phase of the pulse. In "phase-shift keying," information is encoded in the amount of phase shift imposed on a carrier wave; the phase of the wave is how far along the wave cycle you happen to be (say, at the top of a crest or the bottom of a trough of a sinusoidal wave, as in this figure).
WHAT KIND OF ALPHABET?
Larger words can be assembled from a small suite of symbols. The Roman alphabet has 26 letters, the Greek only 24. Binary logic, and most transistors, makes do with just a two-letter alphabet. Everything is a 0 or a 1, and larger numbers and letters and words are assembled from as many binary bits as are necessary. But what if we enlarged the alphabet from two to four? In quaternary logic more data can be conveyed in a single pulse. The cost of this increase is having to write and read 4 states of modulation (or 4 symbols). Even more efficient in terms of packing data, but correspondingly more difficult to implement, is logic based on 6 states, or 8, or any higher number. Digital data at its most basic---at the level of transistor---remains in binary form, but for communicating this data, higher number alphabets can be used. In fact, high-definition television delivery already involves high-level logic.
No matter what kind of logic is used, errors creep in. A detector doesn't just unequivocally measure a 0 or a 1. The reading process is imperfect. And even worse, the state of the light pulse is inherently uncertain, and that is a real problem when the light pulses belong to a set of overlapping states. This is illustrated in the figure below for binary and quaternary phase states.
On the left side of the figure, the measurement of the phase of a light pulse is depicted, where there are only two choices. Is the pulse in the alpha state or the –alpha state? Because the tails of one overlap the other there is a slight ambiguity that leads to uncertainty in which state a measurement indicates. On the right, four possible states are depicted on a complex-number graph (with real (Re) and imaginary (Im) axes). Here the overlap of the states is more complicated, but results in similar ambiguities of the measured states, seen mostly near the borders (decision threshold lines) between the states.
STANDARD QUANTUM LIMIT
Decades ago communications theory established a minimal uncertainty for the accurate transmission and detection of information encoded in overlapping states. The hypothetical minimal detection error using conventional schemes is called the standard quantum limit and it depends on things like how many photons of light comprise the signal, how many levels (binary, quaternary, etc.) need to be read out, and which physical property of light is used to encode the information, such as the phase.
But starting in the 1970s with physicist Carl W. Helstrom, some scientists have felt that the standard quantum limit could be circumvented. The JQI researchers do exactly this by using not a single passive photo-detector, but an active detection process involving a series of stages. At each stage, the current light signal strikes a partially-silvered mirror, which peels off a fraction of the pulse for analysis and the rest goes on to subsequent stages. At each stage the signal is combined with a separate reference oscillator wave used as a phase reference against which the signal phase is determined. This is done by shifting the reference wave by a known amount and letting it interfere with the signal wave at the beamsplitter. By altering that known shift, the interference pattern can reveal something about the phase of the input pulse.
By combining many such stages (see the figure below) and using information gained by previous stages to adjust the phase of the reference wave in successive stages, a better estimate of the signal phase can be obtained.
Detecting phase in this adaptive way, and implemented in a feedback manner, the JQI system is able to beat the standard quantum limit for a set of 4 states (quaternary) encoding information as a phase. These states are represented as fuzzy distributions arranged at different angles around a circle as seen in the figure above where the angles represent the phase of the light pulses.
The JQI noise-reduction achievement is depicted in the graph below. The error rate is plotted as a function of the mean number of photons used to deliver the information. The standard quantum limit (SQL) is the red line. The light gray line is the SQL adjusted for the fact that the individual photon detection stages used were ~72% efficient rather than 100% (the detectors themselves being 84% efficient; in the business of detecting single photons, 84% is top of the line).
The error probabilities measured for the system (black points with error bars) fall well below the quantum limit, by about 6 decibels in the center of the curve. This is equivalent to saying that the JQI receiver is performing better than the SQL by a factor of about 4 in determining the phase of an incoming signal. That is, the JQI receiver achieves an error probability that is 4 times lower than the so-called "Standard Quantum Limit." This graph shows results for a system that implements 10 adaptive measurements. The two other lines on the chart show what the expected uncertainty would be for a perfect system (100% efficient detectors) and without any of the imperfections that would be encountered in any realistic implementation, and a hypothetical ultimate-limit on uncertainty derived by Helstrom.
To conclude, the JQI photon receiver features an error rate four times lower than that of perfect conventional receivers, over a wide range of photon numbers, and with discrimination among four states. The only previous detection below the quantum limit covered a very narrow range of photon numbers, used only a 2-state protocol, and fell only slightly below the SQL.
(*)The Joint Quantum Institute (JQI) is operated jointly by the National Institute of Standards and Technology in Gaithersburg, MD and the University of Maryland in College Park.
(**) "Experimental demonstration of a receiver beating the standard quantum limit for multiple nonorthogonal state discrimination," by F. E. Becerra, J. Fan, G. Baumgartner, J. Goldhar, J. T. Kosloski, and A. Migdall, Nature Photonics, published online 6 January 2013.
Choosing between constructor and setter methods
While a constructor helps initialize member variables during object creation, setter methods assign values to member variables after the object has been created in memory. The choice between a constructor and setter methods for initializing member variables depends upon the requirement.
Some examples where a constructor that accepts arguments should be used to initialize member variables are:
- When no setter method has been exposed, the constructor can be used to initialize any private member variable. We can see this approach being used by the String class in the JDK, where the constructor accepts a string literal object and uses the characters of this literal to initialize the internal character array.
- When invoking the superclass constructor is a requirement before the object can be constructed, one has to resort to using constructors for member variable initialization.
- Another common scenario where constructors are used for initializing instance variables is when those instance variables are quite significant for the object being constructed. For example, when creating a Book class instance, the title of the book is passed as an argument to the Book class constructor.
Using Setter Methods
Some examples where setter methods should be used for setting instance variables are:
- When a large number of member variables are present in a class, using a constructor with, say, 15 arguments does not make sense, and one should resort to setter methods for initializing those member variables.
- When the member variables change their values quite frequently, setter methods should be used.
- The use of setter methods makes it clear which member variable is being set, because a proper naming convention is used for the names of setter methods, whereas with a constructor there is no clear indication of which argument maps to which member variable.
One last point I want to make about the choice between a constructor and setter methods is that constructors can be overloaded to accept different types and numbers of arguments, which can be convenient in some cases, but that is not possible with setter methods.
I am looking through a piece of code and I cannot figure out what this does. I have seen it in other code but as I am just learning Java, I was hoping someone could tell me about it. Here is the code snippet:
line = line.replaceAll("\t", " ");
My question is what does \t do in Java??? | <urn:uuid:1f31e17e-4422-4abf-b612-a066006dd45a> | 2.875 | 76 | Q&A Forum | Software Dev. | 79.939868 | 1,281 |
Posted by Physics fail on Friday, January 25, 2013 at 5:28pm.
An electron experiences 1.2 x 10^-3 force when it enters the external magnetic field, B with a velocity v. What is the force experienced by the electron if the magnetic field is increased two times and the velocity is decreased to half?
Any help would be greatly appreciated
The generation argument is 0 for the most recent generation, 1 for the most recent two generations, and so on up to a maximum (usually 3). Numbers outside this range signal an error.
This function is used to garbage-collect a specified generation of storage (and all lower generations). A call to this function forces the garbage collector to scan the specified generations. This can be of use in obtaining consistent timings of programs that require memory allocation. Alternatively, performance can sometimes be improved by forcing a garbage collection when it is known that little memory has been allocated since a previous collection, rather than waiting for a later, more extensive collection. For example, the function could be called outside a loop that allocates a small amount of memory.
It is especially helpful to mark and sweep generation 2 when large, long-lived data structures become garbage, because by default it is never marked and swept. The higher the generation number, the more time the collection takes, but also the more space is recovered.
Students can learn how geologists use stratigraphy, the study of layered rock, to understand the sequence of geological events. As students watch baking soda-vinegar "lava" flow from their clay volcanoes, they will see that the lava follows different paths. They will also learn how to distinguish between older and newer layered flows.
Lava Layering Activity
[82KB PDF file]
This activity is part of the Exploring the Moon Educator Guide | <urn:uuid:99ce2c1f-fbae-47e1-a402-ab298e242ee3> | 3.75 | 95 | Tutorial | Science & Tech. | 39.331667 | 1,284 |
Two Cars in 2-Dimensional Collision
Collisions between objects are governed by laws of momentum and energy. When a collision occurs in an isolated system, the total momentum of the system of objects is conserved. Provided that there are no net external forces acting upon the objects, the momentum of all objects before the collision equals the momentum of all objects after the collision. If there are only two objects involved in the collision, then the momentum change of the individual objects are equal in magnitude and opposite in direction.
Certain collisions are referred to as elastic collisions. Elastic collisions are collisions in which both momentum and kinetic energy are conserved. The total system kinetic energy before the collision equals the total system kinetic energy after the collision. If total kinetic energy is not conserved, then the collision is referred to as an inelastic collision.
The animation below portrays the inelastic collision between two 1000-kg cars. The before- and after-collision velocities and momentum are shown in the data tables.
In the collision between the two cars, total system momentum is conserved. Yet this might not be apparent without an understanding of the vector nature of momentum. Momentum, like all vector quantities, has both a magnitude (size) and a direction. When considering the total momentum of the system before the collision, the individual momentum of the two cars must be added as vectors. That is, 20 000 kg*m/s, East must be added to 10 000 kg*m/s, North. The sum of these two vectors is not 30 000 kg*m/s; this would only be the case if the two momentum vectors had the same direction. Instead, the sum of 20 000 kg*m/s, East and 10 000 kg*m/s, North is 22 361 kg*m/s at an angle of 26.6 degrees North of East. Since the two momentum vectors are at right angles, their sum can be found using the Pythagorean theorem; the direction can be found using SOH CAH TOA (specifically, the tangent function). The value 22 361 kg*m/s is the total momentum of the system before the collision; and since momentum is conserved, it is also the total momentum of the system after the collision. Since the cars have equal mass, the total system momentum is shared equally by each individual car. In order to determine the momentum of either individual car, this total system momentum must be divided by two (approx. 11 200 kg*m/s). Once the momenta of the individual cars are known, the after-collision velocity is determined by simply dividing momentum by mass (v=p/m).
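Worked out explicitly, the arithmetic described above is:

\[
p_{\text{total}} = \sqrt{(20\,000)^2 + (10\,000)^2}\ \text{kg·m/s} \approx 22\,361\ \text{kg·m/s},
\qquad
\theta = \tan^{-1}\!\left(\frac{10\,000}{20\,000}\right) \approx 26.6^{\circ}\ \text{North of East}
\]

Each car then carries half of this total, roughly 11 200 kg*m/s, so each 1000-kg car moves at v = p/m, or about 11.2 m/s, after the collision; this is consistent with the 62 500 J of kinetic energy per car discussed below.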
An analysis of the kinetic energy of the two objects reveals that the total system kinetic energy before the collision is 250 000 Joules (200 000 J for the eastbound car plus 50 000 J for the northbound car). After the collision, the total system kinetic energy is 125 000 Joules (62 500 J for each car). The total kinetic energy before the collision is not equal to the total kinetic energy after the collision. A large portion of the kinetic energy is converted to other forms of energy such as sound energy and thermal energy. A collision in which total system kinetic energy is not conserved is known as an inelastic collision.
For more information on physical descriptions of motion, visit The Physics Classroom Tutorial. Detailed information is available there on the following topics: | <urn:uuid:c8a8aa99-0acb-4aa2-bee7-a3bc9ac7397b> | 4.28125 | 704 | Tutorial | Science & Tech. | 48.962724 | 1,285 |
Daniel Botkin, emeritus professor of ecology at UC Santa Barbara, argues in the Wall Street Journal (Oct 17, page A19) that global warming will not have much impact on life on Earth. We’ll summarize some of his points and then take our turn:
Botkin: The warm climates in the past 2.5 million years did not lead to extinctions.
Response: For the past 2.5 million years the climate has oscillated between interglacials which were (at most) a little warmer than today and glacials which were considerably colder than today. There is no precedent in the past 2.5 million years for so much warming so fast. The ecosystem has had 2.5 million years to adapt to glacial-interglacial swings, but we are asking it to adapt to a completely new climate in just a few centuries. The past is not a very good analog for the future in this case. And anyway, the human species can suffer quite a bit before we start talking extinction.
Botkin: Tropical diseases are affected by other things besides temperature
Response: I’m personally more worried about dust bowls than malaria in the temperate latitudes. Droughts don’t lead to too many extinctions either, but they can destroy civilizations. It is true that tropical diseases are affected by many things besides temperature, but temperature is important, and the coming warming is certainly not going to make the fight against malaria any easier.
Botkin: Kilimanjaro again.
Response: Been there, done that. The article Botkin cites is from American Scientist, an unreviewed pop science magazine, and it is mainly a rehash of old arguments that have been discussed and disposed of elsewhere. And anyway, the issue is a red-herring. Even if it turned out that for some bizarre reason the Kilimanjaro glacier, which is thousands of years old, picked just this moment to melt purely by coincidence, it would not in any way affect the validity of our prediction of future warming. Glaciers are melting around the world, confirming the general warming trends that we measure. There are also many other confirmations of the physics behind the predictions. It’s a case of attacking the science by attacking an icon, rather than taking on the underlying scientific arguments directly.
Botkin: The medieval optimum was a good time
Response: Maybe it was, if you’re interested in Europe and don’t mind the droughts in the American Southwest. But the business-as-usual forecast for 2100 is an entirely different beast than the medieval climate. The Earth is already probably warmer than it was in medieval times. Beware the bait and switch!
Botkin argues for clear-thinking rationality in the discussion about anthropogenic climate change, against twisting the truth, as it were. We couldn’t agree more. Doctor, heal thyself.
For years the Wall Street Journal has been lying to you about the existence of global warming. It doesn’t exist, it’s a conspiracy, the satellites show it’s just urban heat islands, it’s not CO2, it’s all the sun, it’s water vapor, and on and on. Now that those arguments are losing traction, they have moved on from denying global warming’s existence to soothing you with reassurances that it ain’t gonna be such a bad thing.
Fool me once, shame on…shame on you. Fool me–you can’t get fooled again.
-George W. Bush | <urn:uuid:e427ac79-68aa-493b-b504-a5492682a6f8> | 2.96875 | 740 | Comment Section | Science & Tech. | 52.143282 | 1,286 |
Last week we proposed a bet against the “pause in global warming” forecast in Nature by Keenlyside et al. and we promised to present our scientific case later – so here it is.
This is why we do not think that the forecast is robust:
Figure 4 from Keenlyside et al ’08. The red line shows the observations (HadCRU3 data), the black line a standard IPCC-type scenario (driven by observed forcing up to the year 2000, and by the A1B emission scenario thereafter), and the green dots with bars show individual forecasts with initialised sea surface temperatures. All are given as 10-year averages.
- Their figure 4 shows that a standard IPCC-type global warming scenario performs slightly better for global mean temperature for the past 50 years than their new method with initialised sea surface temperatures (see also the correlation numbers given at the top of the panel). That the standard warming scenario performs better is highly remarkable since it has no observed data included. The green curve, which presents a set of individual 10-year forecasts and is not a time series, each time starts again close to the observed climate, because it is initialised with observed sea surface temperatures. So by construction it cannot get too far away, in contrast to the “free” black scenario. Thus you’d expect the green forecasts to perform better than the black scenario. The fact that this is not the case shows that their initialisation technique does not improve the model forecast for global temperature.
- Their ‘cooling forecasts’ have not passed the test for their hindcast period. Global 10-year average temperatures have increased monotonically during the entire time they consider – see their red line. But the method seems to have already produced two false cooling forecasts: one for the decade centered on 1970, and one for the decade centered on 1999.
- Their forecast was not only too cold for 1994-2004, but it also looks almost certain to be too cold for 2000-2010. For their forecast for 2000-2010 to be correct, all the remaining months of this period would have to be as cold as January 2008 – which was by far the coldest month in that decade thus far. It would thus require an extreme cooling for the next two-and-a-half years.
- Even for European temperatures (their Fig. 3c, not part of our proposed bet), the forecast skill of their method is not impressive. Their method has predicted cooling several times since 1970, yet the European temperatures have increased monotonically since then. Remember the forecasts always start near the red line; almost every single prediction for Europe has turned out to be too cold compared to what actually happened. There therefore appears to be a systematic bias in the forecasts.
- One of the key claims of the paper is that the method allows forecasting the behaviour of the meridional overturning circulation (MOC) in the Atlantic. We do not know what the MOC has actually been doing for lack of data, so the authors diagnose the state of the MOC from the sea surface temperatures – to put it simply: a warm northern Atlantic suggests strong MOC, a cool one suggests weak MOC (though it is of course a little more complex). Their method nudges the model’s sea surface temperatures towards the observed ones before the forecast starts. But can this induce the correct MOC response? Suppose the model surface Atlantic is too cold, so this would suggest the MOC is too weak. The model surface temperatures are then nudged warmer. But if you do that, you are making surface waters more buoyant, which tends to weaken the MOC instead of enhancing it! So with this method it seems unlikely to us that one could get the MOC response right. We would be happy to see this tested in a ‘perfect model’ set up, where the SST-restoring was applied to try and get the model forecasts to match a previous simulation (where you know much more information). If it doesn’t work for that case, it won’t work in the real world.
- When models are switched over from being driven by observed sea surface temperatures to freely calculating their own sea surface temperatures, they suffer from something called a “coupling shock”. This is extremely hard, perhaps even impossible, to avoid as “perfect model” experiments have shown (e.g. Rahmstorf, Climate Dynamics 1995). This problem presents a formidable challenge for the type of forecast attempted by Keenlyside et al., where just such a “switching over” to free sea surface temperatures occurs at the start of the forecast. In response to the “coupling shock”, a model typically goes through an oscillation of the meridional overturning circulation over the next decades, of the magnitude similar to that seen in the Keenlyside et al simulations. We suspect that this “coupling shock”, which is not a realistic climate variability but a model artifact, could have played an important role in those simulations. One test would be the perfect model set up we mentioned above, or an analysis of the net radiation budget in the restored and free runs – a significant difference there could explain a lot.
- To check how the Keenlyside et al. model performs for the MOC, we can look at their skill map in Fig. 1a. This shows blue areas in the Labrador Sea, Greenland-Iceland-Norwegian Sea and in the Gulf Stream region. These blue areas indicate “negative skill” – that means their data assimilation method makes things worse rather than improving the forecast. These are the critical regions for the MOC, and it indicates that for either of the two reasons 5 and 6, their method is not able to correctly predict the MOC variations. Their method does show skill in some regions though – this is important and useful. However, it might be that this skill comes from the advection of surface temperature anomalies by the mean ocean circulation rather than from variations of the MOC. That would also be an interesting issue to research in the future.
- All climate models used by IPCC, publicly available in the CMIP3 model archive, include intrinsic variability of the MOC as well as tropical Pacific variability or the North Atlantic Oscillation. Some of them also include an estimate of solar variability in the forcing. So in principle, all of these models should show the kind of cooling found by Keenlyside et al. – except these models should show it at a random point in time, not at a specific time. The latter is the innovation sought after by this study. The problem is that the other models show that a cooling of one decadal mean to the next in a reasonable global warming scenario is extremely unlikely and almost never occurs – see yesterday’s post. This suggests that the global cooling forecast by Keenlyside et al. is outside the range of natural variability found in climate models (and probably in the real world, too), and is perhaps an artifact of the initialisation method.
Our assessment could of course be wrong – we had to rely on the published material, while Keenlyside et al. have access to the full model data and have worked with it for months. But the nice thing about this forecast is that within a few years we will know the answer, because these are testable short term predictions which we are happy to see more of.
Why did we propose a bet on this forecast? Mainly because we were concerned by the global media coverage which made it appear as if a coming pause in global warming was almost a given fact, rather than an experimental forecast. This could backfire against the whole climate science community if the forecast turns out to be wrong. Even today, the fact that a few scientists predicted a global cooling in the 1970s is still used to undermine the credibility of climate science, even though at the time it was just a small minority of scientists making such claims and they never convinced many of their peers. If different groups of scientists have a public bet running on this, this will signal to the public that this forecast is not a widely supported consensus of the climate science community, in contrast to the IPCC reports (about which we are in complete agreement with Keenlyside and his colleagues). Some media reports even suggested that the IPCC scenarios were now superseded by this “improved” forecast.
Framing this in the form of a bet also helps to clarify what exactly was forecast and what data would falsify this forecast. This was not entirely clear to us just from the paper and it took us some correspondence with the authors to find out. It also allows the authors to say: wait, this is not how we meant the forecast, but we would bet on a modified forecast as follows… By the way, we are happy to negotiate what to bet about – we’re not doing this to make money. We’d be happy to bet about, say, a donation to a project to preserve the rain forest, or retiring a hundred tons of CO2 from the European emissions trading market.
We thus hope that this discussion will help to clarify the issues, and we invite Keenlyside et al. to a guest post here (and at KlimaLounge) to give their view of the matter. | <urn:uuid:f973850b-0d59-4abe-934c-cc3630e05790> | 2.8125 | 1,921 | Academic Writing | Science & Tech. | 45.906641 | 1,287 |
Parasitic Wasp Genome Released
Parasitic wasps kill pest insects, but their existence is largely unknown to the public. Now, scientists led by John H. Werren, professor of biology at the University of Rochester, and Stephen Richards at the Genome Sequencing Center at the Baylor College of Medicine have sequenced the genomes of three parasitoid wasp species, revealing many features that could be useful to pest control and medicine, and to enhance our understanding of genetics and evolution. The study appears in the Jan. 15 issue of Science.
"Parasitic wasps attack and kill pest insects, but many of them are smaller than the head of a pin, so people don’t even notice them or know of their important role in keeping pest numbers down," says Werren. "There are over 600,000 species of these amazing critters, and we owe them a lot. If it weren’t for parasitoids and other natural enemies, we would be knee-deep in pest insects."
Parasitoid wasps are like "smart bombs" that seek out and kill only specific kinds of insects, says Werren. "Therefore, if we can harness their full potential, they would be vastly preferable to chemical pesticides, which broadly kill or poison many organisms in the environment, including us."
The three wasp genomes Werren and Richards sequenced are in the wasp genus Nasonia, which is considered the "lab rat" of parasitoid insects. Among the future applications of the Nasonia genomes that could be of use in pest control is identification of genes that determine which insects a parasitoid will attack, identification of dietary needs of parasitoids to assist in economical, large-scale rearing of parasitoids, and identification of parasitoid venoms that could be used in pest control. Because parasitoid venoms manipulate cell physiology in diverse ways, they also may provide an unexpected source for new drug development.
In addition to being useful for controlling pests and offering promising venoms, the wasps could act as a new genetic system with a number of unique advantages. Fruit flies have been the standard model for genetic studies for decades, largely because they are small, can be grown easily in a laboratory, and reproduce quickly. Nasonia share these traits, but male Nasonia have only one set of chromosomes, instead of two sets like fruit flies and people. "A single set of chromosomes, which is more commonly found in lower single-celled organisms such as yeast, is a handy genetic tool, particularly for studying how genes interact with each other," says Werren. Unlike fruit flies, these wasps also modify their DNA in ways similar to humans and other vertebrates””a process called "methylation," which plays an important role in regulating how genes are turned on and off during development.
"In human genetics we are trying to understand the genetic basis for quantitative differences between people such as height, drug interactions and susceptibility to disease," says Richards. "These genome sequences combined with haploid-diploid genetics of Nasonia allow us to cheaply and easily answer these important questions in an insect system, and then follow up any insights in humans."
The wasps have an additional advantage in that closely related species of Nasonia can be cross-bred, facilitating the identification of genes involved in species’ differences. "Because we have sequenced the genomes of three closely related species, we are able to study what changes have occurred during the divergence of these species from one another," says Werren. "One of the interesting findings is that DNA of mitochondria, a small organelle that ‘powers’ the cell in organisms as diverse as yeast and people, evolves very fast in Nasonia. Because of this, the genes of the cell’s nucleus that encode proteins for the mitochondria must also evolve quickly to ‘keep up.’ " It is these co-adapting gene sets that appear to cause problems in hybrids when the species mate with each other. Research groups are now busy trying to figure out what specific kinds of interactions go wrong in the hybrid offspring. Since mitochondria are involved in a number of human diseases, as well as fertility and aging, the rapidly evolving mitochondria of Nasonia and coadapting nuclear genes could be useful research tools to investigate these processes.
A second startling discovery is that Nasonia has been picking up and using genes from bacteria and Pox viruses (e.g. relatives of the human smallpox virus). "We don’t yet know what these genes are doing in Nasonia," says Werren, "but the acquisition of genes from bacteria and viruses could be an important mechanism for evolutionary innovation in animals, and this is a striking potential example."
A companion paper to the Science study, published today in PLoS Genetics, reports the first identification of the DNA responsible for a quantitative trait gene in Nasonia, and heralds Nasonia joining the ranks of model genetic systems. The study reveals that changes in "non-coding DNA," the portion that does not make proteins but can regulate expression of genes, cause a large developmental difference between closely related species of Nasonia. This finding relates to an important ongoing controversy in evolution: whether differences between species are due mostly to protein changes or regulatory changes. "Emerging from these genome studies are a lot of opportunities for exploiting Nasonia in topics ranging from pest control to medicine, genetics, and evolution," says Werren. "However, the community of scientists working on Nasonia is still relatively small. That is why we are hoping that more scientists will see the utility of these insects, and join in efforts to exploit their potential."
Image 1: Nasonia female. Credit: Michael E. Clark/University of Rochester
Image 2: Chris Desjarding and Jack Werren compare parasitic wasps (tiny insects in upper tube) to their host flies (in the lower tube). Credit: University of Rochester
On the Net: | <urn:uuid:47f59705-072b-4333-866a-a93859662fdf> | 3.328125 | 1,240 | News Article | Science & Tech. | 27.326861 | 1,288 |
Nov. 25, 2002 - Molting, that periodic ritual in which arthropods shed and replace their outer skeletons, can be a dangerous time for the creatures. Just ask the trilobite.
Research published by a Michigan State University paleontologist suggests that an inconsistent molting style, coupled with inefficient physiology, contributed to the demise of these prehistoric relatives of today's crabs and lobsters nearly 250 million years ago.
"They would shed their old exoskeleton any way they could," said Danita Brandt, a faculty member in MSU's Department of Geological Sciences whose findings were published in the Australian paleontology journal Alcheringa. "They had to improvise."
On the other hand, today's modern arthropods molt the very same way every time. The same suture opens every time, letting the animal out.
"When the same technique is used, there is less of a chance that things will go wrong," she said. "Molting is a very dangerous time for an arthropod. A lot of things can go wrong."
Brandt's proposed connection between arthropod molting and evolutionary fate is based on two pieces of evidence: the inconsistency of molting patterns that characterize trilobites, in contrast to the consistent patterns seen in modern arthropods; and her observation that certain trilobites that had fewer body segments tended to live longer -- evolutionarily speaking -- than those that had many segments.
"Trilobites with fewer segments probably had a lower risk of molting-related accidents, and may have shed their old exoskeleton more quickly," she said. "These are traits of modern arthropods that act to minimize the period during which the animals are vulnerable to predators."
Brandt also noted that trilobite molting differed from molting in modern arthropods in another potentially important way: many modern arthropods resorb minerals from the old exoskeleton or consume their molted exoskeleton, thus conserving resources.
"There is no evidence that trilobites used these conservation strategies," she said. "Apparently trilobites were faced with the considerable task of rebuilding a heavily calcified skeleton 'from scratch' with each molt."
At one time, trilobites were one of the more evolutionary successful animals to roam the early world's oceans. The crab-like creatures, some of which were as small as a fingernail while others were nearly a foot long, thrived, especially during the Cambrian Period. It was at the end of the Paleozoic Era that the trilobite disappeared.
For years the trilobite's extinction had been blamed on a sudden increase in the numbers of trilobite predators. Fossil records show that the number of trilobites began to drop as other aquatic animals, such as fish and squid, began to increase.
"But it's highly unlikely that predators ever eliminated an entire group," Brandt said. "Another argument against predation alone is that other arthropods continue to thrive even today despite the proliferation of predator groups."
Other theories linked to trilobite extinction include climate change, sea-level fluctuation, and even the effects of meteorite impact. However, the correlation between these possible causes and the pattern of trilobite extinctions is not consistent, Brandt said.
"I think there is a biological 'wild card' that complicates the correlation of trilobite extinction with environmental factors, and for the trilobites I think that wild card was the unique challenge they faced during molting," she said.
The above story is reprinted from materials provided by Michigan State University.
Note: Materials may be edited for content and length. For further information, please contact the source cited above.
Note: If no author is given, the source is cited instead. | <urn:uuid:4e44e283-1fe3-45ab-800e-b3627b92a0bb> | 3.6875 | 810 | News Article | Science & Tech. | 27.495217 | 1,289 |
Mar. 13, 2010 - New studies of ripples and dunes shaped by the winds on Mars testify to variability on that planet, identifying at least one place where ripples are actively migrating and another where the ripples have been stationary for 100,000 years or more.
Patterns of dunes and the smaller ripples present some of the more visually striking landforms photographed by cameras orbiting Mars. Investigations of whether they are moving go back more than a decade.
Two reports presented at the 41st Lunar and Planetary Sciences Conference near Houston recently make it clear that the answer depends on where you look. Both reports used images from the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter, which allows examination of features as small as about a meter, or yard, across.
One report is by Simone Silvestro of the International Research School of Planetary Sciences at Italy's G. d'Annunzio University, and his collaborators. They investigated migration of ripples and other features on dark dunes within the Nili Patera area of Mars' northern hemisphere. They compared an image taken on Oct. 13, 2007, with another of the same dunes taken on June 30, 2007. Most of the dunes in the study area are hundreds of meters long. Ripples form patterns on the surfaces of the dunes, with crests of roughly parallel ripples spaced a few meters apart.
Careful comparison of the images revealed places where ripples on the surface of the dunes had migrated about 2 meters (7 feet) -- the largest movement ever measured in a ripple or dune on Mars. The researchers also saw changes in the shape of dune edges and in streaks on the downwind faces of dunes.
"The dark dunes in this part of Mars are active in present-day atmospheric conditions," Silvestro said. "It is exciting to have such high-resolution images available for comparisons that show Mars as an active world."
The other report is by Matthew Golombek of NASA's Jet Propulsion Laboratory, Pasadena, Calif., and collaborators. They checked whether ripples have been moving in the southern-hemisphere area of Mars' Meridiani Planum where the Mars Exploration Rover Opportunity has been working since 2004. They used observations by Opportunity as well as by HiRISE, surveying an area of about 23 square kilometers (9 square miles). Examination of ripples at the edges of craters can show whether the ripples were in place before the crater was excavated or moved after the crater formed.
"HiRISE images are so good, you can tell if a crater is younger than the ripple migration," Golombek said. "There's enough of a range of crater ages that we can bracket the age of the most recent migration of the ripples in this area to more than 100,000 years and probably less than 300,000 years ago."
Winds are still blowing sand and dust at Meridiani. Opportunity has seen resulting changes in its own wheel tracks revisited several months after the tracks were first cut.
Golombek has a hypothesis for why the ripples at Meridiani are static, despite winds, while those elsewhere on Mars may be actively moving. Opportunity has seen that the long ripples in the region are covered with erosion-resistant pebbles, nicknamed "blueberries," which the rover first observed weathering out of softer matrix rocks beside the landing site. These spherules -- mostly about 1 to 3 millimeters (0.04 to 0.12 inches) in diameter -- may be too large for the wind to budge.
"The blueberries appear to form a armoring layer that shields the smaller sand grains beneath them from the wind," he said.
HiRISE Principal Investigator Alfred McEwen, of the University of Arizona, Tucson, said, "The more we look at Mars at the level of detail we can now see, the more we appreciate how much the planet differs from one place to another."
The Mars Reconnaissance Orbiter and the Mars Exploration Rover missions are managed by JPL for NASA's Science Mission Directorate in Washington. Lockheed Martin Space Systems in Denver was the prime contractor for the orbiter and supports its operations. The University of Arizona operates the HiRISE camera, which was built by Ball Aerospace & Technologies Corp., Boulder, Colo.
Note: If no author is given, the source is cited instead. | <urn:uuid:5833fe27-0dfb-4e21-90b8-da2378972590> | 3.453125 | 917 | News Article | Science & Tech. | 41.079683 | 1,290 |
May 21, 2010 - Researchers at Spain's Centre for Genomic Regulation (CRG) demonstrate evidence in support of the common ancestry of life, thanks to a new computational approach to study protein evolution.
The work, published in Nature, takes its inspiration from the astronomer Edwin Hubble and uses his approach to study protein evolution. The extrapolation of Hubble's approach to proteins shows that proteins that share a common ancestor billions of years ago continue to diverge in their molecular composition.
The study reveals that protein evolution has not reached its limit and is still continuing. At the same time, it provides new information on why this evolution is so slow and conservative, showing that protein structures are more evolutionarily plastic than previously thought.
Almost 100 years ago Edwin Hubble observed that distant galaxies are moving away from Earth faster than those that are closer. This relationship between distance and velocity is widely cited as evidence of the origin of the Universe from a Big Bang. Researchers at the Centre for Genomic Regulation used his approach to investigate the divergence between protein sequences.
"We wanted to know if the divergent evolution between proteins was still proceeding. Today, we can find proteins that are still similar after almost 3,5 billion years of evolution. Our study showed that their divergence continues with these proteins becoming more and more different despite their incredible level of conservation," said Fyodor Kondrashov, principal investigator of the project and leader of the Evolutionary Genomics group at the CRG.
The work done by Kondrashov and Inna Povolotskaya goes beyond similarity studies and discusses the evolution of proteins from the view of evolutionary dynamics, offering a new perspective on how protein structures are maintained in evolution. "In the same way that Hubble's observations led to an understanding of the past and the future of our universe, using his approach at a molecular level we get a similar overview that gives us the ability to analyze evolutionary dynamics and get a broad prediction of the possible changes to the proteins in the future," says Povolotskaya, first author of the work and responsible for obtaining and analyzing all data.
Proteins are formed through combinations of amino acids, with only 20 types of amino acids available to form a particular protein. To obtain the data for their study, the CRG researchers compared protein sequences from different species that were available in GenBank, a public database of genetic information. Comparing these sequences, the authors measured the distance of proteins from each other and devised a method for measuring how fast the proteins are accumulating different changes. Thus, they could replicate Hubble's approach by correlating the distance between the proteins with the rate of their divergence. The result indicates that even the most distantly related proteins are still accumulating differences.
The study shows how new techniques of bioinformatics and computational analysis can also expand knowledge at a molecular level. "Our work is a good example of how we can learn new and very fundamental things just by analyzing a larger volume of data that can be obtained by one experimental laboratory," says Kondrashov.
Most changes in a protein are deleterious because they somehow disrupt its structure or function. The authors' observation that even very conservative proteins are still diverging challenges this view, because it implies that most amino acids in a protein can be changed without any ill effects. Their explanation is that amino acid changes that are deleterious in one combination can be benign when occurring in a different one. "Thanks to our study we now have a better understanding of protein structure dynamics," says Kondrashov. It may provide a new perspective to groups working on protein structure, for example in finding new targets for drug design.
The Povolotskaya and Kondrashov study also provides new information on how different interactions between different amino acids in the structure of proteins slows down but does not completely prevent evolution.
- Inna S. Povolotskaya, Fyodor A. Kondrashov. Sequence space and the ongoing expansion of the protein universe. Nature, 2010; DOI: 10.1038/nature09105
Note: If no author is given, the source is cited instead. | <urn:uuid:437c84e0-4d2f-4478-9de1-d386b61601c9> | 3.5625 | 854 | News (Org.) | Science & Tech. | 25.070384 | 1,291 |
When we drive somewhere new, we navigate by referring to a two-dimensional map that accounts for distances only on a horizontal plane. According to research published online in August in Nature Neuroscience, the mammalian brain seems to do the same, collapsing the world into a flat plane even as the animal skitters up trees and slips deep into burrows.
“Our subjective sense that our map is three-dimensional is illusory,” says Kathryn Jeffery, a behavioral neuroscientist at University College London who led the research. Jeffery studies a collection of neurons in and around the rat hippocampus that build an internal representation of space. As the animal travels, these neurons, called grid cells and place cells, respond uniquely to distance, turning on and off in a way that measures how far the animal has moved in a particular direction.
Past research has focused on how these cartographic cells encode two-dimensional space. Jeffery and her colleagues decided to look at how they respond to changes in altitude. To do this, they enticed rats to climb up a spiral staircase while the scientists collected electrical recordings from single cells. The firing pattern encoded very little information about height.
The finding adds evidence for the hypothesis that the brain keeps track of our location on a flat plane, which is defined by the way the body is oriented. If a squirrel, say, is running along the ground, then scampers straight up a tree, its internal two-dimensional map simply shifts from the horizontal plane to the vertical. Astronauts are some of the few humans to describe this experience: when they move in space to “stand” on a ceiling, they report a moment of disorientation before their mental map flips so they feel right side up again.
Researchers do not know yet whether other areas of the brain encode altitude or whether mammals simply do not need that information to survive. “Maybe an animal has a mosaic of maps, each fragment of which is flat but which can be oriented in the way that’s appropriate,” Jeffery speculates. Or maybe in our head, the world is simply flat. | <urn:uuid:87a08e2d-8f6c-4dcb-b8cb-c5fc52364d4f> | 4 | 430 | News Article | Science & Tech. | 38.159249 | 1,292 |
String theorists had looked at the idea of confining all forces to a brane and having gravity leak, but they had not worked out the mechanism, says physicist Joseph Lykken of Fermilab in Batavia, Ill. Randall and Sundrum, he remarks, "changed people's thinking about this stuff entirely."
As Randall and Sundrum refined their idea, they realized that if the extra dimension of spacetime were warped in anti-De Sitter fashion, it could be infinitely large and what we observe about gravity could still be true. This model came to be known as RS-2. "Working that out was mind-blowing," Sundrum recalls. "We had reason to be dead scared. In each of these cases, there was a distinct fear of making complete fools of ourselves."
"It was counterintuitive," notes theorist Michael J. Duff of Imperial College London. "It came as a surprise even to those working in extra dimensions that even though the extra dimension is very large, we wouldn't be aware of it. Newton's law would still be an inverse square law, not an inverse cube law, which is what you might naively expect."
It took a while for many physicists to realize what Randall and Sundrum were suggesting, but the time was right for such thinking. Anti-De Sitter space was popping up in some models, branes were thriving, and in 1998 Nima Arkani-Hamed of Harvard, Georgi Dvali of New York University and Savas Dimopoulos of Stanford University (or ADD, for short) had postulated a three-brane within two large extra dimensions.
Randall and Sundrum offered a new set of options of what went on in the early universe.
Some of the recent models, be they RS, elaborations of ADD or others, will be put to the test when the Large Hadron Collider (LHC) at CERN near Geneva fires up in 2007. "If there is any solution to the hierarchy problem, it should be revealed at the energies the LHC will explore," Randall enthuses. Evidence could include gravitons, supersymmetric partners or evanescent, tiny black holes. "Even if we don't know the answer, it should tell us what the answer is," she adds.
In typical fashion, Randall recently took on two things new to her. The first was writing a book about physics, released last month. The second was participating on a task force formed by Harvard president Lawrence Summers after his comments about women in science. She says she is nervous about the reception of the first project and dislikes talking about the second one. "I like to solve simple problems like extra dimensions in space," Randall declares. "Everyone thinks [women in science] is a simpler issue, but it is so much more complicated."
She should know: she was the first female captain of her high school math team, and even though Stuyvesant is famous for cultivating science and math whizzes, she did not find it supportive of girls. "There was one teacher who kept saying that Stuyvesant was much better when it was all boys, even though the two best students in his class were girls, and he liked us both. It was this weird cognitive mismatch," she says. Regarding Harvard and the task force, Randall is reticent: "I just want to see a whole bunch more women enter the field so these issues don't have to come up anymore."
The 43-year-old Randall is now collaborating with Andreas Karch of the University of Washington, investigating some of the cosmological implications of branes and extra dimensions. According to Randall, we may live in a three-brane, but "there are regions beyond the horizon that look really entirely different. And we haven't fully explored them yet."
If her ideas don't feel obvious to you, don't fret. You are in good company. "I often don't understand her," Karch confesses. "When she says things, they don't make sense and I first think 'she is crazy.' But I don't say anything, because she is usually right. Lisa just knows the answer."
This article was originally published with the title The Beauty of Branes. | <urn:uuid:43937958-3187-45a2-8098-1365406af8dd> | 2.515625 | 871 | Truncated | Science & Tech. | 54.907341 | 1,293 |
By: Michael Collins -EnviroReporter.com
Millions of Southern Californians and tourists seek the region’s famous beaches to cool off in the sea breeze and frolic in the surf. Those iconic breezes, however, may be delivering something hotter than the white sands along the Pacific: Buckyballs.
According to a recent UC Davis study, these uranium-filled nanospheres were created from the millions of tons of fresh and salt water used to try to cool down three molten cores of the stricken reactors at Japan’s Fukushima Daiichi Nuclear Power Plant. The tiny and tough buckyballs are shaped like soccer balls.
Water hitting the incredibly hot and radioactive, primarily uranium-oxide fuel turns it into peroxide. In this goo mix, buckyballs are formed, loaded with uranium and able to move quickly through water without disintegrating.
High radiation readings in Santa Monica and Los Angeles air during a 42-day period from late December to late January strongly suggest that radiation is increasing in the region including along the coast in Ventura County.
The radiation, detected by this reporter and the US Environmental Protection Agency, separate from each other and using different procedures, does not appear to be natural in origin. The EPA’s radiation station is high atop an undisclosed building in Los Angeles, while this reporter’s detection location is near the West LA boundary.
Both stations registered more than 5.3 times the normal amount, though the methods of sampling and detection differed. The videotaped Santa Monica sampling and testing allowed for the detection of alpha and beta radiation, while the sensitive EPA instrument detected beta only, according to the government Web site.
A windy Alaskan storm front sweeping down the coast the morning of March 31 slammed Southern California with huge breakers, a choppy sea with 30-foot waves and winds gusting to 50 mph. A low-hanging marine layer infused with sea spray made aloft from the chop and carried on the winds that blew inland over the Los Angeles Basin for several miles, bringing with it the highest radiation this reporter has detected in hot rain since the meltdowns began.
Scientific studies from the United Kingdom and Europe show that sea water infused with radiation of the sort spewing out of Fukushima can travel inland from the coast up to 300 kilometers. These mobile poisons include cesium-137 and plutonium-239, the latter of which has a half-life of 24,400 years.
Despite the fact that University of California and this reporter’s tests show high radiation in the air, water, food and dairy products in this state, the state and federal governments cut off special testing for Fukushima radionuclides more than half a year ago.
Southern California is still getting hit by Fukushima radiation at alarmingly high levels that will inevitably increase as the main bulk of polluted Pacific Ocean water reaches North America in the next two years.
Luckily, the area is south of where the jet stream has brought hot rains from across the Pacific and Fukushima, more than 5,000 miles away, upwind and up-current of the West Coast. Those rains have brought extraordinary amounts of radiation to places like St. Louis, with multiple rain events detected and filmed, showing incredibly hot rains.
Unluckily, North America is directly downwind of Japan, where the government is having 560,000 tons of irradiated rubble incinerated with the ash dumped in Tokyo Bay. The burning began last October and is scheduled to continue through March 2014, enraging American activists for this unwitting double dose.
American media coverage of Fukushima’s continuing woes and of contamination spreading across Japan and threatening Tokyo’s 30 million residents, while not robust, has been adequate. Coverage of contamination in America and Southern California has been practically non-existent.
That’s one of the reasons we started Radiation Station Santa Monica four days after the meltdowns began on March 11, 2011, transmitting live radiation readings for the Los Angeles Basin 24/7 ever since.
With nuclear radiation monitoring equipment, investigation team members have performed more than 1,500 radiation tests in different media throughout four states and in jet airplane cabins where, even accounting for higher radiation at higher altitudes, readings were more than five times the norm, according to the manufacturer of our Inspector Alert nuclear radiation monitor.
Those readings, along with the EPA’s, combined with the UC Davis study of buckyballs and a European study of sea spray radiation spread, strongly indicate that Southern California is being exposed to significant amounts of radiation. The closer to the coast, the more pronounced the radiation in this scenario.
Other reports of what the likely Fukushima fallout will be in areas throughout the Southland exist.
Researchers from Hopkins Marine Station of Stanford University and the School of Marine and Atmospheric Sciences, Stony Brook University, released on May 28 a study that found 100 percent of 15 samples of Pacific Bluefin tuna caught off of San Diego in August 2011 showed indisputable signs of radiation contamination emanating from Fukushima.
This suggests that the popular and expensive animal usually carved up for sushi is even more contaminated now — nearly a year after it was first harvested and tested. Meanwhile, at least 1,000 tons of highly radioactive water used to cool the melted cores and spent fuel ponds is being dumped daily into the ocean, according to recent statements from the nuclear plant's owner, Tokyo Electric Power Co.
The study also suggests that other highly migratory species, like turtles, sharks and marine birds, may also be contaminated with the radiation found in the tuna: cesium-134 and cesium-137.
Heading our way
The US Geological Survey (USGS) reported on Feb. 21 that Los Angeles had more cesium-137 fallout than any other region in the nation during the opening days of the disaster, from March 15 to April 5, 2011.
The amount of Cs-137 detected in precipitation at a monitoring station 20 miles east of downtown was 13 times the limit for the toxin in drinking water, according to a report obtained by the Pasadena Weekly.
USGS released another astonishing study Feb. 22, with data from measurements taken at its Bennington National Atmospheric Deposition Program in Vermont, confirming a grim cesium-137 scenario for Southern California.
“Deposition actually decreased as the air mass traveled east to west,” Greg Wetherbee, a chemist with USGS, told the Brattleboro Reformer newspaper.
“In the United States, cesium-134 and cesium-137 wet dispersion values were higher than for Chernobyl fallout, in part due to the US being further downwind,” Wetherbee told the paper. “With Chernobyl, there was more opportunity for plume dispersion.”
This double whammy of cesium-137, which has a half-life of 30 years, isn’t even in a uranium-60 buckyball. But they are both in the unfathomable spread of goo throughout the Pacific, riding on the second strongest current in the world and headed right for us.
The three reactor meltdowns have spewed trillions of becquerels of highly radioactive iodine-131, cesium-137, strontium-90 and plutonium-239 into the atmosphere and Pacific since March 11, 2011. The initial explosions and fires sent untold amounts of radiation high into the atmosphere.
A Feb. 28 report by the Meteorological Research Institute, just released at a scientific symposium in Tsukuba, Ibaraki Prefecture, Japan, says that 40,000 trillion becquerels, double the amount previously thought, have escaped from the Unit 1 reactor alone.
This has resulted in fallout around the globe that especially impacts the Pacific and parts of America and Canada — two countries downwind of Japan on the jet stream. British Columbia, the Pacific Northwest, Midwest and Ontario have been hit especially hard by rain, sleet and snow, in some cases with dizzying amounts of high radiation.
A March 6 Department of Biological Sciences study conducted at California State Long Beach found that kelp along the coast of California was heavily impacted by radioactive Iodine-131 one month after the meltdowns began. The virulent and deadly isotope was detected at 250 times levels the researchers said were normal in the kelp before the disaster.
Radioactive fallout in St. Louis, Mo., rainfall, which has been monitored at Potrblog.com since the crisis began, has been repeatedly so hot that levels have been reached that make it unsafe for children and pregnant women. An Oct.17, 2011, St. Louis rainstorm was measured on video at 2.76 millirems per hour, or more than 270 times background levels.
The main wave of water-borne radiation from the meltdowns, including highly mobile uranium-60 buckyballs, is surging across the Pacific along the Kuroshio Current, second only to the Gulf Stream for power on the planet.
Millions of tons of seawater and fresh water have been used to cool the melted cores and spent fuel rods, generating millions of tons of irradiated water. The Kuroshio Current is transporting a significant amount of this escaping radiation from Fukushima Daiichi across the Pacific toward the West Coast.
The 70-mile-wide current joins the North Pacific Current, moving eastward until it splits and flows southward along the California Current, which flows along the coast. The American government has done nothing to monitor the Pacific Ocean for over half a year, even though a Texas-sized sea of Japanese earthquake debris is already washing up on outlying Alaskan islands and is suspected to have already hit the West Coast, including California.
“In terms of the radiation, EPA is in charge of the radiation network for airborne radiation; it’s called RadNet,” EPA Region 9 Administrator Jared Blumenfeld said on Feb. 9, during a news conference about new ship sewage regulations. “And we have a very significant and comprehensive array of RadNet monitors along the, actually along the coast, but on land. We don’t have jurisdiction for looking at marine radiation. Perhaps NOAA (National Oceanic and Atmospheric Administration) would be able to answer that question, but we don’t have data or monitor it.”
NOAA suspended testing in the Pacific for Fukushima radiation last summer after concluding that there wasn’t any radiation to be detected.
“As far as questions about radiation, we are working with radiation experts within the Environmental Protection Agency and the Department of Energy,” NOAA media liaison Keeley Belva said in a Feb. 10 email interview.
In other words, no federal agency, department or administration is doing anything to sample and analyze water from the Pacific. Fish aren’t being tested for contamination, either.
“NOAA is not currently doing further research on seafood,” Belva added. “NOAA is doing a study related to radiation that is focused on radiation plume modeling.”
This lack of testing is disappointing, according to Dan Hirsch, a UC Santa Cruz nuclear policy lecturer and president of the nuclear policy nonprofit Committee to Bridge the Gap, which exposed the Rocketdyne partial meltdowns above the western San Fernando Valley in 1979 and continues to lead the fight to clean up the area today.
“EPA did some special monitoring for a few weeks after the accident began, then shut down the special monitoring,” Hirsch said. “What monitoring was done was very troubled. Half of the stationary air monitors were broken at the time of the accident. Deployable monitors were ordered but not deployed.”
Even when the government testing did work, increasingly high levels of radiation seem to have been ignored.
The paper also learned that the California Department of Public Health halted monitoring of Fukushima fallout when its Radiologic Health Branch issued its last report on Oct. 10, 2011.
That report shows an alarming rise in cesium-137 in Cal Poly San Luis Obispo dairy farm milk beginning June 14, 2011, when it tested 2.95 picocuries per liter (pCi/l) and steadily rising in four subsequent tests until it was 5.91 pCi/l. The hot milk was at twice the allowable amount of this radionuclide in drinking water, according to the EPA’s 3.0 pCi/l limit.
After that report, the testing suddenly stopped, for no other reason than the government had concluded that nothing from Fukushima had sufficiently contaminated anything to merit concern. Even detections of radioactive sulphur-35 in San Diego and plutonium-239 in Riverside did nothing to pique the interest of regulators.
“The lesson to be learned is that both the U.S. and Japan suffer from very lax regulation, a too-cozy relationship between nuclear regulators and the industry they are to regulate,” Hirsch said. “This can lead to dangerous outcomes. This was not unanticipated. Yet the need for immediate information was undeniable.”
Sea spray transmission
Special tests revealed elevated radiation in Bryce Canyon and Grand Canyon rain. Southwest Michigan rain samples were hot. Santa Monica and Los Angeles rain and mist were also high.
Meanwhile, across the ocean, Japanese sake, beer, vegetable juice, seaweed, pastries and tea have all registered significant ionization above background. Powdered milk, turkey hot dogs and jet travel breathing masks were all part of the specific media tested, many of which were recorded in videotaped radiation detections.
The Jan. 27, 2012, UC Davis report “Uranyl peroxide enhanced nuclear fuel corrosion in seawater,” is the first account to analyze what is happening to the gargantuan amount of seawater, as well as fresh water, that has been hosing down the melted reactor cores and flushing into the Pacific.
The study spells out a horrific scenario in which compromised irradiated fuel turned huge amounts of ocean water into a series of uranium-related peroxide compounds containing as many as 60 “uranyl ions” in hardy nanoscale cage clusters that can “potentially transport uranium over long distances” and persist for “at least 294 days without detectable change.” How hot these nano-cage clusters of cancer-causing radiation are depends on what type and ratio of uranium isotopes make up the 60 in each one.
“A given isotope has the same radioactivity (half-life) regardless of what chemical state it is in” said Alexandra Navrotsky, PhD, director of nanomaterials research at UC Davis. “So the radioactivity for a constant number of U atoms depends on the proportion of different isotopes in the sample.”
There is a strong possibility that these uranium peroxide buckyballs are already sloshing around in the waters off Southern California as this reporter and the EPA’s radiation readings appear to indicate. But if it was the source of our high detections what was the mechanism that was transporting radiation inland?
Sea spray, perhaps. Radioactive sea spray has been shown to blow hundreds of kilometers inland in tests conducted in the United Kingdom by British and European researchers. As anyone who has ever smelled the salty ocean air miles from the ocean might expect, salt in sea spray can travel a significant distance. The same holds true for radioactive particles floating in the sea, even if in addition to U60 buckyballs.
In the 2008 report “Sea to land transfer of radionuclides in Cumbria and North Wales,” the greatest average concentration of cesium-137 and plutonium-239 in soil at a depth of 0 to 15 centimeters was found 10 kilometers from the coast. The highest average amounts found at 15 to 30 centimeters deep were 5 kilometers away from the sea illustrating the unpredictability of radiation fallout.
A 62-page UK study released in December 2011 found that sea spray and marine aerosols created from bubbles forming and popping when the sea is choppy or waves break have increased concentrations of radioactive “actinides.”
Actinides are chemically alike radioactive metallic elements and include uranium and plutonium. One actinide infused the spray with an 812 times greater concentration of americium-241 than normal amounts of Am-241 in ambient seawater.
The report cited information that sea-spray-blown cesium-137 was found 200 kilometers from the discharge source, in the Hebrides islands off northern Scotland.
Another UK study found that the Irish Sea has a micro layer on top of it, perhaps only thousandths of a millimeter in thickness, that can become imbued with fine particulate material and its absorbed radiation.
These concentrations of plutonium and americium are four to five times their concentrations in ambient seawater. Plutonium concentrates by 26,000 times in floating algal blooms at sea, says the report.
These radionuclides and buckyballs make up the goo inexorably crossing the Pacific, which may just have begun to impact our shores.
Yet not a nickel of state or federal money is spent monitoring it. We are on our own in this Fukushima nightmare. | <urn:uuid:744f17af-4062-4811-9556-906e9b26d521> | 3.09375 | 3,538 | News Article | Science & Tech. | 38.131996 | 1,294 |
I have written the MATLAB code according to the algorithm given in the tutorials for edge detection.
"Edge detection is a technique to locate the edges of objects in the scene. This can be useful for locating the horizon, the corner of an object, white line following, or for determining the shape of an object. The algorithm is quite simple:
sort through the image matrix pixel by pixel
for each pixel, analyze each of the 8 pixels surrounding it
record the value of the darkest pixel, and the lightest pixel
if ((darkest_pixel_value - lightest_pixel_value) > threshold)
then rewrite that pixel as 1;
else rewrite that pixel as 0;
What the algorithm does is detect sudden changes in color or lighting, representing the edge of an object."
I want to know how to get the threshold value for best results.
If I calculate the threshold according to the original method of finding the mean of all the elements of the image matrix, then it's too big a value.
See, in this algorithm we find the difference between the largest and smallest neighbours of a matrix element. For a grayscale image this difference is not very big: rarely more than 50 at extreme points and generally around 20-30. The raw pixel values, however, are roughly similar across the image and much larger than these local differences, so the threshold calculated by the normal method (the mean of the whole image) is always greater than the difference, and the resulting image is completely black.
I ran the code on a 640x480 grayscale image. It gave the best result at threshold = 20, whereas the threshold calculated from the image mean came out to be 115.
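One possible fix, sketched here purely for illustration (it is not part of the original post), is to compute the threshold from the local max-min difference image itself rather than from the raw pixel values, since that difference is the quantity actually being thresholded. The MATLAB below assumes the Image Processing Toolbox and a grayscale image loaded into I; the file name, variable names and the mean-plus-one-standard-deviation rule are just one reasonable choice.

I = double(imread('test.png'));        % hypothetical grayscale test image
nhood = ones(3); nhood(2,2) = 0;       % the 8 surrounding pixels, centre excluded
localMax = ordfilt2(I, 8, nhood);      % lightest neighbour of each pixel
localMin = ordfilt2(I, 1, nhood);      % darkest neighbour of each pixel
D = localMax - localMin;               % local contrast image

threshold = mean(D(:)) + std(D(:));    % statistic of D, not of the raw image
% threshold = 255 * graythresh(D/255); % alternative: Otsu's method applied to D

edges = D > threshold;                 % logical edge map, 1 at edges

Because the statistic is computed over D, it lands in the same 20-30 range observed empirically, instead of near the overall image mean of about 115.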
Now how to calculate the accurate threshold? | <urn:uuid:7ec9a08b-9de3-4992-90e5-5a26bd770fbb> | 2.828125 | 349 | Comment Section | Software Dev. | 44.770238 | 1,295 |
break and continue Statements
The break statement is used to alter the flow of control. When a break statement is executed in a while loop, for loop, do-while loop or switch statement, it causes immediate exit from that statement. Program execution continues with the next statement. Common uses of the break statement are to escape early from a loop or to skip the remainder of a switch statement.
The program written below demonstrates the break statement in a for-loop.
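A minimal program of the kind described, reconstructed here for illustration (the loop bound of 10 is an assumption, not from the original):

#include <iostream>
using namespace std;

int main()
{
    // Count upward, but leave the loop as soon as x reaches 5.
    for (int x = 1; x <= 10; x++) {
        if (x == 5)
            break;              // exit the for-loop immediately
        cout << x << " ";
    }
    cout << endl << "Broke out of loop at x = 5" << endl;
    return 0;
}

Its output is 1 2 3 4, followed by the message printed after the loop.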
When the if-statement detects that x has become 5, the break statement is executed. This terminates the for-loop, and the program continues from the cout statement after the for-loop.
The continue statement is also used to alter the flow of control. When it is executed in a while loop, for loop or do-while loop, it skips the remaining statements in the body of the control loop and performs the next iteration of the loop.
An example of the continue statement is shown below:
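Again a reconstructed example for illustration; it mirrors the break program above but skips only the output for x equal to 5.

#include <iostream>
using namespace std;

int main()
{
    // Print 1 to 10, but skip the rest of the body when x is 5.
    for (int x = 1; x <= 10; x++) {
        if (x == 5)
            continue;           // jump straight to the next iteration
        cout << x << " ";
    }
    cout << endl << "Used continue to skip printing 5" << endl;
    return 0;
}

The loop still runs all ten iterations; only the cout inside the body is skipped when x is 5, so the output is 1 2 3 4 6 7 8 9 10.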
Some programmers feel that break and continue statements violate the norms of structured programming since the effects of these statements can be achieved by structured programming technique. The break and continue statements, when used properly, perform faster than the corresponding structured technique. | <urn:uuid:c4089a04-e956-4919-b52d-316c81c240df> | 4.3125 | 240 | FAQ | Software Dev. | 45.731673 | 1,296 |
Bacteria in termite guts could make ethanol from noncorn sources cheaper.
Scientists say the results represent a new stage in synthetic biology.
Engineered E. coli proves efficient at churning out the biofuel.
GM teams with a startup aiming to produce low-cost biofuels.
Stem cells from skin, myriad microbes, and a $350,000 personal genome.
Advanced biofuels, more-efficient vehicles, and solar power top the most notable energy stories of 2007.
A portable system converts biowaste into jet fuel and diesel for the military.
As the primaries near, the presidential candidates are calling for similar, ambitious growth in ethanol biofuel.
Researchers have designed a process to generate hydrogen from organic materials.
Brazilian researchers report that exposure to magnetic fields increased ethanol yields by as much as 17 percent. | <urn:uuid:78d4bdd4-5927-4e46-aae7-d92866dd56a2> | 2.703125 | 232 | Content Listing | Science & Tech. | 42.179545 | 1,297 |
Miscellaneous changes to java.net worth mentioning:
Noteworthy bug fixes:
A bug existed in JDK1.0.2 where one could not create an InetAddress out of an IP address String (e.g., "22.214.171.124") if a corresponding host name (e.g., "java.sun.com") could not be found. This resulted in an UnknownHostException. The bug is fixed in JDK1.1. Additionally, when an InetAddress is created from an IP address, the corresponding hostname is not looked up until specifically requested (via InetAddress.getHostName()), as a performance enhancement.
- ServerSocket/DatagramSocket close()
A bug existed in JDK1.0.2 where the close() methods of ServerSocket and DatagramSocket were synchronized. The result was that if one thread were blocked indefinitely in DatagramSocket.receive() or ServerSocket.accept(), another thread couldn't break the blocking thread out by calling close(). This is fixed in JDK1.1 by making the close() methods unsynchronized.
There was a bug in JDK1.0.2 where several methods on URLConnection did not work. These are fixed in JDK1.1.
Binding to local port/address:
- Socket, ServerSocket, DatagramSocket
These classes have overloaded constructors for binding to a specific local address and port. This is useful and necessary for applications like proxy servers that operate on multi-homed machines and need particular control over which network interfaces are used.
- The MulticastSocket class was moved from package sun.net into the core API of java.net.
- JDK1.1 introduces a new class, HttpURLConnection, which extends URLConnection and provides additional functionality specific to HTTP:
- Ability to use all of the request methods defined by HTTP/1.1, such as GET, HEAD, POST, PUT, DELETE, OPTIONS, and TRACE.
- Control over whether to follow HTTP redirects.
Last modified: Thu Dec 5 15:09:54 PST | <urn:uuid:3b53b20b-9e03-4c6a-80cd-324498e06f28> | 2.53125 | 450 | Documentation | Software Dev. | 55.310407 | 1,298 |
Birds dive for food despite sub-zero temperatures
On the upper Chena River in the heart of a cold winter, a songbird appeared on a gravel bar next to gurgling water that somehow remained unfrozen in 20-below zero air. Then the bird jumped in, disappeared underwater, and popped up a few feet upstream.
The bird continued snorkeling and diving against the current of the stream, which is so far north that in December direct sunlight never touches it, instead bathing only the tops of spruce trees with a ruby light.
Soon, two other dark birds with bodies the size of tennis balls landed near the other. Bending from their knees, they bobbed up and down and then all three jumped into the stream. It seemed crazy behavior for a cold winter day, but swimming is how American dippers make their living, even here in Alaska, where they range as far north as the Brooks Range.
Mary Willson, a biologist, ecologist and consultant from Juneau, might be the only Alaska researcher who has studied the American dipper. She has pulled on her chest waders to follow dippers on waterways near Juneau’s road system, and she’s gotten to know a bit of the character of what she calls “a very cool bird.”
The dipper often feeds while flying underwater, using the liquid as it does another fluid, air. The birds also snorkel, swimming on the surface with their heads below the water surface. They sometimes pick up rocks on stream bottoms to find food underneath.
Dippers depend on clean, open water. In very cold places, the birds appear at openings in ice caused by water upwelling, and dippers can dive through one hole in the ice and emerge from another one. Near Juneau, dippers sometimes appear at deltas where streams flow into the ocean.
Dippers eat aquatic and flying insects and are skilled enough to catch small fish, Willson said. She has seen a dipper with four tiny fish in its beak at once. Another time, she witnessed a dipper catching a four-inch fish called a sculpin.
“It had to beat that one on the rocks until it was in enough pieces to eat.”
Willson thinks the dippers can survive the transition from 32-degree water to subzero air because of their feathers, which are denser than other songbirds’, and large oil glands near the base of their tails. They dip their beaks in the oil glands and wipe oil on their feathers, perhaps to keep themselves waterproof. Dippers also have flaps that cover their nostrils while diving. And, according to the Birder’s Handbook by Paul Ehrlich, David Dobkin, and Darryl Wheye, “these birds are able to forage on the bottom of streams in which the current is too fast and the water too deep for people to stand.”
Nobody knows how dippers survive the cold, dark winter in northern Alaska and the Yukon. Willson said scientists have studied the effects of severe winters on the similar European dipper, which ranges above the Arctic Circle in Scandinavia. They have found that extreme cold spells kill many of the birds. She wonders how dippers in the far north don’t perish in the frigid air temperatures and during the long nights between the three-to-four hours of twilight.
“They are visual hunters,” she said. “In the pits of winter, they’d have to hurry-scurry to get enough food in the time where there’s light to hunt.”
(Since the late 1970s, the University of Alaska Fairbanks’ Geophysical Institute has provided this column free in cooperation with the UAF research community. Ned Rozell is a science writer for the Geophysical Institute. This column first appeared in 2006.) | <urn:uuid:7bc6ad2e-a0af-4fa5-9c03-53867c7065f7> | 3.5 | 812 | Truncated | Science & Tech. | 56.769516 | 1,299 |