We live in a sea of radiation. In any city, an unsuspecting owner of a 0.1-acre backyard garden may not know that the top one metre of soil in his garden contains 11,200 kg of potassium, 1.28 kg of which is potassium-40 (K-40, a radioactive isotope of potassium), 3.6 kg of thorium and one kg of uranium.
These values may be higher or lower depending on the soil. Uranium and thorium decay through several radio-nuclides to lead, a stable element. The presence of radioactive nuclides does not pose any significant risk.
The total annual external dose from sources in soil and cosmic rays in Mumbai, Kolkata, Chennai, Delhi and Bengaluru is 0.484, 0.81, 0.79, 0.70 and 0.825 milligray respectively. Gray is a unit for absorbed dose; when the radiation energy imparted to a kg of material is one joule, it is called a gray. Since gray is very large, milligray (one thousandth of a gray), and microgray (one millionth of a gray), are commonly used.
Cosmic rays come from outer space. Their intensity at a place depends on the altitude. Cosmic rays alone contribute 0.28 milligray at the first three cities as they are at sea level; the column of air helps to reduce their intensity. At high altitudes, the protection from the column of air is less.
The cosmic ray contributions are higher, at 0.31 milligray and 0.44 milligray respectively, at Delhi and Bengaluru, as these cities are at altitudes of 216 metres and 921 metres. Air passengers receive 5 microgray per hour from cosmic rays.
Parts of Kerala and Tamil Nadu are high background radiation areas (HBRA) because of the presence of large quantities of monazite in the soil. Thorium content in monazite ranges from 8 to 10.5 per cent. Researchers found that the radiation levels in 12 Panchayats in Karunagappally varied between 0.32 and 76 milligray per year; the levels in 90 per cent of over 71,000 houses were more than one milligray per year.
The average value of the population dose in the HBRA is 3.8 milligray per year. One milligray is the average value for areas of normal background radiation. The units milligray and millisievert are the same in these instances. A study at the HBRA during 1990-99 by researchers from the Regional Cancer Centre and Bhabha Atomic Research Centre did not show any health effect attributable to radiation.
Radon, which occurs in the uranium decay series present in soil, seeps into homes. In temperate areas radon decay products build up in air due to poor ventilation and deliver high doses to the lungs of millions of people. In the tropics, ventilation is adequate to disperse radon. In the United Kingdom, persons in 5 per cent of the homes are exposed to doses above 23.7 mSv/year. One per cent of the population receives doses above 55.8 mSv/year. The highest estimated dose was 320 mSv/year in Cornwall.
All foodstuffs contain potassium-40 (K-40). We need potassium for sustenance. K-40 is 0.012 per cent of potassium. Once ingested, most of the potassium enters the blood stream directly and gets distributed to all tissues and organs.
The potassium content in the human body is strictly under homeostatic control. The body retains only the amounts in the normal range essential for its functioning; it is independent of the variations in the environmental levels.
The body excretes excess amounts with a biological half-life of 30 days. K-40 delivers a constant annual radiation dose of 0.18 mSv to soft tissue. This dose is unavoidable as potassium is an essential element. Every time we eat a banana, we are introducing 14 Bq of K-40 into our body. Trucks containing bananas have triggered radiation alarms at border posts in the U.S.
The Brazil nut is probably the most radioactive food. Scientists have measured 700 Bq of radium per kg of Brazil nuts.
The roots of the Brazil nut tree pass through acres of land; they have a tendency to concentrate barium, and along with barium, the roots collect radium as well. Radium appears in the nuts. Many vegetables, like brinjal and carrot, also contain the radioactive isotope.
Indian researchers have measured polonium-210 in fish and other marine organisms. Our whole body is hit by particles coming from all sides. Radiation is a part of our life. We cannot avoid eating food just because it contains radioactivity.
(Raja Ramanna fellow, Department of Atomic Energy)
Natural freshwater surface streams of considerable volume and a permanent or seasonal flow, moving in a definite channel toward a sea, lake, or another river; any large streams, or ones larger than brooks or creeks, such as the trunk stream and larger branches of a drainage system. [Glossary of Geology, 4th ed.]
Report (PDF format) on an evaluation of the potential environmental impacts of contaminated ground water from a metals refinery adjacent to the Missouri River in Omaha, Nebraska, testing water and sediments for contaminants and toxicity.
Maps and GIS data depicting land use and land cover in areas near the upper Mississippi River. Historical data are available, dating from the 1890s, 1975, and more regularly from the late 1980s to 2000.
Program to measure streamflow stage and discharge at four sites in the Upper Wallkill River Valley's Black Dirt Region, determine Total Suspended Solids (TSS) concentrations, and estimate TSS loads and sources of contaminants.
UW mathematician Branko Grunbaum has taken his inspiration from decorative patterns used in arts and crafts to advance basic theories of geometry. Since the early 1980s, Grunbaum and colleague G. C. Shephard of the University of East Anglia, Norwich, England, have pioneered new ways of analyzing the intricate patterns found in tilings and textiles and have elaborated what they call a "theory of patterns."
The results of this work are far-ranging. They have important ramifications for divisions of mathematics including group theory, combinatorics, geometry and topology; they may benefit other technical fields such as crystallography and engineering; and they may find use in the fields of design, art, and anthropology.
Grunbaum and Shephard observe that the beginnings of geometry were, in ancient times, stimulated by practical problems of building, surveying, and decorating. "From the beginnings of civilization, peoples of every culture have manufactured and used objects decorated with repeating geometric patterns. This is still true today and everywhere patterns of many kinds can be seen all around us," they write. "Moreover, repeating patterns frequently arise naturally in many areas of science and engineering, and their investigation has proved a useful tool in such diverse areas as crystallography and the ethnological classification of primitive artifacts."
What is surprising, they note, is that there had been relatively little work on these patterns from a theoretical standpoint. The practice of weaving is a typical example. "Weaving is one of the oldest activities of mankind and so it is hardly surprising that there exists a vast literature on the subject. But this literature is almost entirely concerned with the practical aspects of weaving; any treatment of the theoretical problem of designing fabrics with prescribed mathematical properties is conspicuously absent," note the researchers.
In 1980, Grunbaum and Shephard published "Satins and Twills: An Introduction to the Geometry of Fabrics," which revealed subtle problems in combinatorics and geometry. Traversing uncharted territory, they had to establish new concepts with an entirely new vocabulary to describe textile patterns.
Much the same situation was encountered with repetitive ornamentation. Previous studies had been restricted to analyzing the patterns with the well-established tool of symmetry groups. But Grunbaum found them to be "very coarse tools for the characterization and description of repeating patterns." Much finer classifications were developed by the team; the concepts are elaborated in Grunbaum and Shephard's detailed text, Tilings and Patterns.
The researchers were able to elucidate "certain mysterious aspects" of interlace patterns frequently found in Islamic and Moorish art. They verified that, despite the complexity of the designs, most of the interlaces are formed by strands of a small number of shapes, often just a single shape stretching over many repeats of the design. A plausible explanation is that the early artisans used stencils to draw the patterns; for practical reasons, the stencils were made as small as possible for a given pattern, and they may have consisted of just one translational repeat unit.
Other work by Grunbaum contributed to the mathematical understanding of aperiodic tilings, in which the constituent units repeat but the pattern lacks symmetry over the long range. This work subsequently became of interest to solid-state physics because of the discovery of actual substances with this type of aperiodic order, called quasicrystals; and it was applied to an analysis of decorations of ancient Peruvian fabrics that are not covered by the usual symmetry groups.
More than most people who do not understand a little bit about algebra would ever guess; so I am going to answer the above question in this blog.
Introduction and background
My degree is in Engineering Physics, and I have always been fascinated by space, astronomy and communications. I was a licensed amateur radio operator at 17, and my amateur radio knowledge of the frequency spectrum, coupled with my degree in physics, guided me in my investigation into the possibilities of interstellar space communications using simple algebra!
So let's start with the basic algebra (ten minutes at most).
x = distance (in meters, miles, or kilometers; it does not make any difference in algebra as long as one uses consistent units)
t = time (seconds, hours, days, years)
Velocity v = x/t, e.g. x meters / t sec
That is all the algebra you need to know!
Here is all the Physics needed:
The velocity of light has a special name, c, which replaces v; therefore c = x meters / t seconds
c = 300,000,000 meters/sec, or 186,000 miles/sec, and is a universal constant
As large as the speed of light c is, because of the distances to stars and galaxies we must know how far light travels in a year, not a second!
The reasons will become obvious later.
Here is the conversion:
One year (in seconds) = 365 days * 24 hours/day * 60 minutes/hour * 60 seconds/minute = 31,536,000 seconds
Question! What is the distance light travels in one year?
If you cannot calculate it based on the information given above, here is the answer:
By definition x = v*t, but for light we use c instead of v by convention; therefore x = c*t
Or in miles: the distance light travels in one year is x = 186,000 miles/sec * 31,536,000 sec = 5,865,696,000,000 miles
Wikipedia: approximately 5,878,625 million miles, or about 63,241.1 astronomical units (au)
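That arithmetic is easy to double-check in code. Here is a minimal sketch in Java (the class name and output formatting are my own, purely for illustration):

```java
public class LightYear {

    public static void main(String[] args) {
        // Seconds in one year: 365 days * 24 hours/day * 60 min/hour * 60 sec/min
        long secondsPerYear = 365L * 24 * 60 * 60;      // 31,536,000 seconds

        // Speed of light in miles per second
        double c = 186_000.0;

        // Distance light travels in one year: x = c * t
        double milesPerLightYear = c * secondsPerYear;

        System.out.printf("Seconds per year: %,d%n", secondsPerYear);
        System.out.printf("One light-year:   %,.0f miles%n", milesPerLightYear);
    }
}
```

Running it prints 31,536,000 seconds and 5,865,696,000,000 miles, matching the hand calculation above.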
1 au is the distance from the sun to the earth, and is about 93,000,000 miles. OK! We now have all the basic information we need; let's apply it to the practical matter of communication times to stars.
The nearest known star (other than the Sun), Proxima Centauri, is about 4.22 light-years away.
The center of our galaxy, the Milky Way, is about 26,000 light-years away.
The Milky Way is about 100,000 light-years across.
The Andromeda Galaxy is approximately 2.5 million light-years away.
If you are following me so far, note these are one-way distances, so you would have to double the times for round trips.
Since we earthlings have been transmitting from about 1920, our signals have traveled at most 92 years in time and 92 light-years in distance.
Conclusion: we are not even close to our radio/TV signals reaching stars in 99% of our Milky Way; therefore, if there are any other planets in the Milky Way with intelligent life, they can't possibly know we exist.
Monday, December 3rd 2012, 2:23 PM EST
The atmosphere warms the Earth by 33C (some arrive at different numbers, but that doesn't matter here) simply because a quantity of kinetic energy is constantly being recycled up and down within the atmosphere, so as to supply the surface with additional energy on top of incoming solar energy at any given moment.
The cycling process involves the conversion of that kinetic energy to gravitational potential energy and back again. During the up and down cycling process potential energy is not available to the exchange of radiation in and out of the Earth system but it does become available for radiating out to space when it is returned downwards and converted back from potential energy to kinetic energy again at the surface.
At any given time, taking the globe as a whole about half of the atmosphere is rising and half is falling.
That is mainly apparent in the troposphere with the high and low pressure cells but circulations also exist in the higher layers so the general principle holds for the entire atmosphere.
When kinetic energy is converted to potential energy by the interaction with gravity whilst air is rising, that energy disappears from the planet's radiative exchange with sun and space until it is returned to the surface again by descending air.
Here is a simplified account of the interchangeability of kinetic energy and potential energy:
So, potential energy should be deducted from the surface energy budget when air rises and, because it has disappeared from view, it must be deducted from the top of atmosphere energy budget too.
Then, when that potential energy descends it has to be added back to the surface energy budget as kinetic energy and then also added back to the top of atmosphere energy budget because it radiates straight out from the ground to top of atmosphere instantly at the speed of light.
Now if the entire process were instant there would be no problem but it all takes time for air to rise and then fall so the process is out of phase with the normal radiative flow of solar shortwave in and longwave out and has been since the very first atmospheric molecules floated above the surface.
Because it is out of phase with the background exchange of radiation in and out of the Earth system the surface to top of atmosphere energy exchange must be regarded as a separate energy loop quite independent of the pass through of solar energy.
What AGW proponents have done is to just take half the loop by taking the surface temperature BEFORE the conversion to potential energy and the top of atmosphere temperature AFTER the conversion to potential energy.
That leads to double counting at the surface because the surface temperature then represents both insolation AND returning kinetic energy from the separate loop.
The proper scenario is this:
i) Solar shortwave in 255 at top of atmosphere.
Longwave out 255 at top of atmosphere.
and in a separate loop:
ii) Kinetic energy removed from surface 33 (or whatever the actual amount might be) and NOT delivered to top of atmosphere.
Kinetic energy returned to surface 33.
What they have done instead is this:
i) At surface:
Solar shortwave in 255 plus kinetic energy returning 33 = 288
Longwave out 255 plus kinetic energy out 33 = 288.
ii) At top of atmosphere:
Solar shortwave in 255
Longwave out 255.
So obviously there is a discrepancy which they cover by proposing that there is a flow of downward infrared radiation from the sky to ground (DWIR) of 33 units.
There is also a kinetic/potential energy impact at top of atmosphere which they have ignored.
(i) Solar shortwave in 255 plus 33 from warmer surface due to returning kinetic energy = 288
(ii) Longwave out 255 less 33 retained by atmosphere as potential energy = 222
The two 33s cancel out so that leaves 255 as observed.
So, we then have the solar input passing straight through yet a further 33 being recycled up and down through the atmosphere which gives a warmer surface but no change in top of atmosphere energy balance.
And that then balances the energy budget without proposing a radiative solution involving DWIR.
The important point then is that the 33 units stored within the atmosphere as radiatively invisible potential energy are present as a function of mass and gravity in the presence of solar input and NOT atmospheric composition which is why other planets show similar lapse rate characteristics despite huge variations in composition.
What has been overlooked is the effect of the time delay in the process of converting kinetic to potential and back again and only taking account of one half of the loop.
The thicker the atmosphere, the longer the delay in the kinetic / potential exchange and back again, the more energy is locked away in that exchange, the more energy will return to the surface on the down cycle and the higher the surface temperature will become for a given level of solar input.
If one changes the composition of the atmosphere without changing the mass significantly then the speed of the throughput of energy changes and NOT the amount of potential energy stored in the atmosphere so one sees a circulation shift instead of a temperature rise.
Even if our CO2 emissions were to increase the temperature, the effect would be indiscernible, because the amount of change would be related to total atmospheric mass and not to the proportionate increase in CO2.
So either way AGW theory fails.
What matters is delay time during a mechanical process and NOT radiative physics.
Radiative physics completely overlooks, and fails to account for, the energy hidden away as potential energy within an atmosphere and which is being constantly recycled so as to affect surface temperature not top of atmosphere temperature.
Note what I said back in 2008:
“It is that interruption in the flow of radiant energy in and out which gives rise to a warming effect. The warming effect is a single persistent phenomenon linked to the density of the atmosphere and not the composition. Once the appropriate planetary temperature increase has been set by the delay in transmission through the atmosphere then equilibrium is restored between radiant energy in and radiant energy out.
The fundamental point is that the total atmospheric warming arising as a result of the density of the atmosphere is a once and for all netting out of all the truly astronomic number of radiant energy/molecule encounters throughout the atmosphere. The only things that can change that resultant point of temperature equilibrium are changes in solar radiance coming in or changes in overall atmospheric density which affect the radiant energy going out”
"Greenhouse Confusion Resolved"
Pinning it all down to the length of the KE/PE and PE/KE transition period tops and tails the whole thing very nicely because it firmly nails the culprit as mass rather than composition.
CO2 has no chance of changing total atmospheric mass on Earth significantly however much we produce so the only remaining question is as to how far our CO2 emissions could change the circulation pattern.
I would suggest barely at all and most likely indiscernible against the 1000 mile latitudinal shifts that occurred naturally from MWP to LIA and LIA to date.
The Small White (Pieris rapae) is a small- to medium-sized butterfly species of the Yellows-and-Whites family Pieridae. It is also known as the Small Cabbage White and in New Zealand, simply as White Butterfly. The names "Cabbage Butterfly" and "Cabbage White" can also refer to the Large White.
It is widespread and populations are found across Europe, North Africa, Asia, and Great Britain. It has also been accidentally introduced to North America, Australia and New Zealand where it causes damage to cultivated cabbages and other mustard family crops. The caterpillar stage alone is responsible for crop damage because of which it is referred to as the Imported Cabbageworm.
In appearance it looks like a smaller version of the Large White (Pieris brassicae). The upperside is creamy white with black tips to the forewings. Females also have two black spots in the center of the forewings. Its underwings are yellowish with black speckles. It is sometimes mistaken for a moth due to its plain-looking appearance. The wingspan of adults is roughly 32–47 mm (1.25–2 in).
The species has a natural range across Europe, Asia and North Africa. It spread across the Atlantic into Canada and the United States beginning somewhere around 1860. It spread to Hawaii by 1898, and Australia in 1929 around Melbourne and spreading across to Perth by 1943.
The nominate subspecies P. r. rapae is found in Europe while the Asian populations are placed in the subspecies P. r. crucivora. Other subspecies include atomaria, eumorpha, leucosoma, mauretanica, napi, novangliae, and orientalis.
Life cycle
In Britain, it has two flight periods, April–May and July–August, but is continuously-brooded in North America, being one of the first butterflies to emerge from the chrysalis in spring, flying until hard freeze in the fall.
Its caterpillars can be a pest on cultivated cabbages, kale, radish, broccoli, and horseradish, but it will readily lay eggs on wild members of the cabbage family such as Charlock (Sinapis arvensis) and Hedge mustard (Sisymbrium officinale). The eggs are laid singly on foodplant leaves. It has been suggested that isothiocyanate compounds in the family Brassicaceae may have evolved to reduce herbivory by caterpillars of the Small White.
Traditionally known in the United States as the Imported Cabbage Worm, now more commonly the Cabbage White, the caterpillars are green and well camouflaged. Caterpillars rest on the undersides of the leaves, thus making them less visible to predators. Unlike the Large White, they are not distasteful to predators like birds. Like many other "White" butterflies, they hibernate as a pupa. It is also one of the most cold-hardy of the non-hibernating butterflies, occasionally seen emerging during mid-winter mild spells in cities as far north as Washington D.C.
Like its close relative the Large White this is a strong flyer and the British population is increased by continental immigrants in most years. Adults are diurnal and fly throughout the day, except for early morning and evening. Although there is occasional activity during the later part of the night, it ceases as dawn breaks.
- RR Scott & RM Emberson (compilers) (1999). Handbook of New Zealand Insect Names. Entomological Society of New Zealand. ISBN 0-9597663-5-9.
- Scudder, SH (1887). "The introduction and spread of Pieris rapae in North America, 1860-1886". Memoirs of the Boston Society of Natural History 4 (3): 53–69.
- Agrawal, AA & NS Kurashige (2003). "A Role for Isothiocyanates in Plant Resistance Against the Specialist Herbivore Pieris rapae". Journal of Chemical Ecology 29 (6): 1403–1415. doi:10.1023/A:1024265420375.
- Fullard, James H. & Napoleone, Nadia (2001): Diel flight periodicity and the evolution of auditory defences in the Macrolepidoptera. Animal Behaviour 62(2): 349–368. doi:10.1006/anbe.2001.1753 PDF fulltext
Further reading
- Asher, Jim et al.: The Millennium Atlas of Britain and Ireland. Oxford University Press.
- Evans, W.H. (1932): The Identification of Indian Butterflies (2nd Ed.). Bombay Natural History Society, Mumbai, India.
- "Pieris rapae". Integrated Taxonomic Information System. Retrieved 6 February 2006.
How to print prime numbers in Java, or how to check if a number is prime or not, is a classic Java programming question, mostly taught in Java programming courses. A number is called a prime number if it is not divisible by any number other than 1 or itself, and you can use this logic to check whether a number is prime or not. This program is slightly more difficult than printing even or odd numbers, which are relatively easier Java exercises. This simple Java program prints prime numbers starting from 1 up to 100 or any specified number. It also has a method which checks if a number is prime or not.
This Java tutorial is in conjunction with my earlier tutorials for beginners, like How to set Path in Java on Windows and Unix, Java Program to reverse String in Java with recursion, and recently How to read file in Java; if you haven't read them, you may find them useful.
Code Example to print Prime numbers in Java
Here is a complete sample code example to print prime numbers from 1 to any specified number. This Java program can also check if a number is prime or not, as the prime number checking logic is encapsulated in the isPrime(int number) method.
In this Java program we have two parts: the first part takes input from the user to print prime numbers, and the other part is the function isPrime(int number), which checks whether a number is prime or not. The Java method isPrime(int number) can also be used anywhere else in code because it encapsulates the logic of checking for a prime number. There is also another popular Java question, "How to check if a number is prime or not?", and isPrime() can be used there. It is rather simple and just implements the logic of a prime number in Java, i.e. divide the number by every number from 2 up to the number itself; if it is divisible by another number, that means it's not prime.
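The code listing itself did not survive in this copy of the post, so here is a minimal sketch consistent with the description above (the prompt text and variable names are my own):

```java
import java.util.Scanner;

public class PrimeNumbers {

    public static void main(String[] args) {
        // First part: take input from the user
        Scanner scanner = new Scanner(System.in);
        System.out.print("Enter a number: ");
        int limit = scanner.nextInt();
        scanner.close();

        // Print every prime number from 2 up to the specified limit
        for (int i = 2; i <= limit; i++) {
            if (isPrime(i)) {
                System.out.println(i);
            }
        }
    }

    /*
     * Second part: checks whether a number is prime.
     * Divides the number by every number from 2 up to the number
     * itself; if it is divisible by any of them, it is not prime.
     */
    public static boolean isPrime(int number) {
        if (number < 2) {
            return false;
        }
        for (int i = 2; i < number; i++) {
            if (number % i == 0) {
                return false;
            }
        }
        return true;
    }
}
```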
That's all on how to print prime numbers in Java and how to check if a number is prime or not. It's worth remembering the logic of when a number is prime and how you check for a prime number. If you are doing homework, then get an idea from here but type the program yourself; that will give you thinking time on how this Java program works and checks for prime numbers.
Related Java Program tutorials
Schenk Star pyranometer
The Schenk Star pyranometer is a black and white star type pyranometer that has six black and six white segments. The temperature difference between the black and white painted sectors is proportional to the incident solar radiation. Because the measurement is a temperature difference, the measurement should not be affected as much by ambient temperature.
An Eppley PSP, which measures the temperature difference between a black disk and the body of the pyranometer, has an offset that results from the black disk radiating to the sky. This effect can be seen by the small negative irradiance values generated during the night. (This offset is on the order of 10 W/m2.)
The Schenk pyranometer measures the temperature difference between the black and white surfaces which both see the same sky temperature. Therefore, this effect is much smaller for star-type pyranometers: one W/m2.
For clear day diffuse measurements that are on the order of 100 W/m2 or less, the offset can cause problems. At the Eugene station, we are measuring the diffuse radiation with a shade disk, with both an Eppley PSP and a Schenk Star pyranometer.
Schenk Star pyranometers exhibit systematic errors when tilted. They also exhibit an azimuthal effect, as the responsivity varies slightly depending on whether the sun is over a black wedge or a white wedge.
© 2000, UO Solar Radiation Monitoring Laboratory.
Last revised: December 17, 2000.
Home page URL: solardat.uoregon.edu
Second generation of biofuels could give forests a break
A Forests News article about renewable energy cited research by the Institute of the Environment and Sustainability analyzing the energy return on crops like soy that are used for biofuels.
Biofuels have been criticised for propelling deforestation, but a second generation of this energy source may have the potential to supply a larger proportion of our fuel sustainably, experts say.
The European Union seems to agree. In trying to encourage the use of second-generation biofuels to meet renewable energy targets, it doubled the value assigned to them as compared to first generation.
Growing concerns about climate change and fluctuating oil prices have reignited interest in wind, solar, geothermal and other forms of ‘clean’ energy in recent years, especially among industrialised nations.
First-generation biofuels — those derived from starches, sugar, soy, animal fats, palm and vegetable oil – have won wide popular support.
But because some of these crops require a tremendous amount of land, scientists worry that forested areas will be cut down or burned to make way for agricultural expansion.
Some of these crops also have a low energy return. Soy and rapeseed, for instance, produce only 500 to 1000 litres of biodiesel fuel per hectare, according to the UCLA Institute of the Environment and Sustainability, meaning the life-cycle production and transport emissions in some cases exceed those of traditional fossil fuels.
But experts are still hopeful about second-generation biofuels – those derived from woody crops, agricultural residues, waste and inedible crops, like stems and switch grass.
It turns out they are often both better for the environment and more fuel-efficient than their earlier cousins, scientists say, though a few first-generation crops – sugarcane, sugar beet and sweet sorghum – perform well environmentally, with sugar cane being the most economically competitive among these.
To read the full article by Andrea Booth, click here.
Published: Friday, October 05, 2012
The term was coined by Benoit Mandelbrot in 1975, based on the fact that no matter how many times you “fracture” a fractal shape, you can always break it down to smaller, exact copies of itself. The pattern of the original shape repeats itself endlessly on an ever-smaller scale (somewhat like a family tree).
Because fractal patterns are so complex, it wasn’t until the 20th century that modern math and science could figure them out. But surprisingly, they seem to have been a familiar motif all over the ancient world – because fractals are one of the most fundamental patterns of nature.
It’s hard to look around and not see a fractal, if you know what to look for. Some examples of natural fractals are familiar to us; others are so bizarre, they will blow your mind!
See some amazing photos of fractals in nature >
Fluids such as water tend to move in fractal patterns. The branching-out of the flow into smaller and smaller streams creates this kind of fractal motion.
Cloud formations known as von Kármán vortices, as seen from space, are one striking example.
WEB ECOIST: 17 CAPTIVATING FRACTALS FOUND IN NATURE
(If you don’t mind the ads this page has some particularly good examples) | <urn:uuid:5497157f-a9d9-40b5-b7e3-9a18438d60dd> | 3.421875 | 282 | Personal Blog | Science & Tech. | 48.971875 |
Raining animals is a rare meteorological phenomenon, although occurrences have been reported from many countries throughout history. One hypothesis that has been offered to explain this phenomenon is that strong winds travelling over water sometimes pick up creatures such as fish or frogs, and carry them for up to several miles. However, this primary aspect of the phenomenon has never been witnessed or scientifically tested.
The animals most likely to drop from the sky in a rainfall are fish and frogs, with birds coming third. Sometimes the animals survive the fall, especially fish, suggesting the animals are dropped shortly after extraction. Several witnesses of raining frogs describe the animals as startled, though healthy, and exhibiting relatively normal behavior shortly after the event. In some incidents, however, the animals are frozen to death or even completely encased in ice. There are examples where the product of the rain is not intact animals, but shredded body parts. Some cases occur just after storms having strong winds, especially during tornadoes.
However, there have been many unconfirmed cases in which rainfalls of animals have occurred in fair weather and in the absence of strong winds or waterspouts.
Rains of animals (as well as rains of blood or blood-like material, and similar anomalies) play a central role in the epistemological writing of Charles Fort, especially in his first book, The Book of the Damned. Fort collected stories of these events and used them both as evidence and as a metaphor in challenging the claims of scientific explanation.
The English language idiom "it is raining cats and dogs" (As well as its Swiss-German equivalent, "Raining frogs and snakes"), referring to a heavy downpour, is of uncertain etymology, and there is no evidence that it has any connection to the "raining animals" phenomenon.
French physicist André-Marie Ampère was among the first scientists to take seriously accounts of raining animals. He tried to explain rains of frogs with a hypothesis that was eventually refined by other scientists. Speaking in front of the Society of Natural Sciences, Ampère suggested that at times frogs and toads roam the countryside in large numbers, and that the action of violent winds can pick them up and carry them great distances.
More recently, a scientific explanation for the phenomenon has been developed that involves waterspouts. Waterspouts are capable of capturing objects and animals and lifting them into the air. Under this theory, waterspouts or tornados transport animals to relatively high altitudes, carrying them over large distances. The winds are capable of carrying the animals over a relatively wide area and allow them to fall in a concentrated fashion in a localized area. More specifically, some tornadoes can completely suck up a pond, letting the water and animals fall some distance away in the form of a rain of animals.
This hypothesis appears supported by the type of animals in these rains: small and light, usually aquatic. It is also supported by the fact that the rain of animals is often preceded by a storm. However, the theory does not account for how all the animals involved in each individual incident would be from only one species, rather than a group of similarly-sized animals from a single area. In the case of birds, storms may overcome a flock in flight, especially in times of migration. The image to the right shows an example where a group of bats is overtaken by a thunderstorm, and it shows how the phenomenon could take place in some cases. In the image, the bats are in the red zone, which corresponds to winds moving away from the radar station, and enter into a mesocyclone associated with a tornado (in green). These events may occur easily with birds in flight. In contrast, it is harder to find a plausible explanation for rains of terrestrial animals; the enigma persists despite scientific studies.
Sometimes, scientists have been incredulous of extraordinary claims of rains of fish. For example, in the case of a rain of fish in Singapore in 1861, French naturalist Francis de Laporte de Castelnau explained that the supposed rain took place during a migration of walking catfish, which are capable of dragging themselves over the land from one puddle to another. Thus, he argued that the appearance of fish on the ground immediately after a rain was easily explained, as these animals usually move over soft ground or after a rain.
Whether or not our solar system is the only planetary system in the universe has intrigued scientists and philosophers for hundreds of years. It has only been in the last 15 years that planets orbiting distant stars have been detected. However, we still have not seen these 'extrasolar' planets. So, how do we know they are there?
A giant planet orbiting close to a solar-type star.
Until recently there was no evidence to suggest that any planets existed around the 100 billion stars in our galaxy, except for around our sun. However, over the last 15 years we have detected evidence of over a hundred planets orbiting other stars, even though they are far too small to be seen by any telescope. We know they are there because of the effect they have on their host star. When a planet orbits a star, its gravity causes the star to wobble. Changes in the wavelength of the light coming from the star can be detected when the star wobbles in the direction of Earth. This change in wavelength is known as the Doppler effect and is similar to the change in pitch of the siren when a police car passes the listener.
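For reference (the relation is not spelled out in the original), the non-relativistic Doppler formula connecting the measured wavelength shift to the star's radial velocity $v_r$ is:

$$\frac{\Delta\lambda}{\lambda} = \frac{v_r}{c}$$

A wobble of roughly 13 m/s, about what Jupiter induces in the Sun, therefore shifts spectral lines by only $\Delta\lambda/\lambda \approx 4\times10^{-8}$, which is why these detections demand extremely precise spectroscopy.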
Extrasolar planets are different to Earth. They are massive. The biggest we know of is 3,180 times larger than Earth. Even the smallest extrasolar planet is about 30 times larger than our planet. Most also have orbits which are about five times closer to their host star. This means that they must have incredibly high surface temperatures. So far the biggest extrasolar planetary system found is around Upsilon Andromedae, a star 44 light years away that has three orbiting giant planets.
This is not to say that there are no extrasolar systems like ours. Extrasolar planets that are large and close to their host star create a larger and more noticeable wobble. This means their Doppler effect is more easily detected. At present we are not able to detect the Doppler effect for an extrasolar system similar to ours, but it is possible they are out there.
The Mars rover Curiosity’s first roll was more than a cause for celebration — it will help pinpoint where the rover set down, and emblazon the name of its maker into the Martian soil. Curiosity’s wheels have holes arranged in the Morse code pattern for “JPL.”
Each of Curiosity’s six 20-inch-diameter wheels has a zigzag tread and a dash-dot pattern (.--- .--. .-..), which translates into the short and long signals of Morse code for the letters JPL. Engineers at the Jet Propulsion Laboratory, where the rover was built, designed it this way in homage to the unmanned planetary systems center. It’s also to serve as a sort of wheel-based odometer.
Matt Heverly, lead rover driver, said in a news conference that the tracks will help the team figure out the rover’s position. “If we are in sand dunes where we don't have lots of rock features around us, we can use those patterns to do our visual odometry,” he said.
Curiosity’s first roll is the latest in a series of checkouts it must finish before driving to its first science target. The science team found out that some debris kicked up by the rover’s sky crane rocket backpack landed on one of its wind sensors, scrambling its data-collection abilities. But all things considered — from the thundering launch to radiation-filled journey to Mars to the crazy sky crane delivery — Curiosity is a picture of health.
Eventually, the rover will traverse through Gale Crater to Mt. Sharp at its center.
One could inject perhaps a dozen or so photons into a cavity and then launch through it, one by one, Rydberg atoms whose velocity is fixed at about a meter per second. The kinetic energy of these atoms would be greater than the atom-cavity potential energy, and they would pass through the cavity after experiencing a slight positive or negative delay, depending on the sign of the atom-cavity detuning. To detect the atom's position after it has passed through the cavity, researchers could fire an array of field ionization detectors simultaneously some time after the launch of each atom. A spatial resolution of a few microns should be good enough to count the number of photons in the cavity.
Before measurement, of course, the photon number is not merely a classically unknown quantity. It also usually contains an inherent quantum uncertainty. The cavity generally contains a field whose description is a quantum wave function assigning a complex amplitude to each possible number of photons. The probability that the cavity stores a given number of photons is the squared modulus of the corresponding complex amplitude.
The laws of quantum mechanics say that the firing of the detector that registers an atom's position after it has crossed the cavity collapses the ambiguous photon-number wave function to a single value. Any subsequent atom used to measure this number will register the same value. If the experiment is repeated from scratch many times, with the same initial field in the cavity, the statistical distribution of photons will be revealed by the ensemble of individual measurements. In any given run, however, the photon number will remain constant, once pinned down.
This method for measuring the number of photons in the cavity realizes the remarkable feat of observation known as quantum nondemolition. Not only does the technique determine perfectly the number of photons in the cavity, but it also leaves that number unchanged for further readings.
Although this characteristic seems to be merely what one would ask of any measurement, it is impossible to attain by conventional means. The ordinary way to measure this field is to couple the cavity to some kind of photodetector, transforming the photons into electrons and counting them. The absorption of photons is also a quantum event, ruled by chance; thus, the detector adds its own noise to the measured intensity. Furthermore, each measurement requires absorbing photons; thus, the field irreversibly loses energy. Repeating such a procedure therefore results in a different, lower reading each time. In the nondemolition experiment, in contrast, the slightly nonresonant atoms interact with the cavity field without permanently exchanging energy.
Quantum optics groups around the world have discussed various versions of quantum non-demolition experiments for several years, and recently they have begun reducing theory to practice. Direct measurement of an atom's delay is conceptually simple but not very sensitive. More promising variants are based on interference effects involving atoms passing through the cavity--like photons, atoms can behave like waves. They can even interfere with themselves. The so-called de Broglie wavelength of an atom is inversely proportional to velocity; a rubidium atom traveling 100 meters per second, for example, has a wavelength of 0.45 angstrom.
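As a quick check of that figure (assuming rubidium-87, with mass $m \approx 87 \times 1.66\times10^{-27}$ kg), the de Broglie relation gives:

$$\lambda = \frac{h}{mv} = \frac{6.63\times10^{-34}\ \mathrm{J\,s}}{(1.44\times10^{-25}\ \mathrm{kg})(100\ \mathrm{m/s})} \approx 4.6\times10^{-11}\ \mathrm{m},$$

or about 0.46 angstrom, in agreement with the quoted value.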
If an atom is slowed while traversing the cavity, its phase will be shifted by an angle proportional to the delay. A delay that holds an atom back by a mere 0.22 angstrom, or one half of a de Broglie wavelength, will replace a crest of the matter wave by a trough. This shift can readily be detected by atomic interferometry.
If one prepares the atom itself in a superposition of two states, one of which is delayed by the cavity while the other is unaffected, then the atomic wave packet itself will be split into two parts. As these two parts interfere with each other, the resulting signal yields a measurement of the phase shift of the matter wave and hence of the photon number in the cavity. Precisely this experiment is now under way at our laboratory in Paris, using Rydberg atoms that are coupled to a superconducting cavity in an apparatus known as a Ramsey interferometer.
Polynomials, the meat and potatoes of high-school algebra, are foundational to many aspects of quantitative science. But it would take a particularly enthusiastic math teacher to think of these trusty workhorses as beautiful.
As with so many phenomena, however, what is simple and straightforward in a single serving becomes intricately detailed—beautiful, even—in the collective.
On December 5 John Baez, a mathematical physicist at the University of California, Riverside, posted a collection of images of polynomial roots by Dan Christensen, a mathematician at the University of Western Ontario, and Sam Derbyshire, an undergraduate student at the University of Warwick in England.
Polynomials are mathematical expressions that in their prototypical form can be described as a sum of terms, each consisting of one or more variables raised to various powers. As a single-variable example, take x^2 - x - 2. This expression is a second-degree polynomial, or a quadratic, meaning that the variable (x) is raised to the second power in the term with the largest exponent (x^2).
A root of such a polynomial is a value for x such that the expression is equal to zero. In the quadratic above, the roots are 2 and –1. That is to say, plug either of those numbers in for x and the polynomial will be equal to zero. (These roots can be found by using the famous quadratic formula.) But some roots are more complex. Take the quadratic polynomial x^2 + 1. Such an expression is only equal to zero when x^2 is equal to –1, but on its face this seems impossible. After all, a positive number times a positive number is positive, and a negative number times a negative number is positive as well. So what number, multiplied by itself, could be negative?
Imaginary numbers were, well, imagined into existence to fit the bill. Based on the number i, the square root of –1, imaginary numbers are unusual in that they do not represent a tangible physical quantity. (You cannot have i dollars—at least, not if you wish to pay your bills.) Polynomial roots can be either real or imaginary—that is, they may or may not have an imaginary component.
What Christensen and Derbyshire did was plot the roots of entire families of single-variable polynomials, imposing constraints on the polynomials' degrees and coefficients. (Coefficients are the multipliers of the variable terms—in the polynomial 4x - 2, the coefficients are 4 and –2, respectively.) For example, Christensen plotted the roots of every polynomial whose degree is six or less and whose coefficients are integers between –4 and 4.
The horizontal axis in Christensen's and Derbyshire's plots is the real numbers; the vertical axis is the imaginary numbers. So a real root, such as –1, would fall on the horizontal axis; a purely imaginary root such as 2i would fall on the vertical axis. The rest of the imaginary numbers—those with both real and imaginary components—fill out the quadrants of the graph. For instance, the imaginary number 3 - 2i would be represented by the point aligning with 3 on the horizontal (real) axis and –2 on the vertical (imaginary) axis.
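The idea is easy to reproduce on a small scale. The sketch below is my own illustration, restricted to quadratics so the roots can be computed with the quadratic formula rather than a numerical root-finder; it prints each root of every quadratic with integer coefficients between -4 and 4 as a (real, imaginary) pair, which any scatter-plot tool can turn into a miniature version of these images:

```java
public class QuadraticRootPlot {

    public static void main(String[] args) {
        // Enumerate all quadratics a*x^2 + b*x + c with integer
        // coefficients in [-4, 4] and a nonzero leading coefficient.
        for (int a = -4; a <= 4; a++) {
            if (a == 0) continue;  // degree must be exactly 2
            for (int b = -4; b <= 4; b++) {
                for (int c = -4; c <= 4; c++) {
                    double disc = (double) b * b - 4.0 * a * c;
                    if (disc >= 0) {
                        // Two real roots: imaginary parts are zero
                        double s = Math.sqrt(disc);
                        System.out.printf("%f 0.0%n", (-b + s) / (2.0 * a));
                        System.out.printf("%f 0.0%n", (-b - s) / (2.0 * a));
                    } else {
                        // A complex-conjugate pair of roots
                        double re = -b / (2.0 * a);
                        double im = Math.sqrt(-disc) / (2.0 * a);
                        System.out.printf("%f %f%n", re, im);
                        System.out.printf("%f %f%n", re, -im);
                    }
                }
            }
        }
    }
}
```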
What happens when these families of roots are plotted en masse? Intricate and intriguing patterns emerge that should appeal even to the most math-averse. Take a look at Christensen's and Derbyshire's images to see for yourself.
Slide Show: Polynomial Plot
Graphs of Exponential Functions Math Help Game Tips:
- The relations y=2^x, y=3^x, and y=4^x are examples of exponential functions.
- A 'positive constant base raised to a variable exponent', (constant)^(variable), is an 'Exponential function'.
- In general, an 'Exponential function' is represented by the formula y=a^x where a>0. [Also a cannot be 1.]
Notice that for integer a>0, the graph of y=a^x passes through the point (0,1).
- The relations y=3(2^x), y=4(6^x), and y=2(3^x) are examples of exponential functions with a front multiplier.
- In general, a 'Constant times an Exponential function' can be represented by a formula such as y=k(a^x).
For integer a>0, the graph of y=k(a^x) passes through the point (0,k).
- In this game, 'k' is an integer from -3 to 3 and 'a' is an integer from 2 to 5.
- Notice y=2^x has graph points such as (-1,1/2), (0,1), (1,2), and (2,4)...,
while a relation such as y=2(3^x) has graph points such as (-1,2/3), (0,2), (1,6), and (2,18)... (see the code sketch after these tips).
- There are many applications of exponential functions. For example, exponentials are used in formulas
for compound interest, for geometric growth patterns [a^x], and for radioactive decay patterns [a^(-x)].
- The game can be played using the mouse by itself or using the keyboard by itself.
- If the game doesn't respond to keyboard input, click inside the game area to reset the game's focus.
- Adjust the game's speed by pressing the + or - key repeatedly.
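As referenced in the graph-points tip above, here is a short sketch (the class and variable names are my own) that evaluates sample points on y=k(a^x) for the case k=2, a=3:

```java
public class ExponentialPoints {

    public static void main(String[] args) {
        // Points on y = k(a^x): here k = 2 and a = 3, i.e. y = 2(3^x)
        int k = 2, a = 3;
        for (int x = -1; x <= 2; x++) {
            double y = k * Math.pow(a, x);
            System.out.println("(" + x + ", " + y + ")");
        }
        // Prints (-1, 0.666...), (0, 2.0), (1, 6.0), (2, 18.0),
        // matching the listed points (-1,2/3), (0,2), (1,6), and (2,18).
    }
}
```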
The substitution method is a very valuable way to evaluate some indefinite integrals. It introduces a new variable standing for part of the function being integrated, and substitutes the new variable and its differential in order to make finding the wanted antiderivative easier.
Let's take a look at a differentiation problem. If I asked you to differentiate one tenth of (1 plus x cubed) to the tenth power, you'd probably use the chain rule, right? One tenth times 10 times (1 plus x cubed) to the ninth times 3x squared. The 3x squared comes from the derivative of the 1 plus x cubed. So this simplifies: the one tenth and the 10 cancel, leaving (1 plus x cubed) to the ninth times 3x squared. So we use the chain rule to differentiate something like this. But remember, every derivative formula can also be written as an integral formula, so if I take this as my integrand, the integral of this function with respect to x equals the original function: one tenth of (1 plus x cubed) all raised to the tenth power, plus c. Now what I want to ask you about this is: how would we have done this if we didn't actually start with a derivative problem, if we didn't actually know what the answer was to begin with? How would we have integrated this? And the answer is the method of substitution.
We used the chain rule to get this derivative; the method of substitution kind of reverses the process, it undoes the chain rule. So this is like a reverse chain rule, and let me show you how it works. Whenever you use the chain rule or the method of substitution, you usually have a composite function of some kind in your integral. You want to look at the inside part of that composite function, which in this case is the 1 plus x cubed, and you're going to substitute for that. This is essentially a change-of-variables trick. I'm going to let w equal 1 plus x cubed, so this is going to become w. And then I need the derivative of that with respect to x.
The derivative of w is going to be 3x squared, and, very important, whenever you have an integral you always have this little dx or d-something. This is called the differential; you can get a differential from a derivative like this by multiplying both sides by dx. The differential I'm going to need to change to is dw, and so this is going to be my conversion. Now let me show you how this works: the (1 plus x cubed) to the ninth, that's w to the ninth, and the 3x squared dx, that's exactly dw, so by a change of variables I've turned this difficult integral into this very easy one. I can integrate this using the power rule for antidifferentiation, and remember the way that works is you add 1 to the exponent, so this becomes w to the tenth, and you divide by that same new exponent, plus c.
Now because I want my answer to be in terms of x, I need to convert back again, and remember w is 1 plus x cubed, so I just plug that back in: (1 plus x cubed) all to the tenth, over 10, plus c, and that's it. That's how you use the method of substitution to obtain an antiderivative for a complicated function like this. It basically undoes the chain rule, so whenever you see a composite function, or something that you don't think corresponds to any of the integration formulas that you know, try the method of substitution; it works a lot.
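Written out compactly, the worked example from this lesson is:

$$\int (1+x^3)^9 \, 3x^2 \, dx \;=\; \int w^9 \, dw \;=\; \frac{w^{10}}{10} + C \;=\; \frac{(1+x^3)^{10}}{10} + C, \qquad \text{with } w = 1+x^3,\; dw = 3x^2\,dx.$$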
Write the shortest possible program that demonstrates a programming language's entire syntax: statements, expressions, operators, reserved words, etc.
Take the language's grammar (usually in EBNF) and create a program that uses all of it.
The program doesn't have to do anything particularly useful, but if it does that's a bonus :)
- You must use all of the language's syntax: every type of statement, expression, operator, keyword, etc. the language defines.
- It must be able to be run with no dependencies, except for the standard library included with the language (it doesn't need to use the entire standard lib though).
- Include a link to the language spec you used (e.g. here is the Python 2.7 grammar).
As with many other fields of scientific study, the military has picked up on the use of Artificial Intelligence. The possibilities of military use of AI are boundless, exciting, intimidating, and frightening. While today's military robots are used mainly to find roadside bombs, search caves, and act as armed sentries, they have the potential to do so much more.
Not all military uses of AI directly relate to the battlefield however; it can use Artificial Intelligence for more passive purposes as well. For example, the military has developed a computer game that uses AI to teach new recruits how to speak Arabic. The program requires soldiers to complete game missions during which they must be able to understand and speak the language. This system gives the soldiers a more realistic, easy, and effective way to learn the new tongue. This particular game works by using speech recognition technology that evaluates the soldier's words and detects common errors. It can then create a model of the soldier, keeping track of what he's learned and what he hasn't in order to provide individualized feedback for the soldier's specific problems. Those who are working on this project believe that it will change the face of all language learning and similar programs will become mainstream sometime in the near future.
The military is also trying to create automated vehicles — the ultimate autopilot. Machines already have the ability to see the world around them and read a map, theoretically well enough to be able to drive from point to point without human assistance. However, when the Pentagon first sponsored a competition for prototype-automated vehicles in the Mojave Desert in 2004 to test their resilience against difficult terrain, none of the fifteen entries crossed the finish line. The following year, a car built by students at Stanford University completed the 131 mile course in six hours and 53 minutes. The car completed the race without any human input, using only onboard computers and sensors to navigate terrain meant to mimic combat conditions in Iraq and Afghanistan. Though this proved that great strides had been made in one year alone, even more are needed before the technology can be marketed and put to real use.
According to the Pentagon, actual robotic soldiers powered by Artificial Intelligence will be a major fighting force in the American army, probably within the next decade. The first robot soldiers will actually be remote-controlled vehicles. The military has poured tens of billions of dollars into this project already. Congress wants to see this happen, and they ordered that a third of all military vehicles and deep-strike aircraft be automated by 2010.
As the machines begin to think, see, and react more like humans, the level of their autonomy and our level of trust in them will grow as well. However, it is predicted that a true soldier-simulating robot will not come about for another 30 years. These robots need to be able to determine friend from foe and enemy from bystander, and teaching them to do so will require a tremendous amount of research and work. The government has assured us however that these robotic soldiers will not be put into the field and allowed to make such decisions until they are ready to do so.
Another current infantry prototype knows how to recognize an enemy when it is under fire. When this happens, it can react to enemy fire on its own or follow orders given to it by a remote observer. Although it's programmed to work autonomously, in its present state it still requires outside monitoring controls in order to work. Its designers plan to have it usable for infantry missions by 2015.
Another of the military's prototypes nearly realizes the anthropomorphic goal imagined by Isaac Asimov in his book I, Robot. This prototype is a machine about four feet high with a Cyclops eye and a gun for a right arm. It is programmed to perform basic hunting and killing tasks. It can actually find valid targets on its own and can shoot at them with remarkable accuracy.
The list of benefits of using machines to achieve military goals is long and significant. The immediate and most evident boon of such technology is the elimination of human risk: machines, not humans, would be lost in battle. In addition, specialized robots can be designed to accomplish specific tasks more effectively than humans can, increasing the military's overall effectiveness. They are also more cost-effective. Robots will always be able to do what they were designed to do and can be recycled when they are obsolete. A human soldier costs on average $4 million over his lifetime, and the U.S. Pentagon cannot obtain the money to pay for all of them. Robots could cost a tenth of that amount or less.
Although the ultimate goal of the robot soldier is to completely eliminate human risk, even the experts say that war will always be a human endeavor involving human loss of life, no matter how far the AI warrior is developed. New ethical questions will arise once we have the ability to invade countries without risk of bloodshed on the part of the invader. And even though these robotic developments will soon be on our doorstep, it’s a little frightening to see that the only ones addressing the use and/or misuse of such technology are the scientists and the authors of science fiction. | <urn:uuid:0cd0a8a2-6562-4035-b5a5-d2d8543202bf> | 3.34375 | 1,040 | Nonfiction Writing | Science & Tech. | 39.161797 |
1.2. A brief history of cosmological constant
Originally, Einstein introduced the cosmological constant in the field equation for gravity (as in equation (5)) with the motivation that it allows for a finite, closed, static universe in which the energy density of matter determines the geometry. The spatial sections of such a universe are closed 3-spheres with radius $l = (8\pi G \rho_{NR})^{-1/2} = \Lambda^{-1/2}$, where $\rho_{NR}$ is the energy density of pressureless matter (see section 2.4). Einstein had hoped that normal matter is needed to curve the geometry; a demand which, to him, was closely related to Mach's principle. This hope, however, was soon shattered when de Sitter produced a solution to Einstein's equations with a cosmological constant containing no matter. However, in spite of two fundamental papers by Friedmann and one by Lemaitre [13, 14], most workers did not catch on with the idea of an expanding universe. In fact, Einstein originally thought Friedmann's work was in error but later published a retraction of his comment; similarly, at the Solvay meeting in 1927, Einstein was arguing against the solutions describing an expanding universe. Nevertheless, the Einstein archives do contain a postcard from Einstein to Weyl in 1923 in which he says: "If there is no quasi-static world, then away with the cosmological term". The early history following de Sitter's discovery is clearly somewhat confused, to say the least.
It appears that the community accepted the concept of an expanding universe largely due to the work of Lemaitre. By 1931, Einstein himself had rejected the cosmological term as superfluous and unjustified, in a single-authored paper (a paper that has often been mis-cited in the literature, eventually converting part of the journal name "preuss" into a co-author, "Preuss, S. B."!). There is no direct record that Einstein ever called the cosmological constant his biggest blunder. It is possible that this often repeated "quote" arises from Gamow's recollection: "When I was discussing cosmological problems with Einstein, he remarked that the introduction of the cosmological term was the biggest blunder he ever made in his life." By the 1950s the view was decidedly against the cosmological constant, and the authors of several classic texts (like Landau and Lifshitz, Pauli, and Einstein) argued against it.
In later years, the cosmological constant had a chequered history and was often accepted or rejected for wrong or insufficient reasons. For example, the original value of the Hubble constant was nearly an order of magnitude higher than the currently accepted value, thereby reducing the age of the universe by a similar factor. At this stage, as well as on several later occasions (e.g., [22, 23]), cosmologists have invoked the cosmological constant to reconcile the age of the universe with observations (see section 3.2). Similar attempts have been made in the past when it was felt that counts of quasars peak at a given phase in the expansion of the universe [24, 25, 26]. These reasons, for the introduction of something as fundamental as the cosmological constant, seem inadequate at present.
However, these attempts clearly showed that a sensible cosmology can only be obtained if the energy density contributed by the cosmological constant is comparable to the energy density of matter at the present epoch. This remarkable property was probably noticed first by Bondi and has been discussed by McCrea. It has also been mentioned that such coincidences were discussed in Dicke's gravity research group in the sixties; it is almost certain that this must have been noticed by several other workers in the subject.
The first cosmological model to make central use of the cosmological constant was the steady state model [29, 30, 31]. It made use of the fact that a universe with a cosmological constant has a time-translational invariance in a particular coordinate system. The model also used a scalar field with negative energy to continuously create matter while maintaining energy conservation. While modern approaches to cosmology invoke negative energies or pressures without hesitation, steady state cosmology was discarded by most workers after the discovery of the CMBR.
The discussion so far has been purely classical. The introduction of quantum theory adds a new dimension to this problem. Much of the early work [32, 33], as well as the definitive work by Pauli [34, 35], involved evaluating the sum of the zero-point energies of a quantum field (with some cut-off) in order to estimate the vacuum contribution to the cosmological constant. Such an argument, however, is hopelessly naive (in spite of the fact that it is often repeated even today). In fact, Pauli himself was aware of the fact that one must exclude the zero-point contribution from such a calculation. The first paper to stress this clearly and carry out a second-order calculation was probably the one by Zeldovich, though the connection between vacuum energy density and the cosmological constant had been noted earlier by Gliner and even by Lemaitre. Zeldovich assumed that the lowest-order zero-point energy should be subtracted out in quantum field theory and went on to compute the gravitational force between particles in the vacuum fluctuations. If $E$ is an energy scale of a virtual process corresponding to a length scale $l = \hbar c / E$, then $l^{-3} = (E/\hbar c)^3$ particles per unit volume of energy $E$ will lead to a gravitational self-energy density of the order of

$$\epsilon_{grav} \simeq \frac{G (E/c^2)^2}{l}\, l^{-3} = \frac{G E^6}{\hbar^4 c^8}.$$
This will correspond to $\Lambda L_P^2 \simeq (E/E_P)^6$, where $E_P = (\hbar c^5/G)^{1/2} \simeq 10^{19}$ GeV is the Planck energy. Zeldovich took $E \simeq 1$ GeV (without any clear reason) and obtained a $\Lambda$ which contradicted the observational bound "only" by nine orders of magnitude.
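As a quick consistency check (a back-of-the-envelope sketch using only the numbers quoted above and the bound $\Lambda L_P^2 \approx 10^{-123}$ cited in section 11):

$$\Lambda L_P^2 \simeq \left(\frac{E}{E_P}\right)^6 = \left(\frac{1\ \mathrm{GeV}}{10^{19}\ \mathrm{GeV}}\right)^6 = 10^{-114}, \qquad \frac{10^{-114}}{10^{-123}} = 10^{9},$$

which is indeed a discrepancy of nine orders of magnitude.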
The first serious symmetry principle which had implications for the cosmological constant was supersymmetry, and it was realized early on [10, 11] that the contributions to vacuum energy from fermions and bosons will cancel in a supersymmetric theory. This, however, is not of much help since supersymmetry is badly broken in nature at sufficiently high energies (at $E_{SS} \gtrsim 10^2$ GeV). In general, one would expect the vacuum energy density to be comparable to that corresponding to the supersymmetry breaking scale, $E_{SS}$. This will, again, lead to an unacceptably large value for $\Lambda$. In fact the situation is more complex, and one has to take into account the coupling of the matter sector and gravitation, which invariably leads to a supergravity theory. The description of the cosmological constant in such models is more complex, and none of the attempts have provided a clear direction of attack (see, for example, the reviews of these early attempts).
The situation becomes more complicated when the quantum field theory admits more than one ground state, or even more than one local minimum of the potential. For example, the spontaneous symmetry breaking in the electroweak theory arises from a potential of the form

$$V(\phi) = V_0 - \mu^2 \phi^2 + g\, \phi^4.$$
At the minimum, this leads to an energy density $V_{min} = V_0 - (\mu^4/4g)$. If we take $V_0 = 0$ then $(V_{min}/g) \simeq -(300\ \mathrm{GeV})^4$; even if $g = \mathcal{O}(10^{-2})$ we get $|V_{min}| \sim 10^6\ \mathrm{GeV}^4$, which misses the bound on $\Lambda$ by a factor of $10^{53}$. It is really of no help to set $V_{min} = 0$ by hand. At early epochs of the universe, the temperature-dependent effective potential [39, 40] will change the minimum to $\phi = 0$ with $V(\phi) = V_0$. In other words, the ground-state energy changes by several orders of magnitude during the electroweak and other phase transitions.
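A one-line check of that minimum, using the potential as reconstructed above:

$$V'(\phi) = -2\mu^2 \phi + 4g\,\phi^3 = 0 \;\Rightarrow\; \phi_{min}^2 = \frac{\mu^2}{2g}, \qquad V_{min} = V_0 - \frac{\mu^4}{2g} + \frac{\mu^4}{4g} = V_0 - \frac{\mu^4}{4g}.$$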
Another facet is added to the discussion by the currently popular models of quantum gravity based on string theory [41, 42]. The currently accepted paradigm of string theory encompasses several ground states of the same underlying theory (in a manner which is as yet unknown). This leads to the possibility that the final theory of quantum gravity might allow different ground states for nature, and we may need an extra prescription to choose the actual state in which we live. The different ground states can also have different values for the cosmological constant, and we need to invoke a separate (again, as yet unknown) principle to choose the ground state in which $\Lambda L_P^2 \simeq 10^{-123}$ (see section 11). | <urn:uuid:c0b04501-8e22-446b-979a-1312d1bde0d9> | 3.5 | 1,745 | Academic Writing | Science & Tech. | 38.266987 |
Nanocrystals double up
Jun 26, 2003
Magnetic nanocrystals and semiconductor quantum dots can self-assemble into ‘metamaterials’ that could be useful in a range of applications, experiments in the US have shown. Franz Redl at the IBM TJ Watson Research Center in New York and colleagues at IBM, Columbia University and the University of New Orleans made the new materials with lead selenide semiconductor quantum dots and iron oxide magnetic nanocrystals (F X Redl et al. 2003 Nature 423 968).
The properties of a metamaterial depend on the characteristics and interactions of the different nanocrystals used to make it. Metamaterials with improved magnetic, optical, electrical and mechanical properties could be used in applications as diverse as electric drives, motors and generators.
Redl and co-workers varied the sizes of the two different types of nanocrystal, as well as the processing conditions, to optimize the properties of the final ‘superlattice’ structure. Images taken with a transmission electron microscope confirm that two main types of superlattice were created (see figures 1 and 2). The best structures formed when the diameter of the lead selenide quantum dots was 55% that of the iron oxide nanocrystals.
The average cubic unit cell in the superlattice was found to contain 8 iron oxide nanocrystals and 104 lead selenide quantum dots, giving a total of about 4.5 million atoms per unit cell. Moreover, long-range order was seen over an area of up to 2 square microns, whereas most metamaterials made in the lab so far have exhibited only short-range order.
“We now are investigating magneto-optic phenomena in these materials to make new optical modulators and switches that could serve as building blocks for future telecommunications,” Redl told PhysicsWeb. “The unique combination of magnetic and semiconductor properties may also have an impact in magneto-electronics where both the charge of electron and its magnetic spin are exploited to carry out electronic operations.”
About the author
Belle Dumé is Science Writer at PhysicsWeb | <urn:uuid:f180b664-874f-4276-ada4-11e61c0ebb80> | 3.09375 | 447 | Truncated | Science & Tech. | 23.116623 |
The distinction between web design and web development comes down to two different roles in the construction of websites. Design revolves around the aesthetics of a website, whilst development deals with the coding that ensures functionality and accessibility. There are individuals who can do both; however, most people in the digital industry, even those who know aspects of the other field, still tend to specialise in one area. Generally, developers earn a higher salary than designers. This reflects their role as custodians of company systems, their specialised coding skills, and their relative scarcity compared with the surplus of designers in the market. However, both development and design skills are necessary to create a commendable website.
- Web Designer vs Web Developer: Infographic (downgraf.com)
- Web Designers vs. Web Developers | Web Designers and Developers – Same or Different? (mohil.typepad.com)
- 25 Amazing Examples Of Minimalism In Web Design (downgraf.com)
- 100+ Useful Web Design Tools [ Designers Toolbox ] (madrasgeek.com)
- The Web Design Business is Dead (collaborativegrowthnetwork.com) | <urn:uuid:27b9dac9-e70b-4595-b292-916ecadbbab9> | 2.6875 | 241 | Listicle | Software Dev. | 27.885459 |
National Geographic magazine
Story by Mark Jenkins
Photography by Kevin Schafer
"The Amazon dolphin, Inia geoffrensis, parted company with its oceanic ancestors about 15 million years ago, during the Miocene epoch. Sea levels were higher then, says biologist Healy Hamilton of the California Academy of Sciences in San Francisco, and large parts of South America, including the Amazon Basin, may have been flooded by shallow, more or less brackish water. When this inland sea retreated, Hamilton hypothesizes, the Amazon dolphins remained in the river basin, evolving into striking creatures that bear little resemblance to our beloved Flipper. These dolphins have fat, bulbous foreheads and skinny, elongated beaks suited to snatching fish from a tangle of branches or to rooting around in river mud for crustaceans. Unlike marine dolphins, they have unfused neck vertebrae that allow them to bend at up to a 90-degree angle—ideal for slithering through trees. They also have broad flippers, a reduced dorsal fin (a larger one would just get in the way in tight spots), and small eyes—echolocation helps them pinpoint prey in muddy water.
At up to 450 pounds and eight feet in length, the Amazon dolphin, or boto, is the largest of the four known species of river dolphin. The others live in the Ganges in India and the Indus in Pakistan, in the Yangtze in China, and in the Río de la Plata between Argentina and Uruguay. All river dolphins are superficially similar, says Hamilton, yet the four species don't belong to the same family. DNA studies by Hamilton and others have shown that river dolphins evolved from archaic marine cetaceans (the order that also includes whales) on at least three separate occasions—first in India, later in China and in South America—before modern marine dolphins themselves had emerged as a distinct group. In an example of what's known as convergent evolution, geographically isolated and genetically distinct species developed similar characteristics because they were adjusting to similar environments...."
National Geographic.com has the following available on their website:
- the full article
- an interactive learning tool for the feature
- a gallery of photos by Kevin Schafer (prints available to order). | <urn:uuid:5dffbcf8-71e1-43dd-98d0-93a7915cd26e> | 3.5625 | 470 | Truncated | Science & Tech. | 31.730846 |
Crashing into the Moon
A whole new space race has begun.
Over the next decade, the United States… Germany… England… Japan… India… China… Russia… and even a few private companies… have plans to send rockets to explore the moon.
They will map the lunar surface… search for clues to its origins… and find out what’s there that humans can use to survive.
A Russian mission will send seismic detectors into the soil to monitor moon-quakes… and study the flow of heat from the moon’s core.
A Japanese mission will use x-rays to search for rare minerals.
An American mission is prospecting for water in the shadowy craters at the Moon’s poles.
But governments aren’t the only ones joining this new race to the Moon;
With more missions on the drawing boards…
- and the chance to actually make money developing space businesses -
private ventures are angling to supply launch or human transport services….
And even begin exploiting space resources like energy…materials…and the freedom from gravity itself.
Private robotics teams, vying for the 30 million dollar Google Lunar X-Prize, are designing, building and planning to launch rovers with video cameras to explore lunar landscapes.
It’s inspired by the Orteig prize that sent Charles Lindbergh flying across the Atlantic Ocean more than 80 years ago.
That feat helped launch the civil aviation industry. The sponsors of this prize hope it will unleash the entrepreneurial spirit into space.
The goal of these missions is to begin to fulfill a grand promise of the space age… to send humans back to the moon and beyond, to permanently live and work in space.
NASA has unveiled its grand plan…
It’s a series of steps… designed to build knowledge and expertise, while steadily reducing the risks to human life.
For now, it’s the space shuttle to take us up there. It’s a big freight hauling system able to lift over 25 tons of people and machines into space with every launch.
On more than two-dozen flights since | <urn:uuid:96ef5c2b-e667-48d7-8b5b-d530c5c00f71> | 3.203125 | 439 | Truncated | Science & Tech. | 49.750872 |
SEE: Ten things people want to know about Python for more details.
- 'Faster' requires a bit of thought in practice. Language implementations, not language syntax, have a speed that can be measured on benchmarks. Python implementations can vary quite a bit, although dynamically typed languages usually perform slower than statically typed ones on standard benchmarks. As a practical matter, a profiler is necessary to understand performance. Python usually runs plenty fast.
First, only language implementations have speed; Python as a language is a set of rules (its syntax and semantics) and so doesn't have a 'speed'. Only a specific language implementation can have a measurable speed, and then we can only compare performance with a specific implementation of another language. In general you can't compare the speed of one language to another - you can only compare implementations.
With Python there are several implementations - CPython (with or without Psyco, a specializing compiler for CPython), IronPython, Jython, PyPy - plus several partial implementations that implement a subset of Python (Tinypy) or can even compile a subset of Python to C++ (Shedskin). If you say Python is slow, which specific implementation are you talking about?
Having said that, as a dynamic language Python will typically perform slower on specific benchmarks than standard implementations of some other languages (although it is faster than plenty of others). As a dynamic language, a lot of information about the program can only be determined at runtime. This means that many common compiler tricks, which rely on knowing the type of objects at compile time, can't work. Despite this, there are a lot of things that can be done to improve the performance of dynamic languages (beyond the performance of statically typed languages, many believe), several of which have been done before in virtual machines like Strongtalk and are being explored for Python in the PyPy tracing JIT compiler. Finally, using an execution profiler, such as the Python profile or cProfile module, provides the critical information for speeding up execution. Most programs spend the majority of execution time in calls to operating system libraries, and execution trade-offs are less intuitive. For example, many small requests for data across a network are much slower than a single, larger request. In practice, Python code execution is fast enough.
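To make the profiling advice concrete, here is a minimal sketch of a cProfile session using only the standard library (the function being profiled is a made-up example):

import cProfile
import pstats

def slow_sum(n):
    # Deliberately naive: building a throwaway list on every iteration
    total = 0
    for i in range(n):
        total += sum([i] * 100)
    return total

# Profile the call, save the stats, then print the ten most
# expensive entries sorted by cumulative time.
cProfile.run("slow_sum(10000)", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)

The output pinpoints where the time actually goes, which is usually more productive than guessing from benchmarks. | <urn:uuid:2b541ddf-cfcf-442a-a646-b20c6d62d72f> | 3.34375 | 465 | Knowledge Article | Software Dev. | 26.452021 |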
In one of the neutron-induced fission reactions of U-235 (atomic mass = 235.043922), the products are Ba-140 and Kr-93 (a radioactive gas). What volume of Kr-93 (at and 1.0 ) is produced when 1.80 g of U-235 undergoes this fission reaction? Please provide the answer in L.
I got 24 for this, but it is wrong, so I appreciate your help!
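For what it's worth, a quick ideal-gas sketch in Python (assuming the conditions dropped from the question were 25 °C and 1.0 atm; adjust T if they were different):

# Volume of Kr-93 from fission of 1.80 g U-235, ideal gas assumed.
R = 0.08206             # L*atm/(mol*K)
T = 298.15              # K, assumed 25 deg C
P = 1.0                 # atm, assumed
n = 1.80 / 235.043922   # mol U-235; one Kr-93 atom per fission event
V = n * R * T / P
print(round(V, 3))      # ~0.187 L

That is nowhere near 24 L; 24.5 L is the molar volume of a gas at these conditions, so an answer of 24 suggests the mole count was never applied. | <urn:uuid:efbcbb39-f7df-48e7-ad0b-0a69fb9d3c36> | 3.15625 | 100 | Q&A Forum | Science & Tech. | 97.367761 |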
named-multiple-values.lisp might be considered a convenience library. It defines a single macro, called named-multiple-values:define-named-multiple-values. This macro generates two macros, whose names it returns. See the example below.
Here is an example of what it can look like:
(define-named-multiple-values foo-values (bar zot zut) (:all &all))
=> FOO-VALUES, FOO-VALUES-BIND

(foo-values () :bar 'barney :zut 'zutteklut)
=> BARNEY, NIL, ZUTTEKLUT

(foo-values-bind (&zut my-zut &bar local-bar) (values 1 2 3)
  (list my-zut local-bar))
=> (3 1)

(foo-values-bind (&all default-foos) (values 1 2 3)
  (foo-values (:defaults default-foos) :zot 'special-zot))
=> 1, SPECIAL-ZOT, 3
However, named-multiple-values is not expected to be particularly useful for interactive work, see the "background" section.
I will offer two analogies to the concept of named-multiple-values:
- Multiple values are similar to functions' argument lists, where the latter is a "down-stream" and the former an "up-stream" flow of information. Much as Common Lisp, by use of &key, allows you to pass arguments (down-stream) to functions by name rather than position, you can now generate up-stream multiple values by name.
- Common Lisp's defstruct offers the possibility of defining abstract types whose concrete type is (for example) a list. So while the type abstractly consists only of members a, b, and c, in reality there will be a list of length three, whose first element is a, second element is b, and third is c. Named multiple-values are very much the same thing, except the set of values only exists as an abstract entity: no list or anything else is ever consed up, since the value sets as such never exist as Lisp objects. Consequently, they cannot be passed around between functions.
What is named-multiple-values useful for? I'm not quite sure. This code is factored out from a project where I had many tens of functions that followed the same protocol, both down-stream and up-stream. That is, the functions accepted the same parameters, and returned the same values. Being the lisper that I am, I took advantage of this regularity in the code by having macros that dealt with both directions of the protocol. This added greatly to the clarity of the code, and saved me a lot of editing.
While the down-stream part of my original project's macrology was rather trivial, and didn't really provide that much beyond what &key already does, the up-stream part I thought might be more interesting for more general-purpose use. So here it is, available as finished code, or just as and idea you might find usable in your own code.
Because this code is factored out and modified quite a bit from my original project with well-tested code, and abstracted into what I suppose might be called a higher-order-macro (a macro-writing macro, that is), I fully expect there to be bugs and quirks. So this code should perhaps be considered more as a description of an idea than a ready-to-use library.
Comments are welcome.
I (Gary King) am not sure that I follow, but are named-multiple-values similar to returning property lists but without the consing?
Yes, I think that would describe it reasonably. If you want to return numerous values and would prefer to refer to each value by name rather than ordering, you can use ``Named multiple-values''. | <urn:uuid:399c81dc-931f-48e5-89b9-6e422baca708> | 2.8125 | 824 | Documentation | Software Dev. | 48.589621 |
ERS-2 data used in El Niño animation
A vivid animation based on data from ESA’s ERS-2 satellite shows the onset of the recent El Niño phenomena from July to December of last year.
Covering a large area of the Pacific Ocean from South America to Australia and southeast Asia, the animation demonstrates the three most important factors that mark a phenomenon that can shape weather patterns from South America to Australia, and from India to southeast Asia:
- sea surface temperature
- sea surface levels
- winds
Surface water temperature is represented as deviations from average temperature values by the colour of the water surface. The greenish-blue colour represents the average temperature of the water. The purple colour represents a temperature 8 deg. Celsius above average, while the blue represents the other extreme of the scale, 8 deg. Celsius below average.
The height of the ocean water, as a deviation from average levels, is shown by the shape of the sea surface, an effect that is difficult to see because of the compression of the video. The 'wave' effect of the surface represents the amplified deviation of the water's surface from its average height; the highest 'waves' display deviations from the average of about 1.8 metres.
The wind is shown as blue arrows. Trade winds in the area, blowing constantly from east to west, are clearly visible, particularly in the final months of 2002. Winds blowing in this direction, pushing warm surface water to the west, is consistent with a weak, or weakening, El Niño.
El Niño expected to weaken
According to the US National Weather Service’s Climate Prediction Center (CPC), El Niño conditions continued during January 2003, but there were indications that the warm episode is beginning to weaken.
"Consistent with current conditions and recent observed trends, most coupled model and statistical model forecasts indicate that El Niño conditions will continue to weaken through April 2003," the CPC forecast states. "Thereafter, the consensus forecast is for near-normal conditions during May-October 2003."
The animation incorporated data from several ERS-2 instruments. Sea level measurements were obtained by the radar altimeter, an active microwave sensor designed to measure return echoes from ocean and ice surfaces. ERS-2’s Along Track Scanning Radar (ATSR) acquired the data on temperatures of the sea surface temperatures. The European Centre for Medium-Range Weather Forecasts, an international organisation for weather data, provided the wind data used in the animation.
What causes El Niño?
In Spanish El Niño means 'the Christ Child' – a name given to it by the Peruvian fishermen who hundreds of years ago noticed how sometimes their coastal waters grew unusually warm and fish grew scarce around Christmas time. They had no way of knowing they were naming a vast weather pattern whose effects strike much of the globe.
El Niño is an irregular oscillation in tropical Pacific currents, around the Equator. Usually, the wind blows in a westerly direction in this region. This pushes the warmer surface water into the western Pacific (which can be as much as half a metre higher than surface levels in the east). In the eastern Pacific, colder water from below the ocean's surface is pulled up to replace the water pushed west. So, the normal situation is warm water (about 30 °C) in the west, cold (about 22 °C) in the east.
In an El Niño, the winds pushing that water to the west get weaker. With thermal circulation some of the warm water piled up in the west is released and moves back east, and not as much cold water gets pulled up from below. This makes the water in the eastern Pacific warmer, an El Niño trademark.
El Niño doesn't stop there. Warmer ocean waters weaken the winds, which in turn further warms the water, a cycle that makes El Niño even stronger. This can have wide-ranging consequences on climate patterns around the world. These can include vastly increased rainfall in South America, drought in Australia and fires across southeast Asia, dying coral reefs in India, severe winter storms in California, a heat wave in Canada and intense hurricanes raging along the Pacific Ocean.
This phenomenon seems to occur every three to seven years. The El Niño of 1997-98 is estimated to have caused more than €30 000 million of global property damage and an unknown toll in human lives.
Several versions available
Several high-resolution versions of the animation are available in Windows (.avi) and Quicktime (.mov) formats (please be patient while they load). To view the animation, click on the preferred version. | <urn:uuid:5cfbe45b-8046-4ef1-9af7-f7f25f8bc993> | 3.6875 | 937 | Knowledge Article | Science & Tech. | 40.455311 |
The Cavity Model
The packing algorithm is based on a cavity model for the available space inside a frame. For example, when the main wish window is created, the main frame is empty and there is an obvious space, or cavity, in which to place widgets. The primary rule about the packing cavity is a widget occupies one whole side of the cavity. To demonstrate this, pack three widgets into the main frame. Put the first two on the bottom, and the third one on the right:
Example 23-5 Mixing bottom and right packing sides.
# pack two frames on the bottom.
frame .one -width 100 -height 50 -bg grey50
frame .two -width 40 -height 40 -bg white
pack .one .two -side bottom
# pack another frame to the right
frame .three -width 20 -height 20 -bg grey75
pack .three -side right
When we pack a third frame into the main window with -side left or -side right, the new frame is positioned inside the cavity, which is above the two frames already packed toward the bottom side. The frame does not appear to the right of the existing frames as you might have expected. This is because the .two frame occupies the whole bottom side of the packing cavity, even though its display does not fill up that side.
Can you tell where the packing cavity is after this example? It is to the left of the frame .three, which is the last frame packed toward the right, and it is above the frame .two, which is the last frame packed toward the bottom. This explains why there was no difference between the previous two examples, in which .one.gamma was packed to the left but .one.right was packed to the right. At that point, packing to the left or right of the cavity had the same effect. However, it will affect what happens if another widget is packed into those two configurations. Try out the following commands after running Example 23-3 and Example 23-4 and compare the difference.
button .one.omega -text omega
pack .one.omega -side right
Each packing parent has its own cavity, which is why introducing nested frames can help. If you use a horizontal or vertical arrangement inside any given frame, you can more easily simulate the packer's behavior in your head!
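For readers who drive Tk from Python's tkinter wrapper, here is a rough translation of Example 23-5 (a sketch of my own, not one of the book's Tcl examples); it exhibits the same cavity behavior:

import tkinter as tk

root = tk.Tk()
# pack two frames on the bottom
one = tk.Frame(root, width=100, height=50, bg="gray50")
two = tk.Frame(root, width=40, height=40, bg="white")
one.pack(side="bottom")
two.pack(side="bottom")
# pack a third frame to the right: it lands in the cavity
# above the bottom-packed frames, not beside them
three = tk.Frame(root, width=20, height=20, bg="gray75")
three.pack(side="right")
root.mainloop()

Run either version and the third frame appears above the bottom-packed frames, not to their right. | <urn:uuid:3f34f929-740b-4940-aa47-d5ff559c0ef2> | 3.125 | 484 | Tutorial | Software Dev. | 65.472353 |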
Judy Turner's upward pointing icicle (Letters, 2 and 16 November) could be explained as follows.
Spider-web filaments are virtually invisible because our eyes hardly ever focus exactly on them.
Such a filament may have been anchored on a stone or something on the bottom of the birdbath, before it was refilled. Dewdrops would form on the web and eventually run down where they would freeze if the temperature difference near the ground was just right.
On the night in question, these conditions must have held for a considerable time, in order to form an icicle pointing upwards at the angle of the invisible spider web.
J. A. Briscoe Harrogate, North Yorkshire
| <urn:uuid:5b60a919-1dd8-4e4e-a6b4-d8edbb8fb9ee> | 3.671875 | 171 | Truncated | Science & Tech. | 48.079165 |
The arrival of a new dwarf planet in the solar system last week reminded me of the fierce argument over the definition of "planet" that broke out in Prague in 2006, at a meeting of the International Astronomical Union. If things had gone differently in Prague, the newcomer Makemake would now be considered in the same broad class as Earth or Jupiter, becoming the thirteenth planet of the solar system to be named.
At that meeting, the IAU started with a draft definition that included anything big enough for gravity to pull it into a roughly rounded shape, meaning that Pluto and similar largeish icy objects such as Makemake would have qualified. But the debate was eventually won by another camp, who argued that Pluto and its kind just formed a swarm of small debris that had never coalesced into planets proper. So today the solar system has only eight official planets, plus some dwarfs.
There are many astronomers and others who object to the IAU's decision in Prague, and would prefer to return to a definition that includes Pluto and probably all the other dwarf planets too. It is possible that they will get their way, changing the official definition at some future meeting of the IAU.
If that were to happen today, the thirteen named planets of the solar system would include Ceres and now Makemake. (I'm also counting the twin planet of Pluto-Charon as two. Although moons in general were sensibly excluded from planet status by the IAU's draft definition, Pluto's companion Charon was due to become half of a twin planet, for slightly technical reasons. It is not at present a dwarf planet; merely a moon.)
Perhaps there's no harm in having thirteen planets. Even the superstitious would soon be soothed, as astronomers will keep discovering more Pluto-like objects in the outer solar system. Eventually, we would probably have scores of planets.
It might be a problem, however, if you want to learn all their names and remember their order from the Sun. Before 2006, there were plenty of mnemonics for remembering the then nine planets in order - such as "My Very Educated Mother Just Served Us Nine Pizzas".
Abbreviated versions are fine for today's eight. But with thirteen or more, how well would mnemonics work? We would already have two Es and three Ms, and the situation would no doubt get worse, until people are trying to remember which of the four Qs represents the newly discovered planet Quetzalcoatl.
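A quick sketch to check those letter clashes (taking one plausible order out from the Sun; the trans-Neptunian order is fuzzy anyway):

planets13 = ["Mercury", "Venus", "Earth", "Mars", "Ceres", "Jupiter",
             "Saturn", "Uranus", "Neptune", "Pluto", "Charon",
             "Makemake", "Eris"]
print("".join(p[0] for p in planets13))  # MVEMCJSUNPCME: three Ms, two Es, two Cs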
The idea of "order out from the sun" would also become blurred, as the plutoids are liable to have overlapping orbits. (Though at least this would confuse some triskaidekaphobes
And it might be challenging to concoct a catchy phrase with the new letters.
... although I will reluctantly concede that someone else might be able to do better.

Stephen Battersby, New Scientist contributor
Labels: eris, makemake, planet, pluto | <urn:uuid:23b829c0-5909-4253-9ba2-b92036d2b1fc> | 3.421875 | 608 | Personal Blog | Science & Tech. | 45.519501 |
How To Compile Asm Code?
Posted 17 April 2007 - 03:22 PM
Posted 17 April 2007 - 03:58 PM
GCC can assemble (actually every compiler can)
from the man page
DESCRIPTION When you invoke GCC, it normally does preprocessing, compilation, assembly and linking. The ‘‘overall options’’ allow you to stop this process at an intermediate stage. For example, the -c option says not to run the linker. Then the output consists of object files output by the assembler.
Posted 17 April 2007 - 04:10 PM
I started writing some example code, but the following does a better job at explaining than I!
That guide uses AT&T syntax.
Search for "gcc inline intel asm" in google if you prefer Intel syntax (you should also add '--save-temps -masm=intel' to [Dev-Cpp >> Tools >> Compiler Options])
Something like this should suit Intel syntax purposes:
Posted 17 April 2007 - 04:18 PM
Posted 17 April 2007 - 04:28 PM
Posted 17 April 2007 - 05:02 PM
OMFG just use a fucking assembler
and asm files don't compile, they are assembled
Posted 17 April 2007 - 06:28 PM
http://www.masm32.com/ -- Microsoft Assembler, for creating 32-bit Windows-only applications in Assembly (MASM is the only assembler that I know of that does GUIs)
Posted 17 April 2007 - 07:27 PM | <urn:uuid:72bb9974-3685-4df2-a4f7-d26064935d67> | 2.703125 | 399 | Comment Section | Software Dev. | 62.02221 |
The DECLARE statement in a procedure is used to define a variable and its data type.
Understand with Example
This tutorial illustrates the use of the DECLARE statement in a stored procedure. To demonstrate, we create a procedure 'abc'. Its body is a BEGIN ... END block containing DECLARE statements that define three integer variables, x, y and z. SET assigns the value 10 to x and 20 to y, and then stores their computed sum in z.
select z : The SELECT z statement returns the sum of x and y, held in z.
delimiter $$
create procedure abc()
BEGIN
  DECLARE x int;
  DECLARE y int;
  DECLARE z int;
  SET x = 10;
  SET y = 20;
  SET z = x + y;
  SELECT z;
END$$
delimiter ;
To invoke the procedure, we use call abc(). The call returns the following result:
+------+ | z | +------+ | 30 | +------+
| <urn:uuid:b0a7baa0-8e22-446b-979a-1312d1bde0d9> | 3.453125 | 266 | Tutorial | Software Dev. | 57.561452 |
Paul Hooker: As a chemist, I am interested in the chemical composition of the lake and how it changes. At present my research is focused on the element selenium (Se). Though selenium is a nutritionally essential element, it has also proven to be toxic to animals and hazardous to humans...
Professor Paul Hooker says students are learning about the focus and dedication required to become good analytical chemists.
...and is known to bioaccumulate in living tissues, which can have many long-lasting effects for both short-term and long-term exposure. Utah had the largest release of selenium into the environment from 1987 to 1993, with a total of 696,515 pounds of selenium released to land and 1,578 pounds released to water. Se also occurs naturally in the seleniferous soils of Utah.
Environmentally, Se is a very dangerous and toxic substance. Even at very low concentrations, in the parts-per-billion range, the effects of Se contamination can be devastating. The best-documented episode occurred in the mid-1980s at the Kesterson Wildlife reserve in the San Joaquin Valley, California. This wetland reserve received the majority of its water from agricultural drainage. The water contained Se dissolved at about 50 ppb, but the effects of bioaccumulation resulted in fish with elevated levels of Se up to several hundred parts per million (compared to 1-10 ppm for "normal" fish). The result of this was a decimation of the bird life at the reserve, with chicks being born with severe deformities.
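To get a feel for the scale of that bioaccumulation, here is a quick sketch using the Kesterson figures quoted above (taking 300 ppm as an illustrative stand-in for "several hundred parts per million"):

water_se = 50e-9   # dissolved Se: 50 ppb as a mass fraction
fish_se = 300e-6   # fish tissue Se: 300 ppm as a mass fraction
print(fish_se / water_se)   # ~6000-fold bioconcentration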
Students in my research group collect samples of brine shrimp (Artemia franciscana) and water from the GSL to analyze for Se content. Brine shrimp are an important food source for the birds that use the GSL. We are presently conducting experiments in which GSL water samples containing brine shrimp have been spiked with increasing quantities of selenium, the brine shrimp then being analyzed for selenium bioaccumulation.
A secondary project just underway in conjunction with the National Audubon Society involves the clean-up of 117 acres of GSL shoreline close to Lee Creek. The vision for this area is to transform a severely degraded part of the GSL into a place of quiet and peace dedicated to wildlife and community education.
Researchers have measured GSL salinity as great as 27%, making it about 6 times as salty as sea water. | <urn:uuid:8330d4cd-9b3f-4d82-b8f7-6b66976ffb5e> | 3.1875 | 547 | Knowledge Article | Science & Tech. | 46.573075 |
It’s been a while since I’ve posted a jaw-dropping high-res picture from Mars, so how about this one: a gorgeous shot of frost coating dunes on the surface of the Red Planet?
[Oh yes, you want to click that to enaresenate.]
This picture was taken by the HiRISE camera on board the Mars Reconnaissance Orbiter, which takes extremely detailed images of the surface of the planet. It shows wind-driven sand dunes on Mars, rippling in a similar way as on Earth. The sunlight is coming from the upper left direction, and where the light hits the surface you can see the familiar reddish cast; that’s actually from very fine-grain dust laden with iron oxide — rust!
But in the shadows, where the Sun doesn't reach, it's cold enough that carbon dioxide in the Martian air freezes out, forming a thin layer of dry ice on the surface. In this image — where the colors have been enhanced so you can see the effects better — this shades the dunes blue. You can see the frost not just covering the dunes in general, but hiding in the troughs of the ripples too (which I think is why the sunward-facing parts of the dunes can look blue; that's from the ripple shadows). The non-color-enhanced version showing the entire dune region can be found here — and is stunning in its own right.
These dunes fascinate me. The sand on Mars is actually basaltic, making it look grey to the eye. Those grains are big enough that they don’t move as easily as the finer dust, and they pile up to form the big dunes, with the redder dust coating them. The color can change when frost forms, as in the picture above, but you also get incredibly dramatic and simply stunning patterns when dust devils — tornado-like vortices that form when wind blows over warm air rising off the surface — lift up the red dust and expose the grey basalt underneath. The swirling patterns are intricate and incredible, as you can see in this picture here (click to embiggen and get more details).
Pictures like this remind me viscerally that these objects we see in the sky are not just some distant lights, they are whole worlds. They have fantastic details and are as diverse and have complex interactive systems as any we find on Earth. This makes their study important, fascinating… and of course, astonishingly beautiful.
Image credit: NASA/JPL/University of Arizona. Tip o’ the heat shield to HiRISE on Twitter.
Check. This. Out: a perfectly-formed collapse pit on Mars that leads to an underground cavern!
Amazing! [Click to barsoomenate.]
This was taken by the Mars Reconnaissance Orbiter in July 2011. See the hole in the bottom? You can tell from the lighting that this is an underground opening to a cavern — a skylight. Quite a few of these have been found on Mars, actually. We see them on Earth and even on the Moon. Given the angle of the shadows, the vertical distance from the bottom of the pit to the floor of the cavern is about 20 meters (65 feet). Watch your step!
Here’s how we think skylights like this form. In the distant past, Mars was geologically active. Rivers of lava ran across the surface. If the surface of the lava hardens it can form a roof, allowing the lava underneath to continue flowing; these are called lava tubes and there are bazillions of them in Hawaii, for example. Eventually, the source of the lava chokes off and the lava flows away, leaving the empty tube underground. If the roof is thin in one spot it can collapse. Sometimes that just leaves a hole, but apparently in this case it was under a sand field. Some of the sand must have fallen into the chamber below and eventually blown away, leaving the pit and the hole. The pit is located not too far from Pavonis Mons, a known (long-dead) Martian volcano.
The hole is about 35 meters (115 feet) across, so the pit is about 175 meters (nearly 600 feet) across the rim. I love how it sits in an otherwise nearly featureless sand field; the contrast is beautiful. In the high-res image you can see boulders perched on the pit wall, having rolled part of the way down as well. The inside of the pit has lines and furrows that are instantly recognizable to anyone who has tried to dig a hole at the beach and had sand continually flow down from the rim.
It would be incredible to see something like this up close. It’s possible eventually someone will: such lava tubes would make good homes for future Mars explorers; they’d be protected from sand storms, temperature swings, and solar radiation (which is worse than for us on Earth because Mars doesn’t have a strong magnetic field to protect it).
… but you couldn’t pay me enough to go inside one of those. I have no desire to be slowly digested over ten thousand years.
Image credit: NASA/JPL/University of Arizona. Tip o’ the light saber to reddit.
For the past few years, tantalizing evidence has been found that Mars — thought to be long dead, dry, and lifeless — may have pockets of water just beneath the surface. To be clear, we know there’s water on Mars, in the form of ice. We see ice in the polar caps, and we’ve seen it revealed under the surface by small meteorite impacts.
The question is, is there liquid water?
New images by the Mars Reconnaissance Orbiter bring us a step closer to answering that question. A series of pictures of the 300 km (180 mile) wide Newton crater taken over the course of several years show dark deposits on the crater wall which change predictably with the seasons, clearly affiliated with some sort of material flowing downslope:
[Click to barsoomenate.]
The picture above shows Newton’s crater wall. It’s pretty steep, with about a 35° slope, and the dark deposits are labeled. This crater is located in the southern mid-latitudes of Mars, and this part of the crater faces north. That’s critical! Since it faces toward the equator, that means it’s facing the Sun in the summer, and so these deposits appear when the temperatures get warm.
NASA has created several animated gifs (too big to embed here) that show the growth and retreat of these features over time. You can easily see how these dark features change.
In the past, similar things have been seen in gullies on Mars. It’s not clear those are from water, since frozen carbon dioxide can also be thawing out and forming them. In those cases, the flows were seen on the cold-facing sides of crater walls, making it less likely they’re from water. These new formations are on the warm-facing side, making it more likely they are from water.
So what’s going on?
Sometimes, I see an image and do a double-take. This picture sure caused one:
[Click to barsoomenate.]
If I told you those were bacteria under a microscope, you might believe me for a minute or two. But actually, those are sand dunes on Mars!
Yup. It’s funny how bizarre and alien Mars can be. What you’re seeing in this image from the Mars Reconnaissance Orbiter HiRISE camera is actually two different kinds of sand: the dark stuff in the big dunes is actually made of grains of gray basaltic sand. They’re heavy and pile up into dunes. The ripply pinkish stuff between the dunes is made of smaller grains of sand laden with iron oxide — rust! The wind can shape those grains more easily, so they can form more gentle, smaller wavelike patterns. This is also why dust devils on Mars leave such amazing and intricate patterns.
Still, those dunes really look like microbes… and hey, wait a second. There is a set of characteristics that living things share: the ability to consume, excrete, multiply, and show complexity. Sand dunes consume, in a way: the wind brings in more sand to build them up. They excrete, too, by losing sand. They can grow, and split in half, making more. And in point of fact, they do show emergent complex behavior.
Maybe the dunes share more than just appearances with bacteria…could sand dunes actually be [dun dun dunnnnnn] alive*? | <urn:uuid:45907a7e-0aaf-4136-85d4-9ec1c5fbb8a6> | 2.796875 | 1,834 | Personal Blog | Science & Tech. | 59.782584 |
Among the operating-system services that Core Foundation abstracts is memory allocation. It uses allocators for this purpose.
Allocators are opaque objects that allocate and deallocate memory for you. You never have to allocate, reallocate, or deallocate memory directly for Core Foundation objects—and rarely should you. You pass allocators into functions that create objects; these functions have “Create” embedded in their names, for example, CFStringCreateWithPascalString. The creation functions use the allocators to allocate memory for the objects they create.
The allocator is associated with the object through its life span. If reallocation of memory is necessary, the object uses the allocator for that purpose and when the object needs to be deallocated, the allocator is used for the object’s deallocation. The allocator is also used to create any objects required by the originally created object. Some functions also let you pass in allocators for special purposes, such as deallocating the memory of temporary buffers.
Core Foundation allows you to create your own custom allocators. Core Foundation also provides a system allocator and initially sets this allocator to be the default one for the current thread. (There is one default allocator per thread.) You can set a custom allocator to be the default for a thread at any time in your code. However, the system allocator is a good general-purpose allocator that should be sufficient for almost all circumstances. Custom allocators might be necessary in special cases, such as in certain situations on Mac OS 9, or as bulk allocators when performance is an issue. Except for these rare occasions, you should neither use custom allocators nor set them as the default, especially in libraries.
For more on allocators and, specifically, information on creating custom allocators, see “Creating Custom Allocators.”
© 2009 Apple Inc. All Rights Reserved. (Last updated: 2009-10-21) | <urn:uuid:7d3c6c04-c4e9-4bbe-9b09-92ad5341abb7> | 2.71875 | 409 | Documentation | Software Dev. | 28.95874 |
Instead, these scientists were talking about chemistry — not just of the ocean water but of ocean animals themselves: their cells, tissues, and body fluids. They were noting the increased costs of living that come as a result of elevated carbon dioxide inside the body and how this added cost stresses marine life and has led to massive extinctions in the past. They were issuing a warning about the potential for unabated elevated carbon dioxide to threaten directly the survival of all marine species.
The direct link between increased carbon dioxide concentrations in oceans and increased internal stress on marine creatures is largely absent from the “climate change” dialogue. It used to be enough to say “global warming” was the problem. But increasing concentrations of carbon dioxide in the atmosphere — and in oceans — have been causing more varied and faster effects than previously imagined. In fact, massive changes underway in the ocean are not captured with the word “climate.”
Certainly, warming itself is a colossal issue. Record world temperatures, melting sea ice, thermally expanding ocean waters, sea level rise, engorged rain clouds in some regions and droughts in others — “climate change” has been used to encapsulate all these effects.
Yet now, that term is beginning to hamper our understanding — and our conversation. Beyond the well-defined relationship between carbon dioxide and temperature lie various chemical reactions that have profound implications for life. These other issues are starving for attention, partly because they are not about warming, the atmosphere, or the climate. But they are about the same carbon dioxide.
Almost half of all the carbon dioxide emitted since industrialization has been absorbed by the ocean. When carbon dioxide reacts with water, it forms carbonic acid, and releases more hydrogen ions into the sea, lowering pH and causing “acidification” of the ocean. Further, these hydrogen ions quickly bind with carbonate ions. This deprives animals like hard corals and certain mollusks and plankton of the raw material for their calcium carbonate shells and skeletons. This may ultimately cause the world’s oceans to become corrosive to such animals, and coral reefs to dissolve.
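Written out as standard carbonate chemistry (my summary of the reactions described above, not the article's own notation):

$$\mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^-}, \qquad \mathrm{H^+ + CO_3^{2-} \rightleftharpoons HCO_3^-}$$

The second reaction is what removes the carbonate ions that shell-building organisms need.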
Calcification rates (think of this as the rate at which a coral, say, can grow, based on its ability to construct its skeleton) decline in relation to carbonate concentrations. Models predict that coldwater corals may lose 70 percent of their habitat by 2100 with some waters becoming corrosive as early as 2020. Calcification rates in tropical waters have already declined by 6 to 11 percent and are expected to decline by as much as 17 to 35 percent by the end of the century. Some models predict concentrations of carbonate ions will be too low for reef growth by as early as 2065.
It turns out that carbon dioxide molecules not only penetrate the ocean; they also infiltrate the bodies of marine animals, permeating cell membranes and disrupting fundamental biological functions. Carbon dioxide is a small, uncharged gaseous molecule that in the ocean environment can rapidly cross cell membranes. Once inside the cell, the same acidification process that happens in ocean water occurs within the cell. Higher concentrations of CO2 alter the acid-base balance within cells and disrupt many cellular functions, from oxygen transport to protein synthesis. The more CO2 inside body tissues, the higher the cost of living for an organism. In its energy budget, the cost of dealing with CO2 comes directly out of energy that would otherwise have gone for other basic functions such as metabolism, growth, immune function, and making babies.
We call this cost “metabolic drag.” It has long-term consequences for survival because even if not acutely fatal, over time reduced growth, disease resistance, and reproductive output threaten the viability and resilience of populations. And they’re going to need resilience.
Too much carbon in the ocean particularly threatens creatures living in the deep sea. The depths of the ocean comprise one of Earth's most stable environments. Its animals are adapted to that stability. They don't handle change well. Scientists predict that the pH change in the deep sea will be greater than in the ocean's other regions.
Experiments show that elevated carbon dioxide affects various cellular and bodily functions, such as the ability to make proteins, transport oxygen through the body, or growth rates. But how will the ever-increasing CO2 affect individual animals, populations, and species in coming decades? No one knows. Significant harm appears possible, but evaluating long-term effects will require more work. Research is also critically needed to evaluate large-scale carbon disposal (sequestration) that would cause very high concentrations in deep ocean water (in some experiments, pH declined by more than 1 unit).
Consideration of these known and potential effects of elevated carbon dioxide levels has not been part of the “climate” debate, and it will be difficult to raise or understand these effects if people continue to believe that the problem only involves climate. The language lags behind the science, and needs to catch up.
No term in use captures the full array of issues from warming and climate to the chemistry changes throughout the ocean and inside every marine creature. Not “climate change,” certainly not the almost-quaintly catastrophic “global warming.” Those aren’t even the problem; they’re symptoms. Behind all these symptoms is the root of the problem. We call it “the carbon burden.”
So hear this: It is not just about climate. It is, and always has been, about the carbon. We need to place carbon back in the center of the equation. From atmosphere to ocean to cell, the carbon burden is the problem. It’s the heaviest load anyone’s ever placed on an unsuspecting planet, and the more we learn, the more its dimensions appear ever more staggering. | <urn:uuid:44b3300b-1fca-4ceb-8aef-076fd213a413> | 3.6875 | 1,201 | Nonfiction Writing | Science & Tech. | 36.837603 |
Gravity Recovery and Climate Experiment
Artist's concept of the twin GRACE satellites
Operator: NASA and German Aerospace Center (DLR)
Major contractors: Space Systems/Loral and Astrium GmbH
Mission type: Earth orbiter
Launch date: March 17, 2002
Launch vehicle: Rockot (a three-stage rocket), from Plesetsk Cosmodrome, Russia
Mission duration: Five-year primary mission, extended
Mass: 487 kilograms (1,074 lb) each
The Gravity Recovery And Climate Experiment (GRACE), a joint mission of NASA and the German Aerospace Center, has been making detailed measurements of Earth's gravity field since its launch in March 2002.
Gravity is determined by mass. By measuring gravity, GRACE shows how mass is distributed around the planet and how it varies over time. Data from the GRACE satellites is an important tool for studying Earth's ocean, geology, and climate.
GRACE is a collaborative endeavor involving the Center for Space Research at the University of Texas, Austin; NASA's Jet Propulsion Laboratory, Pasadena, Calif.; the German Space Agency and Germany's National Research Center for Geosciences, Potsdam. The Jet Propulsion Laboratory is responsible for the overall mission management under the NASA ESSP program.
The principal investigator is Dr. Byron Tapley of the University of Texas Center for Space Research, and the co-principal investigator is Dr. Christoph Reigber of the GeoForschungsZentrum (GFZ) Potsdam.
Discoveries and applications
The monthly gravity maps generated by GRACE are up to 1,000 times more accurate than previous maps, substantially improving the accuracy of many techniques used by oceanographers, hydrologists, glaciologists, geologists and other scientists to study phenomena that influence climate.
From the thinning of ice sheets to the flow of water through aquifers and the slow currents of magma inside Earth, GRACE's measurements of the mass involved help scientists better understand these important natural processes.
Among the first important applications for GRACE data was to improve the understanding of global ocean circulation. The hills and valleys in the ocean's surface are due to currents and variations in Earth's gravity field. GRACE enables separation of those two effects to better measure ocean currents and their effect on climate. GRACE data are also critical in helping to determine the cause of sea level rise, whether it is the result of mass being added to the ocean, from melting glaciers, for example, or from thermal expansion of warming water or changes in salinity.
As of February 2012, the data obtained by GRACE are the most precise gravimetric data yet recorded: they have been used to re-analyse data obtained from the LAGEOS experiment to try to measure the relativistic frame-dragging effect. In 2006, a team of researchers led by Ralph von Frese and Laramie Potts used GRACE data to discover the 480-kilometer (300 mi) wide Wilkes Land crater in Antarctica, which probably formed about 250 million years ago. GRACE has been used to map the hydrologic cycle in the Amazon River basin and the location and magnitude of post-glacial rebound from changes in the free air gravity anomaly. GRACE data have also been used to analyze the shifts in the Earth's crust caused by the earthquake that created the 2004 Indian Ocean tsunami. Scientists have recently developed a new way to calculate ocean bottom pressure—as important to oceanographers as atmospheric pressure is to meteorologists—using GRACE data.
How GRACE works
GRACE is the first Earth-monitoring mission in the history of space flight whose key measurement is not derived from electromagnetic waves either reflected off, emitted by, or transmitted through Earth's surface and/or atmosphere. Instead, the mission uses a microwave ranging system to accurately measure changes in the speed and distance between two identical spacecraft flying in a polar orbit about 220 kilometers (140 mi) apart, 500 kilometers (310 mi) above Earth. The ranging system is sensitive enough to detect separation changes as small as 10 micrometres (approximately one-tenth the width of a human hair) over a distance of 220 kilometers.
As the twin GRACE satellites circle the globe 15 times a day, they sense minute variations in Earth's gravitational pull. When the first satellite passes over a region of slightly stronger gravity, a gravity anomaly, it is pulled slightly ahead of the trailing satellite. This causes the distance between the satellites to increase. The first spacecraft then passes the anomaly, and slows down again; meanwhile the following spacecraft accelerates, then decelerates over the same point.
By measuring the constantly changing distance between the two satellites and combining that data with precise positioning measurements from Global Positioning System (GPS) instruments, scientists can construct a detailed map of Earth's gravity.
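To get a feel for the principle, here is a one-dimensional toy in Python (ours, not mission software; every number is invented) in which the leading satellite is pulled ahead as the pair crosses a mass anomaly, stretching the inter-satellite range before it relaxes again:

import math

def anomaly_pull(x, center=0.0, strength=1e-5, width=50e3):
    # small extra acceleration toward a mass anomaly at `center`
    return strength * math.copysign(1.0, center - x) * math.exp(-((x - center) / width) ** 2)

dt = 1.0                        # time step, s
pos = [-300e3, -520e3]          # leading and trailing satellite positions, m
vel = [7500.0, 7500.0]          # along-track speeds, m/s
for step in range(201):
    for i in (0, 1):
        vel[i] += anomaly_pull(pos[i]) * dt
        pos[i] += vel[i] * dt
    if step % 50 == 0:
        print("range deviation: %+.4f m" % (pos[0] - pos[1] - 220e3))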
The two satellites (nicknamed "Tom" and "Jerry") constantly maintain a two-way microwave-ranging link between them. Fine distance measurements are made by comparing frequency shifts of the link. As a cross-check, the vehicles measure their own movements using accelerometers. All of this information is then downloaded to ground stations. To establish baseline positions and fulfill housekeeping functions, the satellites also use star cameras, magnetometers, and GPS receivers. The GRACE vehicles also have optical corner reflectors to enable laser ranging from ground stations, bridging the range between spacecraft positions and Doppler ranges.
The spacecraft were manufactured by Astrium of Germany, using its "Flexbus" platform. The microwave RF systems, and attitude determination and control system algorithms were provided by Space Systems/Loral. The star cameras used to measure the spacecraft attitude were provided by Technical University of Denmark. The instrument computer along with a highly precise BlackJack GPS receiver and digital signal processing system has been provided by JPL in Pasadena. The highly precise accelerometer that is needed to separate atmospheric and solar wind effects from the gravitation data was manufactured by ONERA.
See also
- Gravity Field and Steady-State Ocean Circulation Explorer (GOCE, launched March 2009)
- Gravity Recovery and Interior Laboratory (GRAIL, a similar probe intended to map the moon)
- "Grace Space Twins Set to Team Up to Track Earth's Water and Gravity". NASA/JPL.
- "Mission Overview". University of Texas. 19 Nov. 2008. Retrieved 2009-07-30.
- "GRACE Orbit Lifetime Prediction".
- "New Gravity Mission on Track to Map Earth's Shifty Mass". NASA/JPL.
- "NASA Missions Help Dissect Sea Level Rise". NASA/JPL.
- "Big Bang in Antarctica--Killer Crater Found Under Ice". Ohio State University.
- Chang, Kenneth (August 8, 2006). "Before the ’04 Tsunami, an Earthquake So Violent It Even Shook Gravity". The New York Times. Retrieved May 4, 2010.
- "Gravity data sheds new light on ocean, climate". NASA/JPL.
- "GRACE Launch Press Kit". NASA/JPL.
- GRACE mission home page (principal investigator)
- GRACE Tellus (JPL)
- GRACE mission home page (co-principal investigator; in German)
- GRACE Mission Profile by NASA's Solar System Exploration
- Science@NASA article about GRACE
- Weighing Earth's Water from Space Estimating ground water using GRACE (written for non-scientists)
- Report by BBC showing early results
- GPS World Discussion of instrumentation | <urn:uuid:30d63474-9370-4a65-a8c9-72c92703ec5a> | 3.609375 | 1,576 | Knowledge Article | Science & Tech. | 30.806236 |
Estimates of Ground-Water Recharge Based on Streamflow-Hydrograph Methods: Pennsylvania
Entry ID: USGS_OFR_2005_1333
Abstract: This study, completed by the U.S. Geological Survey (USGS) in cooperation with
the Pennsylvania Department of Conservation and Natural Resources, Bureau of
Topographic and Geologic Survey (T&GS), provides estimates of ground-water
recharge for watersheds throughout Pennsylvania computed by use of two
automated streamflow-hydrograph-analysis methods--PART and RORA. The PART
computer program uses a ... hydrograph-separation technique to divide the
streamflow hydrograph into components of direct runoff and base flow. Base flow
can be a useful approximation of recharge if losses and interbasin transfers of
ground water are minimal. The RORA computer program uses a recession-curve
displacement technique to estimate ground-water recharge from each storm period
indicated on the streamflow hydrograph.
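(PART and RORA themselves are documented by the USGS; purely to illustrate the general idea of
hydrograph separation, the Python sketch below applies a generic one-parameter recursive filter.
It is not the USGS algorithm, and the flow values are invented.)

def separate_baseflow(flow, alpha=0.925):
    # peel off the fast-responding "quickflow"; treat the remainder as base flow
    quick = [0.0]
    for i in range(1, len(flow)):
        q = alpha * quick[-1] + 0.5 * (1 + alpha) * (flow[i] - flow[i - 1])
        quick.append(max(q, 0.0))
    return [min(max(f - q, 0.0), f) for f, q in zip(flow, quick)]

daily_flow = [10, 12, 45, 80, 60, 40, 28, 20, 16, 13, 11, 10]  # invented daily values
print(separate_baseflow(daily_flow))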
Recharge estimates were made using streamflow records collected during
1885-2001 from 197 active and inactive streamflow-gaging stations in
Pennsylvania where streamflow is relatively unaffected by regulation. Estimates
of mean-annual recharge in Pennsylvania computed by the use of PART ranged from
5.8 to 26.6 inches; estimates from RORA ranged from 7.7 to 29.3 inches.
Estimates from the RORA program were about 2 inches greater than those derived
from the PART program.
Mean-monthly recharge was computed from the RORA program and was reported as a
percentage of mean-annual recharge. On the basis of this analysis, the major
ground-water recharge period in Pennsylvania typically is November through May;
the greatest monthly recharge typically occurs in March.
[Summary provided by the USGS.]
(Click for Interactive Map)
Data Set Citation
Dataset Originator/Creator: Dennis W. Risser, Randall W. Conger, James E. Ulrich, and Michael P. Asmussen
Dataset Title: Estimates of Ground-Water Recharge Based on Streamflow-Hydrograph Methods: Pennsylvania
Dataset Series Name: Open File Report 2005-1333
Dataset Publisher: U.S. Geological Survey
Data Presentation Form: report, maps
Online Resource: http://pubs.usgs.gov/of/2005/1333/
Start Date: 1885-01-01
Stop Date: 2001-12-31
TERRESTRIAL HYDROSPHERE > GROUND WATER > AQUIFERS
TERRESTRIAL HYDROSPHERE > GROUND WATER > GROUND WATER DISCHARGE/FLOW
TERRESTRIAL HYDROSPHERE > GROUND WATER > DRAINAGE
TERRESTRIAL HYDROSPHERE > GROUND WATER > SPRINGS
TERRESTRIAL HYDROSPHERE > GROUND WATER > WATER TABLE
TERRESTRIAL HYDROSPHERE > SURFACE WATER > AQUIFER RECHARGE
TERRESTRIAL HYDROSPHERE > SURFACE WATER > DISCHARGE/FLOW
TERRESTRIAL HYDROSPHERE > SURFACE WATER > DRAINAGE
TERRESTRIAL HYDROSPHERE > SURFACE WATER > HYDROPATTERN
TERRESTRIAL HYDROSPHERE > SURFACE WATER > RIVERS/STREAMS
TERRESTRIAL HYDROSPHERE > SURFACE WATER > WATER CHANNELS
TERRESTRIAL HYDROSPHERE > SURFACE WATER > WATERSHED CHARACTERISTICS
TERRESTRIAL HYDROSPHERE > SURFACE WATER > WETLANDS
ISO Topic Category
Quality: Refer to Sowers et al. (2005) for quality assessment information.
Data Set Progress
Email: todd.sowers5 at gmail.com, sowers at geosc.psu.edu
237 Deike Building, Dept. of Geosciences Penn State University
City: University Park
Province or State: Pennsylvania
Postal Code: 16802
Role: TECHNICAL CONTACT
Phone: +1 (303) 492-6199
Fax: +1 (303) 492-2468
Email: nsidc at nsidc.org
National Snow and Ice Data Center CIRES, 449 UCB University of Colorado
Province or State: CO
Postal Code: 80309-0449
Leckrone, K. J., and J. M. Hayes. 1998. Water-induced errors in continuous flow carbon isotope ratio mass spectrometry. Analytical Chemistry 70, 2737-2744.
Miller, J. B., K. A. Mack, R. Dissly, J. W. C. White, E. J. Dlugokencky, and P. Tans. 2002. Development of analytical methods and measurements of 13C/12C in atmospheric CH4 from the NOAA Climate Monitoring and Diagnostics Laboratory Global ... Air Sampling Network. Journal of Geophysical Research 107, 4178-4193.
Rice, A., A. A. Gotoh, H. O. Aijie, and S. C. Tyler. 2001. High-precision continuous-flow measurements of d13C and dD of atmospheric CH4 Analytical Chemistry 73, 4104-4110.
Santrock, J., S. A. Studley, and J. M. Hayes. 1985. Isotopic analyses based on the mass spectrum of carbon dioxide Analytical Chemistry 57, 1444-1448.
Sowers, T., S. Bernard, O. Aballain, J. Chappellaz, J.-M. Barnola, and T. Marik. 2005. Records of the d13C of atmospheric CH4 over the last 2 centuries as recorded in Antarctic snow and ice. Global Biogeochemical Cycles 19, doi:10.1029/2004GB002408.
Creation and Review Dates
DIF Creation Date: 2007-07-09
Last DIF Revision Date: 2012-05-14 | <urn:uuid:c3cc6f90-c1a6-462f-add0-7759fd4c084d> | 3.234375 | 1,332 | Structured Data | Science & Tech. | 47.042604 |
You'll need a piece of paper suitable for folding (clean unlined paper or patty paper).
You should begin to see a curve appearing on your paper. The lines you have created are tangent to the curve. Compare your curve to that of the person next to you. In what ways are they the same? In what ways are they different?
What happens when the point moves closer to the bottom edge? What happens when it moves further away?
Look back at your paper construction. Fold along one of the lines and make a mark on the bottom edge where it hits the point. This gives you two points to work with. How is the fold related to those two points? (You might want to try this on a fresh piece of paper to get a clearer view.) Construct this line in Sketchpad.
Now that you have your line, make sure it's selected and go to the Display menu and choose Trace Line. If you drag D along AB, you'll get your curve!
Move C closer to AB. Drag D again.
We could continue doing this, but it would be really slick to have D animated along AB so that we didn't have to drag it every time. To do this, select D and the segment AB (not the endpoints, just the segment). From the Edit menu, choose Action Button->Animation. Click Animate. There is now an Animate button on the screen. Double click this. Pretty cool, eh? (You can click anywhere on the screen to get the animation to stop.)
What happens when C is closer to AB? What happens when it's far away? Is this the same thing you found with the paper folding?
It would be a lot easier to see the differences if our traced curve didn't disappear every time we clicked on the screen to move something. Instead of tracing the line, we can construct the locus. Select the line and point D and choose Locus from the Construct menu. Now you can move C or AB and the envelope will stay in place.
What we would really like, though, is the actual points on the curve. The lines we're tracing are the tangents to the curve, and the points we want lie on those lines somewhere. Let's see if we can find them.
The definition of a parabola states that a curve is a parabola if, given any point on the curve, the distance from that point to the focus is the same as the distance from that point to the directrix. Our segment is our directrix here, and the point C is the focus. How can we find the points that fulfill this criterion? We know they're on the lines somewhere. Here's a hint: think about how you find the distance from any point to a line. Here's another hint: it only takes one more line in your sketch. (You might get together with your neighbor to figure this part out.)
How did you find the point? Explain why you think this point is equidistant from the focus and the directrix.
We could trace this point to see the curve, but it's still going to go away when we click since the path is only traced, not constructed. It would be awesome to have this curve as an object that moves dynamically when we drag D.
To do this, hide the objects in the sketch that you don't need (keep the focus, directrix, D, and your new point). Select the new point and D and choose Locus from the Construct menu - we want the locus of the new point when D moves. Your screen should look something like this:
Now move C. What happens to the curve? Move AB. What happens if you move AB above C?
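(Away from Sketchpad, you can verify the same construction numerically. In the Python sketch below, which is our own and uses arbitrary coordinates, the directrix lies on the x-axis and the focus is C = (cx, cy); for each position d of D, the parabola point sits on the vertical line through D at the height that makes it equidistant from C and the directrix:)

def parabola_point(cx, cy, d):
    # P = (d, y) must satisfy |P - C| = y, its distance to the directrix
    # (assumes the focus is above the directrix: cy > 0):
    # (d - cx)**2 + (y - cy)**2 = y**2  =>  y = ((d - cx)**2 + cy**2) / (2*cy)
    return d, ((d - cx) ** 2 + cy ** 2) / (2.0 * cy)

for d in (-4, -2, 0, 2, 4):
    print(parabola_point(0.0, 2.0, d))  # focus two units above the directrix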
Open a new sketch and choose Create Axes from the Graph menu. We're going to graph the equation f(x)=ax^2 + b. To do this, we need three variables, a, b, and x. We're going to use the x-coordinates of three points on the x-axis to serve as the variables.
Put three points on the x-axis. Make sure they aren't attached to the unit point or the origin - they need to be free (aside from being attached to the x-axis). Label these points A, B, and X. Select the three points and choose Coordinates from the Measure menu. We need the x-coordinates of these points. Double-click on the coordinates of A. In the calculator, hold down the Values menu, slide down to Point A, and slide over to x. Click okay. You've now got the x-coordinate of point A. Move A along the axis to make sure this number changes. Do the same for B and X. Hide the coordinates of the three points (don't delete them!).
Now we need the equation for f(x), ax^2 + b. Select the x-coordinates for A, B, and X and open the calculator. Under the values menu, choose x[A]. Click the * for multiplication. Choose x[X] from the values menu. Click ^ and 2. Click +. Choose x[B] and click okay. You should have something like this:
Again, move A, B, and X to ensure that everything changes.
Choose x[X] and your equation and go to the Graph menu and choose Plot as (x, y). You should get a point on your screen. If you can't see your point, move A, B, and X closer to the origin - your value for f(x) may be too large to see on your axes.
Let's construct the locus of your new point as X moves along the x-axis. Choose the new point and X. Choose Locus from the Construct menu. Awesome!
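(Outside Sketchpad, a few lines of Python, ours and with arbitrary sample values, tabulate the same family of curves so you can predict what dragging A and B should do:)

def f(a, b, x):
    return a * x ** 2 + b

for a, b in ((1, 0), (1, 2), (0.5, 0), (-1, 0), (0, 3)):
    print("a=%4.1f b=%d:" % (a, b), [f(a, b, x) for x in (-2, -1, 0, 1, 2)])
# b slides the curve up or down; a widens, narrows, or (when negative) flips it,
# and a = 0 flattens it into a horizontal line.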
What happens when you move B?
What happens when you move A? What if A is negative? What if it's zero?
What happens if, instead of picking x[X] and the equation, you switch the order - choose the equation and x[X]. Now plot the point and construct the locus. What happens to the shape?
We could have gotten three variables in a number of ways. The first time I did this construction I used the lengths of line segments. What's the disadvantage of doing this?
Try repeating the construct with the equation f(x)=a(x-h)^2 + k. How do a, h, and k affect the shape and location of the parabola? | <urn:uuid:dfdfd78f-c0e0-4a60-9569-e3533f71e243> | 3.65625 | 1,347 | Tutorial | Science & Tech. | 82.508238 |
ABCD is a quadrilateral with AB horizontal and CD vertical. Angle BAD = angle BCD = 40 degrees and AB=BC=5. Show that angle CDB = angle ADB = 25 degrees.
Here is a diagram I made which I hope is correct:
By observing alternate angles, I can see that angle ADC = 50 degrees but how do I show that angle CDB = angle ADB? I thought about proving congruence but is this possible here? | <urn:uuid:d7c761b9-ed0f-4511-94ac-99838b141c83> | 3.03125 | 98 | Q&A Forum | Science & Tech. | 74.45862 |
Does the marine biosphere mix the ocean?
Dewar, W.K.; Bingham, R.J.; Iverson, R.L.; Nowacek, D.P.; St. Laurent, L.C.; Wiebe, P.H. 2006. Does the marine biosphere mix the ocean? Journal of Marine Research, 64 (4), 541-561. doi:10.1357/002224006778715720. Full text not available from this repository.
Ocean mixing is thought to control the climatically important oceanic overturning circulation. Here we argue that the marine biosphere, by a mechanism like the bioturbation occurring in marine sediments, mixes the oceans as effectively as the winds and tides. This statement derives ultimately from an estimated 62.7 terawatts of chemical power provided to the marine environment in net primary production. Various approaches argue that something like 1% (0.63 terawatts) of this power is invested in aphotic-ocean mechanical energy, a rate comparable to wind and tidal inputs.
|NORA Subject Terms:||Marine Sciences|
|Date made live:||21 Aug 2012 14:27|
Actions (login required) | <urn:uuid:ad68c9df-46c9-4bde-a152-17f6f1cdf6a9> | 2.703125 | 248 | Academic Writing | Science & Tech. | 56.893207 |
Research from the University of Southampton, which examines how dolphins might process their sonar signals, could provide a new system for man-made sonar to detect targets, such as sea mines, in bubbly water.
When hunting prey, dolphins have been observed to blow 'bubble nets' around schools of fish, which force the fish to cluster together, making them easier for the dolphins to pick off. However, such bubble nets would confound the best man-made sonar because the strong scattering by the bubbles generates 'clutter' in the sonar image, which cannot be distinguished from the true target.
Characterised from an engineering perspective, a dolphin's sonar is not superior to the best man-made sonar. Therefore, in blowing bubble nets, dolphins are either 'blinding' their echolocation sense when hunting or they have a facility absent in man-made sonar.
The study by Professor Tim Leighton, from the University's Institute of Sound and Vibration Research (ISVR), and colleagues examined whether there is a way by which dolphins might process their sonar signals to distinguish between targets and clutter in bubbly water.
In the study, published in Proceedings of the Royal Society A, Professor Leighton along with Professor Paul White and student Gim Hwa Chua used echolocation pulses of a type that dolphins emit, but processed them using nonlinear mathematics instead of the standard way of processing sonar returns. This Biased Pulse Summation Sonar (BiaPSS) reduced the effect of clutter by relying on the variation in click amplitude, such as that which occurs when a dolphin emits a sequence of clicks. Read more here... | <urn:uuid:9627b261-2680-44f9-91a6-666f90f3eba3> | 3.4375 | 348 | Truncated | Science & Tech. | 37.395935 |
Git is a version control system, like "track changes" for code. It's fast, powerful, and easy to use. But the thing that's really special about Git is the way it empowers people to collaborate.
All the projects on drupal.org are stored in Git, and there are millions of public projects hosted by GitHub.com. Whether you are a developer who wants to contribute to an open source project, a freelancer who needs to know how to maintain a patched module, or a member of a team collaborating on a single code base, Git is a tool worth having in your toolbox.
This blog post walks through some basic Git workflows for collaborative development. If you've heard people talk about "decentralized" or "distributed" version control, but you haven't seen it in action, or you're not sure what's so cool about it, this post is for you. To follow along, you just need to have Git installed on your computer. Some basic experience with version control (Git or other) is helpful, but not required.
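In command-line terms, the round trip we're about to walk through looks roughly like this (a sketch only; the URL, branch name, and commit message are invented):

git clone https://example.com/alice/rhymes.git   # Bob gets his own full copy
cd rhymes
git checkout -b formatting                       # a topic branch for his changes
# ...edit the rhyme files...
git add .
git commit -m "Improve formatting of the rhymes"
git push origin formatting                       # publish the branch for Alice to review

git checkout master                              # later: pick up Alice's accepted changes
git pull origin master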
Here's our scenario: Alice starts a project called "rhymes": a simple Git repo with a bunch of Alice's favorite nursery rhymes stored in it. Bob uses the project and wants to contribute to it. Specifically, he wants to contribute a few new rhymes, and help improve formatting to make the documents easier to read. Alice will review Bob's changes, accept some of them, then make her own changes to the project. Then Bob needs to sync up his copy of the project with Alice's. | <urn:uuid:df561503-7283-423d-8885-68af221883ce> | 2.90625 | 336 | Personal Blog | Software Dev. | 61.477426
House spider (Tegenaria domestica)
House spider fact file
House spider description
The house spider (Tegenaria domestica) is probably the best known and perhaps the most hated of the British spiders, and is often encountered trapped in the bath (2). The house spider is fairly large and hairy with long legs. It varies in colour from pale to dark brown (4), with variable sooty markings on the abdomen, although some individuals can be uniform pale yellowish or grey. Male and female house spiders are similar in appearance, but males have a more slender abdomen and longer legs (3).
House spider biology
Although often detested, the house spider provides a service wherever it occurs, reducing the number of flies and other unwelcome insects from houses. It makes a flat sheet-like silk web, typically with a tubular retreat at one corner. These webs can become fairly large when undisturbed (2). When an insect falls onto the web, the spider dashes out from its retreat, seizes the prey and returns to the retreat to consume the meal (5).
Male house spiders are usually seen more often than females, as they wander widely in search of a mate (5). After a male has found a female's web he will stay with her for a number of weeks, mating with her repeatedly during this time. He then dies and the female eats him; the nutrients within the male contribute to the development of his young (6).
The word 'spider' derives from the Old English word 'spithra', which means 'spinner'. Spider webs have been used to heal wounds and staunch blood flow for many years (7).
House spider range
Found all over the world, the house spider is common and widespread in Britain and Europe (3).
House spider habitat
The house spider is found in houses and other buildings, including garden sheds (2).
House spider status
The house spider is widespread and common (3).
House spider threats
The house spider is not currently threatened.
House spider conservation
Conservation action has not been targeted at the common house spider.
Find out more
Discover more about British spiders:
British Arachnological Society:
Information authenticated by Dr Peter Merrett of the British Arachnological Society:
- In arthropods (crustaceans, insects and arachnids) the abdomen is the hind region of the body, which is usually segmented to a degree (but not visibly in most spiders). In crustacea (e.g. crabs) some of the limbs attach to the abdomen; in insects the limbs are attached to the thorax (the part of the body nearest to the head) and not the abdomen. In vertebrates the abdomen is the part of the body that contains the internal organs (except the heart and lungs).
National Biodiversity Network Species Dictionary (January, 2003)
- Roberts, M.J. (1993) The Spiders of Great Britain and Ireland. Harley Books, Colchester.
- Roberts, M.J. (1995) Collins Field Guide- Spiders of Britain and Northern Europe. Harper Collins Publishers, London.
- Sterry, P. (1997) Complete British Wildlife Photo Guide. Harper Collins Publishers, London.
- Nichols, D., Cooke, J. and Whiteley, D. (1971) The Oxford Book of Invertebrates. Oxford University Press, Oxford.
BBC Wildfacts - House Spider (March, 2003)
- Buczacki, S. (2002) Fauna Britannica. Hamlyn, London.
| <urn:uuid:f6c91790-8b32-40d6-bb6c-a921117a0ffa> | 3.109375 | 1,349 | Knowledge Article | Science & Tech. | 39.178095
In the Wink of a Star
By Anna Edmonds
At about 2,400 light-years' distance and located in a cluster of young stars called NGC 2264 in the winter constellation of Monoceros, the star KH 15D shines at full brightness for most of each 48.36-day cycle and then "winks" at astronomers (their whimsical term) for about 18 days. The star's eclipse is remarkable, not in its regularity, but in its duration. A single object like another star, a planet, or a moon could not be big enough or move slowly enough to act as the intervening body. Therefore astronomers are positing a collection of smaller objects such as dust grains, rocks, and/or asteroids that could be strung together in an orbiting band or disk. Perhaps it would be like an interrupted ring around Saturn.
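(Using just the two figures above, a few lines of Python sketch the pattern of the light curve; this is an illustration of the timing, not a model of the data:)

PERIOD, ECLIPSE = 48.36, 18.0              # days: full cycle, faded portion

def state(day):
    return "winking" if (day % PERIOD) < ECLIPSE else "bright"

for day in range(0, 97, 6):
    print(day, state(day))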
Mac Gardiner drew our attention to the article by John Noble Wilford in the New York Times (June 20, 2002) concerning KH 15D. A wealth of similar material can also be found on the Internet. This recent spate of interest is because astronomers are debating seriously whether this "young" star (it's only about 3 million years old) and its occluding dust can lead us to discoveries about how our "middle-aged" solar system (4.5 billion years old) was formed.
In order to concentrate the studies of KH 15D, the research team led by graduate student Catrina Hamilton and Professor William Herbst of Wesleyan University (Middletown, Conn.) organized an international observing campaign for the fall, winter, and spring of 2001-2002 involving astronomers in Tashkent, Munich, Tautenberg, Heidelberg and Tel Aviv, plus five additional US locations. This geographic spread permitted observations around the world as fully as possible. After the five-year preliminary study at Wesleyan, the effort resulted in the announcement in June about KH 15D at a meeting of extrasolar planet astronomers at the Carnegie Institution in Washington, DC.
According to Hamilton and Herbst, the data gathered this past year not only confirm the star's basic pattern of eclipse, but also suggest that there may be more than one clump of dust. Even more amazing, they believe that in the last several months they have seen the effects of increasing clumping. Two clumps seem to have slightly different shapes, suggesting that the orbital period may be 96.72 days rather than the originally calculated 48.36 days. Besides these clumps, there is an unusual color difference: when KH 15D is faint it is bluer than when it is bright. On Earth when the light gets dim it generally turns reddish because dust particles scatter blue light better than red. What may be happening with KH 15D is that, rather than the star itself, astronomers may be observing light reflected off the solid objects of the band. Astronomers are studying the sizes of the clumps, trying to find what gravitational force could be keeping the clumps organized. Another question related to the sizes is whether or not they are large enough to make the main star wobble. One possibility suggested is that the orbiting objects might cause "density wave" ripples in the band, some of which would extend high enough above the band to block the light from the star at the regular intervals.
The interaction of the orbiting band and its clumps is central to the study of extrasolar planetary systems. KH 15D is too far away for astronomers to know yet how massive it or its clumps are. However, they believe the clumps are closer to the star than Mercury is to our Sun. This situation is typical of many massive planets orbiting other stars that have been found since 1995. The puzzle with this proximity is how the building blocks of planets—the elements—could be condensed or trapped in the intense heat so near a star. Could the clumps have "migrated" in from the outer regions, propelled by the density waves? The astronomers acknowledge that all this needs much more study.
“In summary,” writes Professor Herbst, “it appears that nature has provided us with a unique opportunity to study the early evolution of a disk by orienting KH 15D in just the right way to show us its clumpy disk. If we looked at any other angle, we would either see no eclipse at all, or, perhaps, a very faint star all the time. No one can predict exactly how KH 15D will evolve with time…. Time will tell---but, of course, only if we keep watching.” For the winking of a star. | <urn:uuid:45eb9233-5e58-433b-8841-9aec2b2b4823> | 3.484375 | 944 | Nonfiction Writing | Science & Tech. | 54.130811
New findings suggest that, contrary to current theory, life for young salmon is most treacherous in the sea.
Survival of young salmon has previously been thought to be most affected by disturbances and environmental quality in the freshwater lakes, streams, and rivers where they spend their early and late life stages. However, researchers from the Pacific Ocean Shelf Tracking (POST) project of the Census of Marine Life have found that up to 40% of young salmon that make it to the sea, perish soon afterward, and never return to freshwater to spawn. By implanting acoustic tracking tags in young salmon, researchers were able to track their movements from the freshwater environment, out to sea during their saltwater phase, and then back again as they return to spawn. The results of these efforts show a vastly different picture than conventional biological theories previously suggested.
Formerly, the freshwater phase of salmon was thought to be the most critical component of their life cycle. As Pacific salmon populations have declined in recent decades, huge effort has been placed in conserving, restoring, and buffering the freshwater habitats that support salmon populations in an effort to bolster these populations. These new data, however, are suggesting that survival during the saltwater phase of a salmon's life cycle is more difficult.
By comparing two similar freshwater systems in close proximity to one another, one being relatively "natural" and untouched and one being heavily dammed and "altered", researchers were able to show that young salmon survived the trek to the sea in relatively equal numbers. This suggests that the alterations to the freshwater environment did not affect their survival. The data also show that almost half of the salmon that made it to the sea did not return to spawn. Thus, as this research produces more data, and the importance of the ocean's role in salmon life cycles becomes more clear, sea survival may become a critical issue in the future management of Pacific salmon.
- What: New findings in the survival of Pacific salmon may change views on conservation and manangement.
- Who: POST Scientists --> D. Welch, S. McKinley, E.L. Rechisky, M.C. Melnychuk, C.J. Walters, C. Schreck, B. Clemens, and R. Bison
- When: Ongoing
- Where: Fraser River system in Southern British Columbia, Canada and the Columbia River system in the Pacific Northwest United States
- How: Acoustic tracking arrays and implanted tags allow researchers to track movements of young salmon during their migrations.
- References: Submitted for Publication | <urn:uuid:62265713-b40b-43c4-8df1-0ce88a61739d> | 3.59375 | 518 | Knowledge Article | Science & Tech. | 41.64712 |
/NEW/ ENDANGERED SPECIES INFO – Here’s a website that’ll help you find and/or research endangered animal species. Individual species are listed alphabetically by name. Learn which are the world’s most endangered mammals and much more.
/NEW/ TEACHING CLIMATE CHANGE
· * From the US Environmental Protection Agency, “A Student’s Guide to Global Climate Change” http://www.epa.gov/climatechange/kids/index.html
* From the University of Chicago, “Open Climate 101,” a series of lectures that have been videotaped covering “Heat and Light,” “The Greenhouse Effect,” “Ice and Water Feedback,” and more. Start with this great explanatory article by the New York Times’ Andy Revkin: http://dotearth.blogs.nytimes.com/2012/01/16/climate-101-online-and-free/
/NEW/ GREENANSWERS.COM – If you’ve got an environmental question – any question at all – this site should be able to give you an answer. Just type in your question and tey’ll email you an answer. http://greenanswers.com/
/NEW/ TED-Ed – From the “Ted Talk” folks on YouTube, a channel dedicated to “Awesome Nature.” Learn how “Life Begins in the Ocean,” “Evolution in a Big City,” and more.
/NEW/ AP Science Exam Review – Are you thinking about taking AP Environmental Science in high school? Get a feel for what the course includes in this multi-part video on YouTube, which is designed to help students review for the College Board AP Environmental Science Exam.
A guide to understanding tropical rainforests and the people, plants and animals that call rainforests home. From Mongabay.com, this tropical rainforest guide also helps kids like you understand why rainforests are important, why they are disappearing, and how they can be saved. http://kids.mongabay.com/
From the National Environmental Education Foundation, “Classroom Earth” is an online guide for teachers who want to include environmental learning in their daily teaching. There are news stories, a resource library, and “success stories” about how other teachers have put environmental education on the daily meu of learning. http://www.classroomearth.org/
“Collective Learning” is a “garden based” learning program for students from kindergarten through high school. The basic goal of the various age-appropriate programs is to teach by showing students how to design and grow organic gardens. http://www.collectiveroots.org/whats-growing/garden-based-learning
From the California-based nonprofit group Zilowatt, an interactive set of learning tools for teaching energy conservation to kids in all grades. Zilowatt provides numerous online resources including videos, downloadable posters and fact sheets. Zilowatt’s stated goal: to help teachers and staff “creatively engage students and maximize those ‘ah-ha’ moments.” http://www.zilowatt.org/
From the National Energy Education Development project (NEED), this is a complete resource and curriculum guide for teaching students of every age the fundamentals of energy. Both “green” energy and fossil fuels are included. Teachers can pick and choose the information they need to conduct classroom experiments that bring the subject of energy to life. http://www.need.org/needpdf/Catalog.pdf
The nonprofit group Facing the Future of Seattle, WA offers a number of free downloads for students of all ages emphasizing global sustainability, climate change and conservation. http://www.facingthefuture.org/Curriculum/DownloadFreeCurriculum/tabid/114/Default.aspx
From Keep America Beautiful, the nonprofit education organization, this page of classroom activities includes games, a checklist of ideas for what kids can do to protect the environment, tips for preventing litter in your community, and more. http://www.kab.org/site/PageServer?pagename=kids_zone
“Clean” energy is a multi-faceted topic, but this guide will simplify things and instantly increase students’ understanding. For older students, this guide from Kachan & Co., a cleantech analysis firm, breaks clean energy into eight basic areas: renewable energy; energy storage; energy efficiency; transportation; air & environment; clean industry; water, and agriculture. As valuable for an adult as it is for a kid.
The goal of Ireland’s Green Schools program is to make every student a participant in the greening of his school through activities in and out of the classroom. A complete rundown of what the program entails can be found at: http://www.greenschoolsireland.org/Index.aspx?Site_ID=1&Item_ID=28
From the Wisconsin Center for Environmental Education, this website has oodles of free online materials that will help students, teachers, school administrators, and parents understand and teach the basics of sustainability and energy. There are games and downloadable fact sheets on a wealth of subjects including the carbon cycle, renewable forms of energy, energy efficiency and more. http://www.uwsp.edu/cnr/wcee/keep/
This You Tube video especially for teachers provides a list of resources for teaching “Green Energy” in the classroom. http://www.youtube.com/watch?v=ti3Mgv49dUA
From the US National Renewable Energy Laboratory, everything you need to know about how to build a model solar car, plus dates and places where students can compete against other schools. http://www.nrel.gov/education/jss_hfc.html
Duke University in the US has put together a “Green Book” for incoming students that provides a great guide for how to live “sustainably” in any community. Food, recycling, transportation and more are all explained in ways students of every age can benefit.
Looking for a green job? The US state of Mississippi has a “green jobs” website that should be one of the first places you check out. The site is a great primer on how many different kinds of green jobs there are and what you need to get one. Specifically for work in Mississippi, this site is good background for finding a green job wherever you live.
From Lake County, Illinois comes a free online library of 17 videos grouped into water, recycling, energy conservation and strategic planning. Each is suitable for showing in a classroom or public library.
Especially for teachers, California’s “Education and Environment Initiative” gives schools interested in teaching environmental literacy a complete curriculum, plus an opportunity to interact with other educators. You just have to enter your name and where you teach.
From Great Britain’s National Energy Foundation comes a comprehensive list of links on renewable energy and climate change suitable for students of an age. Find the BBC’s “climate change game,” the Danish Energy Agency’s “renewable energy activities” (in English), and much, much more.
The Australian Sustainable Schools Initiative offers a slew of online resources to help teach general sustainability, energy, climate change, waste, water, biodiversity and more. http://www.environment.gov.au/education/aussi/educational-resources.html
From recycledevon.org in England comes a pair of short videos that make it easy to understand the process of aluminum and glass recycling.
Now that Maryland has become first US state to require high school graduates to be environmentally literate, here is the state’s proposed “environmental literacy curriculum”
From Second Nature, a non-profit dedicated to sustainable building, comes a curriculum designed to teach sustainable building concepts to all students. http://www.campusgreenbuilder.org/CurriculumChapters | <urn:uuid:4281dc49-cb99-4f04-92c3-603b74daceab> | 3.609375 | 1,701 | Content Listing | Science & Tech. | 42.117639 |
I’ve been looking at the ASP.NET MVC framework for the past two weeks, and it has occurred to me that some of the simple things we may want to do when creating a web application may seem confusing to someone new to ASP.NET MVC – for example the task of creating a DropDownList control on a form. ASP.NET MVC provides a number of ‘HTML Helpers’ which we can easily use to construct the form items. ‘DropDownList’ is one of these HTML helpers we can use.
Let’s create a simple example form using some of these HTML helpers. To begin a form, we can use a helper, we just need to add this code to our View:
<% using (Html.BeginForm()) { %>
    // Form data will go here
<% } %>
This creates the basic form code for us – no need to explicitly write any HTML code. Before adding the DropDownList control, we need to decide where we want to get the data which will bind to the list. We can either hard code the items, or use LINQ to SQL to grab them from a database at runtime.
Method 1 – Hardcoding the form items
With this approach, we just add the items to a list, and pass this list to ViewData, so we can access it from the View:
Text = "Apple",
Value = "1"
Text = "Banana",
Value = "2",
Selected = true
Text = "Orange",
Value = "3"
ViewData["DDLItems"] = items;
Then, to actually display the DropDownList, we’d just need to add a single line to our View code, utilizing the DropDownList HTML helper:
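<%= Html.DropDownList("DDLItems") %>

The helper finds the list in ViewData by the key we used above ("DDLItems"), so no further wiring is needed.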
Method 2 – Using LINQ to SQL to get the data at runtime
We could also retrieve the list data from a database table at runtime using LINQ to SQL. In order for this approach to work, you will need to have generated LINQ to SQL classes for your database using the wizard in Visual Studio. Then we can easily write the code to retrieve the data:
var db = new TransDBDataContext();
IEnumerable<SelectListItem> languages = db.trans_SupportedLanguages
    .Select(c => new SelectListItem
    {
        Value = Convert.ToString(c.ID),
        Text = c.Name.ToString()
    });
ViewData["SupportedLanguages"] = languages;
Again, to display the DropDownList, we’d just need to add a single line of code to the View:
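<%= Html.DropDownList("SupportedLanguages") %>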
From the above, you can see how easy it is to render form items using the HTML helpers provided by ASP.NET MVC.
For a full list of the helpers, check out the MSDN documentation here. | <urn:uuid:1c75fbf5-6b07-425e-8a01-92fd2db561d0> | 2.796875 | 582 | Personal Blog | Software Dev. | 63.823408 |
Imagine how far you could kick a soccer ball if gravity didn't pull it toward the ground. On Earth we take gravity for granted -- especially when it comes to playing sports. When we shoot a basketball, we expect it to go down toward the ground after it goes through the hoop. If we turn a cartwheel, we know that we will land shortly in a place near our starting point. Most sports involve keeping our feet on the ground most of the time. The constant force of gravity on Earth gives us different results than we would get if we played sports in space.
Gravity is present on the International Space Station, but astronauts orbiting Earth experience "microgravity." The prefix "micro" comes directly from the Greek "micros," which means "small." The pull of gravity seems very small compared to what astronauts feel on Earth. The reason is because everything on the orbiting station is in a state of free fall. The station and everything inside it are constantly falling toward Earth. So, if an astronaut drops something, it does not fall to the floor because the floor is falling, too. Everything seems to float.
Microgravity isn't the only factor that changes the game in space. Consider Newton's Laws of Motion, which describe relationships between motion, matter and force.
| Law | Also known as | Description |
|---|---|---|
| First law of motion | Law of inertia | An object that is not moving will not move until a force makes it move. An object that is moving will continue to move at a constant speed and direction until a force causes it to change. |
| Second law of motion | | The force of an object equals its mass times its acceleration. |
| Third law of motion | Law of action and reaction | For every action there is an equal and opposite reaction. |
We're used to the way Newton's laws work on Earth under the force of gravity. Newton's laws also apply in microgravity, and sometimes it's easier to see how they work when you watch demonstrations on the space station. You can actually see an object at rest suspended in the air until a force causes it to move. In the DIY Podcast Sports Demo video, one of the ways astronaut Clayton Anderson demonstrates Newton's laws is by swinging a bat to hit a baseball floating in midair.
The laws of aerodynamics are another set of physical laws that apply to sports. Aerodynamics is the study of the way objects move through air. You can see the effects of these laws as you watch the trajectory of a ball. The laws of aerodynamics come into play when we set guidelines and rules for a sport -- such as the three-point line in basketball or the yard line for a football kickoff.
Another scientific principle that may be easier to understand once you see Anderson demonstrate it in space is the conservation of angular momentum. This law says an object will spin more slowly as its moment of inertia (its resistance to rotation) increases, and faster as its moment of inertia decreases. An example of angular momentum is a spinning ice skater. When skaters tuck their arms in tightly (decreasing the moment of inertia), rotational speed increases. But, when skaters extend their arms (increasing the moment of inertia), the speed at which the skaters spin slows. On the space station, it is easy to show the conservation of angular momentum while tumbling through the air. Don't try that on Earth!
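(A few lines of arithmetic, with invented numbers, make the skater example concrete:)

I_arms_out, spin = 4.0, 2.0        # moment of inertia (kg*m^2) and spin rate (rad/s)
L = I_arms_out * spin              # angular momentum; conserved with no outside torque
for I in (4.0, 2.0, 1.0):          # tucking the arms in lowers the moment of inertia
    print("I = %.1f -> spin = %.1f rad/s" % (I, L / I))
# halving the moment of inertia doubles the spin rate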
Even though the same scientific laws and principles apply on Earth and the space station, it's fun to see them demonstrated in microgravity -- especially with the same sports we play on Earth. | <urn:uuid:1266216c-1f38-4c25-a274-57ec4650d9db> | 4.15625 | 710 | Knowledge Article | Science & Tech. | 58.564103 |
I think Stray covered most of what you asked, but I wanted to clear something up just a bit. Let's take humans as an example: our n (called the haploid number) equals 23. This means that there are a total of 23 unique chromosomes in a human gamete. Normal human non-reproductive cells (anything but sperm and ovum) are 2n, or diploid. This means each of the 23 unique chromosomes has a homologous partner. This is the whole reason why reproduction is possible. The cells responsible for making sperm or ovum, which are 2n (46 chromosomes, or 23 matching pairs), split into haploid cells (gametes), each containing one of each of the 23 chromosomes. When the male and female gametes meet, the haploid chromosomes combine and you get a fertilized ovum with the full diploid number.
Tetraploids can be produced by treating small, germinating seeds with chemicals that inhibit mitotic spindles. During a certain phase in cell division, all the chromosomes make a duplicate of themselves, so for a moment the cell is in a sense a tetraploid, as there are four copies of each chromosome. Normally, shortly after the doubling, little threads in the cell start pulling the chromosomes away from each other, which results in two daughter cells with a diploid chromosome number. What these chemicals, such as colchicine or oryzalin, do is in some form dissolve those little spindles that pull the chromosomes apart. This action causes the cell to stop and reset the process, leaving one cell with a tetraploid number of chromosomes, which then restarts the process and eventually splits into two tetraploid daughter cells.
Now I'm positive that any alteration to the human chromosome number is usually fatal, and I think it's the same case in the animal world, but for some reason plants can handle it much better. In fact, most of the time tetraploids are more vigorous than their diploid counterparts, but in a good percentage of cases, triploids may actually be the most vigorous of all. The huge downfall of triploids is that they are very difficult to breed. The reason is that at the point where the cells producing gametes start dividing into their haploid numbers, you have three sets of chromosomes to go into two cells, which often doesn't go nice and evenly, and you get plants that are genetic messes. In some very rare cases, certain gametes of a triploid may luckily end up with all three sets of chromosomes, giving you a triploid, or 3n, gamete. This gamete, when paired with a haploid (1n, or normally called n) gamete, gives you a tetraploid (4n) cell, which will then be able to breed in the future. I do really want to stress that the chances of getting even tetraploids out of a 3n x 2n cross are very small, and in the Phalaenopsis world, there is only one triploid hybrid that is known to make triploid gametes more often than the nearly-never of most other triploids, although there are still many cases where you get aneuploid (not being a whole number) seeds from it if you get anything at all.
If you didn't notice by the end of the first paragraph, I absolutely LOVE anything to do with chromosomes, genetics, or genes!
I really hope this helped, and if you have any further questions, ask away! | <urn:uuid:a58cd49b-6bb6-44c6-8f1b-1efa08459648> | 3.59375 | 751 | Q&A Forum | Science & Tech. | 45.319908 |
You may think that greenhouse effect is harmful to the environment because it is associated with global warming. The truth of the matter is that we need this phenomenon in order to live.
What Causes the Greenhouse Effect?
Some of the sun's rays that hit the earth are deflected by the ozone layer. The rest enter the atmosphere and are radiated back by the earth's surface as a slower-moving form of energy: infrared radiation. Greenhouse gases such as carbon dioxide, water vapour, ozone and methane absorb some of this heat and slow its escape from our planet.
Greenhouse gases regulate the climate by trapping heat, acting like a warm blanket that insulates our earth. This is what scientists call the "greenhouse effect". Without it, the temperature would be much cooler than it is, which would disrupt our ecosystem.
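(A rough energy-balance calculation, using standard textbook values rather than figures from this article, shows how large that warming is:)

sigma = 5.67e-8                  # Stefan-Boltzmann constant, W/(m^2 K^4)
S, albedo = 1361.0, 0.3          # solar constant (W/m^2) and reflected fraction
absorbed = S * (1 - albedo) / 4  # incoming sunlight averaged over the whole sphere
T = (absorbed / sigma) ** 0.25
print(round(T), "K")             # about 255 K (-18 C) with no greenhouse effect;
                                 # the observed global mean is roughly 288 K (15 C)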
How Do We Contribute to this Phenomenon?
We, humans, alter this natural process with the greenhouse gases we create. Combustion, such as the burning of materials in waste incineration, releases more of the gases such as carbon dioxide and water vapour. With more heat-trapping gases in the air, the earth's temperature rises. Another factor is the wide use of oil in industry and in vehicles that rely on this source of energy.
Nitrous oxide and methane levels may also increase with some illegal farming practices. Deforestation may also contribute to global warming, since it diminishes the number of trees and plants that take up carbon dioxide and release oxygen. These trees play a great role in maintaining a balance of these gases. The growing population may also contribute to global warming through the wide use of fuels for transportation, industry and home heating.
How Do We Reduce These Gases?
- Plant trees
To balance the amount of greenhouse gases in the atmosphere, it is important that we plant trees, which take up carbon dioxide and release the oxygen that we breathe. Deforested areas must be restored through reforestation. The authorities must be diligent in implementing laws that govern agricultural practices such as farming and logging. You can take part by supporting activities that aim to protect the environment, such as tree planting.
- Wise Use of Energy
Today, the growing number of car owners drives up the demand for oil, and many people drive or take a cab even for short trips. Air conditioning units, heaters, gas stoves and ovens also add greenhouse gases to the atmosphere. Responsible use of energy is important in preventing global warming.
- Environmental Awareness
It is vital that people be educated about greenhouse gases and global warming. We must be aware that we are altering natural processes in the ecosystem through everyday habits, and modify those habits accordingly. Imparting our knowledge about these topics may increase public awareness and persuade people to take part in the struggle to protect our planet. | <urn:uuid:dadf62e9-472a-45bc-bce6-f3af7184593b> | 3.875 | 615 | Knowledge Article | Science & Tech. | 42.260951
A parameter is declared with a name, followed by "as", followed by the type: paramname as type. If there is no "as type", then the type is assumed to be type object.
Callable Type + Event
Variable number of parameters
Boo allows you to call or declare methods that accept a variable (unknown) number of parameters.
You add an asterisk (*) before the parameter name to signify that it holds multiple parameter values. If there is no 'as type', the type is assumed to be an array of objects: (object). You can declare the type as any array type, for example (int) if your method only accepts int parameters.
Here is an example:
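(The original code sample was not preserved; here is a plausible reconstruction.)

```boo
def sum(*values as (int)) as int:
    total = 0
    for v in values:
        total += v
    return total

print sum(1, 2, 3)  # 6
print sum()         # 0
```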
Some boo builtins accept a variable number of parameters, like matrix() and ICallable.Call.
Add a "ref" keyword before the parameter name to make a parameter be passed by reference instead of by value. This allows you to change a variable's value outside of the context where it is being used. Some examples:
Basic byref example:
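(Again a reconstruction, since the original sample was lost:)

```boo
def swap(ref a as int, ref b as int):
    tmp = a
    a = b
    b = tmp

x = 1
y = 2
swap(x, y)
print x, y  # 2 1
```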
Wrapping a native method that takes a parameter by reference.
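A hedged sketch of what such a wrapper can look like, using .NET's int.TryParse (whose second argument is passed by reference in the BCL signature); treat the details as illustrative rather than canonical:

```boo
def parseOrZero(text as string) as int:
    value as int
    int.TryParse(text, value)  # TryParse fills value through its byref parameter
    return value

print parseOrZero("42")    # 42
print parseOrZero("oops")  # 0
```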
See also tests/testcases/integration/byref*.boo in the boo source distribution. | <urn:uuid:e0abba01-cfed-446a-b665-68b6381e284d> | 3.265625 | 253 | Documentation | Software Dev. | 39.889474 |
There are four distinct numeric types: plain integers, long integers, floating point numbers, and complex numbers. In addition, Booleans are a subtype of plain integers.
Plain integers (also just called integers) are implemented using long in C, which gives them at least 32 bits of precision (sys.maxint is always set to the maximum plain integer value for the current platform; the minimum value is -sys.maxint - 1). Long integers have unlimited precision.
Floating point numbers are implemented using double in C. All bets on their precision are off unless you happen to know the machine you are working with.
Complex numbers have a real and imaginary part, which are each implemented using double in C. To extract these parts from a complex number z, use z.real and z.imag.
Numbers are created by numeric literals or as the result of built-in functions and operators. Unadorned integer literals (including hex and octal numbers) yield plain integers unless the value they denote is too large to be represented as a plain integer, in which case they yield a long integer. Integer literals with an "L" or "l" suffix yield long integers ("L" is preferred because "1l" looks too much like eleven!). Numeric literals containing a decimal point or an exponent sign yield floating point numbers. Appending "j" or "J" to a numeric literal yields a complex number with a zero real part. A complex numeric literal is the sum of a real and an imaginary part.
Python fully supports mixed arithmetic: when a binary arithmetic operator has operands of different numeric types, the operand with the "narrower" type is widened to that of the other, where plain integer is narrower than long integer, which is narrower than floating point, which is narrower than complex. Comparisons between numbers of mixed type use the same rule. The constructors int(), long(), float(), and complex() can be used to produce numbers of a specific type.
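A short interactive sketch of these rules (mine, not part of the original documentation; Python 2 syntax):

```python
# the "narrower" operand is widened to the "wider" type
print 3 + 4.0        # int + float     -> 7.0
print 1 + 2j + 0.5   # float + complex -> (1.5+2j)
print 2 ** 100       # overflows a plain int, so Python yields a long
print int(7.9), long(3), float(3), complex(1, 2)
```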
All numeric types (except complex) support the following operations, sorted by ascending priority (operations in the same box have the same priority; all numeric operations have a higher priority than comparison operations):
|Operation|Result|Notes|
|x + y|sum of x and y||
|x - y|difference of x and y||
|x * y|product of x and y||
|x / y|quotient of x and y|(1)|
|x // y|(floored) quotient of x and y|(5)|
|abs(x)|absolute value or magnitude of x||
|int(x)|x converted to integer|(2)|
|long(x)|x converted to long integer|(2)|
|float(x)|x converted to floating point||
|complex(re, im)|a complex number with real part re, imaginary part im; im defaults to zero||
|c.conjugate()|conjugate of the complex number c||
|pow(x, y)|x to the power y||
|x ** y|x to the power y||
As a consequence of this rule, [1, 2] is considered equal to [1.0, 2.0], and similarly for tuples. | <urn:uuid:0cfea2fc-10ee-4195-b365-eae2ea739ddc> | 4.625 | 619 | Documentation | Software Dev. | 49.465094 |
I don't think there's any good evidence that things can actually travel faster than light. But in an expanding universe, the further apart two objects are, the faster they are moving apart (you can test this by drawing pairs of dots on a balloon and blowing it up).
So if two galaxies are moving apart from one another, each at 0.75 the speed of light, I suppose their relative speed of separation would be 1.5 times light-speed.
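For reference (my addition, not part of the original answer): special relativity's velocity-addition formula gives the speed either galaxy would actually measure for the other, and it never exceeds c:

$$w = \frac{u + v}{1 + uv/c^2} = \frac{0.75c + 0.75c}{1 + 0.75^2} = \frac{1.5c}{1.5625} \approx 0.96c.$$

The 1.5c figure describes only how fast the gap between them grows in a third observer's coordinates.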
But all that is mere conjecture on my part – I’m not an astrophysicist.
I answered a similar question earlier, and I’m going to say the same thing even though another student came back with a very complicated quote suggesting I should think again. (Health Warning: I Am Not A Physicist)
As you know, distant galaxies like our own Milky Way are carried along by the expansion of the Universe and move apart from every other galaxy. As you look at galaxies further and further away, they appear to be moving away from us faster and faster. So is it possible that they could eventually appear to be moving away from us faster than the speed of light? Don't panic: Einstein's theory is not broken! The galaxies themselves aren't actually moving very quickly through space; it's space itself which is expanding, and the galaxy is being carried along with it. As long as the galaxy doesn't try to move quickly through space, no physical laws are broken. | <urn:uuid:6729af24-0d71-4d56-b16f-a830feea15d1> | 3.671875 | 313 | Q&A Forum | Science & Tech. | 58.807433 |
Weirdest Object in the Solar System? July 16, 2009. Posted by stcescience in Astronomy, planets.
Tags: Astronomy, dwarf planet, Haumea, Solar System
Matt Bulman has his blog at: http://stcescience.wordpress.com/
Taken From newscientist.com
Astronomers have recently discovered one of the strangest objects, to date, in our solar system. This dwarf planet has virtually the same diameter as Pluto but is only about 1/3 its mass – meaning it actually looks more like a flattened cigar or pancake. Read more at: http://www.space.com/news/080919-fifth-dwarf-planet.html
The question then becomes: how does such an object form? It is no coincidence that nearly all planets and stars are spherical in shape. Objects tend to settle into the lowest energy state possible, which for celestial bodies means a sphere. This is because planets and stars have a very large gravitational force pulling inward from all directions, creating a "ceiling" or "roof" that is the same height in all directions (a sphere). How, then, do anomalies such as this one exist?
According to the article “The new dwarf planet has the same diameter as Pluto, but is much thinner, and contains about 32 percent of Pluto’s mass. Scientists suggest Haumea’s long, narrow shape arose from its rapid spin — it rotates about once every four hours.” In other words there are forces on this object other than just its gravitational pull. This is true of all celestial bodies; however it becomes much more apparent as objects begin to rotate very quickly.
Think of it much like throwing a clay pot. As you rapidly spin the clay in a circle, it begins to flatten and elongate, because material far from the axis needs an ever larger centripetal force to stay on its circular path. As the mass spins faster and faster, clay that can no longer be held on either flies off the spinning mass or spreads outward, flattening and elongating the pot.
Haumea’s formation would be much like that of a clay pot. While the dwarf planet has a gravitational force pulling inward in all directions, it is also spinning incredibly fast on its axis. So you could imagine that the mass is being pulled in and pushed out by two competing forces. However this gives rise to an even bigger question – why then is such a large body spinning so incredibly fast?
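A rough back-of-the-envelope check (mine, not the blog's): a self-gravitating body stays bound only while the centripetal acceleration needed at its equator is below its surface gravity, which sets a minimum rotation period

$$\omega^2 R \lesssim \frac{GM}{R^2} \quad\Longrightarrow\quad P_{\min} \approx 2\pi\sqrt{\frac{R^3}{GM}}.$$

Plugging in Haumea's approximate mass (about 4 × 10^21 kg) and a mean radius of roughly 700 km gives a minimum period on the order of two hours, so a four-hour spin really does put it near the regime where strong rotational distortion is expected.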
What’s even more interesting is the object’s name. According to the original article, “The object previously known as 2003 EL61 is now named Haumea, after the goddess of childbirth and fertility in Hawaiian mythology.”
Taken from: NASA, ESA, and A. Feild (STScI)
Haumea is one of the largest members of the relatively newly coined "Kuiper Belt," a large population of icy bodies extending out beyond Neptune's orbit. Through analysis of this region of space, astronomers have effectively demoted Pluto from full planet to simply the largest member of the region. The Kuiper Belt is a lot like the asteroid belt, only it is much larger, and its members are made primarily of ice rather than rock. Astronomers are discovering more and more Kuiper Belt members through closer analysis of our solar system.
Institute for Astronomy at the University of Hawaii faculty member David Jewitt is one such astronomer. Jewitt believes, “the Kuiper Belt holds significance for the study of the planetary system on at least two levels. First, it is likely that the Kuiper Belt objects are extremely primitive remnants from the early accretional phases of the solar system. The inner, dense parts of the pre-planetary disk condensed into the major planets, probably within a few millions to tens of millions of years. The outer parts were less dense, and accretion progressed slowly. Evidently, a great many small objects were formed. Second, it is widely believed that the Kuiper Belt is the source of the short-period comets. It acts as a reservoir for these bodies in the same way that the Oort Cloud acts as a reservoir for the long-period comets.” | <urn:uuid:a6358914-ae6a-4cc0-830e-0469217a662c> | 3.53125 | 901 | Personal Blog | Science & Tech. | 51.275523 |
The Tower of Hanoi is an ancient mathematical challenge. Working on the building blocks may help you to explain the patterns you notice.
Take any prime number greater than 3, square it and subtract one. Working on the building blocks will help you to explain what is special about your results.
What is the area of the quadrilateral APOQ? Working on the building blocks will give you some insights that may help you to work it out.
This group task allows you to search for arithmetic progressions in the prime numbers. How many of the challenges will you discover for yourself?
This problem is a sequence of linked mini-challenges leading up to the proof of a difficult final challenge, encouraging you to think mathematically. Starting with one of the mini-challenges, how. . . .
Explore the properties of matrix transformations with these 10 stimulating questions.
We really liked the way Alex approached the solution to this problem, using a mixture of numerical and pure methods. Go to last month's problems to see more solutions.
The third of three articles on the History of Trigonometry.
Members of the NRICH team are beginning to write blogs and this very short article is designed to put the reasoning behind this move in context.
The NRICH website is full of rich tasks and guidance. We want teachers to use what we have to offer with a real sense of what we mean by rich tasks and what that might imply about classroom practice. | <urn:uuid:a4b0d86a-d6ab-4b06-8d50-271d29874b44> | 3.25 | 298 | Content Listing | Science & Tech. | 57.244989 |
Deformation monitoring at Newberry
Ground deformation refers to any change in shape of the volcano, which can occur as a result of uplift or subsidence, stretching or contraction, or some combination of these types of movements. Leveling surveys measure vertical movements of the ground surface (uplift or subsidence). The GPS technique measures 3-dimensional movements and therefore is sensitive to both vertical and horizontal changes.
CVO conducted leveling surveys across Newberry Volcano in 1985, 1986, and 1994 for comparison to an initial USGS survey in 1931. The 1994 results indicated that the summit area of the volcano moved upward about 4 inches with respect to its base sometime between 1931 and 1994. Smaller surveys across the caldera floor in 1985 and 1986 showed that no uplift or subsidence occurred there during 1985-1994. The 1931 survey was less precise than the later ones, so the 1931-1994 uplift episode is uncertain. Lack of any measurable ground deformation in the summit area from 1985 to 1994, on the other hand, is well established.
Until 2011, Newberry Volcano was not continuously monitored for patterns of deformation. In 2002 and 2009, scientists deployed short-lived "campaign" GPS surveys that included measurements of ground position at 27 locations (additional information about these campaign surveys can be found on the Earthquake Hazards Program website). When comparing the data between the two years, only background tectonic deformation was observed (all stations moving NNW at about 4 mm/yr (0.16 in/yr)). In 2011 the Cascades Volcano Observatory installed 8 real-time stations that have joint seismic and deformation (GPS) monitoring instruments. Recent results from these new "continuous" GPS stations are consistent with low rates of deformation measured between 2002 and 2009. | <urn:uuid:0a27d4e2-4af8-4e1a-93fe-4980a16ad31d> | 3.53125 | 366 | Knowledge Article | Science & Tech. | 40.86014 |
Definite Clause Deduction
Tutorial Five (Supplementary): Syntax
For the most part, the syntax that the applet uses is the syntax used by CILog. However, it does not implement all of CILog's built-in features and functionality. There are three components in the language: terms, atoms, and clauses.
Terms: A term is a variable, a constant, or a compound term. A variable is a sequence of alphanumeric characters that starts with an uppercase letter. The only exception is "_", which is a variable that can unify with anything. A constant is either a sequence of alphanumeric characters starting with a lowercase letter or an integer. Finally, a compound term has the form f(t1,...,tn), where f is a function symbol (a sequence of alphanumeric characters starting with a lowercase letter) and the ti are terms. However, arithmetic expressions are also considered to be compound terms. Some atoms require arithmetic expressions as arguments. The infix functions "+", "-", "*", and "mod" are implemented, with the usual interpretation.
Atoms: Atoms, also called goals, are almost identical in syntax to compound terms. They are of the form p(t1,...,tn), where p is a predicate symbol (a sequence of alphanumeric characters starting with a lowercase letter) and the ti are terms. It is also possible to have a goal with no terms and only a predicate symbol. As with compound terms, there are also infix built-in predicates, such as the arithmetic comparison operators.
Clauses: Clauses, which are also called rules, are made up of two parts. The first part, the head, is simply a goal. Every clause must have a head, and must end with a period. The body of a clause is a conjunction of atoms g1 & ... & gn. Thus, the clause is either of the form h. or h <- b where h is the head of the clause and b is the body.
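A small illustrative knowledge base in this syntax (my example, not taken from the applet's documentation):

```
% Facts are clauses consisting of just a head.
parent(sam, tom).
parent(tom, ann).

% A rule: a head, "<-", then a body of atoms joined by "&".
grandparent(X, Z) <- parent(X, Y) & parent(Y, Z).
```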
| <urn:uuid:ba2bb99a-45a6-4049-ace2-2f53a848c196> | 3.40625 | 462 | Documentation | Software Dev. | 58.144632 |
This bestiary briefly introduces some of the species of extinct mammals discussed in various places in this site. Not all the mammals described in the bestiary are Pleistocene extinctions; some of the species described here died out as late as this century. Descriptions include possible causes for why the creature went extinct.
Many illustrations in this bestiary have been drawn for the first time, and represent the first-ever reconstructions of these creatures! You can find first-time reconstructions by looking for species names and illustrations marked with a special symbol.
You can look for a creature in the bestiary by name (common name, species name, or family name) or by geography (the geographic region where it once lived). | <urn:uuid:047fded3-8110-4a99-b8c0-1df2370270b7> | 3.09375 | 152 | Knowledge Article | Science & Tech. | 35.515115 |
Pointers and references differ in the following ways:
1. Pointers are used with the "*" and "->" operators, while references are used with ".".
2. A pointer can be re-assigned any number of times, while a reference cannot be reseated after it is bound at initialization.
3. A pointer can be NULL, while a reference must always refer to a valid object.
4. You can't take the address of a reference itself the way you can with a pointer; applying "&" to a reference yields the address of the referent.
5. There is no "reference arithmetic" (but you can take the address of the object referred to by a reference and do pointer arithmetic on it, as in &obj + 5).
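A minimal sketch (my example, not part of the original answer) illustrating points 2-4:

```cpp
#include <cstddef>
#include <iostream>

int main() {
    int a = 1, b = 2;

    int* p = &a;    // a pointer can be reseated...
    p = &b;
    p = NULL;       // ...and can be null

    int& r = a;     // a reference is bound once, at initialization
    r = b;          // this does NOT rebind r; it assigns b's value to a
    std::cout << a << '\n';          // prints 2

    int* q = &r;    // &r yields the address of a, the referent
    std::cout << (q == &a) << '\n';  // prints 1
}
```

| <urn:uuid:106cc59e-e181-4f1e-b961-20ffd940334f> | 2.8125 | 132 | Q&A Forum | Software Dev. | 61.077609 |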
Enzyme on Enzymes
Country: United States
Date: February 2006
How do enzymes break down enzymes?
Enzymes are simply proteins, so they can be broken down just like any other type of molecule. It doesn't matter whether the molecule is a carbohydrate (lactose by lactase), a fat (by lipases), or another protein (like steak, by proteases).
The mechanism of enzyme action is basically the same across the wide range of enzymes that can be found.
First, the enzyme must have an active site specific to the substrate it helps to change. This is a special area of the enzyme, usually a pocket or groove, that the substrate is complementary to and can fit into, just like a phone into its cradle.
Once the substrate is in the active site of the enzyme, the enzyme bends or twists the substrate into a position (conformation) that favors a chemical reaction taking place.
This is called lowering the activation energy of the substrate. After the reaction is done, the enzyme releases the substrate, which is now chemically different and is called the product. The enzyme is not changed during the reaction and is now unbound and free to participate in another reaction.
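To put "lowering the activation energy" in symbols (my addition, not part of the original answer): reaction rates follow the Arrhenius relation

$$k = A\,e^{-E_a/RT},$$

so even a modest reduction in the activation energy E_a raises the rate constant k exponentially, which is how enzymes achieve such enormous speed-ups.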
Our bodies use tons of enzymes to speed up (catalyze) the metabolic events in our cells that would otherwise occur so slowly that we would not be able to live! Thank goodness for enzymes!
Hope this helps and study hard-
Stephen A. Sardino Jr.
| <urn:uuid:77abda5b-6582-4018-9bb5-786a586d974c> | 3.828125 | 326 | Knowledge Article | Science & Tech. | 42.193571 |
Why Can't Steel Be See-Through?
Here is one that I have been trying to explain for some time but have been unable to find a satisfactory answer to: what makes something transparent? I understand about the index of refraction; however, it seems to me that if a substance such as silicon can be made into transparent glass, and a substance like petroleum can be made into transparent plastic, and so on, it stands to reason that something like steel could be made transparent as well. Why can't aluminum be made transparent, a la Scotty in Star Trek IV?
The main reason is the electrons. The electrons in a metal, which give it its electrical conductivity, cause the light to be reflected. Aluminum oxide can be transparent; aluminum cannot.
Yup. Glass is not pure silicon; it is silicon dioxide (a combination of silicon with oxygen). Plastic is not the same material as crude oil; its chemical composition has been totally changed, and carbon is notorious for being able to do wonderfully different things in different molecules. Aluminum and other elements just do not have that ability.
But so far we have explored only a very limited range of possible combinations of elements; there are undoubtedly wonderful materials that could be made in the future. Just think: there are 92 relatively stable elements, so the number of ordered pairs of elements is 92^2 = 8,464 (4,186 of them distinct unordered pairs), and you can try arbitrary combinations of any of them. The number of possible equal-part recipes drawn from all 92 elements is on the order of 2^92, an immensely large number. And then preparation technique and so on all enter in. This is why the whole field of materials science is so important these days.
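To make the counting explicit (my arithmetic, not the respondent's):

$$\binom{92}{2} = \frac{92\cdot 91}{2} = 4186, \qquad 2^{92} \approx 4.95\times 10^{27}.$$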
| <urn:uuid:aaded5e0-6c6a-4eec-a927-10939a4bc343> | 3.59375 | 379 | Personal Blog | Science & Tech. | 40.876636 |
Here is the answer.
The java.util.Map interface represents a mapping between a key and a value. The Map interface is not a subtype of the Collection interface, so it behaves a bit differently from the rest of the collection types: the interface starts off its own interface hierarchy for maintaining key-value associations.
The interface describes a mapping from keys to values, without duplicate keys, by definition. The Map interface provides three collection views, which allow a map's contents to be viewed as a set of keys, collection of values, or set of key-value mappings. The order of a map is defined as the order in which the iterators on the map's collection views return their elements.
The Map interface maps unique keys to values. A key is an object that you use to retrieve a value at a later date.
Given a key and a value, you can store the value in a Map object. After the value is stored, you can retrieve it by using its key. Several methods throw a NoSuchElementException when no items exist in the invoking map. A ClassCastException is thrown when an object is incompatible with the elements in a map. A NullPointerException is thrown if an attempt is made to use a null object where null is not allowed in the map. An UnsupportedOperationException is thrown when an attempt is made to change an unmodifiable map.
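A minimal sketch of this key-to-value behavior and the three collection views (my example, not from the original answer):

```java
import java.util.HashMap;
import java.util.Map;

public class MapDemo {
    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<String, Integer>();
        ages.put("alice", 30);  // store a value under a unique key
        ages.put("bob", 25);
        ages.put("alice", 31);  // re-putting a key replaces its old value

        System.out.println(ages.get("alice")); // 31
        System.out.println(ages.keySet());     // set-of-keys view
        System.out.println(ages.values());     // collection-of-values view
        System.out.println(ages.entrySet());   // set-of-mappings view
    }
}
```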
| <urn:uuid:4d6a0e1f-2c23-462e-9104-6ac060ef35b0> | 3.453125 | 347 | Q&A Forum | Software Dev. | 51.915575 |
Are There More Big Earthquakes Than There Used To Be?
How human activity affects the tectonic plates.
Read more about the Haiti earthquake in Slate.
A 7.0-magnitude earthquake struck Haiti on Tuesday, three days after a 6.5-magnitude quake rocked the extreme north of California. In total, there have been 270 earthquakes worldwide in the last week. Can humans affect the frequency of earthquakes, just like we've affected the global climate?
Not significantly. While certain activities, like mining, oil extraction, and dam-building can trigger an earthquake by changing the weight on tectonic plates or lubricating the joints between them, there is no evidence that humans have caused a quake of greater than 5.3 in magnitude. Major earthquakes like the one in Haiti are natural and unstoppable events. If the Earth's plates are going to shift and cause one of these disasters, there is nothing we can do.
The frequency of major earthquakes has remained fairly constant throughout recorded history. Since 1900, there have been approximately 18 earthquakes of 7.0 or greater magnitude per year. (One usually crosses the 8.0 barrier.) There are no data suggesting an upward trend in that rate. Seismologists are detecting more and more smaller quakes, but that phenomenon can be attributed to the quality and quantity of detection devices: Since 1931, the number of seismological stations worldwide has increased from 350 to more than 8,000.
Because earthquakes occur deep beneath the Earth's surface, most of the human activity capable of causing earthquakes involves digging or drilling. For example, oil companies sometimes inject water into their wells to force out the black gold under pressure. The water, pumped to a depth of thousands of feet, can decrease the friction holding two plates in place, causing a slip and a minor tremor.
The largest earthquake known to be triggered by people occurred four decades ago in Colorado. In 1962, the U.S. Army drilled a 12,000-foot-deep well to dispose of the waste from a chemical and conventional weapons factory in Commerce City. Over four years, the reservoir was filled with 165 million gallons of contaminated liquids. The Army stopped pumping when it realized that the process had caused a series of small-to-moderate tremors. One year later, a 5.3-magnitude quake caused more than $1 million in damage to the Denver metropolitan area.
As water accumulated behind Hoover Dam in the late 1930s, hundreds of small earthquakes shook the area. Some suspect that the 7.5-magnitude Koyna earthquake in India in 1967 was caused by a nearby reservoir, but most seismologists reject this theory. Many scientists now fear that the Three Gorges Dam in China poses a serious seismic risk. If the dam causes an earthquake that fractures the retaining wall, flooding would likely affect the millions of people living downstream.
Nuclear detonations are extremely unlikely to cause major earthquakes, contrary to Hollywood depictions, but subterranean tests can register small amounts of seismic activity. A 1968 nuclear test with the code name "Faultless" did manage to open up a new fault in the Nevada desert. The incident moved seismographic needles, but that was mostly attributable to the nuclear explosion itself rather than to any resulting tectonic activity.
| <urn:uuid:0439312c-2b44-4cea-bcf2-72bf0f262ea6> | 3.984375 | 695 | Truncated | Science & Tech. | 47.705 |
Phenomena, Comment and Notes
When a drop of rain carries a particle of dirt off the land and into the sea, there are repercussions from deep within Earth to the nearer reaches of space
- By Stearns A. Morse
- Smithsonian magazine, April 1996
When drops of rain fall, some of them flow downhill, each carrying a bit of dirt. As a result, over time the landscape changes from steep to shallow slopes. The land surface erodes away toward sea level. Like a boat in the water, Earth's continental crust displaces its own weight in the underlying layer of denser mantle rock. As its cargo of eroding soil and rock is thrown overboard, the crust loses weight and rises. To accommodate this change, some hot mantle rock must flow in beneath the continent, just as water flows in under a boat that is rising out of the water as its load is lightened. The only difference is the speed at which it happens: hot rock flows slowly, a finger's width a year.
The mantle rock that flows inward under the thinning continent must come from somewhere and, in turn, be replaced by other hot mantle rock. Where does it come from? We know that fresh mantle rock rises up at the mid-ocean ridges, where tectonic plates pull apart to release hot material coming up from below (Smithsonian, January and February 1975). Some of that mantle rock becomes oceanic crust, adding to the plates' trailing edges as they move apart. And some of it flows underneath the oceanic crust to fill the space being created by that lightening, rising continental crust. All this moving mantle rock is, in turn, replaced by the upward flow of hot rock from deeper in the planet. We face the astonishing fact that raindrops falling on land indirectly cause hot, flowing rock material to rise up from Earth's depths.
Until it nears the surface, the flowing rock we have seen in action is flowing in a plastic way, deforming like steel squeezed between rollers. But when hot rock rises from the interior under ocean ridges, it partly melts into a liquid as the pressure on it lessens. The melted rock travels through pores and cracks until it collects and rises and erupts in submarine volcanoes. The cooling lava, transferring its heat to the water, helps the Sun heat the ocean, powering the wind and producing the rain.
One drop of water; mantle flow; volcanism; rain. Loop closed. We are back at the beginning of the cycle.
Eventually, mantle rock flowing beneath the oceanic crust encounters, pushes and slides past the colder, more-rigid rock material of the continent. Something in that crust breaks, and there is an earthquake, because cold rocks break rather than flow. One drop of water. One Northridge earthquake.
The flowing mantle eventually cools and dives back down into the earth in subduction zones, producing more earthquakes and volcanoes as it does, especially in the "ring of fire" around the Pacific. No ocean floor is older than 180 million years; all of it goes down the tubes sooner or later.
If a piece of tectonic plate is subducted, its place at the surface is taken by new material somewhere else, at an ocean ridge. In North America we ride westward away from the Atlantic Ridge on the North Atlantic Plate; at the Ridge new rock is made from magmas that come up from hot mantle. Meanwhile, the western edge of the Pacific Plate slides down into the mantle. To paraphrase John Donne, no plate is an island, entire of itself. All motions are interrelated. One drop of water may set the whole thing going.
| <urn:uuid:27642190-2a60-4adc-afeb-e9b20d5db1a2> | 4.0625 | 776 | Truncated | Science & Tech. | 54.950714 |
The fires that are burning throughout the country offer a window into what we can expect in the future as the climate heats up. That grim assessment comes from Steve Running, a wildfire expert, ecologist and forestry professor at the University of Montana. Running was among the scores of scientists who, along with Al Gore, won the 2007 Nobel Peace Prize for their work on the Intergovernmental Panel on Climate Change.
Running's insight is so sought after these days that he has speaking engagements scheduled four years out, about one a week. The pace picked up in June, when people began turning to him for his perspective on the massive Western wildfires. As of Monday, there were 52 active fires across the country, most in the West, that have burned more than 900,000 acres, according to the National Interagency Fire Center. Nearly 1,000 homes have been destroyed in the West alone, and at least three deaths have been attributed to the fires.
InsideClimate News caught up with Running last weekend to get his views on what's going on now, and what we can expect in a world dominated by climate change.
ICN: How do the conditions this year compare to what we saw in 1988 with the Yellowstone fire? (That wildfire, the largest in Yellowstone National Park's history, burned nearly 800,000 acres and about a third of the park.)
Running: We have had many dry years in the West. But when we get an ignition, and then high winds (which is what happened in the Yellowstone fire and in the recent Colorado fires), the fires turn into blast furnaces. The wind pushes the flames into more fuel faster and injects more oxygen so the fires just explode. Think of when you blow on a sleepy campfire and it flares right up.
ICN: You have said there's nothing man can do to stop these runaway fires. Assuming we'll see more of them in a warmer future, how do we prepare?
Running: The big variable during fires is wind. If high wind kicks up, people need to simply get out of the way. For the longer term, the only variable we can control in advance is fuels. So thinning forests and cleaning out dead trees is the best thing we can do.
ICN: Much of the dead fuel that has accumulated has been attributed to pine beetles, which have killed millions of trees. Has climate change played a role in the beetle's population explosion?
Running: As a rough estimate, you need a few nights of -30 degrees once every five to 10 years (to keep pine beetles under control). Those really cold nights knock the population back, and they take years to recover. In the past, -30 degree winter night temperatures did occasionally occur.
ICN: How much have the wintertime lows been going up?
Running: Here in Montana, the absolute minimum temperature, the coldest night of the year, has gone up 10 degrees.
ICN: This year, we're seeing fire conditions in late June that normally exist in August. Is that a repeated pattern, or unique to this year? | <urn:uuid:82828003-5621-4f12-b629-f2aae871d52e> | 3.15625 | 626 | Audio Transcript | Science & Tech. | 58.147433 |
Last updated: October 12, 2011
|1. Gulf of Mexico|
This is a chromatogram of oil that came from the Gulf of Mexico, but you can readily see that it is different from oil that came from Santa Barbara, Calif. (slide 2), or from a different reservoir in the Gulf: the Macondo well (slide 3).
(Photos courtesy of Bob Nelson, Woods Hole Oceanographic Institution)
Specific compounds provide clues to what types of organisms and conditions went into making the oil. Bisnorhopanes show that this oil formed in low-oxygen ocean sediments. But oleanane reveals that some of the oil came from land sources.
The tall peaks on the left are diasteranes: bulky, nonpolar compounds that resist breaking down in the environment. A high fraction of trisnorneohopane indicates that the Macondo oil underwent a lot of heating during its formation. | <urn:uuid:4d608227-f743-4451-8878-5bece2595b70> | 3.390625 | 189 | Knowledge Article | Science & Tech. | 45.884412 |
Strict programming language
A strict programming language is one in which only strict functions (functions whose parameters must be evaluated completely before they may be called) may be defined by the user. A non-strict programming language allows the user to define non-strict functions, and hence may allow lazy evaluation.
Nearly all programming languages in common use today are strict. Examples include C#, Java, Perl (through version 5), Python, Ruby, Common Lisp, and ML. The best known non-strict languages are Haskell, Miranda, and Clean. Languages whose ordinary functions are strict but which provide a macro system to build non-strict functions include C, C++, and Scheme.
In most non-strict languages the non-strictness extends to data constructors. This allows conceptually infinite data structures (such as the list of all prime numbers) to be manipulated in the same way as ordinary finite data structures. It also allows for the use of very large but finite data structures such as the complete game tree of chess.
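A classic illustration in Haskell (my sketch, not from the article): the infinite list of primes via trial division, of which only as much is ever computed as the program demands:

```haskell
-- Non-strict evaluation lets this infinite definition terminate
-- whenever only a finite prefix is demanded.
primes :: [Integer]
primes = sieve [2 ..]
  where
    sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

main :: IO ()
main = print (take 10 primes) -- [2,3,5,7,11,13,17,19,23,29]
```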
Non-strictness has several disadvantages which have prevented widespread adoption:
- Because of the uncertainty regarding if and when expressions will be evaluated, non-strict languages generally must be purely functional to be useful.
- All hardware architectures in common use are optimized for strict languages, so the best compilers for non-strict languages produce slower code than the best compilers for strict languages, with the notable exception of the Glasgow Haskell Compiler, which outperforms many strict-language compilers.
- Space complexity of non-strict programs is difficult to understand and predict.
Strict programming languages are often associated with eager evaluation, and non-strict languages with lazy evaluation, but other evaluation strategies are possible in each case. The terms "eager programming language" and "lazy programming language" are often used as synonyms for "strict programming language" and "non-strict programming language" respectively. | <urn:uuid:c9aec2f6-2574-4400-bec4-497b5f4934b4> | 4 | 418 | Knowledge Article | Software Dev. | 26.961194 |
Sun dogs, or parhelia, are little back-up suns that appear on either side of the sun. They are the Pips to the sun's Gladys, the Lion and the Witch to the sun's Wardrobe, and they look way cooler than rainbows.
What is it that makes these parhelia possible? Only the most badass shape in the whole world: the hexagon. It's the geometric shape that rhymes with sex and means ‘an evil spell.' It's balanced, edgy. It's got an angle on everything. It's what the pentagon would have been if the government hadn't been too cheap to add that last wall. There's nothing that the hexagon can't do. When it gets cold enough, hexagons fall from the sky. Literally.
When water freezes it forms hexagonal crystals. Sometimes the crystals are long, pencil-shaped objects, but much of the time they are flat disks. When they fall, the flat side turns parallel to the earth, due to Io9's old friend, the Bernoulli Effect. When the disk tilts so that an edge pushes out, the air around it rushes past faster. This creates a low pressure zone and pulls the crystal's edges out harder, until it is flat-side-down.
This leaves the hexagonal shape of the crystal open to exploitation by miscreant sun beams. Most of us know that certain substances, including ice, can bend light. It's difficult to understand, though, how this would make duplicate suns appear.
Picture someone looking at a regular sunset. It's pretty easy to understand the course of the light. The sun gives off photons in all directions, and some of them travel in a straight line until they hit the person's eyeball, allowing them to see the image of the sun. Now picture two other people on either side of the original person, about a quarter mile away. They would also get facefuls of photons, allowing them to see the sun.
The same thing would happen during a cold sunset with ice crystals in the air. Some of the photons would travel in a straight line and hit each person's eyeball, letting them see the sun. But the original person wouldn't see only the photons that had started out headed their way. Because of the hexagonal crystals in the air, some of the photons - part of the image of what the two other people would see - would be caught and redirected to the original person's position. They would see images of the sun coming in not just from straight ahead of them, but from either side.
Since the images would be coming from either side of the sun, and we assume from experience that light travels in straight lines, the person would think they were seeing three suns next to each other in the sky. | <urn:uuid:f8ad7c9f-cdbe-4db1-8fe1-61c71f8f39c4> | 2.8125 | 580 | Personal Blog | Science & Tech. | 69.115681 |
post by NASA Science News , January 21, 2009
[Did you know a solar flare can make your toilet stop working?]
That’s the surprising conclusion of a NASA-funded study by the National Academy of Sciences entitled Severe Space Weather Events—Understanding Societal and Economic Impacts. In the 132-page report, experts detailed what might happen to our modern, high-tech society in the event of a “super solar flare” followed by an extreme geomagnetic storm. They found that almost nothing is immune from space weather—not even the water in your bathroom.
Right: Auroras over Blair, Nebraska, during a geomagnetic storm in May 2005. Photo credit: Mike Hollingshead/Spaceweather.com.
The problem begins with the electric power grid. “Electric power is modern society’s cornerstone technology on which virtually all other infrastructures and services depend,” the report notes. Yet it is particularly vulnerable to bad space weather. Ground currents induced during geomagnetic storms can actually melt the copper windings of transformers at the heart of many power distribution systems. Sprawling power lines act like antennas, picking up the currents and spreading the problem over a wide area. The most famous geomagnetic power outage happened during a space storm in March 1989 when six million people in Quebec lost power for 9 hours.
According to the report, power grids may be more vulnerable than ever. The problem is interconnectedness. In recent years, utilities have joined grids together to allow long-distance transmission of low-cost power to areas of sudden demand. On a hot summer day in California, for instance, people in Los Angeles might be running their air conditioners on power routed from Oregon. It makes economic sense—but not necessarily geomagnetic sense. Interconnectedness makes the system susceptible to wide-ranging “cascade failures.”
To estimate the scale of such a failure, report co-author John Kappenmann of the Metatech Corporation looked at the great geomagnetic storm of May 1921, which produced ground currents as much as ten times stronger than the 1989 Quebec storm, and modeled its effect on the modern power grid. He found more than 350 transformers at risk of permanent damage and 130 million people without power. The loss of electricity would ripple across the social infrastructure with “water distribution affected within several hours; perishable foods and medications lost in 12-24 hours; loss of heating/air conditioning, sewage disposal, phone service, fuel re-supply and so on.”
“The concept of interdependency,” the report notes, “is evident in the unavailability of water due to long-term outage of electric power–and the inability to restart an electric generator without water on site.”
Above: What if the May 1921 super storm occurred today? A US map of vulnerable transformers with areas of probable system collapse encircled. A state-by-state map of transformer vulnerability is also available: click here. Credit: National Academy of Sciences.
The strongest geomagnetic storm on record is the Carrington Event of August-September 1859, named after British astronomer Richard Carrington who witnessed the instigating solar flare with his unaided eye while he was projecting an image of the sun on a white screen. Geomagnetic activity triggered by the explosion electrified telegraph lines, shocking technicians and setting their telegraph papers on fire; Northern Lights spread as far south as Cuba and Hawaii; auroras over the Rocky Mountains were so bright, the glow woke campers who began preparing breakfast because they thought it was morning. Best estimates rank the Carrington Event as 50% or more stronger than the super storm of May 1921.
“A contemporary repetition of the Carrington Event would cause … extensive social and economic disruptions,” the report warns. Power outages would be accompanied by radio blackouts and satellite malfunctions; telecommunications, GPS navigation, banking and finance, and transportation would all be affected. Some problems would correct themselves with the fading of the storm: radio and GPS transmissions could come back online fairly quickly. Other problems would be lasting: a burnt-out multi-ton transformer, for instance, can take weeks or months to repair. The total economic impact in the first year alone could reach $2 trillion, some 20 times greater than the costs of a Hurricane Katrina or, to use a timelier example, a few TARPs.
Above: A web of inter-dependencies makes the modern economy especially sensitive to solar storms. Source: Dept. of Homeland Security.
What’s the solution? The report ends with a call for infrastructure designed to better withstand geomagnetic disturbances, improved GPS codes and frequencies, and improvements in space weather forecasting. Reliable forecasting is key. If utility and satellite operators know a storm is coming, they can take measures to reduce damage—e.g., disconnecting wires, shielding vulnerable electronics, powering down critical hardware. A few hours without power is better than a few weeks.
NASA has deployed a fleet of spacecraft to study the sun and its eruptions. The Solar and Heliospheric Observatory (SOHO), the twin STEREO probes, ACE, Wind and others are on duty 24/7. NASA physicists use data from these missions to understand the underlying physics of flares and geomagnetic storms; personnel at NOAA’s Space Weather Prediction Center use the findings, in turn, to hone their forecasts.
At the moment, no one knows when the next super solar storm will erupt. It could be 100 years away or just 100 days. It’s something to think about the next time you flush. | <urn:uuid:59384748-ef2d-4959-bac1-bac50d401220> | 3.15625 | 1,171 | Personal Blog | Science & Tech. | 37.553736 |
Use this link for the String API (useful methods for this lab).
Methods you might need to use in this lab are: substring, contains, indexOf, replace, replaceAll
- Create two String variables. Assign "The best class ever,,," to the first String, and "and the best dxy ever" to the second String. Use a different assignment method for each string (see the sketch after this list):
String a = ""; and String b = new String("");
- Concatenate the two strings to each other and assign the resultant string to a new variable called d.
- Print the concatenated string.
- Replace ",,," with ", ".
- Replace "dxy" with "day".
- Add a period to the end of the sentence and print it.
- Ask the user to enter a word and check if the string contains that word. Print "The string contains your word!" if the string contains it, and "The string does not contain your word!" otherwise.
- Capitalize the first character of the second word of the sentence and print the revised sentence.
- Remove every 'r' character from the sentence and print the sentence.
- See this link for sample programs on basics of Java Strings.
- Repeat step #8 for the third word of the sentence.
- Repeat step #8 for the last word of the sentence.
- Repeat step #4 without using the replace method. You can use indexOf(",,,") and the substring method instead.
- Repeat step #5 without using the replace method. You can use indexOf("dxy") and the substring method instead.
- Receive two words from the user and print which one occurs earlier in the sentence.
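One possible warm-up covering steps 1-6 (a sketch only, not an official solution):

```java
public class StringLab {
    public static void main(String[] args) {
        String a = "The best class ever,,,";            // literal assignment
        String b = new String("and the best dxy ever"); // constructor form

        String d = a + b;            // step 2: concatenate
        System.out.println(d);       // step 3

        d = d.replace(",,,", ", ");  // step 4
        d = d.replace("dxy", "day"); // step 5

        d = d + ".";                 // step 6: add a period
        System.out.println(d);       // The best class ever, and the best day ever.
    }
}
```

| <urn:uuid:33009b8f-d873-41bc-b6c5-b4c3a1950fa6> | 3.734375 | 364 | Tutorial | Software Dev. | 69.14955 |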
How do the "drift tubes" in linear particle accelerators shield the charged particles from the electric fields that would otherwise slow them down? Also, how is alternating current used to generate these electric fields?
Stray electric fields are kept out of the drift region by completely surrounding it with a conducting material. Alternating current is rectified and filtered to make a D.C. voltage (just like any other D.C. power supply) that is used to put the desired electric field in the drift region.
| <urn:uuid:2ad0cd15-663c-4b80-873a-654b26729584> | 3.75 | 128 | Knowledge Article | Science & Tech. | 50.13 |
Sun and Wind
Name: Richard G.
What effect does the sun have on wind direction on hills, mountain tops and passes?
The solar influence on wind direction applies only in situations such as land-sea interfaces, where a sea breeze or land breeze becomes established due to daytime heating and nighttime cooling. Air drainage does occur in hilly and mountainous areas at night, when the air near the ground cools and flows downhill into the valleys. But atmospheric pressure patterns have a much greater effect on wind direction than any solar influence does.
Wendell Bechtold, Meteorologist
Forecaster, National Weather Service
Weather Forecast Office, St. Louis, MO
| <urn:uuid:ab1cd12f-eaa8-4bef-82e0-3fa0e4a87170> | 3.28125 | 169 | Knowledge Article | Science & Tech. | 42.427115 |
In a June communication to the Journal of Materials Chemistry, the NIU researchers report on a new method that converts carbon dioxide directly into few-layer graphene (less than 10 atoms in thickness) by burning pure magnesium metal in dry ice.
Journal of Materials Chemistry - Conversion of carbon dioxide to few-layer graphene
Burning magnesium metal in dry ice resulted in few-layer nanosheets of graphene in high yields. These carbon nanomaterials were characterized by Raman spectroscopy, energy-dispersive X-ray analysis, X-ray powder diffraction and transmission electron microscopy. This work provides an innovative route for producing one of the most promising carbon nanostructures by capturing carbon dioxide that is popularly known as the greenhouse gas.
“It is scientifically proven that burning magnesium metal in carbon dioxide produces carbon, but the formation of this carbon with few-layer graphene as the major product has neither been identified nor proven as such until our current report,” said Narayan Hosmane, a professor of chemistry and biochemistry who leads the NIU research group.
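The underlying chemistry is the familiar combustion of magnesium in carbon dioxide (the balanced equation below is standard chemistry, not quoted from the paper):

$$2\,\mathrm{Mg} + \mathrm{CO_2} \longrightarrow 2\,\mathrm{MgO} + \mathrm{C},$$

with the carbon in this case emerging largely as few-layer graphene.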
“The synthetic process can be used to potentially produce few-layer graphene in large quantities,” he said. “Up until now, graphene has been synthesized by various methods utilizing hazardous chemicals and tedious techniques. This new method is simple, green and cost-effective.”
Hosmane said his research group initially set out to produce single-wall carbon nanotubes. “Instead, we isolated few-layer graphene,” he said. “It surprised us all.”
“It’s a very simple technique that’s been done by scientists before,” added Amartya Chakrabarti, first author of the communication to the Journal of Materials Chemistry and an NIU post-doctoral research associate in chemistry and biochemistry. “But nobody actually closely examined the structure of the carbon that had been produced.”
| <urn:uuid:ecf3b190-2f9e-4bf3-ba59-a03bba04fe9b> | 3.328125 | 431 | Truncated | Science & Tech. | 25.849161 |
Chapter 17 - Coastal Processes and Tides
17.6 Important Concepts
- Waves propagating into shallow water are refracted by features of the seafloor, and they eventually break on the beach. Breaking waves drive near-shore currents including long-shore currents, rip currents, and edge waves.
- Storm surges are driven by strong winds in storms close to shore. The amplitude of the surge is a function of wind speed, the slope of the seafloor, and the propagation of the storm.
- Tides are important for navigation; they influence accurate geodetic measurements; and they change the orbits and rotation of planets, moons, and stars in galaxies.
- Tides are produced by a combination of the time-varying gravitational potential of the moon and sun and the centrifugal forces generated as Earth rotates about the common center of mass of the Earth-moon-sun system.
- Tides have six fundamental frequencies. The tide is the superposition of hundreds of tidal constituents, each having a frequency that is a sum and difference of these fundamental frequencies (see the formula after this list).
- Shallow-water tides are predicted using tide measurements made in ports and other locations along the coast. Tidal records of just a few months' duration can be used to predict tides many years into the future.
- Tides in deep water are calculated from altimetric measurements, especially Topex/Poseidon measurements. As a result, deep-water tides are known almost everywhere with an accuracy approaching ±2 cm.
- The dissipation of tidal energy in the ocean transfers angular momentum from Earth's rotation to the moon's orbit, causing the day to become longer.
- Tidal dissipation mixes water masses, and it is a major driver of the deep, meridional overturning circulation. Tides, abyssal circulation, and climate are closely linked.
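In symbols (my paraphrase of the standard Doodson decomposition, not a quotation from the chapter), each constituent frequency referred to in the list above can be written as an integer combination of the fundamental astronomical frequencies:

$$f = n_1 f_1 + n_2 f_2 + n_3 f_3 + n_4 f_4 + n_5 f_5 + n_6 f_6, \qquad n_i \in \mathbb{Z}.$$

| <urn:uuid:9b7e8a29-8eb5-4063-b99e-17a7869aac1b> | 4.28125 | 393 | Knowledge Article | Science & Tech. | 32.890249 |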
The gravitational interactions of numerous massive objects are too complex to solve exactly. There are simplified solutions where the gravitational pull of one object is ignored. Such solutions are reasonable approximations to the situation where one body is much more massive than the other, such as a star being orbited by a planet or asteroid. It is this model which produces elliptical orbits.
There is a second level of simplified solution where the gravitational pulls of two bodies are taken into account and the gravity of a third object is ignored. This is a decent approximation for the case where the third body is far less massive than either of the other two, such as an asteroid relative to a planet and its parent star. This simplification produces slightly more complex results than mere ellipses. The two more massive objects orbit the barycenter between them; in many cases the barycenter will lie within the larger object (especially for stars orbited by planets), while for binary stars or binary planets the barycenter may lie between the bodies.
In the case of a star that is much more massive than a planet, the planet's orbit approximates an ellipse, but the interactions of a third body are more complicated. There is a volume of space where a third body would orbit only the star, and a much smaller volume of space where it would orbit the planet. There are also five other regions where more interesting interactions occur; these are called Lagrangian points. Along the star-planet axis there are three such points: one between the two bodies (L1), one beyond the planet (L2), and one opposite the planet, behind the star (L3). Objects at these points can stay there indefinitely, but the points are only quasi-stable: small perturbations tend to make objects drift away until eventually they are simply in solar orbit. Spacecraft can stay at or near these points because they can perform small trajectory adjustments over time. The other two points lie along the planet's orbit, 60 degrees ahead (L4) or behind (L5) the planet. These points, volumes more precisely, are truly stable; objects near them tend to stay in the same area over long periods (up to billions of years). Hundreds of thousands of asteroids exist in Lissajous orbits around these "Trojan" points of Jupiter and Neptune, and even Mars has a handful.
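For scale (my addition): the size of the region in which the third body orbits the planet rather than the star is roughly the Hill radius,

$$r_{\mathrm{Hill}} \approx a\left(\frac{m}{3M}\right)^{1/3},$$

which for Earth (m/M ≈ 3 × 10^-6 at a = 1 AU) comes to about 0.01 AU, or some 1.5 million km.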
2010 TK7 appears to exist in an orbit that transitions back and forth between the Earth-Sun L4 region and L3. Given that, it may be that this asteroid became captured around L4/L3 only relatively recently. | <urn:uuid:541ad6c8-934b-4dca-9f18-f0a503bae455> | 3.453125 | 553 | Q&A Forum | Science & Tech. | 41.911475 |
Binary systems share stardust
Nov 19, 2009
Telescopes now routinely yield detailed images of the cosmos, and in the process help unravel some of the mysteries surrounding our own existence. But one big unanswered question is how the Earth and our planetary neighbours were created from the primordial dust surrounding the young Sun.
In the past two decades we have come to understand that stars form "protoplanetary disks" with radii that can reach up to several hundred times the mean distance between the Sun and the Earth. Astrophysicists have studied the structure of such disks at several radiation wavelengths, which has led to a growing understanding of the star-formation process.
However, most stars form in pairs (and sometimes more complex groups) and numerical models produce conflicting results because of the complex dynamical interaction of two cosmological bodies. Our understanding has also been held back by a lack of direct observations of such systems.
This image reveals a rare glimpse of a young multiple protoplanetary disk encircling a two-star system located 160 pc away in the Ophiuchus constellation near the celestial equator. Having pinpointed the binary system, the researchers, led by Satoshi Mayama of the Graduate University for Advanced Studies, Japan, used a coronagraph on the Subaru Telescope in Hawaii. This enabled them to filter out the direct light from the twin stars and reveal two individual protoplanetary disks bridged by a complex interaction.
Comparison with numerical models suggests that there could be a channelling of debris from one disk to the other – a finding that the researchers say might reveal where planets can form in binary systems (Science 10.1126/science.1179679).
As the International Year of Astronomy (IYA2009) draws to a close, its organisers must be delighted at the huge attention given over the last 12 months to a field that has come so far since Galileo turned his primitive instrument to the Moon 400 years ago.
About the author
James Dacey is a reporter for physicsworld.com | <urn:uuid:f7bb51e8-5942-45ca-9587-bbaf5dca1de5> | 3.515625 | 417 | Truncated | Science & Tech. | 34.070878 |
Krammer 2003 Category: Asymmetrical biraphid
TYPE SPECIES: Navicymbula pusilla Krammer
The valve outline is very slightly asymmetric to the apical axis; valves are nearly naviculoid in appearance. The raphe is positioned nearly centrally on the valve. The distal raphe ends are deflected dorsally. Striae are lineate. Valves lack apical pore fields.
The genus is more closely allied with Navicula than Cymbella. Navicymbula pusilla, one of the brackish to saline taxa, is found in endorheic lakes of the Northern Great Plains.
Cite This Page:
Spaulding, S., and Edlund, M. (2009). Navicymbula. In Diatoms of the United States. Retrieved May 22, 2013, from http://westerndiatoms.colorado.edu/taxa/genus/Navicymbula | <urn:uuid:47dcb869-40dd-4b88-aac2-3d3a092303b8> | 2.796875 | 205 | Knowledge Article | Science & Tech. | 31.967273 |
Recent papers on astro-ph
The Origin of Jets
DISK ACCRETION TO MAGNETIZED STARS
3D SIMULATIONS OF DISK ACCRETION TO AN INCLINED DIPOLE. HOT SPOTS AND VARIABILITY
Disk accretion to a rotating star with a misaligned dipole magnetic field has been studied further by three-dimensional MHD simulations. This work focuses on the nature of the "hot spots" formed on the stellar surface due to the impact of two or more funnel streams. We investigated the shape and intensity of the hot spots for different misalignment angles Θ between the star's rotation axis Ω and its magnetic moment µ. Further, we calculated the light curves due to rotation of the hot spots for different angles i between the observer's line of sight and Ω. The main results are the following:
1. For small misalignment angles, Θ < 30°, the hot spots typically have the shape of a bow bent around the magnetic pole. At large misalignment angles, Θ > 60°, the shape becomes bar-like. Often a spot on a given hemisphere splits to form two spots, which reflects the splitting of the funnel stream into two streams. The secondary stream is typically weaker than the main stream, so that one spot is much larger than the other.
2. The density, temperature, matter flux and other parameters increase towards the central regions of the spots, so that the spots appear larger at lower temperature/density levels and smaller at higher ones. They cover about 10–20% of the area of the star at the density level typical for the external regions of the funnel streams (see Figures 1 and 2). The size of the hot spots increases with the accretion rate.
3. The spots have a tendency to be located close to the µ–Ω plane. They tend to be located downstream of this plane if the star rotates slowly (i.e., the inner region of the disk and the foot-points of the stream rotate somewhat faster than the star), or upstream, if the star rotates relatively fast. The spots wander around their “favorite” position. The amplitude of wandering is smaller in the case of a cooler disk.
The calculated light curves reveal the following features:
5. The variation of the shape and location of the spots will lead to departures from strictly periodic variability and to quasi-periodicity. At small misalignment angles, Θ < 30°, the streams (and hot spots) may rotate with a velocity different from that of the star, thus leading to quasi-periodic oscillations. | <urn:uuid:262553b3-1bd0-49b4-aad2-47dcb869-40dd-4b88-aac2-3d3a092303b8> | 2.765625 | 628 | Academic Writing | Science & Tech. | 51.588068 |
...some species can tolerate highly saline or alkaline waters—such as Ephydra riparia, a species that inhabits the Great Salt Lake in Utah. Another interesting species is the carnivorous petroleum fly (Helaeomyia petrolei), which lives and breeds in pools of crude petroleum and feeds on trapped insects. At one time, Indians in the western United States gathered the aquatic...
| <urn:uuid:9e28905c-48ab-4940-b8d2-531113ae78a8> | 2.84375 | 136 | Truncated | Science & Tech. | 48.553929 |
Searching a Substring Within a String
Searching a substring within a string is another common task in text-oriented apps. The Standard Library defines several specialized overloaded versions of string::find(). These overloaded versions take const string&, const char *, or char as a sought-after value. For the sake of brevity, I will focus only on locating a string object within another string object:
string phrase="Franco's rain in Spain";
string sought = "rain";
string::size_type pos=phrase.find(sought); // pos=9
find() returns the position of sought's first character within phrase. This position can then be used for further manipulations such as deleting the sought-after string or replacing it with another value. Test if the search succeeded like this:
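A minimal check, reusing the pos variable from the snippet above, might look like this:
if (pos != string::npos)
{
  // found: pos is a valid index into phrase
}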
If the sought-after string wasn't found, find() will return string::npos, which is an agreed-upon constant value indicating an invalid position in any string.
Replacing a Substring
To replace the sought-after value, use the replace() function, which also has several overloaded versions. Again, use the canonical version, which takes three arguments: the position at which the replacement string should be written, the length of the substring to be replaced, and an overriding string. For example, the following snippet corrects a typo in phrase by replacing 'rain' with 'reign':
string replacement = "reign";
phrase.replace(pos, sought.size(), replacement);
Figure 1 shows what the output looks like.
In fact, the search and replace operations can be combined into a single statement:
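Assuming the same phrase, sought, and replacement variables as before, the combined form is:
phrase.replace(phrase.find(sought), sought.size(), replacement);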
Remember that this form should be used only when you're certain that the sought-after value exists in the containing string (this may be the case if the processed sentence is machine-generated, for example). If there's no such guarantee, check whether the search operation succeeded before calling replace().
Figure 2: replace() automatically adjusts its object's capacity to fit its new size.
I silently ignored an issue that may have perturbed you: 'reign' is one character longer than the replaced value 'rain' and yet the program didn't extend the original string's size before calling replace(). Is this an oversight? No, it isn't. replace() automatically adjusts its object's capacity to fit to its new size. To convince the skeptics among you, I'll use a longer replacement string and check the string object's capacity before and after the replace() call:
cout<<"original string capacity: "<<phrase. capacity ()<<endl;
string replacement "reign in Spain should see out the forties";
string sought = "rain in Spain";
phrase.replace(phrase.find(sought), sought.size(), replacement);
cout<<"new capacity: "<<phrase. capacity ()<<endl;
As you can see, the resulting string and its capacity are expanded automatically (the exact capacity values are implementation-dependent).
And that, my friends, is the beauty of std::string! | <urn:uuid:c41b1d30-bed2-4eb0-b070-03d7a8782ab3> | 3.703125 | 611 | Documentation | Software Dev. | 47.980315 |
One would like to present a consistent view of mathematics. Often, however, the mathematics that one is involved with at the moment tends to colour the view. For example, I am currently teaching two courses: (a) Discrete Mathematics and (b) The Theory of Computation. The enclosed view of the role of infinity in mathematics is clearly shaded by this coincidence.
Numbers play multiple roles:
- As a way of counting. (Cardinals)
- As a way of ranking. (Ordinals)
For natural (finite) numbers, these different senses of the use of numbers coincide. (Though one can have some doubts when the numbers are really large!)
In the same way, mathematical objects can be considered in different ways:
- "Pure" set-theory. (Axiomatic set theory)
- Sets with some sort of structure. (Category theory)
Even for finite sets, these notions do not coincide! For example, there are many different finite groups of the same size.
Coming to the problem of "infinity". The simplest notions of infinity are:
- The set of natural numbers. (Cardinal aleph_0)
- The ordered set of natural numbers. (Ordinal small omega)
- Asymptotic points or points at infinity. (For example, the point (1:0:0) in projective geometry)
Each of the above has associated arithmetic and algebraic operations. For example, with counting numbers we have addition and, as a consequence, multiplication. With ordinal numbers we have the notion of a successor, which can be used to define a notion of addition. The corresponding structure for sets is that of a Boolean or sigma algebra of sets. Category theory also has its own notion of algebra, called "universal algebra", which is like (but not quite the same as) the sigma algebra of sets (infinite sums and products need to be defined and may not exist!).
So to re-phrase the question, we are asking if the ordinary notion of arithmetic and algebraic operations extends to infinity.
At first glance it does. We can certainly perform Boolean operations with infinite sets. The problem is that the usual statements about these operations are sometimes no longer true and our intuition about algebraic identities would fail us.
For example, it is usual to say that multiplication is the operation of repeated addition. When the number of additions is infinite, it is not very evident what this means. We define the product of sets AxB which clearly explains what this operation (multiplication) is for sets.
Similarly, it is natural to think of addition as repeated successor operations, but it is not always clear what this means for the infinite successor operation. Again, ordinal succession is defined in a way that such an operation is meaningful through the notion of limit ordinals.
However, in each case some "obvious" results from the finite case are no longer valid.
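Two standard illustrations, assuming only the usual textbook definitions (they are not taken from the courses mentioned above): in ordinal arithmetic
\[ 1 + \omega \;=\; \omega \;\neq\; \omega + 1, \]
so addition is no longer commutative, while in cardinal arithmetic
\[ \aleph_0 + \aleph_0 \;=\; \aleph_0, \]
so adding a set to itself no longer makes it bigger. Both identities that every finite number obeys fail at the very first infinite ones.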
It is worthwhile to extend notions from the finite to the infinite when this is useful in giving us expectations regarding questions (about finite sets!) that we could not have arrived at otherwise. (For an interesting example, have a look at the Goodstein sequence.)
As an addendum, I would like to add the naive re-statement of Skolem-Lowenheim.
Since language consists of countably many sentences, we can only hope to define countably many things and, from a practical point of view, we can only define finitely many things.
Thus, infinity is a notion that mathematicians handle with care, limiting the roles that it can take, so that playing around with infinity gives meaningful (and correct!) results about the finitely many things that we will actually encounter! Mastering this way of handling infinity with care is what a lot of mathematical training is about. | <urn:uuid:120a4869-63dc-4367-a4ee-63b26227211c> | 3.375 | 779 | Personal Blog | Science & Tech. | 38.305566 |
Petrified Forest is a national park containing the largest concentration of petrified wood. Unlike Florissant, where the stumps are found right where the trees were growing, these trees grew 225 million years ago in a vast forest on nearby mountains. They were washed away in a flood and buried at the bottom of a river. The silt that surrounded them gradually hardened into rock. Then they slowly underwent a petrification process similar to that of the trees at Florissant. However, this wood is much more colorful than the wood at Florissant because, in addition to silica, it contains minerals like iron and manganese that give it different hues. | <urn:uuid:ab5eb44d-9128-465c-98eb-9bc6e57bad05> | 3.421875 | 134 | Knowledge Article | Science & Tech. | 45.584824 |
Earth versus Galactic Orbital Planes
Date: May 2, 2011
In relationship to the Milky Way, would Earth actually be on its side? The Milky Way band moves in the night sky, which leads me to believe Earth's rotation is 90 or 180 degrees different from the Milky Way's rotation.
The difference between the celestial and galactic equators is approximately 63 degrees.
Good question. I am not sure just what the solar system's orientation would be relative to the plane of the Milky Way. However, since there is really no up or down or sideways in space, it does not matter much. Thanks for asking, though.
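For reference, the roughly 63-degree figure quoted above can be checked in a few lines, assuming the standard J2000 position of the north galactic pole (Dec +27.13°, a value not given in this exchange):

// The angle between the celestial and galactic equators equals the angle
// between their poles; the celestial pole sits at Dec +90, so the tilt is
// simply 90 degrees minus the galactic pole's declination.
#include <cstdio>

int main()
{
    const double decNGP = 27.13;  // declination of the north galactic pole (deg)
    std::printf("tilt between equators: %.1f degrees\n", 90.0 - decNGP);  // ~62.9
}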
David H. Levy
Click here to return to the Astronomy Archives
Update: June 2012 | <urn:uuid:c982ad10-39d8-456e-8c72-7534266e1cf5> | 3.296875 | 154 | Q&A Forum | Science & Tech. | 60.989037 |
Physics has given us a great many simple principles that make it easier to understand what’s going on in the world, some better-known than others. To wit: Every action has an equal and opposite reaction; what goes up must come down—both classics, for good reason. And the blingiest of the axioms, E=mc², is particularly useful for understanding why a fistful of plutonium can cause such a big bang. Less famous but far more important on a day-to-day basis if you’re an SUV designer, a high jumper or—as in the present case—a crane operator, is the principle that any object will behave as if all its weight is concentrated at its center of mass.
Finding an object’s center of mass is fairly simple. It’s the point at which half the mass is above the center and half below, half is on the right and half on the left, and half is in front and half in back. If you stand straight up with your arms at your sides, your center of mass is a little below your bellybutton (unless you’re J. Lo). But here’s the important part: If your center of mass is not above your feet, you’re going to fall over. The same principle works for a crane. If the center of mass of the total system—crane plus whatever it’s carrying—moves to one side of the crane’s base, the crane will tip.
As our crane lifts the bus out of the water, trouble is a-brewin’. The water itself is holding up the partially submerged bus. (Remember Archimedes? No? Here: Water pushes up on an object with a force equal to the weight of the water being displaced—this is the reason things feel lighter in water.) As the bus leaves the river, the crane takes on more of its weight until the center of mass shifts so far away from the crane’s arm that suddenly there’s a tip, a splash and the call for a bigger crane. —Michael Moyer
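A toy calculation makes the tipping argument concrete. Every number below is hypothetical (crane weight, base size, bus mass, hang distance); the point is only that as the submerged volume shrinks, buoyant support vanishes and the combined center of mass drifts past the edge of the base:

#include <cstdio>

int main()
{
    const double g = 9.81;           // m/s^2
    const double craneW = 400e3;     // crane weight (N), centered at x = 0
    const double baseEdge = 2.5;     // base extends to x = +2.5 m
    const double busX = 15.0;        // bus hangs 15 m out from the crane center
    const double busMass = 12e3;     // kg
    const double rho = 1000.0;       // density of water, kg/m^3

    // Sweep the bus's submerged volume from mostly under water to airborne.
    for (double vSub = 10.0; vSub >= 0.0; vSub -= 2.5) {
        double busW = busMass * g - rho * vSub * g;    // Archimedes' principle
        double xcm = (busW * busX) / (craneW + busW);  // combined center of mass
        std::printf("submerged %4.1f m^3: xcm = %.2f m  %s\n",
                    vSub, xcm, xcm > baseEdge ? "TIPS" : "stable");
    }
}

With these made-up numbers the system stays stable while the bus is half submerged and tips once most of it clears the water, which is exactly the "tip, splash, and call for a bigger crane" sequence described above.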
Five amazing, clean technologies that will set us free, in this month's energy-focused issue. Also: how to build a better bomb detector, the robotic toys that are raising your children, a human catapult, the world's smallest arcade, and much more. | <urn:uuid:33c7e530-0cc4-49f6-874c-ec0b0d4783a2> | 3.171875 | 493 | Truncated | Science & Tech. | 66.367244 |
The discovery that helium-3 at extremely low temperatures could become, like helium-4, a superfluid--having no viscosity whatsoever--earned David Lee and Robert Richardson of Cornell University and Douglas Osheroff of Stanford University a 1996 Nobel prize. But in fact, studying this wondrous substance since then has proved a bit tricky. Helium-4 becomes superfluid when pairs of its atoms condense into a coherent quantum state. (As such, with no resistance to flow, the fluid can pass where even a gas cannot.) Although helium-3 does the same, its atomic couples are magnetic, adding complications.
Scientists have typically examined the interactions between such helium-3 atoms by first putting fluid samples into aerogel--a superlight solid composed of skinny silica columns. The result is that the temperature at which this dirty helium-3 becomes superfluid drops. But researchers at the National Center for Scientific Research in Grenoble, France, report in the October 16 issue of Physical Review Letters that helium-3 acts somewhat superfluid at temperatures in between its normal and aerogel-influenced transition points. They studied the helium using nuclear magnetic resonance and suggest that perhaps the behavior of the atoms at these intermediate temperatures represents a new kind of superfluidity--a state in which four helium-3 atoms are joined instead of two. | <urn:uuid:f789a35d-c236-4fb2-a10b-d1ead3ee497a> | 3.25 | 272 | Knowledge Article | Science & Tech. | 26.807384 |
This section applies to Unix-based environments that have signals or multithreading. The Windows version is compiled for multithreading, and Windows lacks proper signals.
We can distinguish two classes of embedded executables. There are small C/C++ programs that act as an interfacing layer around Prolog. Most of these programs can be replaced using the normal Prolog executable extended with a dynamically loaded foreign extension and in most cases this is the preferred route. In other cases, Prolog is embedded in a complex application that---like Prolog---wants to control the process environment. A good example is Java. Embedding Prolog is generally the only way to get these environments together in one process image. Java applications, however, are by nature multithreaded and appear to do signal handling (software interrupts).
On Unix systems, SWI-Prolog uses three signals:
- SIGUSR1 is used to synchronise atom and clause garbage collection. The handler is installed at the start of garbage collection and reverted to the old setting after completion.
- SIGUSR2 has an empty signal handler. This signal is sent to a thread after sending a thread-signal (see thread_signal/2). It causes blocking system calls to return with EINTR, which gives them the opportunity to react to thread-signals.
- SIGINT is used by the top level to activate the tracer (typically bound to control-C). The first control-C posts a request for starting the tracer in a safe, synchronous fashion. If control-C is hit again before the safe route is executed, it prompts the user whether or not a forced interrupt is desired.
The --nosignals option can be used to inhibit SIGINT. The other signals are vital for the functioning of SWI-Prolog. If they conflict with other applications, signal handling of either component must be modified.
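To illustrate the last point, here is a minimal embedding sketch. The host code below is hypothetical, not taken from the manual; it assumes only the documented PL_initialise() entry point and POSIX sigaction(), and shows one way a host application might preserve its own SIGINT disposition while letting Prolog install the rest:

#include <SWI-Prolog.h>
#include <signal.h>

int main(int argc, char **argv)
{
    struct sigaction host_int;
    sigaction(SIGINT, NULL, &host_int);      /* remember the host's handler */

    /* --nosignals keeps Prolog from claiming SIGINT for its tracer */
    char *plav[] = { argv[0], (char *)"--nosignals", NULL };
    if ( !PL_initialise(2, plav) )
        return 1;

    /* ... load programs and run goals here ... */

    sigaction(SIGINT, &host_int, NULL);      /* restore if Prolog changed it */
    return 0;
}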
The SWI-Prolog signals are defined in pl-thread.h of the source distribution. | <urn:uuid:5b50d616-e718-4823-b992-1c0b744acd0f> | 3.359375 | 410 | Documentation | Software Dev. | 44.648246 |
Human Fingerprint in Sea Temps?
In the ongoing battle to persuade the world that
global warming is real and is a problem, advocates are waging a
two-pronged attack. And each prong is heavily based on global climate
models that have accounted for the largest portion of the research
dollars "invested" in this issue.
Not long ago, theirs was but a simple one-pronged
attack: Climate models say the
world will warm [x] degrees (insert your favorite large number
here)—implying that the time for action to combat the forthcoming
tragedy had long passed. But so-called global warming "skeptics"
(i.e., people who base their arguments on data rather than speculation [e.g., climate models]) cried foul when these self-same
models proved incapable of reproducing observed climate variations.
Forecasting is always the easy part; it's the verification that's the bane of TV weathercaster and climate modeler alike.
Enter Prong No. 2: fingerprint detection. Though
everyone agrees that climate has a lot of inherent variability that
serves to screw up comparisons with models, it nevertheless should be
possible to detect the impact of human activities (from greenhouse
gases)—the so-called human "fingerprint"—on global climate. Take
a model, add a slow greenhouse-gas buildup over time, and compare the
resulting pattern of temperature change with the observations. If they
match up, then presto! The observed changes are caused
by greenhouse gases. (Of course, no self-respecting scientist would say
"caused," but a journalism major is happy to veer off in that
direction.) Most importantly, model forecasts of the future can now be
trusted, since they have successfully reproduced past observations.
Quite a story.
This context explains the hype and hoopla
surrounding a new study in Science
by Scripps Institution of Oceanography scientist Tim Barnett and two
colleagues. Barnett used the existing approach for fingerprint detection
of air temperatures over land and applied it to the world's oceans
using a newly compiled data set that shows changes in water temperatures
to a depth of 3,000 meters. Barnett ran a climate model and compared the
observed changes since 1955 with the changes in ocean temperatures produced by the model when forced with increasing greenhouse gases and sulfate aerosols.
Before we show you the results, here's a very
important aside that in reality is more of a main course. No matter how
complex a climate model is—no matter how many layers it has, how
complex the parameterizations of cloud processes are, or how many soil
moisture or sea-ice feedbacks exist—when the model is forced by
increasing carbon dioxide, it will warm, and usually in a linear
fashion. Sulfates are merely added to lower the future warming rate to a
value that's not inherently ludicrous, but rather merely ridiculous.
The models can do nothing but
produce a warming. They have no choice. More greenhouse gases equals
more warming, period.
But for some reason, our real global climate,
which apparently hasn't been paying attention to the P. R. from the
modeling community, sometimes cools. If this cooling happens for a
decade or more (like the surface air temperatures did from end of World
War II until the mid-1970s), well, the model is screwed. It simply
can't produce a cooling with all those nasty greenhouse gases in the
atmosphere. So, for the fingerprinting to work, other things have to be
added to the model that can generate a cooling. These can be anything
from volcanic expulsions to changes in solar energy to outright cheating
(by pre-specifying observed ocean temperatures rather than modeling
them, for example, which was the approach of NASA's rocket scientists).
Figure 1 shows model-predicted values of oceanic
heat content, averaged from the ocean surface down to 3,000 meters, from
1955 to 2000, compared with the observations. Here, the observed values
are highly smoothed (they simply use the mean value for each 10-year
period), because otherwise they would not appear to match the model
output. Do you think the two lines match? Well, as usual, the model
produces a warming ocean, but in every ocean basin except the South
Atlantic, the oceans actually cooled between the mid-1970s and
mid-1980s. To get around that, error bars are added to the model forecasts (though not to the observations)—showing, according to the authors, "an unexpectedly close correspondence between the observed heat-content change and the average of the same quantity from the five model realizations."
Figure 1. Modeled (shaded region) vs. observed (dotted line) oceanic heat content, averaged from the ocean surface down to 3,000 meters, 1955 to 2000.
And now on to the fingerprint detection. In Figure
2, we reproduce the modeled and observed ocean temperatures at depth
(down to 2,000 meters) over time. Do they match up? The answer depends on how far away you are from your computer screen right now. If you're
looking at this graph with one eye shut from across the room, then
you'd better sell your beach house now, because global warming is
coming with a vengeance. But step up closer, and let's look at these
data a little more carefully.
Figure 2. Modeled and observed ocean temperatures at depth (down to 2,000 meters) over time.
We plotted the observations and model predictions
of temperature anomalies at depth at the end of the record (about 1995)
for the North Atlantic and North Pacific Oceans (Figure 3). The model
essentially produces a huge surface warming that weakens the deeper you
go into the abyss. But yet again, nature seems to be operating under a
different set of physics. That is one lousy forecast, especially at the
surface. If, however, you want the temperature 2000 meters below the
surface, where temperatures seem to be unaffected by greenhouse gases,
then the model does a fantastic job.
Figure 3. Modeled and observed temperature anomalies at depth (down to 2,000 meters) at the end of the record (about 1995) for the North Atlantic and North Pacific Oceans.
But wait a second. Look again at Figure 1. At the
end of the record, the models and observations seem to match perfectly.
How is that possible, given Figure 3? To demonstrate, quickly estimate
the average of the following numbers:
0.30, 0.25, 0.15, 0.10, 0.5, 0.3, 0.1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.
And now the following row:
0.13, 0.2, 0.12, 0.12, 0.12, 0.1, 0.07, 0.06, 0.05, 0.05, 0.05, 0.04,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.
If you said the two averages were both pretty
darned close to zero, you win a new slide rule!
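A quick computation bears this out (the snippet below is illustrative; the values are copied from the two rows above, padded with their trailing zeros):

#include <cstdio>
#include <numeric>
#include <vector>

int main()
{
    std::vector<double> row1 = {0.30, 0.25, 0.15, 0.10, 0.5, 0.3, 0.1};
    row1.resize(32, 0.0);   // pad with the zeros from the first row
    std::vector<double> row2 = {0.13, 0.2, 0.12, 0.12, 0.12, 0.1, 0.07,
                                0.06, 0.05, 0.05, 0.05, 0.04};
    row2.resize(27, 0.0);   // pad with the zeros from the second row

    auto mean = [](const std::vector<double> &v) {
        return std::accumulate(v.begin(), v.end(), 0.0) / v.size();
    };
    std::printf("row 1 mean: %.3f\n", mean(row1));  // ~0.05
    std::printf("row 2 mean: %.3f\n", mean(row2));  // ~0.04
}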
And that's also essentially what Figure 1 shows.
In the top layers of the ocean, where all the temperature variations are
taking place, the model does in fact do a lousy job. But since the
averages are taken over a layer that extends down to 3000 meters, most
of which shows no variation, the model now looks excellent because
the important fluctuations are averaged out. (This situation is not
unlike the unforgettable brouhaha over Benjamin Santer's claim that he
first detected the human-induced greenhouse warming fingerprint in the
atmosphere in 1996: The surface and lower atmosphere temperatures
don't really match, but he used a statistic that depended heavily on
strong cooling of the stratosphere for confirmation).
What we are seeing with the Barnett paper is more
of the same. We have claims that a general circulation model can
reproduce ocean temperatures when, in reality, it cannot. We have
evidence of a human fingerprint in ocean temperature patterns that
arises only when the data are substantially smoothed. And we have a
press corps that's even more convinced of the certainty of significant
human-induced global warming. In fact, however, evidence for the human
global warming fingerprint remains elusive.
Barnett, T.P., D.W. Pierce, and R. Schnur, 2001. Detection of anthropogenic climate change in the world's oceans, Science 292, 270-274. | <urn:uuid:ad4d7b69-756c-4b5c-b14b-856e05ba3a44> | 2.953125 | 1,879 | Nonfiction Writing | Science & Tech. | 57.228903 |
© 2004 Renaud Boistel
Country distribution from AmphibiaWeb's database: French Guiana
IUCN (Red List) status: Data Deficient (DD).
This species is reported to range through French Guiana and adjacent Suriname, south to the Manaus area of Brazil. However, there is serious uncertainty as to what species is covered by this name, and attempts to map its range at this stage should be regarded as provisional.
Habitat and Ecology
It occurs in the leaf-litter of forests, where it lays its eggs on land, and the larvae are then carried to water to develop further.
It is reportedly reasonably common in French Guiana.
There are no known threats, and it occurs widely in an area of minimal human impact.
This species presumably occurs in several protected areas. Its taxonomic status is very confused, so further taxonomic research is required to resolve this.
It is uncertain how to apply this name to animals from outside French Guiana, and so its taxonomic status is very confused.
Robert Reynolds, Marinus Hoogmoed, Ross MacCulloch, Philippe Gaucher 2004. Anomaloglossus baeobatrachus. In: IUCN 2012 | <urn:uuid:c849d984-33b8-4bca-b704-fa544ea513c6> | 2.875 | 262 | Knowledge Article | Science & Tech. | 41.705909 |
Convention on Biological Diversity
Convention on Biodiversity
The Convention on Biological Diversity (CBD) is a treaty that was formed in 1992 at the Earth Summit. It has three main objectives:
- To conserve biological diversity,
- To use biological diversity in a sustainable fashion,
- To share the benefits of biological diversity fairly and equitably.
The CBD chooses an annual theme, which will be the main focus for that year. In 2010 the main focus is Biodiversity and Sustainable Development. It is also the International Year of Biodiversity; this is because 2010 was the target year to significantly reduce the loss of biodiversity.
How does the CBD make decisions?
Meetings are held throughout the year where different topics are discussed. Every two years the CBD holds the Conference of the Parties (COP). In October 2010 it will hold the 10th COP in Japan. There, youth delegates will present the International Youth Accord. This document will be opened at the beginning of 2010 for children and youth to sign. | <urn:uuid:4d59dc81-4006-4d2f-8cc4-e37020a0b23a> | 3.59375 | 208 | Knowledge Article | Science & Tech. | 42.565169 |
Texas has nearly hit the drought crossover point. Drought severity right now statewide is similar to severity a year ago at this time. Specifically:
The last time 10% of the state was not abnormally dry: February 15, 2011.
The last time only 81% of the state was in drought: March 8, 2011.
The last time only 56% of the state was in severe or worse drought: March 8, 2011.
The last time only 36% of the state was in extreme or worse drought: March 22, 2011.
The last time only 18% of the state was in exceptional drought: April 26, 2011.
We may see more improvement in the US Drought Monitor next week, as the full effects of this week’s rainstorm get tallied up. Meanwhile, with most Texans living in the eastern half of the state, it’s easy to forget that many parts of Texas are much worse off now than they were at this time last year.
The House Committee on Natural Resources held a hearing on Thursday on drought issues. This was my status report to them:
La Niña, a coupled atmosphere-ocean weather pattern involving unusually cool temperatures in the tropical Pacific Ocean, has such a strong effect on jet streams and weather patterns that Texas wintertime precipitation is below normal four times out of five. Fortunately for much of Texas, this winter was one of the exceptional years.
Texas monthly rainfall was above normal in December, January, and February, following ten consecutive months of below-normal rainfall. March rainfall to date may already be sufficient to ensure a fourth consecutive above-normal month.
This welcome rainfall has not been distributed evenly across the state. While some parts of the state can now be considered to be out of drought, other parts remain in exceptional drought status.
The figures at the end of this document provide an overview of conditions across the state.
Current drought status is indicated using the U.S. Drought Monitor. The latest update includes rainfall through Monday night, so the impact of the rainfall Tuesday and Wednesday is not yet shown. Most of Texas is designated “L”, meaning that longer-term drought issues, such as water supply, dominate. Other parts are designated “SL”, meaning that both short- and long-term drought impacts are being experienced.
The details of drought across Texas are depicted using two maps generated by the Office of the State Climatologist from high-resolution National Weather Service precipitation analyses. The first map shows rainfall compared to normal over the past 1-3 months, illustrating the wet winter experienced by most of the state. The second map shows rainfall compared to normal over the past 12-24 months, illustrating the extent to which the recent rains have made up for the longer-term precipitation deficits. Please note that there are some incorrect details in these maps due to radar biases that have not been entirely corrected. In particular, the details of the rainfall pattern in Texas east of I-45 are incorrect, and the overall amount of precipitation in South Texas is underestimated.
The Dallas-Fort Worth area is farthest from drought. DFW Airport has received 13.57″ of rain so far this year, almost twice normal. This region, unlike most other areas of the state, did not set a record for driest twelve consecutive months ever during the 2011 drought, so recovery came faster. However, because of the inability to transfer water from Lake Texoma, parts of the area are still highly vulnerable to drought.
East Texas was already in drought in 2010, so it has taken longer for the area to recover. Widespread rains have helped considerably. Major rivers are returning to normal base flows, and most major reservoirs are nearly at conservation storage capacity. Tyler has received 12.31″ of rain so far this year, nearly three inches above normal. Additional rainfall is still needed to produce full storage by summer, but conditions are much better than they were last year at this time.
Southeast Texas is also close to recovery from drought. IAH has received 17.66″ of rain so far this year, which is twice normal. As with East Texas, the last remaining step is for lake and groundwater levels to return to normal.
Central Texas is also doing very well. Rainfall so far this year at Austin-Bergstrom is 15.99″, which is two and a half times normal. Indeed, as lakes fill up and streamflows return to normal, the primary water issue for many agricultural interests is the lack of a break in the rainfall sufficient to prepare fields and plant crops.
The last region on the favorable side of the ledger is the cross-timbers region from west of Fort Worth through Abilene. Abilene itself has received 5.64″ so far this year, which is one and a half times normal, and some areas east and south of Abilene have received much more. The area is locally either free from drought or much improved.
The overall drought picture in this area, as well as much of central Texas, is complicated by the fact that the largest rivers flowing through the region are fed by rainfall farther upstream, and upstream rainfall has been lacking. Thus, local meteorological drought conditions may be excellent, but there may still be major reservoir storage issues due to lack of inflows from upstream.
San Angelo is a case in point. Rainfall there has been 7.13″, over twice normal for the year so far. Yet most major reservoirs in the area are very low. Upstream of San Angelo, rainfall totals are running near or below normal and have been inadequate to restore normal stream flows.
Across West Texas, from the Panhandle to Big Bend, drought conditions are still prevalent. Precipitation is normally relatively light during the wintertime, so even above-normal precipitation this time of year is generally insufficient to produce recovery from drought, and most places have not even received above-normal precipitation. Here are some year-to-date totals: Amarillo 1.24″, 60% of normal; Lubbock 1.19″, 60% of normal; Midland 1.30″, 80% of normal; El Paso 0.76″, 75% of normal. Dust storm activity has been high and is likely to remain high through the spring. On the bright side, there was so little vegetation growth last year that wildfires have not yet been a problem.
In South Texas, there has been some rain but not nearly enough to eliminate the drought. Corpus Christi, for example, has received 5.44″ of rain so far this year, only slightly above normal. The situation there is like a “dunked biscotti”: the top of the soil is soft and moist, but below the surface the ground remains dry and crunchy.
OUTLOOK FOR THE REST OF 2012
The outlook in the first paragraph is based primarily on forecasts from the Climate Prediction Center (CPC) branch of the National Weather Service. The current La Niña conditions in the tropical Pacific Ocean are forecast to weaken to neutral conditions by the end of April. By May, not only will La Niña be gone, but so too goes our ability to forecast seasonal rainfall. Rainfall in May and June depends upon unforecastable details in how squall lines form and move, and summertime disorganized rainfall is notoriously unpredictable. CPC predicts a slightly enhanced chance of above-normal temperatures across Texas and below-normal precipitation along the Gulf Coast. The summertime forecast also has slightly enhanced chances of above-normal temperatures. For next winter, not much information is available at this stage. More will become clear in mid-June, when skillful forecasts of El Niño or La Niña conditions next winter will be possible.
The wide range of drought conditions presently seen across the state is likely to continue. For places with ample water supplies, the primary danger will be a sudden spell of intense dry weather, but even if such an event occurs it is unlikely to cause problems as large as those during the summer of 2011.
In most of West Texas, conditions remain dire. Reservoirs are mostly low or empty, and the topsoil has not become moist during the winter. If below-normal rainfall continues, agricultural losses could be as large or larger there than in 2011. Even if above-normal rainfall commences now, it may already be too late to produce substantial runoff to reservoirs in time for summer.
Between those two extremes, the situation is promising but precarious. Without deep soil moisture, the area will be vulnerable to an extended dry period that dries out the soil and makes water unavailable for plants. Tree mortality from last year and early springtime growth of grasses would also contribute to very high fire danger if things dry out. Yet if above-normal rain continues, especially during the climatologically wet months of May and June, the present lack of a backup deep soil water supply for roots may not matter.
In response to the precarious situation across much of Texas, a team of scientists and engineers at the University of Texas and Texas A&M University, including myself, coordinated by Prof. David Maidment of U.T., is working to apply research-quality modeling techniques to produce and make available to the State of Texas forecasts of soil moisture, streamflow, and reservoir levels. We are hopeful that some of the technology developed or refined in the state’s public universities can be utilized for the benefit of local, regional, and state decision-makers during this exceptional time.
In 2011, the entire state of Texas experienced extreme drought conditions. In 2012, conditions will vary widely across the state, from extreme or exceptional drought conditions to no drought at all. This will complicate drought response. Much of the rain has fallen in heavily-populated areas of the state, so many Texans may forget that other parts of the state are suffering under drought impacts that may be as bad or worse locally as those in 2011. | <urn:uuid:1c0da2bd-21e9-4b33-8406-f4bc93629b82> | 2.734375 | 2,033 | Knowledge Article | Science & Tech. | 49.657384 |
“Scientists are exploring dripping passages by the light of headlamps, mapping out an ecosystem from 307 million years ago, just before the world’s first great forests were wiped out by global warming. This vast prehistoric landscape may shed new light on climate change today.
Dating from the Pennsylvanian period of the Carboniferous era, the forest lies entombed in a series of eight active mines. They burrow through the rich seams of the Springfield Coal, a nationally important energy resource that underlies much of Illinois and two neighboring states and has been heavily mined for decades.”
Big Reveal of the Day: After 25 years, Simpsons creator Matt Groening spills the beans on Springfield: “Springfield was named after Springfield, Oregon. The only reason is that when I was a kid, the TV show Father Knows Best took place in the town of Springfield, and I was thrilled because I imagined that it was the town next to Portland, my hometown. When I grew up, I realized it was just a fictitious name. I also figured out that Springfield was one of the most common names for a city in the U.S. In anticipation of the success of the show, I thought, ‘This will be cool; everyone will think it’s their Springfield.’ And they do.”
A blog about the interactions between the built environment, people, and nature.
I'm a climate change consultant specializing in climate adaptation, environmental law, and urban planning based in the U.S. In addition to traveling and hiking, I research, publish, and lecture on how cities can adapt to climate change.
Professional and sponsorship inquiries, please | <urn:uuid:9dece1de-4e6f-48e1-9da6-222cfdb13227> | 3.3125 | 345 | Personal Blog | Science & Tech. | 53.012462 |
Brief SummaryRead full entry
Description"Blainville's beaked whale is found worldwide in warm temperate to tropical waters. Small pods of 3-7 whales have been seen off Hawaii in waters 700 to 1,000 m deep, near much deeper water. None have been seen near the British Isles, but one was recorded from Portugal and one from the Mediterranean coast near Spain. There are also scattered records from the Indian Ocean, the South Atlantic, and the South Pacific. Males of this species have relatively enormous teeth, about 120 mm tall, 80 mm long from front to back, and 30 mm wide. Their mouths have a distinctively curved line, as the lower jaw becomes very deep toward the back to accommodate them. A female stranded in North Carolina had scars on her skin that might have come from a killer whale or false killer whale."
Mammal Species of the World | <urn:uuid:ae205b46-a709-485c-a57a-d28bfbf03fde> | 3.34375 | 179 | Knowledge Article | Science & Tech. | 55.929 |
As the world's fossil fuel reserves diminish, alternative energy sources will become increasingly important. One of the most commonly discussed forms of alternative energy is nuclear power. Although there are a number of pros and cons to nuclear power generation, one aspect that has received some attention in the news over the past few years is the long-term storage solution for nuclear waste from past, current, and future production.
Nuclear waste currently in storage comes from three principal sources: spent fuel from commercial or research reactors, liquid waste from the reprocessing of spent fuel, and waste from the nuclear weapons and propulsion industry. Most of the storage concerns relate to so-called 'high-level' nuclear waste, which is highly radioactive, requires cooling and containment because its decay gives off heat and radiation, and has an extremely long half-life. In particular, some radioactive isotopes such as Tc-99, Se-79, and I-129 are mobile in water, requiring a storage solution that reduces their ability to move into the groundwater.
In the US, most nuclear waste is stored temporarily on site. Spent nuclear fuel is kept in pools of recirculated water to keep it cool while the increased radioactivity dies down, before it is either reprocessed to recover the plutonium or kept in dry storage for eventual deposit in a geological repository. For the nuclear waste resulting from weapons development, the majority is stored at the Hanford site in southeastern Washington, where the plutonium for the US weapons was produced. Much of the waste there is stored in 177 buried tanks as a combination of both high-level and low-level liquid waste. These tanks were never intended for long-term waste storage, and several are known to be leaking.
The desired long-term storage form for nuclear waste is a relatively insoluble, compact solid. As a solid, the waste becomes easier to store and handle; a small volume is desired because there are likely to be few candidates for long-term storage spaces and thus space will be at a premium. Keeping the solubility low reduces the chances of groundwater contamination. The resulting solid is then likely to be packaged, which provides additional barriers to contamination of the environment, but the effects of radiation on the surrounding matrix packaging are not negligible.
Amorphous borosilicates have been identified as one option for nuclear waste storage forms. To produce the glass, the waste is dried, heated to convert the nitrates to oxides, and then mixed with glass-forming chemicals and heated again to very high temperatures (approximately 1000 °C) to produce the melt. This is then poured into a containment vessel where it cools to form a glass. The containment vessel can then be sealed, decontaminated, and placed into a long-term (or temporary) storage facility. Studies of archeological glasses have agreed with models showing the immobilization of the important mobile nuclides during the critical time period where they are highly radioactive, encouraging the continued study and use of this methodology. This process is used to prepare waste for storage at a number of nuclear power plants in Europe.
Although much of the work has focused on cleanup and storage of nuclear waste already present, it is clear that as more nuclear plants are added there is an increased need for waste storage capacity and eventually a long term storage location. Vitrification continues to be studied as a long term treatment plan, but the glass produced is still radioactive and needs to be stored somewhere.
For a 1000 MW plant, 30 tonnes of high-level nuclear waste are produced a year. With 104 nuclear plants of roughly that size in the US, this produces 3120 tonnes of high-level nuclear waste a year in the US alone. Multiply the number of nuclear plants by 5 to compensate for all the natural gas plants, for example, and the nuclear waste issue scales as well. In comparison, the abandoned Yucca Mountain Nuclear Waste Storage Repository was only planned to hold 77,000 tonnes of material, all of which is currently stored elsewhere. Even if the storage facility were not required for any of the currently existing waste, the current rate of high-level nuclear waste production would fill the facility in 24.7 years, excluding any increases in capacity.
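The arithmetic behind that figure is worth making explicit (a sketch using only the numbers quoted above):

#include <cstdio>

int main()
{
    const double tonnesPerPlantYr = 30.0;   // per 1000 MW plant, from above
    const int plants = 104;                 // US plants of roughly that size
    const double capacity = 77000.0;        // planned Yucca Mountain tonnage

    double annual = tonnesPerPlantYr * plants;                // 3120 t/yr
    std::printf("annual production: %.0f tonnes\n", annual);
    std::printf("years to fill: %.1f\n", capacity / annual);  // ~24.7
}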
While nuclear power may prove to help alleviate the upcoming energy crisis, it becomes apparent that while some of the challenges relating to the storage of nuclear waste may have been solved, there are still major issues that remain before increased capacity becomes a viable solution over the long term.
© 2010 Linda Thompson. The author grants permission to copy, distribute and display this work in unaltered form, with attribution to the author, for noncommercial purposes only. All other rights, including commercial rights, are reserved to the author.
M. Wald, "Future Dim for Nuclear Waste Repository," New York Times, 5 March 09.
R. C. Ewing, W. J. Weber, and F. W. Clinard, Jr. "Radiation Effects in Nuclear Waste Forms for High-level Radioactive Waste," Prog. Nucl. Energy 29, 63 (1995).
S. E. Hasan, "International Practice in High-level Nuclear Waste Management," Developments in Environmental Science 5, 57 (2007).
B. Grambow, "Mobile Fission and Activation Products in Nuclear Waste Disposal," J. Contaminant Hydrology 102, 180 (2008).
M. Wald, "Analysis Triples U.S. Plutonium Waste Figures," New York Times, 10 July 10.
P. Hrma, "Towards Optimization of Nuclear Waste Glass: Constraints, Property Models, and Waste Loading," Pacific Northwest Laboratory, PNL-SA-23384, April 1994.
R. C. Routson, et al., "High-Level Radioactive Waste Leakage from the 241-T-106 Tank on the Hanford Site," Nuclear and Chemical Waste Management 1, 143 (1980).
Z. Dlouhy, "Solidification of High Level Nuclear Waste," Studies in Environmental Science 15, 155 (1982).
C. M. Jantzen, "First Principles Process-product Models for Vitrification of Nuclear Waste: Relationship of Glass Composition to Glass Viscosity, Resistivity, Liquidus Temperature, and Durability," Westinghouse Savannah River Company, WSRC-MS-91-011, December 1992.
A. Verney-Carron, S. Gin, and G. Libourel, "Archaeological Analogs and the Future of Nuclear Waste Glass," J. Nucl. Materials 406, 365 (2010).
" Waste Management," Sellafield Ltd.
"Existing Capacity by Energy Source," U.S. Energy Information Administration.
E. Neff, "The Arbitrary Science of Yucca Mountain," Las Vegas Review-Journal, 16 Apr 06. | <urn:uuid:cf6c67d3-240c-4e25-99fa-56ceb48c8168> | 3.84375 | 1,419 | Academic Writing | Science & Tech. | 43.40673 |
Oceans cover 70% of the Earth’s surface and support an extraordinarily diverse world. More than one million species live on coral reefs alone, and perhaps as many as 10 million in the deep seas. But only a fraction of these have been discovered so far and much of ocean life remains a mystery. IUCN experts are striving to unveil it and here you get the chance to see some of the weird and wonderful creatures that have recently been seen for the first time.
Mysteries of the Big Blue
August 2010. This month’s ocean focus takes us on a journey of discovery. We take a look at how IUCN is helping to unlock the secrets of The Deep and find ways of combating the many threats that are placing our marine world under siege.
Three months after the Deepwater Horizon oil platform exploded in the Gulf of Mexico, leaking hundreds of thousands of barrels of oil into the ocean, Carl Gustaf Lundin, Head of IUCN's Global Marine Programme, gives his assessment of the impacts on the marine environment. He says there is much uncertainty about the long term effects on marine ecosystems which were already threatened by human activity before the spill occurred. …
06 Aug 2010 | Audio
Invasive species are one of the main threats to our oceans, severely affecting not only marine biodiversity but also human health and the economy. So what exactly are these mysterious creatures, where do they come from and why are they so dangerous? …
30 Jul 2010 | News story | <urn:uuid:1f3e56fc-b739-41a1-8f14-522a4e918c52> | 2.875 | 303 | Content Listing | Science & Tech. | 46.510014 |
A superconductor is an electrical conductor that allows current to flow without any resistance at low temperatures. This allows the development of technologies such as high-temperature magnetic levitation vehicles, which accelerate to high speeds without any resistance. The following video explains the three defining characteristics of magnetic levitation, namely:
The Meissner Effect
Magnetic Levitation & Suspension, and
Flux Trapping Effect.
Watch this fabulous video, all of 6 minutes, that shows how superconducting levitation works.
This entry was posted by linuxandfriends on September 9, 2008 at 11:39 am, and is filed under science, videos. | <urn:uuid:7f8f945a-d41d-457e-8a9b-b216d32b3a74> | 3.828125 | 148 | Truncated | Science & Tech. | 33.459983 |
So far in our 2012 blog series, we’ve dealt with the Mayan calendar and claims of a planetary alignment. Another claim making the rounds says that on December 21, 2012, Earth, our Sun, and the galactic center will align, and something about this alignment will cause Earth to be annihilated.
This claim is trickier than most, because it turns out that we will experience a rough alignment of these three celestial objects on 12/21/2012. But don’t start investing in survival supplies just yet: it turns out that Earth, our Sun, and the black hole at the center of our galaxy align like this twice each year – and we’re still here!
We all know that Earth orbits the Sun once each year. What you may not know is that our Sun is also orbiting the center of the Milky Way galaxy. One complete solar orbit takes about 225 million years! As these two orbits are occurring, Earth, the Sun, and the galactic center experience an approximate alignment twice each year. Even this alignment is not perfect, since the Earth and the Sun’s orbits are tilted relative to one another.
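As a side note, that period means the Sun has completed only a handful of galactic orbits. A quick check, assuming the commonly quoted 4.6-billion-year age of the Sun (a figure not stated above):

#include <cstdio>

int main()
{
    const double sunAgeYr = 4.6e9;   // assumed age of the Sun, in years
    const double orbitYr = 225e6;    // orbital period quoted above
    std::printf("galactic orbits completed: %.1f\n", sunAgeYr / orbitYr);  // ~20
}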
2012 doomsayers make a big deal out of the fact that this alignment is occurring on the winter solstice. But the extremely long orbital period of the Sun around the galactic center means that this alignment has occurred on the solstices for years, and will continue to for quite a few more. The bottom line? There is no scientific reason to think that the approximate alignment of Earth, our Sun, and the galactic center will be any different on 12/21/2012 than it was on 12/21/2009.
Casey Rawson is the Science Content Developer for Science 360, and she wonders how a 50-pound drum of coffee would save anyone if a black hole really DID suck in the Earth. | <urn:uuid:e016991a-9534-4b19-80a1-08cf58a045ab> | 3.296875 | 378 | Personal Blog | Science & Tech. | 60.50538 |
Will Godzilla save the U.S. West Coast from Hedorah? Via ScienceDaily:
The first anniversary is approaching of the March 2011 earthquake and tsunami that devastated Fukushima, Japan, and later this year debris from that event should begin to wash up on U.S. shores — and one question many have asked is whether that will pose a radiation risk.
The simple answer is, no. Nuclear radiation health experts from Oregon State University who have researched this issue following the meltdown of the Fukushima Dai-ichi nuclear plant say the minor amounts of deposition on the debris field scattered in the ocean will have long since dissipated, decayed or been washed away by months of pounding in ocean waves.
However, that’s not to say that all of the debris that reaches Pacific Coast shores in the United States and Canada will be harmless. “The tsunami impacted several industrial areas and no doubt swept out to sea many things like bottled chemicals or other compounds that could be toxic,” said Kathryn Higley, professor and head of the Department of Nuclear Engineering and Radiation Health Physics at OSU …
Read more here. | <urn:uuid:168fb5be-2397-4435-8a66-288dd90347db> | 2.796875 | 232 | Truncated | Science & Tech. | 46.596144 |
Nov. 19 & 20, 2008 – The spacecraft passed the System Integration Review at Ball Aeropsace,
so we can proceed to assemble and test the flight system.
Nov. 17, 2008 – The WISE team completed an end-to-end test of the data system, sending data from the instrument through the spacecraft, transmitting to the geosynchronous Tracking and Data Relay Satellite System (TDRSS), back from TDRSS to the ground station in White Sands, New Mexico, and over the internet to the WISE Science Data Center at the Infrared Processing and Analysis Center in Pasadena.
Nov. 12, 2008 – Deputy Project Scientist Amy Mainzer has posted a story on the JPL Blog about Sizing Up Near-Earth Asteroids: http://blogs.jpl.nasa.gov/?p=19
July 22, 2008 – Deputy Project Scientist Amy Mainzer has posted a story on the JPL Blog about the building of WISE: http://blogs.jpl.nasa.gov/?p=5
2008 - WISE Deputy Project Scientist Amy Mainzer is featured in several episodes of Season 2 and the upcoming Season 3 of “The Universe” on the History Channel. Season 2 episodes: Constellations, Cosmic Collisions, and Biggest Things in Space (WISE is mentioned in this episode). Season 3 episodes so far: The Strangest Things in Space, and Asteroids and Comets.
January 31, 2008 - 50th Anniversary of Explorer 1
Today is the 50th anniversary of the launch of America’s first satellite, Explorer 1. Explorer 1 was designed and built at the Jet Propulsion Laboratory (JPL). Following the establishment of NASA an Explorer program was developed to provide frequent flight opportunities for scientific investigations from space and has had over 70 successful missions. WISE is one of the latest in the Explorer line, a Medium Class Explorer (MIDEX), which is managed by JPL.
Learn more about Explorer 1 at
January 25, 2008 - 25th Anniversary of IRAS
Today marks the 25th anniversary of the launch of IRAS (January 25, 1983). The Infrared Astronomical Satellite (IRAS) is WISE's predecessor; it successfully conducted a ten-month all-sky survey. For more information about the IRAS mission, visit NASA's IRAS page. | <urn:uuid:1ddf1cf0-5549-42bc-8bdf-688f934b290c> | 2.828125 | 489 | Content Listing | Science & Tech. | 54.315754 |