text large_stringlengths 148 17k | id large_stringlengths 47 47 | score float64 2.69 5.31 | tokens int64 36 7.79k | format large_stringclasses 13 values | topic large_stringclasses 2 values | fr_ease float64 20 157 |
|---|---|---|---|---|---|---|
Pollutants in the atmosphere mix with water in the air to form acid rain. Sulphur dioxide (SO2) and nitrogen oxides (NOx) are the two main pollutants that form acid rain. SO2 reacts with water in the air to form sulphurous acid (H2SO3), and then with oxygen to form sulphuric acid (H2SO4). NOx reacts with water to form nitric acid (HNO3).
Acid rain causes damage to land and water ecosystems, as well as to human-made materials and structures such as buildings and statues. Human health impacts appear to be mostly attributable to SO2 and to the formation of fine particulate and ozone (the main components of smog) from SO2 and NOx.
Damage to land and water ecosystems occurs when the land, water or plants like trees and crops cannot neutralize the acid being deposited by the rain. High levels of acidity can destroy life in our lakes and rivers and reduce forest growth. Nova Scotia has low tolerance for acid precipitation because of the low buffering capacity and neutralization abilities of water and land ecosystems in most of the province, especially in south-western Nova Scotia.
Critical load is the amount of acid precipitation that an ecosystem can endure before long-term harmful effects occur. The critical load threshold for lakes in Nova Scotia is 8 kilograms of sulphate per hectare per year (kg/ha/y).
The existing critical loads (kg/ha/y) were developed only for water ecosystems and were defined as the level of wet sulphate deposition that would maintain a pH of 6 in 95% of lakes (1997 Canadian Acid Rain Assessment). (A pH of 6 is a benchmark threshold for sustaining fresh water systems).
Very recently, the science of critical loads and the measurements to support them have advanced, and critical loads for land ecosystems have been developed. For the first time, this has enabled development of "combined ecosystem (water and land) critical load" maps. These combined ecosystem maps are expressed in different units (eq/ha/yr) than those for water ecosystems only (kg/ha/y), because different chemical species and both wet and dry deposition must be combined in the calculation.
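To see how the two unit systems relate, here is a small worked conversion (an added sketch, not from the source document; the molar mass of sulphate, roughly 96 g/mol with a 2- charge, is standard chemistry, and the function name is my own):

# Convert a wet sulphate critical load from kg/ha/y into charge
# equivalents (eq/ha/yr), the unit used for combined ecosystem maps.
MOLAR_MASS_SO4 = 96.06  # g/mol for the sulphate ion (SO4 2-)
CHARGE_PER_MOLE = 2     # equivalents of charge per mole of sulphate

def kg_sulphate_to_eq(kg_per_ha_per_year):
    grams = kg_per_ha_per_year * 1000.0
    moles = grams / MOLAR_MASS_SO4
    return moles * CHARGE_PER_MOLE  # eq/ha/yr

# The 8 kg/ha/y lake threshold works out to roughly 167 eq/ha/yr:
print(round(kg_sulphate_to_eq(8)))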
In addition to SO2 and NOx pollution from local provincial sources, Nova Scotia receives a great deal of trans-boundary air pollution from other areas in Canada and the United States. The 2004 Canadian Acid Deposition Science Assessment provides recent estimates of the major source regions affecting Nova Scotia ecosystems. | <urn:uuid:bd0eb26c-a80a-46e2-9be8-e19c135d387b> | 3.71875 | 508 | Knowledge Article | Science & Tech. | 36.952461 |
Credit: NASA/UMass/D.Wang et al.
Midtown of the Milky Way
Bright lights, big galaxy. The Chandra X-ray Observatory has recently obtained a detailed color mosaic of the center of the Milky Way Galaxy. Bright points of light show hot compact objects bathed in a swirling glow of hot gas clouds. The hot gas may be produced by the deaths of old stars, or from the winds of very massive young stars. At any rate, the pressure of this high temperature gas produces a flow from the center to the rest of the galaxy. This flow apparently helps to spread heavy elements (those necessary for life, like carbon, nitrogen, oxygen, molybdenum, etc.) throughout the Milky Way.
Each week the HEASARC brings you new, exciting and beautiful images from X-ray and Gamma ray astronomy. Check back each week and be sure to check out the HEAPOW archive!
Page Author: Dr. Michael F.
Last modified January 11, 2002 | <urn:uuid:202bcbea-3cc7-47f2-ac48-685a8c95d4f8> | 3.0625 | 235 | Knowledge Article | Science & Tech. | 62.646023 |
This problem investigates one connection between the Golden Ratio and the Fibonacci Sequence.
The Fibonacci Sequence 1, 1, 2, 3, 5, 8, 13, 21, . . . begins with F(1) = 1 and F(2) = 1; the nth term, for n > 2, is given by the recurrence
F(n) = F(n-1) + F(n-2).
Create a Spreadsheet to generate the Fibonacci Sequence.
See the Sublime Triangle for one derivation of the Golden Ratio.
We have $\phi^2 = \phi + 1$, and solving the resulting quadratic $\phi^2 - \phi - 1 = 0$
can be used to find $\phi = \frac{1 + \sqrt{5}}{2}$. Now, use the first equation to generate a sequence of positive powers of the Golden Ratio:
$\phi^2 = \phi + 1, \quad \phi^3 = 2\phi + 1, \quad \phi^4 = 3\phi + 2, \quad \phi^5 = 5\phi + 3, \ \ldots$
Verify each of these by pursuing the relevant algebra, e.g.
$\phi^3 = \phi \cdot \phi^2 = \phi(\phi + 1) = \phi^2 + \phi = (\phi + 1) + \phi = 2\phi + 1.$
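In place of the spreadsheet, the recurrence and the pattern of powers can also be checked numerically; here is a short sketch (my own addition, not part of the original exercise):

# Generate Fibonacci numbers with F(1) = F(2) = 1, then check that
# phi**n equals F(n)*phi + F(n-1) for small n (floating point, so
# the comparison is approximate).
PHI = (1 + 5 ** 0.5) / 2

fib = [0, 1]  # fib[n] holds F(n), with F(0) = 0 for convenience
for _ in range(20):
    fib.append(fib[-1] + fib[-2])

for n in range(2, 15):
    lhs = PHI ** n
    rhs = fib[n] * PHI + fib[n - 1]
    assert abs(lhs - rhs) < 1e-9 * lhs
    print(f"phi^{n} = {fib[n]}*phi + {fib[n - 1]} ~= {lhs:.6f}")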
PROVE with Mathematical Induction: $\phi^n = F(n)\,\phi + F(n-1)$ for all integers $n \geq 2$.
Need help getting started?
Need help finishing the proof?
Find the sequence of powers of $\phi$ for NEGATIVE integers.
Continue and create a general expression to prove by mathematical induction. | <urn:uuid:b215832a-971d-48dc-bf23-c095b679dcac> | 2.703125 | 212 | Tutorial | Science & Tech. | 65.025652 |
Someday, explorers on the planet Mars might use special clocks -- with forty extra minutes per day -- to keep track of the time.
Friday, April 2, 1999
DB: This is Earth and Sky for Friday, April 2. Patrick Milo of Orleans, Ontario, writes, "How will astronauts tell time during missions on Mars?"
JB: Shuttle astronauts coordinate their clocks with Mission Control Center in Houston. But future explorers on Mars might adjust their schedules to match the martian day at their landing site. They might do this gradually during the flight to Mars to avoid interplanetary "jet lag."
DB: A martian day is about forty minutes longer than an earthly day -- which creates problems for astronauts who want to stay in synch with Mission Control. One idea is to use a martian "time slip" -- clocks would stop for forty minutes every twenty-four hours. Or martian clocks might be altered to stretch the second. A day would still be divided into twenty-four sixty-minute hours, but each hour would last about sixty-two Earth minutes.
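To get a feel for the stretched-second scheme, here is a quick sketch (my own addition, not from the broadcast; the sol length of about 24 hours, 39 minutes, 35 seconds is the standard figure):

# Compare the length of a mean martian day (sol) to an Earth day.
SOL_SECONDS = 24 * 3600 + 39 * 60 + 35.244  # about 88,775 s
DAY_SECONDS = 24 * 3600                     # 86,400 s

stretch = SOL_SECONDS / DAY_SECONDS  # how much each "martian second" stretches
print(f"stretch factor: {stretch:.5f}")                              # about 1.0275
print(f"one martian clock hour = {60 * stretch:.1f} Earth minutes")  # about 61.6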
JB: During unmanned Mars missions, scientists numbered the passing Martian days -- they called them "Sols." The Pathfinder spacecraft started operations on Sol 1 and stopped transmitting on Sol 83. Martian astronauts will probably use a similar system -- but they might still keep track of birthdays and holidays using an ordinary Earth calendar. By the way, you can see Mars tonight. After moonrise in mid-evening, Mars will be the brightest object near the moon. Thanks for your question, Patrick. And with thanks to the National Science Foundation, we're Block and Byrd for Earth and Sky.
Author(s): David S. F. Portree
| <urn:uuid:4f884d52-f959-4f22-a6dd-b160ea3adf4f> | 3.53125 | 402 | Comment Section | Science & Tech. | 63.45884 |
Tsunami - 5-year Anniversary Movie
NOAA, in partnership with the National Tsunami Hazard Mitigation Program (NTHMP), has produced "Tsunami", a new Science On a Sphere® (SOS) presentation to commemorate the 5th anniversary of the devastating 2004 Indian Ocean Tsunami. The narrated production uses global datasets, graphics and animations to describe the 2004 event and to take viewers through a future tsunami scenario originating off the US west coast.
NOAA and the NTHMP collaborated to create "Tsunami" with input from a broad team of tsunami and communication experts, including scientists working in research and operations, coastal emergency managers, information technology specialists, and civil defense planners. The NTHMP is a joint program between NOAA and other Federal agencies, U.S. coastal States and U.S. Territories that works to prepare for and educate coastal communities about tsunamis. NOAA's Environmental Visualization Laboratory and Film studio provided visualization, animation and production expertise. For simplicity, the two NOAA Tsunami Warning Centers with their distinct areas of responsibility are represented as one Center located in Alaska.
Length of dataset: 7:55 | <urn:uuid:c7dfd7f0-59e1-48f8-8de0-b54c6a36647e> | 3.109375 | 237 | Knowledge Article | Science & Tech. | 20.251682 |
ABSOLUTE HUMIDITY
A type of humidity that considers the mass of water vapor present per unit volume of space. Also considered as the density of the water vapor. It is usually expressed in grams per cubic meter.
ABSOLUTE INSTABILITY (see also: instability)
When the lapse rate of a column of air is greater than the dry adiabatic lapse rate. The term absolute is used because this applies whether or not the air is dry or saturated.
ABSOLUTE TEMPERATURE SCALE (see also: Kelvin Temperature Scale)
A temperature scale with a freezing point of 273 K (Kelvin) and a boiling point of 373 K.
ABSOLUTE ZERO
Considered to be the point at which theoretically no molecular activity exists or the temperature at which the volume of a perfect gas vanishes. The value is 0 K (Kelvin), -273.15° Celsius and -459.67° Fahrenheit.
ABSORPTION
The process in which incident radiant energy is retained by a substance. The absorbed radiation is then transformed into molecular energy.
ABYSSAL PLAIN
The flat, gently sloping or nearly level region of the sea floor.
ADIABATIC PROCESS
A thermodynamic change of state in a system in which there is no transfer of heat or mass across the boundaries of the system. In this process, compression will result in warming and expansion will result in cooling.
ADVECTION
The horizontal transfer of any property in the atmosphere by the movement of air (wind). Examples include heat and moisture advection.
ADVECTION FOG (see also: Arctic Sea Smoke and sea fog)
Fog that develops when warm moist air moves over a colder surface, cooling that air to below its dew point.
ADVISORY
Statements that are issued by the National Weather Service for probable weather situations of inconvenience that do not carry the danger of warning criteria, but, if not observed, could lead to hazardous situations. Some examples include snow advisories stating possible slick streets, or fog advisories for patchy fog conditions causing temporary restrictions to visibility.
AFOS
Acronym for Automation of Field Operations and Services. It is the computer system that links National Weather Service offices together for weather data transmission.
AIR
This is considered the mixture of gases that make up the earth's atmosphere. The principal gases that compose dry air are Nitrogen (N2) at 78.09%, Oxygen (O2) at 20.946%, Argon (A) at 0.93%, and Carbon Dioxide (CO2) at 0.033%. One of the most important constituents of air and most important gases in meteorology is water vapor (H2O).
AIR MASS
An extensive body of air throughout which the horizontal temperature and moisture characteristics are similar.
AIR MASS THUNDERSTORM
A thunderstorm that is produced by convection within an unstable air mass through an instability mechanism. Such thunderstorms normally occur within a tropical or warm, moist air mass during the summer afternoon as the result of afternoon heating and dissipate soon after sunset. Such thunderstorms are not generally associated with fronts and are less likely to become severe than other types of thunderstorms. However, that does not preclude them from having brief heavy downpours.
AIR POLLUTION
The soiling of the atmosphere by contaminants to the point that may cause injury to health, property, plant, or animal life, or prevent the use and enjoyment of the outdoors.
AIR QUALITY STANDARDS
The maximum level which will be permitted for a given pollutant. Primary standards are to be sufficiently stringent to protect the public health. Secondary standards must protect the public welfare, including property and aesthetics.
ALASKAN WINDS
The downslope air flow that blows through the Alaskan valleys. It is usually given local names, such as Knik, Matanuska, Pruga, Stikine, Taku, Take, Turnagain, or Williwaw.
ALBEDO (see also: Dave's Dictionary)
The ratio of the amount of radiation reflected from an object's surface compared to the amount that strikes it. This varies according to the texture, color, and expanse of the object's surface and is reported in percentage. Surfaces with high albedo include sand and snow, while low albedo rates include forests and freshly turned earth.
ALBERTA CLIPPER
A fast moving, snow-producing weather system that originates in the lee of the Canadian Rockies. It moves quickly across the northern United States, often bringing gusty winds and cold Arctic air.
ALEUTIAN LOW (see also: Icelandic Low)
A semi-permanent, subpolar area of low pressure located in the Gulf of Alaska near the Aleutian Islands. It is a generating area for storms and migratory lows often reach maximum intensity in this area. It is most active during the late fall to late spring. During the summer, it is weaker, retreating towards the North Pole and becoming almost nonexistent. During this time, the North Pacific High pressure system dominates.
ALTIMETER
An instrument used to determine the altitude of an object with respect to a fixed level. The type normally used by meteorologists measures the altitude with respect to sea level pressure.
ALTIMETER SETTING
The pressure value to which an aircraft altimeter scale is set so that it will indicate the altitude above mean sea level of an aircraft on the ground at the location for which the value was determined.
ALTITUDE
In meteorology, the measure of the height of an airborne object with respect to a constant pressure surface or above mean sea level.
ALTOCUMULUS
Composed of flattened, thick, gray, globular masses, this middle cloud genus is primarily made of water droplets. In the mid-latitudes, cloud bases are usually found between 8,000 and 18,000 feet. A defining characteristic is that it often appears as a wavy, billowy layer of cloud, giving it the nickname of "sheep" or "woolpack" clouds. Sometimes confused with cirrocumulus clouds, its elements (individual clouds) have a larger mass and cast a shadow on other elements. It may form several sub-types, such as altocumulus castellanus or altocumulus lenticularis. Virga may also fall from these clouds.
ALTOCUMULUS CASTELLANUS
A middle cloud with vertical development that forms from altocumulus clouds. It is composed primarily of ice crystals in its higher portions and characterized by its turrets, protuberances, or crenelated tops. Its formation indicates instability and turbulence at the altitudes of occurrence.
ALTOSTRATUS
This middle cloud genus is composed of water droplets, and sometimes ice crystals. In the mid-latitudes, cloud bases are generally found between 15,000 and 20,000 feet. White to gray in color, it can create a fibrous veil or sheet, sometimes obscuring the sun or moon. It is a good indicator of precipitation, as it often precedes a storm system. Virga often falls from these clouds.
AMERICAN METEOROLOGICAL SOCIETY
An organization whose membership promotes the education and professional advancement of the atmospheric, hydrologic, and oceanographic sciences.
For further information, contact the AMS. (See also: National Weather Association.)
ANABATIC WIND
A wind that is created by air flowing uphill. Valley breezes, produced by local daytime heating, are an example of these winds. The opposite of a katabatic wind.
ANEMOMETER (see also: Dave's Dictionary)
An instrument that measures the speed or force of the wind.
ANEROID BAROMETER (see also: mercurial barometer)
An instrument for measuring the atmospheric pressure. It registers the change in the shape of an evacuated metal cell to measure variations on the atmospheric pressure. The aneroid is a thin-walled metal capsule or cell, usually made of phosphor bronze or beryllium copper. The scales on the glass cover measure pressure in both inches and millibars.
ANOMALOUS PROPAGATION
This refers to the non-standard propagation of a beam of energy, radio or radar, under certain atmospheric conditions, appearing as false (non-precipitation) echoes. May be referred to as A.P.
ANTARCTIC
Of or relating to the area around the geographic South Pole, from 90° South to the Antarctic Circle at approximately 66 1/2° South latitude, including the continent of Antarctica. Along the Antarctic Circle, the sun does not set on the day of the summer solstice (approximately December 21st) and does not rise on the day of the winter solstice (approximately June 21st).
ANTARCTIC OCEAN
Although not officially recognized as a separate ocean body, the name is commonly applied to those portions of the Atlantic, Pacific, and Indian Oceans that reach the Antarctic continent on their southern extremes.
ANTICYCLONE (see also: high pressure)
A relative pressure maximum. An area of pressure that has diverging winds and a rotation opposite to the earth's rotation. This is clockwise in the Northern Hemisphere and counter-clockwise in the Southern Hemisphere. It is the opposite of an area of low pressure, or a cyclone.
ANVIL
The upper portion of a cumulonimbus cloud that becomes flat and spread-out, sometimes for hundreds of miles downstream from the parent cloud. It may look smooth or fibrous, but in shape, it resembles a blacksmith's anvil. It indicates the mature or decaying stage of a thunderstorm.
APHELION
The point on the earth's orbit that is farthest from the sun. Although the position is part of a 21,000 year cycle, currently it occurs around July, when the earth is about 3 million miles farther from the sun than at perihelion. This term can be applied to any other celestial body in orbit around the sun. It is the opposite of perihelion.
APOGEE
The point farthest from the earth on the moon's orbit. This term can be applied to any other body orbiting the earth, such as satellites. It is the opposite of perigee.
ARCTIC
Of or relating to the area around the geographic North Pole, from 90° North to the Arctic Circle at approximately 66 1/2° North latitude.
ARCTIC AIR MASS
An air mass that develops around the Arctic, it is characterized by being cold from surface to great heights. The boundary of this air mass is often defined by the Arctic front, a semi-permanent, semi-continuous feature. When this air mass moves from its source region, it may become more shallow in height as it spreads southward.
ARCTIC JET
The jet stream that is situated high in the stratosphere in and around the Arctic or Antarctic Circles. It marks the boundary of polar and arctic air masses.
ARCTIC SEA SMOKE (see also: steam fog)
A type of advection fog that forms primarily over water when cold air passes across warmer waters.
ARGON
A colorless, odorless inert gas that is the third most abundant constituent of dry air, comprising 0.93% of the total.
ARID
A term used for an extremely dry climate. The degree to which a climate lacks effective, life-promoting moisture. It is considered the opposite of humid when speaking of climates.
ASOS
Acronym for Automated Surface Observing System. This system is a collection of automated weather instruments that collect data. It performs surface based observations from places that do not have a human observer, or that do not have an observer 24 hours a day.
ASTRONOMICAL TWILIGHT (see also: twilight)
The time after nautical twilight has commenced and when the sky is dark enough, away from the sun's location, to allow astronomical work to proceed. It ends when the center of the sun is 18° below the horizon.
ATMOSPHERE
The gaseous or air portion of the physical environment that encircles a planet. In the case of the earth, it is held more or less near the surface by the earth's gravitational attraction. The divisions of the atmosphere include the troposphere, the stratosphere, the mesosphere, the ionosphere, and the exosphere.
ATMOSPHERIC PRESSURE (see also: barometric pressure)
The pressure exerted by the atmosphere at a given point. Its measurement can be expressed in several ways. One is in millibars. Another is in inches or millimeters of mercury (Hg).
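As a worked example of the two unit systems (an added sketch, not part of the original glossary; 33.8639 millibars per inch of mercury is the standard conversion factor):

# Convert atmospheric pressure from millibars (hPa) to inches of mercury.
MILLIBARS_PER_INCH_HG = 33.8639

def mb_to_inches_hg(millibars):
    return millibars / MILLIBARS_PER_INCH_HG

# Standard sea-level pressure, 1013.25 mb, is about 29.92 inches Hg:
print(f"{mb_to_inches_hg(1013.25):.2f}")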
AURORA
It is created by the radiant energy emission from the sun and its interaction with the earth's upper atmosphere over the middle and high latitudes. It is seen as a bright display of constantly changing light near the magnetic poles of each hemisphere. In the Northern Hemisphere, it is known as the aurora borealis or Northern Lights, and in the Southern Hemisphere, this phenomenon is called the aurora australis.
AUTUMN
The season of the year which occurs as the sun approaches the winter solstice, and characterized by decreasing temperatures in the mid-latitudes. Customarily, this refers to the months of September, October, and November in the Northern Hemisphere and the months of March, April, and May in the Southern Hemisphere. Astronomically, this is the period between the autumnal equinox and the winter solstice.
AVHRR
Acronym for Advanced Very High Resolution Radiometer. It is the main sensor on the U.S. polar-orbiting satellites.
AVIATION WEATHER CENTER
As one of the National Centers for Environmental Prediction, it is the national center for weather information that is used daily by the Federal Aviation Administration, commercial airlines, and private pilots. It is entering a new phase of service, growing to accept global forecasting responsibilities.
For further information, contact the AWC, located in Kansas City, Missouri.
AWIPS
Acronym for Advanced Weather Interactive Processing System. It is the computerized system that processes NEXRAD and ASOS data received at National Weather Service Forecast Offices.
AZORES HIGH (see also: North Pacific High)
A semi-permanent, subtropical area of high pressure in the North Atlantic Ocean that migrates east and west with varying central pressure. Depending on the season, it has different names. In the Northern Hemispheric winter and early spring, when the Icelandic Low dominates the North Atlantic, it is primarily centered near the Azores Islands. When it is displaced westward, during the summer and fall, the center is located in the western North Atlantic, near Bermuda, and is known as the Bermuda High. | <urn:uuid:998359c4-178a-42cc-82b1-e305198fed31> | 3.609375 | 2,894 | Structured Data | Science & Tech. | 39.327665 |
The simplest definition of biotechnology is "applied biology": the application of biological knowledge and techniques to develop products. It may be further defined as the use of living organisms to make a product or run a process. By this definition, the classic techniques used for plant and animal breeding, fermentation and enzyme purification would be considered biotechnology. Some people use the term only to refer to newer tools of genetic science. In this context, biotechnology may be defined as the use of biotechnical methods to modify the genetic materials of living cells so they will produce new substances or perform new functions. Examples include recombinant DNA technology, in which a copy of a piece of DNA containing one or a few genes is transferred between organisms or "recombined" within an organism. | <urn:uuid:99970b05-485b-4020-80dd-2ca012e11bae> | 3.25 | 156 | Knowledge Article | Science & Tech. | 25.098015 |
[Image: Clavelina lepadiformis. Image width ca 5 cm. Credit: Keith Hiscock.]
Clavelina lepadiformis is not listed under any importance categories.
|Researched by:||Karen Riley||Refereed by:||Dr Xavier Turon|
|Phylum||Chordata||Sea squirts, fish, reptiles, birds and mammals|
|Recorded distribution in Britain and Ireland||Clavelina lepadiformis occurs around most coasts of Britain and Ireland.|
|Habitat information||Clavelina lepadiformis attaches itself to rocks, stones and seaweed in the sublittoral, to a depth of about 50 m.|
|Description||Clavelina lepadiformis is a colonial sea squirt that grows up to 20 mm high. Groups of transparent zooids are joined at the base by short stolons. Eggs and larvae vary in colour and are visible in the atrial cavity. In the Mediterranean the eggs and embryos are most often yellowish white and sometime pink (X. Turon, pers. comm.) although in other areas in NW Europe they can also be red (Fish & Fish, 1996). Zooids possess a white ring around the pharynx, and have pale yellow or white longitudinal lines along the endostyle and dorsal lamina, which gives this species its 'light-bulb' appearance. In some areas colonies regress in winter and re-grow in spring although in the Mediterranean this may not be the case. De Caralt et al. (2002) looked at the differences in Clavelina lepadiformis between populations inside and outside of harbours and found that the population inside the harbour remained all year (albeit often at very low abundances). In contrast, the population in a rocky littoral area outside the harbour aestivated (regressed) for up to 7 months over the summer period (De Caralt et al., 2002).|
This review can be cited as follows:
Karen Riley 2008. Clavelina lepadiformis. Light bulb sea squirt. Marine Life Information Network: Biology and Sensitivity Key Information Sub-programme [on-line]. Plymouth: Marine Biological Association of the United Kingdom. [cited 24/05/2013]. Available from: <http://www.marlin.ac.uk/speciesfullreview.php?speciesID=3009> | <urn:uuid:0f6375c4-e46b-40a5-a595-c09d44ef16ae> | 2.96875 | 534 | Knowledge Article | Science & Tech. | 35.001343 |
July 29th, 2008
Phil Plait at Bad Astronomy addresses the question of why there are no green stars. It's a nice post and brings together astronomy and biology in interesting ways, and it reminds me of the posts I did a couple of months ago for a friend who'd just sold a story about life on a world orbiting an M star and wanted to know some things about how it would look there.
At first I thought everything would look red, but decided that was not quite right. Those posts have some very good links as references on this issue, on both the astrophysical and biological sides and I expect to refer back to them in the future.
And then, discussing these issues with some fellow astronomers, we figured out that even if there aren't exactly green stars, there are at least green star-like objects in the sky anyway. | <urn:uuid:b8b4c467-23e5-4fda-9fea-590388a1c0dd> | 2.828125 | 176 | Personal Blog | Science & Tech. | 50.748737 |
Using PySOAP
by Cameron Laird
You don't have to become expert in XML to use SOAP. For most programmers, SOAP is just a technique for distributed computing. You write a function call on your computer here, and it retrieves a result from a computation there, on a server perhaps half a world away.
With well under an hour of installation and practice, you can learn to write distributed Python programs using SOAP to exchange services and results from programs written in almost any language.
Compared to other distributed programming tools like DCOM or CORBA (or, more properly, the IIOP which transports CORBA), SOAP is
- computing language-independent, and
- hardware- and platform-independent.
For example, you might use SOAP and Python to write an application for your handheld computer that retrieves data from a mainframe using Cobol. SOAP translates the details of computing language, hardware, and even human-language encoding. Russian-language words appear in proper Cyrillic, and so on.
A growing number of Python programmers are using SOAP. ActiveState Tools Corporation provides product updates using SOAP as a vehicle. Digital Creations Inc.'s Zope is SOAP-enabled. With just a little setup time, you too can experience SOAP's benefits.
Starting with SOAP
Programming with SOAP is easy, but first we must assemble the proper pieces to make SOAP work. In principle, you can use Python 2.0 or later. However, PySOAP depends on the xml.parsers.expat module, which is difficult to install correctly. Installing the Windows binary for Python 2.1 final is the easiest way to get PySOAP running. If you work from sources, you'll need expat 1.1 or later.
Once you are able to
import xml.parsers.expat
successfully, you're ready to download PySOAP from Sourceforge. For this tutorial, I used version 0.9.5.
Unpack it into a local directory, and copy the sources into a standard library location as appropriate. On a Unix-like machine, for example, you might
cp SOAPpy095/SOAP.py /usr/local/lib/python2.0/SOAP.py
This makes SOAP available to all Python2.0 developers on your host. Under Windows you'd want
copy SOAPpy095\SOAP.py \python2.1\lib
Now you're ready to exercise SOAP. Run
import SOAP

# XMethods.net provides several interesting SOAP
# services. See the references below for details.
server = SOAP.SOAPProxy(
    "http://services.xmethods.net/soap/servlet/rpcrouter",
    namespace = "urn:xmethods-Temperature")

# 47978 is Rensselaer's zip (postal) code.
print server.getTemp("47978")
This returns the temperature in degrees Fahrenheit for a small town in northwest Indiana, in the US Midwest. You have just begun your career as a SOAP programmer.
| <urn:uuid:bb24852f-64c2-4b66-9b77-c16c773b1a64> | 2.796875 | 646 | Tutorial | Software Dev. | 57.240379 |
|Sep22-07, 07:07 AM||#1|
A sinusoidal wave
pls help me on this problem, thanks a lot.
A sinusoidal wave is described by y=(0.30 m)sin(0.20x-40t). Determine the wave speed.
|Sep23-07, 09:58 AM||#2|
What are the individual parts of the sin wave? 0.30 would be your amplitude, right? Break it down. What is the .20x and -40t?
A sin wave holds the form y = Asin(wt - kx + phase shift) + D, where D is DC offset. w is in rad/sec. K is .20, which is your WAVE number. K = 2*pi*f/c = 2*pi/lambda = w / c. Go from there.
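A quick numeric check of that recipe (an added sketch, not part of the original thread):

# y = (0.30 m) sin(0.20 x - 40 t): read off k and omega, then v = omega / k.
k = 0.20       # wave number, rad/m
omega = 40.0   # angular frequency, rad/s

v = omega / k
print(f"wave speed v = {v:.0f} m/s")  # 200 m/s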
This should probably be moved to the HW forums.
|Sep24-07, 10:28 AM||#3|
I got it.
| <urn:uuid:7a805f6f-8d1a-4cad-9491-737d61630930> | 3.140625 | 311 | Comment Section | Science & Tech. | 86.516143 |
For decades, earthquake experts dreamed of being able to divine the time and place of the world's next disastrous shock. But by the early 1990s the behavior of quake-prone faults had proved so complex that they were forced to conclude that the planet's largest tremors are isolated, random and utterly unpredictable. Most seismologists now assume that once a major earthquake and its expected aftershocks do their damage, the fault will remain quiet until stresses in the earth's crust have time to rebuild, typically over hundreds or thousands of years. A recent discovery--that earthquakes interact in ways never before imagined--
This article was originally published with the title Earthquake Conversations. | <urn:uuid:9bee1e20-6eda-4378-a565-f940023c096d> | 3.25 | 133 | Truncated | Science & Tech. | 27.260132 |
By Mark Schrope of Nature magazine
Last summer, intrepid surfers flocked to Florida's east coast to ride the pounding swells spawned by a string of offshore hurricanes. But they were not prepared for a different kind of hazard washing towards shore--an invasion of stinging moon jellyfish, some of which reached the size of bicycle wheels. The swarms of gelatinous monsters grew so thick that they forced a Florida nuclear power plant to shut down temporarily out of concern that the jellies would clog its water-intake pipes.
Earlier in the year, similar invasions had forced shutdowns at power plants in Israel, Scotland and Japan. The gargantuan Nomura's jellyfish (Nemopilema nomurai) found in Japanese waters can weigh up to 200 kilograms and has plagued the region repeatedly in recent years, hampering fishing crews and even causing one boat to capsize. Jellyfish have destroyed stocks at fish farms in Tunisia and Ireland. And in the Mediterranean Sea and elsewhere, officials have built nets to keep out jelly swarms.
The jellyfish blooms seem to bear out warnings from some scientists and conservationists, who argue that humans are knocking marine ecosystems off balance, causing a massive increase in the global population of jellyfish--a catch-all term that covers some 2,000 species of true cnidarian jellyfish, ctenophores (or comb jellies) and other floating creatures called tunicates. But many marine biologists are now questioning the idea that jellyfish have started to overrun the oceans.
This week, a group of researchers published preliminary results from what will be the most comprehensive review of jellyfish population data. They say that there is not yet enough evidence to support any conclusions about a global upswing in jellyfish populations. "We are not at a point to make these claims," says Robert Condon, a marine scientist at the Dauphin Island Sea Lab in Alabama, who leads the group. "If you look at how that paradigm formed, it's not based on data or rigorous analyses."
The main problem, say the scientists, is that jellyfish are notoriously difficult to study, so they have historically received scant attention from marine biologists. There are remarkably few data about their life cycles, populations and responses to natural oceanographic cycles. But the creatures could serve as key indicators for the health of the oceans, so scientists are now building a database of jellyfish research and exploring new ways to keep track of them.
Monty Graham, chairman of the department of marine science at the University of Southern Mississippi in Diamondhead, is part of the review team now questioning the idea of a rise in jellyfish numbers, but more than a decade ago he was sounding the alarm. In 1996, he took a position at the Dauphin Island Sea Lab, where he found that the US National Oceanic and Atmospheric Administration had years of mostly unprocessed population data on moon jellyfish (Aurelia aurita) and Atlantic sea nettles (Chrysaora quinquecirrha) in the Gulf of Mexico. These data were a rare treasure; only a handful of similar long-term records exist.
Graham discovered that from 1985 to 1997, the jellies had grown substantially more widespread and abundant in several parts of the Gulf, and he suggested that human changes to the ecosystem might be the cause.
Similar findings supported the notion. In the Bering Sea, one of the handful of locations with a monitoring record longer than a few years, jelly numbers had also risen through the 1990s. That matched predictions made by ocean scientists, who had warned that as humans degrade the oceans they are shifting ecosystems, reducing numbers of larger fish and promoting populations of organisms from lower down the food chain. Among the beneficiaries would be algae, toxic plankton and jellyfish--in other words, there would be a sea of slime.
Boom and bust
The link between ocean degradation and jellyfish makes biological sense. Nutrient pollution can increase food supplies for jellyfish; overfishing can reduce their competition; and warmer temperatures are thought to trigger reproduction in some jellyfish species.
But as the slime paradigm gained traction in the literature, something odd was happening in the Gulf of Mexico. One of Graham's graduate students, Kelly Robinson, found that jellyfish numbers in the northern Gulf had declined for several years after 1997, and then rebounded. Her work has not yet been published.
Researchers studying jellyfish in the Gulf of Mexico and the Bering Sea now think that long-term natural climate cycles have an important role in controlling populations there. "Just seeing a lot of jellyfish does not say anything," says Graham now. "People say, 'Oh my God, the world is going to hell,' but jellies form blooms. That's what they're supposed to do." The challenge for researchers lies in separating normal fluctuations from those for which humans might deserve some of the blame.
Graham and others decided to take a scientific step back. In 2009, the National Center for Ecological Analysis and Synthesis at the University of California, Santa Barbara, funded Graham, Condon and Carlos Duarte, a marine ecologist at the Mediterranean Institute for Advanced Studies on Majorca, Spain, to establish the working group that has just published its findings. Made up of dozens of researchers, it compiled all the scientific data available on jellies worldwide. After a preliminary examination, the group said that it could not support the conclusion that jellyfish numbers are increasing globally, because only a few places have been monitored carefully and even there the data are limited.
Researchers have tended to ignore or avoid jellyfish, in part because they are such a nightmare to deal with. Typical nets shred them, and collecting them intact can require heroic efforts. Shin-ichi Uye, a marine ecologist at Hiroshima University in Japan, says that because some jellyfish are heavier than sumo wrestlers, marine biologists must carefully balance their small research boats to avoid capsizing when retrieving a single specimen.
To make matters worse, many jellyfish groups have complex life cycles. Several species, including moon jellyfish, reproduce sexually to form larvae that settle on the sea floor and develop into anemone-like growths called polyps. If conditions are favourable, a single polyp can bud to form 20 floating jellies. But in hard times, polyps can produce more polyps or retreat into a tough cyst. Then, when the environment improves, the waiting polyps can fuel a massive bloom of jellyfish that might seem like an invasion from nowhere.
With very few exceptions, the polyp colonies that cause large jellyfish blooms are "really hard to find", says Claudia Mills, a jellyfish specialist at the University of Washington's Friday Harbor Laboratories, who was part of the jellyfish review group and was one of the first to examine the possibility of a global rise. Bloom triggers seem to be tied to seasonal temperature changes, she says. And that raises the possibility that warming of the oceans could indeed cause populations to mushroom.
Most of the researchers who have questioned the idea of a jellyfish explosion say they cannot rule out the possibility that blooms are becoming more prevalent or that humans are at least partly responsible. In Japan, for example, long-term records suggest that blooms happened only every 40 years or so before 2000, but have come nearly every year since. Moreover, the blooms seem to originate in Chinese waters, where overfishing has severely depleted the Japanese jellyfishes' main competitors.
In most other cases, however, the trends are not so clear cut. So in 2010, the biologists on the task force began a Jellyfish Database Initiative (JEDI), compiling every scientific jellyfish record they could find, and they expect to continue expanding this resource. Some researchers are also teaming up with the public. The Monterey Bay Aquarium Research Institute in California has launched a website called Jellywatch.org, through which scientists and citizens can report jellyfish sightings to help fill out the JEDI database. The intergovernmental Mediterranean Science Commission in Monaco has started a similar programme for that sea.
Despite the lack of long-term studies, some scientists think that there is enough evidence to answer some important questions. In a paper due out later this month, Lucas Brotz, a graduate student in fisheries biology at the University of British Columbia in Vancouver, Canada, and his adviser, Daniel Pauly, analysed media reports and other non-scientific data about bloom patterns since 1950, such as interviews with locals and scientists. The researchers used fuzzy logic, a ranking system that incorporates the reliability and abundance of information, to identify trends in less-than-ideal data sets. They found that jellyfish populations were increasing in 31 of the 45 ocean regions that they studied.
"Our study says you can actually pierce through the confusing fog of press reports and anecdotes and scientific data to establish whether increases are occurring," says Pauly.
Graham and other researchers praise the approach, but they contend that such information just isn't adequate. The jellyfish review team will begin to analyse the full JEDI database later this month, encouraged by the growth of programmes in places such as Peru and Japan, where scientists work with fishermen to monitor jellyfish populations. But even so, the researchers caution that they will need to establish many more monitoring programmes to complete a global picture.
With such programmes in place, researchers could use the comprehensive jellyfish data sets to track how the oceans are changing. These creatures make ideal environmental sentinels because humans mostly leave them alone, says Graham. "Jellyfish are the great bystanders of the oceans and the oceans' health."
This article is reproduced with permission from the magazine Nature. The article was first published on February 1, 2012. | <urn:uuid:ab4a81c6-3269-41fb-94d2-3cc06eb118cc> | 3.0625 | 2,001 | Truncated | Science & Tech. | 31.904111 |
In this section, you will find useful resources of C programming language for references.
- C (programming language)
C programming language article on Wikipedia.
- The Development of the C Language
This is an interesting page about C language history development
- C FAQs
Common C FAQs published by a Usenet newsgroup, covering a lot of C programming questions such as pointers, null pointers, arrays and pointers, memory allocation…
- NetBeans IDE for C/C++
NetBeans IDE helps you to develop professional native C/C++ applications for various platforms including Windows, UNIX, Linux, Mac OS and Solaris.
- Eclipse CDT
Offers the C/C++ IDE based on the Eclipse platform for free. Eclipse CDT has a lot of features including C/C++ project creation, standard make, source code navigation, code editor with syntax highlighting, etc.
- MinGW
Provides a minimalist development environment for developing native Microsoft Windows applications. MinGW is known as "Minimalist GNU for Windows". | <urn:uuid:51370895-8009-4b22-8339-cf7dc9dc82d8> | 2.84375 | 214 | Content Listing | Software Dev. | 27.759368 |
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
August 26, 1997
Explanation: Sometimes the sky itself seems to glow. Usually, this means you are seeing a cloud reflecting sunlight or moonlight. If the glow appears as a faint band of light running across the whole sky, you are probably seeing the combined light from the billions of stars that compose our Milky Way Galaxy. But if the glow appears triangular and near the horizon, you might be seeing something called zodiacal light. Pictured above, zodiacal light is just sunlight reflected by tiny dust particles orbiting in our Solar System. Many of these particles were ejected by comets. Zodiacal light is easiest to see in September and October just before sunrise from a very dark location.
Authors & editors:
NASA Technical Rep.: Jay Norris. Specific rights apply.
A service of: LHEA at NASA/ GSFC
&: Michigan Tech. U. | <urn:uuid:59bb1615-40d3-419b-8d13-2607619c0729> | 3.546875 | 211 | Knowledge Article | Science & Tech. | 49.173328 |
Survey and Impact Assessment of Derelict Fish Traps in St. Thomas and St. John
Project Status: This project began in May, 2009 and was completed in December, 2012
In the waters surrounding St. Thomas and St. John, US Virgin Islands, we conducted an impact assessment of the causes and effects of lost fish traps. We evaluated fish mortality, trap fouling and degradation, and assessed the efficiency of autonomous underwater vehicles in detecting and verifying derelict traps in a coral reef ecosystem. We estimate that, of the approximately 6500 traps located in the area, 650 traps are lost each year, totaling an estimated annual fish market loss of $34,000.
Why We Care
Generations of fishermen in St. John and St. Thomas have used traps to catch fish and lobster. While traps are efficient and relatively cost effective, they can move during storms; become snagged by passing boats and dragged to other areas; and be vandalized, stolen, or abandoned. In addition to the economic forfeiture, lost traps pose boating and navigational hazards as well as major threats to marine life and habitats. Moreover, lost or abandoned traps continue to collect fish, a casualty known as ghost fishing. Because the extent of the problem and its ecological and economic impacts were unknown, this project was initiated to begin defining the scope, causes, and impacts of derelict fishing traps in St. John and St. Thomas.
What We Did
We mapped fishing effort in St. John and St. Thomas, documenting the number and type (fish or lobster) of traps being used, and their specific locations. Trap use had not been characterized at this scale for this region.
We collected information from local fishermen about trap discarding practices and locations where traps had been lost in the past. Our partners ran field experiments to quantify fish death in controlled derelict traps and to assess trap degradation. We tested the Navy’s autonomous underwater vehicles (AUV) equipped with sidescan sonar to evaluate their ability to detect derelict traps in and adjacent to various complexities of coral reefs.
What We Found
Approximately 6,500 traps were used in the region, and the total annual loss rate was estimated to be 10%. Traps from the primary commercial fishery were primarily found in deep water locations on the south shores of St. John and St. Thomas. An unknown fishing group targeted shallow waters. The mortality of fish held in experimental derelict traps was 5%, with an estimated $30,000 to $40,000 in annual catch loss.
Our experimental traps at 10 meters were moved considerably (up to 150 meters) by a hurricane; movement before the storm was negligible. Traps at 20 meters were not moved by the hurricane. Fish movement was not impaired in traps with escape panels open, and mortality was rare. Some experimental derelict traps provided structure for juvenile fish species. In many cases, abandoned traps had been colonized by benthic organisms, including stony corals. Autonomous underwater vehicles (AUVs) were successful at identifying traps in low relief habitats.
We are pursuing partnerships to continue AUV surveys to fully quantify derelict fish trap abundance in the study area. We hope to use the AUV to verify potential archaeological sites such as shipwrecks.
Related Regions of Study: Atlantic Ocean, Caribbean Sea, US Virgin Islands
Primary Contacts: Chris Caldow, Randy Clark
Science for Coastal Ecosystem Management (Seafloor Mapping, Marine Spatial Planning, Coral, Human Dimensions)
Related NCCOS Center: CCMA | <urn:uuid:6f21966d-4b7b-4ff1-94ae-bba5f5d5794e> | 3.109375 | 722 | Knowledge Article | Science & Tech. | 42.522081 |
A huge drought in the Amazon rain forest last year may have caused the release of more emissions than the U.S. is responsible for in a year.
The usually carbon-hungry forest is a major CO2 sink, but when a drought strikes and vegetation dies, all of that carbon that was stored gets released, and in the case of the major droughts that occurred last year and in 2005, the impact is pretty big.
In fact, a study published yesterday in the journal Science concludes that the Amazon would not absorb the usual 1.5 billion metric tons of CO2 from the atmosphere in both 2010 and 2011 and the dying vegetation would result in a release of 5 billion metric tons of CO2, meaning a total of 8 billion metric tons of CO2 added to the atmosphere. In 2009, the U.S. was responsible for 5.4 billion metric tons of CO2 from fossil fuel use.
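The arithmetic behind that total, restated as a quick sketch (my own addition, using only the figures from the article):

# Billions of metric tons (Gt) of CO2, per the study described above.
not_absorbed = 1.5 * 2  # the usual uptake forgone in 2010 and 2011
released = 5.0          # released by dying vegetation
print(not_absorbed + released)  # 8.0 Gt CO2 added to the atmosphere
# For scale, US fossil-fuel emissions in 2009 were about 5.4 Gt CO2.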
The scariest part of this is that climate change is causing more weather extremes like droughts, which in turn are causing more greenhouse gases to be released into the atmosphere, creating a vicious cycle.
| <urn:uuid:86e4d07d-7a9e-4250-8748-cdbac1fcd3dd> | 3.734375 | 236 | Truncated | Science & Tech. | 65.958833 |
Thinking about it more, it may be some problem with English rather than statistics....
again, the book says
"Regardless of the shape of the population, if a sufficiently large sample of size n is taken from the population, then the sample is approximately normally distributed....
If the book is correct (it seems wrong to me) then how do we deal with the sample being the ENTIRE population? Suppose our population is adults in the USA, and the variable is annual income. As the sample size approaches the population size, the distribution of the sample approaches the distribution of the population, which is highly skewed. If we have the whole population in the sample, then the distributions must be identical.
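A quick simulation makes the distinction concrete (an added sketch, not from the original thread; the lognormal distribution is just a hypothetical stand-in for skewed incomes):

import random

random.seed(1)

def income():
    # A skewed, income-like population distribution.
    return random.lognormvariate(10, 1)

n = 500      # sample size
reps = 2000  # number of repeated samples

means = [sum(income() for _ in range(n)) / n for _ in range(reps)]
# Individual incomes are heavily skewed, yet the 2000 sample means
# cluster symmetrically around the population mean: that is the CLT.
print(f"average of the sample means: {sum(means) / reps:,.0f}")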
But what the CLT really says is that if we take REPEATED samples of size n from the population, then the distribution of the sample means approaches normal as the sample size n approaches infinity. | <urn:uuid:039c8188-fabe-4d63-bd1a-ae3ad1304e86> | 2.703125 | 177 | Comment Section | Science & Tech. | 47.833778 |
March 25 2008 / by futuretalk
Category: Technology Year: Beyond Rating: 11
By Dick Pelletier
In a recent report, The World in 2030, futurologist Ray Hammond predicts that over the next two to three decades, breakthroughs in computing, healthcare, communications, and robotics could mark the beginning of the end for human evolution as it has progressed over the last two million years.
“As machines surpass the intellectual capacity of humans,” Hammond says, “they will become a companion species on Earth, but could eventually turn into humanity’s successors.” However, with biotech and nanotech advances expected in the 2010s and 2020s, humans will be able to enhance their physical and cognitive abilities and by as early as the 2030s, technologies could enable humans to interface with these super-intelligent creations and share their vast information-processing abilities.
Today, we are increasingly reliant on computers, cell phones, robot vacuum cleaners, and automated TV programming systems such as Tivo. These machines are considered “dumb” inanimate objects, but experts believe that is about to change.
In the 2010s, household gadgets will begin to take on what some call a “computer personality,” and serve as companion to family members. At first, these helpful companions will be a digital image – a talking avatar displayed on computer screens, cell phones, and TVs. The avatars will eventually be embedded in clothing and jewelry and later, enter our bodies as nano-implants beneath the skin; and by mid-2020s, a more intelligent avatar will appear in our robots.
Robot companions will be incredibly smart. Projects like IBM’s effort to build an artificial brain and Janelia Farm goal to capture and store human thought could, some experts believe, enable robots to gain consciousness. Our companions could one day feel joy, fear, compassion, and other emotions just like we do.
And these silicon wonders will take on an uncanny human resemblance. Former Disney scientist David Hanson has developed artificial robot skin that bunches and wrinkles just like human skin, enabling smiles, frowns, and grimaces in human-like ways. Robot mannerisms will be indiscernible from humans.
But University of Bath robotics researcher Dylan Evans warns there could be a dark side to this utopia. As artificial life forms become smarter, he believes it will be increasingly difficult to determine responsibility should a robot accidently hurt someone. Who would be liable; the manufacturer, user, or robot? And computers already make important financial decisions. What if they make a bad investment?
And here’s a real scary situation. South Korea recently unveiled a robot border guard built by Samsung that can hit targets up to 500 meters away and can be programmed to shoot-to-kill. And the U.S. military plans to replace one third of its ground vehicles with robots by 2015; and twenty percent of its combat units by 2020. Will robo-warriors make it easier to start wars? Experts believe that they will
However on the positive side, Japan and South Korea, nations with the highest percentage of older people, believe that robots can become companions and caregivers to senior citizens, allowing them to remain independent. South Korea’s government has mandated a robot in every home by 2020.
Clearly the road to robotics winds around unknown, even dangerous turns, but strong commerce and public support will drive this “magical future” forward. | <urn:uuid:8319ad28-5f61-4925-a41b-e1ad414ed2fe> | 2.796875 | 709 | Nonfiction Writing | Science & Tech. | 36.782609 |
During the breakfast with my colleagues, a question popped into my head:
What is the fastest method to cool a cup of coffee, if your only available instrument is a spoon?
A qualitative answer would be nice, but if we could find a mathematical model or even better make the experiment (we don't have the means here :-s) for this it would be great! :-D
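As a starting point for such a model, here is a minimal sketch (my own addition, with made-up coefficients) based on Newton's law of cooling; in this picture, stirring or adding a spoon only changes the effective heat-transfer coefficient k:

import math

T_AIR = 20.0  # C, room temperature
T_0 = 90.0    # C, initial coffee temperature

def temperature(t_seconds, k):
    # Newton's law of cooling: T(t) = T_air + (T_0 - T_air) * exp(-k t)
    return T_AIR + (T_0 - T_AIR) * math.exp(-k * t_seconds)

# Hypothetical effective cooling rates (1/s); larger k means faster cooling.
for label, k in [("still cup", 0.0020), ("spoon left in", 0.0024), ("stirred", 0.0030)]:
    print(f"{label:13s}: {temperature(300, k):.1f} C after 5 minutes")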
So far, the options that we have considered are (any other creative methods are also welcome):
Stir the coffee with the spoon
- Pros:
  - the whirlpool has a greater surface than the flat coffee, so the better for heat exchange with air.
  - due to the difference in speed between the liquid and the surrounding air, the Bernoulli effect should lower the pressure, and that would cool it too to keep the atmospheric pressure constant.
- Cons:
  - the Joule effect should heat the coffee.
Leave the spoon inside the cup
As the metal is a good heat conductor (and we are not talking about a wooden spoon!), and there is some part inside the liquid and other outside, it should help with the heat transfer, right?
A side question about this is what is better, to put it like normal or reversed, with the handle inside the cup? (I think it is better reversed, as there is more surface in contact with the air, as in the CPU heat sinks).
Insert and remove the spoon repeatedly
The reasoning about this is that the spoon cools off faster when it's outside.
(I personally think it doesn't pay off compared to keeping it always inside: as it gets cooler, the smaller the temperature gradient and the worse the heat transfer.) | <urn:uuid:7d19fe85-e424-4cad-9491-737d61630930> | 2.765625 | 345 | Q&A Forum | Science & Tech. | 52.476588 |
Today I want to set out an incredibly important example of a ring. This example (and variations) come up over and over and over again throughout mathematics.
Let’s start with an abelian group . Now consider all the linear functions from back to itself. Remember that “linear function” is just another term for “abelian group homomorphism” — it’s a function that preserves the addition — and that we call such homomorphisms from a group to itself “endomorphisms”.
As for any group, this set has the structure of a monoid. We can compose linear functions by, well, composing them. First do one, then do the other. We define the operation by $[f \circ g](a) = f(g(a))$ and verify that the composition is again a linear function:

$[f \circ g](a + b) = f(g(a + b)) = f(g(a) + g(b)) = f(g(a)) + f(g(b)) = [f \circ g](a) + [f \circ g](b)$
This composition is associative, and the function that sends every element of $A$ to itself is an identity, so we do have a monoid.
Less obvious, though, is the fact that we can add such functions. Just add the values! Define $[f + g](a) = f(a) + g(a)$. We check that this is another endomorphism:

$[f + g](a + b) = f(a + b) + g(a + b) = f(a) + f(b) + g(a) + g(b) = f(a) + g(a) + f(b) + g(b) = [f + g](a) + [f + g](b)$

where the middle step rearranges terms using the fact that $A$ is abelian.
Now this addition is associative. Further, the function sending every element of $A$ to the element $0$ of $A$ is an additive identity, and the function $[-f](a) = -f(a)$ is an additive inverse. The collection of endomorphisms with this addition becomes an abelian group.
So we have two structures: an abelian group and a monoid. Do they play well together? Indeed!
$[f \circ (g + h)](a) = f([g + h](a)) = f(g(a) + h(a)) = f(g(a)) + f(h(a)) = [f \circ g](a) + [f \circ h](a)$

$[(f + g) \circ h](a) = [f + g](h(a)) = f(h(a)) + g(h(a)) = [f \circ h](a) + [g \circ h](a)$

showing that composition distributes over addition.
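As a concrete illustration (my own sketch, not from the original post; it uses the standard fact that every endomorphism of $\mathbb{Z}/n\mathbb{Z}$ is multiplication by some residue $k$), we can brute-force these axioms for $A = \mathbb{Z}/10\mathbb{Z}$:

# Endomorphisms of Z/10Z are the maps a -> k*a (mod 10), so each one
# can be represented by its multiplier k; check the ring structure.
N = 10
endos = range(N)

def compose(j, k):  # (f o g)(a) = j*(k*a) = (j*k)*a
    return (j * k) % N

def add(j, k):      # (f + g)(a) = j*a + k*a = (j + k)*a
    return (j + k) % N

# Composition distributes over addition on both sides:
assert all(compose(f, add(g, h)) == add(compose(f, g), compose(f, h))
           and compose(add(f, g), h) == add(compose(f, h), compose(g, h))
           for f in endos for g in endos for h in endos)
print("End(Z/10Z) is a ring: 0 is the additive identity, 1 the unit.")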
So the endomorphisms of an abelian group form a ring with unit. We call this ring $\mathrm{End}(A)$, and like I said it will come up everywhere, so it's worth internalizing. | <urn:uuid:b8ffbe67-6d50-4c33-90e9-e12fd5498c06> | 3.40625 | 352 | Personal Blog | Science & Tech. | 52.535512 |
Before we push ahead with the Faraday field in hand, we need to properly define the Hodge star in our four-dimensional space, and we need a pseudo-Riemannian metric to do this. Before, we were just using the standard Euclidean metric on $\mathbb{R}^3$, but now that we're lumping in time we need to choose a four-dimensional metric.
And just to screw with you, it will have a different signature. If we have vectors $u = (u^x, u^y, u^z, u^t)$ and $v = (v^x, v^y, v^z, v^t)$ — with time here measured in the same units as space by using the speed of light as a conversion factor — then we calculate the metric as:

$\langle u, v \rangle = u^x v^x + u^y v^y + u^z v^z - u^t v^t$
In particular, if we stick the vector $v$ into the metric twice, like we do to calculate a squared-length when working with an inner product, we find:

$\langle v, v \rangle = (v^x)^2 + (v^y)^2 + (v^z)^2 - (v^t)^2$
This looks like the Pythagorean theorem in two or three dimensions, but when we get to the time dimension we subtract instead of adding! Four-dimensional real space equipped with a metric of this form is called "Minkowski space". More specifically, it's called 4-dimensional Minkowski space, or "(3+1)-dimensional" Minkowski space — three spatial dimensions and one temporal dimension. Higher-dimensional versions with $k$ "spatial" dimensions (with plusses in the metric) and one "temporal" dimension (with a minus) are also called Minkowski space. And, perversely enough, some physicists write it all backwards with one plus and the rest minuses; this version is useful if you think of displacements in time as more fundamental — and thus more useful to call "positive" — than displacements in space.
What implications does this have on the coordinate expression of the Hodge star? It's pretty much the same, except for the determinant part. You can think about it yourself, but the upshot is that we pick up an extra factor of $-1$ when the basic form going into the star involves $dt$.
So the rule is that for a basic form $\alpha$, the dual form $\star\alpha$ consists of those component $1$-forms not involved in $\alpha$, ordered such that $\alpha\wedge\star\alpha = \pm\,dx\wedge dy\wedge dz\wedge dt$, with a negative sign if and only if $dt$ is involved in $\alpha$. Let’s write it all out for easy reference:

$$\star 1 = dx\wedge dy\wedge dz\wedge dt \qquad \star(dx\wedge dy\wedge dz\wedge dt) = -1$$

$$\star dx = dy\wedge dz\wedge dt \qquad \star dy = -dx\wedge dz\wedge dt \qquad \star dz = dx\wedge dy\wedge dt \qquad \star dt = dx\wedge dy\wedge dz$$

$$\star(dx\wedge dy) = dz\wedge dt \qquad \star(dx\wedge dz) = -dy\wedge dt \qquad \star(dy\wedge dz) = dx\wedge dt$$

$$\star(dx\wedge dt) = -dy\wedge dz \qquad \star(dy\wedge dt) = dx\wedge dz \qquad \star(dz\wedge dt) = -dx\wedge dy$$

$$\star(dx\wedge dy\wedge dz) = dt \qquad \star(dx\wedge dy\wedge dt) = dz \qquad \star(dx\wedge dz\wedge dt) = -dy \qquad \star(dy\wedge dz\wedge dt) = dx$$
Note that the square of the Hodge star has the opposite sign from the Riemannian case; when $k$ is odd the double Hodge dual of a $k$-form is the original form back again, but when $k$ is even the double dual is the negative of the original form. | <urn:uuid:96bcf47d-8285-4bf4-9b08-e9d4f19ddb41> | 3.421875 | 513 | Personal Blog | Science & Tech. | 40.320146 |
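(A sanity check on the signs above — my own sketch, not part of the original post. Basic forms are modeled as strictly increasing index tuples over $(dx, dy, dz, dt)$, and the star is computed from the stated rule: complement the index set, orient by permutation parity, and flip the sign when $dt$ is involved.)

```python
from itertools import combinations

def perm_sign(seq):
    """Sign of the permutation taking sorted(seq) to seq (inversion count)."""
    sign = 1
    s = list(seq)
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            if s[i] > s[j]:
                sign = -sign
    return sign

def star(sign, idx):
    """Hodge dual of a basic form (sign, idx) over indices 0..3 = (x,y,z,t):
    complement the indices, orient so alpha ^ complement = +vol, and flip
    the sign when dt (index 3) appears in alpha (Lorentzian <dt,dt> = -1)."""
    comp = tuple(i for i in range(4) if i not in idx)
    s = sign * perm_sign(idx + comp)
    if 3 in idx:
        s = -s
    return s, comp

# Double dual: +identity on odd-degree forms, -identity on even-degree forms.
for k in range(5):
    for idx in combinations(range(4), k):
        s2, idx2 = star(*star(1, idx))
        expected = 1 if k % 2 == 1 else -1
        assert (s2, idx2) == (expected, idx)

print("star(star(a)) = +a in odd degrees, -a in even degrees: verified")
```

Running it confirms the claim in the paragraph above about the square of the star in Minkowski space.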
Science Fair Project Encyclopedia
- Subphylum Trilobitomorpha
- Trilobita - Trilobites (extinct)
- Subphylum Chelicerata
- Subphylum Myriapoda
- Subphylum Hexapoda
- Subphylum Crustacea
NOTE: Some classification schemes group Myriapoda and Hexapoda into one subphylum called Uniramia.

Arthropods (Phylum Arthropoda) are the largest phylum of animals and include the insects, arachnids, crustaceans, and other similar creatures. Over four out of five extant (living today) animal species are arthropods, with over a million modern species described and a fossil record reaching back to the early Cambrian. Arthropods are common throughout marine, freshwater, terrestrial, and even aerial environments, as well as including various symbiotic and parasitic forms. They range in size from microscopic plankton (~0.25 mm) up to forms several metres long. The Arthropoda make up a very successful phylum. The Greek word Arthropoda means "jointed feet."
The arthropods have a segmented body with appendages on each segment. They have a dorsal heart and a nervous system on the ventral side of their bodies. All arthropods are covered by a hard exoskeleton made of chitin, a polysaccharide. Periodically, an arthropod sheds this covering when it molts. This covering prevents the arthropod from drying out, but also prevents arthropods from growing too big. The arthropod group identified with the subphylum Chelicerata is the class Arachnida. The most familiar arachnid is the spider. These organisms have two body regions, six pairs of jointed appendages, simple eyes, and often carry on respiration by means of book lungs. Their chelicerae are hollow fangs that pierce prey. The second appendages, the pedipalps, contain sensory receptors. They also have four pairs of jointed legs. On the tip of the abdomen of many spiders there are spinnerets, which they use to make silk for their webs. Other arachnids include the scorpions, with their pedipalps shaped like pincers, and the mites and ticks, which can be destructive to both plants and animals.
Lobsters, crabs, shrimp, and barnacles belong to the class Crustacea. Most of these organisms have exoskeletons hardened with calcium carbonate. Their bodies are divided into three parts: head, thorax, and abdomen. Most are aquatic and use gills for respiration. The young stage is a nauplius larva. The number and type of head appendages helps to distinguish among the crustaceans. One typical crustacean, which looks like a lobster, is the crayfish. It has two pairs of antennae on its head. Large eyes are attached on the head. Behind those are its mandibles, or jaws, which are used to chew food. They are helped by the two pairs of maxillae right behind them. The crayfish also has walking legs and claws on its thorax region. On the abdomen are appendages called swimmerets that females use to hold their eggs. Other groups of arthropods include the Diplopoda, commonly known as millipedes, and the Chilopoda, or centipedes. A major difference between these groups is the number of legs on each segment.
Basic arthropod structure
The success of the arthropods is related to their hard exoskeleton, segmentation, and jointed appendages. The appendages are used for feeding, sensory reception, defense, and locomotion.
Respiration (breathing) poses a potential challenge for arthropods, considering that the skeletal structure is external and covers nearly all of the body. Aquatic arthropods use gills to exchange gases; these gills are specialized with an extensive surface area in contact with the surrounding water. Terrestrial arthropods have internal surfaces that are specialized for gas exchange. The insects have tracheal systems: air sacs leading into the body from pores, called spiracles, in the cuticle.
Arthropods have an open circulatory system. Hemolymph, a copper-based blood analogue, is propelled by a series of hearts into the body cavity where it comes in direct contact with the tissues. Arthropods are protostomes. There is a coelom, but it is reduced to a tiny cavity around the reproductive and excretory organs, and the dominant body cavity is a hemocoel, filled with hemolymph which bathes the organs directly. The arthropod body is divided into a series of distinct segments, plus a presegmental acron which usually supports compound and simple eyes and a postsegmental telson. These are grouped into distinct, specialized body regions called tagmata. Each segment at least primitively supports a pair of appendages.
The cuticle in arthropods forms a rigid exoskeleton, composed mainly of chitin, which is periodically shed as the animal grows. It contains an inner zone (the procuticle), which is made of protein and chitin (a polysaccharide) and is responsible for the strength of the exoskeleton. The outer zone (the epicuticle) lies on the surface of the procuticle; it is nonchitinous, a complex of proteins and lipids, and provides moisture-proofing and protection to the procuticle. The exoskeleton takes the form of plates called sclerites on the segments, plus rings on the appendages that divide them into segments separated by joints. This is in fact what gives arthropods their name—jointed feet—and separates them from their very close relatives, the Onychophora and Tardigrada. The skeletons of arthropods strengthen them against attack by predators and are impermeable to water. In order to grow, an arthropod must shed its old exoskeleton and secrete a new one. This process, molting, is expensive in energy consumption, and during the molting period an arthropod is vulnerable. Because a hardened cuticle cannot stretch much, an arthropod grows in stages: it periodically breaks down (digests) and sheds its cuticle, expands while the new covering is still soft, and then slowly grows to fill it as it increases in mass.
At one point it was considered that the different subphyla of arthropods had separate origins from segmented worms, and in particular that the Uniramia were closer to the Onychophora than to other arthropods. However, this is rejected by most workers, and is contradicted by genetic studies.
Traditionally the Annelida have been considered the closest relatives of these three phyla, on account of their common segmentation. More recently, however, this has been considered convergent evolution, and the arthropods and allies may be closer related to certain pseudocoelomates such as roundworms that share with them growth by molting, or ecdysis. These two possible lineages have been termed the Articulata and Ecdysozoa.
The classification of the arthropods varies somewhat from source to source. There are five main subgroups: the Trilobita, Chelicerata, Myriapoda, Hexapoda, and Crustacea, which may be variously ranked from subphyla to classes, with various other taxa introduced above or below them and corresponding changes in the ranks of their subgroups. Here we have followed a "splitting" taxonomy, containing only generally accepted groups and assigning them higher ranks.
Aside from these major groups, there are also a number of fossil forms, mostly from the lower Cambrian, which are difficult to place, either from lack of obvious affinity to any of the main groups or from clear affinity to several of them.
External links and references
- http://www.itis.usda.gov ITIS TSN: 82696
- http://www.peripatus.gen.nz/Taxa/Arthropoda/Index.html Campbell, Reece and Mitchell. Biology. 1999
- Do spiders have hydraulic legs? (from The Straight Dope)
| <urn:uuid:fb95eead-483b-4a4b-ae88-3346ae33dfd3> | 4.0625 | 1,799 | Knowledge Article | Science & Tech. | 35.886908 |
Science Fair Project Encyclopedia
Reuven Ramaty High Energy Solar Spectroscopic Imager
Reuven Ramaty High Energy Solar Spectroscopic Imager (or RHESSI) is NASA's sixth Small Explorer mission, launched on 5 February 2002. Its primary mission is to explore the basic physics of particle acceleration and explosive energy release in solar flares. The satellite's eponym, Reuven Ramaty, was a pioneer in the area; RHESSI was the first space mission named after a NASA scientist.
RHESSI was the first satellite to accurately measure terrestrial gamma-ray flashes that come from thunderstorms. RHESSI found that such flashes occur more often than previously thought, and that their gamma rays have higher frequencies (and hence energies) on average than those from cosmic sources.
- RHESSI Home Page
| <urn:uuid:acd348ca-e09d-482b-83b5-81a18418f165> | 3.28125 | 198 | Knowledge Article | Science & Tech. | 28.750676 |
PAPI is a portable hardware performance counter library developed by the Innovative Computing Laboratory at the University of Tennessee. The goal of the PAPI project is to provide a consistent interface to the hardware performance counters found on most modern microprocessors. PAPI can be used to measure a variety of performance characteristics across a diverse field of computer architectures, and can be a very effective tool for understanding the performance of your code.
This page contains examples demonstrating how to instrument applications with PAPI. The PAPI library can be called from C/C++ and Fortran. I chose to write the examples on this page in Fortran90 because the documentation on the PAPI home page is more C oriented and has fewer Fortran examples.
There are seven basic examples on this page:
Basics - How to set up counters and compile/link with PAPI.
Events - How to list available events on your system.
Hardware - How to get some hardware info from PAPI.
Timers - How to use PAPI timers to time code sections.
Cache - How to count cache misses with PAPI.
FLOPS - How to measure FLOPS with PAPI.
Threads - How to use PAPI with threaded code.
The examples are quite general and it should be simple to adapt any of the examples to count different sets of events. The examples have been built and tested on IBM Power3 and Power4 systems at NCAR. The techniques used in the examples should work on any platform on which PAPI is available, however not all platforms support every event type so it is important to check which events are available on your system before you start using PAPI.
For additional resources check the related links page. | <urn:uuid:63223634-3366-425c-b9b2-2a9b1cd3c1b8> | 3.234375 | 346 | Documentation | Software Dev. | 38.441639 |
Albedos: Fractions of the total light incident on reflecting surfaces, especially celestial bodies, which are reflected back in all directions.
Amor: A family of near-Earth asteroids that have average orbital diameters in between the orbits of Earth and Mars and perihelia slightly outside Earth's orbit (1.017-1.3 astronomical units). Amor asteroids often cross the orbit of Mars, but they do not cross the orbit of Earth.
Aphelia: The points on planetary orbits that are farthest from the Sun
Apollo: A family of near-Earth asteroids that have average orbital diameters greater than that of the Earth and perihelia less than Earth's aphelion.
Apogee: That point in an orbit at which the moon or an artificial satellite is most distant from the Earth.
Asteroid: One of the many small celestial bodies revolving around the Sun, most of the orbits being between those of Mars and Jupiter. Also known as minor planet or planetoid.
Asteroid belt: The region between 2.1 and 3.5 astronomical units (AU) from the sun where most of the asteroids are found. Asteroids are small planetary bodies revolving around the sun; most of the orbits are between the planets Mars and Jupiter.
Atens: A family of near-Earth asteroids that have average orbital diameters of less than one astronomical unit (1 AU, the distance from the Earth to the Sun) and aphelia greater than Earth's perihelion, placing them usually inside the orbit of Earth.
Biosphere: The life zone of the Earth, including the lower part of the atmosphere, the hydrosphere, soil, and the lithosphere (the rigid outer crust of rock) to a depth of about 1.2 miles (2 kilometers).
Comet: A nebulous celestial body having a fuzzy head surrounding a bright nucleus; comets are one of three major types of bodies moving in closed orbits about the sun, the others being the planets and asteroids (minor planets); in comparison with the planets, comets are characterized by their more eccentric orbits and greater range of inclinations to the ecliptic plane.
Delta-v: In general physics, delta-v is simply the change in velocity. In astrodynamics (the study of the application of celestial mechanics to the creation of artificial satellite orbits), delta-v is a scalar measure for the amount of "effort" needed to carry out an orbital maneuver, i.e., to change from one orbit to another. A delta-v is typically provided by the thrust of a rocket engine. The time-rate of delta-v is the magnitude of the acceleration, i.e., the thrust per kilogram (kg) total current mass, produced by the engines. The actual acceleration vector is found by adding the gravity vector to the vector representing the thrust per kg.
Eccentric orbits: Orbits of celestial bodies that deviate markedly from a circle.
K-T impact: The Cretaceous-Tertiary (K-T or KT) extinction event, also known as the KT boundary, which was a period of massive extinction of species, about 65.5 million years ago. It corresponds to the end of the Cretaceous Period and the beginning of the Tertiary Period in Earth's geological history.
Geostationary transfer orbit (GTO): An intermediate orbit between a low Earth orbit (LEO) and a geosynchronous orbit. A geosynchronous orbit is a geocentric orbit that has the same orbital period as the sidereal (stellar referenced) rotation period of the Earth. A geostationary transfer orbit is used to move a satellite from LEO into a geostationary orbit.
Gravity assist: Also known as swingby or flyby; an interplanetary space vehicle maneuver whereby the vehicle closely approaches a target planet without impacting the planet or going into an orbit around it. The vehicle uses the energy obtained from the planet's gravitational field to change the speed or shape of the spacecraft's orbit.
Low Earth orbit (LEO): A circular orbit around Earth between the atmosphere and the Van Allen radiation belt, with a low angle of inclination. These boundaries are not firmly defined, but are typically around 350-1400 km above the Earth's surface, with inclination angles less than 60 degrees from the equator. This is generally below intermediate circular orbit (ICO) and far below geostationary orbit. Orbits lower than this are not stable, and will decay rapidly because of atmospheric drag. Orbits higher than this are subject to early electronic failure because of intense radiation and charge accumulation.
Libration point: Any one of five points in the orbital plane of two massive particles (such as planets or moons) in circular orbits around a common center of gravity (such as the Sun), where a third particle of negligible mass (such as a spacecraft) can remain in equilibrium.
Meteoroid: Any solid object moving in interplanetary space that is smaller than a planet or asteroid but larger than a molecule.
Momentum: For a single nonrelativistic particle, the product of the mass and the velocity of a particle.
Near-Earth asteroids (NEAs): A subset of the NEOs, are asteroids whose orbit intersects Earth's orbit and which may therefore pose a collision danger, as well as being most easily accessible for spacecraft from Earth.
Near-Earth objects (NEOs) : Celestial bodies whose orbits are nudged into Earth's neighborhood by the gravitational attraction of other planets.
Newton: The unit of force in the meter-kilogram-second system, equal to the force which will impart an acceleration of 1 meter per second squared to a mass of one kilogram (the mass of the International Prototype Kilogram). Symbolized as N.
Perihelion: That orbital point nearest the Sun when the Sun is the center of gravitational attraction.
Parking orbits: Temporary Earth orbits during which space vehicles are checked out and their trajectories carefully measured to determine the amount and time of increase in velocity required to send them into a final orbit or into space in the desired direction.
Semi-major axis: Either of the equal line segments into which the major axis of an ellipse is divided by the center of symmetry.
Specific impulse: A performance parameter of a rocket propellant, expressed in seconds, equal to the thrust in pounds divided by the weight flow rate in pounds per second. Also known as specific thrust.
Torino Scale: An important tool for categorizing the Earth impact hazard associated with newly discovered NEOs, equivalent to the "Richter Scale" but for NEOs. This scale was created by Professor Richard P. Binzel at the Massachusetts Institute of Technology and revised at an international conference on NEOs held in Torino, Italy, in June 1999. The Torino scale utilizes numbers that range from 0 to 10, where 0 indicates an object that has a zero or negligibly small chance of collision with the Earth, or that is too small to penetrate the Earth's atmosphere intact in the event that a collision does occur. A 10 indicates that a collision is certain, and the impacting object is so large that it is capable of precipitating a global disaster. An object is assigned a value based on its collision probability and its kinetic energy (proportional to its mass times the square of its encounter velocity).
Tsunami: A long-period sea wave produced by a seaquake or volcanic eruption; it may travel for thousands of miles. Also known as seismic sea wave.
Trans-Neptunian object (TNO): Any object in the solar system which orbits the Sun at a greater distance on average than the planet Neptune. The Kuiper belt, Scattered disk and Oort cloud are names for three divisions of this volume of space. The planet Pluto and its moon Charon are trans-Neptunian objects, and if Pluto had been discovered today, it might not have been called a planet.
Volatiles: Matter which is readily passed off by evaporation. | <urn:uuid:018e7196-92d1-4976-8342-677b36a27c90> | 3.578125 | 1,663 | Structured Data | Science & Tech. | 38.218831 |
Try it with (S,S).
The cis isomer has one axial and one equatorial bromine. Only the axial bromine has the necessary anti arrangement for elimination.
The trans isomer does have the diaxial conformation, albeit unstable, available to it. Now both bromines are anti to a hydrogen and can eliminate. Try drawing the structure.
The cis isomer has the bromine in an axial position, as the t-butyl group will be equatorial due to steric considerations. In order to get the bromine axial in the trans isomer, the t-butyl group must also occupy an axial position, which is sterically hindered – hence the difference in rate.
Two units of unsaturation
The management feels that the compound might be useful as a pesticide, but they need to know its structure. You have been called in as a consultant at a handsome fee. Compound "A", when treated with KOH in ethanol, yields two compounds "B" and "C", each with molecular formula C10H16.
Three units of unsaturation now present. A dehydrobromination has occurred to give a new double bond.
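(For reference — this formula is standard and is not part of the original problem text — the unsaturation counts follow from the degree-of-unsaturation calculation, taking compound "A" as C10H17Br since loss of HBr gives C10H16:)

$$\mathrm{DoU} = \frac{2C + 2 + N - H - X}{2}$$

$$\text{A, } \mathrm{C_{10}H_{17}Br}:\ \frac{2(10)+2-17-1}{2} = 2 \qquad \text{B and C, } \mathrm{C_{10}H_{16}}:\ \frac{2(10)+2-16}{2} = 3$$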
Compound "A" rapidly reacts in aqueous ethanol to give an acidic solution which, in turn, gives a precipitate of AgBr when tested with AgNO3 solution.
Suggests a tertiary bromide which forms a tertiary carbocation and bromide ion
Ozonolysis of "A" followed by treatment with (CH3)2S gives (CH3)2C=O as one of the products plus an unidentified halogen-containing product. The original hydrocarbon skeleton therefore contains a (CH3)2C= group.
Hydrogenation of either "B" or "C" gives a mixture of both trans- and cis-1-isopropyl-4-methylcyclohexane.
This is the carbon skeleton of "A"
Compound "A" reacts with one equivalent of Br2 to give a mixture of two separable compounds "D" and "E", both of which can be shown to be achiral. Finally, ozonolysis of "B" gives (CH3)2C=O and diketone:
Propose structures for compounds A through E.
Home | Faculté Saint-Jean | University of Alberta | Chemistry Department
This page is maintained by Dr. Ed Blackburn (Ed.Blackburn@UAlberta.CA), course instructor. Updated August 23, 2002
Taxonomic name: Orthotomicus erosus (Wollaston)
Common names: European bark beetle, Mediterranean pine engraver beetle
Organism type: insect
Orthotomicus erosus is an engraver beetle of the family Scolytidae. It is being introduced around the world, often due to the wood packaging material used in the shipment of textiles and other products. Orthotomicus erosus is a carrier for pathogenic fungi and is known to carry Sphaeropsis sapinea, which causes extensive mortality of many Pinus spp.
Cavey et al. (2004) reports that the length of Orthotomicus erosus is generally between 2.7 and 3.5mm. It is reddish brown in colour. The anterior portion of the pronotum (the region of an insect's body immediately behind the head) on this species is asperate (rough with points or projections). The elytral declivity (downward slope of the modified forewings of beetles serving as protective coverings for the hindwings) is also moderately concave with lateral spines or teeth on it. Please see Cavey et al. (2004) for aid in identification.
Ips latidens, Ips pini, Orthotomicus caelatus
natural forests, planted forests
Campbell (2004) states that, "O. erosus primarily attack pine species (Pinus) but can also occur on Douglas-fir (Pseudotsuga menziesii), spruce (Picea), fir (Abies), and cedar species (Cedrus). The beetle infests recently fallen trees, slash, and stressed living trees."
Campbell (2004) states that, "As with other bark beetles, one of the major dangers from O. erosus is the transmission of pathogenic fungi, including blue stain fungi such as Ophiostoma minus." Wylie (2000) states that, "The fungus Sphaeropsis sapinea has caused extensive mortality of Pinus spp. following hail damage in South Africa, and Zwolinski et al. (1990) have estimated that losses of US$ 3.2 million per year have been incurred. Damage due to Sphaeropsis dieback is often exacerbated through infestation of trees by the weevil Pissodes nemorensis and Orthotomicus erosus."
Native range: Asia and Europe (Campbell, 2004).
Known introduced range: North Africa, North America, South America, and the South Pacific (Haack, 2001; Campbell, 2004; Ramsden et al. 2002).
Introduction pathways to new locations
Solid wood packing material: Orthotomicus erosus has most commonly entered the United States from other countries in the wood crating of various goods, such as tiles, marble, and granite (Haack, 2001).
Integrated management: Henin and Paiva (2004) state that, "Management of bark beetle populations, such as O. erosus can only be achieved by adopting an integrated approach. Among preventive measures, this approach must combine "prophylactic" silviculture practices with an enhancement of their natural enemies, some of which have been shown to exert a significant impact upon bark beetle populations."
Chemical: In field experiments, Klimetzek and Vite (1986) were able to lure O. erosus into traps baited with a combination of the beetle-produced compounds 2-methyl-3-buten-2-ol and ipsdienol. The authors state that, "When offered along with 2-methyl-3-buten-2-ol, an up to 1000-fold increase in concentration of racemic ipsdienol led to a continual increase in catch of O. erosus and Ips sexdentatus, accompanied by a steady increase in the percentage of females. It is assumed that 2-methyl-3-buten-2-ol influences landing behaviour of O. erosus, while ipsdienol acts as a long distance signal".
Mechanical: In South Africa, Wylie (2000) reports that, "Sanitation felling and removal of Rhizina-infected older trees is necessary to prevent build-up of O. erosus."
Biological: Tribe and Kfir (2001) have been studying Dendrosoter caenopachoides, which was introduced into South Africa for the biological control of O. erosus.
Campbell (2004) states that, "While beetles inhabit non Pinus species, beetle reproduction is limited to infestations in pine species."
Reviewed by: Prof. Dr. Maria Rosa Paiva DCEA, Faculdade de Ciências e Tecnologia Universidade Nova de Lisboa Portugal
Principal sources: Campbell (2004)
Compiled by: National Biological Information Infrastructure (NBII) & IUCN/SSC Invasive Species Specialist Group (ISSG)
Last Modified: Monday, 29 August 2005 | <urn:uuid:fa6a43e9-15b9-4c96-a50b-7363cce8129f> | 2.90625 | 1,082 | Knowledge Article | Science & Tech. | 34.286118 |
Thu June 21, 2012
A Final Voyage, Into The Wild Black Yonder
Originally published on Wed April 10, 2013 3:16 pm
When Voyager I and II left Earth, Jimmy Carter was president, platform shoes were all the rage and moviegoers were still discovering a summer blockbuster called Star Wars.
Some 35 years later, the spacecraft have traveled farther than anything ever built by humans. Now there is evidence one of the plucky probes may soon cross the undulating boundary between the absolute edge of our solar system and the terra incognita of interstellar space.
"It's not that clear because there's no signpost telling you that you're now leaving the solar system, but the evidence is mounting that we're getting really close," says Arik Posner, a Voyager program scientist at NASA's headquarters in Washington, D.C.
That boundary is a mysterious place called the heliopause, where scientists believe the solar wind — a stream of charged particles spewed out by the sun — fizzles out completely. Call it the cosmic doldrums, or perhaps even the heavenly horse latitudes. There are tantalizing signs that Voyager I, now some 11 billion miles from home, is nearly there. (Voyager II, which launched first, is about 2 billion miles behind its twin.)
Analyzing the heliopause and what lies just beyond is expected to be the Voyagers' last feat before they go black.
Like Columbus Crossing The Atlantic
For Posner, who is crunching data from a spacecraft that left Earth when he was still in grade school, the notion of humans leaving the solar system for the first time is extraordinary.
"Humanity eventually leaves the material that is constantly being expelled by the sun. I would compare it to the crossing of the Atlantic by Columbus," he says.
After launch in 1977, the Voyagers sent back the first close-up pictures of Jupiter, Saturn, Uranus and Neptune, and they have managed to keep working well beyond their shelf life. No one expected them to get this far, Posner says.
"But now that they've made it, we are extremely excited to find out what's out there," he says.
The program might never have happened had it not been for two employees working at the Jet Propulsion Laboratory in the 1960s. The first was a UCLA graduate student named Michael Minovitch, who discovered that planetary gravity could be used to slingshot a spacecraft into deep space — something that had seemed hopeless using conventional rockets of the day. The second was Gary Flandro, who realized that a rare time window was about to open that would make it possible to visit Jupiter, Saturn, Uranus and Neptune in a single mission.
"We're talking about the 1960s, when we had pretty weak rockets," says Stephen J. Pyne, author of Voyager: Exploration, Space, and the Third Great Age of Discovery. "The only way to get to the outer planets was to have some sort of boost."
The 'Grand Tour'
A mission to embark on this "Grand Tour" had to be launched between 1976 and 1979. The next opportunity wouldn't come around for another 175 years.
"You're either going to do it now or your great-grandchildren are going to do it," says Pyne, who is also a life sciences professor at Arizona State University.
The Grand Tour, though ultimately scaled back, fundamentally changed our view of the outer solar system. Pyne recalls taking an undergraduate astronomy course in 1970: "I think there were about two pages in the standard astronomy book on the outer planets. That was it. They had some murky black and white photos and a brief description of the orbital mechanics. There was just nothing. Nobody knew anything."
By the time Voyager I reached the Jupiter system — including its four large inner moons, Io, Europa, Ganymede and Callisto — in early 1979, enough data were streaming in to fill volumes of future astronomy texts, says Fran Bagenal, a professor of astrophysical and planetary sciences at the University of Colorado, Boulder.
"It wasn't until we got up close and saw the volcanoes on Io, the geysers spewing out sulfur dioxide, all the geology on Ganymede and the impact craters on Callisto that we really realized that these moons were totally different worlds orbiting these fabulous gas giants that had clouds and weather," says Bagenal, who worked on Voyager as well as later planetary missions.
"There was so much that we saw for the first time with the Voyager spacecraft," she says.
Postcards From Space
Voyager I also snapped a few images of its home planet as it hurtled toward deep space — the first vehicle to take a full-frame shot of the Earth and moon together. At the urging of famed astronomer Carl Sagan, it also took a picture that came to be known as the "pale blue dot."
While some of the Voyagers' instruments are dead, their radioactive batteries still have some life left in them. At some point in the next 10 to 15 years, presumably well after both probes have crossed into interstellar space, they will go out not with a bang, but a whimper.
Pyne, for one, hopes the Voyagers will take one last picture before that happens — an image that shows a faint sun and might become as iconic as the pale blue dot.
"I don't know if there's enough power, but I sort of hope they might have enough for one of them to turn around and take a snapshot before it goes," Pyne says.
"Sort of a final postcard mailed to Earth: 'Here we are, wish you well.' " | <urn:uuid:0fc7d792-1c17-4860-ae26-35828bab69a9> | 3.40625 | 1,178 | Truncated | Science & Tech. | 49.0806 |
May 17, 2013 — Generations of physicists have claimed that time is an illusion. But not all agree. In his book Time Reborn: From the Crisis in Physics to the Future of the Universe, theoretical physicist Lee Smolin argues that time exists—and he says time is key to understanding the evolution of the universe.
May 10, 2013 — The SETI (Search for Extraterrestrial Intelligence) Institute's Jill Tarter has spent decades searching for the signals that would tell us we aren't alone in the cosmos. Tarter discusses the hunt, and what the presence of intelligent life elsewhere might tell us about our own future on Earth.
May 10, 2013 — Saul Perlmutter shared the 2011 Nobel Prize in physics for his discovery that the universe was expanding at an accelerating rate. Perlmutter explains how supernovae and other astronomical artifacts are used to measure the expansion rate, and explains what physicists are learning about "dark energy" — the mysterious entity thought to be driving the acceleration.
May 7, 2013 — Turns out our solar system — with its medium sized sun, its four small rocky planets, its four big gassy ones farther out — isn't like the others. We are unusual. Very unusual. Says one prominent astronomer, we are "a bit of a freak."
Apr 26, 2013 — The James Webb Space Telescope will succeed Hubble in 2018, boasting modern computers and a mirror with seven times the viewing area. Bob Hellekson, ATK Program Manager for the telescope, discusses the telescope's newly constructed wings, designed to support the telescope's folding mirror, and astrophysicist Stacy Palen talks about what the telescope may reveal about the cosmos. | <urn:uuid:640e97c7-663b-47c4-9378-168d172874f4> | 3.109375 | 338 | Content Listing | Science & Tech. | 41.450324 |
As coal is burned, thorium-232 (232Th) and uranium-238 (238U) are released as exhaust products in coal ash. What could be done with these isotopes if they were recovered? At least one scenario is readily apparent.
Because atoms of 232Th and 238U do not split, or "fission," when bombarded with slow (thermal) neutrons, they are referred to as "fertile," rather than fissionable, materials_materials that can be used to "breed" nuclear fuel by the addition of a neutron to each atomic nucleus. For example, when the nucleus of a thorium atom absorbs a neutron, it becomes 233Th, which decays in relatively short order (through protactinium-233) to 233U, a nuclear fission fuel. Similarly, plutonium-239 (239Pu), an efficient fuel for both reactors and nuclear weapons, can be bred by the capture of neutrons from fissioning Uranium-235 (235U) in a blanket of 238U.
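(Written out explicitly — the intermediate isotopes and approximate half-lives are standard values supplied here for context, not part of the original text:)

$$^{232}\mathrm{Th}\,(n,\gamma)\,^{233}\mathrm{Th} \xrightarrow{\ \beta^-,\ \sim 22\ \mathrm{min}\ } {}^{233}\mathrm{Pa} \xrightarrow{\ \beta^-,\ \sim 27\ \mathrm{d}\ } {}^{233}\mathrm{U}$$

$$^{238}\mathrm{U}\,(n,\gamma)\,^{239}\mathrm{U} \xrightarrow{\ \beta^-,\ \sim 24\ \mathrm{min}\ } {}^{239}\mathrm{Np} \xrightarrow{\ \beta^-,\ \sim 2.4\ \mathrm{d}\ } {}^{239}\mathrm{Pu}$$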
A potential source of the neutrons required to breed nuclear fuels from these isotopes is the fission of 235U--the reaction that powers nuclear power plants. The fission of each 235U nucleus releases 2 or 3 neutrons that either produce more fissions, breed new fuel through capture in fertile materials, or decay into a proton, an electron, and an anti-neutrino. In a "breeder" reactor environment 238U or 232Th can capture enough of these neutrons to breed more fissionable material than is consumed during fission of the original 235U fuel in the reactor.
Typical nuclear power plants rely on the heat produced from the splitting of 235U and heat from its "daughters," radioactive elements formed in the process. This heat converts the water circulating through the reactor to steam, which drives turbines for generating electricity. The same process could be fueled by the fission of 233U or 239Pu, isotopes that could be bred from the discarded leftovers of coal combustion.
At least 73 elements found in coal-fired plant emissions are distributed in millions of pounds of stack emissions each year. They include: aluminium, antimony, arsenic, barium, beryllium, boron, cadmium, calcium, chlorine, chromium, cobalt, copper, fluorine, iron, lead, magnesium, manganese, mercury, molybdenum, nickel, selenium, silver, sulfur, titanium, uranium, vanadium, and zinc.
| <urn:uuid:440e2eef-8cd9-419b-93df-e78b36541b76> | 4.15625 | 531 | Knowledge Article | Science & Tech. | 35.834799 |
Data reported by the weather station: 82230 (LEVS)
Latitude: 40.38 | Longitude: -3.78 | Altitude: 690
Weather Madrid / Cuatro Vientos
To calculate annual averages, we analyzed data from 363 days (99.45% of the year).
If 10 or more days of data are missing for an average or annual total, that value is not displayed.
A total rainfall value of 0 (zero) may indicate that no measurement was made and/or that the weather station does not report rainfall.
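(A minimal sketch of how an annual summary with this missing-data rule can be computed from daily records; the field layout and values below are illustrative, not the station's actual data format.)

```python
# Aggregate daily weather records into an annual mean, suppressing the
# statistic when 10 or more days are missing (the rule stated above).

DAYS_IN_YEAR = 365
MAX_MISSING = 9  # 10+ missing days -> statistic not displayed

def annual_mean(daily_values):
    """Mean of the available daily values, or None if too many are missing."""
    present = [v for v in daily_values if v is not None]
    if DAYS_IN_YEAR - len(present) > MAX_MISSING:
        return None
    return sum(present) / len(present)

# Example: a year of temperatures with two missing days (363 days of data).
temps = [15.0] * 180 + [None, None] + [16.0] * 183
print(annual_mean(temps))  # ~15.5, computed from the 363 available days
```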
|Metric|Value|Days of data|
|Annual average temperature:|15.4°C|363|
|Annual average maximum temperature:|20.0°C|363|
|Annual average minimum temperature:|9.5°C|363|
|Annual average humidity:|57.5%|361|
|Annual total precipitation:|396.00 mm|360|
|Annual average visibility:|9.9 km|363|
|Annual average wind speed:|9.8 km/h|363|
Number of days with extraordinary phenomena.
|Total days with rain:|64|
|Total days with snow:|3|
|Total days with thunderstorm:|8|
|Total days with fog:|24|
|Total days with tornado or funnel cloud:|0|
|Total days with hail:|1|
Days of extreme historical values in 1982
The highest temperature recorded was 39°C on July 7.
The lowest temperature recorded was -7°C on December 31.
The maximum wind speed recorded was 61.1 km/h on March 27. | <urn:uuid:09b1293a-84d0-4005-9a2e-c0a258249911> | 2.6875 | 367 | Structured Data | Science & Tech. | 71.012436 |
Wednesday, May 13, 2009
...Thus, the population effective dose equivalent from coal plants is 100 times that from nuclear plants
Radioactive material is everywhere & if you dig stuff out of the ground you are going to dig up radioactives & if you burn it you are going to have smoke with radioactivity in it.
Indeed "the fly ash emitted by a power plant—a by-product from burning coal for electricity—carries into the surrounding environment 100 times more radiation than a nuclear power plant producing the same amount of energy." Here is a comparison of the amount of radioactive exposure we are subject to.
By this standard the exposure from coal burning plants will be another 10%.
Studies of radioactive radon in homes have been done & consistently found a small hormetic effect (smoking being a much larger masking effect for obvious reasons). Indeed they have been done repeatedly because they keep coming up with the same "wrong" answer. Clearly if the effect of radon is at least 550 times greater than that of nuclear power generation & 180 times greater than all the nuclear bombs ever tested & yet it is actually beneficial there is no possible case for saying commercial nuclear power, or even radioactivity from coal power is in any way harmful.
I also wrote here about radiation hormesis a few days ago.
MIT Technology Review: Metamaterials have already been used to guide and direct electromagnetic waves in unusual ways. Now Carles Navau of the Autonomous University of Barcelona and his colleagues have shown that a static magnetic field can be manipulated in a similar way. Their design consists of a 7-cm-long tube made of a series of concentric rings, which was filled with a ferromagnetic alloy. At one end of the tube they generated a 1.3-mT magnetic field. A crack farther down the tube allowed the magnetic field to escape. When they measured the field escaping, they found it to be 0.8 mT in strength. That was significantly greater than the field strength at that distance from the source without the tube. Navau suggests that the ability to project magnetic fields over longer distances might be useful in quantum computing, where they are needed for manipulating quantum bits.
Ars Technica: For an energy grid with a network of electrical generators to remain stable, the generators must balance their contribution or else one generator will become overloaded. When the system is stable, much of that balancing is done automatically; when one generator is slowed by an increased load, the others speed up. However, the coupling can result in other generators responding to the adjustments and compounding the instability. When the system is less stable, operators actively adjust the generators’ output to return to stability. Adilson Motter of Northwestern University in Illinois and his colleagues examined several real-world power systems to determine if there was a way to allow more passive maintenance of the system stability. They found that by incorporating banks of capacitors and inductors that are automatically activated when the load on the generator increases, the phase of the current could be reestablished, thereby ensuring that the generator remains synchronized with the others in the network and decreasing the likelihood of spreading instability. The researchers believe that could work in the real world as well, since they applied the system to models of actual power grids.
Science: When Isaac Newton defined his law of gravity, he used it to determine that two bodies orbiting each other will create an ellipse. It took more than 200 years before a German mathematician, Heinrich Bruns, determined that there was no general solution to describe the path of three bodies orbiting each other in a repeated pattern: Only specific solutions are possible. Since Bruns’s first solution, only two other families of orbits that solve the “three-body problem” have been found. Now, Milovan Šuvakov and Veljko Dmitrašinović of the University of Belgrade in Serbia have used computer simulations to define an additional 13 unique solutions. Starting their simulations with the known solutions, they systematically adjusted the initial conditions until a new solution was found. Surprised by how many solutions they discovered, they had to create a new classification system for the solutions. They developed a “shape-sphere” that depicts where the bodies cannot go in their orbits and determines the relative distances between the bodies. Then the bodies were sorted based on symmetry and other characteristics. The next step will be to determine the stability of the solutions to see if any of the systems may be seen in observations of astronomical objects.
Nature: New research indicates that not everything on a quantum level exhibits quantum behavior. Wires just a few nanometers wide have now been shown to conduct electricity in the same way as the larger components of existing devices. Michelle Simmons, a physicist and director of the Centre for Quantum Computation and Communication Technology at the University of New South Wales in Sydney, Australia, and her colleagues made atomic-scale wires of phosphorous-doped silicon in which the phosphorous provided the extra electrons needed to generate a current, writes Edwin Cartlidge for Nature. Although the width of the wires varied from 1.5 to 11 nm, the resistivity did not differ substantially, thus obeying Ohm’s law of classical electronics. David Ferry, an electrical engineer at Arizona State University in Tempe, noted the importance of the finding to such devices as transistors, which every two years have been shrinking in size yet yielding ever-better performance—a trend known as Moore’s law. If quantum coherence came into play, he said, the transistors wouldn’t turn on and off as expected. Therefore, the new research could have significant implications for the microchip industry. What the implications will be for quantum computing, however, remains to be seen.
Nature: Although self-sustaining dynamos occur readily in stars and planets, none has yet been achieved in the lab. That may change next year when a project at the University of Maryland, College Park, is scheduled to go on line. Housed in a cavernous warehouse at the university, the Three Meter Experiment consists of a 3-meter-diameter ribbed sphere, inside of which is a 1-meter sphere surrounded by thousands of kilograms of liquid sodium heated to about 105 °C. When the device is turned on, it will whirl around and churn the electrically conducting fluid, which researchers hope will generate a self-sustaining electromagnetic field similar to Earth’s. The project could shed light on how rotational forces in Earth’s core deflect flows of electrically conducting liquid into a configuration that produces a magnetic field with north and south poles, writes Susan Young for Nature.
Guardian: Light bulbs could soon be used to broadcast wireless internet. Harald Haas of the UK’s Edinburgh University has been working on a revolutionary method of data transmission that makes use of light waves rather than wires or radio waves. Using LEDs, which are more efficient than standard light bulbs and can be switched on and off very quickly, he has found that he can vary the intensity of their output and pick up the signals with a simple receiver. With data rates of 100 megabits per second, Haas’s system relies on the fact that the human eye cannot detect the rapid flickering on and off of the LEDs—instead they appear to maintain a normal steady glow. Besides faster transmission capabilities, such a device would also have applications in the oil and gas industries, where radio waves can cause sparks, and for underwater robotic vehicles and submarines, where the electrically conductive salt water stifles radio waves.
BBC: Scientists have achieved a huge engineering feat by building the world’s most powerful “split magnet,” made in two halves with holes in the middle for observing experiments. Operating at 25 tesla, which is equivalent to 500 000 times the strength of Earth’s magnetic field, the magnet is 43% stronger than its predecessor, built in 1991, and has 1500 times more space inside to carry out tests. “The split magnet is essentially like two magnets brought close together, but kept a few centimeters apart to provide open pathways to the sample,” said Gregory Boebinger, head of the National High Magnetic Field Laboratory at Florida State University. “The spectacular engineering achievement with the magnet is the ability to maintain the very high magnetic field without having the two halves slam together.” Another of the researchers, Eric Palm, added, “Discoveries made here will enable researchers to improve their materials and use them to make improved products such as solar cells or semiconductors for the next generation of computers.” | <urn:uuid:caa91d83-f028-4af9-b260-4bb2c7bc1ec9> | 3.3125 | 1,495 | Content Listing | Science & Tech. | 29.859859 |
How many detectable alien civilizations are out there in our galaxy? In 1961, astronomer Frank Drake developed an equation to estimate the number. Now data journalist David McCandless, who gave the talk “The beauty of data visualization” at TEDGlobal 2010, has created an information graphic for the BBC calculating the Drake Equation -- with a twist. It’s interactive, and you can be as optimistic or skeptical as you like as you set the value of each variable in the equation.
Visualizing the possibility of intelligent life in the Milky Way | <urn:uuid:31bec73c-c804-42b1-903d-14e855265f40> | 3.140625 | 111 | Personal Blog | Science & Tech. | 29.8375 |
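(A compact version of the calculation behind such an interactive graphic — a sketch with illustrative parameter values, not McCandless's actual figures.)

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
# All parameter values below are placeholders; the point of the interactive
# graphic is that you choose your own, optimistic or skeptical.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimated number of detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

optimist = drake(r_star=7, f_p=0.5, n_e=2, f_l=0.33, f_i=0.1, f_c=0.1,
                 lifetime=10_000)
skeptic = drake(r_star=1, f_p=0.2, n_e=0.1, f_l=0.01, f_i=0.01, f_c=0.01,
                lifetime=500)

print(f"optimistic: {optimist:.0f} civilizations")
print(f"skeptical:  {skeptic:.6f} civilizations")
```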
Allan Macrae has posted an interesting study at ICECAP. In the study he argues that the changes in temperature (tropospheric and surface) precede the changes in atmospheric CO2 by nine months. Thus, he says, CO2 cannot be the source of the changes in temperature, because it follows those changes.
Being a curious and generally disbelieving sort of fellow, I thought I’d take a look to see if his claims were true. I got the three datasets (CO2, tropospheric, and surface temperatures), and I have posted them up here. These show the actual data, not the month-to-month changes.
In the Macrae study, he used smoothed datasets (12-month average) of the month-to-month change in temperature (∆T) and CO2 (∆CO2) to establish the lag between the change in CO2 and temperature. Accordingly, I did the same. My initial graph of the raw and smoothed data looked like this:
Figure 1. Cross-correlations of raw and 12-month smoothed UAH MSU Lower Tropospheric Temperature change (∆T) and Mauna Loa CO2 change (∆CO2). Smoothing is done with a Gaussian average, with a “Full Width to Half Maximum” (FWHM) width of 12 months (brown line). Red line is correlation of raw (unsmoothed) data. Black circle shows peak correlation.
At first glance, this seemed to confirm his study. The smoothed datasets do indeed have a strong correlation of about 0.6 with a lag of nine months (indicated by the black circle). However, I didn’t like the looks of the averaged data. The cycle looked artificial. And more to the point, I didn’t see anything resembling a correlation at a lag of nine months in the unsmoothed data.
Normally, if there is indeed a correlation that involves a lag, the unsmoothed data will show that correlation, although it will usually be stronger when it is smoothed. In addition, there will be a correlation on either side of the peak which is somewhat smaller than at the peak. So if there is a peak at say 9 months in the unsmoothed data, there will be positive (but smaller) correlations at 8 and 10 months. However, in this case, with the unsmoothed data there is a negative correlation for 7, 8, and 9 months lag.
Now Steve McIntyre has posted somewhere about how averaging can actually create spurious correlations (although my google-fu was not strong enough to find it). I suspected that the correlation between these datasets was spurious, so I decided to look at different smoothing lengths. These look like this:
Figure 2. Cross-correlations of raw and smoothed UAH MSU Lower Tropospheric Temperature change (∆T) and Mauna Loa CO2 change (∆CO2). Smoothing is done with a Gaussian average, with a “Full Width to Half Maximum” (FWHM) width as given in the legend. Black circles shows peak correlation for various smoothing widths.
Note what happens as the smoothing filter width is increased. What start out as separate tiny peaks at about 3-5 and 11-14 months end up being combined into a single large peak at around nine months. Note also how the lag of the peak correlation changes as the smoothing window is widened. It starts with a lag of about 4 months (2-month and 6-month smoothing). As the smoothing window increases, the lag increases as well, all the way up to 17 months for the 48-month smoothing. Which one is correct, if any?
To investigate what happens with random noise, I constructed a pair of series with similar autoregressions, and I looked at the lagged correlations. The original dataset is positively autocorrelated (sometimes called “red” noise). In general, the change (∆T or ∆CO2) in a positively autocorrelated dataset is negatively autocorrelated (sometimes called “blue noise”). Since the data under investigation is blue, I used blue random noise with the same negative autocorrelation for my test of random data.
This was my first result using random data:
Figure 3. Cross-correlations of raw and smoothed random (blue noise) datasets. Smoothing is done with a Gaussian average, with a “Full Width to Half Maximum” (FWHM) width as given in the legend. Black circles show peak correlations for various smoothings.
Note that as the smoothing window increases in width, we see the same kind of changes we saw in the temperature/CO2 comparison. There appears to be a correlation between the smoothed random series, with a lag of about 7 months. In addition, as the smoothing window widens, the maximum point is pushed over, until it occurs at a lag which does not show any correlation in the raw data.
After making the first graph of the effect of smoothing width on random blue noise, I noticed that the curves were still rising on the right. So I graphed the correlations out to 60 months. This is the result:
Figure 4. Rescaling of Figure 3, showing the effect of lags out to 60 months.
Note how, once again, the smoothing (even for as short a period as six months, green line) converts a non-descript region (say lag +30 to +60, right part of the graph) into a high correlation region, by the lumping together of individual peaks. Remember, this was just random blue noise; none of these represent real lagged relationships despite the high correlation.
My general conclusion from all of this is to avoid looking for lagged correlations in smoothed datasets, they’ll lie to you. I was surprised by the creation of apparent, but totally spurious, lagged correlations when the data is smoothed.
And for the $64,000 question … is the correlation found in the Macrae study valid, or spurious? I truly don’t know, although I strongly suspect that it is spurious. But how can we tell?
My best to everyone, | <urn:uuid:37ce5eaf-8b45-4213-a5d6-2ae85fde2fc6> | 2.84375 | 1,318 | Personal Blog | Science & Tech. | 55.173388 |
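(A self-contained sketch of the effect described above — generate two independent "blue noise" series, smooth them with a Gaussian window, and watch a spurious lagged correlation appear. All parameters are illustrative, not the values from the post.)

```python
import numpy as np

rng = np.random.default_rng(0)

def blue_noise(n, phi=-0.4):
    """AR(1) series with negative autocorrelation ('blue' noise)."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.standard_normal()
    return x

def gaussian_smooth(x, fwhm):
    """Gaussian moving average with the given full-width-at-half-maximum."""
    sigma = fwhm / 2.355  # FWHM = 2*sqrt(2*ln 2)*sigma
    k = np.arange(-3 * int(sigma) - 1, 3 * int(sigma) + 2)
    w = np.exp(-0.5 * (k / sigma) ** 2)
    return np.convolve(x, w / w.sum(), mode="same")

def lagged_corr(x, y, lag):
    """Correlation of x against y shifted by `lag` samples."""
    if lag > 0:
        return np.corrcoef(x[lag:], y[:-lag])[0, 1]
    return np.corrcoef(x, y)[0, 1]

a, b = blue_noise(360), blue_noise(360)  # two *independent* monthly series
for fwhm in (0, 12, 24):
    s_a = gaussian_smooth(a, fwhm) if fwhm else a
    s_b = gaussian_smooth(b, fwhm) if fwhm else b
    best = max(range(25), key=lambda lag: abs(lagged_corr(s_a, s_b, lag)))
    peak = abs(lagged_corr(s_a, s_b, best))
    print(f"FWHM={fwhm:2d} months: peak |corr|={peak:.2f} at lag {best}")
```

Wider smoothing windows typically inflate the peak lagged correlation between the two series even though, by construction, no real relationship exists.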
A group of distinguished scientists has issued a report confirming the theory of man-made global warming. Many of those same scientists are now holding their breath, hoping that a major research project in Antarctica will, for the first time, prove the existence of man-made global warming.
To the ordinary non-scientist, these two statements may seem a bit contradictory, but apparently not to the members of the National Academy of Sciences panel charged with reviewing the evidence for climate change and making recommendations for addressing its effects. These scientists, it would appear, are so committed to the ideology of man-made global warming that they are willing to issue a definite opinion in advance of compelling new research that might debunk their conclusion. But then, timing is everything, and the academy's 869-page report, requested by Democrats in 2008, has been issued just ahead of the left's attempt to ram a cap-and-trade bill through Congress.
Predictably, the NAS report confirms findings contained in the much-criticized 2007 IPCC report. The NAS panel believes that climate change is "largely" the result of human factors and that the consequences are even worse than those suggested by the IPCC. The NAS scientists believe that by 2100, sea levels could rise as much as ten times more than previously thought.
That is quite a leap in just three years, but then the NAS panel was charged not just with surveying the scientific literature surrounding global warming -- it was told to arrive at definite policy recommendations, and it was not shy about doing so. Those recommendations include the goal of reducing U.S. carbon dioxide emissions to 64% of current levels by 2050 -- a 36% cut -- despite the fact that in the next forty years, America's population will increase to 400 million, and the world will still rely on fossil fuels for most of its energy needs. As the International Energy Agency asserts in its prognosis for 2030, "oil will remain the world's main source of energy for years to come." Regardless of these facts, scientists on the NAS panel believe that a 36% reduction is feasible. They have not pointed out how. | <urn:uuid:218f4ac0-4182-4718-a361-a018758eb9d2> | 2.734375 | 426 | Personal Blog | Science & Tech. | 43.602579 |
Using Embedded Styles
In this section, we continue to construct and embellish our shopping list of cheeses in the file index.html.
Select the text for another list item, and reopen the CSS Properties dialog (via Panels, Style Properties as before, or via Alt+p then Alt+s, or by selecting the "Set CSS styles" icon slightly to the right of the middle of the icon bar), but this time change the selection to "Apply styles to:" "all elements of same type...". Select a slightly cheesier orange-yellow hue this time, and then select the OK command button.
In the "wysiwyg" view of the main BlueGriffon window (or in the web browser preview, after you Save the file again, and reload the view in the browser), you can see the background color for all of the list items except one has been changed. This illustrates one aspect of the hierarchical overrides in CSS; an inline style which you apply to a given element (e.g. list item) takes precedence over an embedded style which is applied to all elements of that type (e.g. all list items within a given file).
Ah, but our goal had been to only highlight the items on the list which we feared might not ever be available at this particular cheese shop, rather than every list item. CSS includes a powerful concept of selectors, which we can utilize if we supply a class for the items we would like to style similarly. It is wise to use class names which convey meaning, or the significance of the distinction. So, for example, rather than naming our class "cheese-yellow-orange" we will name it "no" (since that was the simple response to our inquiry about this item last week). Then for example, if on next week's edition of our shopping list we begin to lose all hope of finding these items locally, we could change the background color for all items of class "no" to be gray. (We could even change their "Visibility" to be "hidden".) Such changes would leave more confusion in their wake if the class name simply echoed an initial description of how to display the element (e.g. "cheese-yellow-orange") rather than gave a semantic hint.
How do you prepare to use selectors in your style? You need to supply a class name for some elements. Select a list item, and in the class textbox type in the class name, "no".
Repeat for each of the other similar list items (Gruyère through Double Gloucester). (While it would be tempting to select a group of list items and supply the class name once for all of them, that would achieve a different result. As the hover text reveals this operation would "Apply a class to selection's container". For this list in this document, you will probably find that the class is applied to the entire body of the document. Since our goal is for some list items to be in our chosen class, and others not, applying a class to the container would not be helpful for our goal in this instance.)
With the text cursor on one of these list items of class "no", reopen the CSS Properties dialog, but this time change the selection to "Apply styles to:" "all elements of class...". Notice that by default the selection beside "all elements of class..." is filled in with "no". In the future, when working with documents in which multiple classes have been defined you will need to select the pertinent class from this drop-down list. Select the background color dot again, and choose a lighter yellow color this time.
We must also select a compatible Foreground color with this rule, to guard against some other style rule possibly changing the text to, e.g., a nearly-invisible white color against this light-colored background. Select the Foreground color dot and choose black, then select the OK command button. An important guideline for web design and style creation is:
- Whenever a style specifies a Background color it must also specify a compatible text color (or Foreground color, as BlueGriffon names it).
In the BlueGriffon window (or in the web browser preview window, after Saving the file and reloading the tab or page within browser) you will observe three different background colors for the various list items. This illustrates another aspect of the hierarchical overrides in CSS; an embedded style rule which has more specificity (e.g. it applies to items only with class="no") takes precedence over an embedded style which is applied to all elements (e.g. all list items within a given file). This tutorial will not illustrate each of the techniques for determining precedence, and therefore the displayed or rendered style, when multiple style rules match a given item. But there is a well-defined priority for predicting the deterministic outcome, regardless of which web browser is interpreting the style rules (in the absence of implementation errors within the browser).
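To make the cascade concrete, the embedded rules BlueGriffon generates end up in a <style> element in the document head, looking roughly like this (the color values below are illustrative stand-ins, not the exact ones the dialogs produce):

```css
li {
  background-color: #f0a030;  /* "all elements of same type": every list item */
}
li.no {
  background-color: #ffe680;  /* more specific: only items with class="no" */
  color: black;               /* pair every background with a text color */
}
```

An inline style, such as <li style="background-color: #cc7722;">, outranks both of these rules, which is exactly the precedence the earlier exercise demonstrated.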
Now you might want to get rid of the undesirable darker highlighting of the list items which are not of class "no". Merely move your text cursor to be within one of those list items (Red Leicester through Stilton), and in the CSS Properties dialog (with the default selection of "Apply styles to:" "all elements of same type..."), select the Colors, Background dot yet again. This time select the upper left choice (white), then select the OK command button. Note that BlueGriffon shows your previous highlighting of list items of class "no" still persists.
These exercises have demonstrated some of the flexibility and power of using CSS to control the appearance of your web page.
Using BlueGriffon to construct a stand-alone web page (i.e. an HTML file that you would like to send as an email attachment) with inline style information or embedded style rules would be quite appropriate. The advantage is that all information (for both the content and the style) is inseparably contained in one file.
However, if you are using BlueGriffon to construct a web site, or even just a small portion of a web site, using only inline styling or embedded style rules would be counterproductive. The next section will reveal why, and show the better alternative. | <urn:uuid:cad4f551-0681-41b9-92da-c5d4249456c8> | 2.71875 | 1,285 | Tutorial | Software Dev. | 54.579462 |
Pond Scum Prized Again as Potential Biofuel
Algae is a broad term that refers to most of the organisms that live in water and capture energy from the sun. One kind, called cyanobacteria, is also known as blue-green algae for its color. Like other bacteria, cyanobacteria are very small, have few genes and normally make a small supply of lipids. Other kinds of algae, often called microalgae, have cells much more like ours. (That’s because they’re more closely related to us than to bacteria.) The cells of these microalgae are big and complex. In many cases, they also have many times more genes than cyanobacteria. And—most importantly for the search for new kinds of fuel—they produce a remarkable quantity of lipids. In some species of microalgae, lipids can take up over half the mass of a cell.
Thirty years ago, the U.S. Department of Energy launched the Aquatic Species Program to investigate the possibility of getting fuel from microalgae. It might be possible, scientists reasoned, to grow algae, extract lipids from them and transform those lipids into diesel or other kinds of fuel. Fanning out across North America, they gathered 3,000 promising, lipid-rich strains. They tested the algae in massive racetrack-shaped tanks. They engineered algae with genes to make them churn out extra lipids. And they explored different kinds of chemical reactions that could pull those lipids out of the algae.
Over its 17-year lifetime, the Aquatic Species Program made a lot of important discoveries about the basic biology of algae. But despite these achievements, the program’s scientists never got the cost of algae-derived fuel down low enough to make it a practical alternative to fossil fuels. In 1996, the Department of Energy closed the program down in a wave of budget cuts.
Thirteen years later, however, the algae are back. “The landscape has changed,” says Zimmerman, assistant professor of green engineering at F&ES.
Many experts are now warning that the world’s oil supply cannot expand fast enough to satisfy the growing demand for energy. As a result, they warn, we can expect more price spikes like the ones that have shocked the economy in recent years. At the same time, petroleum’s toll on the environment is becoming clearer, especially its huge role in warming the Earth’s climate.
Concerns like these have led the U.S. government and the energy industry to get serious about all kinds of alternative fuels. And that includes biofuels—the fuels derived from living things. Today ethanol from corn and diesel from other crops, such as soybeans, make up the majority of biofuels on the market. These biofuels emit carbon dioxide just like petroleum when they power a car, but they have, at least in theory, a big advantage over fossil fuels. In order for biofuels to be made in the first place, plants have to suck carbon dioxide out of the air. The gas they draw down could potentially balance the climate books.
Unfortunately, many experts now argue, biofuels from crops have hidden costs. “When you count all the pesticides and fertilizer and farming and the water that goes into it, it isn’t really a good environmental strategy,” says Zimmerman.
It’s also a strategy that can force us to choose between food and fuel. That’s because it takes a lot of land to grow corn and soybeans for biofuel, land that could otherwise be dedicated to feeding people. In 2006, University of Minnesota biologist David Tilman and his colleagues reported that dedicating all of the current U.S. corn crop to ethanol would satisfy just 12 percent of the country’s demand for gasoline. If all the soybean farms shipped their beans to refineries, they would satisfy only 6 percent of the country’s demand for diesel fuel.
Outside the United States, biofuels are having even more devastating impacts. The demand for palm oil for biofuel has spurred corporations to clear millions of acres of tropical forests for plantations. Conservation biologists have warned that the rapid destruction of these forests will threaten many species of plants and animals. Oil palm plantations may drive many populations of orangutans extinct within 10 years, for example.
These drawbacks to crop biofuels have led a number of researchers to take another look at algae. On paper, at least, algae don’t carry the risks of crop biofuels. They may be a much more efficient source of lipids, for example. “In soybeans, it’s just in the beans,” says Peccia. “In algae, it’s the whole thing.”
Unlike soybeans, Peccia points out, algae don’t need soil. “They can just live on wastewater.” Engineers would have a lot of options about where to put their algae tanks. Some species would thrive in the sunny deserts of the Southwest, while others would do well in the cloudy Northeast. It might even be possible to grow marine algae in the ocean, in much the same way that fish are farmed. “You can grow algae everywhere, so you can make them where people are using them,” says Zimmerman.
One of the reasons that crop biofuels aren’t as green as they may seem is that they require lots of fertilizers, which then get carried into rivers and oceans, where they foster the growth of oxygen-devouring bacteria, creating so-called dead zones where few animals can survive. Algae, on the other hand, can be fertilized with materials that can then be captured as they leave a tank. And algae can also be fertilized in ways that crops cannot. The carbon dioxide belched out of a coal-fired power plant, for example, can get piped into an algae tank, stimulating growth. | <urn:uuid:9274f72f-5f7d-4985-bafa-b478d8f372b0> | 3.78125 | 1,238 | Knowledge Article | Science & Tech. | 53.447004 |
Tuesday, June 21, 2011 - 06:30 in Earth & Climate
The first map of sea-ice thickness from ESA’s CryoSat mission was revealed today at the Paris Air and Space Show. This new information is set to change our understanding of the complex relationship between ice and climate.
- CryoSat-2 exceeding expectations (Thu, 1 Jul 2010, 11:09:30 EDT)
- Successful launch for ESA's CryoSat-2 ice satellite (Thu, 8 Apr 2010, 13:05:18 EDT)
- Scientists endure Arctic for last campaign prior to CryoSat-2 launch (Fri, 9 May 2008, 10:14:26 EDT)
- Arctic sea ice thinning at record rate (Tue, 28 Oct 2008, 13:28:22 EDT)
- Will changes in climate wipe out mammals in Arctic and sub-Arctic areas? (Mon, 14 Jan 2013, 16:34:25 EST)
The modern interpretation of heat is, of course, somewhat different from Lavoisier's calorific theory. Nevertheless, there is an important subset of problems involving heat flow for which Lavoisier's approach is essentially valid. These problems often crop up as examination questions.
For example: ``A clean dry copper calorimeter contains 100 grams of water at 30 degrees centigrade. A 10 gram block of copper heated to 60 degrees centigrade is added. What is the final temperature of the mixture?''
How do we approach this type of problem? Well, according to Lavoisier's theory, there is an analogy between heat flow and incompressible fluid flow under gravity. The same volume of liquid added to containers of different cross-sectional area fills them to different heights. If the volume is $V$, and the cross-sectional area is $A$, then the height is $h = V/A$. In a similar manner, the same quantity of heat added to different bodies causes them to rise to different temperatures. If $Q$ is the heat and $T$ is the (absolute) temperature then $T = Q/C$, where the constant $C$ is termed the heat capacity. [This is a somewhat oversimplified example. In general, the heat capacity is a function of temperature, so that $C = C(T)$.]
Now, if two containers filled to different heights with a free-flowing incompressible fluid are connected together at the bottom, via a small pipe, then fluid will flow under gravity from the container with the higher level to the other, until the two heights are the same. The final height is easily calculated by equating the total fluid volume in the initial and final states.
The analogy between heat flow and fluid flow works because in Lavoisier's theory heat is a conserved quantity, just like the volume of an incompressible fluid. In fact, Lavoisier postulated that heat was an element. Note that atoms were thought to be indestructible before nuclear reactions were discovered, so the total amount of each element in the Universe was assumed to be a constant. If Lavoisier had cared to formulate a law of thermodynamics from his calorific theory then he would have said that the total amount of heat in the Universe was a constant.
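Applying the analogy to the examination question above gives a one-line calculation; here is a minimal sketch, assuming textbook specific heat capacities and neglecting the heat capacity of the calorimeter itself:

```python
# Equate heat lost by the copper block with heat gained by the water:
# m_w*c_w*(T - 30) + m_c*c_c*(T - 60) = 0, then solve for T.
c_water, c_copper = 4186.0, 385.0   # J/(kg K), assumed textbook values
m_water, m_copper = 0.100, 0.010    # kg
T_water, T_copper = 30.0, 60.0      # degrees centigrade

T_final = ((m_water * c_water * T_water + m_copper * c_copper * T_copper)
           / (m_water * c_water + m_copper * c_copper))
print(f"final temperature: {T_final:.2f} degrees centigrade")  # about 30.27
```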
In 1798 Benjamin Thompson, an Englishman who spent his early years in pre-revolutionary America, was minister for war and police in the German state of Bavaria. One of his jobs was to oversee the boring of cannons in the state arsenal. Thompson was struck by the enormous, and seemingly inexhaustible, amount of heat generated in this process. He simply could not understand where all this heat was coming from. According to Lavoisier's calorific theory, the heat must flow into the cannon from its immediate surroundings, which should, therefore, become colder. The flow should also eventually cease when all of the available heat has been extracted. In fact, Thompson observed that the surroundings of the cannon got hotter, not colder, and that the heating process continued unabated as long as the boring machine was operating. Thompson postulated that some of the mechanical work done on the cannon by the boring machine was being converted into heat. At the time, this was quite a revolutionary concept, and most people were not ready to accept it. This is somewhat surprising, since by the end of the eighteenth century the conversion of heat into work, by steam engines, was quite commonplace. Nevertheless, the conversion of work into heat did not gain broad acceptance until 1849, when an English physicist called James Prescott Joule published the results of a long and painstaking series of experiments. Joule confirmed that work could indeed be converted into heat. Moreover, he found that the same amount of work always generates the same quantity of heat. This is true regardless of the nature of the work (e.g., mechanical, electrical, etc.). Joule was able to formulate what became known as the work equivalent of heat. Namely, that 1 newton meter of work is equivalent to 0.239 calories of heat. A calorie is the amount of heat required to raise the temperature of 1 gram of water by 1 degree centigrade. Nowadays, we measure both heat and work in the same units, so that one newton meter, or joule, of work is equivalent to one joule of heat.
In 1850, the German physicist Clausius correctly postulated that the essential conserved quantity is neither heat nor work, but some combination of the two which quickly became known as energy, from the Greek energia meaning ``in work.'' According to Clausius, the change in the internal energy of a macroscopic body can be written $\Delta E = Q + W$, where $Q$ is the heat absorbed by the body and $W$ is the work done on the body.
Pub. date: 2008 | Online Pub. Date: April 25, 2008 | DOI: 10.4135/9781412963893 | Print ISBN: 9781412958783 | Online ISBN: 9781412963893| Publisher:SAGE Publications, Inc.About this encyclopedia
A RESOURCE IS any item or substance that is in scarce supply and has some value. Resources are normally considered to be physical items, such as oil and natural gas. However, it is also possible to consider humans as resources, since they are finite in number and are perishable under current technological conditions. Resources, when used in the context of computer or virtual environments, meanwhile, are inherently intangible in nature, although the hardware that produces them is not. It is customary, when considering resources, to distinguish between those that are renewable and those that are nonrenewable. Resources such as oil are consumed in use and are, therefore, nonrenewable. However, in a number of other cases, it is possible to recreate or recycle some resources either in the original form or, at least, some components of the original. Glass and plastic bottles may, to some extent, be recycled into different forms and ...
NOTE: I recommend reading Noldorin's answer first, for useful background information, and Matt's answer afterward if you want additional detail
Noldorin is right that there isn't a single event that you can look at and identify a Higgs boson. In fact, unless the theories are drastically wrong, the Higgs particle is unstable and it has an exceedingly short lifetime - so short that it won't even make it out of the empty space inside the detector! Even at the speed of light, it can only travel a microscopic distance before it decays into other particles. (If I can find some numeric predictions I'll edit that information in.) So we won't be able to detect a Higgs boson directly.
What scientists will be looking for are particular patterns of known particles that are signatures of Higgs decay. For example, the standard model predicts that a Higgs boson could decay into two Z bosons, which in turn decay into a muon and antimuon each. So if physicists see that a particular collision produces two muons and two antimuons, among other particles, there's a chance that somewhere in the mess of particles produced in that collision, there was a Higgs boson. This is just one example, of course; there are many other sets of particles that the Higgs could decay into, and the large detectors at the LHC are designed to look for all of them.
Of course, Higgs decay is not the only thing that could produce two muon-antimuon pairs, and the same is true for other possible decay products. So just seeing the expected decay products is not a sure sign of a Higgs detection. The real evidence is going to come from the results of many collisions (billions or trillions), accumulated over time.
For each possible set of decay products, you can plot the fraction of collisions in which those decay products are produced (or rather, the scattering cross section, a related quantity) against the total energy of the particles coming into the collision. If the Higgs is real, you'll see a spike, called a resonance, in the graph at the energy corresponding to the mass of the Higgs particle. It'll look something like this plot, which was produced for the Z boson (which has a mass of only 91 GeV):
The image is from http://blogs.uslhc.us/the-z-boson-and-resonances, which is actually a pretty good read.
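For reference (my addition, not part of the linked post): the shape of such a resonance peak is conventionally described by a relativistic Breit-Wigner curve,

$$\sigma(E) \propto \frac{1}{\left(E^{2}-M^{2}\right)^{2}+M^{2}\Gamma^{2}},$$

where $M$ is the particle's mass and $\Gamma$ its decay width. A very short lifetime corresponds to a large $\Gamma$, and hence a broader, harder-to-spot peak.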
Anyway, to sum up: the main signature of the Higgs boson, like other unstable particles, will be this resonance peak that appears in a graph produced by aggregating data from many billions or trillions of collisions. Hopefully this makes it a bit clearer why there's going to be a lot of detailed analysis involved before we get any clear detection or non-detection of the Higgs particle. | <urn:uuid:35f378b9-55a8-4b31-b41b-2b46b7059cb1> | 2.703125 | 595 | Q&A Forum | Science & Tech. | 46.612289 |
Ultra low frequency
Frequency range: 0.3 to 3 kHz
Ultra low frequency (ULF) is the frequency range of electromagnetic waves between 300 hertz and 3 kilohertz. In magnetosphere science and seismology, alternative definitions are usually given, including ranges from 1 mHz to 100 Hz, 1 mHz to 1 Hz, 10 mHz to 10 Hz. Frequencies above 3 Hz in atmosphere science are usually assigned to the ELF range.
Many types of waves in the ULF frequency band can be observed in the magnetosphere and on the ground. These waves represent important physical processes in the near-Earth plasma environment. The speed of ULF waves is often associated with the Alfvén velocity, which depends on the ambient magnetic field and the plasma mass density.
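As a rough illustration (my numbers, not from the article), the Alfvén speed is $v_A = B/\sqrt{\mu_0 \rho}$; for assumed, order-of-magnitude magnetospheric values:

```python
# Alfven speed v_A = B / sqrt(mu0 * rho); field and density below are
# assumed illustrative values, not measurements.
import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
B = 100e-9                 # ambient magnetic field, T (100 nT, assumed)
rho = 1.67e-20             # plasma mass density, kg/m^3 (~10 protons/cm^3)

v_a = B / math.sqrt(mu0 * rho)
print(f"Alfven speed: {v_a / 1e3:.0f} km/s")   # roughly 700 km/s
```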
This band is used for communications in mines, as it can penetrate the earth.
Some monitoring stations have reported that earthquakes are sometimes preceded by a spike in ULF activity. A remarkable example of this occurred before the 1989 Loma Prieta earthquake in California. On December 9, 2010, geoscientists announced that the DEMETER satellite observed a dramatic increase in ULF radio waves over Haiti in the month before the magnitude 7.0 Mw 2010 earthquake. Researchers are attempting to learn more about this correlation to find out whether this method can be used as part of an early warning system for earthquakes.
Earth Mode Communications
ULF has been used by the military for secure communications through the ground. NATO AGARD publications from the 1960s detailed many such systems, although one suspects the contents of the published papers left a lot unsaid about what actually was developed secretly for defense purposes. Communications through the ground using conduction fields is known as "Earth Mode" communications and was first used in WWI. Radio amateurs and electronics hobbyists have used this mode for limited range communications using audio power amplifiers connected to widely spaced electrode pairs hammered into the soil. At the receiving end the signal is detected as a weak electric current between two further pairs of electrodes. Using weak signal reception methods with PC based DSP filtering with extremely narrow bandwidths it is possible to receive signals at a range of a few kilometers with a transmitting power of 10-100W and electrode spacing of around 10-50m.
- ^ V.A. Pilipenko, "ULF waves on the ground and in space", Journal of Atmospheric and Terrestrial Physics, Volume 52, Issue 12, December 1990, Pages 1193-1209, ISSN 0021-9169, DOI:10.1016/0021-9169(90)90087-4.
- ^ T. Bösinger and S. L. Shalimov, "On ULF Signatures of Lightning Discharges", Space Science Reviews, Volume 137, Issue 1, Pages 521-532, June 2008, DOI:10.1007/s11214-008-9333-4.
- ^ O. Molchanov, A. Schekotov, E. Fedorov, G. Belyaev, and E. Gordeev, "Preseismic ULF electromagnetic effect from observation at Kamchatka", Natural Hazards and Earth System Sciences, Volume 3, Pages 203-209, 2003
- ^ HF and Lower Frequency Radiation - Introduction
- ^ Fraser-Smith, Antony C.; Bernardi, A.; McGill, P. R.; Ladd, M. E.; Helliwell, R. A.; Villard, Jr., O. G. (August 1990). "Low-Frequency Magnetic Field Measurements Near the Epicenter of the Ms 7.1 Loma Prieta Earthquake" (PDF). Geophysical Research Letters (Washington, D.C.: American Geophysical Union) 17 (9): 1465–1468. ISSN 0094-8276. OCLC 1795290. http://ee.stanford.edu/~acfs/LomaPrietaPaper.pdf. Retrieved December 18, 2010.
- ^ KentuckyFC (December 9, 2010). "Spacecraft Saw ULF Radio Emissions over Haiti before January Quake". Physics arXiv Blog. Cambridge, Massachusetts: TechnologyReview.com. http://www.technologyreview.com/blog/arxiv/26114/. Retrieved December 18, 2010. | <urn:uuid:c9049947-0468-4742-81c8-a155c5fe6f43> | 3.359375 | 887 | Knowledge Article | Science & Tech. | 58.628875 |
Science subject and location tags
Articles, documents and multimedia from ABC Science
Friday, 29 February 2008
Biological organisms play a surprisingly large role in how rain and snow forms, a new study shows.
Monday, 25 February 2008
An Arctic 'doomsday vault' filled with samples of the world's most important crop seeds will be opened this week.
Friday, 15 February 2008
Bottles tossed into the sea for research nearly half a century ago may help people fight the spread of coastal weeds in Australia.
Wednesday, 12 December 2007
Banana and mint are the two unlikely candidates for the next generation of novel insecticides, scientists say.
Tuesday, 27 November 2007
A microscopic lens, inspired by a Venus flytrap's ability to trap insects in a split second, can pop instantly between convex and concave when triggered, scientists say.
Wednesday, 21 November 2007
Carnivorous pitcher plants that feed on insects in the tropics may be smarter than they look.
Thursday, 12 July 2007
Great Moments in Science The potato is forever associated with Ireland, yet the tale of the potato is a twisted one. As you will discover with Dr Karl following the curls.
Thursday, 1 February 2007
Lesson Plan This fascinating, cheap and very reliable experiment clearly demonstrates the damaging effects of salinity (salt) on seed germination.
Thursday, 26 October 2006
Great Moments in Science I am lucky enough to have a chippie (carpenter) for a brother-in-law, and over the years, he has taught me a lot.
Thursday, 28 September 2006
Great Moments in Science Dr Karl investigates how lucky the 'lucky' four-leaf clover might be.
Thursday, 11 August 2005
Scribbly Gum Across Australia, mosses and lichens are starting to reproduce and spread themselves around. Lichen are the termites of the plant world, but also used to be the source of the colour purple! Mosses, on the other hand, are a garden's friend, holding soil together and keeping in moisture.
Friday, 5 March 2004
Scribbly Gum Are mistletoes friend or foe? They provide food and shelter to a vast range of birds, mammals and insects, yet now infest trees in plague proportions in some parts of Australia.
Thursday, 3 July 2003
Scribbly Gum Feature Why do some wattles bloom in winter while so many other plants hunker down for the cold weather?
Friday, 2 August 2002
Western Australia has one of the most spectacular displays of wildflowers anywhere in the world. At least 12,000 plant species live across the state, with more discovered every year. Yet the soils here are among the most barren in the world, and it hardly ever rains. What's going on?
Thursday, 6 June 2002
Scribbly Gum Feature If you suddenly catch a whiff of bubblegum or peanut butter when you go for a bushwalk, don't be surprised if there's a native truffle lurking nearby. As winter approaches, these fungi form fat and fruiting bodies just below the soil surface, where they wait to be dug up and devoured by native animals. | <urn:uuid:3a330f20-ec0d-4b39-9375-ccfe8d64c4e0> | 2.78125 | 659 | Content Listing | Science & Tech. | 53.832547 |
Jumping Robots Leap Ahead
G. Meeks/Georgia Tech
When physiologists and roboticists study the process of jumping, they usually include as much of the complexity of real human or robotic legs as possible. But physicists at Georgia Tech in Atlanta took the opposite approach, using a simple model consisting of just a vertical bar with a spring at the bottom and a motor that could be programmed to rapidly climb up and down the bar by millimeters at a time.
In their experiments, described in Physical Review Letters, the team found that the highest jumps occurred when the timing of the motor's rapid up-and-down motion was slightly different from the natural oscillation frequency of the spring. Like a pendulum, any weighted spring has a natural oscillation frequency.
The researchers found this surprising result with two different jumping modes: one was equivalent to a person squatting down and then extending their legs; the other was a slower motion, like a person lifting their feet almost off the floor before bending their legs and pushing off the floor. This "stutter jump" is similar to a common move in basketball. The researchers' mathematical model points the way to better robotic design and a better understanding of the movement of living creatures.
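The "natural oscillation frequency" in question is just that of a mass on a spring, $\omega_0 = \sqrt{k/m}$. A minimal sketch with illustrative parameters (not the team's; note this simple linear model peaks at resonance, whereas the jumper's nonlinear dynamics favor driving slightly off it):

```python
# Natural frequency of a weighted spring, and the steady-state amplitude of
# a driven, lightly damped oscillator scanned across drive frequencies.
import math

m, k, gamma = 0.5, 200.0, 0.8        # kg, N/m, 1/s -- assumed values
w0 = math.sqrt(k / m)                # natural angular frequency, rad/s
print(f"natural frequency: {w0 / (2 * math.pi):.2f} Hz")

def amplitude(w, F0=1.0):
    """Amplitude of m*x'' + m*gamma*x' + k*x = F0*sin(w*t) at drive frequency w."""
    return (F0 / m) / math.sqrt((w0**2 - w**2)**2 + (gamma * w)**2)

for ratio in (0.8, 0.9, 1.0, 1.1, 1.2):
    print(f"drive at {ratio:.1f} * w0 -> amplitude {amplitude(ratio * w0):.4f} m")
```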
This image shows the robot jumping. | <urn:uuid:31caebeb-f230-47f9-a041-7f9ead48b2f7> | 4.03125 | 255 | Knowledge Article | Science & Tech. | 40.842407 |
Inspect Existing Code
Our first step was to inspect the "starting" application code to understand its intrinsic parallelism and how efficiently it used hardware resources. The following four tasks helped us better understand the structure of the application before porting it to a pair of Dual-Core Intel Xeon Processors LV 2.0 GHz.
Gather Information About the Code. We gathered information about the runtime performance and control characteristics of the code. The former indicates code segments that benefit most from the parallelization; the latter identifies locations to split the work. This analysis highlights the code's intrinsic parallelism and indicates how well the hardware resources (such as processor cache memory) handle the data structures.
Before beginning code parallelization, start with an optimized program. The largest gains come from parallelizing the code that is executed most often and consumes the greatest amount of execution time. If, through single-thread optimization, the percentage of time spent in different function calls changes, then the decision on which functions to parallelize may change as well. Once the code is optimized for the single-thread case, you can use performance-profiling tools to determine the best candidates for parallelization.
Our analysis of Amide and VolPack resulted in the data-flow diagram in Figure 2. This chart depicts the flow of data through major code blocks and three loops, and represents the software architecture of the application.
We collected Amide performance data as measured by the time required to render an image after users request to view the image from a new angle. Depending on the angle of rotation, Amide decides whether to search the image dataset from the x-, y-, or z-axis. We found the time required to display an image depended on the axis Amide selected to traverse. For example, a search along the z-axis tends to be roughly three times faster than along the x-axis, illustrated for 1000 rotations in Figure 3. We used image-rendering time as our performance metric to measure the effectiveness of our parallelization implementation.
This x- and z-axis render-time variance is explained by the memory stride of the 2D pixel search. For the z-axis, bytes are accessed sequentially with a stride of four bytes, which is relatively short. In other words, Amide reads regular, sequential data blocks that fit nicely into cache memory. This data regularity also lets the CPU more easily predict the data required for future instructions. Based on these predictions, the CPU preemptively moves data from memory into its cache to hide memory latency. This process, called "prefetching," lets the CPU preload its cache in anticipation of future accesses.
In contrast, the cache hit rate for searches along the x-axis was 26 percent; this result is associated with the relatively long memory stride (976 bytes). With memory access strides of 1 KB or greater, the data prefetcher may become significantly less effective. Hardware prefetching is typically configured in BIOS and may be disabled manually.
Pinpoint External Dependencies. Next, we looked for external dependencies manifested by a high degree of idle time. When idle time is minimized, the functions consuming the most time tend to be compute-intensive pieces of the code. If code functions are waiting for something (the hard drive, network, user input, or whatever), parallelization will not speed up those external interfaces. Having convinced ourselves that the code was free of external dependencies, ran without interruption, and had no interaction with peripheral devices, we determined it was the best candidate for parallelization.
Identify Parallelization Opportunities. We had to decide on a parallelization approach: dividing the data or dividing the control. In our case, the image data is stored in a one-dimensional array. We could split either by data (with each core responsible for accessing pixels in its fourth of the array) or by control (splitting one of the loops by its iterations). These methods are often equivalent because the same loop iteration accesses the same data each time this function is called. However, as we rotate the image, the number of loop iterations changes, as does the data that a particular loop iteration touches.
We decided that splitting the computation by data would lead to great load imbalances because each render does not touch all of the data, and only the innermost loop searches the data to find viewable pixels. From some angles, only one or two cores would be active. So we instead split the computation by the control. Each core was responsible for determining one-fourth of the pixels displayed by the monitor.
To split by control, we had to decide at what level to parallelize this code. In other words, if the code is nested, how deep within the nesting structure should the threads be created? And which loop (see Figure 2) should be split among the four cores? We evaluated three nesting levels corresponding to the three loops (lowest to highest):
Solid_Pixel_Search_Loop, New_Pixel_Loop, Image_Loop
The lowest threading level contains the function with the greatest workload (compAR11B.c, available electronically; see "Resource Center," page 5), which runs Solid_Pixel_Search_Loop. Within this loop, compAR11B.c accesses the main array and checks for pixels containing data. It stops when it finds a pixel with data. This type of loop is not appropriate for parallelization because, for any given execution, it executes for an undetermined number of iterations. This loop could consume one core while the other cores are idle, waiting for a synchronization event to occur.
The next level kicks off the Initialize Image function that runs the New_Pixel_Loop, which has a deterministic number of iterations (that is, while it is not a constant number, it is known at the beginning of the loop execution). This loop calls compAR11B.c during every iteration. This is an excellent candidate for parallelization because we can divide the iterations by the number of threads and distribute the workload evenly among the four cores. Another benefit of parallelizing this FOR LOOP is that, because the threads call compAR11B.c multiple times before coordinating with each other, synchronization overhead is minimized.
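In OpenMP terms, the chosen split looks roughly like the following sketch (Pixel, Volume, and render_pixel are illustrative stand-ins, not the actual Amide/VolPack source):

```c
#include <omp.h>

typedef unsigned int Pixel;           /* stand-in for the real pixel type */
typedef struct Volume Volume;         /* stand-in for the image dataset   */
extern Pixel render_pixel(const Volume *vol, int i);  /* compAR11B.c work */

/* Split New_Pixel_Loop's iterations evenly across four cores. With
   schedule(static) each thread gets one consecutive block of iterations,
   so each core fills its own quarter of the display image. */
void initialize_image(Pixel *display, int n_pixels, const Volume *vol)
{
    #pragma omp parallel for schedule(static) num_threads(4)
    for (int i = 0; i < n_pixels; i++)
        display[i] = render_pixel(vol, i);
}
```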
At the highest level, the Image_Loop could be parallelized so each thread renders a different image. Amide would then call this function multiple times, once for each image, displaying multiple images at the same time. This has two possible drawbacks:
- There will be idle cores whenever the number of images being processed is less than the number of cores.
- Each image requires substantially different amounts of time to render.
If one thread completes its image much earlier than the other, there is parallelism for only part of the time. Thus, load balancing is less consistent when the threads are split at this higher level.
Locate Data Dependencies. The fourth task was to examine the data structures to determine whether parallelism may create data dependencies, causing incorrect execution or negatively impacting performance. One source of data dependencies occurs when multiple threads write to the same data element. Without additional synchronization code, this can produce incorrect results. Synchronization code and data-locking mechanisms (such as semaphores) can lead to stalled threads.
In addition to ensuring that multiple threads are not writing to the same data element, it's important to minimize any false sharing of memory among the threads. False sharing occurs when two or more cores are updating different bytes of memory that happen to be located on the same cache line. Technically, the threads are not sharing the exact memory location, but because the processor deals with cache-line sizes of memory, the bytes end up getting shared anyway. Because multiple cores cannot cache the same line of memory at the same time, the shared cache line is continually sent back and forth between cores, creating cache misses and potentially huge memory latencies.
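A standard remedy (illustrative, not from the article) is to pad any per-thread data out to cache-line size so that no two cores share a line:

```c
#define CACHE_LINE 64  /* bytes; typical, but architecture-dependent */

/* Each thread updates only its own slot; the padding keeps slots on
   separate cache lines, so cores never invalidate each other's copy. */
struct padded_counter {
    long value;
    char pad[CACHE_LINE - sizeof(long)];
};

struct padded_counter per_thread_count[4];  /* one slot per core */
```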
Amide calls VolPack to render a 2D display image by processing an image dataset. This requires searching the image dataset, but never changing it, and writing the output as a 2D display image to a graphics driver for display on a monitor. Both structures (the medical image dataset and the 2D display image) are stored as arrays.
Each loop iteration corresponds to one location in the 2D display image, and consecutive iterations write to neighboring locations. Thus, by giving each core a set of consecutive iterations, rather than assigning the iterations round robin, the cores never write to the same location, and they only write to neighboring locations (false sharing) on their first or last iteration.
Scientists warn world heatwaves will intensify
20 August 2004, source edie newsroom
Heatwaves will become more severe, more intense and will last for much longer throughout Europe and North America, according to a recent study.
Gerald Meehl, one of the scientists who led the study, stated that the average heatwave could soon last up to three days longer, and by 2080 there will be at least two every year rather than just one. He also warned that countries needed to make preparations for the impending change in the weather.
"Places such as France, Germany and the Balkans could see increases of heat intensity that could have more serious impacts because these areas are not currently as well adapted to heatwaves," he said.
Mr Meehl said that global warming posed a serious threat, and action needed to be taken to prevent weather patterns from becoming more extreme: "It's the extreme weather and climate events that will have some of the most severe impacts on human society as the climate changes."
For the study, Mr Meehl and his colleague Claudia Tebaldi compared present and future decades to see how greenhouse gases and sulphate aerosols could affect climate change. They demonstrated how we were on track for atmospheric pressure changes to be heavily enhanced by accumulating amounts of carbon dioxide (see related story) in the atmosphere, assuming there would be little in the way of policy intervention to slow down the build-up.
According to the study, heatwaves can kill more people in a shorter time than almost any other climatic event. In France alone, over 15,000 people and thousands more animals died last summer as a result of the high temperatures.
Patches of extreme weather have occurred throughout Europe this summer, with uncontrollable fires raging in Portugal (see related story) and parts of France and Spain.
By Jane Kettle
1. Object Oriented programming ... break your programming down into "classes", related to logical sets of data, and the methods that manipulate that data.
2. Focus on building the LOGICAL classes, separate from your UI.
So, don't write a dozen input handlers doing all your math ... write a dozen input handlers calling methods, which do the math ... now in a real program there is nothing wrong with event handlers the size of yours (3 lines long) ... but since this is half the logic of your program, you are not solving your problem cleanly.
There are at least 5-10 reasonable and very different ways to solve the problem you are asking (and millions of smaller varieties). Programming is about thinking through problems and we all think different (like musicians or painters) ... so all we can do is either show you how we would do it and hope you can generalize, or show you general techniques and hope you can apply them specifically.
But to help you out a bit ... first a thought: try to store your variables in a "natural" manner completely unrelated to your UI. What is "natural" depends on your point of view. If you were implementing a simpler calculator, "natural" might be:
a variable called something like "accumulator", "currentValue", "register" or the like ... which holds the total so far ... i.e. "0" when reset, and the number that is displayed after the user presses "=", etc.
a variable called "operator", "pendingOperation", "state" or some such .... which holds the operation the user has requested after the press something like "+". and would be able to detect the when there isn't one (ie if the user types "3+3=" after the "=" the pending operation is null, so that you can tell if the user then types "53" then are doing the "set value to ..." operation, which doesn't care what the previous value was ... wheras if they type "+" then "53" they are doing the "add value ..." operation which does use the previous / current value.
now of course, I've only hinted at possibilities ... over about 17 years of programming, I've written calculators as examples, for fun, and for trying out new programming languages or ideas probably 7 or 8 times ... none are just like I described, but that's the point: it's just a program to think up however you like ... and then work through it until it makes sense AND works.
Edited by Xai, 29 June 2012 - 10:33 PM. | <urn:uuid:9dccdf73-5e65-467f-9d07-393269d663c2> | 3.03125 | 535 | Comment Section | Software Dev. | 71.412097 |
Example for using the extended math of the HMO series (ID #540): Analysis of the electrical energy.
The HMO series offers five sets of formulas, each of them can accept up to five equations. This allows to enter the most important mathematical analysis into these five sets of formulas so that they can be called quickly without the need to enter them each time anew. In this example the energy function shall be displayed. The voltage across the load is being measured with an active difference amplifier probe and applied to channel 2. The current is being measured with a current probe and applied to channel 1.
First, the coefficient of the current probe (100 mV/A) is to be entered; formula set 1 is called and the equation MA1 defined. After pushing the soft menu key EDIT, the appropriate functions can be selected using the SELECT button and the universal knob. Here, channel 1 is being multiplied with a constant (0.1), and the unit A (Ampere) added. This ensures the correct display of the scale factors as well as of the units for cursors and automatic measurements. The equation MA1 is given the name "CURRENT" using the soft menu key LABEL.
Fig. 1: Definition of the current equation
Subsequently, a new equation will be entered and adjusted so that the result of the equation "CURRENT" is multiplied by channel 2, yielding the power curve. Finally, another equation will be added to the set of formulas, defined as the integral of the equation "POWER".
Fig. 2: Definition of the energy equation
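The same three steps can be sketched offline to check the numbers (the sample data and probe handling below are assumptions, not taken from the instrument):

```python
# CURRENT: scale the probe output (100 mV/A -> divide by 0.1 V/A),
# POWER: multiply by the load voltage, ENERGY: integrate POWER over time.
import numpy as np

dt = 1e-6                                  # sample interval, s (assumed)
t = np.arange(0.0, 0.02, dt)               # 20 ms record
ch1 = 0.1 * np.sin(2 * np.pi * 50 * t)     # current-probe output, V (assumed)
ch2 = 230.0 * np.sin(2 * np.pi * 50 * t)   # load voltage, V (assumed)

current = ch1 / 0.1                        # amperes
power = ch2 * current                      # watts
energy = np.cumsum(power) * dt             # joules, running integral

print(f"energy over the record: {energy[-1]:.3f} J")   # about 2.3 J
```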
Now all definitions are complete; the results can be displayed and are available for further analysis. The analysis can be performed with either the cursor or the automatic measurement functions, and all measurement results will be correctly scaled and will show the correct units: ampere, watt, joule.
As you can see from this example, with the extended math functions of the HMO series and the ability to do math on math, you can do complex data analysis easily and fast. | <urn:uuid:ab4b4a30-4879-45f3-9367-a297b6f5362d> | 2.828125 | 430 | Tutorial | Science & Tech. | 49.412498 |
Q1. We get so much water from underground rocks. Are there natural streams flowing underground?
Not really! Groundwater moves through porous rock formations similar to the way water flows through a sponge with inter-connected pores. In nature, no space remains empty. Therefore, the pore space within the underground rock formations, no matter how small, remains filled either with air or water (sometimes oil and gas in deeper formations). Given a continuous supply, water enters a porous rock formation replacing the air and gradually saturates all the pore spaces. As the process continues, excess water tends to move through the saturated formation under gravitational force. Water can even seep through poorly cemented house walls and concrete basements, particularly during rainy season.
Therefore, when a storage space such as a dug well is constructed within a saturated rock formation, a part of the water from within the formation flows out as free water (specific yield) and gradually accumulates to fill in the well till it reaches to a level equal to the water level in the formation. Water also flows through joints, fractures and contact zones between two hard rock formations. Sometimes, one can even see water flowing out through a fracture or a contact zone in a dug well constructed in hard rock formation tapping such zones giving credence to the erroneous notion that groundwater moves underground as a subterranean stream.
Carbonate rocks are also known to develop due to the action of water, large sized cavities and inter-connected solution channels which contain and transmit large quantities of groundwater. Sometimes, in coal mines, groundwater accumulated in large quantity in spaces created due to removal of coal over the years can cause accidents thus creating the effect of a flowing underground river. Coal mines can also get flooded due to direct inflow of water from surface water bodies.
Q2. Which rock formations are good for transmitting groundwater?
From the hydrogeology point of view, rock formations are categorized conveniently as unconsolidated (loose), consolidated (hard) and semi-consolidated. Recent and older alluviums are unconsolidated sedimentary formations, which occur usually as alternate beds of sand and clay (or shale) with varying thickness and proportion. Sand formation is a natural carrier of water; coarser the grain size and lesser the compaction of the sand, better the water content and its flow. Clay and shale on the other hand being impervious are natural barriers to groundwater flow.
Water is also unable to pass through compact rocks like granite, basalt, quartzite etc., which are usually devoid of any primary porosity. However, in course of time the top portions of these hard rocks when exposed to extensive weathering can develop numerous fractures and get weathered into loose formation with granular consistency. Such weathered and fractured portions can transmit groundwater commensurate to the degree of secondary porosity developed due to weathering.
Although formations like sandstone are compact in nature, they are prone to quick weathering and develop extensive joints and fractures and turn out to be good aquifers. Similarly, limestone is prone to develop cavities and solution channels. Both sandstone and limestone yield moderate to good quantities of groundwater and may be referred to as semi-consolidated formations.
(Source: Dr M K Maitra)
Attachment: Groundwater_Frequently Asked Questions_MKMaitra_2011.pdf (512.25 KB)
This function uses Grid graphics to draw a diagram of a Grid layout.
grid.show.layout(l, newpage=TRUE, bg = "light grey", cell.border = "blue", cell.fill = "light blue", cell.label = TRUE, label.col = "blue", unit.col = "red", vp = NULL)
- l: A Grid layout object.
- newpage: A logical value indicating whether to move on to a new page before drawing the diagram.
- bg: The colour used for the background.
- cell.border: The colour used to draw the borders of the cells in the layout.
- cell.fill: The colour used to fill the cells in the layout.
- cell.label: A logical indicating whether the layout cells should be labelled.
- label.col: The colour used for layout cell labels.
- unit.col: The colour used for labelling the widths/heights of columns/rows.
- vp: A Grid viewport object (or NULL).
A viewport is created within vp to provide a margin for annotation, and the layout is drawn within that new viewport. The margin is filled with light grey, the new viewport is filled with white and framed with a black border, and the layout regions are filled with light blue and framed with a blue border. The diagram is annotated with the widths and heights (including units) of the columns and rows of the layout using red text. (All colours are defaults and may be customised via function arguments.)
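A minimal usage sketch (the layout dimensions here are arbitrary):

```r
library(grid)
# Diagram a 3 x 2 layout that mixes absolute ("lines") and relative
# ("null") units, so the annotated widths/heights show both kinds.
l <- grid.layout(3, 2,
                 widths  = unit(c(1, 3), "null"),
                 heights = unit(c(1, 1, 2), c("lines", "null", "null")))
grid.show.layout(l)
```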
Documentation reproduced from R 2.15.3. License: GPL-2. | <urn:uuid:a32cfd85-a402-4eeb-8fef-0b2e063f7737> | 3.640625 | 318 | Documentation | Software Dev. | 78.924503 |
The Lovell Telescope has joined with the 305-metre Arecibo Telescope
in Puerto Rico to take part in what is the most sensitive and
comprehensive search yet undertaken for possible radio signals
from extraterrestrial civilisations beyond our Solar System.
The 5 year research programme, project Phoenix, is led by the
privately-funded SETI Institute. The aim is to observe 1000
of the nearest Sun-like star systems. It is hoped that an advanced
civilisation might exist on a planet within one of these systems.
Observations are scheduled for 40 nights each year.
The Arecibo Telescope uses a 56-million channel receiver to
make initial signal detections. Information about those signals
which are not in the data bank of known terrestrial signals,
are passed on to two further sets of identical receivers at
Arecibo and Jodrell Bank. Due to the rotation of the Earth,
and the great distance separating Jodrell Bank and Arecibo,
a signal from outside the Solar System will have precisely calculable
differences when observed at the two observatories. This allows
an extraterrestrial signal, should one be found, to be distinguished
from those originating on, or near, the Earth.
For more on SETI research see the JBO SETI research pages
Hubble Expansion and Speed of Galaxies
I have a question that has been gnawing at me for some time, and I cannot find the answer on the Internet or in the library. Please help me.
Hubble compared the distances to the velocity with which the galaxies
were speeding away, and explained that the farther away the galaxy, the
more rapidly it moved. This relation, known as Hubble's Law, was proof
that the universe was expanding.
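(In symbols, Hubble's Law is $v = H_0 d$, where $v$ is a galaxy's recession velocity, $d$ its distance, and $H_0$ the Hubble constant.)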
Now consider Einstein's Law of Relativity, which states that light in a vacuum travels at a constant speed of 299,792,458 meters per second.
For instance, if we viewed a galaxy 10 billion light years away, we are in fact seeing the galaxy as it appeared 10 billion years ago. Are we not seeing the speed of this galaxy racing away 10 billion years ago?
This does not mean that it is traveling faster now, only in the distant
past. This could be a result of being nearer the time of the Big Bang. In
fact, this could cast doubt on the Big Rip Theory if it is traveling
slower now. Has this been taken into account, or have I missed something?
The story of the cosmology of the Universe is fascinating, but far too
long to address here. In addition, it becomes mathematically messy when
looked at in "detail". There are some misconceptions that arise because
that story is counter-intuitive. For example, "being nearer the time of
the Big Bang" seems to imply that we have a watch and can say when the
"Big Bang" occurred. That is not quite accurate because prior to the "Big
Bang" there was no such thing as time (so far as we know). The "Big Bang"
creates time and space. The finite speed of light does mean that there is
an event "horizon". that is there may be some galaxies whose light has not
yet reached us (we have only been observing distant galaxies for about 50
years). Presumably, there may also be galaxies that we have not observed
because they have "burned out" before we ever got a chance to observe
them. Distant galaxies do not appear to be slowing down. In fact the speed
of recession from our view (Earth) appears to be accelerating, and all are
moving away from us and one another, so that the average density of the
Universe appears to be continually decreasing. Recent measurements and
analysis show that the distant objects in the Universe are "younger" than
the Universe itself (whose age is measured via the cosmic microwave background radiation first observed by Penzias and Wilson), so there can be an event horizon, i.e., galaxies "out there" whose light has not yet reached "us".
I recommend a very readable book, "Big Bang: The Origin of the Universe" by
Simon Singh who has a talent for making difficult ideas approachable.
Update: June 2012 | <urn:uuid:090023e8-3ef2-49d5-8fa9-eebbcf9b851c> | 3.453125 | 624 | Q&A Forum | Science & Tech. | 52.849891 |
A supplement providing a snapshot of the latest developments in chemical biology
Anaesthetised brains under pressure
19 September 2006
What happens if you get a frog drunk and then take him scuba diving? The answer could help explain how anaesthetics work, claim scientists in the UK.
Agnieszka Wlodarczyk and colleagues from the Royal Institution of Great Britain, London, are currently looking at the effects of high pressure on the brain. To date, scientists have found that the effects of various narcotics and anaesthetics on the brain are often reversed or enhanced under high pressure. For example, if a frog swims in alcohol until it becomes unconscious, it will resume swimming at higher pressures.
Brain under pressure
Research in this area could shed light on how anaesthetics control the consciousness of the mind, explained Wlodarczyk. One of the most important unsolved scientific problems is consciousness and 'how the brain works', she said.
A recent development in brain research involves studying the route electric current takes through brain slices in real time. The slices are treated with dyes that fluoresce as electrical current passes through them, allowing the route to be mapped. The route taken is believed to determine the brain's state of consciousness. Wlodarczyk and colleagues are now mapping these routes in brain slices under high pressure. This is a new and promising direction in the field, said Wlodarczyk.
Stuart Dunbar, a neurobiology expert from Syngenta, UK, said that being able to study tissues in real time is an important stepping stone in brain research. This could lead to a better understanding of what happens to the brain under anaesthesia, he said.
A Wlodarczyk, PF McMillan and SA Greenfield, Chem. Soc. Rev., 2006, | <urn:uuid:41ecb6c2-7190-4242-9144-e109c672d859> | 3.078125 | 375 | Truncated | Science & Tech. | 48.240709 |
pictures: Marian Ørgaard and Niels Jacobsen
drawings: Line Jacobsen
Three main points in the article:
- The lower part of the kettle has a mucilage covering, interpreted as a hitherto
unnoticed food source for visiting insects.
- The cells of the inner surface of the tube and the kettle have downward pointing trichomes, which collapse after two days and sink into the cell lumen. A lattice-like structure remains, enabling insects to climb out of the kettle and tube.
- The flap covering the male flowers is interpreted as a prolongation and continuation of
the spathe tube margin.
See also the page Spathe, Inside the kettle, and Foliage, rhizome and roots
The article with much more detail and discussion and 73 pictures:
Ørgaard, M. & Jacobsen, N., 1998. SEM study of surfaces of the
spathe in Cryptocoryne and Lagenandra (Araceae: Aroideae:
Cryptocoryneae). Botanical Journal of the Linnean Society 126: 261-289. | <urn:uuid:369d0d01-283d-46d1-a57b-7f6e8274eb4a> | 2.96875 | 233 | Knowledge Article | Science & Tech. | 41.775034 |
- Large images:
- June 27, 1973 (Landsat 1 MSS; 57 meter resolution; 2.7 MB)
- July 2, 2002 (Landsat 7 ETM+; 28.5 meter resolution; 2.8 MB)
In the far North, short summers thaw only the top layer of the frozen ground. Beneath this shallow layer, the soil is permanently frozen—permafrost. The permafrost is like the cement bottom of a swimming pool. Water saturates the soil above the permafrost and collects on the surface of the tundra in tens of thousands of lakes. However, temperatures are climbing in the Arctic, and the bottom of the pool appears to be cracking. Using satellite imagery, scientists have documented that in the past two decades, a significant number of lakes have shrunk or disappeared altogether as the permafrost thaws and lake water drains deeper into the ground.
This image pair shows lakes dotting the tundra in northern Siberia to the east of the Taz River (bottom left). The tundra vegetation is colored a faded red, while lakes appear blue or blue-green. White arrows point to lakes that have disappeared or shrunk considerably between 1973 (top) and 2002 (bottom). After studying satellite imagery of about 10,000 large lakes in a 500,000-square-kilometer area in northern Siberia, the scientists documented a decline of 11 percent in the number of lakes, with at least 125 disappearing completely.
As the Arctic warms, loss of snow and ice make the region less efficient at reflecting incoming sunlight, which accelerates warming. As a result, the Arctic is warming faster than Earth’s middle or equatorial latitudes.
These images use near-infrared, red, and green wavelength data from the Landsat MSS sensor (1973) and the Landsat ETM+ sensor (2002).
- Smith, L.C., Sheng, Y., MacDonald ,G.M., and Hinzman, L.D. (2005). Disappearing Arctic Lakes. Science. 308 (5727), 1429.
NASA images created by Jesse Allen, Earth Observatory, using data obtained from the University of Maryland’s Global Land Cover Facility. | <urn:uuid:1fa1474e-7e3d-4c73-b84d-523663e66f6e> | 4.09375 | 460 | Knowledge Article | Science & Tech. | 60.474673 |
Instrument: SMMR (Scanning Multichannel Microwave Radiometer)
Earth Remote Sensing Instruments
Instrument Class: Passive Remote Sensing
Instrument Type: Spectrometers/Radiometers
Instrument Subtype: Imaging Spectrometers/Radiometers
Related Data Sets
View all records related to this instrument
The primary purpose of the Scanning Multichannel Microwave Radiometer (SMMR) experiment was (1) to provide all-weather measurements of ocean surface temperature and wind speed, and (2) to obtain integrated liquid water column content and atmospheric water vapor column content for path length and attenuation corrections to the ALT and SASS observations. Microwave brightness temperatures were observed with a 10-channel (five-frequency, dual-polarized) scanning radiometer operating at 0.8-, 1.4-, 1.7-, 2.8-, and 4.6-cm wavelengths (37, 21, 18, 10.7, and 6.6 GHz). The antenna was a parabolic reflector offset from nadir by 0.73 rad. Motion of the antenna reflector provided observations from within a conical volume along the ground track of the spacecraft. The SMMR had a swath width of about 600 km, and the spatial resolution ranged from about 22 km at 37 GHz to about 100 km at 6.6 GHz. The absolute accuracy of the sea surface temperatures obtained was 2 K, with a relative accuracy of 0.5 K. The accuracy of the wind speed measurements was 2 m/s for winds ranging from 7 to about 50 m/s. The same experiment was flown on Nimbus 7. A more detailed description can be found in E. Njoku, et al., "The Seasat Scanning Multichannel Microwave Radiometer (SMMR): instrument description and performance," IEEE J. Oceanic Eng., v. OE-5, pp. 100-115, 1980. The instrument operated continuously in orbit from July 6, 1978 for a period of 95 days, until the spacecraft failed on October 10, 1978. Data are available from SDSD.
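As a quick sanity check on those frequency/wavelength channel pairs, here is a minimal sketch of my own (not part of the fact sheet) using the relation wavelength = c / f:

C_CM_PER_S = 2.998e10  # speed of light in cm/s

for f_ghz in (6.6, 10.7, 18.0, 21.0, 37.0):
    wavelength_cm = C_CM_PER_S / (f_ghz * 1e9)
    print(f"{f_ghz:5.1f} GHz -> {wavelength_cm:.2f} cm")
# Prints ~4.54, 2.80, 1.67, 1.43, 0.81 cm, matching the quoted
# 4.6-, 2.8-, 1.7-, 1.4-, and 0.8-cm wavelengths to rounding.

| <urn:uuid:c68833c9-69b6-4440-b604-11477e6436f7> | 3 | 441 | Knowledge Article | Science & Tech. | 59.069222 |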
A previous post addressed some issues with linear regression, “linear” meaning we’re fitting a straight line to some data. Let’s devote another post to scrutinizing the issue — so this post is all about the math, readers who aren’t that interested can rest assured we’ll get back to climate science soon.
It was mentioned in a comment that least-squares regression is BLUE. In this acronym, “B” is for “best” meaning “least-variance” — but for practical purposes it means (among other things) that if a linear trend is present, we have a better chance to detect it with fewer data points using least-squares than with any other linear unbiased estimator. “U” is for “unbiased,” meaning that the line we expect to get is the true trend line. Both of these are highly desirable qualities.
Finally, “L” is for “linear,” which in this context has nothing to do with the fact that our model trend is a straight line. It means that the best-fit line we get is a linear function of the input data. Therefore if we’re fitting data x as a linear function of time t, and it happens that the data x are the sum of two other data sets a and b, then the best-fit line to x is the sum of the best-fit line to a and the best-fit line to b. In some (perhaps even many) contexts that is a remarkably useful property. | <urn:uuid:7fbb4f38-445e-4ced-beda-597f855bd542> | 2.96875 | 331 | Personal Blog | Science & Tech. | 59.145877 |
In order to separate blocks of code (like for loops, if blocks and function definitions) the compiler/interpreter needs something to tell it when a block ends. Curly braces and end statements are perfectly valid ways of providing this information for the compiler. For a human to be able to read the code, indentation is a much better way of providing the visual cues about block structure. As indentation also contains all the information the compiler needs, using both would be redundant. As indentation is better for humans, it makes sense to use it for the compiler too. It has the advantage that Python programs tend to be uniformly and consistently indented, removing one hurdle to understanding other people's code. Python does not mandate how you indent (two spaces or four, tabs or spaces - but not both), just that you do it consistently. Those who get used to the Python way of doing things tend to start seeing curly braces as unnecessary line noise that clutters code. On the other hand, 'the whitespace thing' is possibly the single biggest reason why some developers refuse to even try Python.
[Folklore says that the ABCers in the '80s experimentally determined that significant whitespace has measurable advantages. 'Anyone have documentation for this?]
- Among programmers using languages which ignore whitespace, the prevailing convention is to indent blocks of code to facilitate readability by humans. The exact details of such conventions vary (and are subject to considerable stylistic debate among some programmers). But the general features are ubiquitous, nearly universal: code is rendered mostly in an "outline" format ... with levels of lexical nesting denoted by increasing degrees of indentation. Python merely uses this (nearly universal) convention as part of its syntax, which allows it to dispense with other block-ending tokens. Some argue that this syntax helps avoid situations where the code doesn't match the apparent intent (when the indentation is subtly inconsistent with the lexical structure). As with the finer points of how code should be indented in whitespace-agnostic languages, that is a point of endless discussion.
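A minimal sketch of that "code doesn't match the apparent intent" pitfall (the function names below are hypothetical, mine):

def launch(): print("launch")
def notify(): print("notify")
ready = False

# In a curly-brace language the following *looks* like both calls are guarded,
# but only the first one is:
#
#     if (ready)
#         launch();
#         notify();   /* runs unconditionally, despite the indentation */
#
# In Python the indentation *is* the block structure, so the two readings
# must be written differently:
if ready:
    launch()
    notify()   # guarded: inside the if-block

if ready:
    launch()
notify()       # unguarded: outside the if-block

| <urn:uuid:e01b38c1-d580-459a-a2f6-65195d5bcd4d> | 3.28125 | 423 | Documentation | Software Dev. | 35.194883 |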
What are magnetic beads made of?
duong at chestnut.chem.upenn.edu
Mon Feb 8 08:18:18 EST 1993
I just read an article in Science (Nov. 13, 1992). It talked about
direct mechanical measurements of the elasticity of single DNA
molecules using magnetic beads. I am wondering: what are the
chemical functional groups on magnetic beads? What are the components
of magnetic beads?
Also, one end of that single DNA molecule was attached to a glass
surface. So how can one make a chemical functional group on that glass
surface (same question as for the magnetic beads)? I looked up a couple
of papers but got no specific responses.
If anyone knows the answer, please e-mail me. I'll summarize and
post on the net later if there is enough interest. Thank you.
More information about the Gen-link | <urn:uuid:3f70ab89-e51d-4768-9ab4-928f446d0bfd> | 3.015625 | 194 | Comment Section | Science & Tech. | 64.3375 |
Determining the atomic structure of solid materials is important for materials scientists, since the properties of every compound depend on it. Apart from the internal structure, the arrangement of the atoms in the surface layers is also crucial, because this determines properties like friction and chemical reactivity.
Though several conventional methods can determine the atomic structure of the compound, studying the surface organisation of atoms was quite difficult till now. However, physicists at the University of Erlangen-Nurnberg in Germany have found a technique that could easily give a three-dimensional image of the atomic arrangement on the surface of solids (Physical Review Letters , Vol 79, No 24).
The scanning-tunnelling microscope and x-ray diffraction are some of the tools that no material scientist can do without. The microscope is used to get a basic idea of what the structure of the topmost layer of the solid looks like, while x-ray diffraction gives the crystal structure. But the method used most frequently to scan surface layers is low energy electron diffraction, or leed.
leed is the diffraction or the bending of an electron beam as it passes near a material or through the spacing or "holes" in its submicroscopic structure. A 1924 theory by the French physicist Louis de Broglie says that an electron's wavelength is always inversely proportional to its momentum. So, fast-moving electrons have short wavelengths, and can pass through the "holes" between the atomic layers in crystals. A beam of such high-speed electrons should undergo diffraction when directed through thin sheets of material or when reflected from the surface of crystals.
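For a rough sense of scale (a back-of-envelope sketch of my own, not from the article), the de Broglie relation lambda = h / sqrt(2 m E) puts low-energy electrons right at atomic spacings:

import math

H = 6.626e-34    # Planck's constant, J*s
M_E = 9.109e-31  # electron mass, kg
EV = 1.602e-19   # joules per electron-volt

for energy_ev in (20, 100, 200):
    wavelength_m = H / math.sqrt(2 * M_E * energy_ev * EV)
    print(f"{energy_ev:4d} eV -> {wavelength_m * 1e10:.2f} angstrom")
# ~2.74, 1.23, 0.87 angstrom: comparable to interatomic spacings,
# which is why low-energy electrons diffract from crystal surfaces.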
As an analytical method, it is used to identify a substance chemically or to locate the position of atoms in it. This information can be inferred from the patterns that are formed when various portions of the diffracted electron beam cross each other and, through interference, make a regular arrangement of impact positions. Recorded photographically or otherwise, such a distribution is called a diffraction pattern, which gives information about the nature and structure of the gas, liquid, or solid that caused the diffraction.
The main problem with this method is in converting the spots of light obtained from the electrons into the arrangement of the atoms on the surface. Normally, an approximate model of the surface is taken and the data obtained from leed are matched with it to obtain information. If the intensities obtained from leed match those derived from the model, it is believed to be a valid approximation for the surface. Otherwise, the model is modified slightly and the whole process repeated till the scientists get the right idea of the surface structure.
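Schematically, that trial-and-error analysis is a refinement loop. The sketch below is a deliberately toy version (every function and number here is a hypothetical stand-in; a real analysis would use a dynamical-leed code and a proper R-factor), meant only to illustrate the control flow:

def compute_intensities(model):
    # Stub: a real implementation would solve the multiple-scattering problem.
    return [atom_height * 0.1 for atom_height in model]

def agreement(calc, measured):
    # Stub "R-factor": mean absolute mismatch (lower is better).
    return sum(abs(c - m) for c, m in zip(calc, measured)) / len(measured)

measured = [0.12, 0.33, 0.29]   # made-up spot intensities
model = [1.0, 3.0, 3.0]         # initial guess of atom positions

best = agreement(compute_intensities(model), measured)
while best > 0.01:              # refine until calculation matches experiment
    trial = [z + 0.1 for z in model]   # perturb the model slightly
    score = agreement(compute_intensities(trial), measured)
    if score < best:
        model, best = trial, score     # keep the improved model
    else:
        break                          # no further improvement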
But the breakthrough came in 1990 when the experts realised that the diffraction pattern generated by the electrons could be used as a hologram. If one atom is sticking out of the surface, it could be used to split the incoming beam of electrons into one reference beam and one which scatters off the surface. The split beam, when recombined, could give a holographic image. Klaus Heinz and his team in Germany used the disordered oxygen atoms on the surface of nickel. But the intensities of the diffuse diffraction pattern were very hard to measure. So they used a crystalline material, silicon carbide, that gives much brighter holograms. They have managed to modify the hologram-reconstruction algorithms to generate the first ever picture of the surface of silicon carbide.
The technique, though useful, has its own limitations: it can only be used with materials which have some atoms sticking out of their surfaces. Moreover, its resolution is also half of the conventional leed methods. But the advantage of this new method is the ease with which a rough picture of the surface can be obtained without too many wild guesses. Physicists think that this technique harbours a lot of potential and can be used as a quick, easy way to obtain an approximate view of the surface. This can then be used to generate model in conventional leed algorithms and a more accurate picture can be obtained. | <urn:uuid:a014413f-e5c9-4dd8-b748-a86ea2db145d> | 3.625 | 893 | Comment Section | Science & Tech. | 38.154134 |
Geothermal Power vs. Geothermal Heating
Geothermal power & geothermal heating and cooling are not the same thing, although they are often incorrectly interchanged.
Geothermal Power = Electricity & Power Plant
Uses extreme heat/steam from miles beneath the surface to rotate a turbine that generates electricity at a power plant.
Geothermal Heating & Cooling = Air Conditioning & Heat for Your Home
Uses dirt right under our feet to heat and cool homes and small buildings; no electricity is generated.
Geothermal power is often called geothermal energy because electricity is produced in the process. You'll hear about geothermal energy projects on the news which aim to produce so many megawatts of power - this is geothermal power, NOT geothermal heating & cooling. It costs millions of dollars and takes years to build.
Geothermal heating and cooling is also referred to as ground-source heating or geoexchange. This is what you would use to heat and cool your own home. Plastic pipes are inserted into the ground anywhere from 6 to 300 feet deep. These pipes circulate water to transfer heat to and from the home. No electricity is generated, there is no lava, and there is no steam. It costs thousands and takes a day or two to install.
Geothermal power, as it were, is the process of generating electricity through geothermal energy. How is this done? Well, it's sort of like heating and cooling, actually. The process begins with testing - a lot of testing. Whether it's a few geologists, a team of engineers, or some combination of the two, along with others, a certain group of people spend a good chunk of time locating and procuring a location from which this energy can be derived. Where, though? Well, underground of course. What they're looking for, though, is not dirt, like in the case of heating and cooling. Instead, they're searching for underground cavities that are home to geothermal water. Once these cavities are located, a geothermal production well is drilled so that both steam and water can rise to the surface. It is this steam, this water, that is utilized to generate both geothermal power and electricity.
Much as a wind farm uses wind to power the turbines that generate electricity, a geothermal power plant utilizes steam, water, or pure heat to do the same. There are various types of geo power plants, including dry steam, flash, and binary. A dry steam power plant is, as you may have guessed, dependent upon steam power, while the latter two are dependent upon water reservoirs.
This is all well and good - but what are the advantages of geothermal power? First off, it's flat-out easier on the surrounding land. Geo power plants are significantly smaller than the average plant that uses alternative energy sources. Not to mention, it's incredibly clean. No fuels are burned. No waste is dumped into surrounding rivers and tributaries. Most importantly, nonrenewable fossil fuels are conserved to a much greater degree. Really, there are countless benefits - these are just a few.
To check out more benefits of geothermal energy, visit UCSUA. | <urn:uuid:99adcd30-f4a5-48ce-ae9c-059182433219> | 3.328125 | 652 | Knowledge Article | Science & Tech. | 47.843643 |
The Mutant Butterflies Of Fukushima
While not quite Simpson-esque in their deformities, the mutations witnessed in butterflies near Fukushima should not be dismissed.
Researchers have discovered the Fukushima nuclear disaster has caused physiological and genetic damage to the pale grass blue butterfly (Zizeeria maha).
Specimens collected from the Fukushima area in May last year showed relatively mild abnormalities; however, the offspring from the first-voltine females showed more severe abnormalities, which were present in 33.5% of the subsequent generation.
The abnormalities included colour-pattern modifications, deformed compound eyes and antenna malformation, or a forked antenna.
Adult butterflies collected in September 2011 also showed more severe abnormalities than those collected in May: 28.1% of those specimens displayed altered traits, more than double the rate observed in the field-collected first-voltine adults in May.
“The Z. maha population in the Fukushima area is deteriorating physiologically and genetically,” state the researchers in a paper published in Nature's Scientific Reports.
The situation at Fukushima is still unresolved 17 months after the event, and even when the reactors are finally dealt with, the repercussions will continue for years. The disaster provided many lessons, but how much we learned remains to be seen.
I’m lucky to live in a country that has fended off the nuclear energy lobby for many years – there are no nuclear power stations in this country and won’t be any for the foreseeable future. Folks in the USA aren’t so fortunate, with one in three people living within 50 miles of a nuclear power station.
In other nuclear news, a nuclear power plant in Belgium has been shut down after the nation’s atomic energy regulator discovered potential issues, including possible cracks, in the tank containing the reactor’s core. While not posing any threat according to the regulator, the power station will remain shut at least until the end of August.
blog comments powered by Disqus | <urn:uuid:adda87f4-cbcc-4d24-855c-a78f880f936f> | 2.890625 | 426 | Personal Blog | Science & Tech. | 29.356236 |
Since Lua isn't used as much as other languages, it doesn't have a million simple tutorials for it. However, it is still a fun language to learn and useful for beginners. You can find the official Lua documentation at http://www.lua.org/docs.html
. I could write you a five-hundred-page book on Lua, but I honestly don't have the time for it. If you need explanations of something, feel free to ask, but be precise.
If you are completely new to it, here's a checklist of things you MUST know how to do in Lua in order to program it well (a tiny sketch follows the list):
- different types of variables and values (including nil, boolean, number, string, function, and table)
- functions (obviously)
- for loops
- while loops
- logic operations and if statements
- string manipulation functions
- table manipulation functions
- and of course the print() function
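As a quick taste, here's a tiny sketch of my own (all names and values made up) that touches most of the items above:

-- A tiny, made-up sketch touching most of the checklist items.
local scores = { alice = 12, bob = 7 }             -- table (as a dictionary)
local function double(n) return n * 2 end          -- function

for name, score in pairs(scores) do                -- for loop over a table
  if score >= 10 then                              -- if statement / logic
    print(string.upper(name) .. " -> " .. double(score))  -- string manipulation
  end
end

local list = {}                                    -- table (as an array)
table.insert(list, "x")                            -- table manipulation
print(#list, list[1], list[2])                     -- 1   x   nil

local i = 0
while i < 3 do i = i + 1 end                       -- while loop
print(i, type(i))                                  -- 3   number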
Again, if you want something specific, just ask. | <urn:uuid:f9c68df3-cd45-452c-a391-ab4e72ff18a8> | 3.015625 | 204 | Q&A Forum | Software Dev. | 61.468521 |
99 questions/11 to 20
Problem 11
(*) Modified run-length encoding. Modify the result of problem 10 in such a way that if an element has no duplicates it is simply copied into the result list. Only elements with duplicates are transferred as (N E) lists.
Example:

* (encode-modified '(a a a a b c c a a d e e e e))
((4 A) B (2 C) (2 A) D (4 E))

Example in Haskell:

P11> encodeModified "aaaabccaadeeee"
[Multiple 4 'a',Single 'b',Multiple 2 'c',Multiple 2 'a',Single 'd',Multiple 4 'e']
data ListItem a = Single a | Multiple Int a
    deriving (Show)

encodeModified :: Eq a => [a] -> [ListItem a]
encodeModified = map encodeHelper . encode
    where encodeHelper (1,x) = Single x
          encodeHelper (n,x) = Multiple n x
The ListItem definition contains 'deriving (Show)' so that we can get interactive output.
Problem 12
(**) Decode a run-length encoded list. Given a run-length code list generated as specified in problem 11. Construct its uncompressed version.
Example in Haskell:

P12> decodeModified [Multiple 4 'a',Single 'b',Multiple 2 'c',Multiple 2 'a',Single 'd',Multiple 4 'e']
"aaaabccaadeeee"
decodeModified :: [ListItem a] -> [a]
decodeModified = concatMap decodeHelper
    where decodeHelper (Single x)     = [x]
          decodeHelper (Multiple n x) = replicate n x
We only need to map single instances of an element to a list containing only one element, and multiple ones to a list containing the specified number of elements, and then concatenate these lists.
Problem 13
(**) Run-length encoding of a list (direct solution). Implement the so-called run-length encoding data compression method directly. I.e. don't explicitly create the sublists containing the duplicates, as in problem 9, but only count them. As in problem P11, simplify the result list by replacing the singleton lists (1 X) by X.
Example:

* (encode-direct '(a a a a b c c a a d e e e e))
((4 A) B (2 C) (2 A) D (4 E))

Example in Haskell:

P13> encodeDirect "aaaabccaadeeee"
[Multiple 4 'a',Single 'b',Multiple 2 'c',Multiple 2 'a',Single 'd',Multiple 4 'e']
encode' :: Eq a => [a] -> [(Int,a)]
encode' = foldr helper []
    where helper x [] = [(1,x)]
          helper x (y:ys)
            | x == snd y  = (1 + fst y, x):ys
            | otherwise   = (1,x):y:ys

encodeDirect :: Eq a => [a] -> [ListItem a]
encodeDirect = map encodeHelper . encode'
    where encodeHelper (1,x) = Single x
          encodeHelper (n,x) = Multiple n x
Problem 14
(*) Duplicate the elements of a list.
Example:

* (dupli '(a b c c d))
(A A B B C C C C D D)

Example in Haskell:

> dupli [1, 2, 3]
[1,1,2,2,3,3]
dupli [] = []
dupli (x:xs) = x:x:dupli xs
or, using list comprehension syntax:
dupli list = concat [[x,x] | x <- list]
or, using the list monad:
dupli xs = xs >>= (\x -> [x,x])
or, using concatMap:
dupli = concatMap (\x -> [x,x])
also using concatMap:
dupli = concatMap (replicate 2)
or, using foldr:
dupli = foldr (\x xs -> x : x : xs) []
Problem 15
(**) Replicate the elements of a list a given number of times.
Example:

* (repli '(a b c) 3)
(A A A B B B C C C)

Example in Haskell:

> repli "abc" 3
"aaabbbccc"
repli :: [a] -> Int -> [a]
repli xs n = concatMap (replicate n) xs
or, in Pointfree style:
repli = flip $ concatMap . replicate
Problem 16
(**) Drop every N'th element from a list.
Example:

* (drop '(a b c d e f g h i k) 3)
(A B D E G H K)

Example in Haskell:

*Main> dropEvery "abcdefghik" 3
"abdeghk"
An iterative solution:
dropEvery :: [a] -> Int -> [a]
dropEvery [] _ = []
dropEvery (x:xs) n = dropEvery' (x:xs) n 1
    where dropEvery' (x:xs) n i = (if n `divides` i then [] else [x]) ++ dropEvery' xs n (i+1)
          dropEvery' [] _ _ = []
          divides x y = y `mod` x == 0
or an alternative iterative solution:
dropEvery :: [a] -> Int -> [a]
dropEvery list count = helper list count count
    where helper [] _ _ = []
          helper (x:xs) count 1 = helper xs count count
          helper (x:xs) count n = x : helper xs count (n - 1)
or using zip:
dropEvery n = map snd . filter ((n/=) . fst) . zip (cycle [1..n])
Problem 17
(*) Split a list into two parts; the length of the first part is given.
Do not use any predefined predicates.
Example:

* (split '(a b c d e f g h i k) 3)
( (A B C) (D E F G H I K))

Example in Haskell:

*Main> split "abcdefghik" 3
("abc", "defghik")
Solution using take and drop:
split xs n = (take n xs, drop n xs)
Alternatively, we have the following recursive solution:
split :: [a] -> Int -> ([a], [a])
split [] _ = ([], [])
split l@(x : xs) n
    | n > 0     = (x : fst splitSub, snd splitSub)
    | otherwise = ([], l)
    where splitSub = split xs (n - 1)
The same solution as above written more cleanly:
split :: [a] -> Int -> ([a], [a])
split xs 0 = ([], xs)
split (x:xs) n = let (f,l) = split xs (n-1) in (x : f, l)
Problem 18
(**) Extract a slice from a list.
Given two indices, i and k, the slice is the list containing the elements between the i'th and k'th element of the original list (both limits included). Start counting the elements with 1.
Example:

* (slice '(a b c d e f g h i k) 3 7)
(C D E F G)

Example in Haskell:

*Main> slice ['a','b','c','d','e','f','g','h','i','k'] 3 7
"cdefg"
slice xs (i+1) k = take (k-i) $ drop i xs
Or, an iterative solution:
slice :: [a] -> Int -> Int -> [a]
slice lst 1 m = slice' lst m []
    where slice' :: [a] -> Int -> [a] -> [a]
          slice' _ 0 acc = reverse acc
          slice' (x:xs) n acc = slice' xs (n - 1) (x:acc)
slice (x:xs) n m = slice xs (n - 1) (m - 1)
Problem 19
(**) Rotate a list N places to the left.
Hint: Use the predefined functions length and (++).
Examples:

* (rotate '(a b c d e f g h) 3)
(D E F G H A B C)

* (rotate '(a b c d e f g h) -2)
(G H A B C D E F)

Examples in Haskell:

*Main> rotate ['a','b','c','d','e','f','g','h'] 3
"defghabc"

*Main> rotate ['a','b','c','d','e','f','g','h'] (-2)
"ghabcdef"
rotate [] _ = []
rotate l 0 = l
rotate (x:xs) (n+1) = rotate (xs ++ [x]) n
rotate l n = rotate l (length l + n)
There are two separate cases:
- If n > 0, move the first element to the end of the list n times.
- If n < 0, convert the problem to the equivalent problem for n > 0 by adding the list's length to n.
or using cycle:
rotate xs n = take len . drop (n `mod` len) . cycle $ xs
    where len = length xs
rotate xs n = if n >= 0
              then drop n xs ++ take n xs
              else let l = length xs + n
                   in drop l xs ++ take l xs
rotate xs n = drop nn xs ++ take nn xs where nn = n `mod` length xs
Problem 20
(*) Remove the K'th element from a list.
Example in Prolog:
?- remove_at(X,[a,b,c,d],2,R).
X = b
R = [a,c,d]
Example in Lisp:
* (remove-at '(a b c d) 2)
(A C D)
(Note that this only returns the residue list, while the Prolog version also returns the deleted element.)
Example in Haskell:
*Main> removeAt 1 "abcd"
('b',"acd")
removeAt :: Int -> [a] -> (a, [a])
removeAt k xs = case back of
        []     -> error "removeAt: index too large"
        x:rest -> (x, front ++ rest)
    where (front, back) = splitAt k xs
If the original list has fewer than k+1 elements, the second list will be empty, and there will be no element to extract. Note that the Prolog and Lisp versions treat 1 as the first element in the list, and the Lisp version appends NIL elements to the end of the list if k is greater than the list length.
removeAt n xs = (xs!!n,take n xs ++ drop (n+1) xs) | <urn:uuid:e8eb9163-e2f8-4feb-8698-c3818c26b715> | 3.046875 | 2,446 | Tutorial | Software Dev. | 76.4478 |
Blast-off for researching bugs in space
Dr Tony Ricco is working with Nasa putting tiny satellites into space to see how bugs develop in that atmosphere – seen as critical in eventual space travel, writes CLAIRE O’CONNELL
WHAT DO medical devices have in common with satellites that orbit the Earth? On the face of it, perhaps not much. But Dr Tony Ricco’s work spans both.
As chief technologist in the area of small payloads and instrumentation at the Nasa Ames Research Center (on leave from Stanford University), he sends living microbes into space in shoebox-sized satellites to see how the bugs grow and react to pharmaceutical drugs. And as an adjunct professor at Dublin City University he works with the Biomedical Diagnostics Institute on technology to better measure the stickiness of blood platelets and detect pathogens and disease in patients here on terra firma.
Those twin tracks of medical devices and space both require insight into the science at the interface between biology and chemistry and engineering, and between devices and physics integration, he explains when we meet at DCU. “That is very much the commonality between my Nasa work and my work with the BDI.”
Ricco started out as a chemist and moved into the field of chemical microsensors before joining California-based company Aclara Biosciences, where he focused on microfluidic technologies for use in applications such as DNA sequencing and detecting pathogens. The company was gearing up for success. “We did an IPO in 2001, and on the day we did that it was the largest biotech IPO that had ever been done on the Nasdaq,” recalls Ricco.
But the environment changed and the company had to alter its plans.
“We had staked the company on being able to make very large numbers of consumable plastic devices that did some fairly sophisticated things,” says Ricco. “But suddenly the pharma industry started shutting down a lot of its interest in developing next-generation technologies, which this really was.”
So Ricco followed a long-held ambition and set up a consulting business. “I had always been interested in having a consulting business to solve problems at interfaces between chemistry and biology and engineering,” he recalls. That led him to become involved with Nasa and he also spent time in Ireland through a Science Foundation Ireland ETS Walton Visitor Award in 2004, helping to develop what is now the BDI.
What kind of work does he do? Let’s start with space: we know that spending time away from Earth can physically affect the human body, but watching what happens to other living organisms or organic molecules in space can also be instructive. Sometimes they can help answer questions about whether life could exist elsewhere in the cosmos. And if we want to plan for long space missions with humans on board, we’ll need to know about biology in space – not just for human health but also for practical considerations like growing food.
So Ricco has been involved in projects to send living organisms into orbit to measure how the environment affects them. In 2006, the GeneSAT experiment launched an 11-pound (5kg) small satellite containing E.coli bacteria into orbit at about 450km above the Earth. The bugs were housed in an incubator that shielded them somewhat from the radiation in space but allowed the scientists to look at the effects of microgravity. The E.coli had been engineered to switch on a glowing protein that the researchers could track as the bugs grew in microfluidic wells, and the growth data were sent to Earth where the researchers could analyse them. Despite its relatively tiny size, the satellite packed plenty in. | <urn:uuid:b2d9d637-9c5d-4328-9fff-d33fb4d24b28> | 2.765625 | 766 | Truncated | Science & Tech. | 39.551653 |
Wednesday Science Night for February 1st presents:
7:00 PM Nature – “Wolverine: Chasing The Phantom”
Its name stirs images of the savage, the untameable. Legend paints it as a solitary, bloodthirsty killer that roams the icy heart of the frozen north, taking down prey as large as moose, crushing bones to powder with its powerful jaws. But there is another image of the wolverine that is just beginning to emerge, one that is far more complex than its reputation suggests. This film takes viewers into the secretive world of the largest and least known member of the weasel family to reveal who this dynamic little devil truly is. Hard-wired to endure an environment of scarcity, the wolverine is one of the most efficient and resourceful carnivores on Earth.
8:00 PM NOVA – “Ice Age Death Trap”
In a race against developers in the Rockies, archaeologists uncover a unique site packed with astonishingly preserved bones of mammoths, mastodons and other giant extinct beasts, opening a vivid window on the vanished world of the Ice Age.
9:00 PM Inside Nature’s Giants – “Great White Shark”
The experts travel to South Africa to dissect a 15-foot-long great white shark. Comparative anatomist Joy Reidenberg uncovers the amazing array of senses the shark possesses, including the ability to detect the electro-magnetic field given off by other creatures. Veterinary scientist Mark Evans investigates the origins of the shark’s infamous killing bite, and evolutionary biologist Richard Dawkins explains how sharks’ teeth and jaws evolved from their outer skin and gill arches. Finally, the experts ask whether the shark deserves its reputation as a man killer. | <urn:uuid:31848584-1e7e-4363-acbd-c5f3e6db9b44> | 2.6875 | 365 | Content Listing | Science & Tech. | 32.363416 |
The other two shuttles -- Challenger and Columbia -- did not make it back to Earth after accidents that killed their entire crews.
9. SpaceX gets to the space station, and back
No NASA shuttles flew in 2012, but a private company called SpaceX successfully sent almost 900 pounds of cargo to the international space station in its first official mission in October. The Dragon capsule came back with nearly 1,700 pounds of freight. This was only months after the SpaceX demonstration flight in May.
NASA and SpaceX have a contract for a dozen flights to the space station, and the October trip was just the first.
SpaceX isn't the only player in this commercial spaceflight arena. Virgin Galactic, Sir Richard Branson's private spaceflight company, recently completed a high-altitude test. Orbital Sciences is also under contract with NASA, and will launch a demonstration flight of its own.
10. Baby's DNA constructed before birth
For the first time, researchers at the University of Washington were able to construct a near-total genome sequence of a fetus, using a blood sample from the mother and saliva from the father.
The study suggested this method could be used to detect thousands of genetic diseases in children while they are still in the fetal stage. In the long run, it could help scientists derive new insights about genetic diseases.
Right now, this sequencing costs in the neighborhood of $50,000, but given how rapidly the price of genetic testing is falling, the process may become less expensive over time. Of course, it also raises ethical issues about selecting certain desirable traits in children. For right now, however, the technology is still in its early stages.
What were your favorite science stories this year? Share them in the comments. | <urn:uuid:47d65108-6ac8-46f5-96a9-4985e129bcf9> | 3.21875 | 353 | Listicle | Science & Tech. | 51.398922 |
Imagine a world where microscopic medical implants patrol our arteries, diagnosing ailments and fighting disease; where military battle-suits deflect explosions; where computer chips are no bigger than specks of dust; and where clouds of miniature space probes transmit data from the atmospheres of Mars or Titan.
Nanotechnology is science and engineering at the scale of atoms and molecules. It is the manipulation and use of materials and devices so tiny that nothing can be built any smaller.
How small is small?
Nanomaterials are typically between 0.1 and 100 nanometres (nm) in size - with 1 nm being equivalent to one billionth of a metre (10^-9 m).
This is the scale at which the basic functions of the biological world operate - and materials of this size display unusual physical and chemical properties. These profoundly different properties are due to an increase in surface area compared to volume as particles get smaller - and also the grip of weird quantum effects at the atomic scale.
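As a back-of-envelope illustration (the numbers and code here are mine, not New Scientist's), the surface-area-to-volume ratio of a sphere is 3/r, so it grows a thousand-fold as particles shrink from the micron to the nanometre scale:

import math

for r_nm in (1000.0, 100.0, 10.0, 1.0):
    area = 4 * math.pi * r_nm ** 2          # surface area, nm^2
    volume = (4 / 3) * math.pi * r_nm ** 3  # volume, nm^3
    print(f"r = {r_nm:6.1f} nm  ->  A/V = {area / volume:.4f} per nm")
# 0.0030, 0.0300, 0.3000, 3.0000: the ratio is exactly 3/r.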
If 1 nanometre was roughly the width of a pinhead, then 1 metre on this scale would stretch the entire distance from Washington, DC to Atlanta - around 1000 kilometres. But a pinhead is actually one million nanometres wide. Most atoms are 0.1 to 0.2 nm wide, strands of DNA around 2 nm wide, red blood cells are around 7000 nm in diameter, while human hairs are typically 80,000 nm across.
Unwittingly, people have made use of some unusual properties of materials at the nanoscale for centuries. Tiny particles of gold for example, can appear red or green - a property that has been used to colour stained glass windows for over 1000 years.
Nanotechnology is found elsewhere today in products ranging from nanometre-thick films on "self-cleaning" windows to pigments in sunscreens and lipsticks.
Nano is born
The idea of nanotechnology was born in 1959 when physicist Richard Feynman gave a lecture exploring the idea of building things at the atomic and molecular scale. He imagined the entire Encyclopaedia Britannica written on the head of a pin.
However, experimental nanotechnology did not come into its own until 1981, when IBM scientists in Zurich, Switzerland, built the first scanning tunnelling microscope (STM). This allows us to see single atoms by scanning a tiny probe over the surface of a silicon crystal. In 1990, IBM scientists discovered how to use an STM to move single xenon atoms around on a nickel surface - in an iconic experiment, with an inspired eye for marketing, they moved 35 atoms to spell out "IBM".
Further techniques have since been developed to capture images at the atomic scale, these include the atomic force microscope (AFM), magnetic resonance imaging (MRI) and the even a kind of modified light microscope.
Other significant advances were made in 1985, when chemists discovered how to create a soccer-ball-shaped molecule of 60 carbon atoms, which they called buckminsterfullerene (also known as C60 or buckyballs). And in 1991, tiny, super-strong rolls of carbon atoms known as carbon nanotubes were created. These are six times lighter, yet 100 times stronger than steel.
Both materials have important applications as nanoscale building blocks. Nanotubes have been made into fibres, long threads and fabrics, and used to create tough plastics, computer chips, toxic gas detectors, and numerous other novel materials. The far future might even see the unique properties of nanotubes harnessed to build a space elevator.
More recently, scientists working on the nanoscale have created a multitude of other nanoscale components and devices, including:
Tiny transistors, superconducting quantum dots, nanodiodes, nanosensors, molecular pistons, supercapacitors, "biomolecular" motors, chemical motors, a nano train set, nanoscale elevators, a DNA nanowalking robot, nanothermometers, nano containers, the beginnings of a miniature chemistry set, nano-Velcro, nanotweezers, nano weighing scales, a nano abacus, a nano guitar, a nanoscale fountain pen, and even a nanosized soldering iron.
Engineering at the nanoscale is no simple feat, and scientists are having to come up with completely different solutions to build from the "bottom-up" rather than using traditional "top-down" manufacturing techniques.
Some nanomaterials, such as nanowires and other simple devices have been shown to assemble themselves given the right conditions, and other experiments at larger scales are striving to demonstrate the principles of self-assembly. Microelectronic devices might be persuaded to grow from the ground-up, rather like trees.
In the short term, the greatest advances through nanotechnology will come in the form of novel medical devices and processes, new catalysts for industry and smaller components for computers.
Thu Nov 01 11:12:30 GMT 2007 by Rydri
Most-seemingly-random-yet-serious-sentence Award goes toooo! *drum roll*
"One 2004 study hinted that buckyballs can accumulate and cause brain damage in fish."
What EXACTLY does a brain damaged fish look like/do?
Sat Feb 14 16:43:35 GMT 2009 by David
A brain-damaged eel, for example, might not be able to find its way across the several thousand kilometres from a European lake to the Sargasso Sea to spawn as it otherwise would
Please I Need Help On My Coursework
Sat Dec 29 22:33:27 GMT 2007 by Lucky Omeke
I am a student of City London Academy. I really find this masterpiece very useful. Although I plead if I could get some help on "nanostructures used in medicine". I hope you give this comment a good thought and consideration.
Travel 'sans Frontiere' With Nanotech!
Tue Jan 01 18:20:01 GMT 2008 by Ariane Von Wolfland
This amazing article reveals that the future of mankind depends on nanotechnology and nano-engineering. When, in the 70's, a movie showed the travel of a medical group inside the body of an ill person to restore the damage that had occurred, it appeared a somewhat fantastic fiction, but it made allusion to the way science would solve some medical issues in the future. Now we are witnesses of how 'fictions' become reality. Nowadays nanotechnology is the clue for solving all problems in all fields of science, from genetics to time-space adventures. The 'future' of life on the earth would not resemble any in the 'past', except maybe mythological tales.
If you are having a technical problem posting a comment, please contact technical support. | <urn:uuid:19b290e0-7b69-4673-947d-1deff6423b0f> | 3.578125 | 1,530 | Comment Section | Science & Tech. | 36.979114 |
Ever since the discovery of buckminsterfullerene, the football-shaped molecule which contains 60 carbon atoms, chemists have been racing to make larger 'fullerenes'. However, once a new fullerene has been discovered, determining its structure is difficult. The distinctive 'hollow cage' form chosen by nature may be one of thousands of different possible structures.
Now two British chemists have developed simple rules to predict the structures of fullerenes with more than 60 carbon atoms. Their predictions are allowing other chemists to identify new members of the family of fullerenes as they are made in the laboratory.
Patrick Fowler of the University of Exeter and David Manolopoulos of the University of Nottingham have used simple chemical theory to devise a set of rules for predicting the most stable, and hence the most likely, structures for any number of carbon atoms.
A first requirement is that carbon atoms must use up all four of ...
To continue reading this article, subscribe to receive access to all of newscientist.com, including 20 years of archive content. | <urn:uuid:eb96df69-f21e-4193-845d-f69a5265859b> | 3.640625 | 218 | Truncated | Science & Tech. | 28.782679 |
Spontaneous Combustion Heat
I am curious about spontaneous combustion when oily rags are discarded improperly. What causes heat build up?
In order for something to burn (oxidize), ignition heat and oxygen are necessary. As you know, such reactions release heat. Burning may be a slow oxidation like that which occurs when iron rusts, or it may occur quickly in a very fast oxidation like that which occurs when grain dust explodes.
If the rags are soaked in a combustible substance such as gasoline or oil, a very gradual, slow oxidation may occur. As it does, the pile of rags serves as a kind of insulating blanket that traps and allows a build-up of heat and flammable vapors from the reaction. When the temperature rises far enough, the whole pile may catch fire very quickly in a "spontaneous combustion."
If the rags are hung up so that they are exposed to the air but not piled up, they are far less likely to ignite on their own. Nevertheless, If soiled rags (like shop towels) must be stored prior to washing, they should be stored in an air-tight container, thus limiting the initial amount and supply of oxygen available to enable combustion.
One additional caution: Certain drying oils used in wood preservation are so chemically reactive that they may ignite in circumstances under which a common oily rag would not. Whenever those kinds of oils are used, one must be extra careful to follow instructions on the proper disposal of any rags or paper towels that may be contaminated with those materials.
Update: June 2012 | <urn:uuid:fc919795-899d-4199-a405-fbdd460b4797> | 3.171875 | 334 | Knowledge Article | Science & Tech. | 41.32121 |
This is called an "arithmetic series".
Consecutive terms in an arithmetic series ALWAYS differ by "d". Subtract from any term in this sequence the one that preceded it and the answer is always d = (I'll let you figure this one out).
If you want to calculate the actual cost of 100 apples, you need to find the sum of the first 10 terms (as each term represents the price of 10 apples in sequence). You can find this easily by plugging the first and last term into a simple equation:
For a FINITE arithmetic sequence,
Sn = (n/2)(a1 + an)
a1=first term, an=last term, n=the total number of terms being added.
Plug these figures in and what do you get?
Now, to create *an equation*:
The nth term of an arithmetic sequence can be found using a couple of general formulas, but here's the best one you should use (I recommend you look into this)
The nth term of an arithmetic sequence has the form an = dn + c
where c = a1 - d
You take it from there.
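Here's a small generic sketch of both formulas, with illustrative numbers of my own (not the figures from the apple problem, which you should still work out yourself):

def nth_term(a1, d, n):
    # an = dn + c with c = a1 - d, i.e. an = a1 + (n - 1) * d
    return a1 + (n - 1) * d

def series_sum(a1, d, n):
    # Sn = (n / 2) * (a1 + an)
    return n * (a1 + nth_term(a1, d, n)) / 2

print(nth_term(5, 3, 10))    # 32
print(series_sum(5, 3, 10))  # 185.0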
Last edited by strumbore
on Sat Jan 12, 2013 9:38 am, edited 1 time in total. | <urn:uuid:92a2d8b4-f1d3-4af4-9a01-bfc77ad9567f> | 3.046875 | 270 | Comment Section | Science & Tech. | 66.908382 |
Following in the pad prints and rover tracks of Viking 1 and 2, Mars Pathfinder, Spirit and Opportunity, the Mars Phoenix lander became Earth’s sixth successful visitor to the surface of the Red Planet. Using a maneuver involving parachutes and rocket thrusters, the craft touched down on May 25, to the delight of NASA mission controllers and space fans everywhere. The Mars Reconnaissance Orbiter, one of three craft currently circling Mars, spotted the lander, with its two solar panels splayed out.
One of the first photographs returned by Phoenix, a shot of the northern plains of Mars, shows pebbles and polygonal patterns, which probably result from the seasonal expansion and contraction of ice near the surface. Curiously, the polygons, about 1.5 to 2.5 meters in diameter, were much smaller than estimates made from earlier orbital images, suggesting that the area may be more complex and dynamic than previously thought.
With its 2.4-meter-long robotic arm, Phoenix scooped up some Martian dirt on June 6, but the first attempt to deliver the sample to onboard equipment for analysis—the hope is to find water—stalled when the Martian soil proved to be clumpier than anticipated. The sample sat on top of metal screens meant to sift out smaller particles. Mission controllers had to come up with a means to bypass the problem, including instructing Phoenix to turn on a mechanical screen shaker in an effort to dislodge the material. The seventh and final round of shaking did the trick.
Phoenix should last to September but probably not much beyond that. For most of 2009, it will be encased in dry ice, as the Martian winter arrives and carbon dioxide condenses out of the atmosphere, covering the region.
For more SciAm.com coverage of the Phoenix Lander's mission on Mars, click here
Note: This story was originally published with the title, "Seeing Red". | <urn:uuid:e57ed377-b9d0-49b6-aafa-d8be70bc5e3b> | 3.296875 | 394 | Truncated | Science & Tech. | 50.884934 |
Fact sheet number: FS-2002-04-80-MSFC
These days everyone is talking about thinking “outside the box,” but sometimes science is better done inside a box — a glovebox that is. The Microgravity Science Glovebox — a sealed container with built-in gloves — provides an enclosed work space for investigations conducted in the unique, low-gravity, or microgravity, environment created as the International Space Station orbits Earth.
There are good reasons for using a glovebox to contain experiments with fluids, flames, particles and fumes. In an Earth-based laboratory, liquids stay in beakers or test tubes. In the near-weightlessness of the Station, they float away. They might get into the cabin air and irritate a crew member’s skin or eyes or even make them sick. They could damage the Station’s sensitive computer and electrical systems or contaminate other experiments.
To make laboratory-type investigations inside the Station possible, engineers and scientists at NASA’s Marshall Space Flight Center in Huntsville, Ala., worked with the European Space Agency to build the Microgravity Science Glovebox — a facility that will support Station investigations for the next 10 years. In exchange for developing the Microgravity Science Glovebox, the European Space Agency will have use of other facilities inside the Destiny laboratory until its Space Station laboratory — the Columbus Orbital Facility — is attached to the Station in a couple of years.
The Space Shuttle Endeavour will transport the Microgravity Science Glovebox to the Station during STS-111, ISS Flight UF2, set for spring 2002.
The Expedition Five crew will move the glovebox facility from the Shuttle to the Station’s Destiny Laboratory, where they will perform an initial setup and checkout of the facility.
To checkout the glovebox’s containment systems, they will practice some waste handling procedures with non-toxic substances. They also will complete some maintenance activities.
These initial glovebox activities will be supported by scientists and engineers working in a telescience center at the Microgravity Development Laboratory — a unique Marshall Center facility that helps scientists and engineers prepare investigations from conception to implementation in space. This laboratory has an identical engineering model of the glovebox for investigation testing and flight preparation.
The Space Station glovebox will occupy a floor-to-ceiling rack inside Destiny. It is more than twice as large as gloveboxes flown on the Space Shuttle, and can hold larger investigations that are about the size of an airline carry-on bag.
The part of the unit that holds experiment equipment is called the Work Volume, and has a usable volume of about 67 gallons (255 liters). This work space is approximately waist-high and can slide out to extended or protracted positions, making it easier for crew members to use. An airlock under the work volume, can be accessed to bring objects safely into the work volume, while other activities are going on inside the glovebox.
The glovebox has side ports, 16 inches (40 centimeters) in diameter, for setting up and manipulating equipment inside the box. The ports are equipped with rugged gloves that can be sealed tightly to prevent leaks. The gloves can be removed to provide uninhibited access to the inside of the glovebox when contaminants are not present.
The Station glovebox allows investigators to control their investigations inside the box from the ground. It has an upgraded video system and a coldplate that can provide cooling for experiment hardware. It provides vacuum, venting and gaseous nitrogen, power and data interfaces to investigations.
All of these improvements allow the Microgravity Science Glovebox to accommodate a broad range of investigations. It is set up like a traditional lab bench on Earth to minimize the gap between what can be accomplished in a ground-based lab and what can be achieved in the Space Station lab.
The Microgravity Science Glovebox is designed to support Station investigations for the next 10 years, with occasional replacement of parts in orbit and upgrades in technology to video and data systems.
The Microgravity Science Glovebox accommodates small and medium-sized investigations from many disciplines including biotechnology, combustion science, fluid physics, fundamental physics and materials science. Many of these experiments use chemicals or burning or molten samples that must be contained.
The crewmembers insert their hands in gloves attached directly to the facility doors. Using gloves, they can safely manipulate samples inside the sealed working area.
As investigations are conducted in space, the crew can see inside the glovebox. A video display shows glovebox investigations, and the crew can scrutinize samples with a microscope attached to the inside of the work volume. Video is sent from the Space Station to scientists on Earth so they can observe their investigations as they take place in orbit.
As part of the initial glovebox science activities, two investigations will be conducted during Expedition Five:
The pore formation investigation will melt a material and solidify it. As the materials solidify, tiny holes or pores will form. These pores can affect how strong a material is and how well it performs. This experiment will examine ways to control pore formation and improve materials processing for many applications including turbine blades used in aircraft engines.
The SUBSA investigation will melt materials used for semiconductor crystals, a key component in computers and other electronic devices. On Earth, convection — the gravity-dependent phenomenon, which causes hot air to rise — can cause defects in semiconductors and other materials. As the material melts, convection causes mixing and fluid motion. Investigators will use a moving baffle to see if it reduces convection in the melt and improves crystal formation.
Twenty more glovebox investigations are being planned for future Space Station missions. Numerous glovebox investigations are planned for flight over the next several years.
The development of the Microgravity Science Glovebox builds on a series of successes with the Middeck Glovebox and Spacelab Glovebox, both used on several prior Space Shuttle missions and on the Russian space station Mir. These gloveboxes also were built by Bradford Engineering B.V., an engineering company in The Netherlands, in collaboration with the Marshall Center.
The Station glovebox supports larger, more sophisticated investigations, expanding the capabilities of its predecessors.
The Microgravity Science Glovebox makes it possible to do investigations in space similarly to those done in ground-based laboratories. It provides a safe environment for research with liquids, flames and particles used as a part of everyday research on Earth.
Without the glovebox, many types of hands-on investigations would be impossible or severely restricted on the Station. The Microgravity Science Glovebox is a valuable research tool that lets space crews handle materials safely.
The glovebox allows scientists to test small parts of larger investigations in a microgravity environment, try out equipment in microgravity, and even do complete laboratory-like investigations. It also enables researchers to fly simple investigations more quickly.
The glovebox can support all key areas of microgravity research as well as other scientific fields that may want to use it. This makes it a useful laboratory resource for scientists in many different fields conducting a wide array of investigations.
For more information on the Microgravity Science Glovebox and other Space Station investigations, please visit: | <urn:uuid:3c4ad59c-4c9c-4ddc-97be-18967316d614> | 3.921875 | 1,490 | Knowledge Article | Science & Tech. | 23.32913 |
Quantum mechanics, the branch of mathematical physics that deals with atomic and subatomic systems and their interaction with radiation in terms of observable quantities. It is an outgrowth of the concept that all forms of energy are released in discrete units or bundles called quanta.
Quantum mechanics is concerned with phenomena that are so small-scale that they cannot be described in classical terms. Throughout the 1800s most physicists regarded Isaac Newton's dynamical laws as sacrosanct, but it became increasingly clear during the early years of the 20th century that many phenomena, especially those associated with radiation, defy explanation by Newtonian physics. It has come to be recognized that the principles of quantum mechanics rather than those of classical mechanics must be applied when dealing with the behaviour of electrons and nuclei within atoms and molecules. Although conventional quantum mechanics makes no pretense of describing completely what occurs inside the atomic nucleus, it has helped scientists to better understand many processes such as the emission of alpha particles and photodisintegration. Moreover, the field theory of quantum mechanics has provided insight into the properties of mesons and other subatomic particles associated with nuclear phenomena.
In the equations of quantum mechanics, Max Planck's constant of action h = 6.626 × 10^-34 joule-second plays a central role. This constant, one of the most important in all of physics, has the dimensions energy × time. The term "small-scale" used to delineate the domain of quantum mechanics should not be literally interpreted as necessarily relating to extent in space. A more precise criterion as to whether quantum modifications of Newtonian laws are important is whether or not the phenomenon in question is characterized by an "action" (i.e., time integral of kinetic energy) that is large compared to Planck's constant. Accordingly, if a great many quanta are involved, the notion that there is a discrete, indivisible quantum unit loses significance. This fact explains why ordinary physical processes appear to be so fully in accord with the laws of Newton. The laws of quantum mechanics, unlike Newton's deterministic laws, lead to a probabilistic description of nature. As a consequence, one of quantum mechanics' most important philosophical implications concerns the apparent breakdown, or at least a drastic reinterpretation, of the causality principle in atomic phenomena.
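As a rough illustration of that criterion (the figures below are my own order-of-magnitude estimates, not Britannica's):

H = 6.626e-34  # Planck's constant, joule-seconds

# Macroscopic pendulum: roughly 1 J of kinetic energy over a 1 s swing.
pendulum_action = 1.0 * 1.0
# Electron in hydrogen: ~2.2e-18 J of kinetic energy over an orbital
# period of ~1.5e-16 s.
electron_action = 2.2e-18 * 1.5e-16

print(pendulum_action / H)  # ~1.5e33: action >> h, classical laws suffice
print(electron_action / H)  # ~0.5:    action ~ h, quantum mechanics required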
The history of quantum mechanics may be divided into three main periods. The first began with Planck's theory of black-body radiation in 1900; it may be described as the period in which the validity of Planck's constant was demonstrated but its real meaning was not fully understood. The second period began with the quantum theory of atomic structure and spectra proposed by Niels Bohr in 1913. Bohr's ideas gave an accurate formula for the frequency of spectral lines in many cases and were an enormous help in the codification and understanding of spectra. Nonetheless, they did not represent a consistent, unified theory, constituting as they did a sort of patchwork affair in which classical mechanics was subjected to a somewhat extraneous set of so-called quantum conditions that restrict the constants of integration to particular values. True quantum mechanics appeared in 1926, reaching fruition nearly simultaneously in a variety of forms--namely, the matrix theory of Max Born and Werner Heisenberg, the wave mechanics of Louis V. de Broglie and Erwin Schrödinger, and the transformation theory of P.A.M. Dirac and Pascual Jordan. These different formulations were in no sense alternative theories; rather, they were different aspects of a consistent body of physical law.
Excerpt from the Encyclopedia Britannica without permission. | <urn:uuid:42b539ef-6097-4753-bca0-7ce979beaa85> | 3.359375 | 734 | Knowledge Article | Science & Tech. | 30.64496 |
The story so far
In my previous post I showed how the "evolutionary rate" of Zhivotovsky, Underhill, and Feldman (2006) is inappropriate for TMRCA calculations, because:
- It is not calculated from the time depth of the MRCA, but of an earlier "Patriarch"; more importantly:
- It is an average over many simulated haplogroups of small size, and not the kinds of haplogroups one is usually interested in dating in population studies
How big are the haplogroups in Z.U.F.-type simulations?
Z.U.F. consider several different demographic models, differing in their choice of m, the population growth constant. The population size increases (stochastically) on average by 100(m-1)% every generation.
I produce N=10,000 simulations for each reported number. These are the average and maximum number of descendants over these N simulations.
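For concreteness, here is a minimal Python sketch of this kind of branching-process simulation; the Poisson offspring model, the seed, and all names are my own illustrative assumptions rather than Z.U.F.'s exact procedure:

import numpy as np

rng = np.random.default_rng(0)

def haplogroup_size(m, generations):
    # One patrilineage: each man independently fathers a Poisson(m) number of sons.
    n = 1
    for _ in range(generations):
        if n == 0:
            break  # lineage extinct
        n = int(rng.poisson(m, size=n).sum())
    return n

# Flavor of the m=1 experiment: N=10,000 lineages followed for 320 generations.
sizes = [haplogroup_size(1.0, 320) for _ in range(10_000)]
alive = [s for s in sizes if s > 0]
print(len(alive), np.mean(alive), max(alive))

Averaged over the surviving lineages only, the mean size should come out near the theoretically predicted ~160 quoted below for 320 generations.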
Constant population size (m=1)
Under this assumption, haplogroup size grows purely due to randomness of the fathering process; there is no overall population growth. This is an important case, because the 3.6x slower evolutionary rate has been derived from it.
[Table: number of descendants (average and maximum) at each simulated time depth]
It is clear that this type of simulation produces very small haplogroup sizes. Even for 320 generations (early Neolithic for Greece) the very largest haplogroup produced had 1,310 descendants, while the average one had the theoretically predicted ~160.
Small haplogroups => more drift => loss of variance => lower "effective" mutation rate.
So, as I mentioned in my previous post, to calculate the 3.6x slower rate, not only do we average over haplogroups of all sizes, small and large alike, but we are actually missing the relevant observations. But more on this, in the next section.
Expanding population (m=1.01)
[Table: number of descendants (average and maximum) at each simulated time depth]
Predictably, haplogroups end up bigger in an expanding population, but still far short of the sizes of commonly dated real-world haplogroups. The case of m=1.01 is important, because it is the one which yields the maximum effective mutation rate considered by Z.U.F. assuming haplogroups start with one individual.
Thus, even the highest mutation rate considered by Z.U.F. (about 0.55μ over 400 generations) is derived by averaging over haplogroups that are unrealistic (too small). Y-STR variance accumulates at a higher rate in the real world.
Why are Z.U.F.-style simulated haplogroups so small?
It is surprising that these simulated haplogroups end up so small, looking nothing like commonly studied haplogroups even for an expanding population.
The apparent mystery is resolved, once we realize that m is nothing more than the average number of sons a man has. The reason why we see haplogroups so much bigger than the simulated ones is because for individual men, m may be much more, or much less than its population average. In other words, there is reproductive inequality, which could be due both to social advantage, or to natural selection.
So, rather than having a uniform m for all men, we can allow m to vary in individual lineages. A man A may have m_A < m if he is impoverished or has a faulty Y-chromosome gene, and he may have m_A > m if he is a ruler or has an advantageous gene in his Y-chromosome.
The advantage could be slight but long-standing (a small fitness improvement) or small and intense (a conquest or foundation of a dynasty). Its effect on the lucky lineage is an increase in the number of descendants. Its effect on Y-STR variance is a rate of increase approaching the germline rate.
It is clear, by now, that realistic haplogroup sizes can occur only when there is reproductive inequality. They are not the result of genetic drift, but of natural or social selection. And, effective mutation rates should be calculated over successful haplogroups under conditions of reproductive inequality, and not over all haplogroups under conditions of reproductive equality.(*)
A note on sampling
Consider a lineage of 1,000 men (i.e. ~ the maximum produced with reproductive equality) in a population of 1,000,000 men. Its frequency is thus 0.1%.
We take a sample of 1,000 men from this population; this is a much larger sample than is typically used in population studies, and for a smaller population. We expect on average to find just 1 man from the lineage in question in our sample. You can't do a variance-based age estimate with one man!
Thus, it becomes clear why haplogroups produced by Z.U.F.-style simulations are uninteresting. You just never encounter enough representatives from them in a real population study. You are typically interested in the much larger haplogroups, which could only have proliferated under conditions of reproductive inequality, and which are the only ones that can yield enough representatives in a sample to allow for a variance calculation.
In the previous post I showed that Z.U.F. calculate their effective rate over all simulated observations, but the rate is applied in the literature over a very specific set of observations, i.e. large haplogroups.
In this post, I showed that Z.U.F.-style simulations just don't produce realistic haplogroup sizes. Drift alone can't explain why millions of men share patrilineal ancestry. Large haplogroup sizes require an assumption of reproductive inequality, and Y-STR variance within them accumulates near the germline rate.
(*) Of course, if one studies numerically small populations, it is possible that a slower effective rate may be desired. My concern is with the large human populations (e.g. Greeks or Indians) where real haplogroup sizes exceed greatly those produced by simulations with reproductive equality.
UPDATE (August 8): Continued in On the effective mutation rate for Y-STR variance
Sample Analysis at Mars
Sample Analysis at Mars (SAM) is a suite of instruments on the Mars Science Laboratory Curiosity rover. The SAM instrument suite will analyze organics and gases from both atmospheric and solid samples. It was developed by the NASA Goddard Space Flight Center, the Laboratoire Inter-Universitaire des Systèmes Atmosphériques (LISA) (jointly operated by France's CNRS and Parisian universities), and Honeybee Robotics, along with many additional external partners.
The SAM suite consists of three instruments:
- The Quadrupole Mass Spectrometer (QMS) detects gases sampled from the atmosphere or those released from solid samples by heating.
- The Gas Chromatograph (GC) is used to separate out individual gases from a complex mixture into molecular components. The resulting gas flow is analyzed in the mass spectrometer with a mass range of 2-535 Daltons.
- The Tunable Laser Spectrometer (TLS) performs precision measurements of oxygen and carbon isotope ratios in carbon dioxide (CO2) and methane (CH4) in the atmosphere of Mars in order to distinguish between their geochemical or biological origin (such ratios are conventionally reported in the delta notation noted below).
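As background (this is the standard isotope-geochemistry convention, not a formula taken from the mission documents), such ratios are reported in delta notation, e.g. δ¹³C = (R_sample / R_standard − 1) × 1000 ‰ with R = ¹³C/¹²C; biological processing typically leaves carbon depleted in ¹³C, which is why precise ratio measurements can hint at the origin of the methane.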
SAM also has three subsystems: the 'Chemical Separation and Processing Laboratory', for enrichment and derivatization of the organic molecules of the sample; the Sample Manipulation System (SMS), for transporting powder delivered from the MSL drill to a SAM inlet and into one of 74 sample cups, and then moving the sample to the SAM oven, which releases gases by heating to up to 1000 °C; and the pumps subsystem, to purge the separators and analysers.
The Space Physics Research Laboratory at the University of Michigan built the main power supply, command and data handling unit, valve and heater controller, filament/bias controller, and high voltage module. The uncooled infrared detectors were developed and provided by the Polish company VIGO System.
- 9 November 2012: A pinch of fine sand and dust became the first solid Martian sample deposited into the SAM. The sample came from the patch of windblown material called Rocknest, which had previously provided a sample for mineralogical analysis by the CheMin instrument.
- 3 December 2012: NASA reports SAM has detected water molecules, chlorine, and sulphur. Hints of organic compounds were also seen, though contamination from Curiosity itself could not be ruled out.
- "MSL Science Corner: Sample Analysis at Mars (SAM)". NASA/JPL. Retrieved 2009-09-09.
- Overview of the SAM instrument suite
- Cabane, M.; et al. (2004). "Did life exist on Mars? Search for organic and inorganic signatures, one of the goals for "SAM" (sample analysis at Mars)". Advances in Space Research 33 (12): 2240–2245. Bibcode:2004AdSpR..33.2240C. doi:10.1016/S0273-1177(03)00523-4.
- "Sample Analysis at Mars (SAM) Instrument Suite". NASA. October 2008. Retrieved 2009-10-09.
- Mahaffy, Paul R.; et al. (2012). "The Sample Analysis at Mars Investigation and Instrument Suite". Space Science Reviews. Bibcode:2012SSRv..tmp...23M. doi:10.1007/s11214-012-9879-z.
- Tenenbaum, D. (9 June 2008). "Making Sense of Mars Methane". Astrobiology Magazine. Retrieved October 8, 2008.
- Tarsitano, C. G.; Webster, C. R. (2007). "Multilaser Herriott cell for planetary tunable laser spectrometers". Applied Optics 46 (28): 6923–6935. Bibcode:2007ApOpt..46.6923T. doi:10.1364/AO.46.006923. PMID 17906720.
- Kennedy, T.; Mumm, E.; Myrick, T.; Frader-Thompson, S. (2006). "Optimization of a mars sample manipulation system through concentrated functionality".
- Tuesday, 13 December 2011 (2011-12-13). "Vigo System / Vigo IR Detectors on Mars". Vigo.com.pl. Retrieved 2012-08-17.
- "Rover's 'SAM' Lab Instrument Suite Tastes Soil". JPL-NASA. 13 November 2012.
- Brown, Dwayne; Webster, Guy; Jones, Nance Neal (December 3, 3012). "NASA Mars Rover Fully Analyzes First Martian Soil Samples". Retrieved December 3, 2012. Unknown parameter
- "'Complex chemistry' found on Mars". 3 News NZ. December 4, 2012.
- Sample Analysis at Mars - NASA
- SAM is loaded into the Rover - NASA
- The SAM instrument suite, without side panels | <urn:uuid:f9d08b6f-8490-4527-9d22-29a89ab522da> | 3.40625 | 1,020 | Knowledge Article | Science & Tech. | 57.092416 |
The Copenhagen Diagnosis
The University of New South Wales Climate Change Research Center has put together a report surveying scientific papers that have been published since the 2007 Intergovernmental Panel on Climate Change (IPCC) completed its fourth Assessment Report over three years ago.
“The Copenhagen Diagnosis: Updating the World on Latest Climate Science” found that many climate indicators are worsening at a faster pace than predicted by the IPCC.
Global carbon dioxide emissions from fossil fuels in 2008 were 40% higher than those in 1990, with a threefold acceleration over the past 18 years. This tracks near the highest scenarios considered by the IPCC. At the same time, the fraction of CO2 emissions absorbed by the land and ocean appears to have decreased from 60% to 55%.
A wide variety of satellite and ice measurements show that both the Greenland and Antarctic ice sheets are losing mass at an increasing rate. Glaciers in other parts of the world have also been melting at an increased rate since 1990. Summertime melting of Arctic sea ice has accelerated far beyond any of the IPCC predictions; the summer ice minimum since 2007 has averaged about 40% below the mean IPCC projection.
Satellite measurements of sea level also exceed IPCC predictions: the sea has risen 3.4 mm/yr over the past 15 years, about 80% faster than past IPCC predictions. On this trajectory, global sea-level rise by 2100 is likely to be about twice the IPCC projection, perhaps as much as 2 meters.
Rising temperatures are beginning to trigger positive feedback loops. One degree Celsius of warming is believed to carry moderate risks of passing large-scale tipping points, and three degrees Celsius of warming would bring substantial or severe risks.
The 2005 drought in Western Amazonia resulted in a massive release of carbon, an event that is expected to become more common. If a lengthening of the dry season continues and droughts increase in frequency or severity, the system could reach a tipping point resulting in a dieback of up to 80 percent of the rainforest and its replacement by savannah.
Farther north, the southern boundary of the permafrost zone has shifted northward across North America and has moved to higher elevations on the Tibetan plateau. Similar observations in Europe have noted permafrost thawing. As the permafrost melts, organic materials decay, producing methane. This feedback has not been accounted for in any of the IPCC projections.
Some of the most concerning regions and tipping points include the Greenland ice sheet, which may be nearing a point where its melting becomes irreversible. The West Antarctic ice sheet may also be nearing a melting tipping point.
The Indian summer monsoon is probably already being disrupted. Some future projections show a doubling of drought frequency within a decade.
Global CO2 emissions will have to peak by 2020 and then decline rapidly in order to avoid catastrophic climate change. The fact that they have been accelerating in recent years makes this an even more daunting challenge.
Governments are moving at a slow pace at best on global warming. Achieving significant reductions will involve massive investment plus changes in public behavior that governments are reluctant to enforce. For those concerned about global warming, the lifeboat strategy is becoming more imperative as time goes on: developing local, self sufficient communities that can survive a low energy future, and can adapt to the changes that are coming.
New tracking and observing technologies are giving marine conservationists a fish-eye view of conditions, from overfishing to climate change, that are contributing to declining fish populations, according to a new study.
Photo: A researcher surgically implants a tag in a bluefin tuna.
Until recently, scientists provided fishery managers only such limited data as stock counts and catch estimates, said Charles Greene, Cornell professor of ocean sciences and lead author of the study published in the March issue (Vol. 22, No. 1) of the journal Oceanography.
But new advances in miniature sensors and fish-tracking tags, ocean observing systems and computer models are providing much more insight into environmental changes and how fish are responding behaviorally and biologically to such changes, thereby enabling better modeling to predict fish populations. As a result, researchers are making more informed recommendations for strategies to address falling fish populations.
Obtaining real-world data is essential, stressed Greene. "Many of the commercial fish populations in the world are pretty highly depressed. It's a bleak picture in terms of the status of many wild marine fish populations."
For example, the Atlantic bluefin tuna fishery, which can garner more than $15,000 per fish, is managed as two separate stocks: one in the eastern Atlantic basin, with a breeding ground in the Mediterranean, and another in the western basin, with a breeding ground in the Gulf of Mexico. Neither stock is sustainably harvested, and the western population has declined by roughly 90 percent over the last 25 years, despite strict quotas.
A project known as Tag A Giant (TAG) uses an implanted tag in the tuna to record external pressure, internal and external temperature and ambient light, though the tuna must be recaptured to recover these data. TAG also uses a pop-up tag that is attached to the tuna but self releases, floats to the surface and transmits data on each tuna's external conditions via satellite. The tags help researchers estimate geo-locations and track each fish's daily movements.
According to the study, new TAG data have revealed that as tuna grow, they swim all over the Atlantic, and that the fish from the two stocks commingle. Past failure to account for this mixing of the two stocks has led to unsustainable management practices, especially for tuna originating in the Gulf of Mexico, Greene said. New strategies must account for mixing stocks, since fishing in the eastern basin has undermined the quotas and recovery plans for the western basin stock.
With regard to Pacific salmon, fishery managers have assumed that juveniles traveling from spawning grounds to the ocean face greater mortality along heavily dammed rivers, like the Snake-Columbia river system, than in undammed rivers. Thus, they collected juveniles and transported them past the Snake-Columbia river system's eight dams before releasing them downstream. However, adult salmon numbers returning from the ocean did not increase.
The Pacific Ocean Shelf Tracking project, which tagged juvenile fish, showed that the smaller, less developed fish were dying in high numbers in the lower river and coastal ocean. This kind of knowledge will help managers test and adapt their strategies in wild-fish systems, which historically have been hard to monitor.
This work was supported by more than a dozen entities, including the Gordon and Betty Moore, Packard, Monterey Bay Aquarium and Sloan foundations, and the Bonneville Power Administration.
Internet Explorer 6 DOCTYPE Bug
Join the Discussion
Microsoft should be ashamed of themselves for not knowing what a DOCTYPE is.
A DOCTYPE defines the version of the HTML standard that has been followed in the coding of the HTML contained within the web page. Of course, most if not all current web browsers ignore the DOCTYPE and parse the page based on their own predefined rules rather than the rules defined in the DOCTYPE. That is a problem all current browsers share, and one which will presumably be fixed in future versions of the browsers.
A DOCTYPE is also used by validators to determine whether the HTML coded in the page follows the defined standard or not.
The problem arises because there are several different ways to refer to some parts of the Document Object Model as it relates to the page (e.g. in determining scroll position). The standards say that you should refer to these parts of the DOM using document.documentElement but the traditional Microsoft way of accessing the same fields is using document.body. Of course other browsers used other non-standard ways to retrieve the same information but many had adopted the Microsoft way even before the standard way was developed.
If you use a Strict DOCTYPE you need to test both versions of a field in order to get the values from document.documentElement in IE6 and from document.body in IE5 and other older browsers. Of course it doesn't matter which of the two fields we get in other modern browsers since they populate both sets of fields with the same values (the way Microsoft should have done). The simplest way to do this is to replace each reference with code that will substitute whichever of the two fields has been populated with a value. For the scrollTop field the simplest code to do this is:
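A minimal sketch of the kind of substitution described, written here as an illustration of the pattern rather than the article's exact snippet:

// Use document.documentElement.scrollTop when the browser populates it
// (IE6 with a Strict DOCTYPE); otherwise fall back to document.body.scrollTop.
var scrollTop = (document.documentElement && document.documentElement.scrollTop) ||
                document.body.scrollTop || 0;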
One way to remember the multiplication table of the octonions is to use the following diagram (which I got from John Baez's online paper): if $(e_i,e_j,e_k)$ is one of the lines listed according to the cyclic order indicated in the diagram, then $e_ie_j=e_k$ and $e_je_i=-e_k$ in $\mathbb O$.
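For example (the particular indices depend on how the diagram is labeled, so take this instance as illustrative): if $(e_1,e_2,e_3)$ is one of the cyclically oriented lines, then $e_1e_2=e_3$, $e_2e_3=e_1$ and $e_3e_1=e_2$, while reversing any product flips the sign, e.g. $e_2e_1=-e_3$.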
If we forget the cyclic orientation of the lines, this is of course a well-known depiction of the Fano plane $P^2(\mathbb F_2)$, which is an example of many different structures: it is a Steiner triple system, a quasigroup, &c.
What kind of object is this oriented Fano plane?
NB1: Naive googling informs of the concept of Mendelsohn triple systems and of transitive triple systems, both of which are enrichments of the notion of Steiner triple systems with orderings on the blocks. The oriented Fano plane above is not an example of these concepts, though.
NB2: One way to reconstruct the orientation is as follows: it is (up to projective linear automorphisms) the unique way to cyclically orient the lines in the plane in such a way that for each point $x$, the set of three points which follow $x$ in the three lines that go through it is itself a line. In fact, it is the only Steiner triple system which can be oriented with this property.
Fields (C# Programming Guide)
For simplicity, these examples use fields that are public, but this is not recommended in practice. Fields should generally be private. Access to fields by external classes should be indirect, by means of methods, properties, or indexers. For more information, see Methods, Properties and Indexers.
Fields store the data a class needs to fulfill its design. For example, a class representing a calendar date might have three integer fields: one for the month, one for the day, and one for the year. Fields are declared within the class block by specifying the access level of the field, followed by the type of the field, followed by the name of the field. For example:
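(A minimal sketch of the kind of declaration described; the class and field names are illustrative, not the original sample.)

public class CalendarDate
{
    // Access level, then type, then name.
    public int Month;
    public int Day;
    public int Year;
}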
Accessing a field in an object is done by adding a period after the object name, followed by the name of the field, as in objectname.fieldname. For example:
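(Illustrative usage, assuming the CalendarDate class sketched above.)

CalendarDate birthday = new CalendarDate();
birthday.Month = 7;                        // write the field
System.Console.WriteLine(birthday.Month); // read it back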
A field can be given an initial value by using the assignment operator when the field is declared. To automatically assign the month field to 7, for example, you would declare month like this:
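(Again a sketch consistent with the surrounding text:)

public int Month = 7;   // initializer runs immediately before the constructor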
Fields are initialized immediately before the constructor for the object instance is called, so if the constructor assigns the value of a field, it will overwrite any value given during field declaration. For more information, see Using Constructors.
A field initializer cannot refer to other instance fields.
Fields can be marked as public, private, protected, internal, or protected internal. These access modifiers define how users of the class can access the fields. For more information, see Access Modifiers.
A field can optionally be declared static. This makes the field available to callers at any time, even if no instance of the class exists. For more information, see Static Classes and Static Class Members.
A field can be declared readonly. A read-only field can only be assigned a value during initialization or in a constructor. A static readonly field is very similar to a constant, except that the C# compiler does not have access to the value of a static read-only field at compile time, only at run time. For more information, see Constants.
There used to be very good reasons for keeping instruction / register names short. Those reasons no longer apply, but short cryptic names are still very common in low-level programming.
Why is this? Is it just because old habits are hard to break, or are there better reasons?
- Atmel ATMEGA32U2 (2010?): registers with names like DDRD, PORTB, and UCSR1A
- .NET CLR instruction set (2002): opcodes like ldarg.0, stloc.1, and brtrue.s
Aren't the longer, non-cryptic names easier to work with?
When answering and voting, please consider the following. Many of the possible explanations suggested here apply equally to high-level programming, and yet the consensus, by and large, is to use non-cryptic names consisting of a word or two (commonly understood acronyms excluded).
Also, if your main argument is about physical space on a paper diagram, please consider that this absolutely does not apply to assembly language or CIL; I would also appreciate it if you could show me a diagram where terse names fit but readable ones make the diagram worse. From personal experience at a fabless semiconductor company, readable names fit just fine, and result in more readable diagrams.
What is the core thing that is different about low-level programming as opposed to high-level languages that makes the terse cryptic names desirable in low-level but not high-level programming?
Saturday, July 31
The 125-foot-tall column overlooking Astoria, Ore., is covered with scenes of the region’s exploration-rich history: Capt. Robert Gray claims the region for America in 1792, Lewis and Clark reach the Pacific in 1805, and explorers with the Pacific Fur Company arrive in 1811. Despite these textbook-worthy highlights, Astoria is perhaps best known for its starring role in the Oscar-snubbed cult classic film “The Goonies” (apparently we missed the 25-year “Goonies” reunion by a couple of weeks). Nonetheless, it’s a fitting place to set off on our own journey of discovery.
Over the next 12 days, our contingent of 24 scientists and 30 crew members will be mounting a scientific assault on Hydrate Ridge, a fascinating site 90 kilometers off the Oregon coast where methane gas flows out of the earth’s crust and into the deep ocean. Methane has a P.R. problem: In the atmosphere, the gas is a troublemaker, contributing to climate change with 25 times the heat-trapping power of carbon dioxide. But on the seafloor, it’s a lifeline, as innovative micro-organisms are able to eke out a living converting methane to carbon dioxide and using the resulting energy to grow. Where one type of organism leads, others will follow, and entire ecosystems have grown up around these methane vents: microbes, clams, stringy tube worms and a range of other exotic species.
The methane vents at Hydrate Ridge are known as cold seeps, because the temperature hovers around 4 degrees Celsius — typical of the deep ocean. These conditions are in marked contrast to the flashier hydrothermal vents, where superheated water can exceed 100 degrees Celsius and plumes of “black smoke” (which is really composed of metallic minerals) billow out of the rock chimneys.
But from a biological point of view, both types of deep sea vents are critically important in the same way. Virtually all other known life on Earth is ultimately dependent on the sun, either using it for energy directly (plants) or eating other links of the food pyramid (that would be you). Deep sea vents are a different story: Hundreds of meters down, sunlight is irrelevant, and organisms need to look elsewhere for food. Chemicals in fluids squeezed out of vents provide the right nutrients to sustain an oasis of life in the middle of the seafloor desert. This kind of sun-independent ecosystem makes the idea of extraterrestrial life a bit more palatable because we know that even if the surface is too hostile (too hot, too cold, too dry, too much radiation), there may well be enough food beneath the surface for creative cells to work with. Our mission over the next couple of weeks is to learn more about the life forms at cold seep environments, how they interact with each other, how they shape our planet, and what they might mean for the possibility of life beyond earth.
We arrived yesterday morning as our driver/Astoria evangelist pointed out the highlights of this fading fishing town on the south bank of the Columbia River. Over there is where they filmed the orca in “Free Willy” jumping over the rocks. That was the first house in the country to have a flush toilet. This part of town had the largest Finnish population west of the Mississippi in the 1800s. I consider myself well prepared for the day “Astoria” appears as a “Jeopardy” category.
When we got to the dock, the blue-hulled Atlantis was a hive of activity, with suitcases, microscopes and tanks of liquid nitrogen being loaded on board. Atlantis is a 274-foot-long research powerhouse, owned by the United States Navy and operated by the Woods Hole Oceanographic Institution for the marine research community. Onboard, it's "Deadliest Catch" meets "CSI," as grizzled sailors and crew work alongside lab-coated scientists in the name of science. We'll be setting sail (or, less romantically but more accurately, firing up the diesel-electric thrusters) early tomorrow morning, eager to get to work at Hydrate Ridge.
A young crater that's about nine miles (15 km) across scars the surface of Vesta, one of the largest asteroids, in this view from the Dawn spacecraft. Large boulders that were blasted out by the impact that created the crater are scattered around its rim. Dawn will depart Vesta in early September and head toward the largest asteroid, Ceres, with arrival in early 2015. [NASA/JPL/UCLA/MPS/DLR/IDA]
Vesta is one of the largest asteroids — a layered ball of rock and metal more than 300 miles in diameter. It’s also something of a scientific conundrum, notes Carol Raymond, a scientist with a mission called Dawn.
RAYMOND: We’ve had pieces of Vesta under study in the laboratory for decades, but are now only getting out to know the parent body.
For decades, scientists have suspected that several groups of meteorites came from Vesta — blasted from Vesta’s surface when it was hit by another asteroid. The composition of the meteorites seemed to match what astronomers could see of Vesta through telescopes. Until Dawn entered orbit around Vesta last year, though, there was little way to confirm that idea.
But Dawn’s observations strongly support it. In fact, they’ve even narrowed down the likely site of the impact that chipped off the meteorites — a large basin at the asteroid’s south pole. The energy of the impact caused the asteroid’s crust to rebound, building a mountain that towers 13 miles above the basin floor.
The combination of the meteorites and Dawn’s observations will help scientists better understand Vesta’s composition, structure, and history. And since Vesta is one of the oldest surviving “leftovers” from the formation of the planets, the combination will also provide new insights into the process that gave birth to the planets — including our own Earth.
Dawn’s time at Vesta is almost up, though; more about that tomorrow.
Script by Damond Benningfield, Copyright 2012
For more skywatching tips, astronomy news, and much more, read StarDate magazine.
Science Fair Project Encyclopedia
Death's head moth
The name Death's head hawkmoth usually refers to one of the two species (A. atropos and A. styx) of moth in the Acherontia genus. Found throughout the Middle East and the Mediterranean region, this moth is easily distinguishable by a skull-shaped pattern on its back.
The skull pattern has helped the moth earn a negative reputation, such as associations with the supernatural and evil, and was featured in the movies Silence of the Lambs and Un Chien Andalou. Numerous superstitions also claim that the moth brings bad luck to the house it flies into.
The moth also has numerous other unique features, such as an ability to emit a loud squeak if irritated, and is commonly observed raiding beehives for honey. Reports conflict on whether the moth is able to enter a hive and feed undisturbed, or whether it meets resistance, fights using its wings, and is usually defeated.
A. atropos is also very large, with a wingspan of 90-130 mm, making it the largest moth in some of the regions where it is found.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
C. elegans is probably the most versatile nematode known to molecular and developmental biotechnologists. It has been in use in laboratories since 1974 and was the first multicellular organism to have its entire genome sequenced. As one of the simplest organisms with a nervous system, it is a favorite research specimen of neurobiologists world-wide.
On May 6th, Science Express published an article by American and Israeli scientists that once again highlighted the versatility of C. elegans in neurobiology research. According to the press release in EurekAlert! "a breakthrough about the formation and maintenance of tree-like nerve cell structures could have future applications in the treatment of neurodegenerative diseases and the repair of injuries in which neurons are damaged." The researchers showed that two neurons (called PVDs) are required for reception of strong mechanical stimuli in the nematode; these neurons also elaborate neuronal trees comprising structural units called 'menorahs' (so named because they look like multi-branched candelabra). They then identified the gene EFF-1 as being responsible for pruning excess or abnormal branches, thereby serving as part of a quality-control process that is important for the sculpting and maintenance of complicated menorahs.
If the results can be “humanized” it may allow for future repair of brains and spinal cord injury as well as other applications in the treatment of neurodegenerative diseases.
Oren-Suissa M, Hall DH, Treinin M, Shemer G, & Podbilewicz B (2010). The Fusogen EFF-1 Controls Sculpting of Mechanosensory Dendrites. Science (New York, N.Y.) PMID: 20448153
See the video below which I like to call: An Ode to the Nematode
What's the weather like today? It's a question we often ask, because the weather can vary so much—and because the nature of the weather is so fundamental to our lives. But despite the variability in the weather, we assume a constancy in the climate. Weather is not the same as climate. Climate is the long-term average of weather in a particular place—taking into account all the small fluctuations and the seasonal changes. See Weather and climate—what's the difference?.
Because climate is only meaningful over a long timeframe and across all the seasons, changes to it are much harder to detect. This is clear if we pick one particular day of the year at random—for example, Christmas Day, and examine the weather on that day over 20 years. Obviously there will be considerable variation between the years. We cannot predict the weather on that day a year ahead, because the factors determining daily weather change rapidly. But we can appreciate what it means to talk about the climate on that day. In Australia we know that it will be summer then, and therefore the weather will operate within certain 'boundaries' expected for summer. A mid-winter's day brings different expectations. The same holds true in monsoonal tropical areas with wet and dry seasons.
Climate change means that the boundaries in the weather that we expect in a particular location and season are changing. But such changes are slow and subtle, and because climate can only be measured over decades it is often not possible to give an instant answer to the question of whether and when a climate has changed. It is rather like asking for the precise moment when day becomes night as the sun sets.
There are many reasons why climate can change, and this section deals with the main ones. Most of the reasons are natural, and we know that in the long geological past the earth's climate has altered many times. Causes range from subtle shifts in the earth's orbit and the angle of its axis of rotation, to changes in the output of heat from the sun. Regional climate can also change slowly for quite natural reasons, such as alterations to ocean currents or the incredibly slow drift of continents. More rapid, temporary climate change can be caused by strong volcanic eruptions leaving fine dust high in the atmosphere that weakens the sunlight for months at a time.
The most important determinant of any planet's temperature is its distance from the sun. This cannot be changed, other than by slow changing of an orbit over millennia or by catastrophic impact. The second most important factor is the composition and volume of the planet's atmosphere. Some gases act rather like a blanket, keeping heat from leaving the surface. Others are quite transparent to departing heat. We know from studying the atmospheres of Mars and Venus, and the respective conditions and temperatures there, how important atmospheric composition can be in affecting temperature.
Subtle alteration to the composition of our own atmosphere has recently occurred, and the main cause is the burning of carbon-containing fuels (coal, oil and methane gas). The carbon dioxide gas released by the combustion of these substances is a heat-trapping gas (known as a greenhouse gas). An increase in its concentration in our atmosphere will change conditions on the planet—in broad terms the earth will retain more heat. This is considered to be the main reason why climate is changing around the world.
This diagram shows the presumed distance of the Oort cloud compared to the rest of the solar system
The Oort cloud (sometimes called the Öpik-Oort cloud) is a postulated spherical cloud of comets situated about 50,000 to 100,000 AU from the Sun. This is approximately 1000 times the distance from the Sun to Pluto, or roughly one light year, almost a quarter of the distance from the Sun to Proxima Centauri, the star nearest the Sun.
The Oort cloud would have its inner disk at the ecliptic, extending outward from the Kuiper belt. Although no direct observations have been made of such a cloud, it is believed to be the source of most or all comets entering the inner solar system (some short-period comets may come from the Kuiper belt), based on observations of the orbits of comets.
In 1932 Ernst Öpik, an Estonian astronomer, proposed that comets originate in an orbiting cloud situated at the outermost edge of the solar system. In 1950 the idea was revived by the Dutch astronomer Jan Hendrik Oort to explain an apparent contradiction: comets are destroyed by several passes through the inner solar system, yet if the comets we observe had existed since the origin of the solar system, all would have been destroyed by now. According to the theory, the Oort cloud contains millions of comet nuclei, which are stable because the sun's radiation is very weak at their distance. The cloud provides a continual supply of new comets, replacing those that are destroyed.
The Oort cloud is a remnant of the original nebula that collapsed to form the Sun and planets five billion years ago, and is loosely bound to the solar system. The most widely-accepted theory of its formation is that the Oort cloud's objects initially formed much closer to the Sun as part of the same process that formed the planets and asteroids, but that gravitational interaction with young gas giants such as Jupiter ejected them into extremely long elliptical or parabolic orbits. This process also served to scatter the objects out of the ecliptic plane, explaining the cloud's spherical distribution. While on the distant outer regions of these orbits, gravitational interaction with nearby stars further modified their orbits to make them more circular.
It is thought that other stars are likely to possess Oort clouds of their own, and that the outer edges of two nearby stars' Oort clouds may sometimes overlap, causing the occasional intrusion of a comet into the inner solar system. The star with the greatest possibility of perturbing the Oort cloud in the next 10 million years is Gliese 710.
So far, only one potential Oort cloud object has been discovered: 90377 Sedna. With an orbit that ranges from roughly 76 to 840 AU, it is much closer than originally expected and may belong to an "inner" Oort cloud. If Sedna indeed belongs to the Oort cloud, this may mean that the Oort cloud is both denser and closer to the Sun than previously thought. This has been proposed as possible evidence that the Sun initially formed as part of a dense cluster of stars; with closer neighbors during Oort cloud formation, objects ejected by gas giants would have their orbits circularized closer to the Sun than was predicted for situations with more distant neighbors.
In order to simulate true concurrency on a single-processor system, our interpreter must allow for switching between tasks. In other words, we execute a single task for a (short) period of time, then switch between tasks randomly. In this way, our concurrent tasks can assume no particular order of execution, just as tasks run on separate processors run independently. The two features the interpreter must have in order to handle this task switching are knowledge of the state of computation of each task and a task-switching mechanism.
The state of computation of each task is defined by knowledge of all variable bindings and the current evaluation state. A symbol (atom) in LISP has four associated properties:
- A (possibly empty) stack of bindings.
- A value.
- A function definition.
- A property list.
The last three properties are known globally so they have the same values across concurrent tasks. The stack of bindings represents locally assigned values. Such values disappear when the defining environment is left. Bindings are unique to the defining function so they are not carried across tasks. Our concurrent interpreter must remember the correct bindings for each task independently.
The current evaluation state consists of such information as the execution address, register contents, etc., and is essentially the information required whenever a subprogram call is made in any language. This information is unique to each task and must be retained as we switch between tasks so that a task may be resumed precisely where it was suspended.
The second feature required is the ability to randomly switch between the tasks. In other words, at the end of a given quantum of processor time, we wish to suspend the task currently executing and to begin execution of a randomly selected task. This process continues until all tasks have finished execution. Note that if we did not require this randomly interleaved capability, concurrent programming would reduce to sequential evaluation of a list of forms, which may be trivially implemented in LISP.
Three schemes present themselves for providing task switching. The first would be to depend upon an external interrupt, presumably from a hardware clock, to cause the interpreter to switch. The difficulty is that we do not wish to interrupt LISP primitive operations, or we would quickly corrupt the system. In addition, writing an interrupt routine that would work gracefully with an existing LISP interpreter is a large undertaking.
A second scheme is to implement a complete LISP interpreter extended to include the concurrent capability. This type of interpreter might even be written in LISP (see Programming in Common LISP for such an interpreter but without concurrency). This interpreter could monitor the number of calls to itself and gracefully switch tasks as desired. But the interpreter requires a large amount of system memory, which is already tight.
The third scheme, adopted here, is to define a new eval procedure on top of the system's eval. In this way, all function evaluation must pass through our eval routine, which can count the number of calls to itself and switch tasks at appropriate intervals. However, some evaluations are not switchable (those that handle the actual task switching, for example), so we must allow for the ability to turn off switching as desired.
Concurrent Interpreter in LISP
We need to represent each concurrent task by an object that allows us to retain complete knowledge of the evaluation state of the task together with a list of applicable bindings. We also must be able to pick up this object, evaluate it for a period of time, suspend its evaluation so that we may evaluate another task, and later resume the original task.
Common LISP has exactly the object required: a stack-group. Stack-groups are functional objects with the attributes of a task. (Stack-groups are not a feature of Common LISP -- they are copied from Zeta LISP.) Stack-groups contain exactly the information needed to implement a concurrent interpreter. A single stack-group is the LISP equivalent of a single task.
It is possible to initiate a stack-group (remember they have the attributes of a task), suspend a stack-group, and then resume it. Thus our switching algorithm consists of:
- Evaluate a stack-group until it is time to switch to a new task.
- Put the present stack-group into suspended status.
- Choose a new stack-group (task).
- Execute this new stack-group.
Implementation of the interpreter requires three major new routines:
- An initialization function that creates the stack-groups and begins concurrent execution.
- An evaluation function to ride on top of the system's eval, which will handle both form evaluation and groups initiation of task switching if appropriate;
- A function to choose and begin execution of the next task.
In CLI, these three functions are exactly cobegin, cli_eval, and switch-around, respectively. They are described in Figure 1.
Figure 1: Routines cobegin, cli_eval, and switch-around are the heart of the interpreter.
COBEGIN
  Input: the forms to be evaluated concurrently (the tasks)
  Output: a list of the values of these forms
  - initialize the pseudo clock used to switch between tasks
  - create a stack-group for each concurrent task
  - initiate concurrent execution
  - create a list of the values of the tasks

CLI_EVAL
  Input: a form to be evaluated
  Output: the value of the form
  - increment the pseudo clock
  - if switching is enabled and we have reached the end of a time slice
    and we are in concurrent mode, then
      - suspend the current task and enable switching
    else
      - evaluate the form
  - return the value

SWITCH-AROUND
  Input: none
  Output: none
  - if all tasks are complete, then
      - return
    else
      - randomly choose a task to execute
      - if this task has completed, then
          - eliminate it from the list of tasks
          - try again
        else
          - initiate task execution
Let's discuss each routine briefly. Keep in mind these key facts:
- A stack-group is initiated and/or resumed by calling it as a function.
- A stack-group is suspended by executing the stack-group-return function.
- When a stack-group is suspended, control is returned to the point at which the stack-group was resumed (always in function switch-around).
- Evaluation of every form, whether directly named or subsidiary, always passes through cli_eval.
Cobegin is the only function of the three directly called by the user. The main purpose of this function is to create a stack-group for each input form (the tasks to be executed concurrently) and to begin concurrent execution by calling switch-around. Each stack-group is initialized to call upon cli_eval to evaluate its form.
In order to implement cli_eval, we use the Common LISP function evalhook. Evalhook takes as arguments the form to evaluate and the hook function to use for evaluating its subsidiary forms (cli_eval). Attempting to evaluate the input form will typically cause evaluation of a number of subsidiary forms, and for each of these cli_eval will be used. However, the standard eval is used for the final evaluation of the form itself. It is this bypassing of cli_eval that enables us to use the standard eval for all function evaluations, because each subsidiary form will eventually be evaluated directly by a call to cli_eval. The primary purposes of cli_eval are to initiate switching between tasks if necessary, evaluate the form, and return a value.
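A minimal sketch of what cli_eval might look like, assuming the CLtL-era evalhook and Zeta-LISP-style stack-group primitives; the special variables *clock*, *time-slice*, and *switching-enabled* are illustrative names, not taken from the article:

(defun cli_eval (form &optional env)
  (incf *clock*)
  (when (and *switching-enabled*
             (zerop (mod *clock* *time-slice*)))
    (stack-group-return nil))          ; suspend this task; execution
                                       ; resumes here when it is rechosen
  (evalhook form #'cli_eval nil env))  ; standard eval for FORM itself,
                                       ; cli_eval for its subsidiary forms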
Switch-around handles choosing the next task to execute and then resuming the execution of this task by calling the task's stack-group as a function (using funcall). Note that this means that when a stack-group is suspended, control returns to the form following this funcall -- which is a recursive call to switch-around to choose a new task. Switch-around also handles deleting completed tasks (their stack-groups will be exhausted).
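Along the same lines, a hedged sketch of switch-around, where *tasks* and the predicate sg-exhausted-p are assumed names for the task list and the test for a finished stack-group:

(defun switch-around ()
  (when *tasks*
    (let ((task (nth (random (length *tasks*)) *tasks*)))
      (cond ((sg-exhausted-p task)
             (setf *tasks* (remove task *tasks*))   ; task finished:
             (switch-around))                       ; drop it and try again
            (t
             (funcall task)           ; resume the task until it suspends
             (switch-around))))))     ; then choose the next one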
These three functions, consisting of about 70 lines of LISP code including subsidiary routines, implement concurrency in Common LISP. This ability is a real testimonial to both the power and elegance of LISP and the importance of including such powerful primitives as stack-groups.
cowrie or cowry (both: kouˈrē), common name applied to marine gastropods belonging to the family Cypraeidae, a well-developed family of marine snails found in the tropics. Cowries are abundant in the Indian Ocean, particularly in the East Indies and the Maldive Islands. Species of cowries inhabit the waters around S California and the warm waters southward from the SE United States. They characteristically have massive, smooth, shiny shells with striking patterns and colors. The upper surface is round and the lower flat. When alive, the cowrie's shell is usually concealed by its large mantle; as the cowrie creeps along the ocean bottom, the mantle envelops the shell. As the body grows, the inner whorls of the shell are dissolved, and the dissolved lime is then used to enlarge the outer whorl of the shell. Some shells have been used for money, e.g., those of the money cowrie, Cypraea moneta. The shells of various species are used also for personal adornment and in some primitive cultures indicate the rank of the wearer. The smooth brown cowrie, Cypraea spadicea, inhabits the protected outer coast and mud flats in S California, often as far north as Newport, Calif. The most prized cowrie for a shell collector is the tiger cowrie, Cypraea tigris, which grows to 4 in. (10 cm) in length and whose shell is considered by some to be the most lustrous shell of the South Pacific. Having the appearance of a tiger skin, it is white with many brown spots. Cowries are classified in the phylum Mollusca, class Gastropoda, order Mesogastropoda, family Cypraeidae.
fer-de-lance (fĕrˌ-də-lănsˈ), highly poisonous snake, Bothrops atrox, found in tropical South America and the West Indies. A pit viper, related to the bushmaster and the rattlesnake, it has heat-sensitive organs on the head for detecting its warm-blooded prey. Usually about 5 to 6 ft (150–180 cm) long, the fer-de-lance may reach a maximum length of about 9 ft (3 m). It is gray or brown with light stripes and dark diamond markings and has a yellow throat. Common throughout most of its range, it causes many human fatalities. It is classified in the phylum Chordata, subphylum Vertebrata, class Reptilia, order Squamata, family Crotalidae.
Methane is formed in the absence of oxygen by natural biological and physical processes, e.g. in the sea floor. It is a much more powerful greenhouse gas than carbon dioxide.
Thanks to the activity of microorganisms this gas is inactivated before it reaches the atmosphere and unfolds its harmful effects on Earth's climate. Researchers from Bremen have proven that these microorganisms are quite picky about their diet. Now they have published their results in the Proceedings of the National Academy of Sciences (PNAS).
Carbon can be the basic structural element...
All life on Earth is based on carbon and its compounds. The cell components of all creatures contain carbon. A cell can take up this basic structural element via organic matter, or it can build up its own organic matter from scratch, i.e. from carbon dioxide. Researchers term the first kind of cells heterotrophs and the latter autotrophs. All plants, many bacteria and archaea are autotrophs, whereas all animals, including humans, are heterotrophs. The autotrophs form the basis for the life of the heterotrophs and all higher life by taking up inorganic carbon to form organic material.
…and can be the energy source
To keep their cellular systems running, all cells need fuel. Methane can be such a fuel. When the methane-consuming microbes discovered by Bremen scientists more than ten years ago were first studied, it was assumed that they use methane both to fill their energy tanks and as a carbon source, i.e., they were thought to be heterotrophs.
Now scientists from MARUM and the Max Planck Institute for Marine Microbiology show in their PNAS research paper that this is surprisingly not the case: the methane-derived carbon is not used as a carbon source. "Our growth studies clearly show that the labeled carbon in the methane never showed up directly in the cell material, but experiments with labeled carbon from carbon dioxide did. It was quite surprising," said PNAS author Matthias Kellermann. The archaea in the consortia behave as expected for chemoautotrophs.
"Archaea and the sulfate-reducing bacteria live close together in consortia, which grow extremely slowly. And only in the newly synthesized cell material could we find the answer to the question of where the carbon originates," adds Kai-Uwe Hinrichs, leader of the organic geochemistry group at MARUM.
Co-author Gunter Wegener from the Max Planck Institute concludes: "With our new knowledge we can optimize our studies of the inactivation of methane in nature. Our surprising results tell us that we still know few of the details of this globally important process."
Samples were retrieved from the Guaymas Basin, off the west coast of Mexico, from a depth of more than 2,000 meters, using the US diving submersible Alvin.
Further informations/ photo material/Interviews:
Dr. Manfred Schloesser, +49 421 2028704, mschloes@mpi-bremen.de
Dr. Rita Dunker, +49 421 2028856, rdunker@mpi-bremen.de
Albert Gerdes, +49 421 21865540, agerdes@marum.de
Max Planck Institute for Marine Microbiology, Bremen
MARUM – Center for Marine environmental Research at the University of Bremen
Autotrophy as a predominant mode of carbon fixation in anaerobic methane-oxidizing microbial communities
Matthias Y. Kellermann, Gunter Wegener, Marcus Elvert, Marcos Yukio Yoshinaga, Yu-Shih Lin, Thomas Holler, Xavier Prieto Mollar, Katrin Knittel, and Kai-Uwe Hinrichs
More articles from Life Sciences:
Spheres can form squares
24.05.2013 | Wageningen University
Ferrets, pigs susceptible to H7N9 avian influenza virus
24.05.2013 | NIH/National Institute of Allergy and Infectious Diseases
This morning at 05:45 CEST, the earth trembled beneath the Okhotsk Sea in the Pacific Northwest. The quake, with a magnitude of 8.2, took place at an exceptional depth of 605 kilometers.
Because of the great depth of the earthquake a tsunami is not expected and there should also be no major damage due to shaking.
Professor Frederik Tilmann of the GFZ German Research Centre for Geosciences: "The epicenter is exceptionally deep, far below the earth's crust in the mantle. Such strong ...
The Ring Nebula's distinctive shape makes it a popular illustration for astronomy books. But new observations by NASA's Hubble Space Telescope of the glowing gas shroud around an old, dying, sun-like star reveal a new twist.
"The nebula is not like a bagel, but rather, it's like a jelly doughnut, because it's filled with material in the middle," said C. Robert O'Dell of Vanderbilt University in Nashville, Tenn.
He leads a research team that used Hubble and several ground-based telescopes to obtain the best view yet of ...
New indicator molecules visualise the activation of auto-aggressive T cells in the body as never before
Biological processes are generally based on events at the molecular and cellular level. To understand what happens in the course of infections, diseases or normal bodily functions, scientists would need to examine individual cells and their activity directly in the tissue.
The development of new microscopes and fluorescent dyes in ...
A fried breakfast food popular in Spain provided the inspiration for the development of doughnut-shaped droplets that may provide scientists with a new approach for studying fundamental issues in physics, mathematics and materials.
The doughnut-shaped droplets, a shape known as toroidal, are formed from two dissimilar liquids using a simple rotating stage and an injection needle. About a millimeter in overall size, the droplets are produced individually, their shapes maintained by a surrounding springy material made of polymers.
Droplets in this toroidal shape made ...
Fraunhofer FEP will present a novel roll-to-roll manufacturing process for high-barrier and functional films for flexible displays at SID Display Week 2013 in Vancouver – the international showcase for the display industry.
Displays that are flexible and paper-thin at the same time? What might still seem like science fiction will be a major topic at SID Display Week 2013, currently taking place in Vancouver, Canada.
High manufacturing costs and a short lifetime are still a major obstacle on ...
Part 13: Multiple Stars and Star Clusters
John P. Pratt
- An optical double is merely two stars that happen to be nearly on the same line of sight.
- The two stars are not physically associated in any way.
- An example is Mizar (the middle star of the three in the Big Dipper's Handle) and Alcor.
- Mizar and Alcor were once used as an Arab eye test, which is strange because the pair is very easy to split now.
- Optical doubles are not important in astronomy, so no more will be said about them.
- The word binary is used for stars which are in orbit around each other.
- They represent the first discovery that gravity is at work outside of our solar system.
- They are more common than single stars--over 2/3 of stars are in binary or multiple systems.
- They provide the best way to determine the mass of stars, by using Kepler's Laws.
- They are discovered in many ways, which leads to many different classifications of binaries.
- Visual Binaries can be seen in a telescope to be two separate stars.
- Even large telescopes are limited by air quality (seeing) to about 1" of arc separation, the same as a telescope with only a one-foot diameter mirror.
- There are many beautiful visual binary stars, with stars of very different colors.
- The brighter component is labeled A, and the dimmer B, as in Sirius A.
- A nice double star in binoculars is Epsilon Lyrae, which is within one degree of Vega.
- Mizar (at the middle of the Big Dipper's handle) is a visual binary in small telescopes.
- One beautiful pair is the blue and yellow Albireo, the bottom star of the Northern Cross.
- Some binaries are too close to see visually but can be discovered by Doppler shifts in their spectra.
- That is, one or both of the stars can be seen coming toward us or moving away from us.
- To have such a fast orbital motion always means that they are too close to be a visual binary.
- Both of the visual components of Mizar are also spectroscopic doubles.
Algol, the head of the Medusa which Perseus holds, is the most famous eclipsing binary.
- Eclipsing Binaries pass in front of each other, which dims the light coming from them.
- From the light curve one can deduce their relative sizes and positions.
- Most are also spectroscopic binaries, so we can get a lot of information about them.
- The primary is a hot blue star with M = 5; the secondary is an orange giant with M = 1.
- It dims by a full magnitude in only 4 hours, every 2.9 days, when the giant star eclipses the hot blue primary.
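The size-and-brightness deduction from the light curve can be made concrete: the eclipse depth in magnitudes follows directly from the fraction of the system's total light that gets blocked. A minimal Python sketch; the 60% light share assumed below is illustrative, not a measured value for Algol:

```python
import math

def eclipse_depth(blocked_fraction):
    """Dimming in magnitudes when a given fraction of the
    system's total light is hidden during eclipse."""
    return -2.5 * math.log10(1.0 - blocked_fraction)

# If the hot blue star supplies roughly 60% of the system's light and is
# fully hidden behind the giant, the system dims by about 1 magnitude:
print(eclipse_depth(0.60))  # ~0.99 mag
```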
Evolution of Binaries
- A figure "8" called the Roche surface can be drawn around the two stars.
- Each half of the figure is called a Roche lobe.
- The point where the two lobes meet is the point of equal gravity between the stars.
- When a star expands as it evolves, it can fill its Roche lobe.
- When it does, matter streams through the equal gravity point onto the other star like sand
through an hourglass.
- Sometimes over half of the expanding star can transfer onto the other star.
- That is apparently what happened with Algol: the orange giant was originally the more massive star.
- If both stars fill their Roche lobes, it is called a contact binary.
- The periods of orbital revolution are usually less than 2 days.
- Their evolution can be complicated, and they can be very unusual stars.
- A nova is a star that flares up in much increased brightness.
- The explosion is not nearly so violent as in a supernova.
- It is believed that all novae are binary stars, in which one star has expanded to fill its Roche lobe, transferring hydrogen onto a white dwarf.
- That hydrogen is greatly compressed and can explode with nuclear reactions.
- If the matter falls on the white dwarf fast enough to exceed the Chandrasekhar limit, the
white dwarf could also explode as a supernova.
Binaries with two jet streams
- Some binaries have huge jets of gas shooting out of both sides, with velocities of 25% of the
speed of light.
- These give off X-rays and gamma rays.
- The star producing the two jets is orbiting around another star.
The search for Planets around other stars
- One weird binary is the star which is the goat that Auriga (the Charioteer) is holding (Epsilon Aurigae).
- The primary is a yellow-white supergiant, as large as the orbit of Mars.
- Every 27 years it is eclipsed by a huge disk of dust for 2 years, absorbing half its light.
- It now appears that at the center of the disk is a close binary that stirs up the dust dumped
onto it by the supergiant.
- Astronomers have hunted for years for planets around other stars.
- They have tried to find them mostly by looking for oscillations of a star around an invisible companion.
- In the last few years, it is believed that several have been found.
There are three basic kinds of clusters, based mostly on how tightly clustered the stars are.
- An association of stars is so loosely packed that it is not even held together gravitationally.
- An open cluster is a moderately close-knit, irregularly shaped group of 100-1,000 stars.
- A globular cluster contains about 100,000 stars and is distinctly spherical shaped.
- Associations are being ripped apart by galactic tidal forces, just as Saturn's rings are particles kept separated by Saturn's tidal forces.
- They often have an open cluster at their center, which is still gravitationally intact.
- There are two kinds of associations, formed of rather different types of stars.
- O associations are composed of O and B stars (huge blue stars, often in gas).
- T associations are composed of T Tauri stars (young red stars still surrounded by dust clouds)
- An example of an association is the head of Perseus; it is a beautiful field in binoculars.
- The four stars at the center of the Orion Nebula (the Trapezium) might be a tiny association.
- The nearest open cluster is the Ursa Major cluster, which includes all but the end stars of the Big Dipper, being about 70 l.y. away.
- Some have suggested that our sun may be part of the Ursa Major cluster.
- The Hyades and Pleiades (Seven Sisters) and Beehive are some of the
next closest open clusters.
- Open clusters tend to be found in the plane of our galaxy; hence they are sometimes called galactic clusters.
- Globular clusters are tightly gravitationally bound, being nearly spherical shaped.
- They typically contain many stars evolving off the lower main sequence to red giants.
- They are found spherically distributed around our galaxy, not in the plane.
- They look like little spherical fuzzy patches in small telescopes; an 8-inch telescope can resolve individual stars.
- One of the brightest examples is the Hercules Cluster (M13), on an edge of the trapezoid in Hercules.
- Many globular clusters are strong X-ray sources, perhaps from many neutron stars.
Measuring Distances to Clusters
- Parallax only works for the very nearest clusters, like the Hyades.
- Estimating the luminosity from the H-R diagram works, especially fitting the main sequence.
- Measuring the period of Cepheid variables gives luminosities for some globular clusters.
- Measuring the diameter of more distant globular clusters gives an estimate.
- One has to allow for the interstellar reddening from dust particles.
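Main-sequence fitting reduces to the distance modulus, with a correction for the dimming caused by interstellar dust. A minimal sketch; the magnitudes below are illustrative, not data for any particular cluster:

```python
def distance_pc(m_apparent, M_absolute, A_extinction=0.0):
    """Distance in parsecs from the distance modulus
    m - M = 5*log10(d / 10 pc) + A,
    where A is the dimming (in magnitudes) due to interstellar dust."""
    return 10 ** ((m_apparent - M_absolute - A_extinction + 5) / 5)

# A cluster star observed at m = 12.0 whose position on the H-R diagram
# implies M = 2.0, with 0.5 mag of extinction along the line of sight:
print(distance_pc(12.0, 2.0))       # 1000 pc if dust is ignored
print(distance_pc(12.0, 2.0, 0.5))  # ~794 pc once dust is allowed for
```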
- About half of open clusters have so many massive blue stars that they appear to be young.
- Globular clusters often don't have any bright blue stars and are thought to be very old.
- All clusters in the plane of our galaxy are being disrupted; almost no globulars are known there. | <urn:uuid:55cb0d25-03bc-4b27-b2d9-bd8791574c03> | 3.703125 | 1,733 | Structured Data | Science & Tech. | 55.346886 |
Newton came up with one more law when he started thinking about the interaction of objects.
- He had already talked about what happens when there is no force (1st Law).
- He then talked about what happens when there is a force (2nd Law).
- But what happens when you have objects interacting, affecting each other?
The 3rd Law (The Law of Action-Reaction)
“For every action force there is an equal and opposite reaction force.”
Anytime an object applies a force to another object, there is an equal and opposite force back on the original object.
- If you push on a wall you feel a force against your hand… the wall is pushing back on you with as much force as you apply to it.
- If this wasn't happening, your hand would accelerate through the wall!
Standing on the ground, you push on the ground with a force due to gravity (Fg down) and the ground pushes back on you (FN up).
- FN is the normal force.
- It balances out the force due to gravity down.
- The normal force is always perpendicular to the surface the object is on.
- Without it there would be a net force down on you and you would accelerate down.
- This is an example of an action-reaction pair, two forces that are equal but opposite to each other.
There is one ultra important thing to remember when you are looking at action-reaction pairs.
- The two forces that you are looking at are each acting on different objects!
- If you are examining what you think are action-reaction forces, and they are both acting on the same object, then they are not an action-reaction pair.
- In the example above, Fg is the person pushing down on the ground, while FN is the ground pushing up on the person.
Here are some examples of action-reaction forces that depend on the objects being in direct contact, meaning that the two objects involved are actually touching each other to exert forces on each other. These are called "contact forces."
- Action: the tires on a car push on the road…
Reaction: the road pushes on the tires.
- Action: while swimming, you push the water backwards...
Reaction: the water pushes you forward.
Action-reaction pairs can also happen without friction, or even with the objects not touching each other, known as "action at a distance" forces …
- Action: a rocket pushes out exhaust…
Reaction: the exhaust pushes the rocket forward.
One of the original arguments that flight in the vacuum of space was impossible was that there would be nothing to push against. This action-reaction explains how a rocket can fly in space where there is no air to push against.
- Action: the earth pulls down on a ball…
Reaction: ball pulls up on the earth.
How can this second example be true?!?
- There is an action-reaction pair of forces given by EFb = bFE (the force of the Earth on the ball equals the force of the ball on the Earth).
- We know that the ball will accelerate towards the earth at 9.81 m/s2, but does the earth accelerate towards the ball at the same rate?
- If this is true you would expect the earth to be constantly bouncing up towards falling objects.
- Carefully remember Newton’s 2nd Law (F = ma).
- In this example the forces are equal, but the mass of the earth is considerably more than the ball!
- The earth has more inertia than the ball.
- Let’s assume the ball has a mass of 2.00 kg and do some calculations…
The Force of the Earth on the Ball
EFb = ma = mg
= (2.00 kg)(9.81 m/s2)
EFb = 19.6 N
This is the force of the Earth acting on the ball, but because of Newton’s 3rd Law, it is also the force of the ball on the Earth.
EFb = bFE = 19.6 N
The Acceleration of the Earth because of the Ball
bFE = ma
a = bFE / m = 19.6 N / 5.98e24 kg
a = 3.28e-24 m/s2
This is such a small acceleration of the Earth towards the ball that it can’t even be measured. We can see that although the forces are equal, the accelerations do not have to be!
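The same arithmetic in a few lines of Python, using the round numbers from the example above:

```python
m_ball = 2.00      # kg
m_earth = 5.98e24  # kg
g = 9.81           # m/s2

force = m_ball * g          # Earth on ball = ball on Earth (3rd Law)
a_ball = force / m_ball     # 9.81 m/s2
a_earth = force / m_earth   # ~3.28e-24 m/s2, far too small to measure

print(force, a_ball, a_earth)  # 19.62 N (the text rounds to 19.6 N)
```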
Sir Isaac Newton hated his own theories about gravity being an "action at a distance" force. He believed so strongly that there must be some material that connects objects that have a gravitational pull on each other, that he was one of the first scientists to seriously suggest there was a mysterious substance called the aether (sometimes spelled ether) that connected all objects in the universe.
Example 1: When a rifle fires a bullet, the force the rifle exerts on the bullet is exactly the same (but in the opposite direction) as the force the bullet exerts on the rifle… so the rifle “kicks back”. The bullet has a mass of 15 g and the rifle is 6.0 kg. The bullet leaves the 75 cm long rifle barrel moving at 70 m/s.
a) Determine the acceleration of the bullet.
Calculate this using a kinematics formula like vf2 = vi2 + 2ad and you should get 3.3e3 m/s2.
b) Determine the force on the bullet.
F = ma will let you calculate the answer 49 N.
c) Determine the acceleration of the rifle.
Again, use F = ma, but make sure that you use the correct mass. You should get 8.2 m/s2.
d) Explain why the bullet accelerates more than the rifle if the forces are the same.
Although they have the same amount of force acting on them, they each have a different mass (and therefore a different inertia), so their accelerations differ.
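All four parts can be checked with a short script that follows the same steps:

```python
m_bullet = 0.015  # kg (15 g)
m_rifle = 6.0     # kg
d = 0.75          # m, barrel length
vf = 70.0         # m/s muzzle speed, starting from rest (vi = 0)

a_bullet = vf**2 / (2 * d)   # from vf2 = vi2 + 2ad -> ~3.3e3 m/s2
force = m_bullet * a_bullet  # ~49 N on the bullet, and on the rifle (3rd Law)
a_rifle = force / m_rifle    # ~8.2 m/s2

print(a_bullet, force, a_rifle)
```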
Example 2: If I push on a lawn mower, it pushes back on me with an equal, but opposite force. Explain why we don’t both just stay still.
- The answer is that these forces are acting on different bodies (and there are other forces to consider).
- It doesn’t matter to the lawn mower that there is a force on me… all that matters to the lawn mower is that there is a force on it, so it starts to move!
- Another action-reaction pair you need to consider is that I am pushing backwards on the ground, and it pushes forwards on me. | <urn:uuid:bda87df7-9cb9-4e64-a08a-7d65d6fa2715> | 4.125 | 1,385 | Tutorial | Science & Tech. | 70.901482 |
The nearby K dwarf HD 98800 is remarkable for its unusual infrared excess. The radiation between 10 and 100 microns amounts to more than 10 percent of the luminosity of the star, indicating a large disc of cold dust surrounding the star. A visual companion was discovered by Innes in 1909, and has moved about 1.5 arcsec north over the past 85 years, crossing the primary almost exactly. If physically connected, the orbit of the companion imposes some interesting geometrical constraints on the location and size of the disc.
Based on 174 spectra obtained with the CfA Digital Speedometers, we find that the visual primary is a single-lined spectroscopic binary with period 265 days, amplitude 6.5 km/s, and eccentricity 0.49, implying a low-mass stellar companion. The visual companion is a double-lined spectroscopic binary with period 314 days, eccentricity 0.7, and mass ratio 0.9. Assuming that the two spectroscopic binaries are physically bound, we can derive an orbit by combining the difference in the center-of-mass velocities with the visual positions. The period comes out to about 800 years, with a semi-major axis of 130 AU and an eccentricity of about 0.7. This orbit is relatively insensitive to the assumed distance. At closest approach the two binaries are separated by only 30 AU, while at apastron they are almost 200 AU apart. Does this leave room for a dust disc? On the other hand, there is not yet any proof that the two binaries are physically connected. In that case there should be plenty of room for a dust disc around either of the spectroscopic binaries.
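As a plausibility check on the quoted orbit, Kepler's third law (semi-major axis in AU, period in years) gives the total system mass in solar masses. This is not a calculation from the abstract, just the standard relation applied to its numbers:

```python
a = 130.0  # AU, semi-major axis of the wide orbit
P = 800.0  # years, orbital period

total_mass = a**3 / P**2  # solar masses, from Kepler's third law
print(total_mass)         # ~3.4, plausible for four low-mass stars
```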
Animal Species: Common Cleanerfish, Labroides dimidiatus (Valenciennes, 1839)
The Common Cleanerfish is blue to yellow above fading to white or yellow below. As its standard name suggests, the species cleans larger fishes.
Blue Streak, Blue-streak Cleaner Wrasse, Bridled Beauty, Cleaner Wrasse, Common Cleaner Wrasse, Gadfly Fish, Janitor Fish (Philippines)
The Common Cleanerfish is blue to yellow above fading to white or yellow below. There is a black stripe from the eye to the caudal fin margin. The stripe widens posteriorly.
It has thick lips and a pair of canines at the front of both jaws.
The species grows to 11.5 cm in length.
The Common Cleanerfish occurs in tropical (and some temperate) marine waters of the Indo-Pacific. In Australia it is recorded from southern to north-western Western Australia and from northern Queensland to southern New South Wales.
A map of the Australian distribution of the species, based on public sightings and specimens in Australian museums, is available from the Atlas of Living Australia.
The Common Cleanerfish occurs on rocky and coral reefs.
Feeding and Diet
The Common Cleanerfish is well known for its feeding behaviour. It establishes a "cleaning station" often a cave or overhang, where it swims in a bobbing, dance-like motion. Larger fishes come to the cleaning station to have ectoparasites removed. The Common Cleanerfish swims around the fish picking off and eating the parasites. It often enters the mouth and gill chamber of large fishes.
- Grutter, A.S., Deveney, M.R., Whittington, I.D. & R.J.G. Lester. 2002. The effect of the cleaner fish Labroides dimidiatus on the capsalid monogenean Benedenia lolo, a parasite of the labrid fish Hemigymnus melapterus. Journal of Fish Biology. 61: 1098-1108.
- Hutchins, B. & R. Swainston. 1986. Sea Fishes of Southern Australia. Complete Field Guide for Anglers and Divers. Swainston Publishing. Pp. 180.
- Kuiter, R.H. 1996. Guide to Sea Fishes of Australia. New Holland. Pp. 433.
- Kuiter, R.H. 2000. Coastal Fishes of South-eastern Australia. Gary Allen. Pp. 437.
- Randall, J.E., Allen, G.R. & R.C. Steene. 1997. Fishes of the Great Barrier Reef and Coral Sea. Crawford House Press. Pp. 557.
Mark McGrouther, Collection Manager, Ichthyology
In a Swiss laboratory, a group of ten robots is competing for food. Prowling around a small arena, the machines are part of an innovative study looking at the evolution of communication, from engineers Sara Mitri and Dario Floreano and evolutionary biologist Laurent Keller.
They programmed robots with the task of finding a “food source” indicated by a light-coloured ring at one end of the arena, which they could “see” at close range with downward-facing sensors. The other end of the arena, labelled with a darker ring was “poisoned”. The bots get points based on how much time they spend near food or poison, which indicates how successful they are at their artificial lives.
They can also talk to one another. Each can produce a blue light that others can detect with cameras and that can give away the position of the food because of the flashing robots congregating nearby. In short, the blue light carries information, and after a few generations, the robots quickly evolved the ability to conceal that information and deceive one another.
Their evolution was made possible because each one was powered by an artificial neural network controlled by a binary “genome”. The network consisted of 11 neurons that were connected to the robot’s sensors and 3 that controlled its two tracks and its blue light. The neurons were linked via 33 connections – synapses – and the strength of each connection was controlled by a single 8-bit gene. In total, each robot’s 264-bit genome determines how it reacts to information gleaned from its senses.
In the experiment, each round consisted of 100 groups of 10 robots, each competing for food in a separate arena. The 200 robots with the highest scores – the fittest of the population – “survived” to the next round. Their 33 genes were randomly mutated (with a 1 in 100 chance that any bit with change) and the robots were “mated” with each other to shuffle their genomes. The result was a new generation of robots, whose behaviour was inherited from the most successful representatives of the previous cohort. | <urn:uuid:969a48fc-5099-422e-bdbc-cf3876a463d7> | 3.890625 | 438 | Nonfiction Writing | Science & Tech. | 44.050489 |
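The procedure is a textbook genetic algorithm. Below is a minimal sketch of one generation as described; the fitness function is a stand-in (the real score was time spent near food versus poison), and one-point crossover is an assumed detail the article does not specify:

```python
import random

GENOME_BITS = 264        # 33 synapse-strength genes x 8 bits each
POP_SIZE = 1000          # 100 arenas x 10 robots
SURVIVORS = 200          # the fittest robots that get to reproduce
MUTATION_RATE = 1 / 100  # per-bit chance of flipping

def fitness(genome):
    # Stand-in: the experiment scored each robot on time spent
    # near the food ring minus time spent near the poison ring.
    return sum(genome)

def next_generation(population):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:SURVIVORS]
    children = []
    while len(children) < POP_SIZE:
        a, b = random.sample(parents, 2)     # "mate" two survivors
        cut = random.randrange(GENOME_BITS)  # one-point crossover (assumed)
        child = a[:cut] + b[cut:]
        # Mutate: each bit flips with probability 1 in 100
        child = [bit ^ (random.random() < MUTATION_RATE) for bit in child]
        children.append(child)
    return children

population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
              for _ in range(POP_SIZE)]
population = next_generation(population)
```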
May 2, 2012
Animals and plants are distributed around the world naturally. Their individual adaptations make certain habitats ideal and others not. Throughout history, humans have moved plants and animals to new areas--sometimes successfully and at other times producing disastrous results. In this Science Bite, Museum scientist John Demboski talks about introduced species and how they have become a part of our daily lives.