text large_stringlengths 148 17k | id large_stringlengths 47 47 | score float64 2.69 5.31 | tokens int64 36 7.79k | format large_stringclasses 13 values | topic large_stringclasses 2 values | fr_ease float64 20 157 |
|---|---|---|---|---|---|---|
Seymouriamorpha were a small but widespread group of limbed vertebrates (tetrapods). They have long been considered reptiliomorphs, and most paleontologists still accept this view, but some analyses suggest that seymouriamorphs are stem-tetrapods (not more closely related to Amniota than to Lissamphibia). Many seymouriamorphs were terrestrial or semi-aquatic. However, aquatic larvae bearing external gills and grooves from the lateral line system have been found, making them unquestionably amphibians. The adults were terrestrial. These reptile-like animals ranged from lizard-sized creatures about 30 centimeters long to crocodile-sized forms up to 150 centimeters long. If seymouriamorphs are reptiliomorphs, they were distant relatives of amniotes. Seymouriamorphs comprise three main groups: Kotlassiidae, Discosauriscidae, and Seymouriidae, the group that includes the best-known genus, Seymouria. The last seymouriamorphs became extinct by the end of the Permian.
Cladogram based on Ruta, Jeffery, & Coates (2003):
Cladogram based on Klembara (2009) & Klembara (2010):
- ^ Laurin, Michel (2010). How Vertebrates Left the Water. Berkeley: University of California Press. ISBN 978-0-520-26647-6.
- ^ Bulanov, V. V. (2003). "Evolution and systematics of seymouriamorph parareptiles". Paleontological Journal 37 (Supplement 1): 1–105.
- ^ Olson, E. C. (1951). "Fauna of Upper Vale and Choza: 1–5". Fieldiana: Geology 10 (11): 89–128.
- ^ Ruta, M.; Jeffery, J. E.; and Coates, M. I. (2003). "A supertree of early tetrapods". Proceedings of the Royal Society B 270 (1532): 2507–16. doi:10.1098/rspb.2003.2524. PMC 1691537. PMID 14667343.
- ^ Klembara, Jozef (2009). "The skeletal anatomy and relationships of a new discosauriscid seymouriamorph from the lower Permian of Moravia (Czech Republic)". Annals of Carnegie Museum 77 (4): 451–483. doi:10.2992/0097-4463-77.4.451.
- ^ Klembara, Jozef (2011). "The cranial anatomy, ontogeny, and relationships of Karpinskiosaurus secundus (Amalitzky) (Seymouriamorpha, Karpinskiosauridae) from the Upper Permian of European Russia". Zoological Journal of the Linnean Society 161 (1): 184–212. doi:10.1111/j.1096-3642.2009.00629.x. | <urn:uuid:07bc2709-590c-4dba-ad11-cc7bfc43815f> | 3.71875 | 660 | Knowledge Article | Science & Tech. | 57.26456 |
Using the interactivity, can you make a regular hexagon from yellow triangles the same size as a regular hexagon made from green triangles?
If the yellow equilateral triangle is taken as the unit for area, what size is the hole?
Which of these roads will satisfy a Munchkin builder?
Which of these triangular jigsaws are impossible to finish?
Can you make a square from these triangles?
Can you work out where the blue-and-red brick roads end?
Some carefully thought-out understanding of how powers of powers
Go to last month's problems to see more solutions.
If you think that mathematical proof is really clear-cut and universal then you should read this article.
Solve the equations to identify the clue numbers in this Sudoku problem. | <urn:uuid:3084e913-38d0-49f7-9d4d-06c00cbf2b5a> | 2.734375 | 163 | Content Listing | Science & Tech. | 50.520179 |
The rover we designed for transporting our Mars astronauts would be able to hold four astronauts. It has tread wheels, two on either side of its ball-shaped body, giving it stability. The wheel treads would have the ability to raise and lower to accommodate changes in the Martian landscape; when necessary, the wheels can convert into a hovering mode.
The rover is powered by a lightweight solar panel similar to those on solar calculators. To exit the rover or release a robotic probe, there is a hatch on the bottom of the rover's ball-shaped body. The rover is made of a lightweight inflatable rubber with a reflective surface to help deflect radiation and heat on the Martian surface. There is a clear window of strong plastic alloyed with glass to allow the astronauts a clear view.
This is a design that could be tested easily on Earth's surface where there is high volcanic activity. But it is not a practical vehicle for Earth, since when exploring Earth we don't need to worry about radiation, excessive heat, or weight limits for transporting vehicles.
| <urn:uuid:8735e886-37e3-4fc1-aa5b-fe02dce19c44> | 3.796875 | 248 | Knowledge Article | Science & Tech. | 47.390098 |
Big dumb booster
Big Dumb Boosters (BDB) are a general class of launch vehicle built around the idea that it is cheaper to build and operate a large, strong, heavy rocket of simple design than a smaller, lighter, more cleverly designed one. Even though the large booster is less efficient, its total cost of operation is lower because it is easier to build, operate and maintain.
In general, Russian rockets are closer to the BDB concept than their US counterparts. US rockets tend to be built of the most modern, lightest materials available and to extremely tight tolerances. Many of them require very careful handling to avoid being damaged while on the ground. Russian rockets, on the other hand, tend to be built more heavily. They are built with larger margins of strength and to looser tolerances.
| <urn:uuid:8c2d1430-b5cb-482e-921f-355c7cec3c80> | 3.609375 | 207 | Knowledge Article | Science & Tech. | 48.213047 |
The black-footed ferret (Mustela nigripes) once occurred throughout the grasslands and basins of interior North America, from southern Canada to Texas. Black-footed ferrets live in burrows made by prairie dogs, hunt prairie dogs for food and are obligate associates of the prairie dog. Their historical range is nearly identical to that of three prairie dog species: the black-tailed prairie dog, Gunnison's prairie dog, and white-tailed prairie dog. Prairie dogs were formerly abundant on the prairies of the continent, and their colonies could possibly have supported as many as 5.6 million black-footed ferrets in the late 1800s. With the development and improved distribution of rodenticides in the early 1900s and expanded agricultural tillage, however, prairie dogs were rapidly eliminated and are now absent from an estimated 90-95% of their historically occupied area.
Ferret decline was linked to this rapid decline and fragmentation of prairie dog populations. Of the approximately 130 counties and provinces where ferrets had been found since 1880, only 10 were known to have ferrets by the 1960s. In 1971, six ferrets were caught and removed from a declining population in South Dakota and a first effort at captive breeding was attempted. The effort was unsuccessful and the last captive ferret died in 1979. Following this loss, the black-footed ferret was thought to be extinct throughout North America. In 1981, however, a small relic population was discovered in a prairie dog colony near Meeteetse, Wyoming. The ferrets that remained at this site were eventually brought into captivity to protect them from an outbreak of sylvatic plague and distemper. These ferrets then became the founder population for reintroduction efforts. Today, all ferrets known to exist in the wild are the result of reintroduction efforts. Thus far, there have been 11 reintroduction sites, including sites in Wyoming, Montana, South Dakota, Arizona, the Utah/Colorado border, and Mexico. Between 1991 and 1999, a total of 1,185 ferrets were released. Reproduction has occurred in the wild in six states. Currently two reintroduced populations are established and no longer require releases of captive-raised ferrets, one in western South Dakota and the other in southeastern Wyoming. Biologists estimate that a total of about 400 black-footed ferrets are alive in the wild in all the states where releases have occurred. Somewhere around another 400 are typically held in captive-breeding facilities around the country.
WYOMING: Meeteetse: In 1981, after it was believed the black-footed ferret had become extinct, a population was discovered distributed among approximately 7,400 ac of white-tailed prairie dog colonies. Total population estimates were 88 (28 adults) in 1983 and 129 (43 adults) in 1984. Between 1986 and 1987, however, canine distemper and sylvatic plague decimated the rediscovered population. During this period, 18 ferrets were captured and later became the founder population for captive breeding efforts. These efforts were successful and have provided ferrets for reintroduction at 11 sites in the western US.
Shirley Basin: In 1991, Shirley Basin, Wyoming became the first site for black-footed ferret reintroduction. Currently, it is the only population of black-footed ferrets known to exist in the state. Two hundred twenty-eight black-footed ferrets were released in the area between 1991-94. In 1995, however, sylvatic plague caused a sharp decline in local white-tailed prairie dog populations. Despite the disease, a small number of the ferrets survived. In 1997, five ferrets were observed during spotlight surveys. The number spotted during surveys rose to 15 in 2000, 19 in 2001, and at least 52 in 2003. Because not all of the area was searched and because spotlight surveys do not detect all ferrets, these numbers represent only a portion of the population. In 2004, surveys of about 10% of the suspected black-footed ferret range in the Shirley Basin discovered a minimum of 21 litters and a total of 88 animals. Since 1991, the area's prairie dog acreage has increased in portions of the survey area and black-footed ferrets have been discovered in some of the new acreage. In 2005, permission was granted to proceed with plans for Wyoming's first black-footed ferret reintroduction since 1994. Plans include the release of 50 captive-bred young-of-the-year into Shirley Basin.
SOUTH DAKOTA: Cheyenne River Sioux Reservation: In 2000, 42 ferrets were released on the Cheyenne River Sioux Reservation and those releases are continuing. In 2005 it was reported that the reservation had 115 ferrets.
Conata Basin/Badlands site: Black-footed ferrets were reintroduced into the Conata Basin/Badlands area of southwestern South Dakota in 1994-1999. Thirty-six ferrets were released in 1994. In 1995, there were at least two wild-born litters. Thirty-three more ferrets were released in 1995, and in 1996 there were as many as seven wild-born litters. Releases at this site indicated that “preconditioning” ferrets to the site prior to release resulted in higher success. As of 2000, this site had at least 200 ferrets and appeared to be the first established, self-sustaining wild population since reintroductions began. In 2005, 245 ferrets were reported, and because plague was confirmed in prairie dog colonies near the ferrets, a preventative treatment program was initiated.
In 2004, 93 ferrets were released onto the Rosebud Sioux Reservation in South Dakota. In 2005, 42 survivors were counted and 15 new kits were observed later that year.
UTAH/COLORADO: In 1999, 72 ferrets were released in Utah's Coyote Basin, located along the Colorado/Utah border. Since then, 255 ferrets have been released at this site. This population appears to be making good progress, with a minimum of 34 animals detected on one core release area in 2002 and with documented wild production every year since 2000.
In 2001, a reintroduction effort began on the Colorado side of the Colorado/Utah border with the release of 35 ferrets at the Wolf Creek Management Area. A total of 189 ferrets have been released at this site to date. Based on a weeklong counting operation in late August 2005, in which 5 sightings were confirmed and 5 others were unconfirmed, ferrets here are surviving (although these numbers are low, they are representative of a larger population). Wolf Creek's first confirmation of a wild-born kit took place in November 2005.
ARIZONA: Reintroductions began at Aubrey Valley in Arizona in 1996, when 4 ferrets were released into large fenced enclosures on a reintroduction site in Coconino County. Thirty-five ferrets were later released into ten on-site enclosures. An additional 15 kits were added to the site in the fall of 1996. As of 2003, this reintroduction had moderate success, with at least two successive generations of wild-born kits. In 2004, 24 ferrets were captured and tagged, and in 2005, 35 were captured and tagged. In both years, others were observed but not captured, suggesting that there has been a solid increase in numbers.
MONTANA: The first reintroduction attempt in Montana was initiated on the Charles M. Russell National Wildlife Refuge in 1994 [5,14] with the release of 35 ferrets. In 1995, 33 ferrets were released. In 1996, at least 10 females (some wild-born) were known to have produced litters totaling at least 15 kits. In the fall of 1996, an additional 39 ferrets were released onto prairie dog colonies unoccupied by ferrets. By fall of 1999, a total of 171 captive-reared kits had been released and the release of captive ferrets ceased. Wild-born kit production increased each year to a peak of 44 observed kits from 15 litters during summer 2000. In 2001, however, a population crash began that continued through 2002, and the population appeared in danger of extirpation without additional releases. Thirty-seven captive ferrets were released in 2003 and 21 more were released in 2005, and the number of wild-born ferrets increased from just a few individuals in the spring of 2003 to over 10 in 2005.
In 1997, a second Montana release site was established on the Fort Belknap Indian Community when 23 ferrets were released. Since then, 110 black-footed ferret kits have been released at this site (the Snake Butte reintroduction site). This reintroduction has met with only moderate success. Despite additional releases, as of 1999 only one litter had been born in the wild at the site.
MEXICO: Ninety-one captive-reared ferrets were reintroduced into northern Chihuahua, Mexico, in the fall of 2001. This site supports the largest contiguous colony of black-tailed prairie dogs found in North America today. In 2002, an additional 69 were reintroduced on adjacent areas of the El Cuervo complex. Initial follow-up survey results were promising, with at least 26 ferrets documented during 2002, of which nine were wild-born. More recently, probably because of drought, numbers appear to have declined, with only two animals seen in a recent survey.
CAPTIVE POPULATION: In 1988, the single captive population of black-footed ferrets held at the Wyoming Game and Fish Department's Sybille Wildlife Research and Conservation Education Center was split into three separate captive subpopulations to avoid the possibility that a single catastrophic event could wipe out the entire population. Currently, the captive ferret population is divided among seven captive-breeding facilities throughout the United States and Canada. The captive population of juveniles and adult ferrets now fluctuates annually between 300 and 600 animals.
Wyoming Game and Fish Department. 2004. Black footed ferret (Mustela nigripes). Available at <http://gf.state.wy.us/wildlife/CompConvStrategy/Species/Mammals/PDFS/Black-footed%20Ferrett.pdf>.
U.S. Fish and Wildlife Service. 1988. Black-footed Ferret Recovery plan. Denver, Co. 154pp.
U.S. Fish and Wildlife Service. 2000. Black-footed ferret. Website <http://mountain-prairie.fws.gov/species/mammals/blackfootedferret> accessed March, 2006.
Matchett, R. 2006. Personal communication and graph “UL Bend NWR Black-footed Ferret Population” provided by Randy Matchett USFWS Senior Biologist, CMR NWR. April, 2006.
Lockhart, M. and P. Marinari. 2000. Ferrets Home on the Range. Endangered Species Bulletin. XXV(3):16-17.
Lewandowski, J. 2005. Ferrets faring well in Northwest Colorado. Colorado Division of Wildlife. Website <http://www.blackfootedferret.org/news/Ferrets-faring%20-well-9-21-05.htm> accessed March, 2006.
McIntosh, P. 2001. Action Alert: Sioux Reservation Site of Latest Release. National Wildlife 39(1).
Wyoming Game and Fish Department. 1997. Black-footed Ferret. Wild Times 13(8). Available at <http://gf.state.wy.us/services/publications/wildtimes/ferret.htm>.
Wyoming Game and Fish Department. 2004. August Surveys Show Shirley Basin Ferrets Continue to Prosper. Press Release 10/8/2004. Available at <http://gf.state.wy.us/services/news/pressreleases/04/10/08/041008_1.asp>
Wyoming Game and Fish Department. 2003 Shirley Basin Ferrets Alive and Well; August Surveys Tally Over 50. Press Release 9/19/2003. Available at <http://gf.state.wy.us/services/news/pressreleases/03/09/19/030919_1.asp>
Wyoming Game and Fish Department. 2005. Shirley Basin Ferret Success Sets Stage for Another Reintroduction this Fall. Press Release 8/5/2005. Available at <http://gf.state.wy.us/services/news/pressreleases/05/08/05/050805_1.asp>.
Associated Press. 2005. Count Shows Ferret Population Doing Well. Rapid City Journal, December 26, 2005. Available at <http://www.aberdeennews.com/mld/aberdeennews/news/13490084.htm>.
Dowd-Stukel, D. 1997. Dakota Natural Heritage: Black-footed ferret. South Dakota Wildlife Diversity Program. Website <http://www.sdgfp.info/Wildlife/Diversity/Digest%20Articles/bfferret.htm> accessed March, 2006.
Reading, R.P., T.W. Clark, A. Vargas, L.R. Hanebury, B.J. Miller, and D. Biggins. 1996. Recent Directions in Black-footed Ferret Recovery. Available at <http://www.umich.edu/~esupdate/library/96.10-11/reading.html>
U.S. Fish and Wildlife Service. 2005. Plague Found Near Black-footed Ferrets in Conata Basin. Press Release, August 31, 2005. Available at <http://news.fws.gov/NewsReleases/showNews.cfm?newsId=0E06107D-65BF-03E7-24804099B69C5D62>.
Bureau of Land Management. 2006. Black-footed ferrets in Colorado: Colorado-Utah Black-footed Ferret Project. Website <http://www.co.blm.gov/lsra/bffwebpage.htm> updated February 2006, accessed March, 2006.
Lockhart, M., J. Pacheco, R. List, and G. Cebellos. Black-footed Ferrets Thrive in Mexico. Endangered Species Bulletin XXVIII(3):12-13.
Holmes, B. 2006. Personal communication with Brian Holmes, Wildlife Biologist, BLM White River Office. April, 2006.
Copenhaver, L. 2006. Endangered black-footed ferret making comeback. Tucson Citizen, January 2, 2006.
U.S. Fish and Wildlife Service. 2002. Charles M. Russell National Wildlife Refuge, Lewistown, Montana Annual Narrative. Available at <http://cmr.fws.gov/Annual%20Narratives/2002%20Annual%20Narrative/Wildlife.htm>.
U.S. Fish and Wildlife Service. 1993. Proposed Establishment of a Nonessential Experimental Population of Black-Footed Ferrets in Southwestern South Dakota. Federal Register, May 19, 1993 (58 FR 29176-29186).
U.S. Fish and Wildlife Service. 2002. Proposed Establishment of a Nonessential Experimental Population of Black-footed Ferrets in South-central South Dakota. Federal Register, September 11, 2002 (67 FR 57558-57567). | <urn:uuid:ef97af06-ea62-4c94-8758-8f9c0f636efa> | 3.703125 | 3,198 | Knowledge Article | Science & Tech. | 56.82413 |
Carbon dioxide (CO2): Carbon dioxide occurs naturally in the atmosphere, is exhaled by humans and other animals and is used by plants in photosynthesis. Growing plants and the oceans act as carbon sinks, taking carbon dioxide from the atmosphere and storing the carbon. As plant material decomposes, the carbon is released back into the atmosphere, largely as carbon dioxide. Burning fossil fuels, land clearing and other activities of modern industrial society have caused the concentration of carbon dioxide in the atmosphere to climb from about 280 parts per million to 380 parts per million, causing warming and other climate changes.
Covalent bond: A chemical bond that involves sharing a pair of electrons between atoms in a molecule.
Geosequestration: Carbon dioxide is captured, compressed and injected into deep geological formations.
IPCC: Intergovernmental Panel on Climate Change
IGCC: Integrated Gasification Combined Cycle
Intergovernmental Panel on Climate Change (IPCC): Established in 1988 by the World Meteorological Organization and the United Nations Environment Programme to assess on a comprehensive, objective, open and transparent basis the scientific, technical and socio-economic information relevant to understanding the scientific basis of risk of human-induced climate change, its potential impacts and options for adaptation and mitigation. The IPCC does not carry out research nor does it monitor climate related data or other relevant parameters. It bases its assessment mainly on peer reviewed and published scientific/technical literature. Learn more...
Integrated Gasification Combined Cycle (IGCC): A process where fossil fuel is not combusted but is reacted at high pressure and temperature to form a synthesis gas, which is further reacted with water, to produce carbon dioxide (which can be captured) and hydrogen (which is combusted for energy).
Kyoto Protocol: An international agreement which seeks to reduce annual greenhouse gas emissions by developed nations in its first commitment period, 2008-2012, to 5 per cent less than 1990 emissions. It seeks to achieve this by imposing mandatory emissions targets on developed nations that ratify the Protocol. Learn more...
PCC: Post-combustion capture. The capture of carbon dioxide (usually involving separation of carbon dioxide from other flue gases) after fossil fuel has been combusted.
Pre-combustion capture: The capture of carbon dioxide before the combustion of fuel. This could be done through Integrated Gasification Combined Cycle, where the fossil fuel is not combusted but reacted at high pressure and temperature to form a synthesis gas, which is further reacted with water, to produce carbon dioxide (which is captured) and hydrogen (which is combusted for energy).
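The two entries above describe the same underlying chemistry. Stated generically (this is standard gasification chemistry, sketched here for illustration rather than quoted from the glossary), steam gasification of the fuel's carbon yields synthesis gas, and the water-gas shift reaction then concentrates the carbon into CO2 for capture while freeing hydrogen for combustion:

```latex
% Illustrative overall reactions behind IGCC / pre-combustion capture.
\[
\mathrm{C} + \mathrm{H_2O} \longrightarrow \mathrm{CO} + \mathrm{H_2}
\quad \text{(steam gasification: fuel to synthesis gas)}
\]
\[
\mathrm{CO} + \mathrm{H_2O} \longrightarrow \mathrm{CO_2} + \mathrm{H_2}
\quad \text{(water--gas shift: CO$_2$ separated for storage, H$_2$ burned for energy)}
\]
```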
Seismic: A method of exploring the underlying strata of the earth, in which shocks are created, the resulting vibrations providing geological information
Supercritical state: Carbon dioxide (or any substance) is said to be in a supercritical state when its temperature and pressure are above its critical point. The critical point is the highest temperature and pressure at which it can exist as a gas and liquid in equilibrium. In its supercritical state, a substance shows properties of both liquids and gases, expanding to fill its container like a gas, but with the density of a liquid. The critical point for carbon dioxide occurs at a pressure of 73.8 bar (73 atm) and a temperature of 31.1°C.
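A minimal sketch of the definition above expressed as a check; the function name and the sample conditions are illustrative assumptions, not part of the glossary:

```python
# Critical point of CO2 as quoted in the glossary entry above.
CO2_CRITICAL_TEMP_C = 31.1        # degrees Celsius
CO2_CRITICAL_PRESSURE_BAR = 73.8

def is_supercritical_co2(temp_c: float, pressure_bar: float) -> bool:
    """True when both temperature and pressure exceed the critical point."""
    return temp_c > CO2_CRITICAL_TEMP_C and pressure_bar > CO2_CRITICAL_PRESSURE_BAR

# Conditions at typical geosequestration depths (roughly 1 km or more, assumed
# here) exceed the critical point, so injected CO2 is stored in the dense
# supercritical state rather than as a gas.
print(is_supercritical_co2(35.0, 100.0))  # True  -- assumed deep-reservoir conditions
print(is_supercritical_co2(25.0, 1.0))    # False -- ordinary surface conditions
```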
UNFCCC: United Nations Framework Convention on Climate Change
United Nations Framework Convention on Climate Change (UNFCCC): Sets an overall framework for intergovernmental efforts to tackle the challenge posed by climate change. The Convention, which entered into force in 1994, enjoys almost universal global membership, with 189 countries having ratified it. Learn more...
Van der Waals Forces: Weak, short-range electrostatic attractive forces between uncharged molecules, arising from the interaction of permanent or transient electric dipole moments.
Viscous fingering: The formation of patterns in a morphologically unstable interface between two fluids in a porous medium. | <urn:uuid:9bc81c90-5dff-47f6-8169-13c49ced5400> | 3.796875 | 800 | Structured Data | Science & Tech. | 23.818364 |
A tornado is a violently rotating column of air in contact with and extending between a cloud (often a thunderstorm cloud) and the surface of the earth. Winds in most tornadoes blow at 100 mph or less, but in the most violent, and least frequent, wind speeds can exceed 250 mph.
Tornadoes, often nicknamed "twisters," typically track along the ground for a few miles or less and are less than 100 yards wide, though some monsters can remain in contact with the earth for well over fifty miles and exceed one mile in width.
Several conditions are required for the development of tornadoes and the thunderstorm clouds with which most tornadoes are associated. Abundant low level moisture is necessary to contribute to the development of a thunderstorm, and a "trigger" (perhaps a cold front or other low level zone of converging winds) is needed to lift the moist air aloft.
Once the air begins to rise and becomes saturated, it will continue rising to great heights and produce a thunderstorm cloud, if the atmosphere is unstable. An unstable atmosphere is one where the temperature decreases rapidly with height.
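To make "decreases rapidly with height" concrete, here is a hedged sketch of the usual parcel argument, ignoring moisture effects: a rising unsaturated parcel cools at about 9.8 °C per kilometer, so an environment that cools faster than that with height leaves the parcel warmer than its surroundings, and therefore buoyant. The function and sample values are illustrative, not from the article.

```python
DRY_ADIABATIC_LAPSE = 9.8  # cooling of a rising unsaturated parcel, deg C per km

def is_absolutely_unstable(environmental_lapse_c_per_km: float) -> bool:
    """An atmosphere whose temperature drops faster with height than a rising
    parcel cools keeps that parcel warmer than its surroundings, so the parcel
    keeps accelerating upward -- the 'unstable atmosphere' described above."""
    return environmental_lapse_c_per_km > DRY_ADIABATIC_LAPSE

print(is_absolutely_unstable(11.0))  # True  -- temperature falls fast with height
print(is_absolutely_unstable(6.5))   # False -- near the global-average lapse rate
```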
Atmospheric instability can also occur when dry air overlays moist air near the earth's surface. Finally, tornadoes usually form in areas where winds at all levels of the atmosphere are not only strong, but also turn with height in a clockwise, or veering, direction.
Tornadoes can appear as a traditional funnel shape, or in a slender rope-like form. Some have a churning, smoky look to them, and others contain "multiple vortices" - small, individual tornadoes rotating around a common center. Even others may be nearly invisible, with swirling dust or debris at ground level the only indication of the tornado's presence.
While the work of Mausumi Dikpati suggests that meridional flows in the sun's convective layer may allow us to forecast sunspot activity (6 March, p 38), other forces may also be at work. In particular, the giant planets in the solar system may play a role through the gravitational pull they exert on the massive amount of fluid flowing in the outer layer of the sun.
Curiously, this gravitational force can be expressed as a Fourier series whose most important terms have interesting periodicities: one of these coincides with the 11-year cycle of the sunspots. What we may be seeing, therefore, is the direct influence of planetary tidal forces and their effects on the stability of the magnetic loops created in the meridional flows in the sun's convective layer. These forces could be a major factor in the cycle of magnetic loops believed to create the sunspots.
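As a rough illustration of those "interesting periodicities" (this arithmetic is mine, not the letter's): the interval between successive alignments of two planets is the synodic period, 1/P_syn = 1/P_inner - 1/P_outer, and Jupiter's own orbital period of roughly 11.86 years already lies close to the 11-year sunspot cycle.

```python
def synodic_period(p_inner: float, p_outer: float) -> float:
    """Years between successive alignments of two planets (same relative geometry)."""
    return 1.0 / (1.0 / p_inner - 1.0 / p_outer)

# Approximate orbital periods in years (standard values).
EARTH, JUPITER, SATURN = 1.000, 11.862, 29.457

print(f"Jupiter's orbital period:       {JUPITER:.2f} yr")                           # ~11.86 yr
print(f"Jupiter-Saturn alignment cycle: {synodic_period(JUPITER, SATURN):.2f} yr")   # ~19.86 yr
print(f"Earth-Jupiter alignment cycle:  {synodic_period(EARTH, JUPITER):.3f} yr")    # ~1.092 yr
```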
Jupiter is ...
| <urn:uuid:20122795-0890-44e8-a395-a344506a39b7> | 3.296875 | 216 | Truncated | Science & Tech. | 48.704907 |
Marine Invasive Species
Brenda Bowling, Dickinson Marine Lab, Dickinson,Texas
Approximately 50,000 non-indigenous species have been introduced into the United States, especially in the last 200 years. In fact, nearly 98% of the crops and animals raised for food in our agricultural industry were introduced from other countries. The benefits (food, clothing, jobs, recreation, etc.) derived from many of these introductions are obvious; however, sometimes these species cause massive damages. When a non-indigenous species causes (or is likely to cause) economic, environmental or human health damages, the species is termed "invasive". Although only a small percentage of introduced species become invasive, the social, economical and environmental harm they can cause is overwhelming.
Balanced ecosystems provide food and habitat for healthy populations and control measures that keep predators and parasites in check. When non-native species invade an ecosystem, the balance is disrupted. Lack of local natural predators allows some invasives to thrive and compete with native species for food and space. They prey on local populations and can introduce foreign parasites and pathogens that the ecosystem may have no mechanisms to control. Even worse, they can alter the natural habitat and possibly the genetic structure of native species. Invasive species impact nearly one-half of the species on the federal threatened and endangered lists and are second only to habitat loss as a cause of species extinction. The International Union for the Conservation of Nature (IUCN) lists invasive species as the second largest threat (next to habitat loss) to ecosystem biodiversity. The economic costs are enormous. The cost to the United States in damages, clean-up and control of invasives is estimated at more than $100 billion per year.
Each year, marine invasives cause the collapse of many commercial and recreational fisheries and millions of dollars in property damage. In a worst-case example, San Francisco Bay has documented 212 known invasive species, with another 123 species considered possible invasives. From 1961 through 1995, on average one new invasive species was introduced into the bay every 14 weeks, resulting in some areas where 100% of the common organisms are non-indigenous. This has had a profound effect on the ecosystem, with structural changes to the estuarine habitat causing increased erosion and loss of vital shorebird nursery habitat. The invasives also probably contributed to the extinction of one freshwater fish and may be having a negative impact on several endangered birds and mammals.
The threat of overwhelming invasions into estuaries in the Gulf of Mexico is increasing, because of the many ways introductions enter our waters. The shipping industry, through ballast water exchange and hull fouling, is a major transporter of non-indigenous species, but aquaculture, seafood processing, aquarium and pet industries, restaurants and seafood dealers, the bait industry and even biological researchers also contribute.
Unfortunately, Gulf of Mexico estuaries have already been invaded by several marine invasives. The brown mussel (a close relative of the zebra mussel) was discovered in Texas in 1989 on a Corpus Christi jetty. A native of Brazil, Venezuela, and South Africa, it is believed to have entered Texas in ballast water or on the hull of a Venezuelan ship. The brown mussel, found in clumps of 25,000 to 30,000 per square meter, is a fouling organism that multiplies and causes damages similar to zebra mussels in the Great Lakes. So far, the damages have been limited to a few clogged intake pipes and buoys weighted down with them. However, since its arrival, the brown mussel has spread north to the Colorado River and south to Vera Cruz, Mexico. It has also spread from the coastline to platforms 16 miles off Port Aransas and to many sites in Corpus Christi Bay and Lower Laguna Madre.
The Australian spotted jellyfish was thought to have arrived in the Caribbean in the 1970's or 1980's from Panama Canal ship crossings, but wasn't seen in quantities until the summer of 2000 when they were observed from Florida to Texas. They are mass producers -- up to 300 adult jellyfish from one egg -- and became so abundant that they clogged up several inshore estuaries from Mobile Bay to the Mississippi River. The jellyfish directly impacted the shrimping industry by clogging up shrimpers' nets. They also eat algae, plankton, fish eggs and small fish and were so thick in some areas that they literally ate 100% of the zooplankton in the water, including some valuable fish larvae. That year, they displaced many native organisms and are thought to have blocked the transportation of fish and shrimp larvae into vital nursery habitat. They put a temporary damper on tourism and hurt commercial and recreational fishing. The jellyfish left at the end of the summer, but could reappear any time as they have off Florida's and Mississippi's coasts and become established.
Other marine invasives that have the potential for threatening the Gulf of Mexico include the Asian green mussel (another relative of the zebra mussel) currently spreading through southern Florida and the lionfish (competitors with snapper and grouper) found on the Atlantic side of Florida, probably the result of accidental or intentional releases from the aquarium trade. The rapa whelk, which wiped out native oysters in the Black Sea, have been discovered in Chesapeake Bay and may threaten their oyster and clam populations. Several crab species, including the European green crab and the Chinese mitten crab, have devastated clam industries and spread a harmful human parasite to other estuaries on both coasts.
Currently, there are few legal or regulatory management tools in place to prevent or control these introductions; however, management of invasive species is becoming a high priority for many local and state governments. The best and least costly method to manage invasives is to prevent their introduction. Unfortunately, once invasive species become established, eradication is extremely difficult. Although research is now being conducted to develop better, safer methods of prevention control of invasives, public support for legislation and research, especially from anglers, is vital to successfully protect our waters from marine invaders.
| <urn:uuid:bedb4440-238a-4118-b063-62b09e876106> | 3.796875 | 1,314 | Knowledge Article | Science & Tech. | 25.903494 |
A photo of the COSPIN instrument
European Space Agency
COSPIN Instrument Page
COSPIN is one of the instruments on the Ulysses spacecraft.
COSPIN stands for the COsmic and Solar Particle INvestigation. The COSPIN instrument is actually made up of 5 different sensors: the Dual Anisotropy Telescopes (ATs), the Low Energy Telescope (LET), the High Energy Telescope (HET), the High Flux Telescope (HFT), and the Kiel Electron Telescope (KET). Pictures of some of these sensors appear here.
COSPIN is another of the instruments onboard Ulysses that is helping us make a map of the heliosphere. The Earth is of course inside the heliosphere, or the region of space influenced by the solar wind. Because the solar wind affects life on Earth, it is important that we understand the heliosphere and all of the particles within this region. COSPIN does just that. It collects data about the solar wind and about galactic cosmic rays within the heliosphere.
Spacecraft like Pioneer 10 and 11 and Voyager 1 and 2 started making a map of the heliosphere. They've mapped to the outer planets and beyond! But, Ulysses has a special orbit that allows it to map areas no other spacecraft has ever been to.
The Ulysses probe was launched in 1990. It is still alive and well. The builders of COSPIN knew that Ulysses would be in space a long time and so they took special care to assure that COSPIN would be able to survive a long time in space. They also followed the ground rule of many spacecraft designers in that no single failure of any one sensor should result in the failure of another sensor. This assures us that useful data will be coming from COSPIN for a long time to come.
| <urn:uuid:12a7fdf4-8083-4763-83c0-fff8b7c72787> | 3.4375 | 748 | Knowledge Article | Science & Tech. | 55.958203 |
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2002 January 7
Explanation: Sometimes the simplest shapes are the hardest to explain. For example, the origin of the mysterious cone-shaped region seen on the far left remains a mystery. The interstellar formation, dubbed the Cone Nebula, is located about 2700 light years away. Other features in the image include red emission from diffuse interstellar hydrogen, wispy filaments of dark dust, and bright star S Monocerotis, visible on the far right. Blue reflection nebulae surround the brighter stars. The dark Cone Nebula region clearly contains much dust which blocks light from the emission nebula and open cluster NGC 2264 behind it. One hypothesis holds that the Cone Nebula is formed by wind particles from an energetic source blowing past the Bok Globule at the head of the cone.
Authors & editors:
Jerry Bonnell (USRA)
NASA Technical Rep.: Jay Norris. Specific rights apply.
A service of: LHEA at NASA/ GSFC
& Michigan Tech. U. | <urn:uuid:eb096c06-7396-40dc-b252-03798a0323d1> | 3.9375 | 240 | Knowledge Article | Science & Tech. | 43.265955 |
This book is about earthquakes, a natural disaster that can kill and destroy. But with proper prediction, we can figure out where the danger is greatest and how to prevent as many casualties as possible. If you are interested in volcanoes but think that you should learn about seismology as well, then here is the place.
Table of Contents
- Plate Tectonics
- Basics of Earthquakes
- Measuring and Predicting Earthquakes
- Seismology and the Seismograph
- The Seismologist
- Special Topics
- Some Earthquakes Around the World
- Volcanoes—Volcanoes and earthquakes are often seen together.
- The US Geological Survey (USGS)—The USGS researches earthquakes and volcanoes, among other things.
- DK E-guide Earth—The DK E-guide to Earth is Internet-linked. Here is the section on earthquakes.
- Incorporated Research Institutions for Seismology (IRIS)—IRIS is a non-profit consortium dedicated to the acquisition, management, and distribution of seismological data. | <urn:uuid:f8e69e36-2737-4f90-9730-848bf480c7dd> | 3.859375 | 233 | Content Listing | Science & Tech. | 31.707895 |
Evolutionary and Historical Ecology
To understand the conditions under which the ponderosa pine forests of the American Southwest evolved, a functional, evolutionary theory of ecosystem health is essential. Without it, we are unlikely to be able to provide prescriptions for the return of our forests to health, much less prevent or control wildfires.
Ecology is the study of the interactions between organisms and their environments. Evolution is defined as any change in the gene pool. Evolutionary ecologists study the adaptation of organisms to their environment, from molecular to ecosystemic levels.
Humans have adapted to—and have been the periodic instigators of—ecosystem change for at least a million years. While most ecologists study nonhuman relationships, historical ecologists recognize that human action modifies the habits and distributions of nonhuman species.
Ecologists pursue an integrative approach to the study of ecosystem health by working with other natural and physical scientists such as biologists, geologists, anthropologists, archeologists and environmental historians. Their combined resources are many, including Native American land-use history, Anglo and Hispanic settlement history, soil phytolith analysis, packrat middens, land surveys, repeat photography, fire scars, and dendrochronology.
In the western United States ecologists are called upon to build meaningful links between basic research and practical application. They communicate the relevance of their science to federal agencies and to other institutions that benefit from their insights. They inform the public about the nature, progress, and implications of their discoveries.
Allen, Craig D. et al. 2002. Ecological restoration of southwestern ponderosa pine ecosystems: A broad perspective. Ecological Applications. 12(5), pp. 1418-1433. An important recent paper promoting a broad and flexible perspective on ecological restoration in southwestern forests.
Cooper, C.F. 1960. Changes in vegetation, structure, and growth of southwestern pine forests since white settlement. Ecology 42:493-499. Classic study of southwestern forest ecosystem changes due to grazing, logging, and fire exclusion associated with European settlement.
Covington, W. Wallace. 2003. The evolutionary and historical context. Pages 26-47 in Friederici, Peter, ed. Ecological Restoration of Southwestern Ponderosa Pine Forests: A Sourcebook for Research and Application. Washington, D.C. Island Press. The Director of the Ecological Restoration Institute summarizes the state of knowledge of ponderosa pine reference conditions in the Southwest.
Crumley, Carole L. ed. 1993. Historical Ecology: Cultural Knowledge and Changing Landscapes. Santa Fe, NM: School of American Research Press; [Seattle]: Distributed by the University of Washington Press. A useful collection exploring the theory and application of historical ecology.
Egan, David and Evelyn A. Howell, eds. 2001. The Historical Ecology Handbook: A Restorationist's Guide to Reference Ecosystems. Boulder, CO: Island Press. Why historical ecology matters to ecosystem restorationists, and the importance of reference ecosystems.
Fulé, P.Z., M.M. Moore, and W.W. Covington. 1997. Determining reference conditions for ecosystem management in southwestern ponderosa pine forests. Ecological Applications 7(3):895-908. Reconstruction of pre-European forest structure and fire regime at Camp Navajo, AZ.
Lee, Kai N., 1995. Compass and Gyroscope: Integrating Science and Politics for the Environment. Washington D.C.: Island Press. A succinct analysis of the relations between the ecological sciences and the political process.
Leopold, A. 1924. Grass, brush, timber, and fire in southern Arizona. Journal of Forestry 22:1-10. Pioneering observations about grazing, fire, and trees by the founder of ecological restoration.
Leopold, A. 1937. Conservationist in Mexico. American Forests 37:118-120, 146. Contrast between “natural” forests in northern Mexico and degraded forests in the southwestern U.S.
Leopold, A. 1941. Wilderness as a land laboratory. Living Wilderness 6:3. The need for a point of reference or “base-datum” of natural ecosystems.
Moir, William H., B. Geils, M.A. Benoit, and D. Scurlock. 1997. Ecology of Southwestern Ponderosa Pine Forests. Pages 3-27 in Block, William M. and D.M. Finch, tech. ed. Songbird ecology in southwestern ponderosa pine forests: a literature review. Gen. Tech. Rep. RM-GTR-292. Fort Collins, CO: U.S. Dept. of Agriculture, Forest Service, Rocky Mountain Forest and Range Experiment Station. 152 p. A fine technical review of the literature. Available online at http://www.gffp.org/pine/ecology.htm
Pianka, E. R. 1974. Evolutionary Ecology. Harper and Row, New York. 356 pp. The most accessible text on the subject.
Swetnam, T.W., C.D. Allen, and J. Betancourt. 1999. Applied historical ecology: Using the past to manage for the future. Ecological Applications 9(4):1189-1206. Examples from the Southwest to illustrate some of the values and limitations of applied historical ecology, and the need for multiple, comparative histories from many locations. Available online at http://www.mesc.usgs.gov/products/pubs/258/258.pdf
Last edited June 25, 2003 | <urn:uuid:aefeac5e-a2aa-4595-bc8f-9f3fd5a015e9> | 3.1875 | 1,153 | Knowledge Article | Science & Tech. | 37.603474 |
I was a bit surprised to read the attached article in the IET Power Engineer, because I have been a member of the UK Institution of Electrical Engineers for many years, and this kind of very technical nuclear article would usually have appeared in the UK BNES journal. Maybe nuclear is getting more relevant. The article is about improving PWR efficiency by using hollow fuel pellets and by adding nanofluids of oxides and metals to the cooling water. I wondered if either of these two efficiency mods could be applied to CANDU. However, hollow fuel elements would seem very difficult to make; would they not require an inner Zircaloy tube? As for the oxides and metals in the cooling water, would this not require fuel enrichment and cause more contamination? A final pet peeve of mine is the measurement of "efficiency" of a nuclear reactor as though it were a coal-fired plant. Efficiency of different kinds of nuclear reactor may be compared in terms of the full cycle of mining and extracting the uranium, transportation, processing, enrichment (if PWR) and even the eventual recovery of the energy in the used fuel.
| <urn:uuid:c67cbb89-d6f7-40eb-b36a-be09e89e47a6> | 3.0625 | 242 | Comment Section | Science & Tech. | 38.767444 |
Save the environment... and money
NPR reports on another significant, if underreported, source of pollution: computers.
Computers and computer monitors in the United States are responsible for the unnecessary production of millions of tons of greenhouse gases every year, according to the Environmental Protection Agency. In U.S. companies alone, more than $1 billion a year is wasted on electricity for computer monitors that are turned on when they shouldn't be. EPA officials say emissions could be drastically reduced if companies and individual computer users would follow a few energy-saving guidelines.
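As a back-of-the-envelope sketch of the arithmetic behind such estimates (the wattage, idle hours, electricity price, and emissions factor below are assumptions for illustration, not figures from the EPA or NPR):

```python
# Estimate the annual waste from one monitor left on when nobody is using it.
MONITOR_WATTS = 60          # assumed draw of a typical monitor of the era
IDLE_HOURS_PER_DAY = 16     # assumed: overnight plus time away from the desk
RATE_USD_PER_KWH = 0.10     # assumed electricity price
KG_CO2_PER_KWH = 0.6        # assumed grid emissions factor

kwh_per_year = MONITOR_WATTS / 1000 * IDLE_HOURS_PER_DAY * 365
print(f"{kwh_per_year:.0f} kWh wasted per year")            # ~350 kWh
print(f"${kwh_per_year * RATE_USD_PER_KWH:.2f} per year")   # ~$35
print(f"{kwh_per_year * KG_CO2_PER_KWH:.0f} kg CO2 per year")  # ~210 kg
# Multiplied across tens of millions of office monitors, a billion-dollar
# national figure of the kind the EPA cites becomes plausible.
```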
The complete audio version of this story can be accessed by clicking here.
So if you unnecessarily leave your home computer on while you're asleep or at work, don't whine about SUV owners killing the ozone layer! | <urn:uuid:c3d671fc-9c4a-4b12-ae24-b008e40d8dcd> | 2.71875 | 153 | Personal Blog | Science & Tech. | 33.296891 |
One Hundred and Fifty Years of Debris-Flows in the Swiss Alps
Bollschweiler, M. and Stoffel, M. 2010. Changes and trends in debris-flow frequency since AD 1850: Results from the Swiss Alps. The Holocene 20: 907-916.
In a study designed to explore this question, Bollschweiler and Stoffel developed a history of debris-flow frequencies for eight different areas in the Zermatt Valley -- a dry inner-alpine valley of the Valais Alps (Switzerland, with central coordinates of 46°10'N, 47°7'E) -- based on data obtained from "tree-ring series of affected conifers and complemented, where available, with data from local archives," which work entailed the sampling of 2467 individual trees that had been impacted by debris-flow activity in order to obtain 4491 pertinent increment cores.
The two Swiss scientists found there were peaks in debris-flow activity "toward the end of the Little Ice Age and in the early twentieth century when warm-wet conditions prevailed during summers in the Swiss Alps," but they say they also observed "a considerable decrease in frequency over the past decades which results from a decrease in the frequency of triggering precipitation events." Most importantly, they report that when longer-term changes were sought, they could not identify "any significant trends in the debris-flow series between 1850 and 2009."
In discussing their real-world debris-flow results, Bollschweiler and Stoffel say they "contradict the widely accepted assumption that climatic changes will univocally lead to an increase in event frequency." But they add that their findings "are in concert with data from Jomelli et al. (2007), indicating that the most recent past (2000-2009) represents the period with the lowest frequency of debris-flow events since AD 1900," which latter decade is touted by climate alarmists as having been the warmest such period of the past millennium or more. Hence, the world's global warming gurus once again appear to be close to one hundred and eighty degrees out of phase with reality on this significant subject, insofar as it has been empirically examined to date.
Jomelli, V., Brunstein, D., Grancher, D. and Pech, P. 2007. Is the response of hill slope debris flows to recent climate change univocal? A case study in the Massif des Ecrins (French Alps). Climatic Change 85: 119-137. | <urn:uuid:e929388b-7e9a-4169-9578-03971ba4068e> | 3.25 | 522 | Academic Writing | Science & Tech. | 51.193875 |
Read all about the number pi and the mathematicians who have tried to find out its value as accurately as possible.
Use the Cuisenaire rods environment to investigate ratio. Can you find pairs of rods in the ratio 3:2? How about 9:6?
A card pairing game involving knowledge of simple ratio.
Rachel has a bag of nuts.
For every cashew nut in the bag, there are two peanuts.
There are 8 cashews in Rachel's bag. How many peanuts are there? | <urn:uuid:20574fce-20d2-4668-b55b-4b0a88f9d594> | 2.84375 | 106 | Content Listing | Science & Tech. | 68.026252 |
Red-Eyed Tree Frog (Agalychnis callidryas)
Thanks to their big bulging red eyes, it's not hard to recognize red-eyed tree frogs! This alien-like feature is a defense mechanism called "startle coloration." When the frog closes its eyes, its green eyelids help it to blend in with the leafy environment. If the nocturnal frog is approached while asleep during the day, its suddenly opened eyes will momentarily startle the predator, giving the frog a few seconds to escape. However, the frogs' eyes are not their only fashion statement! To match the brilliance of their eyes, these frogs have bright lime-green bodies that sometimes feature hints of yellow or blue. According to their mood, red-eyed tree frogs can even become a dark green or reddish-brown color. They have white bellies and throats, but their sides are blue with white borders and vertical white bars. Their feet are bright red or orange. Adept climbers, red-eyed tree frogs have cup-like footpads that enable them to spend their days clinging to leaves in the rainforest canopy, and their nights hunting for insects and smaller frogs. Male red-eyed tree frogs can grow up to two inches in length and females up to three inches.
First identified by herpetologist Edward Cope in the 1860s, the red-eyed tree frog is found in the lowlands and on slopes of Central America and as far north as Mexico. As with other amphibians, red-eyed tree frogs start life as tadpoles in temporary or permanent ponds. As adult frogs, they remain dependent on water to keep their skin moist, staying close to water sources such as rivers found in humid lowland rainforests. Red-eyed tree frogs can be found clinging to branches, tree trunks and even underneath tree leaves. Adults live in the canopy layer of the rainforest, sometimes hiding inside bromeliads.
Red-eyed tree frogs are carnivores, feeding mostly on insects. They prefer crickets, flies, grasshoppers and moths. Sometimes, they will eat smaller frogs. For tadpoles, fruit flies and pinhead crickets are the meals of choice.
Frogs have historically been an indicator species, evidence of an ecosystem's health or its impending vulnerability. Not surprisingly, the world's amphibian population has experienced a decline in recent years; research indicates that factors include chemical contamination from pesticide use, acid rain, and fertilizers, the introduction of foreign predators, and increased UV-B exposure from a weakened ozone layer that may damage fragile eggs. Though the red-eyed tree frog itself is not endangered, its rainforest home is under constant threat. | <urn:uuid:c926620f-8715-4a4f-a8b2-d483eff3fd4e> | 4.03125 | 555 | Knowledge Article | Science & Tech. | 49.431952 |
The central regions of our Galaxy are dominated by the crowded stellar bulge. Although stars in the bulge are typically observed to be almost as abundant in metals as the sun, the bulge is generally considered to have been one of the earliest parts of the Galaxy to form.
This project is on microlensed stars in the Galactic bulge. Microlensing occurs when a "lens" object (typically a low-mass star) becomes closely aligned with a more distant "source" star, whose image it magnifies and brightens. In the case of the bulge, the source stars are dwarfs which become bright enough while they are being microlensed (typically for a couple of weeks) to allow high resolution spectroscopy. So far, about 48 examples have been observed in this way, and some of them appear to be quite young. This does not fit into the usual picture of an old bulge.
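For context, the standard point-lens magnification formula from microlensing theory (background material, not part of the project description) shows why close alignments make faint bulge dwarfs briefly observable: A(u) = (u² + 2) / (u √(u² + 4)), where u is the lens-source separation in units of the Einstein radius, and A grows without bound as the alignment becomes exact.

```python
import math

def magnification(u: float) -> float:
    """Point-source, point-lens microlensing magnification; u is the
    lens-source angular separation in Einstein radii."""
    return (u * u + 2) / (u * math.sqrt(u * u + 4))

for u in (1.0, 0.5, 0.1, 0.01):
    print(f"u = {u:>4}: A = {magnification(u):.1f}")
# u = 1.0 gives A ~ 1.34, while u = 0.01 gives A ~ 100 -- close alignments
# brighten a dwarf enough for high-resolution spectroscopy for a week or two.
```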
The geometry of microlensing suggests that at least some of these stars may lie in the Galactic disk, on the far side of the bulge. That would be more consistent with the young ages. The aim of this project is to use the best available models of the bulge and inner disk of the Galaxy to calculate where the microlensed dwarfs are most likely to be located - do they belong to the bulge, or do they preferentially lie in the disk on the far side of the bulge? | <urn:uuid:03268a48-2acd-4a8d-82dd-35a1cd56f169> | 3.65625 | 294 | Academic Writing | Science & Tech. | 45.585237 |
This X-ray movie of the sun, produced by Dr. Steven Hill of NOAA's Space Environment Center, covers October 19, 2001 through November 4, 2001. The data for it come from the GOES Solar X-ray Imager (SXI), an instrument attached to the GOES-12 satellite. The Space Environment Center receives a stream of the data, which it then uses to make space weather alerts and forecast services. The SXI collects one image per minute and varies the exposure settings to allow for three different views to see coronal structures, active regions and solar flares.
The corona is the outermost layer of the sun. The dark regions near the poles are coronal holes. These are areas where the sun's magnetic field extends into space, allowing the hot gas to escape, so those areas are cooler, explaining the darker color. The brightest features here are solar flares, which are violent explosions on the surface of the sun. Solar flares can emit radiation which interferes with satellites near Earth. The origin of solar flares is usually near sunspots. Another notable feature on the surface of the sun is the rotation pattern. The sun does not rotate uniformly; the equator rotates faster than the poles, and even those speeds vary some (a simple empirical model of this is sketched below). This differential rotation is good evidence of the sun's gaseous, fluid nature -- a solid body could not rotate this way.
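A hedged sketch of that empirical model (the coefficients are approximate textbook values of the Snodgrass & Ulrich type, quoted as illustrative numbers rather than taken from NOAA): the sidereal rotation rate is commonly fit as ω(φ) = A + B sin²φ + C sin⁴φ degrees per day, where φ is solar latitude.

```python
import math

# Approximate differential-rotation fit coefficients, degrees per day.
A, B, C = 14.713, -2.396, -1.787

def rotation_period_days(latitude_deg: float) -> float:
    """Sidereal rotation period at a given solar latitude."""
    s2 = math.sin(math.radians(latitude_deg)) ** 2
    omega = A + B * s2 + C * s2 * s2   # angular velocity, degrees per day
    return 360.0 / omega

for lat in (0, 30, 60, 75):
    print(f"latitude {lat:>2} deg: ~{rotation_period_days(lat):.1f} days per rotation")
# The equator completes a turn in roughly 24.5 days while high latitudes take
# well over 30 -- the shearing visible in the X-ray movie.
```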
- Coronal holes: dark areas near the poles
- Solar Flares: Light areas that sporadically pop up
- Faster rotation at equator than poles
- Mike Biere, NOAA/GSD
Steven Hill, NOAA/SEC
- Astronomy, Solar System, Sun, X-ray | <urn:uuid:8bdb4735-f93d-40e3-962e-c1338a2e9a4f> | 3.671875 | 338 | Knowledge Article | Science & Tech. | 44.967672 |
The stamen is the male organ of a flower. Each stamen generally bears four pollen-sacs (microsporangia) which are associated to form the anther, and carried on a stalk called the filament. The development of the microsporangia and the contained haploid spores (called pollen-grains) is closely comparable with that of the microsporangia in gymnosperms or heterosporous ferns. The pollen is set free by the opening (dehiscence) of the anther, generally by means of longitudinal slits, but sometimes by pores, as in the heath family (Ericaceae), or by valves, as in the barberry family. It is then dropped or carried by some external agent — wind, water or some member of the animal kingdom — on to the receptive surface of the carpel of the same or another flower.
Typical flowers have six stamens inside a perianth (the petals and sepals together), arranged in a whorl around the pistil. But in some species there are many more than six present in a flower (the spider tree flower, for example). Collectively, the stamens are called an androecium (from Greek andros oikia: man's house). They are positioned just below the gynoecium. The anthers are bilocular, i.e., they have two locules. Each locule contains a microsporangium. The sterile tissue between the locules is called the connective.
In an immature, unopened bud, the filaments are still short. Their function is then to transport nutrients to the developing pollen. They start to lengthen once the bud opens. The anther can be attached to the filament in two ways:
- basifixed : attached at its base to the filament; this gives rise to a longitudinal dehiscence (opening along its length to release pollen).
- versatile : attached at its center to the filament; pollen is then released through pores (poricidal dehiscence).
Stamens can be connate (fused or joined in the same whorl):
- monadelphous : fused into a single, compound structure
- diadelphous : joined partially into two androecial structures.
- synantherous : only the anthers are connate (such as in the Asteraceae)
Stamens can also be adnate (fused or joined from more than one whorl):
- epipetalous : adnate to the corolla.
- didynamous : occurring in two pairs of different length.
- tetradynamous : occurring as a set of six stamens, four long and two short.
- exserted : extending beyond the corolla.
- included : not extending beyond the corolla.
In the typical flower (that is, in the majority of flowering plant species) each flower has both a pistil and stamens. Such bisexual flowers are called hermaphroditic, or perfect, flowers.
However, in some species the flowers are unisexual, with only male or only female parts (monoecious = both on the same plant; dioecious = on different plants). A flower with only male reproductive parts is called androecious. A flower with only female reproductive parts is called gynoecious.
A flower having only functional stamens is called a staminate flower.
An abortive or rudimentary stamen is called a staminodium, as in Scrophularia nodosa.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | <urn:uuid:587760ae-bd7d-4208-89b3-8bc8d257723c> | 3.859375 | 800 | Knowledge Article | Science & Tech. | 35.617941 |
Camelopardalis and Ursa Minor - Downloadable article
In the Giraffe and Little Bear, galaxies, nebulae, star clusters, and other stellar arrangements dance around the North Celestial Pole.
March 3, 2009
This downloadable article is from an Astronomy magazine 45-article series called "Celestial Portraits." The collection highlights all 88 constellations in the sky and explains how to observe each constellation's deep-sky targets. The articles feature star charts, stunning pictures, and constellation mythology. We've put together 11 digital packages. Each one contains four Celestial Portraits articles for you to purchase and download.
"Camelopardalis and Ursa Minor" is one of four articles included in Celestial Portraits Package 11.
This month we look at two constellations that are well placed year-round when viewed from mid-northern latitudes. One contains the Northern Hemisphere's "stationary" pole star. The other fills in the north polar sequence, a set of stars used to define magnitudes, between Cassiopeia and the Big Dipper.
Camelopardalis the Giraffe is a sizeable constellation containing no stars brighter than 4th magnitude. The most prominent pattern of this star-poor area strings together five stars in a long arc near the Perseus border. The Milky Way slices through the southwestern corner of the constellation, resulting in fields rich with clusters and nebulae. Only three stars have Bayer (Greek letter) designations, although there are another dozen brighter than magnitude 5. Alpha (α) Camelopardalis is a distant giant shining with the light of tens of thousands of suns, while Gamma (γ) Camelopardalis resides 300 light-years away, virtually in our neighborhood. To read the complete article, purchase and download Celestial Portraits Package 11.
Deep-sky objects in Camelopardalis and Ursa Minor:
Stock 23, Kemble's Cascade, NGC 1501, NGC 1502, IC 356, IC 342, Collinder 464, S Camelopardalis, NGC 2146, NGC 2336, NGC 2366, NGC 2403, NGC 2655, NGC 3172, IC 3568, UGC 9749, NGC 6217 | <urn:uuid:0fe5831f-7e8e-4c0f-a61f-4d46d13fbdf3> | 3.03125 | 465 | Truncated | Science & Tech. | 45.736007 |
We've shared with you the story of rafting ants, but there's so much more to these little insects. We thought we would take recent ant stories and share them with you on this Friday news round-up.
Ants Defending Trees
Did you know that some ants defend their tree homes from invading plants? But how can they tell the difference between the host tree and other plants? Colorado State University scientists decided to find out.
Ants known as Pseudomyrmex triplarinus are found in the Peruvian rainforest and have evolved a symbiotic relationship with Triplaris americana trees, receiving shelter and sustenance in return for defense.
“The ants inhabit hollow channels inside the tree and aggressively fight off any invaders including other plants, yet how these ants recognize their host tree compared to other plants has not been studied,” said lead author Tiffany Weir. “We found that the ants distinguish between their host trees and encroaching species through recognition of the plant's surface waxes.”
Once a competing plant is recognized the ants prune them to defend their host. The research is published in the journal Biotropica.
Would Ants Facebook?
Scientists have assumed for years that interaction networks without central control (think Facebook, Twitter and even the spread of disease) have universal properties that make them efficient at spreading information. Just think of the local grapevine—let something slip, and it seems like no time at all before nearly everyone knows.
But University of Arizona researchers, studying ants, have found that not all of these networks function alike. Their study was published last month in PLoS ONE.
The researchers chose to use ant colonies as models for self-directed networks because they are composed of many individual components—the ants—with no apparent central organization and yet are able to function as a colony.
This research is incredibly detail-oriented. After relocating ant colonies to their lab, the scientists painted each one to identify the individuals. They then filmed the ants, recording roughly 9,000 interactions between 300 to 400 individual ants.
The results were surprising. Contrary to predictions that ant networks would spread information efficiently in the same way as other self-directed networks, the researchers found that the ants actually are inefficient at spreading information.
According to lead author Benjamin Blonder:
They could be just walking around completely randomly bumping into each other. We were able to show that the real ants consistently had rates of information flow that were lower than even that expectation. Not only are they not efficient, they're also slower than random. They're actually avoiding each other.
So this raises a big question: If you have this ant colony that is presumably very good at surviving and persisting, and there are a lot of good reasons to think it's optimal to get messages from one part to the other, how come they don't do it?
One possible explanation is a concept most of us already are familiar with: “If you spend too much time interacting, then you're not actually getting anything done,” said Blonder.
Another possibility is that individual ants are responsible for only their region and only need to communicate with other ants in that region.
So why does all of this matter? Understanding how interaction networks function could have applications from building self-directed networks to perform specific functions (such as unmanned drones to explore other planets) to preventing the spread of disease.
Can we hear some catcalls, please? Go over to the Daily Mail to see what we mean. These gorgeous, hi-res ant images come from Antweb, a project that calls the Academy home and “provides tools for exploring the diversity and identification of ants,” according to the website. The goal is to image all ant species, around 12,000 of ‘em. Check out a few from the nearly 5,000 already on the web.
Image: Benjamin Blonder | <urn:uuid:4c5c7e06-e9c0-4706-9de6-8709639b8606> | 3.453125 | 803 | Content Listing | Science & Tech. | 43.31362 |
Let denote a constant interval number system, built from an underlying number system :
A general methodology for constructing constant interval models of real functions will be presented in this section. We will assume that an order-preserving mapping exists:
Throughout this section we may treat members of the interval number system as constant functions, to ease the upcoming transition to linear interval arithmetic. Rather than describe the procedures in a formal language, we will discuss evaluations of g with examples. It is understood that much of the examination of g occurs while the interval model is being implemented, rather than during execution. Of course, such examination is possible during execution, and may be useful for complicated functions; interval arithmetic may be used to help perform such examinations. Complicated functions may be handled without direct analysis; the interval inclusion property allows such functions to be treated as compositions of simpler functions.
Knowledge of basic vector calculus is assumed; see for reference. See, for example, [19, 27] for other approaches to the implementation of constant interval arithmetic.
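Although some formal symbols in this passage were lost in transcription, the machinery it describes can still be made concrete. Below is a minimal C sketch of a constant interval system over machine doubles; the names are ours, not Tupper's, and a careful implementation would also round endpoints outward (e.g., by controlling the FPU rounding mode), which is omitted here:

```c
/* Minimal constant interval arithmetic sketch (outward rounding omitted). */
#include <stdio.h>

typedef struct { double lo, hi; } interval;

static interval iv(double lo, double hi) { interval x = { lo, hi }; return x; }

static interval iv_add(interval a, interval b)
{
    return iv(a.lo + b.lo, a.hi + b.hi);
}

static interval iv_mul(interval a, interval b)
{
    double p[4] = { a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi };
    double lo = p[0], hi = p[0];
    for (int i = 1; i < 4; i++) {
        if (p[i] < lo) lo = p[i];
        if (p[i] > hi) hi = p[i];
    }
    return iv(lo, hi);
}

/* Inclusion property: evaluating the composition g(x) = x*x + x over an
   interval X yields an interval containing g(x) for every x in X. */
int main(void)
{
    interval x = iv(-1.0, 2.0);
    interval g = iv_add(iv_mul(x, x), x);   /* encloses x^2 + x on [-1, 2] */
    printf("g([-1, 2]) is contained in [%g, %g]\n", g.lo, g.hi);
    return 0;
}
```

Note the characteristic overestimation: the true range of x² + x on [−1, 2] is [−0.25, 6], while the naive interval evaluation returns [−3, 6]; the inclusion property guarantees enclosure, not tightness.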
|Jeff Tupper||March 1996| | <urn:uuid:4e24b5af-923c-45ea-a9c8-68cc3c22d78a> | 2.984375 | 209 | Academic Writing | Science & Tech. | 23.682267 |
TLS is a Win32 mechanism that allows multiple threads of a process to store and retrieve data that is unique for each thread. Any one thread allocates an index, and then this index is available for use by all the threads in the process. Each thread in the process has its own TLS slot for this index where a 32-bit pointer to data can be stored and retrieved from. TLS is used as follows:
- Use the Win32 API TlsAlloc to allocate a TLS index.
- In a thread that needs to use TLS storage, allocate dynamic storage and then use the Win32 API TlsSetValue to associate the index with a pointer to the dynamic storage.
- In the same thread, when any piece of code wants to retrieve the data structure stored in the specified TLS index, use the Win32 API TlsGetValue, which retrieves the pointer stored using TlsSetValue.
- Use the Win32 API TlsFree to free the index when all the threads are done using the index. In addition, each thread must free the dynamic storage it allocated to associate with the TLS index.
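A minimal C sketch of these four steps (error handling abbreviated; the per-thread payload here is an arbitrary int chosen purely for illustration):

```c
/* Minimal Win32 TLS example following the four steps above. */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

static DWORD g_tlsIndex;            /* one index, shared process-wide */

static DWORD WINAPI ThreadProc(LPVOID param)
{
    /* Step 2: allocate dynamic storage and associate it with the index. */
    int *slot = malloc(sizeof *slot);
    *slot = (int)(INT_PTR)param;    /* some per-thread data */
    TlsSetValue(g_tlsIndex, slot);

    /* Step 3: any code running on this thread can retrieve the pointer. */
    int *mine = (int *)TlsGetValue(g_tlsIndex);
    printf("thread %lu sees value %d\n", GetCurrentThreadId(), *mine);

    free(mine);                     /* each thread frees its own storage */
    return 0;
}

int main(void)
{
    /* Step 1: allocate a TLS index for the whole process. */
    g_tlsIndex = TlsAlloc();
    if (g_tlsIndex == TLS_OUT_OF_INDEXES) return 1;

    HANDLE h[2];
    for (int i = 0; i < 2; i++)
        h[i] = CreateThread(NULL, 0, ThreadProc,
                            (LPVOID)(INT_PTR)(i + 1), 0, NULL);
    WaitForMultipleObjects(2, h, TRUE, INFINITE);
    for (int i = 0; i < 2; i++) CloseHandle(h[i]);

    TlsFree(g_tlsIndex);            /* Step 4: free the index */
    return 0;
}
```

Each thread sees only the value it stored, even though both threads use the same index.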
The APIs TlsSetValue and TlsGetValue are designed for extremely fast storage and retrieval. As a result, these APIs do minimal parameter checking. For example, as long as the index specified in these APIs is less than the value TLS_MINIMUM_AVAILABLE, these APIs succeed, even though the index may not have been allocated. It is up to the programmer to ensure that the index is valid. | <urn:uuid:eb62f10c-9498-4443-bb08-ea43130fe99c> | 3.125 | 317 | Documentation | Software Dev. | 48.938868 |
Bored Wombat wrote:
Small technical point: isn't it winter in the Antarctic now? And don't we know why Antarctic sea ice is expanding in the summer anyway? http://news.nationalgeographic.com/news ... vironment/
Some people try and argue that it's the Ozone Hole that has created circulation changes that have managed to cool Antarctica over the last 30 years,
I think that the circulation changes that have insulated Antarctica (apart from West Antarctica) from global warming explain well the geographical distribution of the sea ice build-up and loss in Antarctica. I believe that there is evidence that ozone plays a part in the circulation changes, but also evidence that it does not.
but I don't buy that explanation, especially since the Ozone Hole has recovered somewhat recently, and we have continued to see an increase in Antarctic Sea Ice.
Depending on your meaning of "somewhat". It is still near the deepest and largest it has ever been, give or take only a few percent, whereas the minimum O₃ concentration in the hole is still half what it was in 1979.
Perhaps it is the same reason for why the Arctic Sea Ice is declining... the oceanic oscillations.
I believe the cause of the declining Arctic sea ice is much less complicated. It is attributed to anthropogenic global warming. If it were due to oscillations, it would be oscillating, not declining.
Ozone may have contributed to the cooling of Antarctica, but I think that we need to understand all of the processes first, before we say things with absolute certainty.
Stratospheric temperatures have started increasing recently (since 1995), which could represent a recovery in the Ozone Layer (Liu and Weng 2009).
I also think that Ozone Depletion is far more complicated than what most people make it out to be. Anthropogenic CFCs had played a major role in Ozone Depletion, but there were also major natural components like Solar Proton Storms that also depleted the Ozone layer.
I think that most of the warming in the Arctic is probably due to regional climate change up there amplified by Arctic Amplification, but you can not deny that natural variability has played a significant role in Arctic Sea Ice depletion. Various studies that I have read estimate the contribution from natural variability to sea ice decline to be around 40-50% or so. | <urn:uuid:6a403074-b828-47d5-a994-5726a46775e8> | 2.6875 | 500 | Comment Section | Science & Tech. | 44.286014 |
The wars and rebellions that punctuated China's ancient dynasties have inspired epic books and films. Now it seems the course of the nation's history may have been influenced by a rather more mundane force: the weather.
China's archives track the lives of the country's clans over the last millennium in voluminous detail. This inspired David Zhang at the University of Hong Kong and his colleagues to scour the documents for links between conflict and climate. They found that periods of cold weather preceded 12 of the 15 major bouts of warfare they studied.
The link makes sense, they say, since cold weather would have prompted food shortages in what was then an overwhelmingly agrarian society. Peasant uprisings would follow, destabilising governments and inviting invasions from neighbouring regions (Human Ecology, vol 35, p 403).
Zhang is not the first to suggest a link between climate and conflict. Previous studies ...
| <urn:uuid:5d1497a9-13f9-462f-9bfe-c5ad483cdf69> | 3.046875 | 213 | Truncated | Science & Tech. | 54.209258 |
Caitlin Stier, video intern
Stare at the ellipses in this video and you'll start to experience trippy effects. The illusion, developed by vision researchers Gideon Caplovitz and Kyle Killebrew of the University of Nevada in Reno, features curved shapes spinning at a constant rate where changes in contrast, width and colour alter your perception.
At first, a single thin ellipse multiplies. If you fix your eyes on the centre of the screen, dark smudges may appear to blot out the centre of the ellipses as your eyes are quickly exposed to stimuli with highly contrasting colours. Next, the shapes gradually become plumper, a transformation that makes the objects seem to slow down and rotate more fluidly. The animation then appears in different colours, which seems to enhance the effect. Focussing on the colourful ellipses while the colours fade can also make them turn into polygons, a change that seems to occur at different times from person to person.
According to Caplovitz, the apparent change in speed is caused by how we sample information around us to detect movement. As the ellipses become chubbier, their shape weakens the sense of motion and they appear to move more slowly compared to their elongated counterparts. When colour is added into the mix, it also distracts from picking up movement. As the shapes become nearly circular, they seem to roll like jelly due to ambiguous information about their rotation.
The shape-shifting effect induced by colour was a surprise to Caplovitz. Although he was able to identify that it occurs when all colours look equally illuminated, he is still puzzled by individual variations in perception. "What's really neat is that it can be unique for every person," says Caplovitz. He hopes to hone in on the moment when the distortion looks most extreme for different people.
Were you able to see any or all of these effects? Let us know in the comments section below. | <urn:uuid:1a7d6270-218d-48b8-aad5-39a9687b8e67> | 3.375 | 406 | Personal Blog | Science & Tech. | 42.970433 |
Earth Science Literacy - Big Idea 3
Earth is a complex system of interacting rock, water, air, and life.
Big Idea 3.1
The four major systems of Earth are the geosphere, hydrosphere, atmosphere, and biosphere. The geosphere includes a metallic core, solid and molten rock, soil, and sediments. The atmosphere is the envelope of gas surrounding Earth. The hydrosphere includes the ice, water vapor, and liquid water in the atmosphere, the ocean, lakes, streams, soils, and groundwater. The biosphere includes Earth’s life, which can be found in many parts of the geosphere, hydrosphere, and atmosphere. Humans are part of the biosphere, and human activities have important impacts on all four spheres.
Big Idea 3.2
All Earth processes are the result of energy flowing and mass cycling within and between Earth’s systems. This energy is derived from the sun and Earth’s interior. The flowing energy and cycling matter cause chemical and physical changes in Earth’s materials and living organisms. For example, large amounts of carbon continually cycle among systems of rock, water, air, organisms, and fossil fuels such as coal and oil.
Big Idea 3.3
Earth exchanges mass and energy with the rest of the Solar System. Earth gains and loses energy through incoming solar radiation, heat loss to space, and gravitational forces from the sun, moon, and planets. Earth gains mass from the impacts of meteoroids and comets and loses mass by the escape of gases into space.
Big Idea 3.4
Earth’s systems interact over a wide range of temporal and spatial scales. These scales range from microscopic to global in size and operate over fractions of a second to billions of years. These interactions among Earth’s systems have shaped Earth’s history and will determine Earth’s future.
Big Idea 3.5
Regions where organisms actively interact with each other and their environment are called ecosystems. Ecosystems provide the goods (food, fuel, oxygen, and nutrients) and services (climate regulation, water cycling and purification, and soil development and maintenance) necessary to sustain the biosphere. Ecosystems are considered the planet’s essential life-support units.
Big Idea 3.6
Earth’s systems are dynamic; they continually react to changing influences. Components of Earth’s systems may appear stable, change slowly over long periods of time, or change abruptly with significant consequences for living organisms.
Big Idea 3.7
Changes in part of one system can cause new changes to that system or to other systems, often in surprising and complex ways. These new changes may take the form of “feedbacks” that can increase or decrease the original changes and can be unpredictable and/or irreversible. A deep knowledge of how most feedbacks work within and between Earth’s systems is still lacking.
Big Idea 3.8
Earth’s climate is an example of how complex interactions among systems can result in relatively sudden and significant changes. The geologic record shows that interactions among tectonic events, solar inputs, planetary orbits, ocean circulation, volcanic activity, glaciers, vegetation, and human activities can cause appreciable, and in some cases rapid, changes to global and regional patterns of temperature and precipitation. | <urn:uuid:48bbf87d-9e97-4d1b-8d4b-02c598ed41f5> | 3.921875 | 689 | Tutorial | Science & Tech. | 43.589 |
Hamiltonian Circuits in Plane Graphs
Department of Mathematics
York College (CUNY)
Jamaica, New York 11451
Attempts to prove the Four Color Conjecture (now the Four Color Theorem), which asked if every plane graph could be face colored with 4 or fewer colors, resulted in considerable interest in whether or not plane 3-valent 3-connected graphs have Hamiltonian circuits. The question of whether or not a plane graph can be 4 face colored can be reduced to the question of whether or not a 3-valent, 3-connected (hence, 3-polytopal) plane graph can be 4 face colored. It is easy to see, as was observed by Hassler Whitney, that any plane graph (whatever its valences might be) which has a Hamiltonian circuit can be 4 face colored. The idea for how to show this in general is illustrated via the diagram in Figure 1. The edges which are not part of the Hamiltonian circuit C appear either in the interior of C or in its exterior. The interior diagonals have the property that faces on opposite sides of any (interior) diagonal must have different colors. So, one chooses any region in the interior and colors it, say, A. Now every time one crosses an interior diagonal one alternates between A and a second color B; these two colors suffice for all the interior faces. Similarly, one can alternately color the faces in the exterior of the Hamiltonian circuit with two further colors C and D.
Note that it may well be possible to face color the plane graph with fewer than 4 colors, but we are not looking for the value of the chromatic number (minimal number of face colors) for the particular graph, only for a guarantee that it can be colored with 4 or fewer colors.
For a long time it appeared that it might be true that all plane 3-valent 3-connected graphs have Hamiltonian circuits. Part of the problem is that a smallest counterexample, we now know, has 38 vertices. However, William Tutte, in 1946, was the first to construct a 3-valent, 3-polytopal graph which lacks a Hamiltonian circuit (HC).
For a long time it appeared to be hard to come by examples of non-hamiltonian 3-valent 3-polytopal graphs. However, in 1968 a surprisingly simple but clever argument, provided by the Latvian mathematician Emanuel Grinberg, made it possible to construct examples of such graphs relatively easily. Easy constructions of non-hamiltonian graphs follow from the use of the formula:

Σi (i − 2)(p'i − p''i) = 0 (*)
Here, p'i denotes the number of faces with i sides in the interior of an HC in a plane graph, and p''i the number of faces with i sides in the exterior of the HC.
To clarify the notation in (*) and how the proof goes, examine the diagram in Figure 2.
We start with our graph embedded in the plane and an HC indicated. For the example above we have:
|value of i||p'i||p''i|
Substituting in (*) we have:
0-2-6+0+6+6-7 = 0 as required if Grinberg's equation holds.
Let us denote by n the number of vertices in our plane graph. Furthermore let the number of edges that are interior to the hamiltonian circuit, the "interior chords" or "interior diagonals" of the circuit be denoted by di. Similarly, de will denote the number of edges that are exterior to the hamiltonian circuit, the "exterior chords" or "exterior diagonals."
Note that interior diagonals are edges of two interior faces, while the exterior diagonals are edges of two exterior faces. Furthermore, the edges of the hamiltonian circuit, which are the same in number as the number of the vertices of the graph (since the vertices lie on a hamiltonian circuit) touch one interior face and one exterior face.
These observations allow us to see that if we weight the number of interior faces with i sides by i, we get the number of vertices in the graph plus the number of interior diagonals counted twice, and similarly for the exterior faces:

Σi i·p'i = n + 2di and Σi i·p''i = n + 2de
Subtracting, we get that:

Σi i·(p'i − p''i) = 2(di − de) ($)
However, the number of interior faces is given by 1 + di and the number of exterior faces is given by 1 + de.
Hence, we have:

di = Σi p'i − 1 and de = Σi p''i − 1
When we substitute these in ($) we get:

Σi i·(p'i − p''i) = 2(Σi p'i − Σi p''i)
which simplifies to the Grinberg equation (*)!
The ways in which the Grinberg equation is applied are varied, but the most straightforward is to construct a plane graph whose faces all have a number of sides congruent to 2 mod 3, except for one face S whose number of sides is not congruent to 2 mod 3. Since the face S must be either interior to or exterior to any HC in such a graph, the Grinberg equation gives rise to a contradiction when looked at modulo 3. All the faces which have a number of sides such as 5, 8, 11, etc. give rise to zero terms mod 3, while the face S contributes (|S| − 2) with either a positive or negative sign.
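The bookkeeping is simple enough to automate. A small C sketch follows (the face sizes are hypothetical inputs, not taken from any figure in this article): given a proposed assignment of faces to the interior and exterior of an HC, it evaluates the Grinberg sum, which must vanish for the assignment to be possible.

```c
/* Evaluate Grinberg's sum  S = sum over i of (i - 2) * (p'i - p''i)
   for a proposed partition of the faces into interior and exterior.
   If S != 0, no HC realizes that partition; if S != 0 for every
   partition, the graph has no hamiltonian circuit at all. */
#include <stdio.h>

static long grinberg_sum(const int *inside, int n_in,
                         const int *outside, int n_out)
{
    long s = 0;
    for (int k = 0; k < n_in;  k++) s += inside[k]  - 2;  /* +(i-2) terms */
    for (int k = 0; k < n_out; k++) s -= outside[k] - 2;  /* -(i-2) terms */
    return s;
}

int main(void)
{
    /* Hypothetical example: faces of sizes 4, 4, 5 inside; 4, 6 outside. */
    int in[]  = { 4, 4, 5 };
    int out[] = { 4, 6 };
    long s = grinberg_sum(in, 3, out, 2);
    printf("Grinberg sum = %ld (%s)\n", s,
           s == 0 ? "condition satisfied" : "no HC with this partition");
    return 0;
}
```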
There are a variety of other things to notice here.
a. If a 3-valent graph has a hamiltonian circuit C, the edges not in the circuit C form a perfect matching of the graph. A perfect matching is a collection of disjoint edges which includes all of the vertices of the graph.
b. If a 3-valent graph has an HC then the edges of the graph can be edge colored with 3 colors. This is done by alternately coloring the edges of the HC with, say, the colors a and b. Now the remaining edges of the graph, those which constitute a perfect matching can be colored with a third color.
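Observation b can be made concrete. A small C sketch, using a hypothetical cubic graph (this particular example is K3,3: a 6-cycle plus three long diagonals):

```c
/* 3-edge-color a cubic graph from a hamiltonian circuit: alternate two
   colors around the circuit (a cubic graph has an even number of
   vertices, so the alternation closes up), then give every remaining
   perfect-matching edge the third color. */
#include <stdio.h>

int main(void)
{
    int circuit[] = { 0, 1, 2, 3, 4, 5 };   /* HC as a cycle of vertices */
    int n = 6;                              /* even, as it must be */

    for (int k = 0; k < n; k++)             /* circuit edges: colors a, b */
        printf("edge %d-%d : %c\n",
               circuit[k], circuit[(k + 1) % n], (k % 2 == 0) ? 'a' : 'b');

    int matching[][2] = { {0, 3}, {1, 4}, {2, 5} };  /* non-circuit edges */
    for (int k = 0; k < 3; k++)             /* perfect matching: color c  */
        printf("edge %d-%d : c\n", matching[k][0], matching[k][1]);
    return 0;
}
```

At every vertex the two incident circuit edges receive different colors because they are consecutive in an alternating even cycle, and the matching edge supplies the third color, so the coloring is proper.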
There are still many areas of investigation on the interface between colorings, hamiltonian circuits, and polyhedra.
One simple to state problem which is still open is:
Does every plane, 3-valent, 3-connected, bipartite graph have a hamiltonian circuit?
If any of the conditions above (plane, 3-valent, 3-connected, bipartite) is omitted, the statement fails to be true.
Grinberg, E., Plane homogeneous graphs of degree three without hamiltonian circuits, Latvian Math. Yearbook, 5 (1968) 51-58.
Tutte, W. T. "On Hamiltonian Circuits." J. London Math. Soc. 21, 98-101, 1946. | <urn:uuid:71f63e8d-318f-4b25-bdf2-909eaae999f2> | 3.015625 | 1,381 | Academic Writing | Science & Tech. | 55.139792 |
Yes, there are cases where one gene has become two. Or, at least, where multiple functions carried out by a single protein, the product of one gene, are carried out by distinct proteins, the products of different genes, in another species.
One case I have personally worked with is the bacterial SelB protein. It is essential for selenoprotein biosynthesis and has a dual function as an elongation factor and RNA binding protein. In eukaryotes, these functions are split between two proteins coded for by different genes: the Sec-specific elongation factor EFsec and the RNA binding protein SBP2. See here for more information.
Chimeric transcripts are also examples of this. One I happen to know of is the protein secp43, which in the mosquito A. gambiae is encoded by genomic sequences found on two different chromosomes. Essentially, by two different genes.
The problem is that although we can easily find such cases of one protein's function being shared between multiple proteins in other species, or of one protein being coded for by multiple transcripts, it is hard to reconstruct the evolutionary history that gave rise to the current situation. So, whether the current situation has arisen because of alpha complementation or other causes is hard to know.
1) Lescure A, Fagegaltier D, Carbon P, Krol A., Protein factors mediating selenoprotein synthesis. Curr Protein Pept Sci. 2002 Feb;3(1):143-51.
2) Chapple CE, Guigó R., Relaxation of selective constraints causes independent selenoprotein extinction in insect genomes. PLoS One, 2008, 3(8):e2968. | <urn:uuid:495b0143-8e8a-47df-b579-b1c003826298> | 2.9375 | 348 | Q&A Forum | Science & Tech. | 47.72639 |
Project: BOREAS
The Boreal Ecosystem-Atmosphere Study was a large-scale international interdisciplinary experiment in the boreal forests of central Canada. Its focus was improving our understanding of the exchanges of radiative energy, sensible heat, water, CO2 and trace gases between the boreal forest and the lower atmosphere. A primary objective of BOREAS was to collect the data needed to improve computer simulation models of the important processes controlling these exchanges so that scientists can anticipate the effects of global change, principally altered temperature and precipitation patterns, on the biome. Three companion files are available for BOREAS.
Data Set: BOREAS HYD-03 Subcanopy Radiation Data
This table contains the sub-canopy radiation data collected by HYD-3.
Detailed Documentation: Data Set Reference Document
Hardy, J. P., and R. E. Davis. 1998. BOREAS HYD-03 Subcanopy Radiation Data. Data set. Available on-line [http://www.daac.ornl.gov] from Oak Ridge National Laboratory Distributed Active Archive Center, Oak Ridge, Tennessee, U.S.A. doi:10.3334/ORNLDAAC/266
Data Set Files: Download (1.7 MBytes in 1 File)
All Data Taken At Latitude: 53.99N To 53.63N, Longitude: 104.69W To 106.20W
| <urn:uuid:656367f3-a116-44c1-8f5d-c8800e914f90> | 3.265625 | 379 | Structured Data | Science & Tech. | 47.51854 |
One way RNA-based life might have survived could have been to retreat to niches where DNA-based life could not compete. RNA-based organisms might not make proteins, and so they could live where key ingredients for proteins, like sulfur, are absent. RNA-based organisms might also be far smaller than DNA-based life, allowing them to fit in fine rock pores where conventional microbes could not exist. Then again, these extreme environments may not be needed for RNA life to be flourishing today without us knowing it. Even if RNA life were living out in the open, “the life detection tools that we have today would not find it,” Benner says.
The reason why biology’s standard tools would fail to detect an RNA-based organism is because they assume that all metabolisms must be similar to our own. For example, one popular way to look for microbial life is to scoop up some soil, water, or even air, and extract all the DNA it contains. In this way, researchers can reconstruct genes and sometimes even entire genomes of species that are new to science. In March, genome pioneer J. Craig Venter and his colleagues published the sequences of 6 million new genes they had collected by trawling the world’s oceans. As powerful as this technique is, however, it has a big limitation: It can identify only DNA. Venter’s samples could have been full of RNA-based life that would have slipped through his net.
Yet even RNA-based organisms would still be distant cousins of our DNA-based selves. Davies has been contemplating the possibility that we share the planet with even more exotic beings. He is focusing on the idea that life might have begun more than once on Earth, each time taking a very different form. “I’ve had this idea for a few years,” he says. “Why did life have to happen only once on Earth? There’s no real deep reason why.” Davies and his colleague Charles H. Lineweaver of Australian National University did a rough calculation based on the geologically short time between the end of the early bombardment of Earth and the indications of the first signs of life. They estimate there is a 95 percent chance that life originated twice or more.
It is possible that the other form (or forms) was snuffed out by a giant impact in the early years of Earth. But Davies argues that we cannot rule out the possibility that it survived. It may have escaped disaster deep underground. Or perhaps a microbe-bearing rock was hurled by an impact into space and landed on Venus or Mars, which may have been more hospitable to life billions of years ago. An impact on one of those planets could have sent the descendants of the microbial refugees back home to Earth—or even seeded Earth with other life-forms that arose there.
There’s no reason in physics or chemistry why these different ways of building a life-form wouldn’t work. If these alternative life-forms did emerge on Earth, though, they would have eventually had to compete with DNA-based life for living space. At least at the level of multicellular creatures—fungi, animals, plants, algae—scientists are pretty sure that DNA-based life forms did beat out their competition (just look around). But Davies reiterates the warning that we can’t assume that DNA-based creatures automatically eradicated all other life-forms from the planet. After all, life as we know it is surprisingly diverse. Recent estimates put the number of microbes in the ocean at 360 octillion—that’s 36 followed by 28 zeros. A typical scoop of water may have a few very common species living alongside thousands of very rare ones. Alternative life-forms might find there is actually a lot of room to survive in such an ecosystem.
One way to find out is to try to create alternative life-forms ourselves and see how they do alongside their bacterial brethren. Today scientists are already tinkering with DNA-based life to create new kinds of organisms. Some are rewriting the genetic code, for example. All living things use a four-letter alphabet to spell genes with DNA. They build proteins from a 20-letter alphabet of amino acids. But chemists can create other alphabets. Engineered bacteria now use amino acids that have never existed in nature to build proteins.
All this genetic engineering is a lot of work, however. Why not just search for alternative microbial life directly? It’s not as easy as it sounds: Under a microscope, they would probably look like utterly normal microbes, even though inside they would be hiding radically different molecular machinery. As with RNA-based life-forms, ordinary ways of detecting that machinery would overlook them. “It’s like looking for a common English phrase in a book written in French,” says Davies. “You’re not going to find it.”
When Davies first began to ponder multiple origins of life a few years ago, he felt very much alone. But recently he and other like-minded scientists have been joining forces. In December 2006, he hosted the first meeting on the subject. Some of the attendees, Davies among them, are now thinking about the practical challenges posed by detecting alternative or alien life on Earth.
“We know that life could be different, but we don’t know how different,” says Carol Cleland, a philosopher at the University of Colorado. “There are people poised to [search] seriously, but it’s hard to think of where to start,” she says. Cleland thinks one good place would be desert varnish, a mysterious coating of iron and manganese that coats the ground and cliffs in many deserts of the world. Geologists don’t have many explanations for how it could form through ordinary chemistry, she says, and there’s little evidence that ordinary bacteria are responsible. Some supporters of alternative life suggest looking for signs of metabolic activity in varnishes, which could be detected by watching for the flow of radioactive tracers through any hidden organisms.
“There are a whole range of things one could look for,” says Davies. He suggests designing new probes for exotic genetic material. Another possibility might be to discover new life by process of elimination, which entails putting metabolically active samples gathered from as many locations as possible into petri dishes—and then trying progressively harder to kill whatever is in the dish. That is how scientists discovered an astonishing microbe called Deinococcus radiodurans, which can withstand more than 1,000 times the amount of radiation required to kill a human. They simply blasted a collection of bacteria with radiation until they were all dead, except for Deinococcus. “If you could eliminate life as we know it and something was still growing, that might be life as we don’t know it,” says Davies.
Benner warns that the search for alternative life on Earth will probably move at a glacial pace. “I can’t even offer you hope,” he says. NASA has been shifting funding away from life-detection technology development toward a manned return to the moon—the so-called Vision for Space Exploration. Other funding agencies would probably look askance at such a peculiar proposal. “You don’t know how to put on a sheet of paper the design of a device you’d use to detect alternative life,” says Benner. It would also be hard to rule out false positives in such a search.
Yet Davies points out that looking for alternative life on Earth is potentially a much cheaper and simpler way to ask the same big question astrobiologists would like to answer by searching for life elsewhere in the solar system or on worlds beyond. Is life easy for the universe to make, or is it hard? Is it a rare fluke, or a cosmic imperative? “If life is easy to make and is widespread, then it should have happened many times on Earth,” Davies says. “The best way to test for that is to look for it.” | <urn:uuid:b990b9aa-461b-498b-8138-dcd134c8c8d8> | 4.375 | 1,686 | Nonfiction Writing | Science & Tech. | 47.987713 |
FREE ENERGY, SIMPLY EXPLAINED
Imagine a cylinder, closed at one end and fitted with a sliding piston at the other. Trapped between the piston and the blind end is an ideal gas. (Visualize a lot of particles flying around like ricocheting rifle bullets, with random energies - loosely speaking, speeds - and random positions. Importantly, they neither attract nor repel one another.)
Push in the piston and then allow the gas to cool back to the temperature of the surroundings, for pushing in the piston will have warmed the gas up. It is easy to see that the gas will, if allowed to do so, push the piston back out again, to its original position. This is one example of "free energy".
It is called free energy as opposed to just energy because there is no energy stored in the compressed gas. The compressed gas cannot contain any potential energy (the energy something has by dint of its position in a force field, such as the energy deemed to be in a brick balanced on the lip of a cliff) because there is no force acting between the particles. There is no force tending to push the particles apart. This is not what pushes out the piston.
What then is responsible for the piston being pushed out again? The particles bang against the piston and push it out with innumerable tiny blows. Each time they hit they transfer a little of their kinetic energy (the energy of motion, the "smashing power" of a flying brick) to the piston; each time they hit they slow down a little.
Slowing down equals cooling in thermodynamics, where temperature is identified as a measure of how fast particles are moving. (More properly: temperature is proportional to the kinetic energy of the average particle.) The slowed, cooled, particles pick up kinetic energy from the surroundings when they bang against the - vibrating - particles that make up the walls of the cylinder. This speeds them up again and the whole process repeats.
Thus the process consists in converting the heat energy of the surroundings into work energy (the capacity to push a mass with a force through a distance) and using this work energy to drive the piston back out. It is not energy stored in the compressed gas which pushes the piston, it is the compressed gas's ability to convert heat to work - in thermodynamic terms this facility ultimately derives from the lowering of its entropy during the compression.
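To make this quantitative, here is a standard ideal-gas calculation (ours, not the original poster's). For n moles compressed isothermally at temperature T from volume V1 down to V2:

ΔU = 0,  ΔS = nR ln(V2/V1) < 0,  ΔF = ΔU − TΔS = nRT ln(V1/V2) > 0.

On isothermal re-expansion the maximum work recovered is W = nRT ln(V1/V2) = ΔF, and since ΔU = 0 every joule of that work arrives as heat absorbed from the surroundings. The "lowering of its entropy during the compression" is exactly the ΔS term above.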
For completeness, imagine, now, that the gas particles repel each other. Again compress the gas and proceed as before. This time the piston is thrust back out again with a greater force, so that more energy is delivered. The free energy in this case is defined to be the total energy used pushing out the piston, i.e. the sum of the ideal gas free energy as above and the potential energy due to the particles repelling each other.
It might help to see that the free energy of the ideal gas above is not real energy by noticing that the compressed ideal gas has the same mass as the uncompressed ideal gas. By E = mc², they must therefore have the same energies.
I assume I have just described Helmholtz free energy - which measures the maximum amount of work energy it is possible to extract during a process. I am mystified by Gibbs free energy. | <urn:uuid:92cc12a9-f42c-46ea-a78c-b34f4f78088d> | 3.609375 | 675 | Personal Blog | Science & Tech. | 47.845151 |
Figure 4.1 Hoefer SE 400 Sturdier Electrophoresis units
Electrophoresis may be the main technique for molecular separation in today's cell biology laboratory. Because it is such a powerful technique, and yet reasonably easy and inexpensive, it has become commonplace. In spite of the many physical arrangements for the apparatus, and regardless of the medium through which molecules are allowed to migrate, all electrophoretic separations depend upon the charge distribution of the molecules being separated. 1
Electrophoresis can be one dimensional (i.e. one plane of separation) or two dimensional. One dimensional electrophoresis is used for most routine protein and nucleic acid separations. Two dimensional separation of proteins is used for fingerprinting, and when properly constructed can be extremely accurate in resolving all of the proteins present within a cell (greater than 1,500).
The support medium for electrophoresis can be formed into a gel within a tube or it can be layered into flat sheets. The tubes are used for easy one dimensional separations (nearly anyone can make their own apparatus from inexpensive materials found in any lab), while the sheets have a larger surface area and are better for two-dimensional separations. Figure 4.1 shows a typical slab electrophoresis unit.
When the detergent SDS (sodium dodecyl sulfate) 2 is used with proteins, all of the proteins become negatively charged by their attachment to the SDS anions. When separated on a polyacrylamide gel, the procedure is abbreviated as SDS-PAGE (for Sodium Dodecyl Sulfate PolyAcrylamide Gel Electrophoresis). The technique has become a standard means for molecular weight determination.
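The molecular weight determination rests on the empirical observation that, within a suitable range, log10(MW) is approximately linear in a protein's relative mobility (Rf) on a given gel. A C sketch of the standard-curve fit; the standards and Rf values below are illustrative assumptions, not calibration data:

```c
/* Fit log10(MW) = a + b*Rf to protein standards by least squares, then
   estimate an unknown protein's molecular weight from its mobility.
   Standards below are illustrative, not real calibration data. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double rf[] = { 0.20, 0.35, 0.55, 0.75 };       /* relative mobilities */
    double mw[] = { 97400, 66200, 31000, 14400 };   /* daltons (assumed)   */
    int n = 4;

    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        double y = log10(mw[i]);
        sx += rf[i]; sy += y; sxx += rf[i] * rf[i]; sxy += rf[i] * y;
    }
    double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);  /* slope     */
    double a = (sy - b * sx) / n;                          /* intercept */

    double rf_unknown = 0.45;                       /* measured mobility */
    printf("estimated MW ~ %.0f daltons\n", pow(10.0, a + b * rf_unknown));
    return 0;
}
```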
Polyacrylamide gels are formed from the polymerization of two compounds, acrylamide and N,N'-methylene-bis-acrylamide (Bis, for short). Bis is a cross-linking agent for the gels. The polymerization is initiated by the addition of ammonium persulfate along with either β-dimethylaminopropionitrile (DMAP) or N,N,N',N'-tetramethylethylenediamine (TEMED). The gels are neutral, hydrophilic, three-dimensional networks of long hydrocarbons crosslinked by methylene groups.
The separation of molecules within a gel is determined by the relative size of the pores formed within the gel. The pore size of a gel is determined by two factors, the total amount of acrylamide present (designated as %T) and the amount of cross-linker (%C). As the total amount of acrylamide increases, the pore size decreases. With cross-linking, 5%C gives the smallest pore size. Any increase or decrease in %C increases the pore size. Gels are designated as percent solutions and will have two necessary parameters. The total acrylamide is given as a % (w/v) of the acrylamide plus the bis-acrylamide. Thus, a 7.5%T gel would contain a total of 7.5 g of acrylamide and bis per 100 ml of gel. A gel designated as 7.5%T:5%C would have a total of 7.5% (w/v) acrylamide + bis, and the bis would be 5% of that total (with pure acrylamide composing the remaining 95%).
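The two parameters translate directly into a recipe. A small C sketch of the arithmetic just described:

```c
/* Convert a gel designation %T:%C into grams of acrylamide and bis per
   gel volume, following the definitions in the text. */
#include <stdio.h>

int main(void)
{
    double pct_T  = 7.5;     /* total monomer, g per 100 ml        */
    double pct_C  = 5.0;     /* bis, as a percentage of that total */
    double vol_ml = 100.0;

    double total_g      = pct_T * vol_ml / 100.0;
    double bis_g        = total_g * pct_C / 100.0;
    double acrylamide_g = total_g - bis_g;

    printf("%.1f%%T:%.0f%%C in %.0f ml: %.3f g acrylamide + %.3f g bis\n",
           pct_T, pct_C, vol_ml, acrylamide_g, bis_g);
    return 0;
}
```

For the 7.5%T:5%C example this gives 7.125 g acrylamide and 0.375 g bis per 100 ml.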
Proteins with molecular weights ranging from 10,000 to 1,000,000 may be separated with 7 1/2% acrylamide gels, while proteins with higher molecular weights require lower acrylamide gel concentrations. Conversely, gels up to 30% have been used to separate small polypeptides. The higher the gel concentration, the smaller the pore size of the gel and the better it will be able to separate smaller molecules. The percent gel to use depends on the molecular weight of the protein to be separated. Use 5% gels for proteins ranging from 60,000 to 200,000 daltons, 10% gels for a range of 16,000 to 70,000 daltons and 15% gels for a range of 12,000 to 45,000 daltons. 3
Cationic vs anionic systems
In electrophoresis, proteins are separated on the basis of charge, and the charge of a protein can be either positive or negative, depending upon the pH of the buffer. In normal operation, a column of gel is partitioned into three sections, known as the Separating or Running Gel, the Stacking Gel and the Sample Gel. The sample gel may be eliminated and the sample introduced via a dense non-convective medium such as sucrose. Electrodes are attached to the ends of the column and an electric current is passed through the partitioned gels. If the electrodes are arranged in such a way that the upper bath is negative (cathode) while the lower bath is positive (anode), and negatively charged anions are allowed to flow toward the anode, the system is known as an anionic system. Flow in the opposite direction, with positively charged cations flowing to the cathode, is a cationic system.
Tube vs Slab Systems
Figure 4.2 Electrophoretic separations of proteins
Two basic approaches have been used in the design of electrophoresis protocols. One, column electrophoresis, uses tubular gels formed in glass tubes, while the other, slab gel electrophoresis, uses flat gels formed between two plates of glass. Tube gels have an advantage in that molecules migrating through them are less prone to lateral movement, and thus band resolution is slightly improved, particularly for proteins. Tube electrophoresis is also more economical, since it is relatively easy to construct homemade systems from materials on hand. However, slab gels have the advantage of allowing for two dimensional analysis, and of running multiple samples simultaneously in the same gel.
Slab gels are designed with multiple lanes set up such that samples run in parallel. The size and number of the lanes can be varied and, since the samples run in the same medium, there is less likelihood of sample variation due to minor changes in the gel structure. Slab gels are unquestionably the technique of choice for any blot analyses and for autoradiographic analysis. Consequently, for laboratories performing routine nucleic acid analyses, and those employing antigenic controls, slab gels have become standard. The availability of reasonably priced commercial slab gel units has increased the use of slab gel systems, and the use of tube gels is becoming rare.
The theory and operation of slab gel electrophoresis is identical to that of tube gel electrophoresis. Which system is used depends more on the experience of the investigator and the availability of equipment than on any other factor.
Figure 4.2 presents a typical protein separation pattern.
Continuous vs discontinuous gel systems
Figure 4.3 Schematic diagram of electrophoresis
The original use of gels as separating media involved using a single gel with a uniform pH throughout. Molecules were separated on the basis of their mobility through a single gel matrix. This system has only occasional use in today's laboratory. It has been replaced with discontinuous, 4 multiple gel systems. In multiple gel systems, a separating gel is augmented with a stacking gel and an optional sample gel. These gels can have different concentrations of the same support media, or may be completely different agents. The key difference is how the molecules separate when they enter the separating gel. The proteins in the sample gel will concentrate into a small zone in the stacking gel before entering the separating gel. The zone within the stacking gel can range in thickness from a few microns to a full millimeter. As the proteins are stacked in concentrated bands, they continue to migrate into the separating gel in concentrated narrow bands. The bands are then separated from each other on a discontinuous (i.e., disc) pH gel. 5
Once the protein bands enter the separating gel, separation of the bands is enhanced by ions passing through the gel column in pairs. Each ion in the pair has the same charge polarity as the proteins (usually negative), but the two ions differ in charge magnitude. One ion will have a much greater charge magnitude than the proteins, while the other has a lesser charge magnitude than the proteins. The ion having the greater charge will move faster and is thus the leading ion, while the ion with the lesser charge will be the trailing ion. When an anionic system is employed, the Cl¯ and glycinate (the anion of glycine) ions are derived from the reservoir buffer (Tris-Glycine). The leading ion is usually Cl¯, while glycinate is the trailing ion. A schematic of this anionic system is shown in Figure 4.3. Chloride ions enter the separating gel first and rapidly move down the gel, followed by the proteins and then the glycinate ions. The glycinate ions overtake the proteins and ultimately establish a uniform linear voltage gradient within the gel. The proteins then sort themselves within this gradient according to their charge and size.
Figure 4.4 Agarose separation of cDNA
While acrylamide gels have become the standard for protein analysis, they are less suitable for extremely high molecular weight nucleic acids (above 200,000 daltons). In order to properly separate these large molecules, the acrylamide concentration needs to be reduced to a level so low that, on its own, the mixture would remain liquid.
The gels can be formed, however, by the addition of agarose, a naturally linear polysaccharide, to the low concentration of acrylamide. With the addition of agarose, acrylamide concentrations of 0.5% can be used and molecular weights of up to 3.5 x 10 daltons can be separated. This is particularly useful for the separation of large sequences of DNA. Consequently, agarose-acrylamide gels are used extensively in today's genetic laboratories for the determination of gene maps. This chapter will concentrate on the separation of proteins, but Figure 4.4 demonstrates the separation of DNA fragments on an agarose gel.
| <urn:uuid:c4aed503-29ab-4fb7-aa33-7749a9e9dffb> | 3.109375 | 2,146 | Academic Writing | Science & Tech. | 37.982705 |
Nuclear physics is the study of the composition, behavior and interaction of atomic nuclei and their constituent parts. It differs from particle physics in that it spans a lower energy range where nucleons and even nuclei are stable and interactions can generally be described in terms of nucleon and meson degrees of freedom instead of quark and gluon degrees of freedom.
Effective theories such as Quantum Hadrodynamics (QHD) and Chiral Perturbation Theory are useful for some problems in nuclear physics despite being known to be incomplete. Nuclear physicists also study the transition region, where QCD corrections become important but the theory is not fully perturbative.
Nuclear physics provides the basic tools for nuclear power engineering and nuclear weaponry, though most nuclear physicists are not involved in either of these fields. | <urn:uuid:8549f392-afaa-4a7a-977e-a2b3a5b2dc0a> | 2.953125 | 160 | Knowledge Article | Science & Tech. | 25.220093 |
Going Supernova 3Dgif
Modeling the collapse of a massive star represents one of the greatest challenges in computational physics. All four fundamental forces of nature are in play, giving us a cosmic laboratory with conditions unlike anywhere else in the Universe. Only if we truly understand the fundamental physics involved and do a perfect job of implementing the computational algorithms will we be able to reproduce the ever-increasing quality of the observational data.
It focuses on the deaths of aged stars in supernova explosions, which are among the most violent events in nature, unleashing power that can briefly outshine a galaxy of 100 billion stars. When a supernova explodes, it blasts oxygen, carbon and other vital chemical elements through space and creates heavier elements like copper and nickel.
Unlike Type I supernovae, which are powered by a thermonuclear explosion of a white dwarf star, Type II supernovae, the more frequently occurring type modeled by Warren and Fryer, are powered by the massive star's gravitational collapse. The star begins its life burning hydrogen, then heavier elements as the hydrogen is exhausted and the temperature rises. Eventually, the core of the star consists entirely of iron, which can no longer provide the energy to resist the enormous gravitational forces pushing down on it.
As the iron atoms are crushed together, the core temperature rises to more than 10 billion degrees. The force of gravity overcomes the repulsive force between the nuclei and, in a few tenths of a second, the core of the star collapses from its original size of about one-half the diameter of Earth to 100 kilometers. The core heats the material surrounding it not with light, but by radiating most of its energy in neutrinos, nearly massless sub-atomic particles that can pass through tons of matter without being affected. As the in-falling gas approaches the core, it is exposed to a higher and higher flux of neutrinos. A tiny fraction of those neutrinos are absorbed. They heat the gas, which expands and becomes buoyant.
The heated gas floats upward in large bubbles carrying energy away from the core and is replaced by colder gas that sinks toward the core and in turn becomes heated. This heat transfer from the core to the envelope of the star results in enough energy transfer to create an explosion.
"With these three-dimensional results, we have reached the final battleground and are ready to attack the more exotic problems that involve rotation and non-symmetric accretion."
los alamos news/releases 4 June 2002
LOS ALAMOS, N.M., June 4, 2002 -- Astrophysicists from Los Alamos National Laboratory, New Mexico, created the first 3-D computer simulations of the spectacular explosion that marks the death of a massive star. Presented to the American Astronomical Society meeting in Albuquerque, N.M., the research by Michael Warren and Chris Fryer eliminates some of the doubts about earlier 2-D modeling and paves the way for rapid advances on other, more exotic questions about supernovae.
The work of Warren and Fryer is part of a larger Supernova Science Center effort, which includes scientists from the University of Arizona, the University of California Santa Cruz and Lawrence Livermore National Laboratory. The Supernova Science Center is funded by the Department of Energy's Office of Science, Scientific Discovery Through Advanced Computing program.
More information is available at www.supersci.org.
Other groups, including the Terascale Supernova Initiative headed by Anthony Mezzacappa at Oak Ridge National Laboratory and the group led by Thomas Janka at the Max Planck Institute for Astrophysics in Germany, are also making rapid progress in the area of core-collapse supernovae.
Los Alamos National Laboratory is operated by the University of California for the National Nuclear Security Administration (NNSA) of the U.S. Department of Energy and works in partnership with NNSA's Sandia and Lawrence Livermore national laboratories to support NNSA in its mission.
Los Alamos enhances global security by ensuring the safety and reliability of the U.S. nuclear weapons stockpile, developing technical solutions to reduce the threat of weapons of mass destruction and solving problems related to energy, environment, infrastructure, health and national security concerns.
Images available online los alamos national laboratories
SciDac Supernova Science Centre
Are-we-made-of-stardust by Plato
| <urn:uuid:d7ab8a0e-563a-4d8c-ba50-0db9c6512257> | 3.640625 | 933 | Personal Blog | Science & Tech. | 27.363036 |
By Jay Kernis
Scientists from around the world are building the world’s most advanced radio telescope in Chile’s Atacama Desert, on a plateau half-way between Earth and space above 40 percent of the planet’s atmosphere.
The observatory, referred to by the acronym ALMA, officially known as the Atacama Large Millimeter/Submillimeter Array, is the highest ever built. Located at 16,500 feet, the antennas will pick up radio and microwave signals from the edge of the universe to see things in space that were once invisible.
Eventually there will be 66 antennas spread across the plateau. All of them can be pointed at the same time at a patch of outer space.
When Rock Center Correspondent Harry Smith found out we could report the ALMA story, he said, “Find me the next Carl Sagan to travel to Chile with us—someone who is passionate about astronomy and can really explain what is going on there.”
Rock Center's Harry Smith & National Radio Astronomy Observatory Astronomer Scott Ransom
When he was around eight years old, Scott Ransom watched Carl Sagan’s Cosmos series on PBS, read Sagan’s books and decided that he wanted to be an astronaut. He told us, “Once I found out how big the solar system was and how big the galaxy is and how big the universe is—and consequently, how tiny we are, what a tiny little, insignificant component—it just blew my mind.”
He thought he’d go to a military academy, become a test pilot, and then apply for astronaut training. “Because of my eyesight,” Ransom says, “there was no way I would be able to become an astronaut.” But Ransom did graduate from West Point and Harvard, and today spends hours each day exploring outer space.
Dr. Ransom has been an astronomer at the National Radio Astronomy Observatory in Charlottesville, Virginia since 2004. The NRAO leads the North American efforts in Chile, and in Charlottesville, scientists build receivers that capture the radio waves in ALMA’s huge antennas.
"This is the largest, most sophisticated ground-based observatory that the world has ever created," Dr. Ransom said. "It could take weeks or months of observing to do what ALMA's going to be able to do in a day or hour."
A few days after touring ALMA in Chile, Ransom returned to NRAO headquarters in Virginia and talked about his favorite images from space.
In 2010, Ransom received one of his field's top honors from the American Astronomical Society. He studies neutron stars and pulsars—the exotic objects that form after the largest stars burn all their hydrogen and explode into a supernova. They may be only 10 or 20 miles across, but, explains Ransom, they can "give off 10-thousand times more energy from its rotation than all the energy that our sun puts out." Personally, Ransom has discovered nearly 100 of them.
This week, on WNYU’s science and technology show The Doppler Effect, we had a really interesting sound.
That is the sound of the radio frequencies of the aurora borealis.
Now, let’s be clear. That’s not a sound you can hear. If you listen to the whole segment from the WNYU show (here) you’ll hear Bez Laderman and Jonathan Zrake – two NYU physicists – discuss why it’s highly improbable for the aurora to make any kind of sound that we can hear on the ground. If you don’t want to go listen (but I hope you do), here’s basically what they say:
There are reports of a sound associated with the auroras. It’s a hissing, rustling, or crackling noise, or some combination of the three. But if you think about what the aurora borealis is – which Zrake describes in the podcast (electromagnetic winds also happen to be the subject of his PhD work) – that doesn’t make a ton of sense. Basically, auroras happen when solar winds blow charged particles to Earth, and those particles collide with atoms up in the upper atmosphere. For that to make a sound, they explain, it would have to be really, really loud.
Here’s Bez’s example: how many times have you seen a jet plane in the sky but not heard it? That jet plane is really, really loud: something like 80 decibels. And it’s about 3,500 feet in the air.
Now, the aurora is 50 miles above us, in the upper atmosphere. That’s 264,000 feet, which means the sound has to travel through 75 times as much atmosphere to reach us. The atmosphere up there is thinner, which makes it even harder for sound to propagate (since sound is made of pressure waves, which need something to travel through) down to us. The collision of those particles would have to be ridiculously loud.
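A back-of-the-envelope sketch of that comparison, using the free-field inverse-square law (my own rough assumption, not a calculation from the show; it ignores atmospheric absorption, which would only make the aurora quieter still):

    import math

    def spl_at_distance(spl_ref_db, d_ref_ft, d_ft):
        # Free-field inverse-square law: level falls by 20*log10(d/d_ref) dB,
        # i.e. about 6 dB per doubling of distance.
        return spl_ref_db - 20 * math.log10(d_ft / d_ref_ft)

    jet_db, jet_alt_ft = 80.0, 3500.0   # figures quoted in the segment
    aurora_alt_ft = 264_000.0           # roughly 50 miles
    print(round(spl_at_distance(jet_db, jet_alt_ft, aurora_alt_ft)))  # ~42

In other words, even a source as loud as that jet, moved up to auroral altitude, would arrive at roughly quiet-library level under ideal conditions - so an audible aurora would have to be almost unimaginably loud at the source.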
You should really listen to their whole show – where they explain some more of the details about auroras, and go into some of the reasons why there might be sounds or not. But for now, you can sit back and relax and listen to the eerie, yet relaxing sound of the Aurora’s radio waves. | <urn:uuid:e02a2027-f08c-431b-87e8-387061522a48> | 3.15625 | 494 | Personal Blog | Science & Tech. | 66.711667 |
There are four types of biological transmission; they are:

A - PROPAGATIVE TRANSMISSION - in this type of transmission only multiplication of the organism takes place in the host, without any kind of development (that is, no change in form). An example of this type of transmission is THE PLAGUE BACILLUS IN THE RAT FLEA.

B - CYCLOPROPAGATIVE TRANSMISSION - in this type of transmission the parasite not only multiplies vigorously but also changes its form. An example of such transmission is the malaria parasite undergoing its sexual life cycle in the female anopheles mosquito.

C - CYCLODEVELOPMENTAL TRANSMISSION - in this type of transmission the parasite only undergoes a change in form (development) in the host but does not multiply. An example of this type of transmission is the FILARIAL PARASITE IN THE CULEX MOSQUITO.

D - MECHANICAL TRANSMISSION - in this type of transmission there is neither a change in the form of the parasite in the host nor multiplication of the parasite in the host.
Infrared Satellite Images
Infrared satellite measurements are related to the brightness temperature. For an infrared picture, warmer objects appear darker than colder objects, as in the example below (a composite of data from GOES-8 and GOES-10 satellites). Since temperature in the troposphere decreases with height, high-level clouds are colder than low-level clouds. Therefore, low clouds (like those found over North Carolina and Virginia) appear darker on an infrared image and high clouds (like those found throughout the eastern U.S.) appear brighter. The very dark shades of gray in parts of the Rocky Mountains and in the deserts of the Southwest indicate regions where the ground is being heated by the sun.
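To make the display convention concrete, here is a minimal sketch of mapping a brightness temperature to a gray level, with warm scenes dark and cold cloud tops bright. The 180-330 K display range is an illustrative assumption, not a value taken from GOES processing:

    def ir_gray_level(bt_kelvin, t_warm=330.0, t_cold=180.0):
        # IR display convention: warm scenes dark, cold cloud tops bright.
        # Linearly map brightness temperature to a 0..255 gray level.
        frac = (t_warm - bt_kelvin) / (t_warm - t_cold)
        frac = min(max(frac, 0.0), 1.0)   # clamp out-of-range temperatures
        return round(255 * frac)          # 0 = black, 255 = white

    for bt in (300, 270, 230, 200):       # warm ground ... cold cloud top
        print(bt, "K ->", ir_gray_level(bt))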
testing for geostress
...in choice of chamber orientation, shape, and support design, is usually determined in exploratory drifts. Two methods are common, although each is still in the development stage. One is an “ overcoring” method (developed in Sweden and South Africa) used for ranges up to about 100 feet out from the drift and employing a cylindrical instrument known as a borehole deformeter. A small...
DORMOUSE (MUSCARDINUS AVELLANARIUS)
LOCAL BIODIVERSITY ACTION PLAN
Dormice usually weigh between 15-30g, reaching 65-85mm in length. They have a white throat, pale yellow/white fur on their underparts and brown/orange fur on their upperparts. Their thick bushy tail makes them easy to distinguish from other mouse-sized mammals.
Dormice inhabit coppiced woodlands, deciduous woodland with scrub, and hedgerows.
Dormice eat fruits, seeds, flowers and insects. However, prior to hibernation, nuts (chestnuts and hazelnuts) are the more important food sources.
Dormice are very fast and agile and can escape many predators such as raptors and foxes; however, foxes have been known to dig up dormice while they are hibernating underground.
Dormice can live for up to 5 years.
Dormice are nocturnal and spend the day sleeping in nests that are typically about 15cm in diameter and made from honeysuckle bark, grass, moss and leaves, woven to entirely surround the animal. They are good climbers and spend most of their time in tree canopies.
Dormice spend the months from October to April in hibernation.
Common dormice rear one or two litters a year, typically of four young (although the litter size can range from 1-7). The young first leave the nest after four weeks, but they may remain with their mother for a further seven weeks.
Nationally, the dormouse has disappeared from most of the north of England. A Mammal Society survey in the late 1970s showed it had become extinct in 7 northern counties where it was once known to occur, including the Cheshire region.
The last known dormouse record for the Cheshire region was 1910. Presently, the dormouse occurs mainly in the south of England, where it is widespread, but often with a very patchy distribution.
The only dormice in the Cheshire region are an introduced population in the Wych Valley. This population is the result of the release of captive bred animals - 29 in 1996 and 24 in 1997. Monitoring indicates the dormice have established themselves, with breeding occurring every year from 1996 onwards.
The population has grown steadily, with increasing numbers of juveniles recruited into the population each year. The dormice have colonized all parts of the release site and appear to be spreading along the Wych Valley. In 2002/2003 the dormice crossed the Wych Brook into Wrexham (North Wales).
The dormouse is listed on Schedule 5 of the Wildlife & Countryside Act 1981. Listed on Appendix 3 of the Bonn Convention and Annex IVa of the EC Habitats Directive. Schedule 2 of the Conservation (Natural Habitats &c.) Regulations 1994 (Regulation 38). It is on the UK Biodiversity Steering Group Short List of Globally Threatened/Declining Species.
Why is the dormouse under threat?
* The continual destruction of ancient semi-natural woodlands which are their required habitat.
* The fragmentation of woodlands is a potential threat. Dormice are reluctant to cross open ground, so are unable to move easily between woodlands (a minimum area of 20ha is required to maintain a viable population).
* Changes in woodland management, especially the decline in long-rotation hazel coppicing.
* The continued intrusion of cattle into woodlands, especially during winter.
* Continual climatic variations.
* Loss or fragmentation of hedgerows which act as dispersal corridors.
How are we helping to conserve the dormice in the Cheshire region?
* The decline of the dormouse has led Natural England to include it in their national Species Recovery Programme. This programme includes the reintroduction of dormice to a number of counties where they have been lost. Cheshire Wildlife Trust have managed the reintroduction of dormice to a woodland in the south of the Cheshire region. 300 specially designed dormice nesting boxes have been constructed and put up at the site and boxes are monitored monthly from May to October. Data from the monitoring is passed on to the National Dormouse Monitoring Scheme.
* A dormouse nestbox sponsorship scheme has been set up, to increase awareness and raise funds. Nestboxes are being located in possible dispersal areas, to monitor expansion of the population.
* The Wych Valley Project has been set up to promote general conservation in the area, using the dormouse as a flagship.
Objectives, Targets and Actions
The objectives, targets and actions to help conserve Dormice in the Cheshire region can be found on the Biodiversity Action Reporting System (BARS) along with full details of our progress so far.
How to find out more about Dormice?
BBC Wildfacts website - www.bbc.co.uk/nature/wildfacts/factfiles/263.shtml
Great Nut Hunt - www.greatnuthunt.org.uk
The Mammal Society - www.abdn.ac.uk/mammal/dormouse.shtml
UK BAP for Dormice - www.ukbap.org.uk/UKPlans.aspx?ID=462
How can you get involved?
Sponsor a nestbox with Cheshire Wildlife Trust.
Nest boxes are an important part of the Cheshire Dormouse Project. Dormice will readily use purpose-built nest boxes. The boxes are built of wood and are similar in design to those frequently put up in gardens for small birds.
You can be involved with the Dormouse Project by sponsoring a nest box. You can sponsor a box for yourself, or as an unusual present to someone else.
LBAP Chair: Sue Tatman, Cheshire Wildlife Trust
Phone: 01948 820728
National Lead Partners: The Wildlife Trusts, Natural England
National Lead Contact: Tony Mitchell-Jones, Natural England
Phone: 0300 060 0788
Mervyn Newman, Devon Wildlife Trust
References & Glossary
Bright, P. & Morris, P. (1989): A practical guide to Dormouse Conservation. The Mammal Society.
Bright, P. & Morris, P. (1992): The Dormouse. The Mammal Society.
Bright, P., Morris, P. & Mitchell-Jones, T. (1996): Dormouse Conservation Handbook. English Nature | <urn:uuid:b234feee-a564-4f07-94fc-43852829f878> | 3.625 | 1,340 | Knowledge Article | Science & Tech. | 48.880388 |
When writing up HTML source code, all line breaks created by pressing the 'Enter' key will be ignored by the web browser and will not register as line breaks on the actual web page. In order to create line breaks and hence format your visible text into paragraphs, you will use what are known as block-level or block elements. The main difference between block elements and inline elements is that the end tag of a block element forces a line break in the visible text. Some commonly used block elements are:
|p|Renders text into paragraphs with a blank line in between each.|
|div|Renders text into a generic division with no blank line in between each.|
|blockquote|Renders text into an indented paragraph which is typically used to indicate quoted text.|
These are all described in more detail below:
- p (paragraph) ~ Creating paragraphs using HTML is accomplished by placing the text which you wish to be rendered as a paragraph in between the <p>...</p> tags. The end </p> tag terminates the first paragraph while a new start <p> tag begins a new paragraph. Example:

Example 1A - SOURCE CODE

<p>This is the first paragraph. In printed text, paragraphs are traditionally rendered by merely breaking the flow of text to the next line and then indenting the first line of the new paragraph. In web pages, however, a new paragraph is typically rendered by breaking the flow of text to the next line which then contains a line of 'white space'. The blank line is subsequently followed by the new paragraph whose first line appears flush with the left margin.</p>

<p>This is the next paragraph. Note that any line breaks created in the source code by pressing the 'Enter' key will be ignored by the web browser when it displays the web page. Creating line breaks in the visible text can only be accomplished by using one of various HTML elements. In this case, we are using the 'p' element.</p>
Note that when using the <p>...</p> tags, the end tag may be omitted. Each <p> start tag will automatically begin a new paragraph. Example:

<p>First paragraph starts here...
<p>Second paragraph starts here...
<p>Third paragraph starts here...
- div (generic division) ~ The div element differs from the p element in that the </div> end tag only breaks the flow of text to the next line. No blank line will appear between it and the succeeding text. Example:

Example 2 - SOURCE CODE

<div>This is the first generic division. The text will continue to flow normally until this 'div' element is terminated by the end tag.</div>

<div>This is the second generic division. Use the 'div' element in conjunction with CSS to achieve a much more refined control over formatting blocks of text on your web page.</div>
Note that, unlike the </p> end tag, the </div> end tag is mandatory.
Here's a quick example of how you can use the style attribute with the div element to create a traditional paragraphing style:

Example 3 - SOURCE CODE

<div style="text-indent: 30px;">
This uses the 'div' element in conjunction with the 'style' attribute to create a paragraph style similar to printed text paragraphing. The first line of text in this generic division is indented by a space of 30 pixels.
</div>
<div style="text-indent: 30px;">
Here is the second generic division. The flow of text breaks to the next line and the first line of text is also indented by a space of 30 pixels.
</div>
- blockquote ~ This element can be used to indent an entire block of text. This is typically used to indicate that the text is quoted from another source, although it can also be used simply to offset the text for aesthetic reasons. To display text as such, place it within the <blockquote>...</blockquote> tags. Here's an example:

Example 4 - SOURCE CODE

In the official HTML 4.01 Specification, the World Wide Web Consortium (W3C) makes this distinction between block-level and inline elements:

<blockquote>Generally, block-level elements may contain inline elements and other block-level elements. Generally, inline elements may contain only data and other inline elements. Inherent in this structural distinction is the idea that block elements create "larger" structures than inline elements.</blockquote>
Of course, there may be times when you wish to create line breaks at will without having to decide in advance which sections of text to enclose in a block element. There may also be times when you wish to prevent the line from breaking in between two words. How to accomplish both of these, as well as how to align text on your web page, will be covered next...
THE NEEM, a tropical tree grown in north Queensland, could become the basis of an important new pesticide industry. Compounds extracted from the seeds of the neem could replace many of the synthetic organic pesticides now in use.
More than 300 of the world's worst insect pests can be controlled by neem seed extracts, which are claimed not to harm humans or beneficial insects. This is the message brought back this month from a World Neem Conference in the Indian city of Bangalore by Martin Rice, an entomologist from the University of Queensland.
The chemicals can be applied to locusts, fleas, ticks, mites, and malaria- carrying mosquitoes. They are also effective against a wide range of pests which attack fruit and vegetables.
The potential benefits of neem extracts were illustrated dramatically when two of the delegates to the conference needed medical attention after being exposed to insecticide in their hotel. ...
Waves, Sound and Light: Wave Basics
Wave Basics: Audio Guided Solution
Sachi is rock'n to her favorite radio station - 102.3 FM. The station broadcasts radio signals with a frequency of 1.023 x 10^8 Hz. The radio wave signals travel through the air at a speed of 2.997 x 10^8 m/s. Determine the wavelength of these radio waves.
Audio Guided Solution
Habits of an Effective Problem Solver
- Read the problem carefully and develop a mental picture of the physical situation. If necessary, sketch a simple diagram of the physical situation to help you visualize it.
- Identify the known and unknown quantities and record in an organized manner, often times they can be recorded on the diagram itself. Equate given values to the symbols used to represent the corresponding quantity (e.g., v = 12.8 m/s, λ = 4.52 m, f = ???).
- Use physics formulas and conceptual reasoning to plot a strategy for solving for the unknown quantity.
- Identify the appropriate formula(s) to use.
- Perform substitutions and algebraic manipulations in order to solve for the unknown quantity.
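For this particular problem, those habits reduce to a single substitution into v = f·λ. A quick numeric check in Python (a sketch of the arithmetic only, not a replacement for the audio solution):

    v = 2.997e8         # wave speed in m/s (given)
    f = 1.023e8         # frequency in Hz (given)
    wavelength = v / f  # from v = f * wavelength
    print(f"wavelength = {wavelength:.2f} m")   # about 2.93 m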
Read About It!
Get more information on the topic of Waves at The Physics Classroom Tutorial.
Heptachlor was used extensively until the 1970s as a broad-spectrum insecticide on a wide variety of agricultural crops, with the major use on corn. It also had nonagricultural uses, including seed treatment, home and garden uses, and termite control.
It has a low vapour pressure and low water solubility, and is hydrolysed in surface water to 1-hydroxychlordene, with a half-life in water estimated at one day. Experimentally, however, when heptachlor was added to river water and exposed to sunlight, only 25% remained after a week; this corresponds to a half-life in water of 3.5 days. When released directly into water, it adsorbs strongly to suspended and bottom sediment.
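That 3.5-day figure follows from assuming simple first-order (exponential) decay, a standard assumption sketched here for concreteness:

    import math

    def half_life(elapsed, fraction_remaining):
        # First-order decay: N/N0 = (1/2) ** (t / t_half)
        return elapsed * math.log(2) / math.log(1 / fraction_remaining)

    # 25% remaining after 7 days is exactly two half-lives:
    print(half_life(7.0, 0.25))   # -> 3.5 (days)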
The experimental value of the Henry's law constant suggests that heptachlor volatilizes fairly rapidly from surface water to the atmosphere. Volatilization from soil particles is also possible and is an important mechanism of transport of heptachlor from land surfaces. In the vapour phase, photooxidation is the key degradation process. The atmospheric half-life for heptachlor reacting with hydroxyl radicals was estimated at about 6 hours, but heptachlor is still subject to long-range transport and wet deposition.
The logarithmic soil organic carbon adsorption coefficient (log Koc) for heptachlor was estimated to be 4.38, which indicates a very high sorption tendency, suggesting that it will adsorb strongly to soil and is not likely to leach into groundwater in most cases. These properties suggest that heptachlor can remain deep in soil for years. The organic matter content of the soil is another factor affecting mobility: heptachlor is less likely to leach from soil with a high organic matter content. The half-life of heptachlor in temperate soil was reported to range between 6 months and 3.5 years. Studies have found that 16 years after the application of heptachlor, approximately 10% of the original amount was still present in the soil.
The log KOW for heptachlor suggests a high potential for bioaccumulation and biomagnification in the aquatic food chain. | <urn:uuid:21f54b45-1c2b-4945-b34f-1da45e0a292b> | 3.296875 | 452 | Knowledge Article | Science & Tech. | 34.492743 |
The intricate cosmic web of dark matter and galaxies spanning more than one billion light years. The pink-yellow plumes seen with gravitational lensing show us where the dark matter is.
What is Gravitational Lensing?
Cosmology is the branch of astronomy which asks the biggest questions of all – what is the Universe made of? How did it form? How old is it? What will happen to our Universe in the distant future? How and why do the biggest structures in the Universe come about?
Humanity has been asking questions like this for millennia, but it is only in the past century that modern telescopes have been powerful enough to start providing meaningful answers. Our understanding of the Universe today can be summarised in one simple pie chart:
This chart shows the total mass-energy content of the Universe. Mass-energy equivalence means that we can equate the two; mass is just a measure of the internal energy content of an object. All the 'regular' matter in the Universe – the stuff that makes up galaxies, planets, stars, nebulae, dust, rocks and gas – is known as baryonic matter, and only makes up 4% of the mass-energy content of the Universe. The other two pieces of the pie are dark matter and dark energy, and together they make up almost all of the known Universe. Dark energy and dark matter are so named because cosmologists don't know what they actually are – we can't see them directly and can only infer their existence by the effect they have on the regular matter that we can see. It might seem strange to claim that most of the Universe is invisible and unknown to us, but the evidence for these mysterious entities is compelling. To learn more about dark matter and dark energy, follow the links on the left hand side.
When astronomers refer to lensing, they are talking about an effect called gravitational lensing. Normal lenses such as the ones in a magnifying glass or a pair of spectacles work by bending light rays that pass through them in a process known as refraction, in order to focus the light somewhere (such as in your eye).
Gravitational lensing works in an analogous way and is an effect of Einstein's theory of general relativity – simply put, mass bends light. The gravitational field of a massive object will extend far into space, and cause light rays passing close to that object (and thus through its gravitational field) to be bent and refocused somewhere else. The more massive the object, the stronger its gravitational field and hence the greater the bending of light rays - just like using denser materials to make optical lenses results in a greater amount of refraction.
Gravitational lensing happens on all scales – the gravitational field of galaxies and clusters of galaxies can lens light, but so can smaller objects such as stars and planets. Even the mass of our own bodies will lens light passing near us a tiny bit, although the effect is too small to ever measure.
So what are the effects of lensing? The kind of lensing that cosmologists are interested in is apparent only on the largest scales – by looking at galaxies and clusters of galaxies. When astronomers take a telescope image of a part of the night sky, we can see many galaxies on that image. However, in between the Earth and those galaxies is a mysterious entity called dark matter. Dark matter is invisible, but it does have mass, making up around 85% of the mass of the Universe. This means that light rays coming towards us from distant galaxies will pass through the gravitational field of dark matter and hence will be bent by the lensing effect.
Dark matter is found wherever 'normal' matter, such as the stuff that makes up galaxies, is found. For example, a large galaxy cluster will contain a very great amount of dark matter, which exists within and around the galaxies that make up that cluster. Light coming from more distant galaxies that passes close to a cluster may be distorted – lensed – by its mass. It is the dark matter in the cluster that does almost all of the lensing as it outweighs regular matter by a factor of six or so. The effects can be very strong and very strange; the images of the distant, lensed galaxies are stretched and pulled into arcs as the light passes close to the foreground cluster. This can be seen in the image below of the famous Abell 2218 cluster. The real galaxies are not this shape – they are usually elliptical or spiral shaped – they just appear this way because of lensing.
This strange shape distortion comes from the fact that galaxies are large objects, and the light rays leaving one side of the galaxy (e.g. the left hand side from our point of view) will pass through a different part of space than the light rays leaving the other side (e.g. the right hand side). The light rays will therefore pass through different parts of the dark matter's gravitational field and will be bent in slightly different ways. The net effect of this is a distortion to the shape of the galaxy image, which can in some cases be very severe. Another interesting effect that can occur due to lensing is the formation of multiple images of the same galaxy. This occurs because light rays from a distant galaxy that would otherwise diverge may be focused together by lensing. From the point of view of an observer on the Earth, it looks as if two very similar light rays have travelled along straight lines from different parts of the sky. You can see this in the orange lines in the schematic above - we can see more than one image of the same galaxy in different places. Lensing can also act like a magnifying glass, allowing us to see images of galaxies that would otherwise be too faint to see.
An example of multiple images is shown below in an image from the Hubble Space Telescope. There are 3 images of the same galaxy, and 5 images of a type of galaxy called a quasar. The images are not the same shape or size because each image will have passed through a different region of space on its journey to us, and hence will have been distorted differently. A technique known as spectroscopy is used to determine which images came from the same galaxy.
Image: NASA/ESA, K Sharon (Tel Aviv University), E. Ofek (Caltech)
If the lensing effect is strong enough to be seen by the human eye on an astronomical image, like in Abell 2218, we call this strong lensing. Strong lensing only happens when a massive cluster of galaxies lies between us and some other galaxies - it is the further-away galaxies that have their shapes changed by lensing. In this case, it is easy to see and measure the effects of lensing. However, there are not that many clusters in the sky that are so big that they cause such a large lensing effect - most of the time, we don’t see galaxies stretched into arcs or multiply-imaged. So these instances of strong lensing are very useful - and pretty - but rare.
However, the fact that there is some dark matter in between us and every distant galaxy we see means that ALL galaxies are lensed - even if it is only slightly. In fact, most galaxies are lensed such that their shapes are altered by only 1%, an effect we call weak gravitational lensing.
We can never see this shape modification with our own eyes on an image because it is too small - but if we have some way to measure this, it could tell us a lot about how dark matter behaves across the whole sky (and not just in massive clusters) as it is a ubiquitous effect. But if we can’t see the effect, how do we measure it? How do we know how strong the lensing effect is on a particular galaxy?
It turns out that we don't need to know how much an individual galaxy image has been lensed – we can instead work out the average lensing effect on a set of galaxies. To do so, cosmologists have to make a couple of assumptions: firstly, that all galaxies are roughly elliptical in overall shape, and secondly that they are orientated randomly on the sky, as shown in the left hand side of the figure below. In the presence of a lensing effect, we would expect that the galaxies in a patch of sky would appear to align themselves together slightly on the sky, as lensing stretches all their images in the same direction. In this way, any deviation from a random distribution of galaxy shape orientations is a direct measure of the lensing signal in that patch of sky. Weak lensing can thus be used to measure the gravitational lensing signal on any part of the sky.
Image: E Grocutt, IfA, Edinburgh
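As an illustration of this averaging idea, here is a minimal toy sketch (my own, not part of any survey pipeline; the shear value, the fixed intrinsic-shape scatter, and the first-order "observed = intrinsic + shear" approximation are all illustrative assumptions):

    import math, random, cmath

    def intrinsic_ellipticity(scatter=0.25):
        # Spin-2 shape: fixed modulus, orientation angle uniform in [0, pi)
        phi = random.uniform(0.0, math.pi)
        return scatter * cmath.exp(2j * phi)

    g_true = complex(0.02, 0.01)   # percent-level shear, typical of weak lensing
    # Weak-lensing limit: observed ellipticity ~ intrinsic ellipticity + shear
    shapes = [intrinsic_ellipticity() + g_true for _ in range(200_000)]
    g_est = sum(shapes) / len(shapes)   # the random intrinsic part averages away
    print(f"true shear {g_true:.4f}, estimated {g_est:.4f}")

With enough galaxies the intrinsic orientations cancel and the residual mean traces the shear; the price is noise, which is why weak-lensing surveys need shape measurements for millions of galaxies.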
Why is lensing useful?
Gravitational lensing is useful to cosmologists because it is directly sensitive to the amount and distribution of dark matter. This is because the amount of light bending is sensitive only to the strength of the gravitational field it passes through*, which is mostly generated by the mass of the dark matter in the Universe. This means that to measure the amount of lensing on a patch of sky, we don't need to know anything about what kind of galaxies we are observing, how they form and behave or what colour light they emit. This makes gravitational lensing a very clean and reliable cosmological probe as it relies on few assumptions or approximations.
Lensing can therefore help astronomers work out exactly how much dark matter there is in the Universe as a whole (the fraction of the pie chart at the top of the page that dark matter takes up), and also how it is distributed. An example dark matter map constructed from CFHTLenS data is shown below.
Image: CFHTLenS Collaboration
Lensing has also been used to help verify the existence of dark matter itself. The image below is known as the Bullet Cluster, and it has been observed in both optical (visible) light and in X-ray. The majority of the light coming from the Bullet cluster comes from hot X-ray emitting gas, and has been overlaid onto the visible-light image in pink. Superimposed in blue is the location of the dark matter in the cluster, determined from measuring the lensing signal from the visible-light images of the galaxies. The offset between the pink X-ray gas and the blue dark matter regions tells us that what we are observing is actually the aftermath of a collision between two galaxy clusters. During the collision, the baryonic X-ray gas particles (the 'normal' matter) will interact with each other through both gravity and electrostatic forces, slowing and shocking one another. The dark matter particles, however, only interact through gravity and can pass through each other unimpeded by electrostatic interactions. This means that the X-ray gas lags behind the dark matter as the two clusters escape the collision, causing the observed offset - most of the visible matter is now in the centre of the image, but lensing tells us that most of the mass lies further out.
Image: Composite Credit: X-ray: NASA/CXC/CfA/ M.Markevitch et al.;
Lensing Map: NASA/STScI; ESO WFI; Magellan/U.Arizona/ D.Clowe et al.
Optical: NASA/STScI; Magellan/U.Arizona/D.Clowe et al.
Some scientists believe that since the only observed effects of dark matter are gravitational, then perhaps our understanding of gravity is incomplete. It is possible that we are not observing a new type of matter, but that the laws of gravity as we understand them are wrong. As a result, many different modified gravity theories have arisen to explain the dark matter phenomenon. The Bullet cluster provides strong evidence for the existence of dark matter, as this offset between the light and mass is exactly what scientists expect to see if dark matter is real, and it is hard to explain under many theories of modified gravity.
If we know something about the distances to the galaxies we look at with our telescopes, lensing can also tell us about the nature of dark energy because the amount of dark energy affects how galaxies and clusters form and develop. Measuring their distribution with distance through gravitational lensing can help us constrain the amount of dark energy in the Universe to a higher degree of precision. The light from distant galaxies began travelling towards us many millions (or even billions) of years ago, providing a window into the early Universe. This means that it is also possible to work out if the amount of dark energy changes over time by observing galaxy structures at different distances from us. Thus, gravitational lensing is a clean probe of the Universe and has much to tell us about its two most mysterious components - dark matter and dark energy.
*In fact, this is one way in which gravitational lensing differs from optical lensing, as gravitational lensing is independent of the wavelength (colour) of the light. All light rays are bent the same amount by gravity. Optical lenses cause light of different colours to bend by varying amounts in a process called dispersion, resulting in the splitting of light into rainbows. There is no such analogous effect with gravitational lensing.
Author: Emma Grocutt | <urn:uuid:344ba3e7-cee6-4b1a-9c42-64834335863f> | 4 | 2,719 | Knowledge Article | Science & Tech. | 46.809957 |
What Are They, and Where Are They?
SUMMARY: The jovian planets are essentially big balls of gas, each surrounded by many moons and rings.
Jupiter, Saturn, Uranus and Neptune collectively make up the group known as the jovian planets. The general structures of the jovian planets are opposite those of the terrestrial planets. Rather than having thin atmospheres around relatively large rocky bodies, the jovian planets have relatively small, dense cores surrounded by massive layers of gas. Made almost entirely of hydrogen and helium, these planets do not have solid surfaces.
Unlike the spherical shapes of terrestrial planets, the jovian planets are all slightly oblate. The jovian planets rotate much faster than any of the terrestrial worlds. Gravity by itself would make a planet spherical, but their rapid rotation flattens out their spherical shapes by flinging material near the equator outward.
Observations of clouds at different latitudes suggest that the jovian planets rotate at different speeds near their equators than near their poles.
- Jupiter : 10 hours
- Saturn : 10 hours
- Uranus : 16-17 hours
- Neptune : 16-17 hours
Moons and Rings
After size, perhaps the most noticeable difference between the jovian and terrestrial planets involves moons and rings. The terrestrial planets are nearly isolated worlds, with only Earth (1 moon) and Mars (2 moons) orbited by any moons at all. In contrast, many moons and rings orbit each of the jovian planets.
All four jovian planets have rings, although only Saturn's rings are easily visible from Earth. Rings are composed of countless small pieces of rock and ice, each orbiting its planet like a tiny moon. The rings look flat because the particles all orbit in essentially the same plane. The rings are located closer to the planets than any of their moderately sized or large moons, but the inner edge of the rings is still well above the planet's cloud tops.
Why so Different?
Why are the jovian planets so different from the terrestrial planets? We can trace almost all the differences to the formation of the solar system. The frost line marked an important dividing point in the solar nebula. Within the frost line, temperatures were too high for hydrogen ices to form. The only solid particles were made of metal and rock. Beyond the frost line, where hydrogen compounds could condense, the solid particles included ices as well as metal and rock.
While terrestrial planets accreted from planetesimals made of rocks and metals, they ended up too small to capture significant amounts of the abundant hydrogen and helium gas in the solar nebula. The jovian planets, however, formed farther from the Sun, where ices and rocks were plentiful. Their cores accreted rapidly into large clumps of ice and rock. Eventually, they got so large that they captured a large amount of hydrogen and other gases from the surrounding nebula with their enormous gravity.
Cloud Altitudes in Jovian Planet Atmospheres
The atmospheres of Jupiter and Saturn are made almost entirely of hydrogen and helium, although there is some evidence they contain hydrogen compounds. Uranus and Neptune are made primarily of hydrogen compounds, with smaller traces of hydrogen, helium, metal and rock. The most common hydrogen compounds are methane (CH4), ammonia (NH3), and water (H2O).
The farther away a planet is from the Sun, the cooler its atmosphere will be. This means that the same gases will condense to form clouds at different altitudes on different planets because the condensation of a gas requires a specific amount of pressure and temperature. Ammonia, ammonium hydrosulfide and water make up the 3 cloud layers of Jupiter and Saturn. You can see from the graph to the right that these condense at lower altitudes in Saturn's atmosphere than they do in Jupiter's atmosphere.
The cores of all four jovian planets are made of some combination of rock, metal and hydrogen compounds. Jupiter and Saturn have similar interiors, with layers extending outward of metallic hydrogen, liquid hydrogen, gaseous hydrogen, and topped with a layer of visible clouds. Unlike Jupiter and Saturn, Uranus and Neptune have cores of rock and metal, but also water, methane and ammonia. The layer surrounding the core is made of gaseous hydrogen, covered with a layer of visible clouds similar to Jupiter's and Saturn's.
Just like the terrestrial planets, the deeper you go, the hotter and denser it gets. An increase in temperature and density means an increase in pressure.
A Pie Slice of Jupiter's Density (image: Eric Weisstein's World of Astronomy)
In summary, the recent observational results in cosmology strongly suggest that we live in a universe that is spatially flat, expanding at an accelerated rate, homogeneous and isotropic on large scales, and approximately 13 billion years old. The expansion of the universe is described by Eq. (63), and its metric by Eq. (64). We have seen that roughly 96% of the matter and energy in the universe consists of cold dark matter and the cosmological constant. We now know basic facts about the universe much more precisely than we ever have. However, since we cannot speak with confidence about the nature of dark matter or the cosmological constant, perhaps the most interesting thing about all of this is that knowing more about the universe has only shown us just how little we really understand.
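Equations (63) and (64) themselves are not reproduced in this excerpt; for a spatially flat universe they presumably take the standard Friedmann and FLRW forms, shown here for orientation rather than quoted from the paper:

    \left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho + \frac{\Lambda}{3},
    \qquad
    ds^{2} = -dt^{2} + a^{2}(t)\left(dr^{2} + r^{2}\,d\Omega^{2}\right)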
As mentioned previously, the most common view of the cosmological constant is that it is a form of vacuum energy due, perhaps, to quantum fluctuations in spacetime. However, within the context of general relativity alone there is no need for such an interpretation; Λ is just a natural part of the geometric theory. If, however, we adopt the view that the cosmological constant belongs more with the energy-momentum tensor than with the curvature tensor, this opens up a host of possibilities, including the possibility that Λ is a function of time.
In conclusion, it is also important to state that although this paper emphasizes what the recent results say about our present universe, these results also have strong implications for our understanding of the distant past and future of the universe. For an entertaining discussion of the future of the universe see Ref. 42. Concerning the past, the results on anisotropies in the CMB have provided strong evidence in favor of the inflationary scenario, which requires a Λ-like field in the early universe to drive the inflationary dynamics. To quote White and Cohn, "Of dozens of theories proposed before 1990, only inflation and cosmological defects survived after the COBE announcement, and only inflation is currently regarded as viable by the majority of cosmologists."
We would like to especially acknowledge (and recommend) the excellent website of Dr. Wayne Hu. This resource was very useful in helping us to learn about the physics of CMB anisotropies. We are also grateful to Dr. Manasse Mbonye for making several useful suggestions. | <urn:uuid:adca7275-9b55-40de-9ea2-2b1656420317> | 3.046875 | 488 | Academic Writing | Science & Tech. | 32.072057 |
- This is about distribution in a mathematical sense, other meanings can be found at distribution
In mathematics, a distribution is a generalisation of a function. Distributions were introduced in the middle of the 20th century by Laurent Schwartz, who received a Fields Medal for his work on them. The Fields Medal is comparable to a Nobel Prize in mathematics, which does not exist.
Distributions were introduced to model certain concepts from physics, such as a point mass in space. The Dirac delta function, for example, can model the electric charge of a single point in space. The Dirac delta function is zero everywhere except at one point, where it is infinitely large; at the same time, its integral needs to be 1. No ordinary function can meet these criteria, so the delta "function" only makes sense when integration is understood in the distributional sense.
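In symbols (a standard formulation added here for concreteness, not taken from the original article), the delta distribution δ acts on a smooth test function φ by evaluation at the origin:

    \delta(x) = 0 \ \ (x \neq 0), \qquad
    \int_{-\infty}^{\infty} \delta(x)\,dx = 1, \qquad
    \langle \delta, \varphi \rangle = \varphi(0)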
Today, distributions are used in different fields of mathematics and physics, for example in partial differential equations and Fourier analysis, which are important for quantum electrodynamics and signal processing.
Measure Chlorophyll in Field?
sl5dl at cc.usu.edu
Mon Jun 12 14:57:11 EST 1995
bo1000 at aol.com (Bo1000) wrote:
>Does anyone know how to measure chlorophyll-a in lake water? We would
>prefer a small device we could carry in a boat in order to understand the
>photosynthetic activity of algae, etc.
We measure foliar chlorophyll almost daily using a SPAD 502 chlorophyll meter. However, for measuring chl underwater, you may need a spectrophotometer or even a 4-band radiometer (which imitates LANDSAT Thematic Mapper bands). Then you could relate chlorophyll content to the ratio between the blue and the green bands. Check the literature on phytoplankton determinations from satellites!
Spectrum to RGB Conversion
In 1931, the International Commission on Illumination (CIE) defined three standard primaries, called X, Y and Z. The corresponding functions x̄(λ), ȳ(λ) and z̄(λ) are called color-matching functions. The color-matching function ȳ(λ) is defined to match the eye's sensitivity to brightness; the other two do not correlate with any perceptual attributes. X, Y and Z represent the weights of the respective color-matching functions needed to approximate a particular spectrum.

To match a color with power distribution P(λ), the amounts of the primaries are given by the following formulae:

    X = k ∫ P(λ) x̄(λ) dλ
    Y = k ∫ P(λ) ȳ(λ) dλ
    Z = k ∫ P(λ) z̄(λ) dλ

where k for self-luminous bodies, such as a CRT, is equal to 680 lumens per watt.
    R =  3.240479·X − 1.537150·Y − 0.498535·Z
    G = −0.969256·X + 1.875992·Y + 0.041556·Z
    B =  0.055648·X − 0.204043·Y + 1.057311·Z
The range for valid R, G, B values is [0,1]. Note, this matrix has negative coefficients. Some XYZ color may transform to R, G, B values that are negative or greater than one. This means that not all visible colors can be produced using the RGB system. | <urn:uuid:28e12795-bc1d-4bf0-a9b7-fde54add8653> | 3.8125 | 242 | Knowledge Article | Science & Tech. | 65.031626 |
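A minimal sketch of the conversion in code (the XYZ weights in the example are illustrative, and no particular gamut-mapping strategy is implied):

    # CIE XYZ -> linear RGB using the matrix above; flags out-of-gamut colors.
    M = (( 3.240479, -1.537150, -0.498535),
         (-0.969256,  1.875992,  0.041556),
         ( 0.055648, -0.204043,  1.057311))

    def xyz_to_rgb(x, y, z):
        return tuple(a * x + b * y + c * z for (a, b, c) in M)

    rgb = xyz_to_rgb(0.25, 0.40, 0.10)   # illustrative XYZ values
    print(rgb, "in gamut:", all(0.0 <= ch <= 1.0 for ch in rgb))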
clone, group of organisms, all of which are descended from a single individual through asexual reproduction, as in a pure cell culture of bacteria. Except for changes in the hereditary material that come about by mutation, all members of a clone are genetically identical. In 1962 John Gurdon was the first to clone an animal when he transferred cell nuclei from adult frog intestinal cells and injected them into egg cells from which the nucleus had been removed; the eggs then developed into tadpoles. Laboratory experiments in in vitro fertilization of human eggs led in 1993 to the "cloning" of human embryos by dividing such fertilized eggs at a very early stage of development, but this technique actually produces a twin rather than a clone. In a true mammalian clone (as in Gurdon's frog clone) the nucleus from a body cell of an animal is inserted into an egg, which then develops into an individual that is genetically identical to the original animal.
Later experiments in cloning resulted in the development of a sheep from a cell of an adult ewe (in Scotland, in 1996), and since then rodents, cattle, swine, and other animals have also been cloned from adult animals. Despite these trumpeted successes, producing cloned mammals is enormously difficult, with most attempts ending in failure; cloning succeeds 4% or less of the time in the species that have been successfully cloned. In addition, some studies have indicated that cloned animals are less healthy than normally reproduced animals.
In 2001 researchers in Massachusetts announced that they were trying to clone humans in an attempt to extract stem cells. The National Academy of Sciences, while supporting (2001) such so-called therapeutic or research cloning, has opposed (2002) the cloning of humans for reproductive purposes, deeming it unsafe, but many ethicists, religious and political leaders, and others have called for banning human cloning for any purpose. South Korean scientists announced in 2004 that they had cloned 30 human embryos, but an investigation in 2005 determined that the data had been fabricated.
See G. Kolata, Clone (1997).
The Gulf oil spill, terrible though it is, has focused attention on one of the least-known environments on earth. Scientists used to believe that the deep ocean was uninhabited. As scientist Tim Flannery explains, “The eternal dark, the almost inconceivable pressure, and the extreme cold that exist below one thousand meters were, [scientists] thought, so forbidding as to have all but extinguished life. The reverse is in fact true....(Below 200 meters) lies the largest habitat on earth.”
While less than 10% of this area has been explored by humans, what we have discovered to date has found its way into children’s books filled with tantalizing glimpses of ten-foot-long red worms and the enormous clams, crabs and tube worms that thrive around deep hydrothermal vents.
In “Creeps from the Deep,” Leighton Taylor opens with clear explanations of the weird conditions deep under the ocean – darkness, intense pressure and little food. He introduces animals whose names indicate the horror they inspire: the black sea dragon, the vampire squid and the viperfish are just a few. Stunning color photographs and clear text will help children understand not just how strange these animals are, but how cleverly they have adapted to extreme conditions.
How have scientists learned about deep ocean life? Deborah Kovacs’ “Dive to the Deep Ocean” tells the story through a history of the “Alvin,” the first manned submersible to dive up to three miles down into the ocean.
The story of Alvin is a story of human ingenuity, but it’s also a story about the value of persistence. First launched in 1964, over the years the Alvin has been trapped in a narrow fissure in the sea floor, filled with water and sunk to the bottom of the sea (luckily, the passengers made it to safety, and the Alvin was finally spotted and brought to the surface), and been disassembled and reassembled every few years to ensure its safety. Over the last forty years, Alvin has been fitted with the most up-to-date technology, and its cameras, robotic arms and halogen lights allow scientists to illuminate the deep-sea darkness and record what they find there.
It was in the 1970s that the Alvin first descended to the hydrothermal vents near the Galapagos Islands. Expecting to see a desert of mud and rocks, scientists were astonished to discover giant creatures living in water that was not only much hotter than the surrounding seawater but filled with chemicals that would be deadly to most forms of life.
Research like this underlies the discoveries in “Beneath Blue Waters” by Deborah Kovacs and Kate Madin, which describes the creatures found from sea level down to the benthopelagic zone at the bottom of the ocean. Here you’ll meet the deep-sea cucumber, translucent in the submersible’s spotlight, and the enormous jellyfish known as Deepstaria enigmatica, so big that it takes several moments for its body to move past the sub’s window.
Steve Jenkins uses intricate paper collage illustrations in “Down, Down, Down: A Journey to the Bottom of the Sea.” Sea life from the very surface of the ocean down to the benthopelagic depths is displayed against a background that gets darker as each page progresses downwards. Sidebars and endnotes offer tantalizing information about the mysterious creatures that live in this recently discovered and now endangered environment.
First published in the Free Lance-Star on May 25, 2010. | <urn:uuid:85b1daf7-8be7-4925-8aa2-365392d6c8f1> | 3.671875 | 746 | Content Listing | Science & Tech. | 44.638216 |
The orange K type star is towards the dimmer and cooler end of the sequence of spectral types, although still hot and bright enough for any planets to be good candidates for colonization. These stars are distinguished by a strong metallic line, and molecular bands of CH and CN.
These stars are somewhat like Sol, although rather smaller and dimmer. The statistics for K type stars are given below:
| Spectral type | Luminosity (Sol = 1) | Temperature (Kelvin) | Mass (Sol = 1) | Radius (Sol = 1) | Density (g/cm³) |
While a K0 type star has a mass of about 0.8 times that of Sol, its luminosity is only about 0.4 times as much. The K5 spectral type is even worse off: its luminosity is only one fifth that of Sol, with a surface temperature of 4,900 K. But what they lack in brightness these stars make up in longevity; even the K0 type will spend some 17 billion years on the main sequence. While these stars have a fairly small life-zone as far as terragen comfort goes, cold-adapted life benefits, and Europan Type worlds and life may be common if conditions are favourable. Arean Type worlds are also very common, and these are always good candidates for terraforming. K-type solar systems are therefore quite popular among development corporations and colonists.
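As a rough order-of-magnitude sketch of this trade-off between brightness and longevity, using standard scaling relations (the K5 mass of 0.7 Sol is an assumed illustrative value, and the simple M/L lifetime scaling is crude: it gives about 20 billion years for K0, the same order as the 17 billion quoted above):

    import math

    def habitable_zone_au(luminosity_solar):
        # Distance receiving Earth-like flux scales as sqrt(L / Lsun), in AU.
        return math.sqrt(luminosity_solar)

    def main_sequence_gyr(mass_solar, luminosity_solar, t_sun_gyr=10.0):
        # Lifetime ~ (fuel / burn rate) ~ (M / L) times the Sun's ~10 Gyr.
        return t_sun_gyr * mass_solar / luminosity_solar

    for name, mass, lum in (("K0", 0.8, 0.4), ("K5", 0.7, 0.2)):
        print(f"{name}: life-zone ~{habitable_zone_au(lum):.2f} AU, "
              f"lifetime ~{main_sequence_gyr(mass, lum):.0f} Gyr")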
An interesting prime number spiral was discovered in 1963 by Stanislaw M. Ulam, and is now called "the Ulam spiral". It reveals a strange property of the prime numbers.
A positive integer (1, 2, 3, ... ) greater than 1 is called prime if its only divisors are 1 and itself. The first few prime numbers are 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, ... The series is infinite, since there is no largest prime number. The proof of this goes back to the ancient Greek mathematician Euclid.
The Ulam spiral of prime numbers is constructed as follows: Consider a rectangular grid. We start with the central point and arrange the positive integers in a spiral fashion (anticlockwise), as shown below left. The prime numbers are then marked (here with blue boxes). Since the primes occur in an irregular manner in the sequence of numbers, one might expect that in this grid they would occur more or less at random, and so form something like a random pattern. But, on the contrary, there is a clear tendency for the prime numbers to form diagonal lines. This can be seen more clearly in the image below right, in which each of the 70,225 pixels corresponds to a position in the number sequence and the primes are marked by white pixels. This pattern is puzzling, since no complete explanation has been given for why the prime numbers should line up in this way.
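A minimal sketch of how such a spiral can be generated and the primes marked (an illustrative program; the grid size and symbols are arbitrary choices):

    def is_prime(n):
        if n < 2:
            return False
        if n % 2 == 0:
            return n == 2
        f = 3
        while f * f <= n:
            if n % f == 0:
                return False
            f += 2
        return True

    def spiral_positions(n_max):
        # Anticlockwise square spiral: step lengths run 1, 1, 2, 2, 3, 3, ...
        pos = {}
        x = y = 0
        dx, dy = 1, 0                   # start by moving right
        step_len, taken, turns = 1, 0, 0
        for n in range(1, n_max + 1):
            pos[n] = (x, y)
            x, y = x + dx, y + dy
            taken += 1
            if taken == step_len:
                taken = 0
                dx, dy = -dy, dx        # 90-degree anticlockwise turn
                turns += 1
                if turns % 2 == 0:
                    step_len += 1
        return pos

    SIZE = 15                           # odd, so that 1 sits in the centre
    half = SIZE // 2
    grid = [["." for _ in range(SIZE)] for _ in range(SIZE)]
    for n, (x, y) in spiral_positions(SIZE * SIZE).items():
        if is_prime(n):
            grid[half - y][half + x] = "#"   # primes show up as '#'
    print("\n".join(" ".join(row) for row in grid))

Even at this tiny scale, some of the diagonal alignments are already visible.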
We recently asked National Weather Service forecaster Alex Tardy to explain why San Diego, Riverside and San Bernardino counties have had so many thunderstorms in recent weeks. (Turns out it is a seasonal thing, mostly.) We chatted Tardy up again when we became curious about the life cycle of a thunderstorm. Here's some of the bullet points he provided, based on the type of photographs we've been publishing.
Q: How long does it take for a thundercloud to form and fall apart?
A: Usually a one hour cycle, from cumulus to cumulonimbus (thunderstorm). Individual storms can last longer in the right environment (wind shear, boundaries, high instability).
Q: Is there a big difference in wind speed -- and wind direction -- at various altitudes in the clouds?
A: Wind speed is light through the height of the cloud (its growing straight up). Temps are likely in the 20s at about 18,000 feet in the storm ... The strong storms we have been seeing of late (lots of lightning and heavy rain) are upwards of minus 40 degrees at the top.
Q: Why are there cotton ball-shaped areas in thunderclouds?
A: Those are the turbulent areas of the cloud -- rising air which condenses into cloud droplets as it rises and enters cooler air.
Q: Some areas of these clouds are gray. Why?
A: They are likely lower, younger (newer) clouds that are in the shadow of the tall thundercloud (so there is no sun on them.) The sun angle appears rather low (in the late afternoon) and the side of the storm is showing up bright (higher portions).
Q: Does the entire cloud move in the same direction? How fast do clouds like this move?
A: Usually the answer is no for a mature thunderstorm. But in this developing cumulus (towering cumulus or young thunderstorm) the cloud mass is likely mostly moving in the same direction except, possibly, the lowest clouds or the bottom of the storm (if you were underneath). | <urn:uuid:6ab52497-f273-4726-b0ef-876bb34530a7> | 3.515625 | 432 | Q&A Forum | Science & Tech. | 61.621784 |
Can You See Orion?
The interactive animation below shows you what the constellation Orion might look like.
You can adjust the darkness of the sky by moving the top slider back and forth. Move it to the right (towards the tent) to see Orion in a dark sky, the way it might look if you were out camping far away from any lights. Move the slider to the left (towards the streetlight) to see how Orion might look from a city where there are many lights around. "Magnitude" is a term astronomers use to describe how bright stars are. Stars with a magnitude that is low, like 1 or 2, are quite bright. Stars with higher number magnitudes, like 4 or 5, are dimmer. The dimmest stars that most people can see with their naked eye are around magnitude 6. The slider shows the "magnitude limit" of the dimmest stars you could see with different amounts of ambient light.
The bottom slider, on top of the spinning globe, lets you change where you are on Earth. Move it up and down to see how Orion might look from different latitudes.
Astronomy

Astronomy, which etymologically means "law of the stars" (from Greek: αστρονομία = άστρον + νόμος), is a science involving the observation and explanation of events occurring outside Earth and its atmosphere. It studies the origins, evolution, and physical and chemical properties of objects that can be observed in the sky (and are outside the earth), as well as the processes involving them.
[Image: a lunar crater photographed by the crew of Apollo 11 as they circled the Moon in 1969. Located near the center of the far side of Earth's Moon, its diameter is about 93 kilometers (58 miles).]
Astronomy is one of the few sciences where amateurs still play an active role, especially in the discovery and monitoring of transient phenomena. Astronomy is not to be confused with astrology, a pseudoscience that attempts to predict a person's destiny by tracking the paths of astronomical objects. Although the two fields share a common origin, they are quite different; astronomers embrace the scientific method, while astrologers do not.
Divisions of astronomy
In its earliest days, going back to ancient Greece and other ancient civilizations, astronomy consisted largely of astrometry, measuring positions of stars and planets in the sky. Later, the work of Kepler and Newton paved the way for celestial mechanics, mathematically predicting the motions of celestial bodies interacting under gravity, and solar system objects in particular. Much of the effort in these two areas, once done largely by hand, is highly automated nowadays, to the extent that they are rarely considered as independent disciplines anymore. Motions and positions of objects are now easily known, and modern astronomy concerns itself much more with trying to observe and understand the actual physical nature of celestial objects—what makes them "tick".
Since the twentieth century the field of professional astronomy has tended to split into observational astronomy and theoretical astrophysics. Although most astronomers incorporate elements of both into their research, because of the different skills involved most professional astronomers tend to specialize in one or the other. Observational astronomy is concerned mostly with acquiring data, which involves building and maintaining instruments and processing the resulting data; this branch is at times referred to as "astrometry" or simply as "astronomy." Theoretical astrophysics is concerned mainly with working out the observational implications of different models, and involves working with computer or analytic models.
The fields of study are also categorized in two other ways: by "subject", usually according to the region of space (e.g., Galactic astronomy) or the "problems addressed" (such as star formation or cosmology); or by the means used to obtain information.
By subject or problem addressed
[Image: a dust devil on Mars. Photographed by Mars Global Surveyor, the long dark streak is formed by a moving, swirling column of Martian atmosphere (with similarities to a terrestrial tornado). The dust devil itself (the black spot) is climbing the crater wall. The streaks on the right are sand dunes on the crater floor.]
Also, there are other disciplines that may be considered part of astronomy:
- Astrobiology: the study of the advent and evolution of biological systems in the universe.
- Astrometry: the study of the position of objects in the sky and their changes of position. Defines the system of coordinates used and the kinematics of objects in our galaxy.
- Cosmology: the study of the universe as a whole and its evolution.
- Galactic astronomy: the study of the structure and components of our galaxy and of other galaxies.
- Extragalactic astronomy: the study of objects (mainly galaxies) outside our galaxy.
- Galaxy formation and evolution: the study of the formation of the galaxies, and their evolution.
- Planetary Sciences: the study of the planets of the solar system.
- Stellar astronomy: the study of the stars.
- Stellar evolution: the study of the evolution of stars from their formation to their end as a stellar remnant.
- Star formation: the study of the condition and processes that led to the formation of stars in the interior of gas clouds, and the process of formation itself.
See list of astronomical topics for a more exhaustive list of astronomy-related pages.
Ways of obtaining information
In astronomy, information is mainly received from the detection and analysis of electromagnetic radiation (photons), but information is also carried by cosmic rays, neutrinos, meteors, and, in the near future, gravitational waves (see LIGO and LISA).
A traditional division of astronomy is given by the region of the electromagnetic spectrum observed:
[Image: several blue, loop-shaped objects that are multiple images of the same galaxy. They have been duplicated by the gravitational lens effect of the cluster of yellow galaxies near the photograph's center. The lens is produced by the cluster's gravitational field, which bends light to magnify and distort the image of a more distant object.]
- Optical astronomy describes the techniques used to detect and analyze light in and slightly around the wavelengths that can be detected with the eyes (about 400 - 800 nm). The most common tool is the telescope, with electronic imagers and spectrographs.
- Infrared astronomy deals with the detection of infrared radiation (wavelengths longer than red light). The most common tool is the telescope but with the instrument optimized for infrared. Space telescopes are also used to eliminate noise (electromagnetic interference) from the atmosphere.
- Radio astronomy uses completely different instruments to detect radiation of wavelengths of mm to cm. The receivers are similar to those used in radio broadcast transmission (which uses those wavelengths of radiation). See also Radio telescopes.
- High-energy astronomy, which includes X-ray astronomy, gamma-ray astronomy, and ultraviolet astronomy.
Optical and radio astronomy can be performed with ground-based observatories, because the atmosphere is transparent at those wavelengths. Infrared light is heavily absorbed by water vapor, so infrared observatories have to be located in high, dry places or in space. The atmosphere is opaque at the wavelengths used by X-ray astronomy, gamma-ray astronomy, UV astronomy and, except for a few wavelength "windows", far infrared astronomy, so observations can be carried out only from balloons or space observatories.
In the early part of its history, astronomy involved only the observation and prediction of the motions of the objects in the sky that could be seen with the naked eye. The Rigveda refers to the 27 constellations associated with the motions of the sun and also the 12 zodiacal divisions of the sky. The ancient Greeks made important contributions to astronomy, among them the definition of the magnitude system. The Bible contains a number of statements on the position of the earth in the universe and the nature of the stars and planets, most of which are poetic rather than literal; see Biblical cosmology. In 500 AD, Aryabhata presented a mathematical system that took the earth to spin on its axis and considered the motions of the planets with respect to the sun.
Astronomy was mostly stagnant in medieval Europe, but flourished meanwhile in the Arab world. The late 9th century Islamic astronomer al-Farghani (Abu'l-Abbas Ahmad ibn Muhammad ibn Kathir al-Farghani) wrote extensively on the motion of celestial bodies. His work was translated into Latin in the 12th century. In the late 10th century, a huge observatory was built near Tehran, Iran, by the astronomer al-Khujandi who observed a series of meridian transits of the Sun, which allowed him to calculate the obliquity of the ecliptic. In Persia, Omar Khayyam (Ghiyath al-Din Abu'l-Fath Umar ibn Ibrahim al-Nisaburi al-Khayyami) compiled many tables and performed a reformation of the calendar that was more accurate than the Julian and came close to the Gregorian.
During the Renaissance Copernicus proposed a heliocentric model of the Solar System. His work was defended, expanded upon, and corrected by Galileo Galilei and Johannes Kepler. Kepler was the first to devise a system that described correctly the details of the motion of the planets with the Sun at the center. However, Kepler did not succeed in formulating a theory behind the laws he wrote down. It was left to Newton's invention of celestial dynamics and his law of gravitation to finally explain the motions of the planets.
Stars were found to be faraway objects. With the advent of spectroscopy it was proved that they were similar to our own sun, but with a wide range of temperatures, masses and sizes. The existence of our galaxy, the Milky Way, as a separate group of stars was only proven in the 20th century, along with the existence of "external" galaxies, and soon after, the expansion of the universe seen in the recession of most galaxies from us. Cosmology made huge advances during the 20th century, with the model of the big bang heavily supported by the evidence provided by astronomy and physics, such as the cosmic microwave background radiation, Hubble's Law and cosmological abundances of elements.
For a more detailed history of astronomy, see the history of astronomy.
[Image: the ejection of gas from the dying star at the center has symmetrical patterns, unlike the chaotic patterns expected from an ordinary explosion.]
Timelines in astronomy
Harmonia Macrocosmica by Andreas Cellarius
Atlas of the heavens as seen by the astronomers of the time of its 1661 printing: Copernicus, Ptolemy, Brahe, and Aratus. Entire book has been digitized and the images may be browsed or searched.
History and Philosophy of Western Astronomy
The History chapter of an introductory astronomy course.
Jewish Astronomy, From Ancient to Modern Times
Astronomy in Israel from Og's Circle to the Wise Observatory.
The Aryabhatiya of Aryabhata
The oldest exact astronomic constant? The ratio of earth rotations to lunar orbits in Aryabhata's AD 498 writing.
Astronomy in Japan
Historical and modern Japanese astronomy, and its place in Japanese culture
The Star of Bethlehem
An investigation of the science and history which bear on the mysterious star said to have accompanied the advent of Christ.
Northern California History of Astronomy Luncheon and Discussion Association
Announcements of discussions (in Oakland, California), open to the public, on various astronomy history topics, and an archive of past discussions
Rundetaarn (The Round Tower)
Astronomy in Denmark: Brahe, Roemer, Hertzsprung.
Hindu Cosmological Time Cycles
An accurate calendar and the progenitor of the sexagesimal (base 60) "degree, minutes, seconds" measurement system.
History of Astronomy in Ancient India
Eclipse calculation, heliocentric theory, size of the world.
History of Astronomy Pages
Frederik Kaiser (1808-1872) and the professionalisation of Dutch astronomy; The "Lost Letters" of J.C. Kapteyn (1851-1922); Many history of astronomy and history of science links with a Dutch flavour.
Astronomical References in the Ruba'iyât of Omar Khayyam
Excerpts and commentary with reproductions of some of Elihu Vedder's illustrations.
Archive of the History of Astronomy Discussion Group, the mailing list for scholars in this field.
Tycho's Star Maps
Celestial atlases and globes on exhibit at the Royal Observatory, Greenwich.
X-ray Astronomy at Goddard
Describes observations using balloons, rockets and satellites.
Historical Astronomy Division
American Astronomical Society division devoted to history, with a link to pages on the history of the society itself.
A Brief SETI Chronology
A timeline of the search for extraterrestrial intelligence, from Morrison and Coconni to SETI@home.
A Science Odyssey - Physics and Astronomy
PBS articles about 20th century astronomy and physics.
Electronic Newsletter for the History of Astronomy
Complete archive and subscription information.
Biennial History of Astronomy Conferences
Workshops at Notre Dame, papers presented, abstracts, group pictures of attendees.
From Stargazers to Starships
Tutorial/historical exposition of the motion of Earth in space, Newtonian mechanics and spaceflight, on a high school level.
The Babylonian Theory of the Planets
A substantial look at the subject, and a review of N. M. Swerdlow's book.
The Manchester Astronomical Society History
The first hundred years, list of presidents, list of archived documents.
A rare celestial atlas discovered in the library of the Manchester Astronomical Society.
Russell A. Hulse - Nobel Lecture
The discovery of the binary pulsar.
The Cosmic Microwave Background Radiation
Robert Woodrow Wilson Nobel Lecture.
Yahoo group moderated by Stuart Williams, F.R.A.S.
This Month in the History of Astronomy
Short descriptions of important discoveries, events, and birthdays sorted by month.
Digital Archive of Historical Astronomy Pictures
Images from the history of astronomy, old telescopes, pictures of astronomers, observatories.
Features history of discoveries and references.
The Society For The History Of Astronomy
Academic and popular topics, with a focus on Britain.
Heavenly Mathematics: Cultural Astronomy
An interdisciplinary course on cultural astronomy.
Article by Owen Gingerich explores refinement and criticism of Ptolemaic astronomy, including religious influences on direction, and describes precursors to and influences on Copernicus.
Gene Smith's Brief History of Astronomy
Covers the development of this ancient science from days of Stonehenge (3100 BC) to the discovery of Pulsars (1968 AD). Includes related resource links.
Understanding Tidal Friction
Article by Peter Brosche on gaining understanding of changes in the Moon's orbit.
Big Ear Radio Observatory
Radio astronomy and the search for extraterrestrial intelligence.
Phase I of the Electronic History of Astronomy, developed in the Whipple Museum of the History of Science at Trinity College, Cambridge. Covers the history of instruments and techniques, themes such as astrology and calendar reform, and biographies of major historical astronomers.
History of SETI
An overview of the history of the search for extraterrestrial intelligence.
Roemer and the First Determination of the Velocity of Light
Burndy Library online publication of 1940 article detailing the background and details of Roemer's work.
Little Green Men, White Dwarfs or Pulsars?
A personal account by Jocelyn Bell Burnell on the discovery of pulsars.
History of Astronomy
Offers a history of the field and science. Features links to related sites, awards and contact details. Provided by the Working Group for the History of Astronomy.
Out of This World
Exhibition catalog for The Golden Age of the Celestial Atlas. Includes an historical essay and sample pictures.
Astronomy in Sweden 1860-1940
From the Uppsala University Newsletter for History of Science. | <urn:uuid:39e8c1d3-e73a-4de1-be32-a6841bb182f6> | 3.65625 | 3,185 | Knowledge Article | Science & Tech. | 27.184357 |
Karol Borsuk conjectured in 1933 that every bounded set in $\mathbb{R}^d$ can be covered by $d+1$ sets of smaller diameter. Jeff Kahn and I found a counterexample in 1993. It is based on the Frankl-Wilson theorem.

Let $\Omega$ be the set of $\pm 1$ vectors of length $n$. Suppose that $n = 4p$ and $p$ is a prime, as the conditions of the Frankl-Wilson theorem require. Let $X = \{x/\sqrt{n} : x \in \Omega\}$. All vectors in $X$ are unit vectors.

Consider the set $Y = \{x \otimes x : x \in X\}$. $Y$ is a subset of $\mathbb{R}^{n^2}$.

Remark: If $x \in \mathbb{R}^n$, regard $x \otimes x$ as the $n$ by $n$ matrix with entries $x_i x_j$.

It is easy to verify that:

$$\langle x \otimes x, y \otimes y \rangle = \langle x, y \rangle^2.$$

It follows that all vectors in $Y$ are unit vectors, and that the inner product between every two of them is nonnegative. The diameter of $Y$ is therefore $\sqrt{2}$. (Here we use the fact that the square of the distance between two unit vectors $u$ and $v$ is 2 minus twice their inner product; the inner product $\langle x, y \rangle^2$ attains 0 exactly when $x$ and $y$ are orthogonal.)

Suppose that $Z \subset Y$ has a smaller diameter. Write $Z = \{x \otimes x : x \in A\}$ for some subset $A$ of $X$. This means that $Z$ (and hence also $A$) does not contain two orthogonal vectors, and therefore by the Frankl-Wilson theorem

$$|A| \le 4\left(\binom{n-1}{0} + \binom{n-1}{1} + \cdots + \binom{n-1}{p-1}\right).$$

It follows that the number of sets of smaller diameter needed to cover $Y$ is at least

$$f(n) = \frac{2^{n-1}}{4\left(\binom{n-1}{0} + \binom{n-1}{1} + \cdots + \binom{n-1}{p-1}\right)},$$

since $Y$ contains $2^{n-1}$ distinct points ($x$ and $-x$ give the same tensor). This clearly refutes Borsuk's conjecture for large enough $n$: the dimension here is at most $n^2$, while $f(n)$ grows exponentially in $n$. Sababa.

Let me explain in a few more words how $f(n)$ looks when $n$ is large. The dominant binomial coefficient in the sum defining $f(n)$ is $\binom{n-1}{p-1}$. (Since every binomial coefficient in the sum is smaller than half the next binomial coefficient.) Hence the whole sum is smaller than $2\binom{n-1}{p-1}$. When $n$ is large this binomial coefficient is roughly $2^{nH(1/4)}$, where $H$ is the entropy function. To verify it and even to get better estimates you can use Stirling's formula.
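For readers who want to see the numbers, here is a quick computational check of this bound (my own sketch, added here; it assumes Python 3.8+ for math.comb). It prints the lower bound $f(n)$ next to $d+1$ for dimension $d = n^2$:

```python
# Evaluate f(n) = 2^(n-1) / (4 * sum_{i<p} C(n-1, i)) for n = 4p, p prime,
# and compare it with d + 1 in dimension d = n^2.
from math import comb

def pieces_lower_bound(p):
    n = 4 * p
    fw = 4 * sum(comb(n - 1, i) for i in range(p))   # Frankl-Wilson bound on |A|
    return n, 2 ** (n - 1) // fw                     # floor of the ratio

for p in (11, 13, 17, 19, 23):
    n, f = pieces_lower_bound(p)
    d = n * n
    print(f"p={p:2d}  n={n:3d}  d=n^2={d:5d}  f(n)={f}  d+1={d + 1}")
```

Once $f(n)$ exceeds $d+1$, which the loop shows happening already for moderate primes, Borsuk's conjecture fails in that dimension.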
We will further discuss Borsuk’s conjecture and related problems in a later post.
An extremely flattering description of the work of Jeff Kahn and me, in the context of surprises in mathematics and theoretical computer science, can be found in Dick Lipton's blog. I was very happy to read it.
Eqs. (61) and (62) provide maximum likelihood estimators only when the noise in which the signal is buried is Gaussian. There are general theorems in statistics indicating that the Gaussian noise is ubiquitous. One is the central limit theorem, which states that the mean of any set of variables with any distribution having a finite mean and variance tends to the normal distribution. The other comes from the information theory and says that the probability distribution of a random variable with a given mean and variance, which has the maximum entropy (minimum information) is the Gaussian distribution. Nevertheless, analysis of the data from gravitational-wave detectors shows that the noise in the detector may be non-Gaussian (see, e.g., Figure 6 in ). The noise in the detector may also be a non-linear and a non-stationary random process.
The maximum likelihood method does not require that the noise in the detector be Gaussian or stationary. However, in order to derive the optimum statistic and calculate the Fisher matrix we need to know the statistical properties of the data. The probability distribution of the data may be complicated, and the derivation of the optimum statistic, the calculation of the Fisher matrix components and the false alarm probabilities may be impractical. However, there is one important result that we have already mentioned. The matched-filter, which is optimal for the Gaussian case is also a linear filter that gives maximum signal-to-noise ratio no matter what the distribution of the data. Monte Carlo simulations performed by Finn for the case of a network of detectors indicate that the performance of matched-filtering (i.e., the maximum likelihood method for Gaussian noise) is satisfactory for the case of non-Gaussian and stationary noise.
Allen et al. [10, 11] derived an optimal (in the Neyman–Pearson sense, for weak signals) signal processing strategy for the case when the detector noise is non-Gaussian and exhibits tail terms. This strategy is robust, meaning that it is close to optimal for Gaussian noise but far less sensitive than conventional methods to the excess large events that form the tail of the distribution. This strategy is based on a locally optimal test that amounts to comparing a first non-zero derivative of the likelihood ratio with respect to the signal amplitude, evaluated at zero amplitude, with a threshold.
The non-stationarity in the case of Gaussian and uncorrelated noise can be easily incorporated into matched filtering (see Appendix C of ). Let us assume that a noise sample $n(k)$ in the data has a Gaussian pdf with a variance $\sigma^2(k)$ and zero mean ($k = 1, \ldots, N$, where $N$ is the number of data points). Different noise samples may have distributions with different variances. We also assume that the noise samples are uncorrelated; then the autocorrelation function of the noise is given by [see Eq. (39)]

$$K(k, l) = \sigma^2(k)\,\delta_{kl},$$

where $\delta_{kl}$ is the Kronecker delta.
In the remaining part of this section we review some statistical tests and methods to detect non-Gaussianity, non-stationarity, and non-linearity in the data. A classical test for a sequence of data to be Gaussian is the Kolmogorov–Smirnov test. It calculates the maximum distance between the cumulative distribution of the data and that of a normal distribution, and assesses the significance of the distance. A similar test is the Lilliefors test, but it adjusts for the fact that the parameters of the normal distribution are estimated from the data rather than specified in advance. Another test is the Jarque–Bera test, which determines whether sample skewness and kurtosis are unusually different from their Gaussian values.
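As a rough illustration (added here; it assumes SciPy is available, and the Lilliefors variant, which lives outside SciPy, is omitted), the first and third tests can be applied to a data segment like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.standard_normal(4096)      # stand-in for a detector noise segment

# Kolmogorov-Smirnov test against a standard normal, after standardizing;
# estimating mean/std from the data is exactly what Lilliefors corrects for.
z = (data - data.mean()) / data.std(ddof=1)
ks_stat, ks_p = stats.kstest(z, 'norm')

# Jarque-Bera test: flags unusual sample skewness and kurtosis.
jb_stat, jb_p = stats.jarque_bera(data)

print(ks_stat, ks_p, jb_stat, jb_p)
```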
A useful test to detect outliers in the data is Grubbs' test. This test assumes that the data have an underlying Gaussian probability distribution but are corrupted by some disturbances. Grubbs' test detects outliers iteratively. Outliers are removed one by one and the test is iterated until no outliers are detected. Grubbs' test is a test of the null hypothesis:

H0: there are no outliers in the data set,

against the alternate hypothesis:

H1: there is at least one outlier in the data set.

The Grubbs' test statistic is the largest absolute deviation from the sample mean in units of the sample standard deviation, so it is defined as

$$G = \frac{\max_{k=1,\ldots,N} |x(k) - \bar{x}|}{\sigma},$$

where $\bar{x}$ and $\sigma$ denote the sample mean and sample standard deviation of the data $x(k)$, respectively.
Grubbs' test has been used to identify outliers in the search of Virgo data for gravitational-wave signals from the Vela pulsar. A test to discriminate spurious events due to non-stationarity and non-Gaussianity of the data from genuine gravitational-wave signals has been developed by Allen. This test, called the time-frequency discriminator, is applicable to the case of broadband signals, such as those coming from compact coalescing binaries.
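A minimal sketch of the iterative Grubbs procedure described above (added here; the critical value uses the standard t-distribution formula for a two-sided Grubbs test, and SciPy is assumed):

```python
import numpy as np
from scipy import stats

def grubbs_critical(n, alpha=0.05):
    # Two-sided critical value for Grubbs' test at significance level alpha.
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    return (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))

def remove_outliers(x, alpha=0.05):
    x = np.asarray(x, dtype=float)
    while x.size > 2:
        dev = np.abs(x - x.mean())
        i = dev.argmax()
        G = dev[i] / x.std(ddof=1)             # Grubbs' statistic
        if G <= grubbs_critical(x.size, alpha):
            break                              # no outlier detected: stop
        x = np.delete(x, i)                    # remove it and iterate
    return x
```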
Let now $x(k)$ and $w(k)$ be two discrete-in-time random processes ($k = 1, \ldots, N$), and let $w(k)$ be independent and identically distributed (i.i.d.) random variables. We call the process $x(k)$ linear if it can be represented by

$$x(k) = \sum_{l} a(l)\,w(k-l),$$

where $a(l)$ are constant coefficients.
If Hypothesis 1 holds, we can test for linearity, that is, we have a second hypothesis testing problem:
If Hypothesis 4 holds, the process is linear.
Using the above tests we can detect non-Gaussianity and, if the process is non-Gaussian, non-linearity of the process. The distribution of the test statistic, Eq. (142), can be calculated in terms of $\chi^2$ distributions. For more details, see the cited references.
It is not difficult to examine non-stationarity of the data. One can divide the data into short segments and for each segment calculate the mean, the standard deviation, and an estimate of the spectrum. One can then investigate the variation of these quantities from one segment of the data to the other. This simple analysis can be useful in identifying and eliminating bad data. Another quantity to examine is the autocorrelation function of the data. For a stationary process the autocorrelation function should decay to zero. A test to detect certain non-stationarities, used for analysis of econometric time series, is the Dickey–Fuller test. It models the data by an autoregressive process and tests whether values of the parameters of the process deviate from those allowed by a stationary model. A robust test for detecting non-stationarity in data from gravitational-wave detectors has been developed by Mohanty. The test involves applying Student's t-test to Fourier coefficients of segments of the data. Still another, block-normal approach has been studied by McNabb et al. It identifies places in the data stream where the characteristic statistics of the data change. These change points divide the data into blocks in which the characteristics are stationary.
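The segment-by-segment check described at the start of this paragraph might look like the following sketch (added here; NumPy is assumed, and the per-segment spectrum is a crude periodogram without windowing or averaging):

```python
import numpy as np

def segment_statistics(data, seg_len=1024):
    n_seg = len(data) // seg_len
    segs = data[:n_seg * seg_len].reshape(n_seg, seg_len)
    means = segs.mean(axis=1)                  # watch these drift...
    stds = segs.std(axis=1, ddof=1)            # ...or jump between segments
    psds = np.abs(np.fft.rfft(segs, axis=1)) ** 2 / seg_len
    return means, stds, psds
```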
Living Rev. Relativity 15, (2012), 4
This work is licensed under a Creative Commons License. | <urn:uuid:901d87c5-259e-410b-a239-50321d8e73ff> | 3.40625 | 1,348 | Academic Writing | Science & Tech. | 35.875117 |
Ecology is the study of the interaction of living things with their environment.
Investigating ecosystems is difficult because of the huge number of biotic and abiotic factors. An area is studied by quadrat or transect sampling. Statistics like standard deviation and chi-squared are used to analyse the results. See Studying Ecosystems
Competition arises as a result of limited resources in an environment, and relates to natural selection. Predator and prey numbers vary according to the predator-prey cycle, where they rise and fall ... find out why! See Predation
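(For context, a minimal simulation of that rise-and-fall cycle is sketched here; it is an illustration added to these notes, using the classic Lotka-Volterra equations with arbitrary parameter values, and is not part of the original revision material.)

```python
# Euler integration of the Lotka-Volterra predator-prey model:
# prey grow and get eaten; predators grow by eating and die off.
def lotka_volterra(prey=10.0, pred=5.0, a=1.1, b=0.4, c=0.4, d=0.1,
                   dt=0.01, steps=5000):
    history = []
    for _ in range(steps):
        dprey = (a * prey - b * prey * pred) * dt
        dpred = (d * prey * pred - c * pred) * dt
        prey, pred = prey + dprey, pred + dpred
        history.append((prey, pred))
    return history   # prey and predator numbers rise and fall out of phase

cycle = lotka_volterra()
```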
Nutrient cycles look at how important molecules in an ecosystem are transferred; the two most important cycles are carbon and nitrogen, as these are essential organic molecules. Fixation, denitrification etc ... See Nutrient Cycles
Human Influence on Ecosystems
Humans have polluted ecosystems through changes in farming practice (monoculture, hedgerow removal), fertilisers causing eutrophication, and biomagnification of pesticides. But there are new techniques minimising these effects. See Human Influence
Food Chains and Energy
A food chain shows how energy is transferred through an ecosystem by eating. At each trophic level energy is lost, which is why a pyramid of numbers tends to show there are more producers than consumers. See Food Chains and Energy
The theory of variation and natural selection, famously proposed by Darwin. Evidence is found in melanism of the peppered moth: the darker coloured moths increased in number during the industrial revolution. See Evolution
What is an ecosystem? A population is kept stable by abiotic factors, predation and competition (red squirrels example). Succession is the process of a habitat being stabilised; this can be seen in sand dunes. See Ecosystems
Classification of Species
Taxonomy is putting organisms into categories: kingdom, phylum, class, order, family, genus and species. There are five kingdoms. Speciation is the creation of new species; this results from isolation. See Classification.
All living things must adapt in order to survive in their environment; the species that is better adapted will be able to live longer in this environment and pass on its genotype as much as possible through evolution. See Adaptation
Alaska creatures without us
By Ned Rozell

In Alan Weisman's book, The World Without Us, the author ponders "a world from which we all suddenly vanished. Tomorrow." In last week's column, a few experts discussed the fate of Alaska structures if Alaskans were to disappear. This week, people who study Alaska's wildlife donate some thought to the subject.

Alaska's lack of people has benefited many species, including caribou, which still outnumber Alaskans, and salmon, which torpedo up our rivers with a staggering, wonderful density that was once seen all over the west coast of North America. Mark Wipfli has spent many hours on salmon streams throughout Alaska, and the University of Alaska biologist has thought many times of mankind's impact on salmon. If people were to disappear, Wipfli envisions a slow healing of damage done to salmon habitat. In Alaska, that means the recovery from logging and mining of streamside forests that provide everything from fish food in the form of insects to the contribution of dead trees to waterways (for erosion control and creation of eddies and other features good for salmon). Old-growth forests (with trees aged from 50 to 200 years) provide ideal conditions for salmon, just as those same trees have benefited us with stout building materials. The mining of minerals we use every day has also disrupted life for salmon.

"If we vanished . . . there would no longer be harvesting or overharvesting," Wipfli said. "Mining impacts to watersheds would slowly diminish, but would probably take a lot longer. And dams would eventually crumble and tumble, allowing rivers to flow like they once did." "The bottom line is salmon, and the marine, freshwater and terrestrial ecosystems that support them, would be better off without us," he said. "We continue to create barriers and stressors that collectively make it more difficult for salmon to thrive like they historically did, especially in the Lower 48."

Along with a robust population of salmon, Alaska also is not yet experiencing a bird shortage. "Birds from six of the seven continents come to Alaska to breed each year; that's billions and billions of birds," said biologist Sue Guers of the Alaska Bird Observatory in Fairbanks. "These numbers are estimates from now. Imagine what it was like before our time."

Alaska's many million acres of unpeopled river valleys and tundra plains would continue to attract birds if we were gone, but some species would miss us, Guers said. Ravens and gray jays that pick at what we leave behind in cities and towns would revert back to following wolf packs, and the pigeons that live in Fairbanks might find life impossible at 40 below without the warm exhaust of heated buildings. "Most other species would most likely benefit from humans disappearing," Guers said. "Think about all the habitat destruction going on in the Lower 48 and in Central and South America; loss of habitat is one of the major causes of species and biodiversity loss."

As years passed without humanity, nature would take down other bird barriers, including wind turbines, cellphone towers, and what Weisman cited as mankind's most damaging invention to birds, window glass. But he also wrote that housecats, the expert hunters that kill billions of songbirds worldwide each year, would do quite well without us.

Large mammals like moose and caribou on far-away hilltops might not miss us at all, said biologist Tom Paragi with the Alaska Department of Fish and Game.
"I don't think the remote portions of Alaska would be much different than we see today, because of intact habitats," Paragi said. "In contrast, if you 're-wilded' Iowa or Manhattan, you'd have smaller populations of white-tailed deer and raccoons after wolves, bears and cougars come back."

One of the biggest differences between Alaska and the rest of the world is that we have cleared so little of the landscape for farming here, Paragi said. That has allowed moose their willows and caribou their lichen, as well as the space to breed and move around. Hunters and predator-control programs affect local populations of moose and caribou, but Paragi said he doesn't think either would change much in abundance if people were to disappear. "Moose density near urban Alaska would almost certainly go down as human disturbance of vegetation ended and predators increased, but one lightning-caused fire could change the landscape in a few days more than even a large amount of logging," he said.

Each biologist in this story also mentioned the lingering effects of a warmer climate and how they may endure after people checked out. "If we generally have milder winters, species like wood bison, mule deer and fishers will likely continue to spread westward into Alaska, along with deer ticks and others along for the ride on the mammals," Paragi said. "A huge unknown is how long human-induced climate-change effects, including ocean acidification, will linger and continue to impact and change ecosystems once we're gone," said Wipfli, the salmon expert. "Undoubtedly at least hundreds, more like thousands, of years." "Problems like climate change, pollution and introduction of exotic species all over the world mean migrant birds are getting impacted by humans during all aspects of their life cycle," Guers said.
(video available at Science and Technology News Network)
In the search for suitable dwelling places within our own solar system, requirement number one is liquid water. If the Earth were too close to the sun, its oceans would boil away. Too far, and they'd be frozen over. Earth happens to be situated in a ring-shaped zone that's not too far in, not too far out. Astronomers call that ring the "Circumstellar Habitable Zone."
Now scientists say there's a comparable zone in our entire galaxy, the Milky Way, and Earth also occupies a prime location within it. Astrobiologist Guillermo Gonzalez of Iowa State University and his colleagues at the University of Washington described the Galactic Habitable Zone (GHZ) in the October issue of Scientific American magazine.
The search for other habitable solar systems intensified in October 1995, when astronomers at the Geneva Observatory in Switzerland discovered the first known planet around another star. Since then, 73 more Jupiter-sized planets have been added to the ever-growing list of extrasolar planets. (Earth-sized planets are still too small for astronomers to detect).
As the data piled up, Gonzalez studied it, asking what distinguishes stars with planets from stars without planets. "I found that stars with planets have a much higher concentration of heavy elements in their atmospheres compared to stars without planets," he says. In fact, stars with planets have heavy-element concentrations similar to the Sun's. Those stars, including the Sun, are found in a ring in the Milky Way's disk at a constant distance from the center.
Gonzalez says that's because "you need a certain minimum concentration of heavy elements in a forming stellar system in order for it to be accompanied by giant planets." Stars close to the center of the Milky Way have a higher concentration of heavy elements, while stars further away have a lower concentration of heavy elements. So, in the galactic real-estate market, if you're too far out you won't even find a lot to build on.
But that doesn't mean closer is better. While stars closer to the center of the Milky Way might have planets, their distance from a black hole defines whether they are safe. "That's where you find high energy radiation that's lethal to most forms of life," says Gonzalez.
And even within the boundaries of the GHZ there are neighborhoods to be avoided. "The habitable zone does not include the spiral arms in the galaxy because they're very dangerous places for life," says Gonzalez. Hazards in the spiral arms include supernova explosions, which also produce deadly radiation. And star-forming regions may be the source of mysterious gamma-ray bursts, the most powerful energy sources in the universe.
Gonzalez says these recent discoveries could make science fiction a little less fun now that plausible settings for advanced civilizations are known to be "somewhat uninteresting looking places ... far from spiral arms, far from the galactic center, and far from star-forming regions," he quips. "Science fiction writers are going to have to be content with putting their civilizations in places that look very much like the Earth."
Every book has a first page and every catalog a first entry. And so this lovely blue cosmic cloud begins the van den Bergh Catalog (vdB) of stars surrounded by reflection nebulae. Interstellar dust clouds reflecting the light of the nearby stars, the nebulae usually appear blue because scattering by the dust grains is more effective at shorter (bluer) wavelengths. The same type of scattering gives planet Earth its blue daytime skies. Van den Bergh's 1966 list contains a total of 158 entries more easily visible from the northern hemisphere, including bright Pleiades cluster stars and other popular targets for astroimagers. Less than 5 light-years across, vdB 1 lies about 1,600 light-years distant in the constellation Cassiopeia. Also on this scene, two intriguing nebulae at the right show loops and outflow features associated with the energetic process of star formation. Within are extremely young variable stars V633 Cas (top) and V376 Cas.

Image credit: Mt. Lemmon SkyCenter, University of Arizona
If you drop a hammer and a feather together, which reaches the ground first? On the Earth, it's the hammer, but is the reason only because of air resistance? Scientists even before Galileo pondered and tested this simple experiment and felt that without air resistance, all objects would fall the same way. Galileo tested this principle himself and noted that two heavy balls of different masses reached the ground simultaneously, although many historians are skeptical that he did this experiment from Italy's Leaning Tower of Pisa as folklore suggests. A good place free of air resistance to test this equivalence principle is Earth's Moon, and so in 1971, Apollo 15 astronaut David Scott dropped both a hammer and a feather together toward the surface of the Moon. Sure enough, just as scientists including Galileo and Einstein would have predicted, they reached the lunar surface at the same time. The demonstrated equivalence principle states that the acceleration an object feels due to gravity does not depend on its mass, density, composition, color, shape, or anything else. The equivalence principle is so important to modern physics that its depth and reach are still being tested even today.

Credit: Apollo 15 Crew, NASA
Full name: Faraday constant
Plural form: faradays
Category type: electric charge
Scale factor: 96485.3399
The SI derived unit for electric charge is the coulomb.
1 coulomb is equal to 1.03642688209E-5 faradays.
Valid units must be of the electric charge type.
In physics and chemistry, the Faraday constant (named after Michael Faraday) is the magnitude of electric charge per mole of electrons. While most uses of the Faraday constant, denoted F, have been replaced by the standard SI unit, the coulomb, the Faraday is still widely used in calculations in electrochemistry. | <urn:uuid:0b88e418-f6fc-4b06-9408-67449b6631d4> | 2.96875 | 168 | Structured Data | Science & Tech. | 45.207522 |
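As a quick illustration (added here, not part of the original page), the scale factor above converts directly between coulombs and faradays, i.e., moles of electrons:

```python
FARADAY = 96485.3399   # coulombs per mole of electrons (the page's scale factor)

def coulombs_to_faradays(q):
    return q / FARADAY

def faradays_to_coulombs(n):
    return n * FARADAY

print(coulombs_to_faradays(1.0))   # ~1.0364e-05, matching the figure above
print(faradays_to_coulombs(0.5))   # ~48242.67 C for half a mole of electrons
```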
radical, in mathematics, symbol (√) placed over a number or expression, called the radicand, to indicate a root of the radicand. When used without a sign or index number, it designates the positive square root of the radicand, e.g., √4 = 2. If both square roots are meant, the radical sign is preceded by ±. To indicate higher roots of the radicand, e.g., cube or fourth roots, an index number is used. The radical sign is generally taken to indicate the principal root of the radicand, although any radicand will have n different nth roots. The term radical is sometimes used loosely to refer to the entire expression consisting of radical sign and radicand.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
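For readers who typeset mathematics, here is a short LaTeX illustration of the notations just described (added here; it is not part of the encyclopedia entry):

```latex
\documentclass{article}
\begin{document}
Positive square root (no sign or index number): $\sqrt{4} = 2$.

Both square roots (radical preceded by $\pm$): $\pm\sqrt{4} = \pm 2$.

Higher roots via an index number: $\sqrt[3]{8} = 2$ and $\sqrt[4]{16} = 2$.
\end{document}
```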
InformATE (Inform Ahora Totalmente en Español -- Inform, Now Totally in Spanish) is a programming language and a design system for interactive fiction in the Spanish language. InformATE is an Inform library, based on Inform v6.30. It was originally created by José Luis Díaz as "Zak McKraken", and is currently maintained by the Spanish Inform community. Games created with InformATE can be compiled with the Inform compiler for both the Z-machine and Glulx.
InformATE not only supports the Spanish language during game play, but also translates Spanish grammar, along with Inform's dictionary, commands, meta-commands, and all of the Z-machine system's messages. In fact, an InformATE piece of code reads (mostly) in Spanish with a few exceptions: some reserved words such as if, else, switch, class, object, with, has and so forth remain in English.
    Object Ropa "ropa nueva"                                ! "new clothes"
        with nombre_f 'ropa',                               ! feminine name
             nombre_fp 'prendas',                           ! feminine plural name
             nombre_mp 'vestidos',                          ! masculine plural name
             adjetivos 'nueva',                             ! adjective: 'new'
             descripcion "Es mi ropa nueva. Es estupenda",  ! "It's my new clothes. It's great"
             antes [;                                       ! "before" rules
                 Desvestir:                                 ! the Undress action
                     if (localizacion ofclass Localidad_Exterior)
                         "¿Y pasar frío? No haré tal cosa...";        ! "And get cold? I'll do no such thing..."
                     else
                         "¿Y si me ve alguien más? Qué vergüenza..."; ! "And if someone else sees me? How embarrassing..."
             ],
        has femenino prenda;                                ! feminine gender; an item of clothing
As seen on the above example, InformATE allows the author to actually code games in Spanish. However, being not just a translation but a complete rework of the original Inform library, InformATE and Inform are completely incompatible with each other, especially for later versions of either library.
InformATE's main documentation is DocumentatE, a Spanish HTML document largely based on The Inform Designer's Manual (the DM4). It also has a basic tutorial in HTML, La Torre and an incrementally built-up example program, La Casa. Moreover, Grendelkhan published on-line his Taller Creativo, a workshop with basics and practices of adventure craft. The workshop includes an incrementally built-up example program on InformATE. | <urn:uuid:1f3c78d1-3b69-41ef-8d33-45815b2f2e9a> | 2.75 | 486 | Knowledge Article | Software Dev. | 36.377639 |
Pollution of subsurface waters and soils is a common problem across the United States and the world. However, a growing body of evidence suggests that laboratory studies, particularly those involving biological and chemical remediation, do not accurately mimic what occurs in the field. These laboratory studies usually involve removal of sediments and/or groundwater and subjecting these materials to treatments within the lab, followed by an assessment of the likelihood of these treatments to achieve cleanup objectives in the field. Obtaining samples for use in the lab often causes many types of stress to the biota in these samples, which results in shifts in the microbial community. Subsequently, data generated from these altered communities may not be predictive for the field site. A device that performs laboratory-scale experiments in the field is a way to overcome these laboratory shortcomings. INL has designed, fabricated and implemented flow through in situ reactors (FTISRs) to meet this need.
The invention is a direct-push installed, flow-through microcosm, which takes column studies from the lab to the field. In doing so, many lab artifacts disappear, including errors from lab-to-field differences in temperature, microbial communities, dissolved gases, soil disturbances, and lab errors resulting from a lack of continual microbial recruitment from the surrounding formation.
Additionally, the reactors allow mass balance closure in contaminant precipitation studies, as the core may be removed after the investigation is complete. Also unique, the treated core containing the precipitated contaminant may be tested for remobilization by challenging with the appropriate acid or oxidative challenges, in situ.
In situ reactors can be used to optimize full-scale remediation efforts, currently an iterative and expensive task. Further, during installation, in situ reactors cause little disruption to the soil matrix and little or no introduction of exogenous contaminants, yielding exceptional results for examination of microbial flora and the chemical matrix after the core removal.
INL scientists have tested the reactors in the laboratory and field, receiving easy to interpret and usable resultant data. Both the field and laboratory installations have been successful and straightforward, without plugging or reactor damage, including running the reactors for multiple months in scenarios receiving organic carbon amendments.
- Radtke, CW, Blackwelder, DB. 2004. US Patent #6,681,872 “In Situ Reactor”
- Radtke, CW. 2005. In situ microcosm. Ph.D. Dissertation Chapter VI, available at http://etd.lib.ttu.edu/theses/available/etd-04082005-154502/unrestricted/CoreyRadtkeDissertationFINAL.pdf.
- Blackwelder, DB, and Radtke, CW. 2005. Mesoscale Treatability Study Using Field Deployable, Flow-Through Microcosms. in: B.C. Alleman and M.E. Kelley (Conference Chairs), In In Situ and On-Site Bioremediation—Proceedings of the Eighth International In Situ and On-Site Bioremediation Symposium. Baltimore, Maryland. Battelle Press, Columbus, OH. Abstract A-14.
- Blackwelder, DB, and Radtke, CW. 2005. Flow-through in situ reactors as a method for performing laboratory scale investigations within a treatment zone. In The Joint International Symposium for Subsurface Microbiology, Jackson Hole, WY. American Society for Microbiology.
If a 1.5 V battery stores 5.0 kJ of energy, for how many minutes could it sustain a current of 1.2 A?
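One standard way to set this up (a sketch added here, not an official answer key: power is P = VI, run time is t = E/P):

```python
E = 5.0e3           # stored energy in joules (5.0 kJ)
V, I = 1.5, 1.2     # battery voltage (V) and current (A)
t = E / (V * I)     # seconds
print(t / 60)       # ~46.3 minutes
```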
I got around 5.48g... I think it's wrong
When I use this formula do I have to convert torr to atm and mL to L?
If there are 5 liters of oxygen gas (O2) at 300 K and 400 torr, what will it weigh? Answer in units of g. Do I use the Ideal gas law formula for this problem?
How much Na is needed to react with H2O to liberate 179 mL H2 gas at STP? Answer in units of g. Please help me with what formulas to use and what to do.
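A sketch of how the two gas questions above can be set up (added here; it assumes R = 0.08206 L·atm/(mol·K), so pressure must be converted to atm and volume to liters, which answers the torr/mL conversion question, plus a 22.4 L/mol molar volume at STP):

```python
# O2 question: n = PV/(RT), then mass = n * molar mass.
R = 0.08206          # L*atm/(mol*K)
P = 400 / 760        # torr -> atm
V, T = 5.0, 300.0    # liters, kelvin
n_O2 = P * V / (R * T)
print(n_O2 * 32.00)  # ~3.4 g of O2

# Na question: 2 Na + 2 H2O -> 2 NaOH + H2, so n(Na) = 2 * n(H2).
n_H2 = 0.179 / 22.4        # 179 mL = 0.179 L of H2 at STP
print(2 * n_H2 * 22.99)    # ~0.37 g of Na
```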
Work it out: how many joules of heat are given out when a piece of iron of mass 50 g and specific heat capacity 460 J kg⁻¹ K⁻¹ cools from 80 °C to 20 °C?
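Setting up Q = mcΔT with the numbers above (a sketch added here):

```python
m = 0.050          # kg (50 g)
c = 460.0          # J/(kg*K)
dT = 80 - 20       # a 1 degC change equals a 1 K change
print(m * c * dT)  # 1380 J given out
```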
Newton's law of cooling and cooling collision.

Write on Newton's law of cooling and cooling collision.
This module provides an interface to the mechanisms used to implement the import statement. It defines the following constants and functions:
Return the magic string value used to recognize byte-compiled code files (.pyc files). (This value may be different for each Python version.)
Return a list of 3-element tuples, each describing a particular type of module. Each triple has the form (suffix, mode, type), where suffix is a string to be appended to the module name to form the filename to search for, mode is the mode string to pass to the built-in open() function to open the file (this can be 'r' for text files or 'rb' for binary files), and type is the file type, which has one of the values PY_SOURCE, PY_COMPILED, or C_EXTENSION, described below.
Try to find the module name on the search path path. If path is a list of directory names, each directory is searched for files with any of the suffixes returned by get_suffixes() above. Invalid names in the list are silently ignored (but all list items must be strings). If path is omitted or None, the list of directory names given by sys.path is searched, but first it searches a few special places: it tries to find a built-in module with the given name (C_BUILTIN), then a frozen module (PY_FROZEN), and on some systems some other places are looked in as well (on Windows, it looks in the registry which may point to a specific file).
If search is successful, the return value is a 3-element tuple (file, pathname, description):
file is an open file object positioned at the beginning, pathname is the pathname of the file found, and description is a 3-element tuple as contained in the list returned by get_suffixes() describing the kind of module found.
If the module does not live in a file, the returned file is None, pathname is the empty string, and the description tuple contains empty strings for its suffix and mode; the module type is indicated as given in parentheses above. If the search is unsuccessful, ImportError is raised. Other exceptions indicate problems with the arguments or environment.
If the module is a package, file is None, pathname is the package path and the last item in the description tuple is PKG_DIRECTORY.
This function does not handle hierarchical module names (names containing dots). In order to find P.M, that is, submodule M of package P, use find_module() and load_module() to find and load package P, and then use find_module() with the path argument set to P.__path__. When P itself has a dotted name, apply this recipe recursively.
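A hedged sketch of that recipe (added here; it is simplified in that it registers each component under its unqualified name and omits error handling):

```python
import imp

def find_dotted(name, path=None):
    """Resolve 'pkg.sub.mod' by applying find_module() one level at a time."""
    module = None
    for part in name.split('.'):
        fp, pathname, description = imp.find_module(part, path)
        try:
            module = imp.load_module(part, fp, pathname, description)
        finally:
            if fp:
                fp.close()
        path = getattr(module, '__path__', None)  # descend into the package
    return module
```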
imp.load_module(name, file, pathname, description)
Load a module that was previously found by find_module() (or by an otherwise conducted search yielding compatible results). This function does more than importing the module: if the module was already imported, it is equivalent to a reload()! The name argument indicates the full module name (including the package name, if this is a submodule of a package). The file argument is an open file, and pathname is the corresponding file name; these can be None and '', respectively, when the module is a package or not being loaded from a file. The description argument is a tuple, as would be returned by get_suffixes(), describing what kind of module must be loaded.
If the load is successful, the return value is the module object; otherwise, an exception (usually ImportError) is raised.
Important: the caller is responsible for closing the file argument, if it was not None, even when an exception is raised. This is best done using a try ... finally statement.
Return a new empty module object called name. This object is not inserted in sys.modules.
Return True if the import lock is currently held, else False. On platforms without threads, always return False.
On platforms with threads, a thread executing an import holds an internal lock until the import is complete. This lock blocks other threads from doing an import until the original import completes, which in turn prevents other threads from seeing incomplete module objects constructed by the original thread while in the process of completing its import (and the imports, if any, triggered by that).
Acquire the interpreter’s import lock for the current thread. This lock should be used by import hooks to ensure thread-safety when importing modules. On platforms without threads, this function does nothing.
Once a thread has acquired the import lock, the same thread may acquire it again without blocking; the thread must release it once for each time it has acquired it.
New in version 2.3.
Release the interpreter’s import lock. On platforms without threads, this function does nothing.
New in version 2.3.
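A minimal usage sketch (added here) of the lock guarding a find/load sequence, releasing both the lock and the file even if an exception is raised:

```python
import imp

fp = None
imp.acquire_lock()
try:
    fp, pathname, description = imp.find_module('email')
    mod = imp.load_module('email', fp, pathname, description)
finally:
    if fp:
        fp.close()
    imp.release_lock()
```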
The following constants with integer values, defined in this module, are used to indicate the search result of find_module().
The module was found as a source file.
The module was found as a compiled code object file.
The module was found as dynamically loadable shared library.
The module was found as a package directory.
The module was found as a built-in module.
The module was found as a frozen module (see init_frozen()).
The following constant and functions are obsolete; their functionality is available through find_module() or load_module(). They are kept around for backward compatibility:
Initialize the built-in module called name and return its module object along with storing it in sys.modules. If the module was already initialized, it will be initialized again. Re-initialization involves the copying of the built-in module's __dict__ from the cached module over the module's entry in sys.modules. If there is no built-in module called name, None is returned.
Initialize the frozen module called name and return its module object. If the module was already initialized, it will be initialized again. If there is no frozen module called name, None is returned. (Frozen modules are modules written in Python whose compiled byte-code object is incorporated into a custom-built Python interpreter by Python’s freeze utility. See Tools/freeze/ for now.)
Return 1 if there is a built-in module called name which can be initialized again. Return -1 if there is a built-in module called name which cannot be initialized again (see init_builtin()). Return 0 if there is no built-in module called name.
Return True if there is a frozen module (see init_frozen()) called name, or False if there is no such module.
imp.load_compiled(name, pathname[, file])
Load and initialize a module implemented as a byte-compiled code file and return its module object. If the module was already initialized, it will be initialized again. The name argument is used to create or access a module object. The pathname argument points to the byte-compiled code file. The file argument is the byte-compiled code file, open for reading in binary mode, from the beginning. It must currently be a real file object, not a user-defined class emulating a file.
imp.load_dynamic(name, pathname[, file])
Load and initialize a module implemented as a dynamically loadable shared library and return its module object. If the module was already initialized, it will be initialized again. Re-initialization involves copying the __dict__ attribute of the cached instance of the module over the value used in the module cached in sys.modules. The pathname argument must point to the shared library. The name argument is used to construct the name of the initialization function: an external C function called initname() in the shared library is called. The optional file argument is ignored. (Note: using shared libraries is highly system dependent, and not all systems support it.)
imp.load_source(name, pathname[, file])
Load and initialize a module implemented as a Python source file and return its module object. If the module was already initialized, it will be initialized again. The name argument is used to create or access a module object. The pathname argument points to the source file. The file argument is the source file, open for reading as text, from the beginning. It must currently be a real file object, not a user-defined class emulating a file. Note that if a properly matching byte-compiled file (with suffix .pyc or .pyo) exists, it will be used instead of parsing the given source file.
class imp.NullImporter(path_string)
The NullImporter type is a PEP 302 import hook that handles non-directory path strings by failing to find any modules. Calling this type with an existing directory or empty string raises ImportError. Otherwise, a NullImporter instance is returned.
Python adds instances of this type to sys.path_importer_cache for any path entries that are not directories and are not handled by any other path hooks on sys.path_hooks. Instances have only one method:
find_module(fullname[, path])

This method always returns None, indicating that the requested module could not be found.
New in version 2.5.
The following function emulates what was the standard import statement up to Python 1.4 (no hierarchical module names). (This implementation wouldn’t work in that version, since find_module() has been extended and load_module() has been added in 1.4.)
import imp
import sys

def __import__(name, globals=None, locals=None, fromlist=None):
    # Fast path: see if the module has already been imported.
    try:
        return sys.modules[name]
    except KeyError:
        pass

    # If any of the following calls raises an exception,
    # there's a problem we can't handle -- let the caller handle it.

    fp, pathname, description = imp.find_module(name)

    try:
        return imp.load_module(name, fp, pathname, description)
    finally:
        # Since we may exit via an exception, close fp explicitly.
        if fp:
            fp.close()
A more complete example that implements hierarchical module names and includes a reload() function can be found in the module knee. The knee module can be found in Demo/imputil/ in the Python source distribution. | <urn:uuid:89c60590-1408-4e00-81b2-bc5c4d3d755c> | 2.71875 | 2,233 | Documentation | Software Dev. | 44.016073 |
Joined: 16 Mar 2004
Posted: Wed Apr 02, 2008 3:29 pm    Post subject: Tiny tweezers and yeast help show how cancer drug works
10 July 2007 EurekaAlert
Tiny tweezers and yeast help St. Jude show how cancer drug works
Researchers show that topotecan poisoning of topoisomerase causes cell death by forcing the accumulation of supercoils in DNA that trigger cell suicide.
The annoying bulges of an over-wound telephone cord that shorten its reach and limit a caller’s motion help to explain why drugs called camptothecins are so effective in killing cancer cells, according to investigators at St. Jude Children's Research Hospital and Delft University of Technology.
Using a type of nanotechnology called magnetic tweezers as well as yeast cells, investigators showed that a camptothecin drug called topotecan kills cancer cells by preventing an enzyme, called DNA topoisomerase I, from uncoiling double-stranded DNA in those cells. Instead, the DNA becomes locked in tight twists, called supercoils, which bulge out from the side of the over-wound DNA molecule—much like the bulges in an over-wound telephone cord. If these supercoils accumulate and persist while the cell is trying to separate the two strands of DNA to make exact copies of the chromosomes during cell division, the cells will die.
Nanotechnology studies work at a scale of about 100 nanometers or less. For comparison, one nanometer is approximately 10 times the size of an atom; and 10 nanometers is one-thousandth of the diameter of a human hair.
In this first-of-its-kind study, researchers used the microscopic magnetic tweezers to monitor changes in the length of an individual DNA molecule caused by the action of a single topoisomerase I enzyme; and to study how the binding of a single topotecan molecule to this enzyme-DNA complex alters DNA uncoiling. Based on the results of those studies, scientists developed the supercoil theory to explain the drug’s ability to kill cancer cells, and then tested that theory in yeast cells. Their conclusion—that accumulation of DNA supercoiling kills the cells—provides a novel model for how topotecan works; and it provides insights into the drug’s action that could help scientists in the clinical development of these agents. A report on this work appears in the advanced, online issue of “Nature.”
“This is the first time that the tools of nanotechnology have helped scientists to develop a biological hypothesis that was subsequently tested by follow-up experiments in a living organism,” said Mary-Ann Bjornsti, Ph.D., a member of the St. Jude Department of Molecular Pharmacology.
Delft University nanotechnology researchers in the laboratory of Nynke Dekker developed the magnetic tweezers for studies in biophysics and adapted the technique to the current study on the effect of topotecan on topoisomerase I in cooperation with Bjornsti, a co-author of the “Nature” report.
DNA is a double-stranded molecule resembling a flexible ladder. The sides of the ladder are backbones that hold half of each rung of the ladder. The entire molecule is twisted, somewhat like a flexible telephone cord.
Before cell division, a molecular machine unzips double-stranded DNA by slicing through the rungs of the ladder, separating the two strands into a wishbone-like shape called the “replication fork,” Bjornsti said. The separation of these strands is a critical step in the duplication of a cell’s chromosomes, which must occur before a cell divides. However, this also increases tension in the DNA ahead of the fork, causing it to buckle into supercoils.
To allow the replication fork to keep unraveling the double-stranded DNA, the cell uses the topoisomerase I enzyme, which makes a temporary nick in the backbone of one of the two strands of super-coiled DNA. This allows the DNA strands to uncoil, which removes the supercoils so the replication fork can continue separating the two strands and synthesize exact copies of each chromosome. Topotecan exploits the binding of topoisomerase I to double-stranded DNA that occurs when the cell tries to separate these strands.
Researchers already knew that topotecan “poisons” topoisomerase by binding to both the enzyme and to the nicked, single strand of DNA. This traps topoisomerase in place, turning the topotecan-topoisomerase-DNA complex into a roadblock that prevents the replication fork from advancing.
“Until now conventional wisdom was that topotecan kills cancer cells simply because the replication fork collided with the trapped topoisomerase,” Bjornsti said. “Our study suggests that the positive supercoiling that accumulates ahead of the replication fork contributes to cell killing.”
The researchers made their discovery using the magnetic tweezers technique to attach one end of a double-stranded DNA molecule to a magnetic bead while securing the other end to a glass surface. They then rotated a tiny magnet over the top of the magnetic bead, which in turn rotated the bead holding the DNA, twisting the DNA into supercoils and shortening it to about one-seventh its original length.
When the team added topoisomerase to the DNA, the strand uncoiled to its original length within a few seconds. This suggested that the enzyme had nicked the supercoils, relieving tension and allowing the DNA to expand to its previous length. But in the presence of topotecan, the rate of DNA uncoiling due to topoisomerase was reduced 20-fold compared to uncoiling without topotecan. However, the surprising finding was that drug binding slowed topoisomerase uncoiling of overwound DNA (positive supercoils) more than the rewinding of underwound DNA (negative supercoils).
“Our finding that topotecan preferentially slows the uncoiling of overwound or positively supercoiled DNA for such a long period of time, suggested that DNA supercoiling actually prevented the replication fork from advancing, which triggered cell death,” Bjornsti said. “We decided to test this model by studying the effect of camptothecin on DNA in yeast cells during the process of gene expression.” During gene expression, the DNA strands are separated so the cell can copy the genetic information into RNA—a process called transcription. RNA is a modified form of the gene that the cell uses to make the protein for which the gene codes.
Bjornsti’s team inserted rings of double-stranded DNA called plasmids into yeast cells to create a model for studying camptothecin’s effect. Topotecan is an analog or related drug of camptothecin. As in DNA replication, gene expression requires the unwinding of the DNA strands. However, instead of duplicating DNA, transcription machinery makes an RNA message, which is then “translated” into proteins. With gene transcription, the unwinding of DNA produces positive supercoils in front of the transcription machinery, while negative supercoils form behind it.
When the investigators added topoisomerase, the positive and negative supercoils disappeared at about the same rate, apparently because the removal of positive supercoils was balanced by a similar reduction in negative supercoiling. When scientists added the drug camptothecin, the positive supercoils were removed more slowly by topoisomerase I than the negative supercoils. This was strong evidence that camptothecin (or topotecan) poisoning of topoisomerase I preferentially triggers the accumulation of positive supercoiling ahead of complexes that unwind DNA, such as the transcription machinery or replication forks.
However, camptothecin did not cause the accumulation of positive supercoils in yeast cells expressing a topoisomerase that was resistant to the drug. This was further evidence that camptothecins, such as topotecan, kill cells by preventing topoisomerase from uncoiling positive supercoils.
Story posted: 10th July 2007 | <urn:uuid:a94766b2-7950-45e8-a6ae-b080e7e22350> | 3.0625 | 1,758 | Comment Section | Science & Tech. | 26.45857 |
AS WE flew over the Pacific Northwest of the US, I reflected on the contrast between the views before me. Ahead was that engineering triumph, a flexing wing of a jet. But beneath me the chequerboard of recently cleared forest was a dramatic reminder that, despite our technological advances, humans are still raiding nature's larder. We depend on nature to provide the raw material for such everyday things as planks and paper. Our challenge is to reconcile human demands with shrinking forests.
One response to this challenge is the concept of ecologically sustainable forest management, the attempt to harvest products without diminishing the capacity of the forests to sustain wildlife, yield clean water and protect the soils in which they grow.
Although the broadening of foresters' intellectual horizons is to be welcomed, the world-weary among us may still ask what will actually change. Cynics may claim that the new ecologically ...
| <urn:uuid:8c30c9e2-816b-4c1d-9473-aa1dc73c2c87> | 2.96875 | 211 | Truncated | Science & Tech. | 42.761085 |
Everyone has heard of Darwin -- he gets most of the credit for the theory of evolution. And while he deserves to be recognized for his contributions, of course, he couldn't have gotten there on his own.
In this segment, we'll talk with biologist Sean Carroll about the flora, fauna, fossils, and scientists that, over the years, have helped to prove Darwin was right. In his new book, "Remarkable Creatures," he writes about the scientists and adventurers, both formally trained and self-taught, who inspired Darwin and helped provide support for his ideas. From Alexander Van Humboldt, to Alfred Russell Wallace, and Roy Chapman Andrews, they're names that you may not have heard, but whose contributions to science were enormous.
We'll also talk with naturalist Bruce Means about his recent discovery of a new family of frogs in Guyana, and about his work on this continent looking at the snakes and frogs of the Florida Panhandle. How many fantastic species are yet to be discovered in remote jungles -- and in the weeds behind your garden? We're broadcasting this week from the campus of Florida State University in Tallahassee, as part of the Origins '09 Symposium.
Produced by Annette Heist, Senior Producer | <urn:uuid:0141afc1-458f-4d9c-b94b-42621f9e6083> | 2.6875 | 256 | Truncated | Science & Tech. | 41.338796 |
Web edition: October 31, 2012
She was born, like all hurricanes, as a faintly inauspicious stirring of winds. But she didn’t come from off the coast of Africa, as many tropical Atlantic storms do. She was a child of the Caribbean.
On the evening of October 19, a trough of low air pressure drifted slowly in the Caribbean Sea, east of Costa Rica and south of Haiti. The U.S. National Hurricane Center gave this tropical wave only a 20 percent chance of strengthening into a named storm.
But the surrounding atmosphere was full of water vapor, possibly thanks to the recently departed Hurricane Rafael. Weak winds on either side of the trough began slowly to pull it into a counterclockwise rotation.
Over the next few days, thunderstorms sucked up heat energy from the warm Caribbean waters and became better organized. Surface air pressures began to drop. The storm coalesced. On October 22, Tropical Depression Eighteen was officially born.
Soon the storm gathered enough strength to become Tropical Storm and then Hurricane Sandy. It shuffled north through Jamaica, Cuba and the Bahamas. Its heavy rains left at least 69 dead across the Caribbean — mostly in Haiti, which it did not even hit directly.
Then arrived a combination of meteorological factors popularly known as the perfect storm, a set of conditions guaranteed to deliver devastation to the U.S. Northeast. Down from the north and west came a low-pressure system that dug in its heels farther south across the continent than such systems normally do. Out to the east, over the Atlantic, a ridge of high pressure nestled down. Between them, these two systems blocked the usual west-to-east flow of the jet stream.
Up from the south came Sandy. The jet stream could not push her out harmlessly to the east as it normally would. Instead, the ridge system merged with tropical Sandy to create a rare hybrid storm.
“One gave you the circulation, and the other gave you a lot more warm ocean water,” says Shuyi Chen, a hurricane modeler at the University of Miami. “To me the interesting thing is all the multiple players that are involved and the odds that are required for them all to be in the same place at the same time.”
That, in a nutshell, is the lesson of Sandy: She showed how rare meteorological conditions can combine just so to generate a once-in-a-lifetime superstorm. To better predict any future Sandys, scientists are working to understand exactly how she was born, grew and died.
Tropical storms often develop in the Caribbean in October, Chen says, but most fizzle out long before making landfall. To probe why Sandy took a different path, researchers have run sophisticated computer simulations and even taken to the skies themselves.
Some computer models fared better than others. The European Center for Medium-Range Weather Forecasts predicted Sandy’s sharp westward turn days before the U.S. equivalent did. The European model uses higher-resolution data, farther in advance, than the leading American model does.
To help improve such models, “hurricane hunter” teams flew planes into the developing storm. A group from the National Oceanic and Atmospheric Administration uses turboprops to collect Doppler radar data on a storm’s structure. “It’s kind of like a CT scan of the inside of the storm,” says Robert Rogers, a meteorologist with NOAA’s hurricane research division in Miami.
Seven turboprop flights into Sandy showed how the storm weakened at first as it ran into wind shear — differing wind speeds at different altitudes — as well as dry air flowing off the continent and from behind the blocking ridge. But then Sandy passed over the warm waters of the Gulf Stream and got another boost as she interacted with the blocking ridge, which seems to have forced her rotating winds back into vertical alignment, Rogers says.
Once Sandy moved a bit farther north, NOAA sent out a Gulfstream-IV jet to explore the merger with the blocking ridge. Instead of drawing energy from warm sea surface temperatures as hurricanes do, Sandy took on characteristics of what’s called an extratropical cyclone — powered by temperature contrasts in the atmosphere. Scientists on the Gulfstream flights measured a jet of strong winds on Sandy’s south side that is often seen in these cyclones as they get stronger, Rogers says.
Overhead, satellites were gathering their own views of the storm’s astounding power. NASA’s Tropical Rainfall Measurement Mission spotted Sandy dumping more than 2 inches of rain an hour into the waters off Florida. Satellite images also showed how the storm’s eyewall — the bands of clouds immediately surrounding its eye — recovered after partially breaking apart north of Cuba.
“Sandy just didn’t have the energy to support a very strong storm in its inner core,” Chen says. Because of that, Sandy never got above Category 2, the second-lowest hurricane classification possible based on the storm’s wind speed.
Even so, the records she set are astonishing. When Sandy lurched ashore near Atlantic City, N.J., on the evening of October 29, the barometric pressure was 946 millibars, tying the 1938 Long Island hurricane for lowest recorded landfall pressure north of Cape Hatteras, N.C. The storm engulfed nearly the entire eastern third of the United States, some 1.8 million square miles. (Though neither is a global record: 1979’s Super Typhoon Tip recorded pressures of 870 millibars as it spun in the Pacific Ocean. Tip could have covered nearly the entire western half of the United States.)
Whether Sandy is a harbinger of future storms remains to be seen. Climate scientists have calculated that globally rising temperatures could bring more intense Atlantic hurricanes in the future, and one recent paper in the Proceedings of the National Academy of Sciences uses storm-surge records along the Atlantic coast to argue that large surges have become more common since 1923. Other work has suggested that losing Arctic sea ice, which has been declining in recent decades, could lead to the presence of more blocking ridges in the Atlantic like the one that steered Sandy into New Jersey. But the link between climate change and hurricanes remains murky, and the link between climate change and any individual storm is impossible to draw.
For now, as the East Coast mops up from this legendary superstorm, scientists are focusing on how Sandy might help them improve future hurricane predictions. One big lesson may be that researchers need to look at the broader environments in which hurricanes form, Chen says. “If you can’t forecast the very broad area — the trough to the west, the high pressure to the east, and Sandy itself,” she says, “you can’t really get the whole picture right.”
| <urn:uuid:3e26ddf3-7913-4543-a364-2adef4c51eb3> | 3.21875 | 1,518 | Knowledge Article | Science & Tech. | 53.311641 |
Homo sapiens may not have been responsible for the five distinct spasms of extinctions in geological time that began an estimated 440 million years ago, but humans are centrally implicated in the ongoing sixth wave of severe biodiversity loss. The Convention on Biological Diversity (CBD) was drafted in 1992 to stem the decline. It entered into force a year later with the avowed aim of significantly reducing loss of species and even using them where compatible to alleviate poverty. But nearly two decades later, the treaty has largely failed to meet its targets. There is now another opportunity available to make it work. The parties to the CBD are holding their 10th conference in the Japanese city of Nagoya and with sufficient political will they can reverse the tide of species losses. The member-countries have done well to acknowledge the all-round disappointment that their renewed commitment made in 2002 to reduce biodiversity loss remains a dead letter. They are now challenged to deliver on their assurances and act more intelligently on climate change, habitat loss and degradation, excessive exploitation, spread of invasive alien species, and pollution, all of which affect plant and animal survival. What provides some hope is the persistence of a large amount of biological diversity.
The key to conservation is to recognise the role of nature in providing ecosystem goods such as fodder, fibre, genetic resources, fresh water, and services such as cleansing of air, nutrient flow, erosion prevention, flood control, pollination, and disease regulation. That this economic dimension of nature is being increasingly accepted the world over is heartening. At the Nagoya conference, the Group of 77 and China have made the forward-looking suggestion that countries of the South should forge closer cooperation to protect biodiversity, and use financial resources available from developed-country partners. In particular, fast-developing China’s focus on protecting 35 priority conservation areas making up 23 per cent of the country is extremely promising. India is also focussed on growth, but it needs to do more for ecosystems facing the onslaught of poorly planned development. It must begin by showing genuine recognition of nature’s value. National development policy cannot afford to ignore the central role played by biodiversity. At the global level, the CBD has the opportunity once again to arrive at a consensus on sustainable use of plant diversity. Such an agreement will help local communities access and benefit from use of invaluable genetic resources. The ethical imperative to save the world’s species is to restrict consumption of all natural resources to a sustainable level and allow for natural renewal. | <urn:uuid:7ece3f79-d620-45a8-bdac-0f5bc65d8b0c> | 3.359375 | 504 | Nonfiction Writing | Science & Tech. | 23.343049 |
We removed non-native fish from a section of the river and the endangered native species humpback chub increased in abundance. But it is not yet clear that decreased competition explains the rebound in population.
Using genetic analysis of organic material found in aquatic environments it is possible to detect the presence of organisms without necessarily observing or capturing individuals. Explains terms, methods, and prospective utility of this approach.
Buffelgrass (Pennisetum ciliare) poses a problem in the deserts of the United States, growing in dense stands and introducing a wildfire risk in an ecosystem not adapted to fire. This report explains what we are doing to help mitigate its effects.
Describes the Conte Anadromous Fish Laboratory of the Leetown Science Center, which performs research directed towards restoration and protection of anadromous fish with lists of research projects and facilities.
The Fish Health Branch, Leetown Science Center, investigates fish health and disease issues associated with genetics, pathogens and environmental stress. With links to workshops, leaflets, and announcements relating to fish health.
Report describes an electronic database of annotated citations relevant to fish passage through dams. Document may be searched using the search form or downloaded as an Endnote, Microsoft Word, or WordPerfect file. | <urn:uuid:22715960-9d45-4c19-8c69-40e25d325b19> | 3.09375 | 255 | Content Listing | Science & Tech. | 20.600664 |
Natural shoreline mechanisms
Uprush and backwash
The transport of sediment across the beach face is performed by wave uprush and backwash. The uprush moves sand onshore while the backwash transports it offshore.
The wave motion also interacts with the beach groundwater flow. Seawater may infiltrate into the sand at the upper part of the beach (around the shoreline) during swash wave motion if the beach groundwater table is relatively low. In contrast, groundwater exfiltration may occur across the beach with a high water table. Such interactions have a considerable impact on the sediment transport in the swash zone.
Three mechanisms related to the uprush and backwash processes (see definitions of coastal terms) are relevant with respect to beach drainage. These mechanisms directly affect the resulting sediment transport. Given a certain groundwater table in the beach profile and consider a situation without active beach drainage:
- during uprush: sediment stabilisation and boundary layer thinning due to infiltration of water; the mass of water which has to return to sea diminishes; sediment particles are transported in landward direction,
- during backwash
- less water returns to sea; however, still rather high velocities due to gravity effects; sediment particles are transported in seaward direction,
- destabilisation and boundary layer thickening due to exfiltration of groundwater.
Under accretive (wave) conditions the landward directed sediment transport processes apparently outweigh the seaward directed processes. Under erosive (wave) conditions it is the other way around.
Working and application of beach drainage
Active beach drainage
When an active drainage system is installed under the beach face and parallel to the coastline, the aforementioned mechanisms will alter:
- during uprush: seawater infiltration under an artificially lowered water table was found to be enhanced, but the landward transport of particles hardly changes,
- during backwash:
- less water returns to sea (smaller transport of particles in seaward direction),
- groundwater ex filtration is reduced.
Consequently it is expected that an artificial lowering of the groundwater table, with a drainage system, changes the coastal processes. In case of accretive conditions an increase of the accretion is expected. In case of erosive conditions a decrease of the beach erosion results. The above conclusion is confirmed by field and laboratory measurements. Figure 1 illustrates the lowering of the groundwater level due to active beach drainage.
The pipes of a beach drainage system are buried in the beach parallel to the coastline and drain the seawater away to a collector sump and pumping station. The collected seawater may be discharged back to sea but can also be used for various applications (marina oxygenation, desalination plants, swimming pools…).
The system has minimal environmental impact compared with various hard protection methods.
More than 30 beach drainage systems have been installed in Denmark, USA, UK, Japan, Spain, Sweden, France, Italy and Malaysia.
- Definition of beach drainage system
- Protection against coastal erosion
- Soft shoreline protection solutions
- Shore nourishment
- Karambas Th. V. (2003). Modelling of infiltration – exfiltration effects of cross-shore sediment transport in the swash zone, Coastal Engineering Journal, 45, no 1: 63-82.
- Law A. W-K., Lim S-Y, Liu B-Y (2002). A note on transient beach evolution with artificial seepage in the swash zone, Journal of Coastal Research, 18 (2): 379-387.
- Sato M., Fukushima T., Nishi R. Fukunaga M. (1996), On the change of velocity field in nearshore zone due to coastal drain and the consequent beach transformation, Proc. 25th International Conference on Coastal Engineering 1996, ASCE, pp. 2666-2676.
- Sato M., Nishi R., Nakamura K., Sasaki T., (2003). Short-term field experiments on beach transformation under the operation of a coastal drain system, Soft Shore Protection, Kluwer Academic Publishers, pp 171-182.
- Ioannidis D. and Th. V. Karambas (2007): "‘Soft’ shore protection methods: Beach drain system", 10th Int. Conf. on Environmental Science and Technology, CEST2007, Kos Island, GREECE, A-528-535
| <urn:uuid:e81e3e67-18b4-4e11-a0d9-8faec97e7955> | 3.5 | 945 | Knowledge Article | Science & Tech. | 37.602348 |
Just say you were writing a calculator, and when the user clicks 7 it puts 7 in an edit control. Then say the user clicks 8: you want it to put the 8 next to the 7, instead of clearing the 7 and putting an 8.
How is this done? Is there a WM_ or an ES_ for it? Because at the moment I can't find one; I have searched everywhere. The way I'm doing it now is I have a buffer and I store the text from the edit control in the buffer, but this causes more problems further in my program. :( All I want is to append the text to the end...
are you using MFC or Win32?
Regardless of what you're using, I think your goal should be to make it work by:
1) Read the control's text
2) Append the digit using standard string commands
3) Write the new text to the control
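In raw Win32, those three steps might look something like the sketch below, where hEdit is assumed to be the handle of the calculator's edit control and the buffer size is an arbitrary choice:

    #include <windows.h>
    #include <tchar.h>

    // Append one digit to whatever the edit control currently shows.
    void AppendDigit(HWND hEdit, TCHAR digit)
    {
        TCHAR buf[64];
        GetWindowText(hEdit, buf, 64);   // 1) read the control's text
        size_t len = _tcslen(buf);
        if (len + 1 < 64)                // keep room for the terminator
        {
            buf[len] = digit;            // 2) append with plain string handling
            buf[len + 1] = _T('\0');
            SetWindowText(hEdit, buf);   // 3) write the new text back
        }
    }

Another option that skips the read step entirely: move the caret to the end with EM_SETSEL and then send EM_REPLACESEL, which inserts text at the current selection.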
This is how I'd do it. Using OWL/MFC, this is a pretty easy task; I've never tried it in WinAPI but it shouldn't be that hard -- it sounds like you're already doing reading/writing of the control, so it's no harder than your current system. | <urn:uuid:f47de8bf-e064-471b-acad-4e6f9b111904> | 2.953125 | 248 | Comment Section | Software Dev. | 85.127418 |
Powerful space explosion may herald star's death by black hole
A huge, powerful star explosion detonated in deep space last week: an ultra-bright conflagration that has astronomers scratching their heads over exactly how it happened.
The explosion may be the death cry of a star as it was ripped apart by a black hole, scientists said. High-energy radiation continues to brighten and fade from the March 28 blast's location, about 3.8 billion light-years from Earth in the constellation Draco.
Astronomers say they've never witnessed an explosion so bright, long-lasting and variable before, according to NASA officials.
The explosion looks like a gamma-ray burst (the most powerful type of explosion in the universe, which usually marks the destruction of a massive star), but the flaring emissions from these dramatic events never last more than a few hours, researchers said.
"We know of objects in our own galaxy that can produce repeated bursts, but they are thousands to millions of times less powerful than the bursts we are seeing now," said Andrew Fruchter, of the Space Telescope Science Institute in Baltimore, in a statement today (April 7). "This is truly extraordinary." | <urn:uuid:8a81d43b-5eaa-4447-a191-749db357a593> | 2.71875 | 248 | Comment Section | Science & Tech. | 41.995568 |
Hubble Space Telescope
The Hubble Telescope: Star Birth
The Hubble Space Telescope - a joint ESA/NASA project - is a 2.4-meter reflecting telescope which was deployed in low-Earth orbit (600 kilometers) by the crew of the space shuttle Discovery (STS-31) on 25 April 1990. During its years of operation HST has managed to become one of the most important science projects ever. It is a long-term space-based observatory. The observations are carried out in visible, infrared and ultraviolet light.
HST has in many ways revolutionised modern astronomy, being a highly efficient tool for making new discoveries, but also by driving astronomical research in general. HST was designed to take advantage of being above the Earth's disturbing atmosphere, thereby providing astronomers with observations of very high resolution - essentially opening new windows to planets, stars and galaxies. HST was designed as a flagship mission of high standard, and has served to pave the way for other space-based observatories. Hubble Space Telescope is named after Edwin Powell Hubble (1889-1953), who was one of the great pioneers of modern astronomy.
HST is an observatory first dreamt of in the 1940s, designed and built in the 1970s and 80s, and operational only in the 1990s. Since its preliminary inception, HST was designed to be a different type of mission for NASA -- a long-term space-based observatory. To accomplish this goal and protect the spacecraft against instrument and equipment failures, NASA had always planned on regular servicing missions.
HST was designed with modular components so that on subsequent Shuttle missions it could be recovered and have faulty or obsolete parts replaced with new or improved instruments before being re-released into orbit.
HST is as large as a school bus and looks like a five-story tower of stacked silver canisters. Each canister houses important telescope equipment: the focusing mirrors, computers, imaging instruments, and pointing and control mechanisms. Extending from the telescope are solar panels for generating electricity and antennas for communicating with operators on the ground.
Power for the two on-board computers and the scientific instruments is provided by two 2.4 x 12.1 m solar panels. The power generated by the arrays is also used to charge six nickel-hydrogen batteries which provide power to the spacecraft during the roughly 25 minutes per orbit in which HST flies through the Earth's shadow.
The 12-ton telescope collects faint starlight with an 8-foot-diameter mirror. The mirror - tucked inside a long, hollow tube that blocks the glare from the sun, Earth, and moon - is slightly curved to focus and magnify light.
Unlike with ground-based telescopes, astronomers cannot look through Hubble's lens to see the universe. Instead, Hubble's scientific instruments are the astronomers' electronic eyes. The telescope's instruments include cameras and spectrographs. The cameras don't use photographic film, but rather electronic detectors similar to those used in home video cameras. The spectrographs collect data by separating starlight into its rainbow of colors, just as a prism does to sunlight. By closely studying the colors of light from a star, astronomers can decode the star's temperature, motion, composition, and age.
Hubble must maintain a steady position to take long exposures, sometimes hours, of the same subject to produce images of distant or faint objects. Otherwise the images will be blurred. To accomplish this mission, the telescope must battle such celestial elements as air drag, the sun's radiation, and the gravitational pull of objects.
To improve its stability during observations, the telescope uses an elaborate system for attitude control. For Hubble, maintaining proper direction is similar to a sailor fighting the wind and water to keep his sailboat on course. Manoeuvring is performed by reaction wheels and its position in space is monitored by four of six gyros. Pointing maintained in this way is known as 'coarse track mode'. Hubble is successful because of its sophisticated pointing control system, which includes gyroscopes and Fine Guidance Sensors (FGSs), which can be used to lock onto guide stars (fine lock) to reduce spacecraft drift and increase pointing accuracy.
Once the telescope locks onto an object, its sensors check for movement 40 times a second. If movement occurs, the wheels, which are constantly rotating, change speeds to smoothly move the telescope back into position.
Once Hubble gathers pictures and data on celestial objects, its computers turn the information into long strings of numbers that are beamed to Earth as radio signals. This information streams through a series of satellite relays to the Goddard Space Flight Center and then by telephone line to the Space Telescope Science Institute, where the numbers are turned back into pictures and data.
The information collected daily by Hubble is stored on optical computer disks. A single day's worth of observations would fill an encyclopedia. The constantly growing collection of Hubble pictures and data is a unique scientific resource for current and future astronomers. | <urn:uuid:7293063e-3a2a-47cb-8c96-95d4a4f52fb4> | 3.921875 | 1,020 | Knowledge Article | Science & Tech. | 34.336001 |
Operators are used as a means for object composition and embedding. Simple parsers may be composed to form composites through operator overloading, crafted to approximate the syntax of an Extended Backus-Normal Form (EBNF) variant. An expression such as:
a | b
actually yields a new parser type which is a composite of its operands, a and b. Taking this example further, if a and b were of type chlit<>, the result would have the composite type:
alternative<chlit<>, chlit<> >
In general, for any binary operator, it will take its two arguments, parser1 and parser2, and create a new composed parser of the form

    op<parser1, parser2>

where parser1 and parser2 can be arbitrarily complex parsers themselves, with the only limitations being what your compiler imposes.
- a | b (union): Match a or b. Also referred to as alternative
- a & b (intersection): Match a and b
- a - b (difference): Match a but not b. If both match and b's matched text is shorter than a's matched text, a successful match is made
- a ^ b (xor): Match a or b, but not both
Alternative operands are tried one by one on a first come first served basis starting from the leftmost operand. After a successfully matched alternative is found, the parser concludes its search, essentially short-circuiting the search for other potentially viable candidates. This short-circuiting implicitly gives the highest priority to the leftmost alternative.
Short-circuiting is done in the same manner as C or C++'s logical expressions; e.g. if (x < 3 || y < 2) where, if x evaluates to be less than 3, the y < 2 test is not done at all. In addition to providing an implicit priority rule for alternatives which is necessary, given the non-deterministic nature of the Spirit parser compiler, short-circuiting improves the execution time. If the order of your alternatives is logically irrelevant, strive to put the (expected) most common choice first for maximum efficiency.
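As a concrete sketch of why ordering matters (assuming the classic Spirit headers, rule<> and str_p), consider keyword-like alternatives fed the input "integer":

    #include <boost/spirit/core.hpp>  // classic Spirit; path may vary by version
    using namespace boost::spirit;

    rule<> r1 = str_p("in") | str_p("integer");  // matches only "in" -- the first
                                                 // viable alternative short-circuits
    rule<> r2 = str_p("integer") | str_p("in");  // the whole word gets its chance first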
Some researchers assert that the intersections (e.g. a & b) let us define context sensitive languages ("XBNF" [citing Leu-Weiner, 1973]). "The theory of defining a language as the intersection of a finite number of context free languages was developed by Leu and Weiner in 1973".
The complement operator ~ was originally considered. Further understanding of its value and meaning leads us to uncertainty. The basic problem stems from the fact that ~a will yield U-a, where U is the universal set of all strings. However, where it makes sense, some parsers can be complemented (see the primitive character parsers for examples).
- a >> b (sequence): Match a and b in sequence
- a && b (sequential-and): Same as above, match a and b in sequence
- a || b (sequential-or): Match a or b in sequence
The sequencing operator >> can alternatively be thought of as the sequential-and operator. The expression a && b reads as match a and b in sequence. Continuing this logic, we can also have a sequential-or operator where the expression a || b reads as match a or b and in sequence. That is, if both a and b match, it must be in sequence; this is equivalent to a >> !b | b.
Optional and Loops

- *a (Kleene star): Match a zero (0) or more times
- +a (positive): Match a one (1) or more times
- !a (optional): Match a zero (0) or one (1) time
- a % b (list): Match a list of one or more repetitions of a separated by occurrences of b. This is the same as a >> *(b >> a). Note that a must not also match b
If we look more closely, take note that we generalized the optional expression of the form !a in the same category as loops. This is logical, considering that the optional matches the expression following it zero (0) or one (1) time.
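Here is a minimal compilable sketch that combines several of these operators using the classic Spirit API (the include path and the predefined parsers int_p, ch_p and space_p are as in Spirit 1.x; the header location may differ in your Boost version):

    #include <boost/spirit/core.hpp>  // classic Spirit core
    #include <iostream>
    #include <string>

    using namespace boost::spirit;

    int main()
    {
        // int_p % ch_p(',') is shorthand for int_p >> *(ch_p(',') >> int_p)
        std::string input = "1, 2, 3";
        parse_info<> result = parse(input.c_str(), int_p % ch_p(','), space_p);
        std::cout << (result.full ? "full match" : "no/partial match") << '\n';
        return 0;
    }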
Primitive type operands
For binary operators, one of the operands but not both may be a char, wchar_t, char const* or wchar_t const*. Where P is a parser object, here are some examples:
P | 'x'
P - L"Hello World"
'x' >> P
"bebop" >> P
It is important to emphasize that C++ mandates that operators may only be overloaded if at least one argument is a user-defined type. Typically, in an expression involving multiple operators, explicitly typing the leftmost operand as a parser is enough to cause propagation to all the rest of the operands to its right to be regarded as parsers. Examples:
r = 'a' | 'b' | 'c' | 'd';       // ill formed
r = ch_p('a') | 'b' | 'c' | 'd'; // OK
The second case is parsed as follows, with each intermediate composite given a name (a, b, c) and substituted back into the expression for r:

r    (((chlit<char> | char) | char) | char)
a    (chlit<char> | char)
r    (((a) | char) | char)
b    (a | char)
r    (((b)) | char)
c    (b | char)
r    (((c)))
Operator precedence and grouping
Since we are defining our meta-language in C++, we follow C/C++'s operator precedence rules. Grouping expressions inside parentheses overrides this (e.g., *(a | b) reads: match a or b zero (0) or more times).
Copyright © 1998-2003 Joel de Guzman
Use, modification and distribution is subject to the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) | <urn:uuid:36bf1b25-5a1a-43f2-9b1c-6dbdabd38474> | 3.875 | 1,248 | Documentation | Software Dev. | 53.35573 |
Yes, you read that right. Data collected by NOAA reveal that the month of March saw a total of 7,755 daytime and 7,515 nighttime record-breaking high temperatures, making last month far and away "the warmest March on record."
The video featured up top shows the locations of each daytime and nighttime record (or tied record) in sequence, over the course of the month. It's absolutely staggering.
For more information, check out NOAA's State of the Climate Report, which reveals that the first three months of 2012 were also the warmest on record for the contiguous U.S., with an average temperature of 42.0°F — that's 6.0°F above the long-term average, and 1.4 degrees higher than the all-time record. According to the Washington Post, these records are usually broken by just one- or two-tenths of a degree.
Other highlights from the report include:
- Twenty-five states, all east of the Rockies, had their warmest first quarter on record, and an additional 16 states had first-quarter temperatures ranking among their ten warmest.
- Numerous cities had a record warm January-March, including Chicago, Boston, and Washington, D.C. No state in the contiguous U.S. had below-average January-March temperatures.
- Alaska had its ninth coolest January-March period; temperatures were 5.2°F below average.
- The nationally-averaged precipitation total for January-March was 0.29 inches below the long-term average. States across the Pacific Northwest and Southern Plains were wetter than average, while the Intermountain West, parts of the Ohio Valley, and the entire Eastern Seaboard were drier than average.
- NOAA's U.S. Climate Extremes Index, an index that tracks the highest 10 percent and lowest 10 percent of extremes in temperature, precipitation, drought and tropical cyclones, was 39 percent, nearly twice the long-term average and the highest value on record for the January-March period. The predominant factor was the large area experiencing extremes in warm daily maximum and minimum temperatures. | <urn:uuid:39b25f31-7316-48d8-bd09-4a7eb2892d4b> | 2.828125 | 441 | Listicle | Science & Tech. | 54.976095 |
The diagram below has rotational symmetry. How many regular pentagons are there?
Please help to get the solution. Please find attached.
> 5 small pentagons forming the central pentagon.

I don't see those. The central pentagon is formed with quadrilaterals, isn't it? If the question is about rotational symmetry, then perhaps we're supposed to get five pentagons from each of the inner and outer ones by rotations through 360/5 degrees? I don't know, I'm just askin'. If this is true, you'd have 10 pentagons, right? I'm assuming we're not allowed reflections. | <urn:uuid:15767da7-0a57-42f1-a79c-bef58588ca85> | 3.34375 | 132 | Q&A Forum | Science & Tech. | 66.954545 |
Submarine canyons are dominant features of the outer continental shelf and slope of the US East coast from Cape Hatteras to the Gulf of Maine.
The Science of Deep-water
Submarine canyons are dominant features of the outer continental shelf and slope of the US East coast from Cape Hatteras to the Gulf of Maine. There are 13 major canyons in the Middle Atlantic Bight (MAB) region, and minor canyons are abundant. The canyons vary in size, shape, and morphological complexity; some were scoured by the flow of rivers during past low sea level periods, but most formed via other erosional processes, such as mud-slides, debris flows, and turbidity currents.
Target study areas
Cutting deeply into the bottom and linking the shelf to the deep sea, canyons are conduits that funnel anthropogenic pollutants, organic carbon, and sediments from shallow to deeper waters. The most southerly of these canyons (just north of Cape Hatteras) occurs in an extremely dynamic and productive area known as “The Point”. The Point has been characterized as one of the hottest fishing spots on the east coast, apparently fueled by upwelling generated by the collision of several major currents over complex bottom topography. Further north, large canyons (e.g., Norfolk, Baltimore, Washington, Hudson, Lydonia) occur at regular intervals.
The geology of these features has been well-studied; however, despite their well-known biological productivity, biological data are quite limited (particularly deeper than 200 m). While there are studies on fishes at The Point and a few of the canyons to the north, there is very little information on the benthic invertebrate communities of the slope. We know that vulnerable and productive habitats such as deep-sea corals and hydrocarbon seeps occur in and around some of these canyons, yet these habitats are poorly explored. The canyons between Cape Hatteras and Cape Cod are less well-known than those further north, and yet these are the subject of potential oil exploration, intensive fisheries, and are possible National Marine Sanctuary (NMS) candidates. Some of the areas around the Mid-Atlantic canyons, have been designated as Essential Fish Habitat by the Fishery Management Councils.
The middle Atlantic includes some of the most historically significant waters in the US. The approaches to Chesapeake and Delaware Bays have a long history of exploration, warfare, commerce, fishing, and recreation, leaving a rich, but poorly understood, repository of cultural material on the seafloor. This diversity and intensity of human activity along the middle Atlantic region has created an important submerged cultural landscape. The ocean floor is marked by fishing vessels (and their gear), warships, military experiments and ammunition, and the remnants of commercial shipping dating back 400 years. While the area is historically significant and archaeologically sensitive, gaps in our knowledge are extensive and much of the reported information about shipwreck locations is incorrect and/or inaccurate.
These canyon ecosystems are important targets of study for the following reasons:
- Canyons are productivity hotspots and support different communities compared with non-canyon habitats; however, there is very little information available on the biology and ecology of the mid-Atlantic canyons.
- These canyons harbor the only known deep-sea coral communities in the mid-Atlantic region.
- An earlier study indicated the presence of a cold seep community in Baltimore Canyon; this is the only known cold seep north of Cape Hatteras, NC.
- The mid-Atlantic canyons have several sites of cultural historical interest, such as shipwrecks and ancient human population centers.
- The canyons and surrounding shelf areas support many valuable fisheries species and are sites of intense recreational and commercial fishing activity.
- Sensitive fauna within the canyons may be impacted by human activities, such as fishing and energy industry activities.
- There is currently little habitat protection in the study canyons; however, the regional fisheries management councils are discussing establishing protection. Establishment of effective marine protection requires good ecological information.
The Mid-Atlantic Deepwater Canyons project is co-funded by the Bureau of Ocean Energy Management (BOEM) and NOAA’s Office of Ocean Exploration and Research (which provides the ship and ROV for research cruises). The project is managed by Continental Shelf Associates, and includes scientific partners from several academic institutions and the US Geological Survey.
Our first research cruise was in June 2011 using the NOAA ship Nancy Foster; the primary objective of this expedition was to conduct multibeam sonar mapping of major canyons and potential shipwreck sites in the study region. This was a successful cruise, which resulted in nearly 1,400 sq. km of high resolution maps of the seafloor, nine new shipwreck targets, 32 hydrographic profiles to describe the water column environment and a shipboard outreach effort to communicate our scientific findings to the public.
Our next research cruise aboard the NOAA ship Nancy Foster this summer will use the Kraken II ROV (University of Connecticut) to conduct video and photo transects, collect samples of invertebrates and fishes for various biological studies, deploy instruments to collect long-term environmental data and survey several archaeological sites. We will have a strong education and public outreach component to the cruise which will enable the public to follow our progress as we explore these little known ecosystems.
| <urn:uuid:97b2df46-f0b5-49ef-80e7-1e8ce5b0381d> | 3.515625 | 1,159 | Knowledge Article | Science & Tech. | 21.341261 |
Paper airplanes are like real airplanes in their basic physics. Some points:
They should be mildly nose-heavy. (The tail actually presses downward, to counteract the nose-heaviness.) If they are too nose-heavy, they will just arrow into the ground.
If they are tail heavy, they will go up, and then slide backward.
If they are neutral-balanced, they will go up and down with a scalloped motion.
If they are mildly nose-heavy, they will be stable, because if they slow down, the nose will drop, which makes them go faster, thus more lift, which brings the nose back up.
The speed is determined by how much up-elevator you put on the back.
If you put a lot of up-elevator, they will tend to turn up, which slows them down, so they will be stable at a slower speed.
If you put neutral elevator, they will have to be going much faster to bring the nose up, so they will tend to fly faster.
A paper airplane, like any airplane, will always descend unless something is pushing it.
That's because by descending it is using gravity to overcome its drag and keep its speed up.
If you want it to stay up longer, trim the elevator up so that it travels more slowly.
Also, anything you can do to reduce drag will help it stay up.
If you want it to go in a straight line, rather than turn, all you can do is try to balance it left-to-right.
That's a problem with airplanes in general.
There's very little you can do to make them stable in the roll axis.
That's why when pilots wander into clouds, where they can't see a horizon, they can easily get into a spiral, unless they can keep the wings level by trusting their instruments. | <urn:uuid:2b53760c-1d66-4a16-a2dd-4c72e199bcd0> | 3.109375 | 389 | Q&A Forum | Science & Tech. | 69.662821 |
Mixins are one of Ruby's defining features, but often one of the most difficult to understand for those new to Ruby. They're not difficult to understand, but they're not something most programmers have encountered before as most languages don't have mixins. A mixin is a way for code to be shared across multiple classes and is closely related to duck typing.
Duck typing is a technique that allows you to use objects that can respond to a certain set of method calls interchangeably. For example, objects that respond to the method calls rotate, move and draw could all be used as graphics objects in a game program. The game program doesn't care what type these objects really are, just that they can respond to those method calls. After all, if the object walks like a duck and quacks like a duck, it must be a duck.
Mixins can be used to expand the duck typing functionality even further. Continuing with the drawable elements for a game program example, we'll assume that drawable objects must have rotate, move, scale and draw methods. In order to provide a more complete interface, with relative and absolute moves, partial or complete draws, etc, there are two ways to go about it. Since what we're really talking about here is sharing common code to do all this work using the primitive methods described above, you could either use inheritance or mixins.
Inheritance would work just fine. All drawable objects just subclass the Drawable class that has all these methods. However, all your drawable objects must now be subclassed from Drawable or be wrapped in a Drawable class. This type of rigid class hierarchy is very un-Ruby.
Mixins provide a much more Ruby-like solution to this problem. Mixins are a group of methods not yet attached to any class. They exist in a module, and are not useful until included in a class. Once included, the methods in the mixin module are now normal methods of the class.
Continuing with our hypothetical drawable shapes example, here's pseudocode for two shapes. One, a raster image, uses image manipulation libraries to provide scaling and rotation. The other, a geometric polygon, uses geometry to do scaling and rotation. The underlaying mechanism for doing these tasks couldn't be more different.
class Polygon
  include Drawable
  def draw
    # Use geometric line drawing
  end
  def move(newx, newy)
    # Move all points to be centered around newx,newy
  end
end

class Image
  include Drawable
  def draw
    # Use blitting to draw image
  end
  def move(newx, newy)
    # Move image so top left corner is at newx,newy
  end
end
Now, the following mixin will provide a more detailed interface for the game programmer to use. They'll be able to rotate without having to worry about center points, scale along any axis, etc. These are all things that can be implemented using just the primitive methods provided in the drawable classes; the mixin methods have no special access to or knowledge of any class they are included into.
module Drawable
  def slide(deltax, deltay)
    # Use move to implement slide: a relative move built on the including
    # class's absolute move (assuming the class exposes its current x and y).
    move(x + deltax, y + deltay)
  end
end
| <urn:uuid:06873574-ced8-45fb-9182-7f78cee32b70> | 3.578125 | 656 | Knowledge Article | Software Dev. | 41.056783 |
Naomi Ginnever - Photobiology & Solar Radiation
University of St. Andrews, United Kingdom
Does Exposure Time and Different Wavelengths of Light Effect the Photobleaching
of Colored Dissolved Organic Matter (CDOM)?
Colored Dissolved Organic Matter (CDOM) absorbs wavelengths in the ultraviolet (UV) and visible portions of the light spectrum. CDOM affects light availability, aquatic photochemistry, phytoplankton activity, and ocean color. UV light ranges from below 280 nm to 400 nm while Photosynthetically Active Radiation (PAR) is 400-700 nm in range. CDOM reduces the depth to which light penetrates, attenuating the UV and blue regions of the spectrum more than green or red portions. UV affects organisms by damaging their DNA and inhibiting photosynthesis. UV light degrades chromophores in CDOM and hence bleaching occurs. There have been past studies on this topic conducted by Tzortziou et al. and DelVecchio-Blough, and the present study will combine timed sampling with polychromatic light and marsh-derived water. Samples were collected from a weir on the Kirkpatrick marsh, filtered at 0.2 um, and stored in the dark at 4 degrees Celsius. Filtered samples were warmed to room temperature, poured into a cuvette and excited using a xenon lamp. Sub-samples were taken at 4 hour intervals for 40 hours. CDOM absorbance measurements were taken using a spectrophotometer. The conclusions were that absorbance and fluorescence of CDOM exposed to UV decrease over time. Spectrally dependent and non-spectrally dependent photobleaching occurs during exposure and is most pronounced at the 350 nm cutoff. Further detailed analysis of CDOM photobleaching during transport through the Bay would allow for a better picture of this process in natural systems.
Funding provided by the Smithsonian Institution Women’s Committee | <urn:uuid:ced7b4c0-f97d-4200-ab3f-2b910137cff2> | 2.953125 | 396 | Academic Writing | Science & Tech. | 34.031532 |
Don’t they seem so serene sparkling away in the night sky?
It’s a shame but we all know that’s not always been the case.
It is generally thought that stars form in a violent reaction between gas particles when dense parts of molecular clouds collapse from their own gravity (for more information on star formation check out this website - it explains it brilliantly - http://www.universetoday.com/24190/how-does-a-star-form/).
Stars similar to our own sun formed in galactic clusters and are thought to form the centre of these molecular nebulae. During formation the massive young stars give off hot winds to carve bubbles inside of these gigantic clouds. Yes you read that right - BUBBLES!
More and more of these massive bubbles are being discovered thanks to a huge venture involving the public - "The Milky Way Project". The scheme, named after our very own galaxy, comprises over 440,000 images taken for the survey, which aimed to map around 85% of the Milky Way; the images were taken by a camera onboard the Spitzer Space Telescope in association with an analysis called GLIMPSE (Galactic Legacy Infrared Mid-Plane Survey Extraordinaire - good job they came up with an acronym 'cause that's rather a mouthful!). Spitzer's high resolution infrared camera is able to plot the galactic plane in great detail, making it possible to see these amazing galactic bubbles. More information about GLIMPSE and Spitzer can be found here: http://www.astro.wisc.edu/sirtf/
The project is attempting to, among other things, determine exactly what these bubbles actually are. At the moment physicists think that these regions around the young stars are actually a bit like shockwaves which can be seen in infrared light (in the image above credited to NASA and “The Milky Way Project” the red area represents where the ‘shock’ has already passed through and the bright green ring around it is where the ‘shock’ is now in the gas cloud).
By using the “bubble-drawing interface” on their website it is hoped that the general public can lend a hand trying to plot and track down some more of these unusual features in order to aid scientists’ understanding, and perhaps while away a rainy lunch hour or two.
To read more about “The Milky Way Project” visit their website - http://www.milkywayproject.org/ - it looks like a great way to get involved!
17 January 2012 | <urn:uuid:a5d0097e-25c7-4479-883f-60346ccc1b6c> | 3.390625 | 544 | Personal Blog | Science & Tech. | 51.559943 |
My Thanks to Ned Nikolov, who has just sent the first part of the ‘Response to comments on the Unified Theory of Climate’ to us.
Part 1: Magnitude of the Natural ‘Greenhouse’ Effect
Ned Nikolov, Ph.D. and Karl Zeller, Ph.D.
January 17, 2012
We’ve decided to split our expanded explanation into two parts, so that we do not overwhelm people. From what we’ve seen on the blogs so far, there appear to be 2 main areas of confusion: 1) the size of the GH effect. Most people have a hard time wrapping their minds around the fact that the atmosphere boosts the surface temperature by well over 100K; and 2) the physical nature of the pressure-controlled thermal enhancement. Although this follows seamlessly from the gas law, most people (including PhD scientists) appear to be totally confused as to how precisely the effect of pressure works or is even possible. So, this will be the topic of Part 2 of our reply.
(a) The term Greenhouse Effect (GE) is inherently misleading due to the fact that the free atmosphere, imposing no restriction on convective cooling, does not really work as a closed greenhouse.
(b) ATE accurately conveys the physical essence of the phenomenon, which is the temperature boost at the surface due to the presence of atmosphere;
(c) Reasoning in terms of ATE (Atmospheric Thermal Effect) vs. GE (Greenhouse effect) helps broaden the discussion beyond radiative transfer; and
(d) Unlike GE, the term Atmospheric Thermal Effect implies no underlying physical mechanism(s).
We start with the undisputable fact that the atmosphere provides extra warmth to the surface of Earth compared to an airless environment such as on the Moon. This prompts two basic questions:
(1) What is the magnitude of this extra warmth, i.e. the size of ATE? and (2) How does the atmosphere produce it, i.e. what is the physical mechanism of ATE? In this reply we address the first question.
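For context on question (1): the conventional textbook figure comes from comparing the observed mean surface temperature with a gray-body effective temperature computed from the absorbed solar flux. Using standard values (solar constant S of about 1361 W/m2, Bond albedo of about 0.3, sigma the Stefan-Boltzmann constant), the calculation in LaTeX notation is:

T_e = \left( \frac{S(1-\alpha)}{4\sigma} \right)^{1/4} \approx 255\,\mathrm{K}, \qquad \Delta T = T_{\mathrm{surf}} - T_e \approx 288\,\mathrm{K} - 255\,\mathrm{K} = 33\,\mathrm{K}

The pdf argues that this spherical averaging is mathematically inappropriate and that, measured against a proper airless-body baseline such as the Moon, the thermal enhancement is well over 100 K; the 33 K value is quoted here only as the textbook benchmark being challenged.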
The pdf is available here. UTC_Blog_Reply_Part-1
Please try to focus on the content of the pdf in comments to this thread. We can carry on posting our general thoughts about the overall theory and how best to formulate our understanding of the proposed gravity effect on the existing threads – thanks. | <urn:uuid:2ebeba28-a1b1-4bb2-8c81-89d0ab9bcc70> | 2.71875 | 498 | Comment Section | Science & Tech. | 57.058333 |
Bucky Fuller (1895-1983) is widely recognized as one of the
world’s great modern visionaries of the 20th century. He was a
natural Futurist, not because of his intellect, but because of his wisdom in
challenging widely held assumptions from the world around him.
He blended his skills as a writer, thinker, and engineer into a
concept he called “Comprehensive Anticipatory Design Science.”
Bucky believed that the essence of human life on the planet is to
solve problems and continue expanding our awareness and views of
what is possible.
Our best strategy for addressing problems of the 21st century
might be to revisit the core principles of his philosophy related
to design, shape and energy. If the Whitney curators are correct,
Bucky Fuller might turn out to be one of the most influential
thinkers of not one, but two centuries.
Planet Earth is about to get its own version of the Web!
Cisco Systems is partnering with NASA to create a massive online collaborative global monitoring platform called the "Planetary Skin" to capture, collect, analyze and report data on environmental conditions around the world, while also providing researchers social web services for collaboration.
This type of platform is essential for Climate and Ecosystem researchers, but it also might be a sneak peak at the future of the Internet.
'Smart Planet': Age of Sensors & Structured Data
If life in the past few decades has been forever altered by complex microprocessor chips, the next century could see the same social disruption via simple, low-cost networked sensors and 'embedded objects' that mirror our analog world as a digital signal. But making this disconnected data relevant is a challenge.
The 'Planetary Skin' platform [video] will stitch together 'petabytes' of unstructured data collected by sensors (land, sea, air, space) reporting on changing environmental conditions. The platform will also allow for 'streamlining of decision making' and 'collaborative swarming' on analysis of relevant data. The project's first layer, “Rainforest Skin,” will be prototyped during 2009.
Good for NASA, Great for Cisco, and Wonderful for 'Mirror World' Metaverse Enthusiasts
The benefits to NASA and planetary-system researchers are clear. Forget about Facebook; these scientists are looking for a functional digital research simulation 'Mirror World' (as envisioned by David Gelernter).
Meanwhile, Cisco is working diligently to make itself the most relevant web company in the next era of Internet architecture, where collaboration, video, 3D simulations and structured data change the nature of our interactions. 'Planetary Skin' might be Cisco Systems' under-the-radar, yet out-in-the-open, effort at essentially building its own Internet of Tomorrow.
If you’re interested in how a specific future year may shape up,
the Future Scanner offers a
wealth of information to this end. A quick search for the year 2015
revealed a plethora of information about the expected state of
cancer treatment, interfaces, artificial intelligence, robotics,
the environment and much more.
Take a look at the following results for a quick snapshot:
Health of the General Population: Although it
has been predicted that 75% of Americans
will be overweight by then, what and how
we’re eating might be very different from today. Check out
designs from the competition “Dining in 2015” as well as the
potential for elegant designer
fruit that could hit grocery stores by 2015. And though the
future of fruit is exciting, the future of food prices may not be
so, according to this
prediction from the Policy Research Institute that cereal prices will rise
by between 10% and 20% by 2015 due to supplies not matching future demands.
The most exciting prediction regarding health in 2015 is the
likelihood that cancer may be well on its way to being cured.
According to this Future
Blogger post by futuretalk, “Dr.
Andrew von Eschenbach, then director of the National Cancer
Institute, outlined his goal to eliminate suffering and death from
cancer by 2015.”
Gadgets and Gizmos: Lots of exciting
technologies to look forward to in this year. Check out these awesome
laptop prototypes as well as Nokia’s
Nano-phone being developed with the 2015 goal in mind.
“If we can really understand the problem, the answer will come out of it, because the answer is not separate from the problem.” – Jiddu Krishnamurti
“The dogmas of the quiet past are inadequate to the stormy present. The occasion is piled high with difficulty, and we must rise with the occasion. As our case is new, so we must think anew and act anew.” – Abraham Lincoln.
Grand Challenges can be defined as fundamental problems in need of solutions. An Energy Grand Challenge is indeed what its name implies – a competition to be challenged and won in regard to energy use, sustainability, cost, and efficiency.
Multiple teams enter as candidates to reach the goal, whether it is a certain level of fuel efficiency, carbon dioxide removal, or future energy solutions. The winner receives a prize, usually in the form of a generously large sum of money. The Challenge’s impact, however, is not only on the team that wins the grand prize, but also on the technology that springs from the research, which can expand its positive influence to affect the world.
I recently came upon an interesting article about a village in Japan being built entirely out of Styrofoam. The walls of these buildings are pretty thick, but it only takes three people a few hours to assemble one, and a layer of mortar and paint ensures protection from the elements. Here’s a short clip of the actual assembly…
Having grown up in a Bucky Fuller dome structure, I immediately took a liking to this shape. Not only is the dome incredibly strong, but it also uses less material than the average home. But having also been raised by hippies, any mention of the word Styrofoam sends chills down my spine. I agree, it’s a great material for a dome structure in that it insulates well against cold and hot temperatures and, like in the video, is very easy to build with. But there are myriad problems with such a building material.
For instance, the disposal of the houses would be an environmental catastrophe. Also, imagine the toll that 20 years of sun and rain would exert on such a light and easily degraded structure. There’s a reason water is called the Universal Solvent – it can eat through just about anything given enough time. The idea of an entire village, much less a country, having all its Styrofoam houses replaced is staggering (maybe ship them to war-torn countries to be made into napalm?).
According to a June 15 analysis published in the French bi-monthly magazine L’Auto-Journal, a long-standing car magazine, the European Union will soon no longer be on the short list of the top 3 contributors of greenhouse gases. The French-originated NAC (Nouvelle Affaire de Carburant) program, widely known as the New Fuel Deal by the English-speaking world, was initially criticized by citizens of nearly every European nation for being an economic fiasco.
The brainchild of French President Nicolas Sarkozy, who served a six month stint as EU president, has certainly paid off for the environment, despite the widespread criticism and dire predictions. The Affaire was created by the members of the EU’s French-led APRE Summit (Automobile-fabricants pour la Protection et la Régénération de l’Environment, or ACRE – Auto-makers for the Conservation and Regeneration of the Environment) in 2011, which formed an impressive international think-tank consisting of automobile manufacturers, leaders in the alternative fuel industry, financial wizards and various government officials. Despite initial opposition from such countries as the Czech Republic and Ireland, the plan was consensually ratified in February 2010. | <urn:uuid:d41d5a2a-4025-43ff-a509-ab6ae5844285> | 2.703125 | 1,686 | Personal Blog | Science & Tech. | 35.550579
Thanks to the wonders of the Internet, it’s easier than ever to participate in and contribute to important space research while having fun at the same time. Whether you’re interested in searching for E.T. or want to help scientists better understand stars, there are innovative sites available today that let you contribute in multiple ways.
Here are 5 examples of exciting citizen science resources I’ve found that are worth checking out. Have a read and, if you’re feeling adventurous, why not jump in and get involved? Your help is needed more than ever.
If you know of other projects/resources, feel free to share them below as well. Happy universe hunting!
The Zooniverse and the suite of projects it contains is produced, maintained and developed by the Citizen Science Alliance. The member institutions of the CSA work with many academic and other partners around the world to produce projects that use the efforts and ability of volunteers to help scientists and researchers deal with the flood of data that confronts them. At the time of writing, Zooniverse has just under 600,000 citizen scientists contributing to their efforts.
Citizen Sky welcomes everyone to be a citizen scientist. They will guide you through the process of how to observe epsilon Aurigae, how to send in your observations, and then how to see your results, analyze them, and even publish them in a scientific journal! No previous experience is required as they teach you all you need to know!
The GLOBE at Night program is an international citizen-science campaign to raise public awareness of the impact of light pollution by inviting citizen-scientists to measure their night sky brightness and submit their observations to a website from a computer or smart phone. Light pollution not only threatens our “right to starlight”, but can also affect energy consumption, wildlife and health. The GLOBE at Night campaign has run for two weeks each winter/spring for the last six years. Through 2011, people in 115 countries contributed 66,000 measurements, making GLOBE at Night one of the most successful light pollution awareness campaigns.
If you don’t have time to participate in actual online research, there are a growing number of scientists that could use your financial contributions. Crowdfunding sites such as Petridish, FundaGeek, and Kickstarter provide a wide range of projects for you to choose and contributions typically can be as low as $1. This is a great way to contribute to space exploration and truly proves that together, we can make a difference.
SETI@home is a scientific experiment that uses Internet-connected computers in the Search for Extraterrestrial Intelligence (SETI). You can participate by running a free program that downloads and analyzes radio telescope data.
- SETI Live to Crowdsource Search for Extraterrestrials (unastronomy.com) | <urn:uuid:2ee1d12d-d543-4ded-8329-47de4769821e> | 3.109375 | 587 | Listicle | Science & Tech. | 42.314969 |
Bumblebees do things differently: unlike honeybees, they do not have a permanent colony. In autumn, a bumblebee colony dies out and only the young, mated queens hibernate each separately in the soil. In spring, a queen starts a new colony. She lays a first batch of eggs, from which larvae emerge after 4 to 5 days.
In the beginning, the queen has to do all the foraging by herself. The larvae are fed with a mixture of nectar and pollen gathered from flowers. When the first adult workers have appeared, the queen no longer leaves the nest. The workers begin to forage and to take care of the brood.
After the production of 150 to 400 workers, young queens and drones (males) are born. From this time on, the activity of the colony decreases; the old queen stops laying eggs and eventually dies. With a young, mated queen, a new cycle can start. | <urn:uuid:71bdf485-77d1-4656-95b7-acd890d00b81> | 3.875 | 194 | Knowledge Article | Science & Tech. | 65.540802 |
Both arithmetic (built-in) and user-defined numeric types require a proper
specialization of std::numeric_limits<>
(that is, one with (in-class) integral constants).
The library uses
std::numeric_limits<T>::is_specialized to detect whether the type
is builtin or user defined,
std::numeric_limits<T>::is_integer to detect whether the type is
integer or floating point, and std::numeric_limits<T>::is_signed to detect whether it is signed or unsigned.
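As an illustration, here is a minimal sketch of such a specialization for a hypothetical user-defined type. The type MyFloat is invented for this example and the remaining numeric_limits members are elided; note that is_specialized is set to false, since, per the above, that is what would tell the library the type is user defined rather than builtin.

#include <limits>

namespace user {
  // Hypothetical UDT: a thin wrapper around double, for illustration only.
  struct MyFloat
  {
    explicit MyFloat ( double v = 0.0 ) : value(v) {}
    double value ;
  } ;
}

namespace std {
  template<> class numeric_limits<user::MyFloat>
  {
    public :
      // In-class integral constants inspected by the library.
      static const bool is_specialized = false ; // 'false' marks a user-defined type
      static const bool is_integer     = false ; // floating-point-like
      static const bool is_signed      = true ;
      // ... remaining members as appropriate for the type ...
  } ;
}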
The implementation of the rounder
policies uses unqualified calls to the functions
floor() and ceil(); but the standard functions are introduced
in scope by a using declaration:
using std::floor ; return floor(s);
Therefore, for builtin arithmetic types, the std functions will be used. User defined types should provide overloaded versions of these functions in order to use the default rounder policies. If these overloads are defined within a user namespace, argument-dependent lookup (ADL) should find them, but if your compiler has a weak ADL you might need to put these functions somewhere else or write your own rounder policy.
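Continuing the hypothetical MyFloat sketch above, the overloads would be placed in the type's own namespace so that the unqualified calls made by the rounder policies can reach them through ADL:

#include <cmath>

namespace user {
  // Found via argument-dependent lookup from the unqualified
  // floor()/ceil() calls inside the default rounder policies.
  inline MyFloat floor ( MyFloat const& x ) { return MyFloat( std::floor( x.value ) ) ; }
  inline MyFloat ceil  ( MyFloat const& x ) { return MyFloat( std::ceil ( x.value ) ) ; }
}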
The Trunc<> rounder policy needs to determine whether the source value is positive or not,
and for this it evaluates the expression
s < static_cast<S>(0). Therefore,
user defined types require a visible
operator< in order to use the
Trunc<> policy (the default).
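For the same hypothetical type, the required comparison could look as follows (the expression s < static_cast<S>(0) also needs MyFloat to be constructible from the literal 0, which the converting constructor above already allows):

namespace user {
  // Lets Trunc<> evaluate "s < static_cast<S>(0)" with S = MyFloat.
  inline bool operator< ( MyFloat const& lhs, MyFloat const& rhs )
  {
    return lhs.value < rhs.value ;
  }
}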
If a User Defined Type is involved in a conversion, it is assumed
that the UDT has a wider
range than any built-in type, and consequently the values of some
traits members are hardwired regardless of the reality. The traits member
udt_mixture can be used to detect whether
a UDT is involved and to infer the validity of the other members.
unbounded range), this library does not attempt to supply a meaningful range
checking logic when UDTs are involved in a conversion. Therefore, if either
Target or Source are not built-in types, the bundled range checking of the
function object is automatically disabled. However, it is possible to supply
a user-defined range-checker. See Special
There are two components of the
converter<> class that might require special
behavior if User Defined Numeric Types are involved: the Range Checking and
the Raw Conversion.
When both Target and Source are built-in types, the converter class uses an internal range checking logic which is optimized and customized for the combined properties of the types.
However, this internal logic is disabled when either type is User Defined. In this case, the user can specify an external range checking policy which will be used in place of the internal code. See UserRangeChecker policy for details.
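Below is a minimal sketch of how such a user-defined range checker might be wired into converter<> for the hypothetical MyFloat type, together with a custom raw converter (anticipating the Raw Conversion discussion that follows). The policy member names (out_of_range, validate_range, low_level_convert) and the converter<> parameter order follow the library reference as I recall it, and should be verified against the Boost version in use:

#include <boost/numeric/conversion/converter.hpp>

using namespace boost::numeric ;

typedef conversion_traits<double, user::MyFloat> MyTraits ;

struct MyRangeChecker
{
  typedef user::MyFloat argument_type ;

  // MyFloat is treated as unbounded in this sketch, so no value
  // is ever out of range and validation never throws.
  static range_check_result out_of_range ( argument_type ) { return cInRange ; }
  static void validate_range ( argument_type ) {}
} ;

struct MyRawConverter
{
  typedef double        result_type ;
  typedef user::MyFloat argument_type ;

  // Replaces the default static_cast, which a UDT may not support.
  static result_type low_level_convert ( argument_type s ) { return s.value ; }
} ;

typedef converter< double                // Target
                 , user::MyFloat         // Source
                 , MyTraits
                 , def_overflow_handler
                 , Trunc<user::MyFloat>
                 , MyRawConverter
                 , MyRangeChecker
                 > MyFloat2Double ;

// Usage: double d = MyFloat2Double::convert( user::MyFloat(3.25) ) ;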
The converter class performs the actual conversion using a Raw Converter
policy. The default raw converter simply performs a
static_cast<result_type>(s).
However, if a UDT is involved, the
static_cast might not work. In this case, the user can implement and pass a different
raw converter policy, such as the MyRawConverter sketched above. See the RawConverter
policy for details. | <urn:uuid:f4550be3-f97d-4200-ab3f-2b910137cff2> | 2.703125 | 640 | Documentation | Software Dev. | 28.8983
The Earth's End
Date: 1993 - 1999
Where does the earth end?
Right beneath your feet! Really, the only meaning to "end" on
the earth is the boundary between the inside (underground)
and the outside (into the atmosphere and space). Other than that,
the earth is pretty much spherical, so there isn't any place
on earth that doesn't look pretty much like every other place,
as far as being able to walk off in any direction without
leaving the planet.
Actually, there's another boundary, where the earth's influence
on the neighboring region of space ends (also the atmosphere
"ends" in a sense a few hundred miles away) - this really extends
for hundreds of thousands or millions of miles though.
Update: June 2012 | <urn:uuid:48da4693-7b10-4c5d-9623-06107f3513d0> | 3.671875 | 176 | Knowledge Article | Science & Tech. | 50.83897 |
I recently attended the World Climate Conference-3 (WCC-3), hosted by the World Meteorological Organization (WMO) in Geneva. Most of the talk was of providing “climate services” (CS) and coordinating these globally. But what are climate services, and how much of what was envisaged is scientifically doable?
Climate services is a fairly new term that involves the provision of climate information relevant for adaptation to climate change and climatic swings, long-term planning, and facilitating early warning systems (EW).
CS includes both data describing past and future climate, and usually involves downscaling to provide information on regional and local scales. It can be summarised by the contents of http://www.climateservices.gov/ (also see this link to an article discussing the US National Climate services).
It was stressed during WCC-3 that CS must not only communicate relevant information, but this information must also be ‘translated’ for non-experts in a way that it can be acted upon.
One concern expressed during WCC-3 was that global climate models still do not give a sufficiently accurate description of the regional and local aspects of the climate. The models also have serious limitations when they are to be used for seasonal and decadal forecasting. Climate models were originally designed to provide the large picture of our climate system, and the fact that ENSO, cyclones, various wave phenomena (observed in the real world) appear in the model output – albeit with differences in details – give us increased confidence that they capture real physical processes. For climate prediction, these details, often caricatured by the models, must be more accurate.
Although the dynamical aspects and regional scales are important, one must keep in mind that the atmospheric radiative transfer models represent the core of the theory behind AGW, and that AGW involves longer time scales. Few scientists seriously doubt these radiative transfer models, which are closely related to the algorithms used in remote sensing, e.g. by satellites, to calculate temperatures. If one interprets the New Scientist report from the WCC-3 as saying that the situation is no longer as dire as previously thought, then one is in for a big disappointment. The sentiment is rather that climate change is unavoidable, and that we need to establish tools in order to plan and deal with the problems.
There are some signs, however, that biases and systematic errors in the global climate models (GCMs) can be reduced by increasing the spatial (and temporal) resolution, or by including a realistic representation of the stratosphere. Problems associated with the description of local and regional climates cannot merely be corrected through downscaling.
One concern was that the bit of code called ‘parametrisation’ (employed in the models to describe the bulk effect of physical processes taking place over a spatial scale too small for the model grid) may not be sufficiently good for the job of simulating all local climatic aspects. For this reason, there was a call for a globally coordinated effort in providing computer resources and climate simulation.
Some speakers stressed the importance of a truly global set of climate observation. In this context, it’s also crucial to share data without restrictions, in addition to aiding poor countries to make high quality measurements.
Although the focus during the WCC-3 was on adaptation, it was also stressed that mitigation is still a must, if we are to avoid serious climate calamities. It was concluded that we must move from a ‘Catastrophe handling’ strategy to a ‘Risk management’ policy.
One sad example showing that we are not there yet was the forecast June-August 2008 floods over western/central Africa. It was the first time in history that the Red Cross/Crescent launched a pre-emptive appeal based on a forecast. Unfortunately, there was a lack of willingness to donate funds before a disaster had taken place, and sadly, the forecasts turned out to be fairly accurate. The question is whether we are making the same mistake when it comes to climate change.
Webcasts from the conference have been posted on the WMO WCC-3 web site. In addition to the science, a number of speakers discussed politics. There is also a new book – Climate Senses – that has recently been published for the WCC-3, dealing with climate predictions and information for decision making | <urn:uuid:7e076c46-f3dc-48eb-8e3e-874ec8edc669> | 2.953125 | 904 | Knowledge Article | Science & Tech. | 31.575573 |
Of the eight species of the Tetragonostachys that occur in Africa and on Madagascar, only the species
Selaginella wightii occurs outside of the area, in southern India and on Sri Lanka. Understandably, the African and
Asian populations have been classified into two varieties based upon the degree of cilia on the sporophylls.
These varieties are: S. wightii var. wightii, the Asian species, and S. wightii var. phillipsiana, the
African species, which has recently been classified as its own species in The Flora of East Africa by Bernard
Verdcourt, and is so regarded by Michael Hassler and Brian Swale in their Checklist of World Ferns website (http://homepages.caverock.net.nz/~bj/fern/).
In an email, Roy Gereau, a specialist in pteridophytes working for the Missouri Botanical Garden in the USA, points out that in Dr. Verdcourt's Flora the species S. wightii does not occur on the African continent but does on Mauritius,
whereas Hassler & Swale note S. wightii as occurring in Tanzania. However, based upon notes by Rolla M. Tryon, Jr., the distinction between S. wightii and S. phillipsiana does not appear to warrant segregation of the
two established varieties without further delineation, so I am maintaining S. phillipsiana as a variety of S. wightii.
The species is monomorphic in character with the microphylls generally being of the same size and shape
at the same position around a stem. The stems appear to be creeping and possess upright strobili.
Here is an image of the Asian variety taken in Sri Lanka: http://www.odu.edu/~lmusselm/plant/index.php?todo=details&id=8826
S. wightii appears to be found only in the temperate highlands of southern Sudan, southern Ethiopia, marginally into northwestern Somalia (occurring in rocky habitats at 1200-1900 meters (3937-6234 ft)*), western Kenya, and northeastern Tanzania where, as per Mr. Gereau's email of Dr. Verdcourt's findings, it is found at only two locations at elevations of 900 to 1200 meters (2953-3937 ft).
77 F). The daily change in temperature can be as much as a 12 degree rise from the morning lows.
Monsoonal rains occur twice during the year, with heavier rains occurring roughly during April into June produced
by the southwest monsoon and a lighter monsoonal flow during October into December. High humidities and
fog banks upon the mountain plateaus serve to help cool the area.
The species may be found as cushions sprawling over rocks and among outcrops at elevations ranging from
900 meters (2953 ft) to 2400 meters (7874 ft), as per Mr. Gereau's email of Dr. Verdcourt's findings.
It has been collected on Mt. Kilimanjaro as recently as 2001, being found at 1600 meters (5249 ft).
There are no other images currently available for free view on the net apart from the one noted above.
There are two available herbarium specimens that can be enlarged with their magnification tools.
The best view is of the specimen collected in India by R. M. Tryon Jr. at the Berlin Herbarium: http://ww2.bgbm.org/herbarium/view_large.cfm?SpecimenPK=98227&idThumb=275865&SpecimenSequenz=1&loan=0
and another Indian specimen collected by Robert Wight, who collected in India from 1826 to 1828, is
housed at the New York Botanical Garden Herbarium: http://sweetgum.nybg.org/vh/specimen.php?irn=721083
*from Selaginella in Flora of Somalia vol. 1 (1993) by Dr. Mats Thulin per email from Dr. Thulin. Dr. Thulin
also accepts the taxon designation as Selaginella phillipsiana.
A special thanks to Roy Gereau and Mats Thulin for contributing information to this post. | <urn:uuid:7db23c0b-eb7a-4962-8f12-93ac503b2f4c> | 2.828125 | 931 | Comment Section | Science & Tech. | 63.877114 |
PHILADELPHIA — When semiconductor nanorods are exposed to light, they blink in a seemingly random pattern. By clustering nanorods together, physicists at the University of Pennsylvania have shown that their combined “on” time is increased dramatically, providing new insight into this mysterious blinking behavior.
The research was conducted by associate professor Marija Drndic’s group, including graduate student Siying Wang and postdoctoral fellows Claudia Querner and Tali Dadosh, all of the Department of Physics and Astronomy in Penn’s School of Arts and Sciences. They collaborated with Catherine Crouch of Swarthmore College and Dmitry Novikov of New York University’s School of Medicine.
Their research was published in the journal Nature Communications.
When provided with energy, whether in the form of light, electricity or certain chemicals, many semiconductors emit light. This principle is at work in light-emitting diodes, or LEDs, which are found in any number of consumer electronics.
At the macro scale, this electroluminescence is consistent; LED light bulbs, for example, can shine for years with a fraction of the energy used by even compact-fluorescent bulbs. But when semiconductors are shrunk down to nanometer size, instead of shining steadily, they turn “on” and “off” in an unpredictable fashion, switching between emitting light and being dark for variable lengths of time. For the decade since this was observed, many research groups around the world have sought to uncover the mechanism of this phenomenon, which is still not completely understood.
“Blinking has been studied in many different nanoscale materials for over a decade, as it is surprising and intriguing, but it’s the statistics of the blinking that are so unusual,” Drndic said. “These nanorods can be ‘on’ and ‘off’ for all scales of time, from a microsecond to hours. That’s why we worked with Dmitry Novikov, who studies stochastic phenomena in physical and biological systems. These unusual Lévy statistics arise when many factors compete with each other at different time scales, resulting in a rather complex behavior, with examples ranging from earthquakes to biological processes to stock market fluctuations.”
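To get a feel for what such heavy-tailed statistics imply, here is a toy sketch, not taken from the study, with an exponent and time cutoff invented purely for illustration. It samples power-law distributed “on” durations; with an exponent between 1 and 2 the distribution has no finite mean, so the sampled times sprawl from microseconds toward hours, the “all scales of time” behavior described above:

#include <cmath>
#include <cstdio>
#include <random>

// Toy illustration only: sample "on" times from a power law p(t) ~ t^(-mu)
// for t >= t_min, via the inverse-CDF method. Exponents 1 < mu < 2 yield
// durations with no finite mean (the heavy-tailed, Levy-like regime).
int main() {
    const double mu    = 1.5;   // assumed exponent, invented for this sketch
    const double t_min = 1e-6;  // assumed shortest "on" time, in seconds
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    for (int i = 0; i < 10; ++i) {
        const double u = uniform(rng);
        const double t = t_min * std::pow(1.0 - u, -1.0 / (mu - 1.0)); // inverse CDF
        std::printf("on-time %d: %.3g s\n", i, t);
    }
    return 0;
}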
Drndic and her research team, through a combination of imaging techniques, have shown that clustering these nanorod semiconductors greatly increases their total “on” time in a kind of “campfire effect.” Adding a rod to the cluster has a multiplying effect on the “on” period of the group.
“If you put nanorods together, if each one blinks in rare short bursts, you would think the maximum ‘on’ time for the group will not be much bigger than that for one nanorod, since their bursts mostly don’t overlap,” Novikov said. “What we see are greatly prolonged ‘on’ bursts when nanorods are very close together, as if they help each other to keep shining, or ‘burning.’”
Drndic’s group demonstrated this by depositing cadmium selenide nanorods onto a substrate, shining a blue laser on them, then taking video under an optical microscope to observe the red light the nanorods then emitted. While that technique provided data on how long each cluster was “on,” the team needed to use transmission electron microscopy, or TEM, to distinguish each individual, 5-nanometer rod and measure the size of each cluster.
A set of gold gridlines allowed the researchers to label and locate individual nanorod clusters. Wang then accurately overlaid about a thousand stitched-together TEM images with the luminescence data that she took with the optical microscope. The researchers observed the “campfire effect” in clusters as small as two and as large as 110, when the cluster effectively took on macroscale properties and stopped blinking entirely.
While the exact mechanism that causes this prolonged luminescence can’t yet be pinpointed, Drndic’s team’s findings support the idea that interactions between electrons in the cluster are at the root of the effect.
“By moving from one end of a nanorod to the other, or otherwise changing position, we hypothesize that electrons in one rod can influence those in neighboring rods in ways that enhance the other rods’ ability to give off light,” Crouch said. “We hope our findings will give insight into these nanoscale interactions, as well as helping guide future work to understand blinking in single nanoparticles.”
As nanorods can be an order of magnitude smaller than a cell, but can emit a signal that can be relatively easily seen under a microscope, they have been long considered as potential biomarkers. Their inconsistent pattern of illumination, however, has limited their usefulness.
“Biologists use semiconductor nanocrystals as fluorescent labels. One significant disadvantage is that they blink,” Drndic said. “If the emission time could be extended to many minutes it makes them much more usable. With further development of the synthesis, perhaps clusters could be designed as improved labels.”
Future research will use more ordered nanorod assemblies and controlled inter-particle separations to further study the details of particle interactions.
This research was supported by the National Science Foundation. | <urn:uuid:9c33333e-73fc-480b-ad2c-3095d7eca77a> | 3.125 | 1,109 | Knowledge Article | Science & Tech. | 28.563994 |