text large_stringlengths 148 17k | id large_stringlengths 47 47 | score float64 2.69 5.31 | tokens int64 36 7.79k | format large_stringclasses 13 values | topic large_stringclasses 2 values | fr_ease float64 20 157 |
|---|---|---|---|---|---|---|
Space is three-dimensional... or is it? In fact, we are all used to living in a curved, multidimensional universe. And a mathematical argument might just explain how those higher dimensions are hidden from view.
A bizarre set of 8-dimensional numbers could explain how to handle string theory's extra dimensions, why elementary particles come in families of three... and maybe even how spacetime emerges in four dimensions.
String theory has one unique consequence that no other theory of physics before it has had: it predicts the number of dimensions of space-time. But where are these other dimensions hiding, and will we ever observe them?
Learning mathematics involves a progression to higher and higher concepts, building on the foundations of what we have already learnt. But Andrew Irving and Ebrahim Patel explain that no matter how high your mathematical knowledge reaches, you must never lose sight of your foundations, no matter how basic they may seem. | <urn:uuid:18bf156b-691c-4c6d-8b31-cb2137ad78a6> | 3.265625 | 187 | Truncated | Science & Tech. | 45.262217 |
Identifying and cataloging biological diversity is challenging. One way to go about identifying all the life forms is to sequence a known region of the genome in all those species. This is known as DNA barcoding. An article in PNAS reports on the DNA sequence of a gene found useful for DNA barcoding in plants. In a review of the paper, the following table is presented:
| ||DNA barcoding||Genome sequencing|
|Number of species||All (or most)||One (or few)|
|Number of gene regions||One (or few)||All (or most)|
The gist: DNA barcoding results in the sequencing of a single gene in a bunch of species, while genome sequencing gives us the sequence of an entire genome in a single species. This may be true now, but for how long? The dropping price of sequencing will allow us to get information from many genomic regions in many species. These won’t be high quality whole genome sequences, but the age of doing DNA barcoding with a single gene won’t last for long. | <urn:uuid:6ca4e7fc-9d6d-406a-897a-2ed6c9eb6167> | 3.296875 | 217 | Personal Blog | Science & Tech. | 51.896838 |
November 11, 2008
Architecture Pattern – It expresses a fundamental structural organization or schema for software systems. It provides a set of predefined subsystems, specifies their responsibilities, and includes rules and guidelines for organizing the relationships between them.
This article details the various attributes of a system, and where software architecture fits in.
Design Pattern – It provides a scheme for refining the subsystems or components of a software system, or the relationships between them. It describes a commonly recurring structure of communicating components that solves a general design problem within a particular context.
Examples: Adapter, Facade, Singleton, and Proxy, to name a few. The book Design Patterns by the GoF (Gang of Four) provides an exhaustive list of design patterns. The Hillside site also has various patterns. With the rapid change in software, new pattern catalogues keep appearing, like the one Yahoo has. The TRIZ journal entry also discusses how to use design patterns alongside software engineering.
Idiom – It is a low-level pattern specific to a programming language. An idiom describes how to implement particular aspects of components or the relationships between them using the features of the given language.
Example: Idioms generally relate to a particular programming language. The Curiously Recurring Template Pattern (CRTP) is considered an idiom in C++. | <urn:uuid:4e9a41dd-5b94-484d-aef0-5f4030f2ebde> | 3.734375 | 269 | Personal Blog | Software Dev. | 20.668774 |
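For readers who have not seen it, here is a minimal sketch of the CRTP idiom (the class names are purely illustrative):

    #include <iostream>

    // Base class template: the derived class passes itself as the template
    // argument, so the base can call derived-class methods without virtual dispatch.
    template <typename Derived>
    struct Shape {
        void describe() const {
            // Safe because Derived inherits from Shape<Derived>.
            std::cout << "area = " << static_cast<const Derived*>(this)->area() << "\n";
        }
    };

    struct Square : Shape<Square> {
        double side = 2.0;
        double area() const { return side * side; }
    };

    int main() {
        Square sq;
        sq.describe();  // prints "area = 4"
        return 0;
    }

The pattern is "recurring" because Square appears in its own base class; it gives compile-time polymorphism where a virtual function would otherwise be needed.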
The DTD of TEI P3 defines a large number of element types, with a wide variety of meanings. In addition, it defines one element (<seg>), which has no specified meaning. The <seg> element may be used:
The <seg> element can be used only for phrase-level elements, because <seg> is a member of class phrase. It thus can appear within paragraphs, etc. (strictly: within any element with a content model of paraContent, specialPara, or phrase.seq), but not between paragraphs, directly within text divisions.
It would be convenient to have an anonymous element type usable at the component level of documents; this would allow a cleaner markup of
The element <ab> will be added to the additional tag set for linking and alignment, in section 14.3, which is where <seg> is defined.
It will have the following description: ``<ab>: contains any arbitrary component-level unit of text''. As a member of the seg class, it will inherit attributes type and ident (this last should be given a more meaningful name: function is proposed). Like <seg>, <ab> should also take an additional attribute subtype, with the description ``provides a subcategorization of the text block, if needed''.
The tag list at the beginning of the section should list the elements in the order <anchor>, <seg>, and <ab>, and the discussion of the <anchor> element should be moved from the end of the discussion section, where it is currently lost, to the beginning.
The discussion of <seg> and <ab> should read: ``
The <seg> and <ab> elements can be used at the encoder's discretion to mark almost any segment of the text which is of interest for processing. One use of these elements is to mark text features for which these Guidelines otherwise provide no appropriate markup, i.e. as a simple extension mechanism. Another use is to provide an identifier for some segment which is to be pointed at by some other element, i.e. to provide a target, or a part of a target, for a <ptr> or other similar element.
Several examples of uses for the <seg> element are provided elsewhere ...
(Continue with current discussion of <seg> element.)
The remainder of this chapter contains a number of examples of the use of the <seg> element simply to provide an element to which an identifier may be attached, for example so that another segment may be linked or related to it in some way.
The <ab> element performs a similar function for portions of the text which occur not within paragraphs or other component-level elements, but at the component level themselves. It may be used, for example, to tag the canonical verse divisions of Biblical texts:
<div1 type='book' n='Gen'>
 <head>The First Book of Moses, Called</head>
 <head type='main'>Genesis</head>
 <div2 type='chapter' n='1'>
  <ab n='1'>In the beginning God created the heaven and the earth.</ab>
  <ab n='2'>And the earth was without form, and void; and darkness <hi>was</hi> upon the face of the deep. And the Spirit of God moved upon the face of the waters.</ab>
  <ab n='3'>And God said, Let there be light: and there was light.</ab>
  <!-- ... -->
In other cases, where the text clearly indicates paragraph divisions containing one or more verses, the <p> element may be used to tag the paragraphs, and the <seg> element used to subdivide them. The <ab> element is provided as an alternative to the <p> element; it may not be used within paragraphs. The <seg> element, by contrast, may appear only within and not between paragraphs (or anonymous block elements).
<div1 type='book' n='Gen'>
 <head>Das Erste Buch Mose.</head>
 <div2 type='chapter' n='1'>
  <p>
   <seg n='1'>Am Anfang schuff Gott Himel vnd Erden.</seg>
   <seg n='2'>Vnd die Erde war wüst vnd leer / vnd es war finster auff der Tieffe / Vnd der Geist Gottes schwebet auff dem Wasser.</seg>
  </p>
  <p>
   <seg n='3'>Vnd Gott sprach / Es werde Liecht / Vnd es ward Liecht.</seg>
   <!-- ... -->
The <ab> element is also useful for marking dramatic speeches when it is not clear whether the speech is to be regarded as prose or verse. If, for example, an encoder does not wish to express an opinion as to whether the opening lines of The Tempest are to be regarded as prose or as verse, they might be tagged as follows:
<div1 type=act n='I'>
 <div2 type=scene n='1'>
  <head rend=italic>Actus primus, Scena prima.</head>
  <stage type=setting rend=italic>A tempestuous noise of Thunder and Lightning heard: Enter a Ship-master, and a Boteswaine.</stage>
  <sp><speaker>Master.</speaker><ab>Bote-swaine.</ab></sp>
  <sp><speaker>Botes.</speaker><ab>Heere Master: What cheere?</ab></sp>
  <sp><speaker>Mast.</speaker><ab>Good: Speake to th' Mariners: fall too't, yarely, or we run our selues a ground, bestirre, bestirre. <stage type=move>Exit.</stage></ab></sp>
  <stage type=move>Enter Mariners.</stage>
  <sp><speaker>Botes.</speaker> <ab>Heigh my hearts, cheerely, cheerely my harts: yare, yare: Take in the toppe-sale: Tend to th' Masters whistle: Blow till thou burst thy winde, if roome e-nough.</ab></sp>
See further section 6.11.2, "Core Tags for Drama," on p. 212, and section 10.2.4, "Speech Contents," on p. 285.''
References to <seg> in 10.2.4, such as the following ``or <seg> elements, in case of doubt as to whether the material should be treated as verse or prose,'' should be changed to refer to <ab>.
Section 14.3 should be renamed Segments, Blocks, and Anchors.
The declaration for <ab> should be
<!ELEMENT ab - O (%paraContent;) >
<!ATTLIST ab
     %a.global;
     %a.seg;
     subtype CDATA #IMPLIED
     TEIform CDATA 'ab' >
Automagically generated by lite2html on 5 Mar 1997 | <urn:uuid:92ca8a0d-d126-49d0-940e-c85d3721d05d> | 3.109375 | 1,502 | Documentation | Software Dev. | 65.076986 |
'Wave Structure of Matter'
Wave Structure of Matter (WSM) is the idea that matter is a purely wave phenomenon: it can be understood as spherical standing waves and is therefore made of the same stuff as light. It is suggested that all pages/articles to do with WSM are named with Wsm followed by whatever the page is about, beginning with a capital; see the list below.
Getting started on Wiki WSM
- Bookmark this page so that you can start here in future.
- Have a look at HomePage to learn about this Wiki site.
- TourOfWikiWorld to learn more about what we are thinking. Take a tour.
- Visit the classroom, NewbieHelp, or SandBox to learn more about making and editing your own WikiWorld pages.
- Click "Edit" at the bottom to change or add to a page.
- If you have already looked around, see RecentChanges for what's happening.
- First time visitors, please sign your name in RecentVisitors so the WikiWorld community can give you a friendly shout out. See WikiWiki for more info.
- TextFormattingRules explains how to make your pages format nicely.
- WARNING - when editing things it is important to press reload/refresh before making changes (if you have ever edited the page before) because most browsers use the old cached version and lose everyone else's additions to the page.
- From NicoBenschop (24may04): people interested in switching away from Yahoo/WSM, please sign in (see the instructions above) to discuss an alternative place (re wsm-2560 and wsm-2564).
- WsmMembers list of members with details and home pages etc
- WsmOverview a description of WSM
- WsmGuidelines for how to build this site and discuss things
- WsmLinks to sites of interest not on Wiki
- WsmAnimations links and descriptions of Gab's great WSM animations
- WsmExperiments list of physics experiments that are relevant to WSM
- WsmDetails is the possible alternatives and details of WSM with argument and counterargument
- VirtualClassroomWaveStructureMatter for FAQ and even inFAQ | <urn:uuid:9e9fdac3-516d-4067-80b3-4cb939860282> | 2.84375 | 463 | Content Listing | Science & Tech. | 40.326434 |
Resonance energy transfer
resonance energy transfer --> fluorescence energy transfer
(Science: technique) transfer of energy from one fluorochrome to another. The emission wavelength of the fluorochrome excited by the incident light must approximately match the excitation wavelength of the second fluorochrome.
Results from our forum
... of photosynthesis, which helps plants get energy from the light. Chlorophyll molecules are ... per photosystem) is to absorb light and transfer that light energy by resonance energy transfer to a specific chlorophyll pair ...
See entire post | <urn:uuid:bafde75c-e855-4357-a3a0-1266d86c41ec> | 3.265625 | 116 | Structured Data | Science & Tech. | 22.408333 |
This year's Atlantic hurricane season is expected to be busier than usual and the 2011 summer forecast calls for some extreme weather. Heidi Cullen makes the climate connection.
In different parts of the American West, climate influences wildfires in unexpected ways.
Reactor accidents and radiation leaks aren't the only risks with nuclear energy; nuclear weapons are another problem.
Engineers who design nuclear plants can plan a worst case scenario, but this may not be enough to prevent all accidents.
Both India and China have big expansion plans for nuclear power. Here's a look at why they are expanding and what the risks are.
With what feels like an especially long winter coming to an end, Dr. Heidi Cullen gives a climate outlook for spring 2011.
Besides being a force of nature, the wind is a promising renewable energy resource. But where does wind come from, and what's its true energy potential?
What we’ve known as “normals” for our climate during the past decade will very likely change soon. | <urn:uuid:e9ba8b95-6ff9-49e9-9c2e-f72507ce968f> | 2.71875 | 220 | Content Listing | Science & Tech. | 56.996259 |
A helicase protein moves rapidly on highly flexible single-stranded DNA track, using power provided by ATP hydrolysis. Once the helicase encounters a physical blockade that it cannot surmount, a conformational change in the helicase protein results in the recruitment of the initial site on the DNA, forming a loop. The helicase protein then snaps back to the beginning site on the DNA and repeats the movement. Repetitive movement on the DNA may keep it clear of potentially toxic proteins.
Animation: Courtesy of Taekjip Ha | <urn:uuid:5854116c-4d19-4e30-8b99-e38ffba477c2> | 2.953125 | 108 | Knowledge Article | Science & Tech. | 27.014485 |
Despite their ubiquity as research tools, there is still much we don’t fully understand about bacteria — such as how they divide. Learning more could point the way to new desperately needed antibiotics.
with red bands of FtsZ
September 2012--Antibiotics work in a variety of ways. Some destroy the bacterial protein synthesis machinery. Some abort the construction of the bacterial cell wall. Still others inhibit an enzyme bacteria need to make a vital vitamin.
But microbes have increasingly developed resistance to these points of attack, and the World Health Organization, among others, has sounded the alarm that the utility of many antibiotics could soon expire.
One possible solution is to devise new classes of antibiotics, ones that target novel parts of the bacterial architecture. That’s where Erin Goley’s research comes in. Goley, an assistant professor of biological chemistry, is not a drug researcher, but her basic science studies may one day prove useful to drug developers.
Goley focuses on the bacterial cytoskeleton, a collection of cellular proteins that assemble themselves into a scaffold that gives the bacterial cell its shape and may help to orchestrate cell division.
“No antibiotics currently in use target the bacterial cytoskeleton,” says Goley.
In fact, it was only in the late 1990s that biologists discovered bacteria even had a cytoskeleton. The cytoskeleton was first identified in the cells of eukaryotic organisms (those, such as plants and animals, whose cells have specialized organelles and a discrete nucleus). Bacteria are tiny, for one thing, and until the advent of advanced imaging technology, scientists could not get a good look inside. The species Goley studies, Caulobacter crescentus, is a mere 500 nanometers across, or about one one-hundredth the size of an average human cell.
Second, a cell wall encases most bacterial species, and scientists assumed this semi-rigid structure obviated the need for a cytoskeleton.
Electron micrograph of FtsZ filaments forming bundles in the presence of FzlA.
These assumptions turned out to be wrong. In 1998, structural biologist Jan Löwe, in the United Kingdom, demonstrated that a bacterial protein called FtsZ is an evolutionary counterpart of tubulin, a key protein component of the eukaryote cytoskeleton. The finding implied that the cytoskeleton was not a eukaryotic invention. “This was a huge discovery,” says Goley. “It excited eukaryotic cell biologists and microbiologists alike.”
Soon, scientists discovered other bacterial cytoskeleton proteins, and conducted studies that indicated these proteins endowed bacterial species with their idiosyncratic shapes (from rods to stars) and also aided in cell division.
This radical shift in thinking (as close to a revolution as can happen in cell biology) occurred as Goley was finishing her Ph.D. at the University of California at Berkeley in a tangentially related field. She decided to join the revolution.
“I was fascinated by the fundamental cell biology of these cytoskeletal proteins in bacteria,” says Goley, “and I thought there were still a lot of unanswered questions.”
To explore these questions, she is using the crescent-shaped Caulobacter, a non-pathogenic resident of freshwater lakes and streams. It is an ideal research model for a number of reasons, including the fact that whole populations of the cells can be induced to grow in synchrony, says Goley, “which allows us to dissect processes in time.” Plus, the basic cytoskeletal proteins are the same from one bacterial species to the next. So insights gleaned from research on Caulobacter will likely apply to pathogenic species.
One area that particularly intrigues Goley is the cytoskeleton’s role in cell division. In Caulobacter and other bacteria, one of the first steps in cell division occurs when FtsZ molecules link together to form a ring encircling the cell along the underside of the cell membrane. The cell then divides at the site of this “Z ring.”
Goley can label FtsZ proteins with a fluorescent protein and watch these events unfold through a fluorescence microscope. But that approach only provides a distant view of events and leaves many questions. For example: Does the Z ring serve simply as a spatial marker of cell division, something like a seamstress’s chalk line noting where to cut? Or does it exert a force that constricts and eventually guillotines the cell in two? To answer that question, Goley is planning biochemistry studies to measure the forces that FtsZ can generate.
Another question is which proteins, in addition to FtsZ, help to form the Z ring. Goley and other scientists have shown that a host of different proteins congregate around FtsZ. But it’s not clear which ones contribute to Z ring formation, and which just happen to hang out there. In a study conducted while she was a postdoc at Stanford University School of Medicine, Goley and her mentor, Professor of Developmental Biology Lucy Shapiro, identified one protein, a molecule called FzlA, that appears to play a critical role.
Goley and Shapiro demonstrated that on its own, FtsZ looks rather plain — something like an uncooked piece of spaghetti. But in the presence of FzlA, these unassuming straight lengths of spaghetti transform dramatically, sculpting themselves into “striking arcs and helical bundles,” the team reported in the September 24, 2010 issue of Molecular Cell.
One of Goley’s hypotheses is that such shape changes provide the force that enables the Z ring to divide the cell. It could be that FtsZ begins as a straight molecule anchored to the cell membrane, says Goley. When the molecules start to arc and twist, they pull the membrane inward, acting something like a cinching waistband on a dress.
“I hope this work gives us a molecular-level understanding of which parts of the Z ring are most critical,” says Goley, who joined the Hopkins faculty a year ago and stresses that many of her experiments are in the “very, very early stages.” Such knowledge could help scientists identify drugs that hobble the Z ring’s division machinery, halting bacterial multiplication and the spread of infection.
Goley notes that a research team in the United Kingdom a few years ago provided the proof of principle for this approach; the scientists used an FtsZ inhibitor to cure mice infected with the super-resistant bacterium MRSA. Clinical trials of the inhibitor have not been conducted, says Goley. “But the mouse study was a good indication that FtsZ is a viable target.”
-- Melissa Hendricks
Erin Goley on how bacterial science is undergoing a renaissance | <urn:uuid:76688a6e-289a-4299-9a19-67381891f2d0> | 3.53125 | 1,439 | Nonfiction Writing | Science & Tech. | 37.045698 |
The term acid rain is commonly used to mean the deposition of acidic components in rain, snow, fog, dew, or dry particles. The more accurate term is acid precipitation. "Clean" or unpolluted rain is slightly acidic, because carbon dioxide and water in the air react together to form carbonic acid, a weak acid. Rain acquires additional acidity through the reaction of air pollutants (primarily oxides of sulfur and nitrogen) with water in the air, to form strong acids (such as sulfuric acid and nitric acid). The main sources of these pollutants are emissions from vehicles, industrial plants, and power-generating plants.
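As a rough worked example of that slight acidity (using typical textbook values, which are approximate): with an atmospheric CO2 partial pressure of about 4×10^-4 atm, a Henry's law constant of about 3.4×10^-2 M/atm, and a first dissociation constant of about 4.5×10^-7 for carbonic acid, the hydrogen ion concentration is roughly [H+] ≈ √(4.5×10^-7 × 3.4×10^-2 × 4×10^-4) ≈ 2.5×10^-6 M, which corresponds to a pH of about 5.6. This is why "clean" rain is mildly acidic even before any pollution is added.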
Acid rain has been shown to have adverse effects on forests, freshwater, and soils, killing off insect and aquatic life forms. It also damages buildings and statues, and may adversely affect human health. These problems, which have increased with population and industrial growth, are being addressed by the use of pollution control equipment that reduces the emission of sulfur and nitrogen oxides.
Acid rain was first observed by Robert Angus Smith in Manchester, England. In 1852, he reported the relationship between acid rain and atmospheric pollution. It was, however, not until the late 1960s that scientists began widely observing and studying the phenomenon. Harold Harvey of Canada was among the first to research a "dead" lake. In the United States, public awareness of the problem was heightened in the 1990s, after the New York Times promulgated reports from the Hubbard Brook Experimental Forest in New Hampshire of the myriad deleterious environmental effects resulting from acid rain.
Since the Industrial Revolution, emissions of sulfur and nitrogen oxides to the atmosphere have increased. Occasional pH readings of well below 2.4 (the acidity of vinegar) have been reported in industrialized areas of China, Eastern Europe, Russia, and areas downwind from them. These areas all burn sulfur-containing coal to generate heat and electricity.
Emissions of chemicals leading to acidification
The most significant gas that leads to acidification of rainwater is sulfur dioxide (SO2). In addition, emissions of nitrogen oxides, which are oxidized to form nitric acid, are of increasing importance due to stricter controls on emissions of sulfur-containing compounds. It has been estimated that about 70 Tg(S) per year in the form of SO2 comes from fossil fuel combustion and industry, 2.8 Tg(S) per year comes from wildfires, and 7-8 Tg(S) per year comes from volcanoes.
Sulfur and nitrogen compounds are the principal causes of acid rain. Many of them are generated by human activity, such as electricity generation, factories, and motor vehicles. Coal power plants are among the most polluting. The gases can be carried hundreds of kilometers in the atmosphere before they are converted to acids and deposited.
Factories used to have short chimneys to release smoke, but because they polluted the air in their nearby localities, factories now have tall smokestacks. The problem with this "solution" is that those pollutants get carried far off, releasing gases into regional atmospheric circulation and contributing to the spread of acid rain. Often deposition occurs at considerable distances downwind of the emissions, with mountainous regions tending to receive the most (because of their higher rainfall). An example of this effect is the low pH of rain (compared to the local emissions) that falls in Scandinavia.
Chemistry in cloud droplets
When clouds are present, the loss rate of SO2 is faster than can be explained by gas phase chemistry alone. This is due to reactions in the liquid water droplets.
Sulfur dioxide dissolves in water and then, like carbon dioxide, hydrolyzes in a series of equilibrium reactions:
- SO2 (g) + H2O ⇌ SO2·H2O
- SO2·H2O ⇌ H+ + HSO3-
- HSO3- ⇌ H+ + SO32-
Many aqueous reactions oxidize sulfur from S(IV) to S(VI), leading to the formation of sulfuric acid. The most important oxidation reactions are with ozone, hydrogen peroxide, and oxygen. (Reactions with oxygen are catalyzed by iron and manganese in the cloud droplets).
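As one illustrative example (a simplified overall reaction, with mechanistic details omitted), the hydrogen peroxide pathway can be summarized as:
- HSO3- + H2O2 → HSO4- + H2O
The resulting bisulfate then dissociates, adding H+ and SO42- to the droplet, which is effectively dilute sulfuric acid.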
Wet deposition of acids occurs when any form of precipitation (rain, snow, and so forth) removes acids from the atmosphere and delivers them to the Earth's surface. This can result from the deposition of acids produced in the raindrops (see aqueous phase chemistry above) or by the precipitation removing the acids either in clouds or below clouds. Wet removal of both gases and aerosols is important for wet deposition.
Acid deposition also occurs via dry deposition in the absence of precipitation. This can be responsible for as much as 20-60 percent of total acid deposition. This occurs when particles and gases stick to the ground, plants, or other surfaces.
Surface waters and aquatic animals
Both the lower pH and higher aluminum concentrations in surface water that occur as a result of acid rain can cause damage to fish and other aquatic animals. At pH levels lower than 5, most fish eggs will not hatch, and lower pH levels can kill adult fish. As lakes become more acidic, biodiversity is reduced. Acid rain has eliminated insect life and some fish species, including the brook trout in some Appalachian streams and creeks. There has been some debate on the extent to which man-made causes of lake acidity have caused fish kills; see, for example, the work of Edward Krug.
Soil biology can be seriously damaged by acid rain. Some tropical microbes can quickly consume acids, but other microbes are unable to tolerate low pH levels and are killed. The enzymes of these microbes are denatured (changed in shape so they no longer function) by the acid. The hydronium ions of acid rain also mobilize toxins and leach away essential nutrients and minerals.
Forests and other vegetation
Acid rain can slow the growth of forests and cause leaves and needles to turn brown, fall off, and die. In extreme cases, trees or whole acres of forest can die. The death of trees is not usually a direct result of acid rain, but acid rain often weakens trees and makes them more susceptible to other threats. Damage to soils (noted above) can also cause problems. High-altitude forests are especially vulnerable, as they are often surrounded by clouds and fog which are more acidic than rain.
Other plants can also be damaged by acid rain, but the effect on food crops is minimized by the application of fertilizers to replace lost nutrients. In cultivated areas, limestone may also be added to increase the ability of the soil to keep the pH stable, but this tactic is largely unusable in the case of wilderness lands. Acid rain also depletes minerals from the soil, which stunts the growth of plants.
Some scientists have suggested direct links to human health, but none have been proven. However, fine particles, a large fraction of which are formed from the same gases as acid rain (sulfur dioxide and nitrogen dioxide), have been shown to cause illness and premature death from cancer and other diseases. For more information on the health effects of aerosols, see Particulate#Health effects.
Other adverse effects
Acid rain can also cause damage to certain building materials and historical monuments. This is because the sulfuric acid in the rain chemically reacts with the calcium compounds in the stones (limestone, sandstone, marble and granite) to create gypsum, which then flakes off. This is also commonly seen on old gravestones where the acid rain can cause the inscription to become completely illegible. Acid rain also causes an increased rate of oxidation for iron. Visibility is also reduced by sulfate and nitrate in the atmosphere.
In the United States and various other countries, many coal-burning power plants use flue gas desulfurization (FGD) to remove sulfur-containing gases from their stack gases. An example of FGD is the wet scrubber, which is basically a reaction tower equipped with a fan that passes hot smoke stack gases through the tower. Lime or limestone in slurry form is also injected into the tower to mix with the stack gases and combine with the sulfur dioxide present. The calcium carbonate of the limestone produces pH-neutral calcium sulfate that is physically removed from the scrubber. In other words, the scrubber turns sulfur pollution into industrial sulfates.
In some areas, the sulfates are sold to chemical companies as gypsum when the purity of calcium sulfate is high. In others, they are placed in landfills. However, the effects of acid rain can last for generations, as the effects of pH level change can stimulate the continued leaching of undesirable chemicals into otherwise pristine water sources, killing off vulnerable insect and fish species and blocking efforts to restore native life.
A number of international treaties have been signed regarding the long-range transport of atmospheric pollutants. One example is the Sulphur Emissions Reduction Protocol under the Convention on Long-Range Transboundary Air Pollution.
A more recent regulatory scheme involves emissions trading. In this scheme, every current polluting facility is given an emissions license that becomes part of capital equipment. Operators can then install pollution control equipment, and sell parts of their emissions licenses. The intent here is to give operators economic incentives to install pollution controls.
- ↑ Distilled water, which contains no carbon dioxide, has a neutral pH of 7. Liquids with a pH less than 7 are acidic, and those with a pH greater than 7 are alkaline (or basic).
- ↑ Biomass Burning and Global Change. NASA. Retrieved October 10, 2007.
- ↑ Ibid. Industrial acid rain is a substantial problem in China
- ↑ Hari Sud. 2006. CHINA: Industrialization Pollutes Its Countryside With Acid Rain. South Asia Analysis Group. Retrieved October 10, 2007.
- ↑ H. Berresheim, P. H. Wine and D. D. Davies. 1995. Sulfur in the Atmosphere. In Composition, Chemistry and Climate of the Atmosphere. (Van Nostran Rheingold).
- ↑ List of EMEP publications. EMEP. Retrieved October 10, 2007.
- ↑ Effects of Acid Rain - Surface Waters and own Aquatic Animals. U.S. EPA. Retrieved October 10, 2007.
- ↑ Acid Test: Edward Krug Flunks Political Science. The Reason Foundation. Retrieved October 10, 2007.
- ↑ H. Rodhe, et al., "The Global Distribution of Acidifying Wet Deposition." Environmental Science & Technology 36 (20): 4382-4388.
- ↑ Effects of Acid Rain - Forests. U.S. EPA. Retrieved October 10, 2007.
- ↑ Effects of acid rain - human health. U.S. EPA. Retrieved October 10, 2007.
- ↑ Effects of Acid Rain - Materials. U.S. EPA. Retrieved October 10, 2007.
- ↑ Effects of Acid Rain - Visibility. U.S. EPA. Retrieved October 10, 2007.
- McCormick, John. 1989. Acid Earth: The Global Threat of Acid Pollution. London, UK: Earthscan. ISBN 185383033X.
- Morgan, Sally and Jenny Vaughan. 2007. Acid Rain (Earth SOS). London, UK: Franklin Watts Ltd. ISBN 0749676728.
- Parks, Peggy J. 2005. Our Environment - Acid Rain (Our Environment). Farmington Hills, MI: KidHaven Press (Thomson Gale). ISBN 0737726288.
All links retrieved August 18, 2012.
- National Acid Precipitation Assessment Program Report - a 98-page report to Congress.
- Acid rain for schools.
- U.S. Environmental Protection Agency - New England Acid Rain Program (superficial).
- Acid Rain (more depth than ref. above).
- U.S. Geological Survey - What is acid rain?.
- Acid Rain: A Continuing National Tragedy - a report from The Adirondack Council on acid rain in the Adirondack region.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. To cite this article click here for a list of acceptable citing formats.The history of earlier contributions by wikipedians is accessible to researchers here:
Note: Some restrictions may apply to use of individual images which are separately licensed. | <urn:uuid:b5090c8a-41c7-4a24-89be-d39ed92aa867> | 3.859375 | 2,637 | Knowledge Article | Science & Tech. | 45.123593 |
Planets and their moons have collided with asteroids and comets frequently throughout our solar system's 4.5-billion-year history. Some of these impacts can be so powerful that they blast huge craters into the Earth's surface, generating enough heat to vaporise the crashing object.
The Earth is struck by extraterrestrial objects all the time. Some of these impacts are large enough to leave craters and there are about 150 known craters on Earth. The biggest crater, called Vredefort Dome, is about 2 billion years old and stretches over 300km in South Africa.
It is only since we have been able to look back at Earth from space that we have been able to see these huge scars. Many more impacts have made craters, but because the Earth's surface is always changing they get covered up. Scientists can sometimes detect these craters using sound waves.
In contrast, the Moon's surface does not change, so craters stay around on the surface for much longer. You can see hundreds of craters on the Moon if you look closely.
The Earth is bombarded continuously by bits of debris from space. Most of this is cosmic dust and is far too small to be noticed. Around 40,000 larger pieces of asteroid also hit Earth each year. Some are recovered as meteorites, but they are rarely big enough to make craters.
Only fragments of asteroids or comets tens of metres across hit with enough force to make craters. An impact capable of generating a 1km-sized crater probably occurs once every 6,000 years whereas a 200km crater only occurs every 100 million years.
The frequency of impacts is known as the impact flux. The impact flux has been fairly constant throughout Earth's history except for a violent period (known as the great bombardment) shortly after the solar system formed when there were twice as many impacts.
The explosive energy of an impact can be very destructive. The force of a large collision, pushing millions of tonnes of dust into the atmosphere may have even caused the extinction of the dinosaurs. However, impacts during the great bombardment may also be responsible for creating the conditions for life in the first place, bringing essential water and carbon to our planet. | <urn:uuid:adacbea1-a8f4-4b9b-8930-0e8e9b11c190> | 4.4375 | 448 | Knowledge Article | Science & Tech. | 53.943182 |
In my last post, I discussed how scientists have been able to prove that the increase in carbon dioxide in the atmosphere is due to human activity. It’s now time to discuss why carbon dioxide matters. Below is a chart from the IPCC of radiative forcings, which I explain below.
Before I can explain forcings, there is an important concept that we need to remember from high school physics: every system must balance its energy (conservation of energy, the first law of thermodynamics). This applies to the Earth, meaning that the Earth must radiate as much energy as it absorbs. Translated into what actually happens, the Earth must emit as much energy as it absorbs from the sun, meaning that if the amount it absorbs rises, the Earth's temperature must rise to compensate.
A forcing is anything that perturbs that balance, by either raising or lowering the amount of energy the Earth absorbs from the sun. Positive forcings raise that amount, mostly by trapping sunlight that has already entered the Earth’s atmosphere and has been reflected off the Earth’s surface, preventing it from escaping into space. Negative forcings raise the albedo of the Earth, meaning that they increase the amount of sunlight the Earth reflects.
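To make this balance concrete, the standard zero-dimensional "textbook" version (a simplification, not taken from the IPCC figure) is S0(1 − α)/4 = σTe^4, where S0 ≈ 1361 W/m² is the solar constant, α ≈ 0.3 is the planetary albedo, and σ is the Stefan-Boltzmann constant; this gives an effective emission temperature Te of about 255 K. A forcing ΔF is a perturbation to that balance, and the eventual surface warming is roughly ΔT ≈ λΔF, where λ is the climate sensitivity parameter.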
You’ll notice that human activities have caused positive (red) and negative (blue) forcings, meaning that humans have put pressure in both directions. However, you’ll notice the bar at the bottom showing “total net human activities,” which is clearly positive.
There has been a focus on carbon dioxide for two reasons. First, it is by far the largest forcing. Second, it is much longer lived in the atmosphere than other greenhouse gases. While methane (CH4) is a much stronger greenhouse gas, it only remains in the atmosphere for about ten years. In contrast, carbon dioxide lasts for millennia.
It is important to note the size of the error bar on “cloud albedo effect.” The two aerosol categories are the result of burning fossil fuels, just like greenhouse gases. Many of you will remember the acid rain problem from the late 1980s, which was effectively reduced by the first Bush Administration by implementing a cap and trade policy. These are the same compounds. The reason for the large error bar is that the US Government has so far been unwilling to fund an accurate measure of the aerosol content of the atmosphere.
Aerosols only last about two years in the atmosphere. This means that, were humans to stop burning fossil fuels, the aerosols would all immediately leave the atmosphere, while the carbon dioxide would remain, increasing the pace of warming. If all emissions were to stop today, it’s estimated that temperatures would continue to rise for 20 to 30 more years. They would then stay elevated for over 10,000 years.
The chart measures the net effect of all of these forcings between 1750 and 2005. You’ll note towards the bottom that the researchers included “solar irradiance,” which is the amount of warming that can be attributed to the sun. Many climate change deniers like to assert that the warming is being driven by solar activity, which seems ridiculous when you realize what a small forcing it is. There is, however, a small chance they are correct. Ironically, if they are correct, it means that climate change will be unimaginably more severe than current projections.
Once again we have to return to the error bar on “cloud albedo effect.” If the albedo effect is very large (so at the far left of the error bar), then it would be effectively cutting the warming due to carbon dioxide by a third from current estimates. This would mean that much more of the observed warming has been due to solar activity, as the deniers maintain. However, remember the behavior of aerosols: they last only two years in the atmosphere. Eventually, humanity will have to stop burning fossil fuels; the forcings from greenhouse gases will continue to increase as we burn more. When that happens, the aerosols will be gone, and the net forcing due to human activity will be substantially higher than the current projections.
The chances of this are rather slim; you’ll note that the median of that error bar is still very low. My point in explaining this was simply to show the absurd nature of the arguments of the deniers. There is simply no evidence that the sun has been driving more than a tiny fraction of the warming. And, if it were in fact responsible, then the climate crisis is even larger than the worst scare mongering has predicted. | <urn:uuid:f5c19fa8-4ad3-42ea-a33d-cb6e05eae3f5> | 3.8125 | 945 | Personal Blog | Science & Tech. | 48.136245 |
"An outstanding problem in the oceanic sciences is the rate of heat and freshwater transport from the equator to the poles, for it is this transport which powers the Earth's weather and climate system."
Keffer and Holloway, Nature (1988).
MOCHA is a collaborative project, partnered with the UK RAPID Program, to measure the meridional overturning circulation (MOC) and ocean heat transport in the North Atlantic Ocean (Figure 1). These transports are primarily associated with the Thermohaline Circulation. Simply put, warm waters move poleward at the surface of the ocean, where they cool and sink, to return equatorward in the deep ocean.
Figure 1. A schematic for the North Atlantic Overturning (click on the image for details, courtesy of RAPID/NERC)
Models suggest that the MOC in the Atlantic, and the accompanying oceanic heat flux, vary considerably on interannual time scales. In addition to abrupt climate change scenarios in which the MOC can virtually shut off (Manabe and Stouffer, 1993; Vellinga and Wood, 2002), the “normal” interdecadal variation may range from 20% to 30% of its long-term mean value, according to some models (e.g., Hakkinen, 1999). However, until recently no direct measurement system had been put in place that could provide regular estimates of the meridional overturning circulation to determine its natural variability or to assess these model predictions. Such a system is now deployed along 26.5°N in the Atlantic as part of the joint U.K./U.S. RAPID-MOCHA program, which has been continuously observing the MOC since March 2004.
To visit the U.K. RAPID MOC website, click here. | <urn:uuid:34a9b561-4add-465a-99cf-188f5047f107> | 3.15625 | 403 | Knowledge Article | Science & Tech. | 48.129646 |
Predators & Plants
April 26, 2012
Eliminating bears, wolves, and other top predators has far-reaching consequences.
BOB HIRSHON (host):
How predators protect plants. I’m Bob Hirshon and this is Science Update.
(SFX: Wolf howl)
The howl of the wolf isn’t heard as much as it used to be, and as a result, prey like moose, deer, and elk are thriving. But according to Oregon State University forestry professor William Ripple, this isn’t a good thing. He analyzed 42 ecological studies over the past 50 years. And he found that declines in top predators, especially wolves, lead to a significant loss of plants and trees – because there are more large herbivores like deer to eat them. And he says we probably can’t do the wolves’ work by hunting the herbivores ourselves.
WILLIAM RIPPLE (Oregon State University):
For example, the wolves are on the ground 24/7, and behaviorally are very different than human hunters.
But reintroducing wolves to an area, as was done in Yellowstone National Park, has been shown to restore many trees and plants to healthier numbers. I’m Bob Hirshon for AAAS, the Science Society. | <urn:uuid:47b3d10b-571f-49e2-8c15-5bb0c29965ed> | 3.234375 | 277 | Truncated | Science & Tech. | 52.687727 |
This is certainly the first question you will find asked anytime someone mentions the term “biomass.” Out of all of the various alternative energy sources being talked about out there, from solar and wind, to water and nuclear, the one that most people on average know the least about is biomass. So what is biomass? This question is likely to appear on a million quizzes, tests, and homework assignments in schools throughout the world, so it's best if there is a website somewhere that can act as supplementary study material. And voilà! Here it is!
Welcome to What is Biomass Dot Net!
Thanks for visiting the website! We are happy to have you and hope to be able to answer this question of “what is biomass?” without as much pain and suffering as you might find in a school textbook. Let’s get on with it then!
What is Biomass and How is it Renewable?
So let’s answer this question, then. To keep things simple at first, biomass is essentially what is called a renewable energy source, as opposed to a lot of the energy sources we use now, which are limited by how much of them exists on the planet. For instance, there is only so much coal or oil on this planet, and when it is gone, it is gone. So a renewable energy source like biomass can help us prepare for the inevitable day when these other energy sources run out.
The reason biomass is considered renewable is because it comes from living organisms, or from ones that have just recently died. This means that biomass is biological in nature.
Biological creatures, such as humans, cows, insects, dogs, and more, can reproduce infinitely as long as we have other renewable items such as food. So if biological beings are renewable, then the biomass that comes from them is also renewable!
So What Can We Do With This Renewable Biomass? What is Biomass Dot Net Can Tell You!
The coolest thing about biomass is that we can use it just as it is. This is called direct consumption of biomass and is a very efficient way of using it, but there are also other things we have created that need fuel in a different format. We have some pretty complicated machines! So we have to convert the biomass and use it indirectly, and these energy products of biomass are called biofuel! We can even drive some cars with biofuel at this point. It would be a great goal to get all cars driving on biofuel soon. That would reduce the usage of oil and it would lessen the pollution our cars push out into the air we breathe!
Tell Me More About Biomass Specifically!
Okay, geez! You don’t have to be so pushy! What is biomass more specifically, you ask? Biomass is, at this point, derived mainly from plant materials. We burn it inside gasifiers and steam generators to make heat. This heat creates pressure, which can be used to turn a turbine. In other words, the biomass combusts and produces heat and pressure, which turn a turbine, which is very much like the propellers on an airplane.
The turbine spins and turns a generator. What happens in the generator is that a magnet is turned within a coil of wires, which generates an electrical current. We can then use this electricity to power all of our awesome stuff like refrigerators, microwaves, computers, televisions, and more. This is far more preferable than the danger of nuclear power or running out of oil.
Where Can We Get Biomass From?
Some of the places we get the biological matter from to use as biomass is from dead trees and bushes in forests. We can extract stumps, pick up branches and sticks that have fallen, and even take full trees that have fallen over naturally. You know how when you do yardwork and your push mower bag fills up? That can be used for biomass. I hate to say it, but even the stuff that goes down the toilet is perfect for biomass and biofuel!
Some companies have even taken to growing plants specifically for the use of making biomass. So when you start seeing biomass popping up and you ask what is biomass, you’ll know that it’s the hemp, willow trees, poplar trees, bamboo stalks, algae, sugarcane, corn, and even eucalyptus being grown to produce it. If you can reel that list off to someone, you know how smart you’ll look? You are ahead of the game just by reading this site!
What Are Some Other Biomass Sources?
If you want to get real down and dirty with the details, because your test coming up in your biology class will probably ask this, then we’ll get down and dirty. Biomass is very simple structurally. It’s mainly made out of carbon, which is what every living thing on this planet is based off of, and then oxygen and hydrogen.
There are five main sources for biomass energy, and they are:
- Garbage (generated by humans)
- Wood (trees and plants)
- Waste (excrement from humans, trash from factories)
- Gases (largely from decomposing trash at landfills)
- Alcohol (liquid sugar used as fuels)
Do you see now why we should really be switching over to biomass? It makes so much sense. It’s cleaner and doesn’t create pollution and is renewable! So if someone asks you “what is biomass?” now you can answer!
What is Biomass Energy Conversion?
So now we know what biomass is and where it comes from, but how do we put it to use besides just burning it and turning a turbine and generator? We have some cool technologies for that. If we want to get biofuel or biogas, then we have to do some things to biomass first.
First, there is thermal conversion, which we’ve already mentioned, which uses heat to burn the biomass and change it into another form, such as biogas. There are more details than this that we won’t get into, but that is the basics of thermal conversion. Another type is chemical conversion, which basically means that you expose your biomass to certain chemicals. This can synthesize other chemicals altogether, or create a liquid or gas. The final type to mention is biochemical conversion, which means we let nature run its course and let the biomass decompose.
These methods all have a lower environmental impact than other energy sources we currently use like there’s no tomorrow. And if we continue, there may not be a tomorrow!
The Main Benefits of Biomass Are…
The biggest benefit of biomass is that we don’t have to be concerned with a lot of the problems of common power production. For instance, we can reduce greenhouse gases big time. The emissions are a real problem for our ozone layer, but if we can reduce them, our ozone layer may have a chance to repair itself. It protects us from radiation, so we need it! Biomass gives off carbon dioxide instead, which helps our plants produce more oxygen for us. This is how we get back into the cycle of nature. It’s how it’s meant to work.
It also helps us justify some of the forest industries like paper mills. If we can use the trash from those factories as biomass, then at least we are being that much less destructive and wasteful. We can’t stop using paper, but we can reduce its negative impact by using biomass.
Thanks for Visiting What Is Biomass Dot Net!
Thanks for stopping by. If you want more information, there is plenty in the other pages of the website, which you can find on the menu to the right side. It lists all of the other pages of amazing biomass info for you to read about. You will surely pass your test now! If you want, please contact us at the contact page and we can try to help you further. Have fun reading, and best of luck with everything! | <urn:uuid:af053203-0ba3-43aa-8548-26f55082d4fb> | 3.375 | 1,689 | Knowledge Article | Science & Tech. | 60.131578 |
A collection of videos produced by the DNALC, highlighting key topics and recent projects.
A DNA barcode is a DNA sequence that uniquely identifies each species of living thing. Dr. Mark Stoeckle from The Rockefeller University talks about the history of DNA barcoding, from the first paper published in 2003 to the international consortium of researchers that exists today.
Duration: 2 minutes, 34 seconds
POSTED November 3, 2011
From dinosaur DNA to monkey's uncles, we've stored previous episodes here. | <urn:uuid:b1ba53fc-9650-44b0-b273-922934b8fa69> | 3.09375 | 111 | Truncated | Science & Tech. | 43.271765 |
Theory has suggested that the West Antarctic Ice Sheet may be inherently unstable. Recent observations lend weight to this hypothesis. We reassess the potential contribution to eustatic and regional sea level from a rapid collapse of the ice sheet and find that previous assessments have substantially overestimated its likely primary contribution. We obtain a value for the global, eustatic sea-level rise contribution of about 3.3 meters, with important regional variations. The maximum increase is concentrated along the Pacific and Atlantic seaboard of the United States, where the value is about 25% greater than the global mean, even for the case of a partial collapse.
In the figure, Antarctic surface topography (gray shading) and bed topography (brown) define the region of interest. For clarity, the ice shelves in West Antarctica are not shown. Areas more than 200 meters below sea level in East Antarctica are indicated by blue shading. Notations are: AP, Antarctic Peninsula; EMIC, Ellsworth Mountain Ice Cap; ECR, Executive Committee Range; MBLIC, Marie Byrd Land Ice Cap; WM, Whitmoor Mountains; TR, Thiel Range; Ba, Bailey Glacier; SL, Slessor Ice Stream; Fo, Foundation Ice Stream; Re, Recovery Glacier; To, Totten Glacier; Au, Aurora Basin; Me, Mertz Glacier; Ni, Ninnis Glacier; WSB, Wilkes Subglacial Basin; FR, Flood Range; a.s.l., above sea level.
Citation: Bamber, J., Riva, R., Vermeersen, B., and LeBrocq, A., Reassessment of the Potential Sea-Level Rise from a Collapse of the West Antarctic Ice Sheet, Science 324 (5929), 901. [DOI: 10.1126/science.1169335] | <urn:uuid:8bd19454-c3a7-4436-8327-b2a5267e4993> | 2.9375 | 376 | Academic Writing | Science & Tech. | 46.6051 |
Sample Assignment: Writing a Web Server
The goal of this assignment is to build a functional web server. This assignment will guide you through the basics of distributed programming, client/server structures, and issues in building high-performance servers.
Before beginning this experiment, you should be prepared with the following.
- You have GENI credentials to obtain GENI resources. (If not, see SignMeUp).
- You are able to use Flack to request GENI resources. (If not, see the Flack tutorial).
- You are comfortable using ssh and executing basic commands using a UNIX shell. Tips about how to login to GENI hosts.
- You are comfortable with coding in C or C++
- Start Flack and create a new slice
- Load the rspec from this URL to Flack http://www.gpolab.bbn.com/experiment-support/WebServer/websrv.rspec.
- Submit for sliver creation (it is also fine to use omni, if you prefer). Your sliver should look something like this:
In this setup, there is one host acting as a web server. To test that the web server is up, visit the web page of the Server host using either of the following techniques:
- Press on the (i) button in Flack and then press the Visit button, or
- Open a web browser and go to the webpage http://<pcname>.emulab.net. In the above example this would be http://pc484.emulab.net.
If the installation is successful you should see a page that is similar to this:
You will use the following techniques during this experiment.
Start and stop the web server
In the original setup of your sliver there is a webserver already installed and running on the Server host. As you implement your own webserver you might need to stop or start the installed webserver.
- To Stop the webserver run:
sudo /sbin/service httpd stop
To verify that you have stopped the webserver, try to visit the above web page; you should get an error. (You may need to refresh your browser, if it has cached the page from a previous visit.)
- To Start the webserver run:
sudo /sbin/service httpd start
Command Line Web Transfers
Instead of using a web browser, you can also use command line tools for web transfers. To do this, follow these steps:
- Log in to Client1.
- You can download the web page using this command
[inki@Client1 ~]$ wget http://server
--2012-07-06 04:59:09-- http://server/
Resolving server... 10.10.1.1
Connecting to server|10.10.1.1|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 548 [text/html]
Saving to: “index.html”
100%[======================================>] 548 --.-K/s in 0s
2012-07-06 04:59:09 (120 MB/s) - “index.html” saved [548/548]
Note: In the above command we used http://server instead of http://pc484.emulab.net so that we can contact the web server over the private connection we have created, instead of the server's public interface. The private connections are the ones that are represented with lines between hosts in Flack. When you do load testing on your web server, you should run tests from the two client machines in your test configuration, using the http://server address, so that you are testing the performance of your server and not your Internet connection to the lab.
- The above command only downloads the index.html file from the webserver. As we are going to see later, a web page may include other web pages or objects such as images, videos, etc. In order to force wget to download all dependencies of a page, use the following options:
[inki@Client1 ~]$ wget -m -p http://server
This will produce a directory, server, with the following data structure. Run:
[inki@Client1 ~]$ ls server/
home.html index.html links.html media top.html
Build your own Server
At a high level, a web server listens for connections on a socket (bound to a specific port on a host machine). Clients connect to this socket and use a simple text-based protocol to retrieve files from the server. For example, you might try the following command on Client1:
% telnet server 80
GET /index.html HTTP/1.0
(Type two carriage returns after the "GET" command.) This will return to you (on the command line) the HTML representing the "front page" of the web server that is running on the Server host.
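For reference, a well-formed reply from the server has the same plain-text shape: a status line, some headers, a blank line, and then the file contents (the exact headers and values shown here are only an example and will vary):

    HTTP/1.0 200 OK
    Content-Type: text/html
    Content-Length: 548

    <html> ...contents of index.html... </html>

Error replies use the same structure with a different status line, such as HTTP/1.0 404 Not Found or HTTP/1.0 403 Forbidden.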
One of the key things to keep in mind in building your web server is that the server is translating relative filenames (such as index.html ) to absolute filenames in a local filesystem. For example, you might decide to keep all the files for your server in ~10abc/cs339/server/files/, which we call the document root. When your server gets a request for index.html (which is the default web page if no file is specified), it will prepend the document root to the specified file and determine if the file exists, and if the proper permissions are set on the file (typically the file has to be world readable). If the file does not exist, a file not found error is returned. If a file is present but the proper permissions are not set, a permission denied error is returned. Otherwise, an HTTP OK message is returned along with the contents of a file.
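To make that translation step concrete, here is a minimal sketch (not part of the original assignment text; the function name, error-code convention, and buffer handling are just placeholders) of mapping a requested name onto the document root and checking it with stat():

#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/* Map a requested name such as "/index.html" onto the document root and
 * classify the outcome: 200 (OK), 403 (permission denied), or 404 (not found).
 * A real server should also reject paths containing ".." before using them. */
static int resolve_request(const char *doc_root, const char *url_path,
                           char *full_path, size_t len)
{
    struct stat st;

    if (strcmp(url_path, "/") == 0)            /* "GET /" is treated as "GET /index.html" */
        url_path = "/index.html";
    snprintf(full_path, len, "%s%s", doc_root, url_path);

    if (stat(full_path, &st) != 0)
        return 404;                            /* file not found */
    if (!S_ISREG(st.st_mode) || !(st.st_mode & S_IROTH))
        return 403;                            /* permission denied (not world readable) */
    return 200;
}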
In our setup we are using the Apache web server. The default document root for Apache on a host running Fedora 10 is under /var/www/html.
- Login to the Server host
[inki@server ~]$ ls /var/www/html/This should give you a similar structure to the directory structure you got when you downloaded the whole site with wget on the previous steps.
You should also note that since index.html is the default file, web servers typically translate "GET /" to "GET /index.html". That way index.html is assumed to be the filename if no explicit filename is present. This is also why the two URLs http://server (or http://pc484.emulab.net) and http://server/index.html (or http://pc484.emulab.net/index.html) return equivalent results.
When you type a URL into a web browser, the server retrieves the contents of the requested file. If the file is of type text/html and HTTP/1.0 is being used, the browser will parse the html for embedded links (such as images) and then make separate connections to the web server to retrieve the embedded files. If a web page contains 4 images, a total of five separate connections will be made to the web server to retrieve the html and the four image files.
Using HTTP/1.0, a separate connection is used for each requested file. This implies that the TCP connections being used never get out of the slow start phase. HTTP/1.1 attempts to address this limitation. When using HTTP/1.1, the server keeps connections to clients open, allowing for "persistent" connections and pipelining of client requests. That is, after the results of a single request are returned (e.g., index.html), the server should by default leave the connection open for some period of time, allowing the client to reuse that connection to make subsequent requests. One key issue here is determining how long to keep the connection open. This timeout needs to be configured in the server and ideally should be dynamic based on the number of other active connections the server is currently supporting. Thus if the server is idle, it can afford to leave the connection open for a relatively long period of time. If the server is busy servicing several clients at once, it may not be able to afford to have an idle connection sitting around (consuming kernel/thread resources) for very long. You should develop a simple heuristic to determine this timeout in your server.
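The assignment leaves that heuristic up to you; purely as an illustration of the idea (the constants below are made-up tunables, not recommended values), the timeout could simply shrink as the number of active connections grows:

#define MAX_TIMEOUT 30   /* seconds to keep an idle connection open when the server is idle */
#define MIN_TIMEOUT 2    /* floor on the timeout when the server is busy */

static int choose_timeout(int active_connections)
{
    int t = MAX_TIMEOUT / (active_connections + 1);
    return (t < MIN_TIMEOUT) ? MIN_TIMEOUT : t;
}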
At a high level, your web server will be structured something like the following:
Forever loop: Listen for connections
- Accept new connection from incoming client
- Parse HTTP request
- Ensure well-formed request (return error otherwise)
- Determine if target file exists and if permissions are set properly (return error otherwise)
- Transmit contents of file to the client (by performing reads on the file and writes on the socket)
- Close the connection (if HTTP/1.0)
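As a rough sketch only (error handling omitted; the port and document root would come from the -port and -document_root options described under "What to hand in"), the iterative skeleton of that loop in C looks something like:

#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

void serve(int port)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 128);                    /* allow a backlog of pending connections */

    for (;;) {
        int client_fd = accept(listen_fd, NULL, NULL);
        /* parse the request, check the file, send the response or an error, */
        /* then close the connection (if HTTP/1.0)                           */
        close(client_fd);
    }
}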
You will have three main choices in how you structure your web server in the context of the above simple structure:
- A multi-threaded approach will spawn a new thread for each incoming connection. That is, once the server accepts a connection, it will spawn a thread to parse the request, transmit the file, etc.
- A multi-process approach maintains a worker pool of active processes to hand requests off to from the main server. This approach is appealing largely because of its portability (relative to assuming the presence of a given threads package across multiple hardware/software platforms). It does face increased context-switch overhead relative to a multi-threaded approach.
- An event-driven architecture will keep a list of active connections and loop over them, performing a little bit of work on behalf of each connection. For example, there might be a loop that first checks to see if any new connections are pending to the server (performing appropriate bookkeeping if so), and then it will loop over all existing client connections and send a "block" of file data to each (e.g., 4096 bytes, or 8192 bytes, matching the granularity of the disk block size). This event-driven architecture has the primary advantage of avoiding any synchronization issues associated with a multi-threaded model (though synchronization effects should be limited in your simple web server) and avoids the performance overhead of context switching among a number of threads.
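A correspondingly rough sketch of the event-driven option using select() follows; per-connection state (such as how much of each file has been sent) and write-readiness handling are omitted:

#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>

void event_loop(int listen_fd)
{
    int clients[FD_SETSIZE];
    int i, nclients = 0;

    for (;;) {
        fd_set rfds;
        int maxfd = listen_fd;

        FD_ZERO(&rfds);
        FD_SET(listen_fd, &rfds);
        for (i = 0; i < nclients; i++) {
            FD_SET(clients[i], &rfds);
            if (clients[i] > maxfd)
                maxfd = clients[i];
        }
        select(maxfd + 1, &rfds, NULL, NULL, NULL);

        if (FD_ISSET(listen_fd, &rfds) && nclients < FD_SETSIZE)
            clients[nclients++] = accept(listen_fd, NULL, NULL);

        for (i = 0; i < nclients; i++) {
            if (FD_ISSET(clients[i], &rfds)) {
                /* do a little work for this connection: read more of the request, */
                /* or send the next block of the file, dropping the entry when done */
            }
        }
    }
}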
You may choose from C or C++ to build your web server but you must do it in Linux (although the code should run on any Unix system). In C/C++, you will want to become familiar with the interactions of the following system calls to build your system: socket(), select(), listen(), accept(), connect() . We outline a number of resources below with additional information on these system calls. A good book is also available on this topic (there is a reference copy of this in the lab).
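For the request-parsing step mentioned above, one minimal way to pull apart the request line (shown only as a starting point; a real server must be far more careful with malformed input) is:

#include <stdio.h>
#include <string.h>

/* Returns 0 for a well-formed line such as "GET /index.html HTTP/1.0", -1 otherwise.
 * The caller is assumed to supply buffers of at least 16, 256, and 16 bytes. */
static int parse_request_line(const char *line, char *method, char *path, char *version)
{
    if (sscanf(line, "%15s %255s %15s", method, path, version) != 3)
        return -1;
    if (strcmp(method, "GET") != 0)
        return -1;                             /* this simple server only handles GET */
    return 0;
}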
What to hand in
- Write a paper that describes your chosen architecture and implementation details. Describe any problems that you encountered. In addition to describing the structure of your server, include a discussion that addresses the following questions:
- Web servers often use ".htaccess" files to restrict access to clients based on their IP address. Although it wasn't required, how would you go about supporting .htaccess in your server?
- Performance differences between HTTP/1.0 and HTTP/1.1. Can you think of a scenario in which HTTP/1.0 may perform better than HTTP/1.1? Can you think of a scenario when HTTP/1.1 outperforms HTTP/1.0? Think about bandwidth, latency, and file size. Consider some of the pros and cons of using a connection per session versus using a connection per object. The difference between the two comes down to the following:
- Only a single connection is established for all retrieved objects, meaning that slow start is only incurred once (assuming that the pipeline is kept full) and that the overhead of establishing and tearing down a TCP connection is also only incurred once.
- However, all objects must be retrieved in serial in HTTP/1.1 meaning that some of the benefits of parallelism are lost.
- Submit the code of your webserver. Make the server document directory (the directory which the webserver uses to serve files) a command line option. The command line option must be specified as -document_root. Make the port that the server listens on a command line option. The option must be specified as -port . Thus, I should be able to run your server as
$ ./server -document_root "/tmp/assignment1_files" -port 8888 (Note that you should use ports between 8000 and 9999 for testing purposes.) | <urn:uuid:2b8a58f9-52df-432a-8d54-89fde105f2f4> | 2.859375 | 2,618 | Tutorial | Software Dev. | 63.697082 |
May 4, 2011: Einstein was right again. There is a space-time vortex around Earth, and its shape precisely matches the predictions of Einstein's theory of gravity.
Researchers confirmed these points at a press conference today at NASA headquarters where they announced the long-awaited results of Gravity Probe B (GP-B).
"The space-time around Earth appears to be distorted just as general relativity predicts," says Stanford University physicist Francis Everitt, principal investigator of the Gravity Probe B mission.
"This is an epic result," adds Clifford Will of Washington University in St. Louis. An expert in Einstein's theories, Will chairs an independent panel of the National Research Council set up by NASA in 1998 to monitor and review the results of Gravity Probe B. "One day," he predicts, "this will be written up in textbooks as one of the classic experiments in the history of physics."
Read the full article here | <urn:uuid:ca2f8ca5-bad7-4cc8-b81a-483c4cac5318> | 2.796875 | 185 | Truncated | Science & Tech. | 45.563782 |
The arrival of the third decade of X-ray astronomy midway through the ROSAT mission coincided with advances in CCD detector technology that have allowed a vast increase in the power of X-ray spectroscopy. These advances, coupled with nested-mirror systems, mark a clear maturing in the field and a move away from large samples of objects with limited information to limited samples with very detailed information. This is an inevitable progression that emerging disciplines experience, radio astronomy being a prime example. With the advance of aperture synthesis, radio astronomers in ~ 35 AJ (Anno Jansky) could obtain insights into the nature of individual sources and the focus moved away from surveys. In the past decade, radio astronomy has turned back to surveys (NVSS, FIRST, WENSS, 4MASS), and this will happen in X-ray astronomy (but hopefully in less than 25 years time!).
This trend for more detailed study has had a huge impact on cluster research, and the need for spatially resolved spectroscopy of clusters was apparent from the first X-ray detections of clusters. The nature of cluster surveys has also changed with a greater emphasis on understanding complete samples in many different wavelength regimes (e.g., Crawford et al. 1999; Giovannini, Tordi, & Ferretti 1999; Pimbblet et al. 2002). A sample of 200 clusters is a great resource but of little use without some information about the X-ray temperature, iron abundance, X-ray surface brightness profile, optical photometry and spectroscopy, or radio imaging. The availability of new optical, near-infrared, and radio surveys (e.g., SDSS, UKIDSS, NVSS, FIRST) will make the multiwavelength aspects of these studies much easier, but the need for further X-ray observations is hard to avoid.
5.1. ASCA Observations
The first step in this progression was the Japanese-US satellite ASCA. The nested, foil-replicated mirrors of ASCA resulted in a relatively asymmetric, broad point-spread function (2' FWHM), but the excellent performance of the SIS CCD detectors provided some very high-quality spectra for clusters (Mushotzky & Scharf 1997; Markevitch 1998; Fukazawa et al. 2000; Ikebe et al. 2002).
Over the course of the seven-year pointed phase, ASCA provided accurate temperatures and iron abundances for most of the 350 clusters observed. While few complete samples were observed, the ASCA data are an excellent complement to archival ROSAT observations. The notable exceptions to this are the flux-limited sample of 61 ROSAT-selected clusters (Ikebe et al. 2002) and the complete sample of 0.3 < z < 0.4 EMSS clusters (Henry 1997) from which limits of the evolution of the cluster temperature function can be derived.
5.2. The Unfulfilled Potential of ABRIXAS
One of the most disappointing events in X-ray astronomy was the unfortunate failure of the German satellite ABRIXAS in June 1998. Its simple design and the track record of the team behind ROSAT meant the planned 3-year, all-sky survey ABRIXAS would have had a huge impact on X-ray astronomy. The survey depth envisioned of 1.5 × 10^-13 erg s^-1 cm^-2 (0.5-2.0 keV) and 9 × 10^-13 (2-12 keV) would have detected in excess of 20,000 clusters (i.e., more than the number required to keep pace with exponential growth).
5.3. Chandra and XMM-Newton
The launch of Chandra and XMM-Newton in 1999 has seen X-ray astronomy reach full maturity. The sub-arcsecond imaging of Chandra and unprecedented throughput of XMM-Newton have had a profound impact on our understanding of clusters (e.g., McNamara et al. 2000; Peterson et al. 2001; Allen, Schmidt, & Fabian 2002). The potential for surveys with both satellites is largely through serendipitous detections, but several important pointed surveys are being undertaken.
The only large Chandra serendipitous survey is CHamP (Wilkes et al. 2001), which will cover 14° in 5 years and identify 8,000 X-ray sources of all types, of which 150-250 will be clusters (which will all be spatially resolved). The relatively small number of clusters makes this sample unlikely to set any strong cosmological constraints, but it will act as an excellent control sample for past and future samples to test how spatial resolution affects detection statistics.
XMM-Newton has a program similar to CHamP, the XID program that has three tiers: faint (10^-15 erg s^-1 cm^-2, 0.5°), medium (10^-14 erg s^-1 cm^-2, 3°), and bright (10^-13 erg s^-1 cm^-2, 100°). Again, like CHamP, the number of clusters detected in the XID program will be small (< 50), so from a purely cluster viewpoint it is not particularly relevant. There are currently two dedicated serendipitous cluster surveys. One expands on the XID programme (Schwope et al. 2003) and the other (the X-ray Cluster Survey, XCS, Romer et al. 2001) aims to extract all potential cluster candidates from the XMM-Newton archive and compile a sample of > 5,000 clusters from up to 1,000° over the full lifetime of the satellite. The contrast of XCS to CHamP and XID illustrates the huge increase in efficiency when one class of objects is chosen over the study of "complete" X-ray samples or contiguous area X-ray surveys, such as the XMM-LSS (Pierre et al. 2003), where the number of detected clusters is relatively small.
The principal pointed cluster surveys with Chandra and/or XMM-Newton target a sample of MACS clusters (Ebeling et al. 2001) with Chandra using GTO and GO time (PIs Van Speybroeck and Ebeling), a sample of REFLEX clusters with XMM-Newton in GO time (PI Böhringer) and a sample of SHARC clusters with XMM-Newton in GTO time (PI Lumb). Each of these projects is designed to determine the cluster temperature function, but will clearly have many other potential uses. These projects are all based on sub-samples of ROSAT-selected clusters to minimize the number of observations required. The reluctance of time allocation committees to devote time to complete samples in preference to the "exotica" (e.g., most distant, strongly lensing, etc., which predominate in successful proposals) is a hindrance to this "targeted" survey approach. | <urn:uuid:0de7d0f6-517c-437f-a709-0013daba7bf2> | 3 | 1,443 | Academic Writing | Science & Tech. | 56.388768 |
Copyright © University of Cambridge. All rights reserved.
A bicycle passes along a path and leaves some tracks.
Is it possible to examine the curve of the individual tracks (both tyres have identical tread) and from them say which track was made by the front wheel and which by the back wheel?
Is it possible to say in which direction along the path the bicycle was travelling? The image above is just a picture to suggest the context, but if you'd like some accurate curves (tracks) to play with, see the Hint. | <urn:uuid:6707279a-b795-4b10-b856-0bef1f771f7a> | 2.75 | 107 | Q&A Forum | Science & Tech. | 54.822043 |
Earth at Perihelion
Earth at Perihelion: On January 4, 2001, our planet made its annual closest approach to the Sun.
January 4, 2001 -- This morning at 5 o'clock Eastern Standard time (0900 UT) Earth made its annual closest approach to the Sun -- an event astronomers call perihelion. Northerners shouldn't expect any relief from the cold, however. Although sunlight falling on Earth will be slightly more intense today than it is in July, winter will continue unabated.
"Seasonal weather patterns are shaped primarily by the 23.5-degree tilt of our planet's spin axis, not by Earth's elliptical orbit," explains George Lebo, a professor of astronomy at the University of Florida. "During northern winter the north pole is tilted away from the Sun. Days are short and that makes it cold. The fact that we're a little closer to the Sun in January doesn't make much difference. It's still chilly -- even here in Florida!"
Right: Duane Hilton created this view of the perihelion Sun shining down on a snowy scene in central California. Don't stare at the Sun at perihelion -- it can blind you just as it might at any other time of the year!
Seasons are reversed in the southern hemisphere. When the north pole is tilted away from the Sun, as it is now, the south pole is tilted toward it. As a result, summer is in full swing south of the equator even as northerners are bracing for a long winter.
Editor's Note: Do you have trouble remembering the difference between perihelion and aphelion? An old astronomer's trick is to recall that the words "away" and "aphelion" both begin with the letter "A".
Earth's distance from the Sun doesn't change much throughout the year, but there are measurable differences in solar heating that result from our planet's slightly elliptical orbit.
"Averaged over the globe, sunlight falling on Earth in
January [at perihelion] is about 7% more intense than it is in
July [at aphelion]," says Roy Spencer of the Global Hydrology
and Climate Center in Huntsville, AL. "The fact that the
northern hemisphere of Earth has more land, while the southern
hemisphere has more water, tends to moderate the impact of differences
in sunlight between perihelion and aphelion."
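A quick check of that 7% figure (not part of the original article; it assumes Earth's orbital eccentricity e ≈ 0.017 and the inverse-square law for sunlight):

$$\frac{S_{\mathrm{perihelion}}}{S_{\mathrm{aphelion}}} = \left(\frac{r_{\mathrm{aphelion}}}{r_{\mathrm{perihelion}}}\right)^{2} = \left(\frac{1+e}{1-e}\right)^{2} \approx \left(\frac{1.017}{0.983}\right)^{2} \approx 1.07,$$

i.e., sunlight at perihelion is roughly 7% more intense than at aphelion.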
Sunlight raises the temperature of continents more than it does oceans. (In other words, land has a lower heat capacity than water does.) In July (aphelion) the land-crowded northern half of our planet is tilted toward the Sun. Aphelion sunlight is a little weaker than sunlight at other times of the year, but it nevertheless does a good job warming the continents. In fact, say climate scientists, northern summer in July when the Sun is more distant than usual is a bit warmer than its southern counterpart in January.
Most planets follow orbits that are more elliptical than Earth's. Pluto's orbit, the most eccentric of all the planets, is so lopsided that Pluto is sometimes closer to the Sun than Neptune is. Pluto's latest 20-year stint as the eighth planet --rather than the ninth-- ended in February 1999 when the diminutive world crossed Neptune's orbit on its way back to the outer solar system. NASA scientists hope to send a probe to the retreating planet before Pluto's thin atmosphere freezes and falls to the ground.
Right: The orbits of Mercury (red), Earth (blue) and Mars (black). The solid lines indicate each planet's elliptical path around the Sun. The dotted lines show circular paths with the same mean separation from the center. Earth is almost exactly the same distance from the Sun at aphelion and perihelion, but the orbits of Mars and Mercury depart significantly from a circle. For more information, please visit Bridgewater College's Interactive Planetary Orbits web site.
After Pluto, Mercury and Mars have the most elliptical planetary orbits. The eccentricity of Mars's orbit has a big impact on the Red Planet's seasons. Northern summer on Mars lasts 24 days longer than northern winter because the planet is close to aphelion during the summer. Planets move more slowly at aphelion than they do at perihelion (see Kepler's 2nd Law of planetary motion) and, so, seasons occurring near aphelion last longer. Northern summer on Earth is ~5 days longer than northern winter for the same reason. It's a difference that goes largely unnoticed on our planet, but it's unmistakable on Mars.
During the long northern Martian summer, so much carbon dioxide frost at the planet's north pole sublimes into gaseous form that the northern summertime air pressure increases by ~30%. The Martian atmosphere literally waxes and wanes with the seasons -- all because of the planet's elliptical orbit.
Back on Earth, aphelion and perihelion are just two ordinary days on the calendar. There's no danger that our atmosphere will freeze and fall to the ground at aphelion, or that perihelion will herald a smothering blast of carbon dioxide. Sometimes there's just no substitute for a circular orbit!
Perihelion Distance | closest point to the Sun
Aphelion Distance | farthest point from the Sun
Notes: 1 AU, the average distance from the Earth to the Sun, equals 93 million miles or 150 million kilometers. The eccentricity of a planet's orbit measures how much it departs from a perfect circle. Orbits with zero eccentricity (e = 0) are circular; orbits with eccentricities close to 1 (e ~ 1) are long and skinny. Planetary orbits tend to be almost circular while comets and many asteroids follow more eccentric paths.
Daily Earth Temperatures from Satellites - View global atmospheric temperature trends at different layers of the atmosphere, courtesy of the Global Hydrology and Climate Center.
Production Editor: Dr. Tony Phillips
Curator: Bryan Walls
Media Relations: Steve Roy
Responsible NASA official: Ron Koczor | <urn:uuid:78b61413-86fa-4f84-869b-bedbb5f15307> | 3.90625 | 1,382 | Knowledge Article | Science & Tech. | 47.371617 |
Guest post by Bob Tisdale
There are numerous blog posts and discussions about how the GISS global temperature anomaly product GISTEMP differs from the Hadley Centre and NCDC datasets.
The reasons repeatedly given for this are that GISS uses 1200 km radius smoothing to fill in the areas of the globe with sparse surface temperature readings, and that the area where this has the greatest impact is the Arctic. Typically, a map or comparison of global temperature anomaly maps is included, similar to Figure 1. The top two maps were cropped from Figure 3 in the Real Climate post “2009 temperatures by Jim Hansen”. I added the third.
The bottom map was created at the GISS Global Maps webpage. It’s a map of the GISTEMP Global Temperature Anomaly product with 250km radius smoothing for the calendar year 2005, the same year as the top two maps. I did not include a temperature scale because the bottom map was provided to allow a visual comparison of the spatial coverage of the HadCRUT product and the GISTEMP product with 250km radius smoothing. Examine the Arctic and the ocean surrounding Antarctica, the Southern Ocean. Notice a difference? In 2005, the HadCRUT data had better coverage of the Arctic and Southern Oceans than the GISTEMP dataset with 250km radius smoothing. What’s missing in the GISTEMP product? There’s no sea surface temperature data.
GISS DELETES POLAR SEA SURFACE TEMPERATURE DATA
The general regions where GISS deletes Sea Surface Temperature data are shown in Figure 2. Three areas are highlighted: two cover the Arctic Ocean, and a third surrounds Antarctica. The specific locations are clarified in the following. GISS then uses their 1200km radius smoothing to replace the sea surface data with land data.
“Areas covered occasionally by sea ice are masked using a time-independent mask.”
This means that vast regions of Sea Surface Temperature (SST) anomaly data in the Arctic Ocean and Southern Ocean are deleted from the GISTEMP record. GISS does not delete all of the Arctic and Southern Ocean SST anomaly data, just the data from the areas where the annual sea ice melt occurs, and those are good portions of them.
I have looked for but have not found an explanation for this exclusion of Sea Surface Temperature data in the papers provided on the GISTEMP references page.
THE AREA OF THE ARCTIC OCEAN WHERE GISS DELETES SST DATA
Figure 3 shows four Arctic (North Pole Stereographic, 65N-90N) maps prepared using the map-making feature of the KNMI Climate Explorer. The maps illustrate temperature anomalies and sea ice cover for the month of September, 2005. The calendar year 2005 was chosen because it was used in the RealClimate post by Jim Hansen, and September is shown because the minimum Arctic sea ice coverage occurs then. The contour levels on the temperature maps were established to reveal the Sea Surface Temperature anomalies. Cell (a) shows the Sea Ice Cover using the Reynolds (OI.v2) Sea Ice Concentration data.
The data for the Sea Ice Cover map has been scaled so that zero sea ice is represented by grey. In the other cells, areas with no data are represented by white. Cell (b) illustrates the SST anomalies presented by the Reynolds (OI.v2) Sea Surface Temperature anomaly data. GISS has used the Reynolds (OI.v2) SST data since December 1981. It’s easy to see that SST anomaly data covers the vast majority of Arctic Ocean basin, wherever the drop in sea ice permits. Most of the data in these areas, however, are excluded by GISS in its GISTEMP product. This can be seen in Cell (c), which shows the GISTEMP surface temperature anomalies with 250km radius smoothing. The only SST anomaly data used by GISS exists north of the North Atlantic and north of Scandinavia.
The rest of the SST data has been deleted.
The colored cells that appear over oceans (for example, north of Siberia and west of northwestern Greenland) in Cell (c) are land surface data extending over the Arctic Ocean by the GISS 250km radius smoothing. And provided as a reference, Cell (d) presents the GISTEMP “combined” land plus sea surface temperature anomalies with 1200km radius smoothing, which is the standard global temperature anomaly product from GISS. Much of the Arctic Ocean in Cell (d) is colored red, indicating temperature anomalies greater than 1 deg C, while Cell (b) show considerably less area with elevated Sea Surface Temperature anomalies.
Basically, GISS excludes Arctic Ocean SST data from 65N to 90N and, for round numbers, from 40E to 40W. This is a good portion of the Arctic Ocean. Of course, the impact would be seasonal and would depend on the seasonal drop in sea ice extent or cover. The sea ice extent or cover has to decrease annually in order for sea surface temperature to be measured.
I’ll use the above-listed coordinates for the examples that follow, but keep in mind that they do not include areas of sea ice in the Northern Hemisphere south of 65N where sea surface temperature data are also deleted by GISS. These additional areas are highlighted in Figure 4. They include the Bering Sea, Hudson Bay, Baffin Bay and the Davis Strait between Greenland and Canada, and the Sea of Okhotsk to the southwest of the Kamchatka Peninsula.
Note: GISS uses Hadley Centre HADISST data as its source of Sea Surface Temperature (SST) data from January 1880 to November 1981 and NCDC Reynolds (OI.v2) data from December 1981 to present. To eliminate the need to switch between or merge SST datasets, this post only examines the period from 1982 to present. And to assure the graphics presented in Figures 3 and 6 are not biased by differences in base years of the GISTEMP data and the Reynolds (OI.v2) SST data, the latter of which has only been available since November 1981, I’ve used the period of 1982 to 2009 as base years for all anomaly data.
WHY WOULD DELETING SEA SURFACE TEMPERATURE DATA AND REPLACING IT WITH LAND SURFACE DATA BE IMPORTANT?
Land Surface Temperature variations are much greater than Sea Surface Temperature variations. Refer to Figure 5. Since January 1982, the trend in GISTEMP Arctic Land Surface Temperature Anomalies (65N-90N, 40E-40W) with 250km radius smoothing is approximately 8 times higher than the Sea Surface Temperature anomaly trend for the same area.
The Arctic Ocean SST anomaly linear trend is 0.082 deg C/ decade, while the linear trend for the land surface temperature anomalies is 0.68 deg C/decade. And as a reference, the “combined” GISTEMP Arctic temperature anomaly trend for that area is 9 times the SST anomaly trend.
By deleting the Sea Surface Temperature anomaly data, GISS relies on the dataset with the greater month-to-month variation and the much higher temperature anomaly trend for its depictions of Arctic temperature anomalies. This obviously biases the Arctic “combined” temperature anomalies in this area.
GISS DELETES SEA SURFACE TEMPERATURE DATA IN THE SOUTHERN HEMISPHERE, TOO
Figure 6 shows four maps of Antarctica and the Southern Ocean (South Pole Stereographic, 90S-60S). It is similar to Figure 8. Cell (b) illustrates the SST anomalies presented by the Reynolds (OI.v2) Sea Surface Temperature anomaly data. SST anomaly data covers most of the Southern Ocean, but GISS deletes a substantial portion of it, as shown in Cell (c). The only SST anomaly data exists toward some northern portions of the Southern Ocean. These are areas not “covered occasionally by sea ice”.
Figure 7 illustrates the following temperature anomalies for the latitude band from 75S-60S:
-Sea Surface Temperature, and
-Land Surface temperature of the GISTEMP product with 250km radius smoothing, and
-Combined Land and Sea Surface of the GISTEMP product with 1200km radius smoothing, the GISTEMP standard product.
The variability of the Antarctic land surface temperature anomaly data is much greater than the Southern Ocean sea surface temperature data. The linear trend of the sea surface temperature anomalies are negative while the land surface temperature data has a significant positive trend, so deleting the major portions of the Southern Ocean sea surface temperature data as shown in Cell (c) of Figure 6 and replacing it with land surface temperature data raises temperature anomalies for the region during periods of sea ice melt.
Note that the combined GISTEMP product has a lower trend than the land only data. Part of this decrease in trend results because the latitude band used in this comparison still includes portions of sea surface temperature data that is not excluded by GISS (because it doesn’t change to sea ice in those areas).
ZONAL MEAN GRAPHS REINFORCE THE REASON FOR THE GISS DIVERGENCE
When you create a map at the GISS Global Maps webpage, two graphics appear. The top one is the map, examples of which are illustrated in Figure 1, and the bottom is a Zonal Mean graph. The Zonal Mean graph presents the average temperature anomalies for latitudes, starting near the South Pole at 89S and ending near the North Pole at 89N. Figure 8 is a sample. It illustrates the changes (rises and falls) in Zonal Mean temperature anomalies from 1982 to 2009 of the GISTEMP combined land and sea surface temperature product with 1200km radius smoothing. The greatest change in the zonal mean temperature anomalies occurs at the North Pole, the Arctic. This is caused by a phenomenon called Polar Amplification.
To produce a graph similar to the GISS plot of the changes in Zonal Mean Temperature Anomalies, I determined the linear trends of the GISTEMP combined product (1200km radius smoothing) in 5 degree latitude increments from 90S-90N, for the years 1982 to 2009, then multiplied the decadal trends by 2.8 decades. I repeated the process for HADCRUT data. Refer to Figure 9.
The two datasets are similar between the latitudes of 50S-50N, but then diverge toward the poles. As noted numerous times in this post, GISS deletes sea surface temperature data at higher latitudes (poleward of approximately 50S and 50N), and replaces it with land surface data.
Figure 10 shows the differences between the changes in GISTEMP and HADCRUT Zonal Mean Temperature Anomalies. This better illustrates the divergence at latitudes where GISS deletes Sea Surface Temperature data and replaces it with land surface temperature anomaly data, that latter of which naturally has higher linear trends during this period.
Maps and data of sea ice cover and temperature anomalies are available through the KNMI Climate Explorer: | <urn:uuid:bc9d1746-0d2b-42a0-b5d9-c9d97fa74664> | 2.984375 | 2,306 | Personal Blog | Science & Tech. | 40.692408 |
Science Fair Project Encyclopedia
Standard conditions for temperature and pressure
Temperature and air pressure can vary from one place to another on the Earth, and can also vary in the same place with time. These values, however, are very important in many chemical and physical processes, in particular with regard to measurements. Therefore, it is necessary to define standard conditions for temperature and pressure.
In chemistry, the term standard temperature and pressure (abbreviated STP) denotes an exact reference temperature of 0°C (273.15 K) and pressure of 1 atm (defined as 101.325 kPa). These values approximate freezing temperature of water and atmospheric pressure at sea level.
Also in chemistry, the term Standard Ambient Temperature and Pressure (abbreviated SATP) denotes a reference temperature of 25°C (298.15 K) and pressure of 100 kPa. Although there are many variations of the definition, the most accepted one is the temperature and pressure where the equilibrium constant for the autoionization of water is 1.0 × 10^-14.
The Army Standard Metro atmosphere, now used only in ballistics, defines sea-level conditions as 750.000 mmHg of pressure (29.5275 inHg, 99.9918 kPa), 59°F (15°C), and 78% humidity. (Ref: U.S. Army Ballistic Research Laboratory, U.S. Army Aberdeen Proving Ground)
The International Civil Aviation Organisation (ICAO) defines the sea-level International Standard Atmosphere (ISA) as 101.325 kPa, 15°C and 0% humidity. These values provide a reference for calculating various aircraft performance figures, such as endurance, range, airspeed, and fuel consumption. When used to calculate performance at any pressure altitude other than sea level, the temperature is adjusted using the prescribed lapse rate which is -6.5 °C/km for the first 11 km.
(Ref: Manual of the ICAO Standard Atmosphere (extended to 80 kilometres (262 500 feet)), Doc 7488 / Third Edition, 1993)
The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details | <urn:uuid:4ab2406d-92ef-4cee-810a-c894e5039c6f> | 3.8125 | 460 | Knowledge Article | Science & Tech. | 42.991609 |
Physics Tip Sheet #69, June 28, 2007
American Physical Society
Highlights in this Issue: the first heat transistor, remote controlled nanomachines, high performance energy storage, and the physics of crash landing on sand.
The First Heat Transistor
J. Pekola et al.
Remotely controlled nanomachines
M. J. Comstock et al.
Physicists at the
Scientists have experimented with shape-shifting azobenzene in previous studies, but the molecules only responded properly when suspended in liquids or incorporated into plastics, neither of which makes a very good foundation for complex nanomachines. In order to get the molecular machines to function while mounted on a gold surface, the physicists first had to add legs built of carbon and hydrogen atoms to hold the molecules slightly away from the metal. Although the legs anchoring the molecules to the surface only provided a fraction of a nanometer of clearance (less than a billionth of a meter), it was enough to allow the molecules to move in response to the UV illumination.
The team confirmed their achievement with a series of scanning tunneling microscope images showing that they could switch the molecules' shapes from one configuration and back again. - JR
High-performance energy storage
Vivek Ranjan et al.
North Carolina State University physicists have recently deduced a way to improve high-energy-density capacitors so that they can store up to seven times as much energy per unit volume as the common capacitor. High performance capacitors would enable hybrid and electric cars with much greater acceleration, better and faster steering of rockets and spacecraft, better regeneration of electricity when using brakes in electric cars, and improved lasers, among many other electrical applications.
Air pressure matters when landing on sandy planets
G. Caballero et al.
A steel ball dropped into loose, fine sand makes an impressive splash, according to physicists of the Physics of Fluids group investigating the fluid-like properties of sand at the University of Twente in the Netherlands. Such considerations factor into designing a rover to land on and move about Martian dunes or other dusty surfaces.
The American Physical Society (www.aps.org) is a non-profit membership organization working to advance and diffuse the knowledge of physics through its outstanding research journals, scientific meetings, and education, outreach, advocacy and international activities. APS represents over 50,000 members, including physicists in academia, national laboratories and industry in the United States and throughout the world. Society offices are located in College Park, MD (Headquarters), Ridge, NY, and Washington, DC. | <urn:uuid:a4d72fc7-782f-4ec9-abaf-f01904bbd0db> | 3.09375 | 528 | Content Listing | Science & Tech. | 25.750173 |
Plants in Hawaii
This is an abridged version of the site developed by Dr. Clifford W. Smith, email@example.com. (Sponsored by the Botany Department and the National Park Service Cooperative Park Studies Unit of the University of Hawaii at Manoa.) The full version of this site can be found at Hawaiian Alien Plant Studies. This version includes only species for which illustrations are available, and as additional illustrations have become available, new species have been added to the list that may not appear on the "full" version.
Most recent additions: February 23, 2000
Pest Plants of Hawaiian Native Ecosystems - Alien plant species that are among the greatest threats to native Hawaiian biota.
Wedelia trilobata; wedelia; Asteraceae | <urn:uuid:c0e2da11-9cc7-4658-bde3-4ec394927ab6> | 2.984375 | 158 | Content Listing | Science & Tech. | 39.421394 |
The Atlantic Meridional Overturning Circulation (AMOC) is a major current in the Atlantic Ocean, characterized by a northward flow of warm, salty water in the upper layers of the Atlantic, and a southward flow of colder water in the deep Atlantic. The AMOC is an important component of the Earth’s climate system.
|Topographic map of the Nordic Seas and subpolar basins with schematic circulation of surface currents (solid curves) and deep currents (dashed curves) that form a portion of the Atlantic meridional overturning circulation. Colors of curves indicate approximate temperatures. Source: R. Curry, Woods Hole Oceanographic Institution/Science/USGCRP.|
This ocean current system transports a substantial amount of heat energy from the tropics and Southern Hemisphere toward the North Atlantic, where the heat is then transferred to the atmosphere. Changes in this ocean circulation could have a profound impact on many aspects of the global climate system.
There is growing evidence that fluctuations in Atlantic sea surface temperatures, hypothesized to be related to fluctuations in the AMOC, have played a prominent role in significant climate fluctuations around the globe on a variety of time scales.
Measurements across the North Atlantic suggest multidecadal swings in sea surface temperatures that may be at least in part due to fluctuations in the AMOC. The figure below describes this variation in North Atlantic sea surface temperatures for the period 1856 to 2009. The repetitive cycle obvious in this figure is known as the Atlantic Multidecadal Oscillation (AMO). Evidence from paleorecords suggests that there have also been large, decadal-scale changes in the AMOC, particularly during glacial times. These abrupt changes have had a profound impact on climate, both locally in the Atlantic and in remote locations around the globe.
Variation in the Atlantic Multidecadal Oscillation from 1856–2009. Data Source: NOAA, Image Source: Wikipedia.
At its northern boundary, the AMOC interacts with the circulation of the Arctic Ocean. The summer Arctic sea ice cover has undergone dramatic retreat since satellite records began in 1979, amounting to a loss of almost 30% of the September ice cover in 29 years. Climate model simulations suggest that rapid and sustained September Arctic ice loss is likely in future 21st century climate projections.
Monthly May ice extent for 1979 to 2011 shows a decline of 2.4% per decade. Source: National Snow and Ice Data Center.
Because the AMOC's heat transport makes a substantial contribution to the moderate climate of maritime and continental Europe, and any slowdown in the overturning circulation would have profound implications for climate change, there have been questions about the likelihood of a "collapse" or an abrupt change. In a 2008 study on Abrupt Climate Change by the U.S. Climate Change Science Program, the following conclusions were drawn:
It is very likely that the strength of the AMOC will decrease over the course of the 21st century in response to increasing greenhouse gases, with a best estimate decrease of 25–30%.
Even with the projected moderate AMOC weakening, it is still very likely that on multidecadal to century time scales a warming trend will occur over most of the European region downstream of the North Atlantic Current in response to increasing greenhouse gases, as well as over North America.
It is very unlikely that the AMOC will undergo a collapse or an abrupt transition to a weakened state during the 21st century.
It is also unlikely that the AMOC will collapse beyond the end of the 21st century because of global warming, although the possibility cannot be entirely excluded.
Although it is very unlikely that the AMOC will collapse in the 21st century, the potential consequences of this event could be severe. These might include a southward shift of the tropical rainfall belts, additional sea level rise around the North Atlantic, and disruptions to marine ecosystems.
The oceans play a crucial role in the climate system. Ocean currents move substantial amounts of heat, most prominently from lower latitudes, where heat is absorbed by the upper ocean, to higher latitudes, where heat is released to the atmosphere. This poleward transport of heat is a fundamental driver of the climate system and has crucial impacts on the distribution of climate as we know it today. Variations in the poleward transport of heat by the oceans have the potential to make significant changes in the climate system on a variety of space and time scales.
In addition to transporting heat, the oceans have the capacity to store vast amounts of heat. On the seasonal time scale this heat storage and release has an obvious climatic impact, delaying peak seasonal warmth over some continental regions by a month after the summer solstice. On longer time scales, the ocean absorbs and stores most of the extra heating that comes from increasing greenhouse gases (Levitus et al., 2001), thereby delaying the full warming of the atmosphere that will occur in response to increasing greenhouse gases.
One of the most prominent ocean circulation systems is the Atlantic Meridional Overturning Circulation (AMOC). As described in subsequent sections, and as illustrated below, this circulation system is characterized by northward flowing warm, saline water in the upper layers of the Atlantic (red curve), a cooling and freshening of the water at higher northern latitudes of the Atlantic in the Nordic and Labrador Seas, and southward flowing colder water at depth (light blue curve). This circulation transports heat from the South Atlantic and tropical North Atlantic to the subpolar and polar North Atlantic, where that heat is released to the atmosphere with substantial impacts on climate over large regions.
The Atlantic branch of this global MOC consists of two primary overturning cells:
- an “upper” cell in which warm upper ocean waters flow northward in the upper 1,000 meters (m) to supply the formation of North Atlantic Deep Water (NADW), which returns southward at depths of approximately 1,500-4,500 m; and,
- a “deep” cell in which Antarctic Bottom Waters (ABW) flow northward below depths of about 4,500 m and gradually rise into the lower part of the southward-flowing NADW.
Of these two cells, the upper cell is by far the stronger and is the most important to the meridional transport of heat in the Atlantic, owing to the large temperature difference (~15° C) between the northward-flowing upper ocean waters and the southward-flowing NADW.
Schematic of the ocean circulation (from Kuhlbrodt et al., 2007) associated with the global Meridional Overturning Circulation (MOC), with special focus on the Atlantic section of the flow (AMOC). The red curves in the Atlantic indicate the northward flow of water in the upper layers. The filled orange circles in the Nordic and Labrador Seas indicate regions where near-surface water cools and becomes denser, causing the water to sink to deeper layers of the Atlantic. This process is referred to as “water mass transformation,” or “deep water formation.” In this process heat is released to the atmosphere. The light blue curve denotes the southward flow of cold water at depth. At the southern end of the Atlantic, the AMOC connects with the Antarctic Circumpolar Current (ACC). Deep water formation sites in the high latitudes of the Southern Ocean are also indicated with filled orange circles. These contribute to the production of Antarctic Bottom Water (AABW), which flows northward near the bottom of the Atlantic (indicated by dark blue lines in the Atlantic). The circles with interior dots indicate regions where water upwells from deeper layers to the upper ocean (see Section 2 for more discussion on where upwelling occurs as part of the global MOC).
In assessing the “state of the AMOC,” we must be clear to define what this means and how it relates to other common terminology. The terms Atlantic Meridional Overturning Circulation (AMOC) and Thermohaline Circulation (THC) are often used interchangeably but have distinctly different meanings. The AMOC is defined as the total (basin-wide) circulation in the latitude depth plane, as typically quantified by a meridional transport streamfunction. Thus, at any given latitude, the maximum value of this streamfunction, and the depth at which this occurs, specifies the total amount of water moving meridionally above this depth (and below it, in the reverse direction). The AMOC, by itself, does not include any information on what drives the circulation.
In contrast, the term “THC” implies a specific driving mechanism related to creation and destruction of buoyancy. Rahmstorf (2002) defines this as “currents driven by fluxes of heat and fresh water across the sea surface and subsequent interior mixing of heat and salt.” The total AMOC at any specific location may include contributions from the THC, as well as contributions from wind-driven overturning cells.
It is difficult to cleanly separate overturning circulations into a “wind-driven” and “buoyancy-driven” contribution. Therefore, nearly all modern investigations of the overturning circulation have focused on the strictly quantifiable definition of the AMOC as given above. We will follow the same approach in this report, while recognizing that changes in the thermohaline forcing of the AMOC, and particularly those taking place in the high latitudes of the North Atlantic, are ultimately most relevant to the issue of abrupt climate change.
There is growing evidence that fluctuations in Atlantic sea surface temperatures (SSTs), hypothesized to be related to fluctuations in the AMOC, have played a prominent role in significant climate fluctuations around the globe on a variety of time scales. Evidence from the instrumental record (based on the last ~130 years) shows pronounced, multidecadal swings in SST averaged over the North Atlantic.
These multidecadal fluctuations may be at least partly a consequence of fluctuations in the AMOC. Recent modeling and observational analyses have shown that these multidecadal shifts in Atlantic temperature exert a substantial influence on the climate system ranging from modulating African and Indian monsoonal rainfall to influencing tropical Atlantic atmospheric circulation conditions relevant to hurricanes. Atlantic SSTs also influence summer climate conditions over North America and Western Europe.
Evidence from paleorecords (discussed more completely in subsequent sections) suggests that there have been large, decadal-scale changes in the AMOC, particularly during glacial times. These abrupt change events have had a profound impact on climate, both locally in the Atlantic and in remote locations around the globe. Research suggests that these abrupt events were related to massive discharges of freshwater into the North Atlantic from collapsing land-based ice sheets. Temperature changes of more than 10°C on time scales of a decade or two have been attributed to these abrupt change events.
- The Potential for Abrupt Change in the Atlantic Meridional Overturning Circulation, Lead Author: Thomas L. Delworth, NOAA; Contributing Authors: Peter U. Clark, Oregon State University; Marika Holland, National Center for Atmospheric Research; William E. Johns, University of Miami; Till Kuhlbrodt, University of Reading; Jean Lynch-Stieglitz, Georgia Institute of Technology; Carrie Morrill, University of Colorado/NOAA; Richard Seager, Columbia University; Andrew J. Weaver, University of Victoria; Rong Zhang, NOAA.
- Abrupt Climate Change A report by the U.S. Climate Change Science Program and the Subcommittee on Global Change Research, U.S. Geological Survey, Reston, VA. Lead Authors: Peter U. Clark, Oregon State University; Andrew J. Weaver, University of Victoria; Contributing Authors: Edward Brook, Oregon State University; Edward R. Cook, Columbia University; Thomas L. Delworth, NOAA; Konrad Steffen, University of Colorado.
- Levitus, S., J.I. Antonov, J. Wang, T.L. Delworth, K.W. Dixon, and A.J. Broccoli, 2001: Anthropogenic warming of Earth’s climate system. Science, 292(5515), 267-270.
- Kuhlbrodt, T., A. Griesel, M. Montoya, A. Levermann, M. Hofmann, and S. Rahmstorf, 2007: On the driving processes of the Atlantic meridional overturning circulation. Rev. Geophys., 45, RG2001, doi:10.1029/2004RG000166.
- Rahmstorf, S., 2002: Ocean circulation and climate during the past 120,000 years. Nature, 419, 207-214. | <urn:uuid:1545c247-9f0a-4ab1-936e-803dfb1cb901> | 3.875 | 2,626 | Knowledge Article | Science & Tech. | 34.944694 |
When we talk about uploading a file, we usually mean saving the file physically to a particular location on the server. We can also upload a file in the sense of saving it into a database. Let us first examine how we can upload a file physically to a particular location on the server. The code snippets have been written in C#.
In ASP.NET, we can perform the file upload using the HTML Input (File) control (as shown in the figure above: Figure 1). In order to start the process we need to follow these basic steps:
Step I: Add the control:
<input id="FileUpload" type="file" runat="server"/>
<form id="frmUpload" enctype="multipart/form-data" runat="server">
- We may check file type and allow user to upload some specific file only;
- We may allow user to upload upto a maximum size of file;
- We may check the length of file and disallow user to upload empty file etc.
How to check file type?
System.IO.Path.GetExtension(FileUpload.PostedFile.FileName); // returns the extension of the file and thus helps to identify the file type
if (FileUpload.PostedFile.ContentLength <= 5242880) // 5242880 bytes, i.e., restricted up to 5 MB
    if (FileUpload.PostedFile.ContentLength > 0)    // the file should not be empty (check the uploaded size in bytes)
        FileUpload.PostedFile.SaveAs(@"D:\FileStorage\UploadedFile\" + System.IO.Path.GetFileName(FileUpload.PostedFile.FileName)); // save into the same storage folder used later for download
In order to upload multiple files we can simply use multiple input file controls. But if we don't want to handle each input file control separately and would rather process them all in one place, here is the solution:
ArrayList arrayList = new ArrayList();
arrayList.Add(File1); arrayList.Add(File2); // the HtmlInputFile controls on the page (control IDs assumed for illustration)
foreach (System.Web.UI.HtmlControls.HtmlInputFile HIF in arrayList)
    HIF.PostedFile.SaveAs(@"D:\FileStorage\UploadedFile\" + System.IO.Path.GetFileName(HIF.PostedFile.FileName));
In web.config :
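The snippet that belonged here appears to be missing. The setting usually adjusted to allow larger uploads is httpRuntime's maxRequestLength, which is given in kilobytes; a minimal sketch matching the 5 MB limit used above would be:

<system.web>
    <httpRuntime maxRequestLength="5120" />
</system.web>

(The ASP.NET default is 4096 KB, i.e., about 4 MB, so larger uploads are rejected unless this value is raised.)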
Once we save the file onto the file storage server or into a database, we need to think about downloading the uploaded file. Here I am describing how we can download a file from the physical storage server (see Figure 2):
protected void btnDownload_Click(object sender, EventArgs e)
{
    string fileName = "MyFile.txt";
    System.IO.FileStream fs = null;
    fs = System.IO.File.Open("D:\\FileStorage\\UploadedFile\\MyFile.txt", System.IO.FileMode.Open);
    byte[] btFile = new byte[fs.Length];
    fs.Read(btFile, 0, Convert.ToInt32(fs.Length));
    fs.Close();
    fs = null;
    Response.AddHeader("Content-disposition", "attachment; filename=" + fileName);
    Response.ContentType = "application/octet-stream";
    Response.BinaryWrite(btFile); // write the file bytes to the response so the browser offers it as a download
    Response.End();
}
In my forthcoming article I will discuss how to save the physical file into a database.
So we have seen how to upload files and how to download an uploaded file. Uploading and downloading files is very common in real-life applications. | <urn:uuid:ee438a7d-eba5-472b-9c80-de27edc42389> | 3.03125 | 627 | Tutorial | Software Dev. | 48.784818 |
It's the news rain fans, snow fans, and skiers love to hear: La Nina conditions have officially formed, meaning perhaps a cool and wet/snowy winter is in the offing around here.
NOAA made the announcement Thursday in regards to their forecast update to the Atlantic Hurricane season, but for the Pacific Northwest where hurricanes might as well be Bigfoot, we pay attention for different reasons.
La Nina is the term for the phenomenon where ocean waters cool in the equatorial region of the Pacific Ocean. It's part of an oscillation with El Nino, which is when the waters warm. Both have great, but opposite, effects on worldwide climate.
For the Pacific Northwest, La Ninas tend to bring cooler and wetter than normal conditions for autumn and winter. It doesn't necessarily mean a big, snowy winter (although 2008-09 was a La Nina winter) just that the odds of bringing cold and wet together at the same time are higher. But La Nina winters are typically big mountain snowpack winters, so ski resorts should fare pretty well if climate standards hold.
On the other hand, this does present greater risks for the Green River Valley and other river flood-prone areas. Last winter's El Nino played true to form of being warm and dry (remember our warm January?) and thus we never really had any kind of flooding event. That could change this winter. So those who live in the flood plains, don't let your guard down.
Scientists still aren't really sure yet what causes this back-and-forth tug of war between La Nina and El Nino, officially known as the "El Nino-Southern Oscillation" or ENSO, only that it repeats every 3-7 years, but rarely in the same way.
An average ENSO will probably see an El Nino winter, then perhaps 1-2 years of "neutral conditions" before a drift into La Nina for a winter, then reverse back to El Nino over another few years.
However, this year's shift was quite rapid. In fact, we blew right through the neutral stage, radically shifting from El Nino to La Nina in just the course of a few months this summer.
You can take a peek at this ENSO chart that shows El Nino and La Nina conditions since 1950. If the 3-month running average temperature in the part of the Pacific where this happens is 0.5C degrees or warmer than normal, then it's considered El Nino conditions. 0.5C or colder is La Nina. Anything in between is neutral. This chart has not yet been updated for August, but I'm guessing that number came out today at -0.5C or so and thus the La Nina declaration.
The chart shows there have been some years with quick turn-arounds -- 1973 (very wet Nov-Jan, cool Nov and Jan), 1988 (quite wet / avg temps) and 1998 (*very* wet Nov-Feb) come to mind. Most quick turn-arounds seemed to usher in a long La Nina pattern so we'll see how this goes.
Why the East Coast and South care about La Nina:
La Nina conditions tend to make for active hurricane seasons, as the pattern limits wind shear in the tropics, which normally works against storm development. They are also concerned that the water temperatures in the Gulf of Mexico are at record warmth levels, so there is plenty of fuel for hurricanes to develop.
For the rest of the nation, it's take El Nino and reverse -- that means a likely dry winter across California, Texas and the south (except for hurricanes) and a wetter winter across the northern plains.
Our own hurricane?
While actual hurricanes are non-existent here due to our chilly ocean waters, the local weather community was abuzz on a strange item on the radar early this morning.
Take a peek at what was going on over Northern Vancouver Island:
Of course, it looked sort of like what an actual hurricane looks like on radar only ours just had some light rain. This was likely just related to low pressure still in the area. Cliff Mass' excellent blog featured this interesting "twist" today and has more on what likely caused it. | <urn:uuid:23b476bc-5054-4c07-83bd-e3d4f832e03d> | 2.734375 | 876 | Personal Blog | Science & Tech. | 58.75505 |
Sexes are separate. During copulation the male mounts the shell of the female and inserts the penis under the lip of her shell and into the pallial oviduct. Mating may last for several hours. Sperm are transferred to the copulatory bursa, and later pass to another storage sac, the seminal receptacle, where they can survive for many months.
Females spawn several thousand fertilized eggs in a short period, and retain these within the mantle cavity where they are attached in a thin layer to the folds of the reduced gills.
Larvae hatch after several days as early veligers and are released simultaneously when the female descends the tree to reach the water level. Normal planktotrophic development follows, lasting an estimated 8 to 10 weeks. The length of the larval shell (protoconch) at settlement is about 0.32 to 0.42 mm. The combination of ovoviviparity and planktotrophic development is unusual in molluscs, but is characteristic of all members of the subgenus Littorinopsis. It is believed to be an adaptation to permit rapid release of larvae, thus minimising the time spent at the water surface where the female is vulnerable to aquatic predators.
Females spawn probably once a month, in a lunar cycle, and at least in Queensland, Australia, the breeding season lasts throughout the year.
Growth is rapid, reaching a shell length of 20 mm in about 8 months, and is slightly faster in females, giving rise to a small dimorphism in size. Males mature at about 16 mm, and females at 20 mm. Maximum lifespan is about 2 years. These data are from a population in northern Queensland. | <urn:uuid:a82db71e-33d4-42cf-863e-9a7222e93dec> | 3.34375 | 354 | Knowledge Article | Science & Tech. | 47.362965 |
The Teaching of Quantum Mechanics
This World Wide Web page written by
Oberlin College Physics Department;
last updated 18 January 2005.
These World Wide Web pages present tips and techniques that I have
found useful in teaching junior-senior level quantum mechanics courses.
In the flood of details, it is often hard to remember that the
three characteristic traits of quantum mechanics are probability,
interference, and entanglement.
- Probability in quantum mechanics is neither more nor less
difficult than probability in any other area.
- Every teacher of quantum mechanics should be aware of
the following four resources, which give exquisite experimental
evidence for quantal interference:
- A. Tonomura, J. Endo, T. Matsuda, T. Kawasaki, and H. Ezawa,
"Demonstration of single-electron buildup of an interference pattern",
American Journal of Physics, 57 (1989) 117-120.
of the experimental results described above.
- R. Gahler and A. Zeilinger, "Wave-optical experiments with very cold
neutrons", American Journal of Physics, 59 (1991) 316-324.
- Olaf Nairz, Markus Arndt, and Anton Zeilinger,
"Quantum interference experiments with large molecules",
American Journal of Physics, 71 (2003) 319-325.
- Different teachers and different texts vary considerably in the
question of how and whether to treat entanglement. Textbooks often
give little help. If you do wish to treat entanglement,
I recommend these articles:
- N.D. Mermin "Is the moon there when nobody looks?
Reality and the quantum theory", Physics Today,
38(4) (April 1985) 38-47.
- P.G. Kwiat and L. Hardy, "The mystery of the quantum cakes",
American Journal of Physics, 68 (2000) 33-36.
It is not surprising that I also recommend perusing the article:
(If you have the free
Acrobat Reader software, then you may read the above article by
clicking on its title. It is posted here with permission from
the American Journal of Physics.
Copyright 1996, American Association of Physics Teachers. It may
be downloaded for personal use only. Any other use requires prior
permission from the author and the American Association of Physics
Teachers. This posted version has been modified slightly from the
version that appeared in the American Journal of Physics.) | <urn:uuid:f4471547-77d4-4150-97af-aff1f21ae2fd> | 3.203125 | 528 | Content Listing | Science & Tech. | 45.377967 |
Here's how the circuit works: Electrons, here known as surface plasmons, oscillate on tiny particles called nanoparticles. These plasmons act as 'super lenses' that gather the solar light hitting the circuit. Once the light's collected, the particles pose as electrodes to ferry away the electricity for a device to use.
Currently, though, researchers can only produce and harness small amounts of energy from the photovoltaic circuits, nowhere near enough to power consumer electronics. But scientists are sure power production will only increase in the future with creative methods like stacking circuits to absorb and focus more light energy.
Self-charging photovoltaic circuitry might be used in display screen pixels or painted on the outside of iPads and smartphones to scavenge sunlight and charge the devices, according to Dawn Bonnell, a researcher on the project. It also could potentially offer just the right power solution for small robotic devices or help computers operate on light alone.
For now, the circuits still represent a fairly major scientific breakthrough to the researchers, one science has grappled with for some time. When working with microscopic parts, invisible to the eye, it's hard for scientists to ensure each component is the right distance away from the others to form a proper circuit. Molecules must be connected to particles, and particles must be separated from one another by a molecule-long space.
Silicon circuits aren't likely to be replaced in the near future, Bonnell says, but this breakthrough could one day make phone chargers much less necessary, at least on sunny days. | <urn:uuid:19186cd3-0d30-49c9-b951-052ca8a165ab> | 4.3125 | 316 | Knowledge Article | Science & Tech. | 31.21486 |
Hadrosaurus is a hadrosaurid dinosaur genus.
In 1858, a skeleton of a dinosaur from this genus was the first full dinosaur skeleton found in North America, and in 1868 it became the first ever mounted dinosaur skeleton.
Hadrosaurus foulkii is the only species in this genus.
For more information about the topic Hadrosaurus, read the full article at Wikipedia.org.
| <urn:uuid:99285f40-1c3d-4031-9f23-2bd01d4097d4> | 3.078125 | 114 | Knowledge Article | Science & Tech. | 34.255 |
Interviewee: John Rogers, University of Illinois Urbana Champaign
All Eyes on Science
By Heather Mayer
A new digital camera not only resembles a human eye, it works like one too. Researchers mimicked the human eye’s curvature to make light detectors with many advantages over conventional flat chips. The sophisticated camera is a combination of optoelectronics and a biologically inspired design.
“This kind of technology that we’ve explored with the electronic eye, very naturally integrates with different body parts and organs,” says researcher John Rogers.
With a conventional camera design, the detector surface has always been flat, generating a flat image. But the eye-shaped camera allows for advanced imaging.
Up until now, nobody has been able to develop a curved surface because manufacturers only create photo-detector arrays for flat planar surfaces, not hemispherical ones, says Rogers.
But Rogers and his colleagues wired together silicon light detectors — one pixel in size each — with flexible cables, allowing the detectors to become eye-shaped. The authors published their paper in the journal Nature last August.
“That lets you go from the planar or flat condition in which it’s initially fabricated into this hemispherical shape that we were seeking to achieve for the artificial retina,” explains Rogers. “So what you need in order to make a hemispherical-shaped eye camera is not just bendable…but actually stretchable like a rubber band.”
The curved design can obtain a wide-angle field of view, which opens the doors for a very compact, high-performance camera, says Rogers.
A Look at the Future
Although the eye camera isn’t “state of the art” yet, Rogers says, it’s a good starting point for more advanced technologies in optics and even other parts of the human body.
“We think the initial applications will be in advanced surveillance systems, for example military-type applications, but also in night vision systems. … You can reduce the cost and weight and size of the night vision system.”
Rogers says he and his team are optimistic that this design could play an important role in retinal implants.
“All of the exciting work on implants is done just with conventional, flat detector arrays, and our technology could potentially serve as a drop-in replacement for those flat planar chips that are currently used,” he says.
Rogers and his team are now working on pacemakers that can wrap around the heart, which means more natural implants.
“(The pacemaker will) pace it in very sophisticated ways that goes beyond anything that’s possible with current technology,” Rogers says.
| <urn:uuid:aae8a5b2-bff5-4d2a-a877-44dbd8b8e0ae> | 3.40625 | 610 | Truncated | Science & Tech. | 33.787737 |
The Sun varies on time scales of minutes to decades, and most likely longer. The relative energy content, i.e. the change in the radiation that is transmitted to the Earth, varies depending on the particular solar feature responsible. Thus, the study of both solar weather and solar climate is important in many aspects of the terrestrial environment.
The progress of solar variability studies will rely upon further ground-based and space-based observational programs, as well as a concerted theoretical effort to understand the detailed character of features on the solar surface and the mechanisms which cause internal changes in the solar structure. Only then can the possibility of predicting solar radiation variations be considered.
Approved by Peter Fox
Last revised: Mon Apr 10 15:08:11 MDT 2000 | <urn:uuid:58a95578-6339-4331-b306-b34572485830> | 3.578125 | 167 | Knowledge Article | Science & Tech. | 38.391982 |
On DNA. Which apparently can last for 1000's of years!
"Forget hard disks or DVDs. If you want to store vast amounts of information look instead to DNA, the molecule of which genes are made. Scientists in the UK have stored about a megabyte's worth of text, images and speech into a speck of DNA and then retrieved that data back almost faultlessly. They say that a larger-scale version of the technology could provide an extremely dense and long-lived form of digital storage that is particularly well suited to data archiving"
"Half a Million DVDs in Your DNA"
At the storage density achieved, a single gram of DNA would hold 2.2 million gigabits of information, or about what you can store in 468,000 DVDs
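As a rough back-of-the-envelope check on those figures (assuming a single-layer DVD of about 4.7 GB, an assumption not stated in the article), the arithmetic runs like this:

```python
# Rough check of the quoted figures; DVD capacity is an assumption, not from the article.
DVD_CAPACITY_GB = 4.7                      # assumed single-layer DVD
dvds = 468_000
total_gb = dvds * DVD_CAPACITY_GB          # roughly 2.2 million gigabytes
total_bits = total_gb * 1e9 * 8            # roughly 1.8e16 bits
print(f"{dvds} DVDs ~ {total_gb:,.0f} GB ~ {total_bits:.2e} bits")
```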
Edited by seeder, 23 January 2013 - 08:27 PM. | <urn:uuid:b46e83d0-0e7c-4fc1-82b2-e05fb60f9b53> | 2.84375 | 173 | Comment Section | Science & Tech. | 63.736115 |
Animals having no backbone or spinal column, such as insects, mollusks, crustaceans, worms, and similar organisms.
USGS Zebra Mussel Monitoring Program for north Texas [ More info] General characteristics of this invasive freshwater mollusk, focusing on how we monitor its presence and migration through this region.
West Nile Virus (WNV) [ More info] National Wildlife Health Center studies the West Nile Virus to learn the current geographic extent, to understand how the virus moves between birds, mosquitoes, and humans, and to predict future movements of the virus.
Whales and walrus: tillers of the seafloor [ More info] Report on sea floor marks in the Northeast Bering Sea and Chukchi Sea that were identified as pits and furrows caused by whales and walruses in the process of feeding on bottom crustaceans.
| <urn:uuid:e8d49a87-5602-4d8e-a620-4c50c290e31e> | 3.25 | 212 | Content Listing | Science & Tech. | 37.132556 |
There are many types of dangerous radiation in space. Astronauts must be careful to remain safe and healthy. Spacewalks are especially dangerous times for radiation exposure.
Image courtesy of NASA.
Radiation Dangers to Astronauts
Astronauts are exposed to many different types of dangerous radiation in space. Space agencies, like NASA, must carefully monitor the radiation exposure of astronauts to make sure they remain safe and healthy.
Earth's atmosphere and magnetic field both serve as radiation shields for those of us who are on Earth's surface. Most piloted space missions (ones with astronauts aboard) "fly" in Low Earth Orbit (LEO), slightly above Earth's atmosphere. Astronauts in LEO are still within the protective bubble of Earth's magnetosphere, which deflects many types of particle radiation. Astronauts are outside of the protection of our atmosphere, however, and are thus at greater risk of exposure to high-energy electromagnetic radiation including ultraviolet "light", X-rays, and gamma rays. Even in LEO, astronauts must take precautions to deal with radiation, especially when they are outside on spacewalks or when "solar storms" are brewing.
Trips by astronauts to the Moon, Mars, and asteroids will provide us with bigger challenges protecting astronauts from radiation as they leave the protection of Earth's magnetosphere behind. The Moon offers almost no protection, as it lacks both an atmosphere and a magnetic field. Mars has a very thin atmosphere and a weak, regional magnetic field in some locations. Early Mars bases may be built at low elevation (at the bottom of the "deepest" parts of the atmosphere) locations that are also within a regional magnetic field, in order to take advantage of as much natural radiation shielding as possible.
Apollo 15 marked the start of a new series of missions from the Apollo space program, each capable of exploring more lunar terrain than ever before. Launched on July 26, 1971, Apollo 15 reached the Moon...more | <urn:uuid:ed75ba1c-c142-4458-98e8-afc3e9e811cc> | 3.703125 | 711 | Knowledge Article | Science & Tech. | 45.710642 |
A correct theory of gravity will show us these four (4) things:
1. It will show us why gravity also acts like acceleration (principle of equivalence).
2. It will show us the actual cause of gravity.
3. It will show us why gravitational mass and inertial mass are equivalent.
4. It will show us the speed of gravitational attraction.
Newton's theory has gravity acting at a much faster speed than Einstein's does.
Which one of them was right? | <urn:uuid:6dbe45bd-6a70-4c55-ba29-6fbf42c3ca55> | 3.375 | 96 | Q&A Forum | Science & Tech. | 73.97598 |
Frozen Fruit Fly’s Potential
Scientists in the Czech Republic recently thawed a fruit fly that had been frozen the year before while pupating. During this year, the Drosophila melanogaster was preserved through several generations at temperatures of 23˚C.
Among the possible side-effects scientists were testing for was the fruit fly's ability to develop and mate post-freeze. Luckily, the fly was able both to successfully metamorphose and to mate, producing healthy offspring.
Only a few insects can tolerate freezing, as the accumulation of ice crystals in most animals' bodies is either very harmful or fatal. Vladimír Koštál and his team of researchers have reported that these flies can survive being frozen, but require a diet of cryopreservatives and amino acids like those found in close evolutionary relatives native to the Arctic.
The implications of this study, if confirmed by further testing, could lead to a better understanding of the genes underlying susceptibility to cold. Singling out how some organisms are able to thrive in the cold could help researchers work out how human organs might survive on ice for longer periods so they can be transplanted. In addition, this could prove useful for entomologists, who could preserve whole organisms for study rather than maintaining whole gene pools.
February 02, 2007
Follow Up: IPCC and Hurricanes
Posted to Author: Pielke Jr., R. | Climate Change | Disasters | Science + Politics
The IPCC report is out (PDF) and here is what it says about hurricanes (tropical cyclones). Kudos to the scientists involved. Despite the pressures, on tropical cyclones they figured out a way to maintain consistency with the actual balance of opinion(s) in the community of relevant experts.
Here is the discussion of observed changes:
There is observational evidence for an increase of intense tropical cyclone activity in the North Atlantic since about 1970, correlated with increases of tropical sea surface temperatures. There are also suggestions of increased intense tropical cyclone activity in some other regions where concerns over data quality are greater. Multi-decadal variability and the quality of the tropical cyclone records prior to routine satellite observations in about 1970 complicate the detection of long-term trends in tropical cyclone activity. There is no clear trend in the annual numbers of tropical cyclones.
Interestingly, in a table that discusses attribution of trends to anthropogenic causes it reports that there are some trends observed in some regions in tropical cyclone behavior, writing that these trends "more likely than not" represent the "likelihood of a human contribution to observed trend." But then this statement is footnoted with the following qualification:
Magnitude of anthropogenic contributions not assessed. Attribution for these phenomena based on expert judgement rather than formal attribution studies.
So there might be a human contribution (and presumably this is just to the upward trends observed in some basins, and not to the downward trends observed in others, but this is unclear) but the human contribution itself has not been quantitatively assessed, yet the experts, using their judgment, expect it to be there. In plain English this is what is called a "hypothesis" and not a "conclusion." And it is a fair representation of the issue.
The projections for the future are as frequently represented in the literature:
Based on a range of models, it is likely that future tropical cyclones (typhoons and hurricanes) will become more intense, with larger peak wind speeds and more heavy precipitation associated with ongoing increases of tropical SSTs. There is less confidence in projections of a global decrease in numbers of tropical cyclones. The apparent increase in the proportion of very intense storms since 1970 in some regions is much larger than simulated by current models for that period.
This comment on the process was offered by Australia's Neville Nicholls, who was one of the authors responsible for drafting the language on tropical cyclones:
"I was disappointed that after more than two years carefully analysing the literature on possible links between tropical cyclones and global warming that even before the report was approved it was being misreported and misrepresented. We concluded that the question of whether there was a greenhouse-cyclone link was pretty much a toss of a coin at the present state of the science, with just a slight leaning towards the likelihood of such a link. But the premature reports suggested that we were asserting the existence of much stronger evidence. I hope that when people read the real report they will see that it is a careful and balanced assessment of all the evidence."
The open atmosphere of negotiations in the IPCC is probably something that should be revised. Anyone who denies that political factors were ever-present in the negotiations isn't paying attention. Posted on February 2, 2007 05:37 AM
Interesting. It confirms that yesterday's AP 'scoop' was not just misleading but, as some had expected all along, simply wrong ("The IPCC report is a marked departure from a November 2006 statement by the World Meteorological Organization...."). I doubt that AP will withdraw or correct the report. So much for the state of science journalism.
Posted by: Benny Peiser at February 2, 2007 07:05 AM
Benny- Call me a cynic, but my view is that the AP was either tricked into revealing an early draft by those wanting to create pressure on the process from the outside, or the AP was itself engaging in such behavior. The earlier drafts were an "open secret" and even can be found online.
The permeable IPCC negotiating process obviously needs some rethinking, as this case illustrates.
However, on the other hand score one for scientific leadership, as the IPCC narrowly avoided a major controversy. So perhaps the process worked after all.
Posted by: Roger Pielke, Jr. at February 2, 2007 07:34 AM
Thank you for your thoughtful and balanced assessment of what the IPCC SPM says. You have got it right. Your careful analysis on what the report says and how it compares to the WMO consensus statement is most appreciated.
Posted by: Randy Dole at February 3, 2007 11:00 AM
Neville Nicholls comments about "misreporting and misrepresenting" lead me to wonder who Nicholls is thinking of? The better case would be that news media are doing the misreporting. Or perhaps journalists were mislead by advocates outside the IPCC process. The worse case would be that IPCC authors are misrepresenting; that IPCC authors were the source of the incorrect reports.
A similar question is posed by Rogers comment that the AP may have been tricked by those wanting to create pressure on the process from the outside.
Regarding the following report in the NYT one has to wonder if in this case the 'pressing' from the US and China was in the direction of keeping the IPCC in line with the existing consensus.
"Scientists involved in the discussions said Thursday that the U.S. delegation, led by political appointees, was pressing to play down language pointing to a link between intensification of hurricanes and warming caused by human activity."
Posted by: Cortlandt at February 3, 2007 11:36 AM
Neeever mind. I was thinking by mass. My mistake.
I would like to know how, since climate is a balance of forcings, they can say that CO2 forcing has increased by 20% in the last 10 years. It seems that since temperature hasn't increased, CO2 forcing must not be that strong.
Posted by: Steve Hemphill at February 3, 2007 06:05 PM
It's absolutely wrong to say that global temperature hasn't increased in the last 10 years. Fitting a linear regression to the annual GISS global temperature from 1996-2006, the temperature rose by 0.29 C/decade. The linear correlation coefficient between temperature and year is 0.70; the p-value for the slope being greater than zero is 0.018. (Even if you start with the 1998 warm year, the regression indicates warming, although the significance isn't as strong, mostly because of the small number of points in the regression. If you use the monthly values from 1998-2006, the 0.19 C/decade estimate is highly significant, p=0.002.)
Posted by: Harold Brooks at February 3, 2007 06:29 PM
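For anyone who wants to redo the trend calculation Harold describes, here is a minimal sketch in Python. The temperature values below are a synthetic ramp for illustration only and should be replaced with the actual GISS annual means:

```python
from scipy import stats

def decadal_trend(years, temps):
    """Least-squares trend (deg C per decade), correlation r and p-value of the slope."""
    fit = stats.linregress(years, temps)
    return fit.slope * 10.0, fit.rvalue, fit.pvalue

years = list(range(1996, 2007))
# Synthetic ramp for illustration only; substitute the actual GISS annual anomalies:
temps = [0.30 + 0.03 * i for i in range(len(years))]
trend, r, p = decadal_trend(years, temps)
print(f"trend = {trend:.2f} C/decade, r = {r:.2f}, p = {p:.4f}")
```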
Steve Hemphill wrote: "temperature hasn't increased"
What in the world are you talking about? The steepest average rate of temperature increase in the whole global instrumental record is in the last 10-20 years (e.g., >0.3 deg C just since the first IPCC assessment in 1990, see the black line in AR4 Fig SPM-3(a)). That's part of the basis for the the IPCC AR4 conclusion that warming is "unequivocal."
Posted by: Scott Saleska at February 3, 2007 08:41 PM
Graceful summary, Roger, thanks.
But has anyone suggested that there aren't political factors present in the full IPCC negotiations?
Posted by: Scott Saleska at February 3, 2007 10:02 PM
You blew any credibility you had with me with that one, Steve.
Posted by: Mark Hadfield at February 4, 2007 11:58 AM
I notice nobody's taken on the 20% increase in CO2 forcing. What I'm saying is if the dogma is correct that ghg forcing is responsible for x (33?) degrees of warming on Earth and CO2 forcing has gone up by 20%, why is the warmest year still back in 1998?
Mark - good comeback ;-)
Posted by: Steve Hemphill at February 5, 2007 06:54 AM
As stated in the SPM, the 20% increase since 1995 is relative to the pre-industrial baseline greenhouse effect (~280 ppm CO2), not relative to zero ghg in the atmosphere. The natural greenhouse effect is irrelevant to the 20%. If 1995 was ~1 C higher than a natural greenhouse effect, we'd expect 20% higher radiative forcing to give an additional 0.2 C in 2005.
Interannual variability (the strong El Nino in 1998) is why 1998 is still the warmest year on record.
Posted by: Harold Brooks at February 5, 2007 07:47 AM
Steve, I withdraw my "lost any credibility" remark.
My underlying point was that to say temperature hasn't increased in the last 10 years is wrong, in my opinion. If you'd care to back it up, fine, but "1998 is still the warmest year on record" won't do it.
Posted by: Mark Hadfield at February 5, 2007 03:14 PM
Any word from Landsea? He walk out of IPCC based on the cyclones, and you're now saying that its OK. So I'm curious as to his reaction. http://www.cnsnews.com/ViewCulture.asp?Page=/Culture/archive/200702/CUL20070202a.html has him saying "Landsea told Cybercast News Service his primary concern was with how lead authors representing the IPCC were interacting with the public and the media." - should we take that to mean that the report itself is OK by him?
Posted by: William Connolley at February 5, 2007 03:21 PM
My point is that the concept of ignoring natural forcing and saying that "forcing has increased by 20% in the last 10 years" is unbelievable sound bite bait, playing to alarmism. I can't believe that scientists, especially anyone that does modeling, would not cringe at that. If CO2 forcing truly increased by 20% in the last 10 years we would have undoubtedly seen a year warmer than 1998 - which we haven't. Since we haven't anyway, I don't see any justification for alarm bells - but it's not surprising, since we really have no clue about feedbacks. Forcings are easy, feedbacks are hard.
Also, I have a question on models. Do models calculate ppm changes by mass, or by mole?
Posted by: Steve Hemphill at February 6, 2007 06:20 PM
I don't know, but provided the models account for the concentration correctly, why would this specific choice of units matter? It's a bit like asking: Does the model use SI, British Units, or code everything in terms of non-dimensional parameters? Any method is fine as long as you do all conversions properly and don't screw up.
Posted by: margo at February 7, 2007 09:27 AM | <urn:uuid:64ea560c-85c4-4e52-a70c-988e06218bab> | 2.703125 | 2,306 | Comment Section | Science & Tech. | 55.694524 |
Why Heisenberg Is Right (and wrong)
Heisenberg's theory claims you can't measure a particle down to the tiniest detail; any measurement will be somewhat inaccurate. However, to teleport a particle you would seemingly need to know its exact details and then produce another particle with those same characteristics somewhere else. Correct? Well, not really.
The basic foundation to Quantum Teleportation is the concept of entanglement. In this process, two photons are "entangled" (would make sense, right?). Two entangled photons act like any others, until a measurement is taken.
Measure the polarization of a photon (being part of a light wave), and it is vertical or horizontal. (Actually, there is no absolute "vertical"; there are only two states perpendicular to each other.) A photon has a 50% chance of being in either state. If you set up a polarizing filter (like those in polarized sunglasses), it lets through 50% of the light: the photons whose polarization matches the filter's orientation.
What's so special about entanglement? Simple: once one photon is measured to be either vertical or horizontal, the other will be measured the exact same way. (As if, when rolling two separate dice, the roll of one always matched the other.) Each measurement by itself is still a 50% chance; only the two measurements will happen to match. Sound weird? Einstein called it "spooky action at a distance."
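A toy simulation makes the statistics concrete: each photon on its own is a 50/50 coin flip, yet the pair always agrees. The sketch below is pure bookkeeping of outcomes, not a model of the underlying physics:

```python
import random

def measure_entangled_pair():
    """Each measurement is 50/50, but the two photons always give the same answer."""
    outcome = random.choice("VH")   # 'V' = vertical, 'H' = horizontal
    return outcome, outcome

trials = [measure_entangled_pair() for _ in range(10_000)]
frac_a_vertical = sum(a == "V" for a, _ in trials) / len(trials)
frac_matching = sum(a == b for a, b in trials) / len(trials)
print(f"photon A vertical: {frac_a_vertical:.2f}   pairs matching: {frac_matching:.2f}")
```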
Now the real questions: When can I be teleported? How much will it cost? Can I have the schematics to build one and be rich? Now the answers: Not anytime soon. Nothing more than photons has been transferred.
If you take some unknown photon (say, photon X) and have it interfere with one of the entangled pair (say, photon A; the other being photon B), the result can behave in a certain set of ways. The particles could cancel, reinforce, or, having been polarized different ways, do nothing. However, the outcome of this mess can be applied to photon B (by rotating its polarization), making a perfect copy of photon X (without ever finding out what it really is).
At the University of Innsbruck, they really did this. However, they only transferred a quarter of the photons (only those when X matched A). Since they controlled the polarization of X, they could make sure it was really being teleported.
Another interesting effect arises from this: entanglement itself can be teleported. A can be entangled with B, and the new copy would still be entangled with B. Weird.
No-cloning (nothing to do with Dolly the sheep)
You can't make an exact copy of a particle. Sorry, no can do. If you could, you could get around Heisenberg's theory (by measuring one copy accurately for position and the other accurately for momentum; together, you would have detailed information about both, unlike what Heisenberg said). Unfortunately, nature likes the rule and prevents you from copying a particle; you can only move it.
See also the
Dr. Math FAQ:
3D and higher
Browse Middle School Two-Dimensional Geometry
Stars indicate particularly interesting answers or
good places to begin browsing.
Selected answers to common questions:
Pythagorean theorem proofs.
- Euler's Formula [11/26/2001]
I have to find Euler's formula for two-dimensional figures and explain it
at a university level and at an elementary-school level.
- Figuring Out Formulas for Area [08/24/1999]
How can I remember formulas for squares, triangles, cylinders, etc?
- Figuring Square Footage and Yardage [12/14/2002]
What is the exact calculation used in order to determine square
footage or yardage of a given room?
- Finding Areas of Different Polygons [09/02/1997]
Could you please tell me how to work out the area for an equilateral
heptagon, octagon, nonagon, decagon, unedecagon, and dodecagon?
- Finding Captain Kidd's Treasure [05/17/2000]
How can you find Captain Kidd's treasure if you can't find the tree
referenced in the instructions?
- Finding Line Segment Lengths [12/21/2001]
Can you help me find line segments AB and BC if points A,B,C, and D are
collinear with B between A and C and C between B and D...?
- Fitting a Picture to a Frame [11/14/1996]
I have a picture frame that needs to have twice as much border on the
side of the picture as on the top. If the photo is half the area of the
frame, how wide should the borders be?
- Flatland, by Edwin Abbott [8/21/1996]
I need any info about the book FLATLAND by Edwin Abbott and how it can be
used in the classroom.
- Flips, Reflections, Rotation, and Quadrants [11/10/1998]
These terms are being used to describe the movement of shapes. What do
they really stand for?
- Geometric Probability [05/23/2001]
If an arrow is equally likely to hit anywhere inside a circular target
that is 3 feet in diameter, what is the probability that it will hit
inside a bull's-eye that is 6 inches in diameter?
- Geometric Proofs [11/28/1998]
I am trying to help a friend learn geometric proofs. Do you have any
- Geometry Books [4/17/1996]
I am having trouble finding an instructive book on geometry. Do you have
- Geometry Project [11/1/1994]
Think of three examples of geometry in the real world, one non-man-made (found in nature).
- Geometry Proofs: Lines and Planes [11/08/1998]
Show that two intersecting lines intersect in exactly one point...
- Geometry Proofs with Lines [2/6/1996]
Prove that if two lines are cut by a transversal so that alternate
exterior angles are congruent, then the lines are parallel.
- Grad as a Measure of an Angle [03/20/2002]
I would like to know about the origins, use in the past, and whether (and
how) the grad is used now.
- How Can a Line Have Length? [04/14/1998]
If a Euclidean point has no length, how can a union of Euclidean points
form a Euclidean line segment with length?
- If You Know Perimeter, Can You Find Area? [6/30/1996]
Can one determine the acreage of an irregularly shaped field if only the
distance around the edge of the field (in feet) is known?
- Instruments for Measuring Angles [09/28/2001]
I need the name, picture, or description of five devices used to measure angles.
- Intersection of Circles [10/08/1998]
I do not understand the term intersection in geometry.
- Intersection of two angles [8/31/1996]
Draw a diagram in which the intersection of angle AEF and angle DPC is
- Introduction to Quadrants in the Coordinate Plane [02/05/2004]
What is a quadrant? How do you find it? I had to identify 9 ordered
pairs on a graph. Now I need to also name the quadrant each one is in.
- Is a Star Concave or Convex? [4/2/1996]
Is a star-shaped polygon convex?
- Is Henry Guilty? (Geometry Puzzle) [6/10/1996]
In Hughmoar County, residents shall be allowed to build a straight road
between two homes as long as the new road is not perpendicular to any
existing county road...
- Jobs That Use Geometry [12/18/2001]
I would like to learn how geometry is used in real life. What jobs
- Join the Dots [09/02/2001]
Given 9 dots in a square, how can you connect them with only 4 lines
without picking up your pencil or going through a dot more than once?
- Left Angles [06/01/1999]
What is a left angle?
- Linear and Board Feet [01/04/1999]
Can you explain the terms linear foot and board foot as they are used in
the lumber industry?
- Linear Footage [04/20/2002]
There is a fence I want to buy, but the ad says '4 foot tall, 50
linear feet. What is a linear foot compared to a regular foot?
- Math in soccer [05/21/1999]
How is math involved in soccer?
- Maximum Number of Intersections of n Distinct Lines [10/07/1998]
Find a pattern for the maximum number of intersections of n lines, where
n is greater than or equal to 2.
- The Meaning of Locus [03/04/2003]
What is the locus of points equidistant from two parallel lines 8
- Measuring Angles Without a Protractor [01/26/1999]
Given an isosceles trapezoid and one angle, how would we find the other angles?
- Mobius Strips, Spheres, and Dimensionality [05/28/2003]
Is a Mobius Strip 2-D or 3-D? Or is it 1-D? What about a sphere?
- Names of Triangles and Angles [07/25/2001]
My teacher told me to find names of triangles other than equilateral and
- Net of a Box [05/23/1999]
Choose dimensions (length, width, and height) and find the surface area
and volume of a box; then draw a flat pattern of the box.
- Net of a Hexagonal Pyramid [02/05/2001]
How would you draw the nets of a hexagonal pyramid and a rectangular
- Nets in a Geometrical Sense [03/07/1999]
What is the "net" of a shape?
- Non-Euclidean Geometry for 9th Graders [12/23/1994]
I would to know if there is non-euclidean geometry that would be
appropriate in difficulty for ninth graders to study.
- Non-parallel Glide Reflections [10/21/1998]
A glide reflection consists of a line reflection and a translation
parallel to the reflection. What if the translation is not parallel to the reflection?
4.5. Magnetic diffusivity equation
From Eqs. (4.22) and (4.23) the Ohmic electric field can be expressed as
which inserted into Eq. (4.19) leads to the magnetic diffusivity equation
The first term of Eq. (4.29) is the dynamo term. The second term of Eq. (4.29) is the magnetic diffusivity term whose effect is to dissipate the magnetic field. By comparing the left and the right hand side of Eq. (4.29), the typical time scale of resistive phenomena is
where L is the typical length scale of the magnetic field. In a non-relativistic plasma the conductivity σ goes typically as T^{3/2}. In the case of planets, like the Earth, one can wonder why a sizable magnetic field can still be present. One of the theories is that the dynamo term continuously regenerates the magnetic field which is dissipated by the diffusivity term. In the case of the galactic disk the value of the conductivity (13) is given by σ ≈ 7 × 10^{-7} Hz. Thus, for L ≈ kpc, t ≈ 10^{9} (L / kpc)^{2} sec.
In Eq. (4.30) the typical time of resistive phenomena has been introduced. Eq. (4.30) can also give the typical resistive length scale once the time-scale of the system is specified. Suppose that the time-scale of the system is given by t_U ~ H_0^{-1} ~ 10^{18} sec, where H_0 is the present value of the Hubble parameter. Then
leading to L_σ ~ AU. The scale (4.31) gives then the upper limit on the diffusion scale for a magnetic field whose lifetime is comparable with the age of the Universe at the present epoch. Magnetic fields with typical correlation scale larger than L_σ are not affected by resistivity. On the other hand, magnetic fields with typical correlation scale L < L_σ are diffused. The value L_σ ~ AU is consistent with the phenomenological evidence that there are no magnetic fields coherent over scales smaller than 10^{-5} pc.
The dynamo term may be responsible for the origin of the magnetic field of the galaxy. The galaxy has a typical rotation period of 3 × 10^{8} yrs and comparing this number with the typical age of the galaxy, (10^{10} yrs), it can be appreciated that the galaxy performed about 30 rotations since the time of the protogalactic collapse.
From Eq. (4.29) the usual structure of the dynamo term may be derived by carefully averaging over the velocity field according to the procedure of [89, 90]. By assuming that the motion of the fluid is random and has zero mean velocity, the average is taken over the ensemble of the possible velocity fields. In more physical terms, this averaging procedure of Eq. (4.29) is equivalent to averaging over scales and times exceeding the characteristic correlation scale and time τ_0 of the velocity field. This procedure assumes that the correlation scale of the magnetic field is much bigger than the correlation scale of the velocity field, which is required to be divergence-less (∇ · v = 0). In this approximation the magnetic diffusivity equation can be written as:
The quantity α is the so-called dynamo term, which vanishes in the absence of vorticity. In Eqs. (4.32)-(4.33) the magnetic field is the one averaged over times longer than τ_0, the typical correlation time of the velocity field.
It can be argued that the essential requirement for the consistency of the mentioned averaging procedure is that the turbulent velocity field has to be "globally" non-mirror-symmetric. If the system were globally invariant under parity transformations, then the α term would simply vanish. This observation is related to the turbulent features of cosmic systems. In cosmic turbulence the systems are usually rotating and, moreover, they possess a gradient in the matter density (think, for instance, of the case of the galaxy). It is then plausible that parity is broken at the level of the galaxy since terms coupling the gradient of the matter density to the vorticity do not vanish.
The dynamo term, as it appears in Eq. (4.32), has a simple electrodynamical meaning, namely, it can be interpreted as a mean ohmic current directed along the magnetic field :
This equation tells us that an ensemble of screw-like vortices with zero mean helicity is able to generate loops in the magnetic flux tubes in a plane orthogonal to the one of the original field. Consider, as a simple application of Eq. (4.32), the case where the magnetic field profile is given by
For this profile the magnetic gyrotropy is non-vanishing, i.e. B · ∇ × B = k f^{2}(t). From Eq. (4.32), using Eq. (4.35), f(t) obeys the following equation
admits exponentially growing solutions for sufficiently large scales, i.e. k < 4πσ|α|. Notice that in this naive example the α term is assumed to be constant. However, as the amplification proceeds, α may develop a dependence upon |B|^{2}, i.e. α → α_0(1 - |B|^{2}) ≃ α_0[1 - f^{2}(t)]. In the case of Eq. (4.36) this modification will introduce non-linear terms whose effect will be to stop the growth of the magnetic field. This regime is often called saturation of the dynamo, and the non-linear equations appearing in this context are sometimes called Landau equations in analogy with the Landau equations appearing in hydrodynamical turbulence.
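To see the competition between the dynamo and diffusivity terms numerically, one can evaluate a growth rate of the schematic form γ(k) = αk − k²/(4πσ) suggested by the discussion above. The sketch below uses illustrative placeholder values for α and σ, not numbers taken from the text:

```python
import math

def growth_rate(k, alpha, sigma):
    """Schematic alpha-dynamo growth rate: amplification minus resistive damping."""
    return alpha * k - k**2 / (4.0 * math.pi * sigma)

alpha, sigma = 1.0e-3, 50.0                  # illustrative placeholders, arbitrary units
k_crit = 4.0 * math.pi * sigma * alpha       # modes with k below this value grow
for k in (0.1 * k_crit, 0.5 * k_crit, 2.0 * k_crit):
    print(f"k = {k:.3f}  growth rate = {growth_rate(k, alpha, sigma):+.5f}")
```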
In spite of the fact that in the previous example the velocity field has been averaged, its evolution obeys the Navier-Stokes equation
where ν is the thermal viscosity coefficient. Since in MHD the matter current is solenoidal (i.e. ∇ · (ρ v) = 0), the incompressible closure ∇ρ = 0 corresponds to a solenoidal velocity field ∇ · v = 0. Recalling Eq. (4.22), the Lorentz force term can be re-expressed through vector identities and Eq. (4.37) becomes
In typical applications to the evolution of magnetic fields prior to recombination the magnetic pressure term is always smaller than the fluid pressure (14), i.e. p >> |B|^{2}. Furthermore, there are cases where the Lorentz force term can be ignored. This is the so-called force-free approximation. Defining the kinetic helicity as ω = ∇ × v, the magnetic diffusivity and Navier-Stokes equations can be written in a rather simple and symmetric form
In MHD various dimensionless ratios can be defined. The most frequently used are the magnetic Reynolds number, the kinetic Reynolds number and the Prandtl number:
where L_B and L_v are the typical scales of variation of the magnetic and velocity fields. In the absence of pressure and density perturbations the combined system of Eqs. (4.22) and (4.38) can be linearized easily. Using then the incompressible closure, the propagating modes are the Alfvén waves, whose typical dispersion relation is ω^{2} = c_a^{2} k^{2} where c_a = |B| / (4πρ)^{1/2}. Often the Lundqvist number is called, in the plasma literature [85, 87], the magnetic Reynolds number. This confusion arises from the fact that the Lundqvist number, i.e. c_a L / η, is the magnetic Reynolds number when v coincides with the Alfvén velocity. To have a very large Lundqvist number implies that the conductivity is very large. In this sense the Lundqvist number characterizes, in fusion theory, the rate of growth of resistive instabilities and it is not necessarily related to the possible occurrence of turbulent dynamics. On the contrary, as large Reynolds numbers are related to the occurrence of hydrodynamical turbulence, large magnetic Reynolds numbers are related to the occurrence of MHD turbulence.
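As a small numerical illustration of these ratios, the sketch below assumes the standard definitions R_m = v L_B/η, R = v L_v/ν and Pr = R_m/R (the displayed formulas did not survive in this copy, so these definitions are assumptions) and uses purely illustrative inputs:

```python
def mhd_numbers(v, L_B, L_v, eta, nu):
    """Magnetic Reynolds, kinetic Reynolds and (magnetic) Prandtl numbers,
    using the standard definitions assumed in the text above."""
    R_m = v * L_B / eta
    R = v * L_v / nu
    return R_m, R, R_m / R

# Purely illustrative inputs in arbitrary consistent units:
R_m, R, Pr = mhd_numbers(v=1.0e-3, L_B=1.0, L_v=1.0, eta=1.0e-8, nu=1.0e-5)
print(f"R_m = {R_m:.2e}, R = {R:.2e}, Pr = {Pr:.2e}")
```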
13 It is common use in the astrophysical applications to work directly with η = (4πσ)^{-1}. In the case of the galactic disks η = 10^{26} cm² Hz. The variable τ denotes, in the present review, the conformal time coordinate. Back.
14 Recall that in fusion studies the quantity β = 8π |B|^{2} / p is usually defined. If the plasma is confined, then β is of order 1. On the contrary, if β >> 1, as implied by the critical density bound in the early Universe, then the plasma may be compressed at higher temperatures and densities. Back.
A Simple Makefile Example
C_SIMPLE is a simple example of how a makefile can be
used to manage a set of C files.
The underlying task is to approximate the integral of a
function F(T) using the midpoint rule. The program
to do this is broken up into three C files and a header file:
the main program, which tries the midpoint rule three times,
each time using more intervals.
midpoint.c carries out
the midpoint rule to approximate the integral of a function.
f.c evaluates the function whose
approximate integral is desired.
the header file, which contains the interface to each routine.
the makefile for managing these files.
c_simple.out contains the
output from a run of the compiled program.
You can go up one level to
the MAKEFILES page.
Last revised on 04 December 2006. | <urn:uuid:b73fa1bd-ba34-456c-a6cd-3957d144c1a3> | 3.5 | 192 | Documentation | Software Dev. | 56.605329 |
The Maunder Minimum was a period of anomalously low solar activity that occurred between 1645 and 1715. It corresponded to the coldest part of the "Little Ice Age", which occurred between approximately 1400 and 1850.
During the Little Ice Age, and particularly the Maunder Minimum, the earth was substantially colder than it is today. Glaciation increased markedly, winters were historically harsh, and major ocean areas and seas froze over.
There is some debate (mainly from the Global Warming crowd) about whether there is a correlation between solar activity and climate, but in the case of the Maunder Minimum, it would be one hell of a coincidence.
The Little Ice Age followed the Medieval Warm Period, during which temperatures were substantially warmer than they are today. Unlike the Maunder Minimum, we do not have great observational data on solar activity during the Medieval Warm Period, although studies of solar activity cycles suggest it was much higher than during the subsequent cold period.
The Medieval Warm Period was famous for its disappearance from IPCC data after the advent of the Global Warming craze:
Data now shows that global warming ceased about 10 years ago and global cooling is accelerating, corresponding almost perfectly with the end of a period of very high solar activity. Here's another fun graphic:
Could this theory (that drastically diminished solar activity has ended global warming and is responsible for abrupt global cooling since 2007) be wrong? Could we still be experiencing global warming despite all evidence to the contrary? I'm still looking for science supporting global warming that still holds up.
Although I understand it is in no way scientific, I'm very much fascinated by anecdotal evidence. There are great descriptions about how warm it was during the Medieval Warm Period (descriptions of Scandinavian settlements in Greenland sounded like they were in Central Europe) and the Little Ice Age (ports that have been ice-free since the beginning of the 20th century (like New York) were completely closed by ice for entire winters). I love the old illustrations of life in the 19th century, showing routine travel by sleigh in the mid-Atlantic states, ice skating on lakes that haven't frozen at all in my lifetime, etc.
I certainly noted apparent warming in the 1970s, 80s, and 90s. Years went by with no snow in many places that previously had lots of it. The tropical ocean was nearly 10 degrees warmer in the winter months than it is now.
But in the last 7-8 years I've noticed cooler weather, with a dramatic drop in the last 2. I used to live in the tropics and for years would scuba dive during the winter in a thin wetsuit. The ocean temperature would bottom out in the low 70s in January, then rise steadily. Since 2006 the temps have dropped steadily and are now around 65. (This is based on NOAA automated observations at Key West). My swimming pool in North Carolina was apparently 10 degrees cooler for most of the summer of 2009 than it was in 2008.
In Washington DC there has been more snow by the end of December 2009 than I have ever seen (possibly since 1969, but I was pretty young then so I'm not sure). The ski areas on the east coast have more snow by the end of December 2009 than they did at any point in the winter throughout the 1980s and 1990s. There were many years where there was almost no snow on the ski areas in North Carolina - at one point we thought they might go out of business. This year they already had more snow base by late December than they did by late February last year - and last year was the best I had seen since perhaps the 1970s.
Although this is all anecdotal, these anecdotes seem to be repeated around the world. 2008 was the coldest winter in China in 100 years. Britain expects 2009 to the the coldest winter in 100 years.
I have an increasingly strong feeling that it will be cold and getting colder for at least the next generation and in a very few years "Global Warming" will be one of the biggest jokes of our lifetimes. | <urn:uuid:7267bb7e-6874-430b-ab34-da2f7cd83691> | 3.21875 | 831 | Personal Blog | Science & Tech. | 49.45675 |
April 10, 2013
Visible as a small, sparkling hook in the dark sky, this beautiful object is known as J082354.96+280621.6, or J082354.96 for short. It is a starburst galaxy, so named because of the incredibly (and unusually) high rate of star formation occurring within it.
March 29, 2013
Life as we know it doesn't thrive on planets without ozone layers, which is why the recovery of Earth's ozone layer is so important. A new instrument slated for launch to the ISS will monitor our planet's protective ozone cocoon with greater depth and precision than ever before.
March 26, 2013
A comet is heading for Mars, and there is a chance that it might hit the Red Planet in October 2014. An impact wouldn't necessarily mean the end of NASA's Mars program. But it would transform the program along with Mars itself.
March 21, 2013
The European Space Agency's Planck spacecraft has released the most detailed map ever made of the oldest light in the universe, revealing new information about its age, contents and origins.
March 15, 2013
Comet Pan-STARRS has survived its encounter with the sun and is now emerging from twilight in the sunset skies of the northern hemisphere. A NASA spacecraft has beamed back spectacular pictures of a "wild and ragged" tail behind the comet's active nucleus.
March 12, 2013
An analysis of a rock sample collected by NASA's Curiosity rover shows ancient Mars could have supported living microbes.
March 10, 2013
Vegetation growth at Earth's northern latitudes increasingly resembles lusher latitudes to the south, according to a NASA-funded study. "It's like Winnipeg, Manitoba, moving to Minneapolis-Saint Paul in only 30 years," says one of the lead researchers.
March 8, 2013
Using data from an aging NASA spacecraft, researchers have found signs of an energy source in the solar wind that has caught the attention of fusion researchers.
March 7, 2013
NASA Science has just created the newest form of tracking satellites. The Interactive Satellite Tracker, or iSat, is now available and is conveniently launched in your browser.
March 1, 2013
Something unexpected is happening on the sun. 2013 is supposed to be the year of Solar Max, but solar activity is much lower than expected. At least one leading forecaster expects the sun to rebound with a double-peaked maximum later this year. | <urn:uuid:9c158216-eefd-4e63-b5bd-2b253801b024> | 3.171875 | 504 | Content Listing | Science & Tech. | 58.50805 |
In an attempt to move into the field of enterprise application development I started refreshing my Java recently. I was going through a well known book when I stumbled upon the implementation of Strings in Java. I have high esteem for the developers at Sun, but I really could not digest the fact that Sun engineers thought 2 bytes would be enough for characters. It was kind of Y2kish. Now that the UTF has grown above the usual number 16 bits can represent I was eager to find how Sun tackled this problem. The book touched the matter vaguely but since that didn’t completely clear my doubts I decided to investigate. And these are my findings.
Before we dive into the Java implementation of the standard, we should understand what UTF is. At least some of us might have seen it somewhere. May be those of us who have the creepy behavior of going through the source of an HTML page might have seen the following,
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />.
And we may have a vague idea on what it does. Don’t worry let’s understand what it really is.
Unicode is an internationally accepted standard that defines a character set and corresponding encodings. The above piece of code is informing the browser that the document should be interpreted using the UTF-8 encoding. So what exactly is UTF, or Unicode Transformation Format?
In order to completely understand Unicode, we have to go back a few decades, to when people thought the earth was the center of the universe and it was indeed flat. Oh sorry, we have to go back a few decades to when the majority of software was written by English-speaking people. It was only natural they thought the only set of characters ever to be encountered in the realm of programming would be the English alphabet, numerical digits and a couple of other prominent characters. So it was logical to use 2^8, or 256, code points to represent the set of characters. It was enough for the commonly used characters back then, and space was also left for the inclusion of more characters in the future. The problem started when different people started encoding different characters in the free space. What evolved was chaos. Also, when the internet happened, people all around the globe started using technology and started tweaking programs to their liking and in their own native languages. It became impossible to fit all the characters into the tiny space of 256 code points. Soon the encoding system known as ASCII ran out of space and the need for a better encoding system evolved. To make a long story short, thus evolved UTF. More on this can be read from here and here.
In Unicode, characters are represented by code points. A code point is usually written as a hex number preceded by U+. For example U+2122 means TM, the trademark symbol. The prominent UTF encodings are UTF-8, UTF-16 and, the latest one, UTF-32. These three are methods to represent the Unicode character set using 8 bit, 16 bit and 32 bit code units respectively. To get a more detailed idea of Unicode please read Joel Spolsky's post.
In Java, from the beginning, characters were represented by 16 bits, and for some time that was enough to represent all the characters. But since the characters included in Unicode outgrew the 16 bit realm, Java was faced with a dilemma: either change the char representation to 32 bits or use some other method. It is not much of an issue, since most of the characters outside the 16 bit representation are rarely used. But since Java is a language which believes in portability very much, and the engineers at Sun are much more intelligent than average developers like us, they found a way to circumvent this issue. Java is now equipped to represent the characters beyond the 16 bit realm as well. So how does Java tackle the supplementary characters outside the 16 bits? What Sun employed to get out of this mess was the UTF-16 encoding. So what is UTF-16?
To quote Wikipedia, UTF-16 is a variable-length character encoding for Unicode, capable of encoding the entire Unicode repertoire. The encoding maps each character to a sequence of 16 bit words. Characters are known as code points and the 16 bit words are known as code units. The basic characters from the Basic Multilingual Plane can be represented using a single 16 bit code unit. For characters outside this we need to use a pair of 16 bit words called a surrogate pair. Thus all the code points from U+0000 through U+10FFFF, except for U+D800–U+DFFF (these are not assigned any characters), can be specified using UTF-16. Why are these numbers not assigned any characters? It is an intelligent choice made by the Unicode community in designing the UTF-16 encoding scheme.
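Before walking through the bit-level recipe in the next paragraph, here is a small sketch of the surrogate-pair arithmetic. It is written in Python for brevity, but any UTF-16 encoder, including Java's, has to perform the same bit manipulation:

```python
def to_surrogate_pair(code_point):
    """Encode a supplementary code point (U+10000..U+10FFFF) as a UTF-16 surrogate pair."""
    if not 0x10000 <= code_point <= 0x10FFFF:
        raise ValueError("not a supplementary code point")
    offset = code_point - 0x10000        # now a 20-bit value
    high = 0xD800 + (offset >> 10)       # top 10 bits -> high (lead) surrogate
    low = 0xDC00 + (offset & 0x3FF)      # low 10 bits -> low (trail) surrogate
    return high, low

# U+1D11E, a supplementary-plane character, encodes as the pair D834 DD1E:
print([hex(u) for u in to_surrogate_pair(0x1D11E)])
```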
The characters outside the BMP (those from U+10000 through U+10FFFF) are represented using a pair of 16-bit words, as I said before. This pair is known as a surrogate pair. First, 0x10000 is subtracted from the original code point to make it a 20-bit number. That number is then divided into two 10-bit halves, each of which is loaded into a surrogate, with the higher-order bits in the first. The two surrogates will be in the ranges 0xD800–0xDBFF and 0xDC00–0xDFFF. Since we have left those regions unassigned, we can be sure such a code unit is not a character by itself but needs further processing before the original code point is recovered. You can read the UTF-16 specification from Sun here. | <urn:uuid:fa835a53-97fb-4343-a770-33a5d5dc6ec3> | 3.203125 | 1,138 | Personal Blog | Software Dev. | 57.271953
Spooky shapes seem to haunt this starry expanse, drifting through the night in the royal constellation. Of course, the shapes are cosmic dust clouds faintly visible in dimly reflected starlight. Far from your own neighborhood on planet Earth, they lurk at the edge of the molecular cloud complex some 1,200 light-years away. Over 2 light-years across, the ghostly nebula and relatively Bok globule, also known as, is near the center of the field. The core of the dark cloud on the right is collapsing and is likely a binary star system in the early stages of formation. Even so, if the spooky shapes could talk, they might well wish you a
Mt. Lemmon SkyCenter,
University of Arizona | <urn:uuid:8ebc2082-fc8d-41d1-b108-d7689b5dcb1b> | 2.984375 | 169 | Content Listing | Science & Tech. | 52.57884 |
Lake effect snow is highly localized snowfall, sometimes intense, that forms downwind of large bodies of water such as the Great Lakes. They usually take the shape of narrow bands and can produce significant snow accumulation within very short time periods. As shown on the map below, the Great Lakes have a tremendous influence on the amount of snowfall that falls downwind of their location. On the map below, 100 cm equals about 40 inches.
Lake effect snow actually requires a specific set of conditions involving the atmosphere, land, and water surface. There are 5 primary ingredients that play a role in the formation of lake effect snow.
1. Cold air over a relatively warmer body of water - this results in instability in the atmosphere, and creates a situation where heat and moisture are lifted from the water and transported by the wind downstream. The warm moist air rises and snow showers and squalls form. Typically a temperature difference of 13 degrees C or 24 degrees F between the lake water surface and the air at the 850 mb pressure level, or roughly 4500 feet above the lake, is needed to reach absolute instability, where heat and moisture can be vigorously lifted from the water into the air. The relatively warmer air from the lake cools as it rises and condenses into clouds which produce snow.
2. A layer of cold air at the earth's surface that is sufficiently deep - For the Great Lakes Region, cold air masses originate in the high latitudes of North America and then "spill southward" with an upper level trough or buckle in the jet stream. The depth of the cold air is an important player in determining the intensity of the snowfall possible. Generally, an arctic air mass at least 3000 feet deep is needed to generate good lake effect snow development, and usually air masses greater than 7500 feet deep are associated with the strongest lake effect snowstorms.
The above satellite image shows the lake effect snow band streaming south across Lake Michigan into northwest Indiana on Tuesday, November 18, 2008 at 9:46 am CST.
3. Fetch - Fetch here relates to the distance the air travels over the water. Greater fetches can produce heavier and more intense lake effect snow because of the opportunity for greater "heating" and "moistening" by the relatively warmer waters. In the Great Lakes, wind direction plays a key role in determining the fetch. Below are a few examples of wind direction and fetch.
4. Little if any wind shear (change in wind direction with height) through the layer of cold air - If there is little wind shear, this favors the type of convection necessary to produce intense and well organized lake effect snow bands. If the wind changes significantly with height through the cold layer (for example by 60 degrees in direction or more), the convection process that leads to lake effect snow is disrupted and diminished. This will typically result in only flurries rather than strong well organized lake effect bands, all other things being equal.
The above Doppler radar image shows an intense lake effect snow band impacting northeast Illinois and northwest Indiana on March 7 of 1996.
5. Upstream Moisture - How moist an air mass is before it even comes in contact with the lake has been shown to play a key role in determining how heavy snowfall can be. If an air mass has lower moisture, it is more difficult to get condensation, clouds, and snow. On the other hand, if an air mass has higher moisture content (relative humidity of 70 percent or greater), heavier snowfalls are possible. Upstream lakes can actually add to the amount of pre-existing moisture. For example, if an arctic air mass moving across Lake Michigan has also moved across Lake Superior, there has already been additional moistening of that air mass before it even gets to Lake Michigan! This can result in heavier snowfalls for areas downwind of Lake Michigan.
Lake Effect Snow Formation - The diagram below summarizes the key ingredients for lake effect snow. Arctic air with a temperature difference of at least 13 deg C or 24 deg F between the water and the air at about 4500 feet flows over the lake surface. The depth of the cold air (marked by the dashed line labeled "capping inversion") must be enough to support convection to develop as heat and moisture is transported from the water into the air. Greater snows can result if the fetch (distance the air travels across the water) is longer, wind shear minimal, and initial moisture content of the airmass greater. Rising air moving over the water condenses to form snow showers and squalls, and snow bands develop and extend inland downwind of the shoreline.
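To make the checklist above concrete, here is a rough illustrative sketch that tests the numeric thresholds quoted in this article (a 13 degree C lake-to-850 mb difference, roughly 3000 feet of cold air, limited directional shear, and upstream relative humidity of 70 percent or more). Fetch is left out because no numeric threshold is given for it. The function name, variable names and the simple pass/fail logic are my own; this is not an operational forecast tool.

```python
def lake_effect_ingredients(lake_sfc_c, t850_c, cold_depth_ft, shear_deg, upstream_rh):
    """Return which of the article's lake-effect ingredients are met."""
    return {
        "instability (lake minus 850 mb >= 13 C)": lake_sfc_c - t850_c >= 13,
        "cold-air depth >= 3000 ft": cold_depth_ft >= 3000,
        "directional wind shear < 60 degrees": shear_deg < 60,
        "upstream relative humidity >= 70%": upstream_rh >= 70,
    }

# Example: 4 C lake, -14 C at 850 mb, 8000 ft of arctic air, 20 degrees of shear, 75% RH.
for name, ok in lake_effect_ingredients(4, -14, 8000, 20, 75).items():
    print("PASS" if ok else "fail", "-", name)
```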
After looking at the numerous parameters that influence the creation of lake effect snow and modulate its intensity, and seeing the complex atmospheric circulations that exist and interact, it is easy to see why predicting lake effect storms is so challenging! | <urn:uuid:409f4919-2806-4dab-9ebc-61c894ba17a5> | 4.34375 | 991 | Knowledge Article | Science & Tech. | 46.57842
If there's one thing that science fiction is nearly universal on, it's that traveling at or beyond the speed of light looks really freakin' awesome. Star Trek had those rainbow star lines zipping by, Star Wars had something similar, and Spaceballs went all the way to plaid. The reality, unfortunately, may not be nearly as cool.
Physics students at the University of Leicester decided to buzzkill the entire sci-fi genre by going and calculating what would happen if you were to kick a spaceship up to the speed of light and look out a window. This is about the extent of what you'd see:
Yeah. Boresville. The reason this is all you'd get is because of the Doppler effect, which says that the frequency of a wave changes as you move relative to its source. You've heard the Doppler effect in action plenty of times: it's what causes a police siren to change from high pitch to low pitch as the police car drives past you. As the car approaches, the sound waves it's emitting bunch together, increasing their frequency and consequently their pitch. And as the car passes and drives away, the sound waves spread apart, decreasing their frequency and pitch.
This same thing happens with light waves too: moving towards a light-emitting object (like a star) causes the light waves to bunch up a bit, increasing their frequency (and decreasing their wavelength) and shifting the light towards the bluer end of the spectrum. Moving away from a light-emitting object runs the whole thing backwards, leading to a shift to the red, which is how we know the Universe is expanding: distant galaxies are all redshifted, meaning that they're moving away from us.
So anyway, back to light speed. If you're moving at (say) 99.99995 percent of the speed of light, which is what these students used for their calculations, light from stars will be shifted so far towards the blue end of the spectrum that it'll end up way past what we can see with our eyes, turning into x-rays that are effectively invisible. Meanwhile, very long wavelength light that we ordinarily can't see, like cosmic background radiation, is shifted up into the visible. So essentially, stars disappear, and all we see is the leftover glow from the Big Bang as a formless blob of light.
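For a rough sense of the numbers, the relativistic Doppler factor for a source straight ahead is sqrt((1 + beta) / (1 - beta)). The short sketch below uses the 99.99995 percent figure quoted above; the wavelength choices are my own illustrative values, not from the Leicester paper.

```python
import math

beta = 0.9999995                              # v/c, the speed quoted above
doppler = math.sqrt((1 + beta) / (1 - beta))  # frequency boost for head-on motion

print(f"blueshift factor: about {doppler:.0f}x")                               # ~2000x
print(f"550 nm starlight  -> {550 / doppler:.2f} nm (X-ray territory)")
print(f"1.06 mm CMB peak  -> {1060 / doppler:.1f} micrometres (toward the visible)")
```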
We are of course obligated to point out that none of the spaceships in sci-fi are traveling at lightspeed; they're all traveling beyond lightspeed. So maybe there's just a transient flash of cosmic background radiation, before something far more interesting (and ludicrous) happens once you break the lightspeed barrier. | <urn:uuid:7d0f7d60-1892-42d4-91e2-5d9744a6368a> | 3.40625 | 547 | Personal Blog | Science & Tech. | 55.760526 |
The realisation in the 1980s that DNA is sometimes preserved in very ancient biological specimens opened up a new field of research within evolutionary biology.
Ancient DNA can be seen as a time machine that opens up a window to the past, enabling us to trace molecular evolutionary processes through time and space.
Although the questions that can be answered by studying ancient DNA are limited by the degree of DNA preservation, it can be a very important and powerful source of information.
For example, it allows us to:
Ancient DNA analysis is being increasingly incorporated into research addressing questions relating to:
Find out what conditions suit the preservation of ancient DNA. Learn about quality and contamination issues and how these can be overcome.
Ancient DNA can be described as any DNA extracted from ancient biological specimens, such as archaeological bones, teeth or other tissue.
Ancient DNA can derive from a wide spectrum of sources, including: | <urn:uuid:6ab70854-29c4-45a5-8522-07f302fb7cbe> | 3.796875 | 180 | Knowledge Article | Science & Tech. | 23.175769 |
Feb. 1, 2010 Biodiversity in freshwater systems is impacted as much or more by environmental change than tropical rain forests, according to University of Oklahoma Professor Caryn Vaughn, who serves as director of the Oklahoma Biological Survey. "When we think about species becoming extinct, we don't necessarily think of the common species in freshwater systems, many of which are declining," says Vaughn.
"We need to be concerned about these declines, because these common species provide many goods and services for humans," she states. "Factors underlying these declines include water pollution, habitat destruction and degradation, and environmental changes, such as overexploitation of water and aquatic organisms, all of which are linked to human activities. Freshwater biodiversity is also threatened by climate change which is predicted to alter species ranges and abundance."
Vaughn studies freshwater mussels, or clams, that live in Oklahoma's rivers. North America contains the highest diversity of freshwater mussels in the world with over 300 species, but over 50 percent of these species are declining. Oklahoma contains 55 mussel species, mainly in rivers in the eastern portion of the state.
The roles freshwater mussels fill in ecosystems have not been studied so far, so Vaughn's study is at the forefront of research on freshwater ecosystems. "We have seen that environmental changes are leading to species shifts in freshwater ecosystems, including changes in Oklahoma's mussel fauna," remarks Vaughn. "We need to understand how these changes will influence the services mussels provide in these systems."
Mussels feed by filtering material from the water with their gills, thus mussels act as a biofiltration system in freshwater ecosystems. Losses of these critical species can result in diminished water quality and added expenses for water treatment. Because they are large with hard shells, mussels also provide or improve habitat for many other aquatic organisms.
Multiple approaches are needed to reach Vaughn's research goal of understanding the goods and services provided by mussel communities, how these may be affected by environmental change and how we can better manage our water resources to protect mussels and meet human needs. The study is being done in southeast Oklahoma where there is an abundance of mussel species.
Vaughn believes we have to rethink how we use water in the future because it will impact quality of life for the next 100 years. "Water is our most precious resource," says Vaughn. "Sustainable water quantity and quality is a fundamental need of both wildlife and humans and is a critical component for economic growth." She works with several state agencies and participates on a task force to address water challenges in the state and make recommendations for protecting this resource.
Vaughn recently published an article on the subject in the January 2010 issue of the scientific journal, BioScience. The National Science Foundation provided funding through a grant for this research.
| <urn:uuid:e460733b-6190-4ae4-8c56-61d1668c7d76> | 3.6875 | 618 | Truncated | Science & Tech. | 30.995973
WINGING IT: Ground birds often seek out trees and other elevated spots for safety. Juveniles not yet capable of flight accomplish this by running up the inclines, flapping their wings to enhance traction. The way these birds employ their developing wings may demonstrate the process by which avian flight evolved.
BOZEMAN, MONT.--It's not often that a presentation given to the Society of Vertebrate Paleontology elicits coos and clucks of sympathy. These are, after all, the scientists who study Tyrannosaurus rex and other fearsome beasts of the past. But that's exactly the reaction Kenneth Dial got when, at the group's annual meeting last October, he showed video footage of a fuzzy little partridge chick with its wings taped to its sides trying to climb a tree--only to tumble down into Dial's waiting hands. Unfettered, however, the chick flapped its tiny wings while climbing and steadily made its way up. After teasing the audience for its sentimental display, the University of Montana biologist returned to the matter at hand: explaining how this and other experiments involving ground-dwelling birds led him to hatch a new hypothesis regarding the origin of avian flight.
Traditionally, scholars have advanced two theories for how bird flight evolved. One of these, dubbed the arboreal model, holds that it developed in a tree-dwelling ancestor that was built for gliding but started flapping to extend its air time. The other, known as the cursorial theory, posits that flight arose in small, bipedal terrestrial theropod dinosaurs that sped along the ground with arms outstretched and leaped into the air while pursuing prey or evading predators. Feathers on their forelimbs enhanced lift, thereby allowing the creatures to take wing.
This article was originally published with the title Taking Wing. | <urn:uuid:af1499ad-2246-4b4f-8538-9cc369e440af> | 4.1875 | 381 | Truncated | Science & Tech. | 40.75473 |
Galaxies: Collisions, Types & Other Facts
If you gaze out into the night sky with a telescope, and see beyond what’s visible to the naked eye, you could see a lot of “stars” that are actually imposters. Many of the points of light we often think are individual stellar objects are actually galaxies, collections of millions to trillions of stars. Galaxies are composed of stars, dust, and dark matter, all held together by gravity. Below we discuss galaxy formation, galactic collisions and other facts about these so-called “island universes.”
Galaxies & Black Holes
Galaxies come in a variety of shapes, sizes, and ages. Many have black holes at their centers. In some cases, a galaxy's central black hole is extremely large or active, its surrounding area producing a tremendous amount of energy that astronomers can see over great distances. Material circling the black hole may be accelerated outward by its jets. Other galaxies may contain objects like quasars, the most energetic bodies in the universe, at their core.
Astronomers aren't certain of exactly how galaxies formed. After the Big Bang, space was made up almost entirely of hydrogen and helium. Some astronomers think that gravity pulled dust and gas together to form individual stars, and those stars drew closer together into collections that ultimately became galaxies. Others think that the mass of what would become galaxies drew together before the stars within them were created.
In the 1900s, many astronomers thought that the entire universe lay within our galaxy, the Milky Way. Others argued that the spiral-shaped blobs thought to be dust and gas were separate; Harlow Shapley called them "island universes." It wasn't until 1924, when Edwin Hubble identified several special pulsing stars called Cepheid variables in some of these so-called nebulae and realized that they lay outside of the known span of the Milky Way, that astronomers realized they were, in fact, completely unique collections of stars at distances well beyond our home galaxy.
After Hubble measured the distance to individual galaxies, he went on to measure their Doppler shift — how much light from the galaxies was stretched out due to their motion. He determined that galaxies all around the Milky Way are moving away from us at terrific speeds. The farther away the galaxies are, the faster they are fleeing. Because of this, he was able to determine that the universe itself is expanding. Later astronomers determined that this expansion is accelerating.
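Hubble's relation between distance and recession speed is usually written v = H0 * d. The toy numbers below are my own (a present-day H0 of roughly 70 km/s per megaparsec is assumed), just to show how the speeds grow with distance.

```python
H0 = 70.0  # assumed Hubble constant, km/s per megaparsec

for d_mpc in (1, 10, 100, 1000):
    v = H0 * d_mpc
    print(f"a galaxy {d_mpc:4d} Mpc away recedes at about {v:8.0f} km/s")
```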
Galaxies are classified by their shape. Each type has different characteristics and a different history of evolution.
Some, like the Milky Way, have arms spiraling outward around their center. Known as “spiral galaxies,” these groups make up most of the galaxies that astronomers can see. The gas and dust in a spiral galaxy circles the center at speeds of hundreds of miles per second, creating their pinwheel shape. Some, known more precisely as “barred spirals,” have a bar structure in their center, formed by dust and gas funneled into the center. Present in all spirals, the dust and gas fuel star formation, so spiral galaxies are constantly forming stars today.
Elliptical galaxies lack the spiral arms of their more flamboyant cousins. Their appearance ranges from extremely circular to very stretched out. Elliptical galaxies have less dust than their spiral counterparts, and so the star-making process has all but ended. Most of their stars are older. Although they make up a smaller portion of the visible galaxies, astronomers think that over half the galaxies in the universe are elliptical.
The remaining 3 percent of the galaxies in the universe are known as irregular galaxies. They are neither round nor do they boast spiral arms, and their shapes lack specific definition. The gravity of other galaxies has often affected them, stretching them out or warping them. Collisions or close calls with other galaxies can also deform their shapes.
When galaxies collide
Galaxies don't float through space in isolation. They are bunched together in groups known as clusters. Some clusters are large, containing over a thousand galaxies. Others are smaller. The Milky Way lies within the cluster known as the Local Group, which contains only 50 galaxies.
Occasionally, they slam into one another, merging their stars and dust together. This is an important step in the evolution and growth of many galaxies.
Individual stars generally don't collide in a galactic collision, but the influx of dust and gas bumps up the rate of star formation. The Milky Way is set to collide with the Andromeda galaxy in about 5 billion years.
— Nola Taylor Redd | <urn:uuid:04a2aa0e-c2e6-40e2-877f-b8b43fce563c> | 4.0625 | 947 | Knowledge Article | Science & Tech. | 42.420471 |
Accessibility and Usability
Web developers should strive to build web sites that are both accessible, and usable. These are different, but related concepts. Accessibility deals with whether or not people can physically access the content of a web site. Usability considers how people use the web site once they access it.
The following resources provide information about how to build accessible web sites.
Usability is concerned with designing web sites that are easy to use, and meet the needs of the site owners and visitors. It deals with wording of menus, navigation features, effectiveness of search engines, user interaction, etc.
The following links provide more information on building usable web sites. | <urn:uuid:6d863091-d472-451b-bff3-18205d4e5f43> | 2.96875 | 135 | Knowledge Article | Software Dev. | 24.314037 |
CMC: Global weather forecast model from the "Canadian Weather Service"
Updated: 2 times per day, from 10:00 and 23:00 GMT
Greenwich Mean Time: 12:00 GMT = 12:00 GMT
Resolution: 0.6° x 0.6°
This map shows isotachs - lines on a given surface connecting points with equal wind speed - together with isobars - the line of equal atmospheric pressure at 10m above the ground. The unit used is kph (kilometers per hour). (wind-converter)
NWP: Numerical weather prediction uses current weather conditions as input into mathematical models of the atmosphere to predict the weather. Although the first efforts to accomplish this were done in the 1920s, it wasn't until the advent of the computer and computer simulation that it was feasible to do in real-time. Manipulating the huge datasets and performing the complex calculations necessary to do this on a resolution fine enough to make the results useful requires the use of some of the most powerful supercomputers in the world. A number of forecast models, both global and regional in scale, are run to help create forecasts for nations worldwide. Use of model ensemble forecasts helps to define the forecast uncertainty and extend weather forecasting farther into the future than would otherwise be possible.
Wikipedia, Numerical weather prediction, http://en.wikipedia.org/wiki/Numerical_weather_prediction (as of Feb. 9, 2010, 20:50 GMT). | <urn:uuid:8bc4e2c9-c3f2-4284-9ad4-93ade6227861> | 3.359375 | 323 | Knowledge Article | Science & Tech. | 45.999644
By Daniel Clery, ScienceNOW
In the high-stakes race to realize fusion energy, a smaller lab may be putting the squeeze on the big boys. Worldwide efforts to harness fusion—the power source of the sun and stars—for energy on Earth currently focus on two multibillion dollar facilities: the ITER fusion reactor in France and the National Ignition Facility (NIF) in California. But other, cheaper approaches exist—and one of them may have a chance to be the first to reach “break-even,” a key milestone in which a process produces more energy than needed to trigger the fusion reaction.
Researchers at the Sandia National Laboratory in Albuquerque, New Mexico, will announce in a Physical Review Letters (PRL) paper accepted for publication that their process, known as magnetized liner inertial fusion (MagLIF) and first proposed 2 years ago, has passed the first of three tests, putting it on track for an attempt at the coveted break-even. Tests of the remaining components of the process will continue next year, and the team expects to take its first shot at fusion before the end of 2013.
Fusion reactors heat and squeeze a plasma—an ionized gas—composed of the hydrogen isotopes deuterium and tritium, compressing the isotopes until their nuclei overcome their mutual repulsion and fuse together. Out of this pressure-cooker emerge helium nuclei, neutrons, and a lot of energy. The temperature required for fusion is more than 100 million°C—so you have to put a lot of energy in before you start to get anything out. ITER and NIF are planning to attack this problem in different ways. ITER, which will be finished in 2019 or 2020, will attempt fusion by containing a plasma with enormous magnetic fields and heating it with particle beams and radio waves. NIF, in contrast, takes a tiny capsule filled with hydrogen fuel and crushes it with a powerful laser pulse. NIF has been operating for a few years but has yet to achieve break-even. | <urn:uuid:71b50e70-204f-4de7-b0d1-9a2b45f46e7d> | 3.9375 | 423 | Truncated | Science & Tech. | 38.512909 |
Exploration 3 (Taxicab Geometry)
Department of Mathematics and Computer Studies
York College (CUNY)
Jamaica, New York
Distance below refers to distance using the taxicab metric. Assume that angles are measured as in the Euclidean plane. The lines in this plane have linear equations.
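Before working the exercises, it may help to see the taxicab metric in code. The snippet below is an illustrative helper, not part of the assignment: it computes the taxicab distance |x1 - x2| + |y1 - y2| and prints the pairwise distances asked for in problem 1a.

```python
from itertools import combinations

def taxicab(p, q):
    """Taxicab (L1) distance between two points in the plane."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

points = {"A": (0, 0), "B": (2, 4), "C": (4, 6), "D": (-2, 4)}
for (n1, p), (n2, q) in combinations(points.items(), 2):
    print(f"d({n1},{n2}) = {taxicab(p, q)}")
```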
1. Suppose that one is given the points A = (0, 0), B = (2, 4), C = (4, 6) and D = (-2, 4).
a. Find the taxicab distance between every pair of these points.
b. Draw a taxicab circle of radius 2 centered at B.
c. Which of the points A, C, D (if any) lies on or inside the taxicab circle of radius 2 centered at B?
d. Draw the graph (in the analytical geometry sense) of points which are equidistant from D and C.
2. Draw those points which are equidistant from (-2, -2) and (-4, -4).
3. Find the coordinates of 7 points that lie on the taxicab circle of radius 2 centered at (0, 0).
4. Draw the graph of the points in the taxicab plane the sum of whose distances from (-2, 0) and (2, 0) is 8. What name is commonly given to a "curve" of this kind in the Euclidean plane? | <urn:uuid:6dfe1e1b-3e16-4e0d-bd7b-556052f6c15b> | 3.28125 | 304 | Tutorial | Science & Tech. | 81.95183 |
Much interesting information about the properties of electrons in a crystal is encoded in the so-called band structure, which, for the semiconducting material gallium arsenide (GaAs), looks like this:
Source: Michael Rohlfing, Peter Krüger, and Johannes Pollmann: Quasiparticle band-structure calculations for C, Si, Ge, GaAs, and SiC using Gaussian-orbital basis sets, Phys. Rev. B48 (1993) 17791-17805 (doi: 10.1103/PhysRevB.48.17791), Fig. 3.
The band structure plot shows the energy of electrons in the crystal as a function of momentum. Energy, along the vertical axis, is measured in units of electron Volt (eV), while momentum, on the horizontal axis, is given along specific directions in the crystal, which are denoted by the symbols Γ, Δ, X, Λ, and L.
For free electrons under conditions as typically encountered in solid-state physics, the relation between energy E and momentum p is simply given by the classical, Newtonian formula E = p²/2m, where m is the mass of the electron. In a plot, energy as a function of momentum would be represented by a simple parabola. If we take into account quantum mechanics, we know for example that the energy states of electrons in atoms come in discrete steps. The energy levels of free electrons, however, are not quantised: according to quantum mechanics, free electrons are described just by plane waves. The inverse of the wavelength λ is, up to a factor of 2π, the so-called wavevector k = 2π/λ, which is proportional to momentum, k = p/ħ. Here, ħ, called "h-bar", is Planck's constant h divided by 2π (there are a lot of factors 2π floating around in this business). Hence, for the free electron the relation between energy and wavevector is also given by a parabola, E = ħ²k²/2m.
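As a quick numerical check of the free-electron relation E = ħ²k²/2m, here is a short snippet; the wavevector chosen is my own example value, not taken from the text.

```python
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E  = 9.1093837015e-31  # electron mass, kg
EV   = 1.602176634e-19   # joules per electron volt

k = 1.0e10               # wavevector in 1/m (wavelength about 0.63 nm)
E = (HBAR * k) ** 2 / (2 * M_E)
print(f"E = {E / EV:.2f} eV")   # roughly 3.8 eV
```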
We know, however, that electrons of atoms in a crystal are not free, but subject to the periodic potential of the positively charged atomic cores. As a consequence, the simple quadratic relation between wavevector respectively momentum and energy will be modified. Here, for example, is the crystal structure of gallium arsenide:
Source: Wikipedia on Gallium Arsenide.
The brown balls mark the positions of the gallium atoms; the arsenic atoms are shown in violet. The crystal structure is called fcc (face-centred cubic), because the gallium atoms are located at the corners and at the centres of the faces of a cube. Electrons in gallium arsenide have to reflect the symmetry of this crystal structure. Their wavefunctions are given by so-called Bloch waves.
Moreover, the wavelength of the electrons should not be shorter than the lattice constants of the crystal, i.e. the smallest periodicity length. This is because the same electron state could be described as well by a function with a longer wavelength. As a consequence of the minimal wavelength, the wavevectors have a maximal length. The possible wavevectors, then, all lie within a geometrical shape which is called the Brillouin zone. The Brillouin zone of the gallium arsenide crystal is shown here:
Source: Wikipedia on the Brillouin Zone.
The centre of the Brillouin zone, corresponding to the wavevector k = 0 with no momentum, is denoted by Γ. Moreover, we see that the points L and X denote the centres of the hexagonal and quadratic faces of the Brillouin zone, thus corresponding to the maximal wavevectors in directions of the GaAs crystal normal to the faces of the cube and along the diagonal. The points Λ and Δ just denote half of these maximal wavevectors.
Now, we can understand the band structure plot. The figure shows the energy for electrons with wavevectors between the centre and the border of the Brillouin zone along the directions Γ-L and Γ-X. Lines are calculated, and the dots represent data points measured by photoemission spectroscopy.
We see that the energy of the electrons can be well described by a parabola close to the Γ point, but not for the whole Brillouin zone. Moreover, remarkably, there is not just one curve, but many. And there is a certain energy region which is not covered by any wavevector in the Brillouin zone - there are just no states available at these energies. This is the so-called band gap. States below the the gap are called states in the valence band, states above the gap are in the conduction band. Depending on whether there are electrons in the conduction band or not, the material can conduct electricity or not - it is a metal, or an insulator. In gallium arsenide, the valence band is completely full, and a few electrons are usually thermally excited to the conduction band. That's typical for a semiconductor.
Something that makes gallium arsenide special is the feature marked in orange in the band structure plot: The maximum of the valence band is at the same wavevector as the minimum of the conduction band. This is called a direct gap, and it means that electrons can be promoted from the valence band to the conduction band by the absorption of light. Similarly, the transition of an electron across the direct gap back down to the valence band is accompanied by the emission of light. At a direct bandgap, a crystal can absorb and emit light, much like an isolated atom. The band gap of gallium arsenide at room temperature is 1.43 eV, corresponding to light of the wavelength 870 nm in the near infrared.
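The quoted 1.43 eV gap and the roughly 870 nm wavelength are related by λ = hc/E; here is a one-line check with standard constants (my own snippet, not from the post).

```python
H  = 6.62607015e-34      # Planck constant, J*s
C  = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19     # joules per electron volt

wavelength_nm = H * C / (1.43 * EV) * 1e9
print(f"{wavelength_nm:.0f} nm")   # about 867 nm, in the near infrared
```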
Thus, gallium arsenide with its direct band gap was one of the first materials used to build light emitting diodes (LEDs) and solid-state lasers.
Here is a nice, a bit more elaborate introduction to the concepts behind band structure plots (beware, pop stars pop up on the page).
This post is part of our 2007 advent calendar A Plottl A Day. | <urn:uuid:1fe61c41-c107-452b-9e0b-e4c0cfeebd82> | 3.546875 | 1,299 | Personal Blog | Science & Tech. | 48.265196 |
Comprehensive Description
This robust frog may be brown, reddish-brown or red above with a variable number of large, black spots and blotches on the back, sides, and legs. The spots are usually irregular-shaped, with indistinct edges and light centers. The skin on back and sides is often covered with small bumps and tubercles. The eyes are upturned. The lower abdomen and the undersides of the hind legs are usually colored by a reddish-orange or salmon-colored pigment that appears as though it has been painted on (Leonard et al. 1993; Nussbaum 1984; Stebbins 1985). Oregon spotted frogs have relatively short hind legs and extensive webbing between the toes of the hind feet. Sexually mature females range between 60 and 100 mm snout-vent length and males range between 45 and 75 mm (Licht 1975).
Since nearly the time of its original description in 1853, the systematics of the "Western Spotted Frog" group has been a source of both confusion and debate. In 1996, however, a team led by David M. Green published the results of a study on the genetics of Spotted Frogs and concluded that the group actually contained two "sibling" species: the Oregon Spotted Frog and the Columbia Spotted Frog (Green et al. 1996, 1997). The decision to "split" the species was based upon the results of laboratory studies that indicated significant genetic differences, despite a lack of reliable morphological differences. Because the two species have allopatric ranges, they may be reliably identified based upon the location where a frog is encountered.
See another account at californiaherps.com. | <urn:uuid:2ba4f805-6268-481c-9eb9-d77917c0cb94> | 3.46875 | 347 | Knowledge Article | Science & Tech. | 50.194629 |
Debug Tests into Unit Test Suites
Because one small test will often lead us to discover the true source of a bug, it follows that effective debugging should involve lots of testing. There's simply no way to build a reliable system without thoroughly testing it.

Although many programmers may be unfamiliar with the XP style of coding (in which writing the tests is interleaved with writing the code), there is one form of testing that nearly every programmer does while coding: debugging tests. Some programmers write these tests using print statements, others write them to work with a standalone debugger, others with the debugger in their IDE of choice.
Quite often, you will see programmers discard these tests once they've gotten the program to run correctly. This is a waste of very good tests. Why not incorporate them into the unit test suite over a program? After all, if one of them were to exhibit an unexpected result, we'd want to know about it.
Incorporating debugging tests into the unit tests can be done with relatively little effort, but there are some adaptations that have to be made. Frequently, we'll write tests for debugging to display information about the internal state of the program, whereas we'll write unit tests to signal only if they fail. That's because debugging and unit testing serve different purposes.
When debugging, we are trying to form a more accurate model of the program's behavior, so as to diagnose a bug. But when testing, all we want to know is whether the code passed the tests. If instead we wrote the unit tests to print out a result and then manually checked that it was what we expected, we'd waste a lot of time (or, more likely, we would seldom run the unit tests). So, unit tests tend to be written such that the result is solely one of "pass" or "fail".
Debugging tests can still be used as unit tests with only minor modification. Consider that when the program is working correctly, the result of running a debugging test will match some expected result. It is straightforward to modify such a test so that, instead of simply printing out this result, it compares the result to what's expected. It can then be incorporated into a unit test suite quite easily.
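As a minimal sketch of that conversion (the parse_version function and the expected value are made up for illustration), the same check can go from a printout you eyeball to an assertion a test runner verifies:

```python
import unittest

def parse_version(s):
    return tuple(int(part) for part in s.split("."))

# Debugging-style test: print the result so a human can inspect it.
print(parse_version("1.10.3"))            # expecting (1, 10, 3)

# Unit-test style: compare the result to the expected value automatically.
class ParseVersionTest(unittest.TestCase):
    def test_three_part_version(self):
        self.assertEqual(parse_version("1.10.3"), (1, 10, 3))

if __name__ == "__main__":
    unittest.main()
```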
Just as debugging helps in developing a large suite of unit tests over a program, unit tests can help significantly when debugging. When diagnosing a bug, if you can first run a suite of unit tests and verify that they pass, you can rule out a huge number of potential explanations for a bug. In this way, unit tests allow you to leverage your cognitive energy when modeling program behavior. This is yet another way that debugging is like performing a scientific experiment. When a scientist forms an explanation of an experimental result, he automatically rules out all sorts of explanations that would defy a set of accepted principles about the way the world works. For example, he assumes that the results of his experiment will not change depending on the current weather conditions on Jupiter (well, unless he is performing an experiment on Jupiter), or depending on what he plans to eat for dinner. Unit tests enforce accepted principles of program behavior. | <urn:uuid:56816e4b-f663-4c45-9cfd-08444cb35782> | 2.796875 | 631 | Documentation | Software Dev. | 47.656533
An ultraviolet spectral Atlas of a sunspot with high spectral and
spatial resolution in the wavelength region 1190 - 1730 A is
presented. The sunspot was observed with the High Resolution Telescope
and Spectrograph (HRTS). The HRTS instrument was built at the U.S.
Naval Research Laboratory (NRL), Washington, D.C. (Bartoe and
Brueckner, 1975). The instrument combines high spatial, spectral, and time resolution with an extensive wavelength and angular
coverage. This makes HRTS particularly well suited for studies of fine
structure and mass flows in the upper solar atmosphere. HRTS has
flown six times on rockets between 1975 and 1989 and as a part of
Spacelab 2 in 1985.
The spectrograms used for the Atlas are from the second HRTS rocket
flight, known as HRTS II, flown on 13 February 1978 aboard a Black
Brant VC rocket (NASA Flight 21.042) at White Sands, New Mexico.
During the rocket flight the slit was oriented radially from the solar
disc center through the active region McMath 15139, including a
sunspot, and across the solar limb. The Solar Pointing Aerobee Rocket
Control System (SPARCS) kept the spectrograph slit positioned on the
solar surface during the observing time of 4.2 minutes. The spatial
resolution on this flight was 2 arcsec with a time resolution from 0.2
- 20.2 sec.
The HRTS spectra were recorded on Eastman Kodak 101-01 photographic
film. Microphotometry of the spectrograms has been carried out at the
Institute of Theoretical Astrophysics in Oslo. The data reduction
includes correcting the spectral images for geometrical distortion,
Fourier filtering to remove high frequency noise, transformation to
absolute calibrated solar intensity and calibration of the wavelength scale.
The absolute intensity calibration was obtained by comparing relative
intensity scans of a quiet solar region with absolute intensities from
the Skylab S082B calibration rocket, CALROC. The resulting absolute
intensities are accurate to within 30% (rms).
The wavelength scale was established using solar lines from neutral
and singly ionized atoms as reference lines. From this wavelength
scale velocities accurate to 2 km/s can be measured over the entire
wavelength range. The measured velocities are, however, relative to
the average velocity in the chromosphere where the reference lines are formed.
The Atlas contains spectra of three different areas in the sunspot and
also of an active region and a quiet region. The selected areas are
averaged over several arcsec, ranging from 3.5 arcsec in the sunspot
to 18 arcsec in the quiet region. The transition region lines in the
Atlas show the most extreme example known of downflowing gas above a
sunspot, a phenomenon which seems to be commonly occurring in sunspots.
One of the selected areas in the sunspot is a light bridge crossing
the spot. This is the most interesting sunspot region where the
continuum radiation is enhanced and measurable throughout the HRTS
spectral range. A number of lines appear which do not occur in the
regular sunspot spectrum.
The Atlas is available in a machine readable form together with an IDL
program to interactively measure linewidths, total intensities and
solar wavelengths. See: http://zeus.nascom.nasa.gov/~pbrekke/HRTS/ | <urn:uuid:12f88ac1-72be-42aa-8db2-4d5c4a126f44> | 2.796875 | 749 | Knowledge Article | Science & Tech. | 45.379705 |
The concentration, movement and transformation of metal pollutants in Antarctic soils was investigated with two main objectives: i) to assess the fate of pollutants already present in the soil and ii) to establish experimental plots from which the rate of metal movements can be determined. The work was carried out at two different sites: Marble Point and Scott Base.

Marble Point was a construction camp in 1957-1958 but was later abandoned. Many point sources of pollution remain scattered on or near the ground surface. Eleven apparently undisturbed metal point sources of pollution were selected for sampling, including objects containing lead, copper, zinc and tin. Where possible, soil was sampled laterally and vertically at 1-5 cm intervals, to depths of up to 20 cm. Reference samples were taken for each site within close proximity, but away from the influence of the point source. A composite reference sample was taken from the perimeter of the abandoned camp and four water samples were also collected. A total of 113 soil samples adjacent to point sources were collected in the Marble Point area.

The establishment of experimental plots dosed with low levels of metal ion salt solution was carried out on the northern side of Scott Base. Twelve plots were marked out and at the centre of each plot a small hole was dug, and the soil from approximately 20 cm below the surface was removed. The sample was contaminated with a low level of a solution of either copper or zinc chloride and was replaced into the hole for resampling in the 2007/08 season. Nine samples of copper foil and zinc wool were also buried and recollected at a later date.

Marble Point soil samples were analysed to determine general soil characteristics and levels of pollutant metals. Soil and metal foil samples from the Scott Base site are stored in the Transitional Facility at the School of Geography, Geology and Environmental Science, University of Auckland. | <urn:uuid:375f0a4e-169d-470a-96bb-6bc3360250f2> | 3.171875 | 369 | Academic Writing | Science & Tech. | 41.810076
OPHIUROIDEA (Neo-Lat. nom. pl., from Gk. ophis, serpent, and oura, tail). The class of brittle stars, or sand stars, of which Ophiura is the typical genus. Ophiuroids (or ophiurans) are star-shaped, freely moving echinoderms (q.v.), with a flat, roundish or polygonal disk, from which suddenly arise five arms, which are slender, cylindrical, and contain no spacious continuation of the coelomic cavity of the disk, or hepatic caeca, while there is no vent. There are no ambulacral grooves in the arms, and the suckers are but little used in locomotion, which is mainly effected by the arms themselves, the 'feet,' or ambulacra, being thrust out laterally and acting as tactile organs. The mouth and also the madreporite are on the under side of the disk. On the ventral surface, also, are five slits which connect with a corresponding number of respiratory sacs (bursae) into which the ovaries or spermaries open. The eggs passing out through these slits are fertilized in the water, the sexes being distinct. The ophiuroids, as a rule, pass through a well-marked metamorphosis, the free-swimming young being called a pluteus. Certain forms undergo self-division, and in others development is direct. The class is divided into two orders, Ophiurida and Euryalida, the latter having the arms greatly subdivided into long curly tendrils, as in Astrophyton, the basket fish. Fossil ophiuroids begin to appear in the Silurian period, while genuine modern forms arose in the middle Trias. See BRITTLE STARS.

Both orders of the Ophiuroidea—the Ophiurida having simple arms and the Euryalida with branched arms—are represented from the Silurian onward, and the fossil forms show few important differences from their modern descendants. They are usually rare, but a few localities in the Devonian and in the Triassic and Jurassic shales and limestones have furnished well-preserved specimens in abundance. Such localities are Bundenbach (Devonian) and Solenhofen (Jurassic) in Germany, and Crawfordsville (Carboniferous) in Indiana. The more important genera are: Silurian—Eucladia, Protaster, Taeniaster; Devonian—Protaster, Ophiura; Carboniferous—Onychaster; Mesozoic—Aspidura, Geocoma, and Ophioglypha, with other modern genera. See | <urn:uuid:61889476-6ea0-4a45-a0d1-e274035ae47c> | 3.34375 | 604 | Knowledge Article | Science & Tech. | 41.911154
Two well-known enough physicists were born on August 1st: Walter Gerlach in 1889 and Douglas Osheroff in 1945.
Douglas Osheroff – congratulations to Stanford – was born to an originally Jewish Russian father and an originally Slovak mother in Washington State.
As an undergrad, he would attend Feynman's lectures at Caltech, among other things. In the late 1960s, he would already investigate helium-3 at Cornell together with David Lee and Robert Richardson. Note that helium-3 is a fermion, whether we talk about the nucleus or the whole atom, so it has to pair with another helium-3 to produce a boson which may exhibit phenomena similar to superfluidity and/or related but inequivalent Bose-Einstein condensation.
The superfluidity of helium-4 had been known from the 1937 experiments by Kapitsa, Allen, and Misener, and it's debatable whether the superfluidity of yet another substance is such a radical discovery (although the pairing is needed, much like lower temperatures than for helium-4) – but many advances in the 1960s differ from their ancestors in the 1930s by similar "details'.
The three men later shared the 1996 Nobel prize for their discovery of the superfluidity of helium-3 – for work that Osheroff did many years before he received his PhD (in 1973).
Walter Gerlach was born in a pure enough German family; he's one of the physicists who highlight the dominance of the German physics at the beginning of the 20th century. I wonder whether he was a relative of the Gerlach after whom the highest peak of Slovakia (and former Czechoslovakia) is named (well, the peak is named after the village Gerlachov but I guess that this village's name also has some "human" origin).
His PhD would be about radiation. During the First World War, he served in the German Army under Max Wien (a physicist but you shouldn't confuse him with Wilhelm Wien after whom the displacement law in the black body radiation is named) but the service was physics-oriented – he did WiFi telegraphy. ;-)
Most importantly, he began to teach in Frankfurt in 1920 and in November 1921, he together with Otto Stern discovered the spin quantization in magnetic fields via their Stern-Gerlach experiment. Much later, he would probably be among the top 10 physicists who worked on the nuclear bomb for a rogue state, the Nazi Germany, but he wasn't punished for that in any way.
But let me return to the Stern-Gerlach experiment. A furnace was shooting silver atoms into a tube with some magnetic field. The silver atoms only produced two spots on the photographic plate, in disagreement with classical physics that would predict a whole line – with the destination determined by the orientation of the magnetic moment vector.
Now, we must realize that in 1921-1922, people wouldn't have modern quantum mechanics yet. They were interpreting the results in the framework of the "old quantum theory" – Bohr and Bohr-Sommerfeld models etc. Those models had "realist" trajectories that just happened to be quantized according to an ad hoc rule.
It seems clear to me that no classical model of this kind could ever describe similar experiments in a satisfactory way. The spin is one of the observables that make the need for quantum mechanics most directly obvious. By thinking about the Stern-Gerlach experiment and/or easily generalized gedanken or real experiments carefully enough, you must be able to figure out that the values of \(J_z=\pm 1/2\) cannot be "classically determined" in advance because that would violate the rotational symmetry; a restriction saying that the projection of the spin with respect to any axis is quantized (and what holds for one axis must hold for all of them by the rotational symmetry) has no solutions.
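One way to spell out that argument (a paraphrase of mine, not the post's own wording): suppose each atom carried a fixed classical vector \(\vec J\) of length \(J\). The measured projection along an axis \(\hat n\) would be \(\vec J\cdot\hat n = J\cos\theta\), which sweeps continuously through \([-J,+J]\) as \(\hat n\) is rotated. Requiring \(\vec J\cdot\hat n \in \{+\hbar/2,-\hbar/2\}\) for every axis \(\hat n\) therefore has no solution for any fixed vector, and a classical ensemble of randomly oriented moments would paint a continuous band on the plate rather than the two discrete spots Stern and Gerlach observed.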
There were several confusing enough experiments (for a classical physicist) that made the birth of quantum mechanics kind of inevitable – the smartest guys ultimately needed three more years to find its probabilistic-and-operator foundations. It could have been faster but it could have been slower, too. | <urn:uuid:9444fa76-dc0e-4eee-b292-4ab8815a7013> | 3.375 | 870 | Personal Blog | Science & Tech. | 36.952832 |
Lightweight Directory Access Protocol
The Lightweight Directory Access Protocol (LDAP) is a directory service protocol that runs on a layer above the TCP/IP stack. It provides a mechanism used to connect to, search, and modify Internet directories.
The LDAP directory service is based on a client-server model. The function of LDAP is to enable access to an existing directory.
The data model (data and namespace) of LDAP is similar to that of the X.500 OSI directory service, but with lower resource requirements. The associated LDAP API simplifies writing Internet directory service applications.
The LDAP API is applicable to directory management and browser applications that do not have directory service support as their primary function. Conversely, LDAP is neither applicable to creating directories, nor specifying how a directory service operates.
The LDAP API documentation in the Platform Software Development Kit (SDK) is intended for experienced C and C++ programmers and Internet directory developers.
LDAP supports the C and C++ programming languages.
A familiarity with directory services and the LDAP Client/Server Model are necessary for the development with the LDAP API.
Client applications that use the LDAP API, run on Windows Vista, Windows XP, and Windows 2000. All platforms must have TCP/IP installed.
Active Directory servers that support client applications using the LDAP API include Windows Server 2008, Windows Server 2003, and Windows 2000 Server.
General information about the Lightweight Directory Access Protocol API.
Programmer's guide to using the Lightweight Directory Access Protocol API.
Reference information for LDAP.
Build date: 10/26/2012 | <urn:uuid:abb5d231-9633-4fdf-a5ca-f1fc70519ee0> | 2.78125 | 335 | Documentation | Software Dev. | 31.043445 |
Impact of changes in diffuse radiation on the global land carbon sink
Mercado, Lina M.; Bellouin, Nicolas; Sitch, Stephen; Boucher, Olivier; Huntingford, Chris; Wild, Martin; Cox, Peter M. 2009. Impact of changes in diffuse radiation on the global land carbon sink. Nature, 458, 1014-1017. doi:10.1038/nature07949
Plant photosynthesis tends to increase with irradiance. However, recent theoretical and observational studies have demonstrated that photosynthesis is also more efficient under diffuse light conditions1–5. Changes in cloud cover or atmospheric aerosol loadings, arising from either volcanic or anthropogenic emissions, alter both the total photosynthetically active radiation reaching the surface and the fraction of this radiation which is diffuse, with uncertain overall effects on global plant productivity and the land carbon sink. Here we estimate the impact of variations in diffuse fraction on the land carbon sink using a global model modified to account for the effects of variations in both direct and diffuse radiation on canopy photosynthesis. We estimate that variations in diffuse fraction, associated largely with the ‘global dimming’ period6–8, enhanced the land carbon sink by approximately one-quarter between 1960 and 1999. However, under a climate mitigation scenario for the twenty-first century in which sulphate aerosols decline before atmospheric CO2 is stabilized, this ‘diffuse-radiation’ fertilisation effect declines rapidly to near zero by the end of the twenty-first century.
Programmes: CEH Programmes pre-2009 publications > Biogeochemistry
CEH Sections: Harding (to 31.07.11)
Additional Keywords: global dimming and brightening, land carbon sink, photosynthesis
NORA Subject Terms: Ecology and Environment
Date made live: 27 May 2009 10:39
| <urn:uuid:d27e65cc-8aae-4b72-9c71-b27cceb0e357> | 2.6875 | 421 | Academic Writing | Science & Tech. | 26.524493
Make a cube out of straws and have a go at this practical
Reasoning about the number of matches needed to build squares that
share their sides.
How can the same pieces of the tangram make this bowl before and after it was chipped? Use the interactivity to try and work out what is going on!
Look at this image for a short while before turning away.
Here is a Word document containing the image.
This image is taken from the NRICH Mathematics Posters CD called "Exploring Squares" published by Virtual Image. More details of this and the "Exploring Circles" CD can be found here. | <urn:uuid:3c707d5b-2817-4e1f-a0c8-d8c9599f0068> | 3.34375 | 133 | Tutorial | Science & Tech. | 55.906923
2. Use of Optogenetics Techniques to Discover Pathway-specific Feedforward Circuits Between Thalamus and Neocortex
All brain functions involve coordinated neural activities in many brain regions. Therefore, it is important to understand how different brain regions interact with each other. One particularly important coupled system in the brain is the neocortex and the thalamus. The neocortex, the thalamus, and the axonal tracts that interconnect these structures comprise the vast majority of the mammalian brain and are crucial for sensation, perception and consciousness. Thalamocortical (TC) pathways provide the major extrinsic input to neocortex, and corticothalamic (CT) pathways are a principal source of synaptic input to thalamus. These pathways are entwined, making their study challenging by conventional electrical stimulation methods. We used cell- specific expression of channelrhodopsin-2 (ChR2), a light-sensitive cation channel, in either thalamocortical or corticothalamic projection cells to manipulate the activity on these tracts and study their effects on their target locations in mouse brain slices.
Viral delivery of ChR2
Lentiviruses carrying fusion genes for ChR2 and fluorescent proteins (pLenti-Synapsin-hChR2(H134R)-EYFP-WPRE) were injected into ventrobasal thalamic complex (VB) or the barrel cortex of ICR or GIN mice in vivo, between postnatal days 8 and 15. Typical viral titers were 10^10 IU/ml. Injection volumes were between 0.3 and 2 µl. After allowing 1–3 weeks for ChR2 expression, acute somatosensory thalamocortical or horizontal brain slices (300 µm thick) were prepared for in vitro recording and stimulation (Figure 1).
Selective labeling of TC and CT pathways
On the other hand, injections into barrel cortex produced ChR2/EYFP expression in cortical neurons, including CT projection cells and their axons within ventrobasal thalamus and thalamic reticular nucleus (TRN) (Figure 3A and B). The optical stimulation of neurons in barrel cortex generated spike responses due to their ChR2 expression (Figure 3C and D).
Effects of TC activity on cortical cells
These excitatory thalamic synapses onto cortical neurons can drive spiking in cortical inhibitory interneurons. Since these interneurons synapse make many local synapses, TC input is able to produce powerful feedforward inhibition in surrounding cells. In accordance with this, we found that laser stimulation of ChR2-expressing TC arbors nearly always produced feedforward inhibition (inhibition was observed in 63/67 cortical cells tested; Figure 5). | <urn:uuid:8a186870-1822-418a-9259-1c193b4f95d7> | 2.703125 | 576 | Academic Writing | Science & Tech. | 21.986468 |
Experimental Chemistry
To conduct the salt test on the porphyrin film, to take UV-Vis of the remaining solutions, and to measure the sodium content of the PVOH/glutaraldehyde film that was soaked in NaOH.
- The acetone that the dye film was placed in was changed.
Porphyrin Salt Test
- The porphyrin film was removed from the 2% wt H2SO4 and was placed in 50 mL of the sodium sulfate prepared last class.
- The time for the porphyrin film to change from green to pink was recorded. (took
- The film was sliced to have the width of the UV-Vis cuvette.
- The thickness of the film was measured.
- The sliced film was placed in the cuvette and a UV-Vis spectra was taken.
- The sliced film was then placed into additional 2% wt H2SO4 until it turned green. (Time to turn green =
- Once the film slice had turned green, it was placed into 50 mL of the solution of aluminum sulfate prepared last class.
- Add data and results here...
- start time for soaking in sodium sulfate was 1:32
- The old acetone that the dye film was placed in (on Wednesday) was still clear in color.
- The H2SO4 solution that the porphyrin has been soaking in since Wednesday was green (when the film was removed from it)
- At the end of 2 hours and 15 minutes, the porphyrin film was still green, so we left it to soak in the sodium sulfate solution over the weekend. UV-Vis will be taken on Monday. | <urn:uuid:81a5a5e8-18a1-4bf9-b8f9-63e16554df34> | 3.109375 | 369 | Personal Blog | Science & Tech. | 67.472268 |
Unifying gravity with the other three interactions (the electromagnetic, weak, and strong interactions) would provide a theory of everything (TOE), rather than a grand unified theory (GUT). GUTs are often seen as an intermediate step towards a TOE. And now, suddenly, on April 6th, a mystery signal at Fermilab hints at a 'technicolour' force. Last week at the Tevatron collider, scientists spotted evidence of new particles that might point to a previously unidentified force of nature, yet another force, "The Fifth Force".
I will not go too much into the physics of the finding as it is very confusing. But all last week, on holiday driving through Europe, I tried to read everything I could grab in the few reading moments I had. This is exciting and yet somewhat disappointing to me, as I was hoping that we are getting closer to a unified theory of a single force holding the universe together, and yet this news takes us further from the 'Holy Grail' of physics. In the next two paragraphs I quickly go over some of the recent history in physics.
Isaac Newton wrote down his theory of gravity in 1689, and his equations are used to this day to send space probes to the outer edges of our Solar System. It took over 200 years and the genius of Albert Einstein to discover a deeper theory, the General Theory of Relativity, which describes the force we see as gravity as being due to the bending and curving of space and time (or, to be more accurate, "space-time") by heavy objects like the Earth and Sun.
Now at atomic scale and at the temperatures common to our world, four discrete forces govern the interactions of matter - gravity, electromagnetism, the weak nuclear force, and the strong nuclear force. Each force is carried by a separate "messenger particle" unique to it and is the subject of most of the recent research works in big accelerator research. The strong force is by far the strongest of the forces, followed by the electromagnetic force, the weak force, and finally the extremely weak gravitational force. Though these four forces govern every matter interaction, a theory that unites them all is still being sought. The most recent candidate was the string theory.
So this sighting at Fermilab's Tevatron collider offers a glimpse of an unidentified particle that, should it prove to be real, will radically alter physicists' prevailing ideas about how nature works and how particles get their mass.
This line of work was started some 20 years ago by Estia Eichten of the Fermi National Accelerator Laboratory. Recently at Fermilab the work has centred on the theory known as Technicolor, which proposes the existence of a fifth fundamental force in addition to the four already known: gravity, electromagnetism, and the strong and weak nuclear forces. Its proponents explain that Technicolour is very similar to the strong force, which binds quarks together in the nuclei of atoms, only it operates at much higher energies. It is also able to give particles their mass – rendering the Higgs boson unnecessary.
The new force is released with a zoo of new particles. Lane and Eichten's model predicts that a technicolour particle called a technirho would often decay into a W boson and another particle called a technipion.
So what is the problem?
The problem is that even if technicolour is correct, it would not resolve all the questions left unanswered by the standard model as it stands. For example, one of them concerns the Big Bang: physicists assume that at the high energies found in the early universe, the fundamental forces of nature were unified into a single superforce. Supersymmetry, physicists' leading contender for a theory beyond the standard model, paves a way for the forces to unite at high energies, but technicolour does not. So more experiments are required to see which theory is true.
Basically, the Standard Model of physics, which explains how sub-atomic particles interact with the four known forces of nature – gravity, electromagnetism and the strong and weak nuclear forces – predicts that the Higgs boson, if it exists, could also explain why things have mass.
The researchers believe the anomaly in this data indicated that the undiscovered sub-atomic particle has a mass of about 150 times that of a proton. WOW - Proton is the positively-charged entity within an atom's nucleus. If this proves to be the case, it could spell the end of the idea that matter has a mass because of the existence of another kind of sub-atomic particle called the Higgs boson, the so-called "God particle" predicted by theoretical physicists but yet to be found.
So basically, 20 years later, with millions of pounds spent in many locations, we are no closer to the grand unified theory. I know that this snapshot of what happened in the collision could just be a statistical fluke – but if it is correct, then we are in trouble, as the new particle find turns physics upside-down.
"If in this experiment the signal is what we think it is, we could be on the verge of a different view to why matter has mass, whereas light doesn't," said Professor Kenneth Lane, a theoretical physicist at Boston University. "We might be seeing the signal for a new kind of nuclear interaction which we have called 'technicolour'. This scenario basically replaces the Higgs boson." | <urn:uuid:39ea5235-d15c-49c7-849f-bb3913395388> | 2.6875 | 1,120 | Truncated | Science & Tech. | 37.648411 |
Where there are no opposing forces, a moving body needs no force to keep it moving with a steady velocity ( Newton's first law of motion ).
If, however, a resultant force does act on a moving body in the direction of its motion, then it will accelerate ( Newton's second law of motion ) and the work done by the force will become converted into increased kinetic energy in the body.
In order to calculate the kinetic energy of a body of mass m moving with velocity v, we begin by supposing that the body starts from rest and is acted upon by a force F ( no friction or other forces acting ).
This force will give the body a uniform acceleration a, and the body will acquire a final velocity v after travelling a distance x. These quantities a, v and x are related by the equation
v² = u² + 2 ax
In accordance with the law of conservation of energy, the work done by the force F in pushing the body through distance x will become converted into kinetic energy of motion in the body.
work done = force × distance
= F × x
F = ma
therefore, substituting for F,
work done = ma × x .......................( 1 )
Applying the equation v² = u² + 2 ax and remembering that u = 0
v² = 0 + 2 ax
whence a = v² / 2x
substituting this value of a in equation ( 1 ), we obtain,
work done = m × v² / 2x × x = kinetic energy
or kinetic energy ( k.e.) = 1/2 m v²
This expression, 1/2 m v², gives the energy in joules or ergs according to the system of units used.
Worked examples :
1. A stone of mass 500 g is thrown vertically upwards with a velocity of 15 m /s.
Find : ( a) the potential energy at greatest height; ( b ) the kinetic energy on reaching the ground ( Assume g = 10 m / s ² and neglect air resistance. )
To solve this problem we use the equation of motion, v² = u² + 2 ax, replacing a by g since we are dealing with gravitational acceleration. Thus,
v² = u² + 2 gx
in which u = 15 m / s
v = 0 m / s
g = - 10 m / s ²
hence, by substitution,
v² = 15² + 2 ( -10) × x
whence x = -15² / ( 2 × (-10) ) = 11.25 m
Potential energy = weight × height
= mg × x ( m in kg, of course)
= 0.5 × 10 × 11.25 = 56.25 J
In accordance with the principle of conservation of energy, the whole of this potential energy becomes converted to kinetic energy when the stone reaches the ground again.
kinetic energy on reaching the ground = 56.25 J.
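A quick numerical cross-check of this worked example (a short Python sketch added for illustration; it is not part of the original text):

    # Worked example 1: stone of mass 0.5 kg thrown up at 15 m/s, g = 10 m/s^2.
    m, u, g = 0.5, 15.0, 10.0

    h  = u**2 / (2 * g)          # greatest height, from v^2 = u^2 - 2*g*h with v = 0
    pe = m * g * h               # potential energy at the greatest height
    ke = 0.5 * m * u**2          # kinetic energy on reaching the ground again
    print(h, pe, ke)             # 11.25 (m), 56.25 (J), 56.25 (J)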
2. During a shunting operation, a truck of total mass 15 metric tons ( t ) moving at 1 m / s collides with a stationary truck of mass 10 t. If the two trucks are automatically connected so that they move off together, find their velocity. Also calculate the kinetic energy of the trucks: (a) before; (b) after collision. Explain why these are not equal.
Solution
By the principle of conservation of momentum,
momentum before collision = momentum after collision
Let v = common velocity after collision, then using t m / s units of momentum,
( 15 × 1 ) + ( 10 × 0 ) = ( 15 + 10 ) × v
or v = 15 / 25 = 0.6 m / s
Using the formula K.E = 1/2 m v² ( m in kg; v in m / s )
K.E. before collision = 1/2 m v² = 1/2 × 15000 × 1² = 7500 J
K.E. after collision = 1/2 m v² = 1/2 × 25000 × 0.6² = 4500 J
In accordance with the principle of conservation of energy, the total energy after collision is the same as that before.
Before collision the whole of the energy is kinetic in the moving truck, but when collision occurs part of this becomes converted into internal energy in both trucks ( k.e. and p.e. of molecules) and part into sound energy ( k.e. and p.e. of air molecules). The remainder is left as mechanical kinetic energy in both trucks. Consequently, mechanical kinetic energy after collision is less than mechanical kinetic energy before collision.
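Again, a short Python cross-check of the numbers (added for illustration, not part of the original text):

    # Worked example 2: a 15 t truck at 1 m/s couples to a stationary 10 t truck.
    m1, v1 = 15000.0, 1.0        # moving truck (kg, m/s)
    m2, v2 = 10000.0, 0.0        # stationary truck

    v = (m1 * v1 + m2 * v2) / (m1 + m2)     # conservation of momentum
    ke_before = 0.5 * m1 * v1**2
    ke_after  = 0.5 * (m1 + m2) * v**2
    print(v, ke_before, ke_after)           # 0.6 (m/s), 7500 (J), 4500 (J)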
3. Water is pumped through a hose-pipe at the rate of 75 litres/min and issues from the nozzle with a velocity of 20 m / s.
Find: ( a ) the force of reaction on the nozzle in newtons; ( b) the useful power of the pump in watts. ( Assume 1 litre of water has a mass of 1 kg ).
The reaction on the nozzle is equal to the force required to set the water in motion. ( Newton's third law of motion )
We know that - I hope - F = ma ( = the rate of change in momentum)
this may be written F = mv / t
in which m = 75 kg
t = 60 s
Substituting these values we obtain,
reaction on the nozzle = F = 75 × 20 / 60 = 25 N
The useful power of the pump may be found from the kinetic energy of the issuing water.
kinetic energy supplied per second = 1/2 × ( mass of water per second ) × velocity²
= 1/ 2 × 75/60 × 20²
=250 J /s
i.e., useful power = 250 W. | <urn:uuid:42b2a718-c314-4b11-870f-fe84fa891933> | 4.1875 | 1,219 | Tutorial | Science & Tech. | 81.537744 |
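The same kind of check for this example in Python (added for illustration only):

    # Worked example 3: 75 litres/min of water leaving the nozzle at 20 m/s.
    mass_per_s = 75.0 / 60.0        # kg of water per second (1 litre = 1 kg)
    v = 20.0                        # nozzle velocity, m/s

    force = mass_per_s * v          # rate of change of momentum, N
    power = 0.5 * mass_per_s * v**2 # kinetic energy supplied per second, W
    print(force, power)             # 25.0 (N), 250.0 (W)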
The giant cluster of elliptical galaxies in the center of this image contains so much dark matter mass that its gravitational field bends light. This means that for distant galaxies in the background, the cluster acts as a magnifying glass, bending and concentrating the distant object’s light towards Hubble. These gravitational lenses are one tool astronomers can use to extend Hubble’s vision beyond what it would normally be capable of seeing.
NASA/ESA/J. Richard (CRAL) and J.P. Kneib (LAM); acknowledgement: Marc Postman (STScI) | <urn:uuid:84f5d03d-c707-46eb-8e32-689502bf12cb> | 3.0625 | 118 | Knowledge Article | Science & Tech. | 56.951983 |
Cars kill more animals in protected areas
Other studies have linked roadkill numbers to factors such as climate or the animal’s activity. For example, amphibians often get squashed on their way to breeding ponds during the rainy season. A team of European scientists wondered if the protection status of the area also made a difference. After all, people often visit wildlife refuges, and with more people comes more traffic.
The researchers surveyed 4,920 kilometers of roads in 41 counties across Catalonia, Spain. The team performed the surveys in spring and autumn 2002, scanning for animal carcasses from a slow-moving car. Next, the researchers tried to determine if roadkill numbers were linked to the climate, season, or protection status of the area.
The team recorded 2,013 dead animals on the roads, more than half of which were amphibians. Mammals, birds, and reptiles were also found. While the climate did not seem to affect the number of roadkills, the protection status did. The higher the level of protection, the more roadkill victims the team found in that area, according to the study in Biodiversity and Conservation.
The authors speculate that protected areas may simply have more animals crossing the roads. But these refuges can also draw more tourists and road development. In one park, the researchers found the most roadkills during peak tourist season. Managers could try to avert these accidents by directing amphibians to new breeding sites, the team suggests. — Roberta Kwok | 6 August 2012
Source: Garriga, N. et al. 2012. Are protected areas truly protected? The impact of road traffic on vertebrate fauna. Biodiversity and Conservation doi: 10.1007/s10531-012-0332-0.
Image © shutterstock.com | <urn:uuid:41c96c32-ffd3-47b7-8e6f-31e1cbf183d0> | 3.015625 | 372 | Truncated | Science & Tech. | 52.773478 |
It's long been accepted by biologists that environmental factors cause the diversity-or number-of species to increase before eventually leveling off. Some recent work, however, has suggested that species diversity continues instead of entering into a state of equilibrium.
But new research on lizards in the Caribbean not only supports the original theory that finite space, limited food supplies, and competition for resources all work together to achieve equilibrium; it builds on the theory by extending it over a much longer timespan.
The research was done by Daniel Rabosky of the University of California, Berkeley and Richard Glor of the University of Rochester who studied patterns of species accumulation of lizards over millions of years on the four Caribbean islands of Puerto Rico, Jamaica, Hispaniola, and Cuba. Their paper is being published December 21 in the journal, Proceedings of the National Academy of Sciences.
Glor and Rabosky focused on species diversity-the number of distinct species of lizards-not the number of individual lizards.
"Geographic size correlates to diversity," said Glor. "In general, the larger the area, the greater the number of species that can be supported. For example, there are 60 species of Anolis lizards on Cuba, but far fewer species on the much smaller islands of Jamaica and Puerto Rico." There are only 6 species on Jamaica and 10 on Puerto Rico.
Ecologists Robert MacArthur of Princeton University and E.O. Wilson of Harvard University established the theory of island biogeography in the 1960s to explain the diversity and richness of species in restricted habitats, as well as the limits on the growth in number of species. | <urn:uuid:eacd493e-e2b1-4dd9-936e-eacd05221dae> | 3.5 | 331 | Knowledge Article | Science & Tech. | 29.89977 |
Arctic ice set to match all-time record low
Satellite measurements reveal that volumes have fallen consistently over past 30 years
Steve Connor is the Science Editor of The Independent. He has won many awards for his journalism, including five-times winner of the prestigious British science writers’ award; the David Perlman Award of the American Geophysical Union; twice commended as specialist journalist of the year in the UK Press Awards; UK health journalist of the year and a special merit award of the European School of Oncology for his investigative journalism. He has a degree in zoology from the University of Oxford and has a special interest in genetics and medical science, human evolution and origins, climate change and the environment.
Wednesday 07 September 2011
The area of the Arctic that is covered by floating sea ice at the end of this summer's period of melting is likely to match the all-time record low of 2007, scientists said yesterday.
Some researchers believe that the actual volume of sea ice in the Arctic has already fallen to a record minimum this summer. The extent of the Arctic covered by sea ice this summer has also continued to decline – a trend seen since 1979 when the first satellite measurements were collected.
Although satellites are good at measuring the surface area of ocean that is covered by the floating sea ice, it is not so easy to assess ice volume, which requires accurate measurements of ice thickness over wide regions.
Satellites have produced clear evidence that the sea-ice extent – the area covered by at least 15 per cent of ice – has fallen consistently and significantly each summer over the past 30 years. Since 1979, sea ice extent in summer has fallen by around 30 per cent, according to satellite data.
Walt Meier, of the US National Snow and Ice Data Centre in Boulder, Colorado, said that at the moment the Arctic sea ice is on track to be second or third lowest in terms of sea-ice extent, although there is still about another week or so until the summer melt period finally comes to an end.
"A lot still depends on the weather. If a warm front comes through, there could still be some rapid melting. But at present we think it could be close to or as low as the 2007 record minimum," Dr Meier said.
The sea ice in the Arctic goes through annual cycles of melting in summer and reforming each winter. However, as average temperatures in the Arctic region have increased in recent decades – faster than in most other regions of the world – summer sea ice has disappeared faster than predicted, and winter ice has not reformed as readily as it once did.
In 2007, there was a "perfect storm" of driving winds that piled the sea ice up against the Greenland coastline, and high pressure that removed cloud cover at the height of the summer season, creating ideal conditions for the melting of the sea ice. This year the sea ice is more dispersed, but in terms of total surface area covered by ice it probably ranks close or equal to 2007, Dr Meier said.
The last four summers have experienced the four lowest minima since satellite readings were first gathered and eight of the ten lowest summers have occurred in the past decade, he said. At the same time, there has been a marked decrease in thick "multi-year" sea ice that is older than five years, and an increase in the proportion of thinner, younger ice which is more likely to melt away completely in summer.
Scientists at the University of Washington in Seattle estimated that the actual volume of sea ice in the Arctic is already at an all-time low, lower even than in 2007, because then the ice that was left was older, multi-year ice several metres thick. However, estimating ice volume is notoriously difficult.
With a healthy appetite for uranium and petroleum, this family of bacteria cleans up nuclear waste and other toxic materials. A team of researchers has discovered exactly how they use their arms to do this.
After hearing the stories about the work that leaders from the gulf coast and their organizations have done, it’s clear to me that they are changing the paradigm of gulf coast recovery -- changing the way buildings are developed in the gulf and creating a generation of green builders in New Orleans who work closely with low-income communities.
As awareness builds for clean-burning cookstoves in the developing world, the Department of Energy is working with other government agencies and NGOs to make stoves cleaner, more efficient and more affordable.
Check out this epic demolition video from the Hanford Site in Washington state. But it's more than just great footage -- this represents important progress in cleaning up the environmental legacy of one of America's most famous scientific undertakings -- the Manhattan Project.
Glass discovered in a Roman shipwreck could unlock more answers about how glass stands the test of time for millennia to come -- research that is very relevant to vitrification, an effective method for storing nuclear waste in glass.
For thousands of years, farmers have surveyed their fields and eyed the sky, hoping for good weather and a bumper crop. And when they found particular plants that fared well even in bad weather, were especially prolific, or resisted disease that destroyed neighboring crops, they naturally tried to capture those desirable traits by crossbreeding them into other plants. But it has always been a game of hit or miss. Unable to look inside the plants and know exactly what was producing their favorable characteristics, one could only mix and match plants and hope for the best.
This article was originally published with the title Back to the Future of Cereals. | <urn:uuid:5e555722-ec69-47cc-afc0-c6232a473b08> | 3.328125 | 126 | Truncated | Science & Tech. | 45.299545 |
Title: Foliar moisture content of Pacific Northwest vegetation and its relation to wildland fire behavior.
Author: Agee, James K.; Wright, Clinton S.; Williamson, Nathan; Huff, Mark H.
Source: Forest Ecology and Management. 167: 57-66
Description: Foliar moisture was monitored for five conifers and associated understory vegetation in Pacific Northwest forests. Decline in foliar moisture of new foliage occurred over the dry season, while less variation was evident in older foliage. Late-season foliar moisture ranged from 130 to 170%. In riparian-upland comparisons, the largest differences were found for understory vegetation, with less variation evident for overstory trees. Minimum foliar moisture values of 100-120% are appropriate to use in crown fire risk assessment for the Pacific Northwest.
Keywords: Foliar moisture, Pacific northwest, Wildland fire behavior, Crown fire
- This article was written and prepared by U.S. Government employees on official time, and is therefore in the public domain.
Agee, James K.; Wright, Clinton S.; Williamson, Nathan; Huff, Mark H. 2002. Foliar moisture content of Pacific Northwest vegetation and its relation to wildland fire behavior.. Forest Ecology and Management. 167: 57-66. | <urn:uuid:08650bb4-8ff2-4297-a1f4-27fd03b0f588> | 2.796875 | 325 | Truncated | Science & Tech. | 41.001028 |
The VMware Infrastructure management object model is a complex system of data structures designed to provision, manage, monitor, and control the life-cycle of all components that can possibly comprise virtual infrastructure. The VMware Infrastructure management architecture is patterned after Java's JMX (Java Management Extensions) infrastructure, in which objects are used to instrument other objects on a remote server. The data structures defined for the object model include both managed object types, as described on this page, and data object types.
A managed object type is a core data structure of the server-side object model. Instances of various managed object types are referred to generically as managed objects, of which there are two broad categories:
Managed objects can contain both properties and operations. An operation is Web-services terminology for what might be called a method in other programming languages, such as Java. (In fact, the word method is used in the API Reference rather than operation, but you may see the two words used interchangeably.)
Regardless of these subtle language differences, working with the server from a client involves a few common steps, starting with connecting to the server, authenticating user-account credentials, and obtaining a session. (See the Programming Guide for details).
After connecting to the server system, the client application must then obtain a reference to the ServiceInstance managed object. This figure shows the ServiceInstance and some of its associated data objects. (In this figure, the property names are not shown: only the data type of the associated property).
The MOR in the figure above is an abbreviation for ManagedObjectReference, a data object type that provides a reference to server-side objects for use by client applications. See the Programming Guide for more information.
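As a rough illustration of the connect–authenticate–retrieve sequence described above, here is a minimal Python sketch using the open-source pyVmomi bindings for this Web-services API. The use of pyVmomi, the host name, and the credentials are assumptions added for illustration only; this page itself describes the server-side object model, not any particular client library.

    # Minimal sketch (assumes the pyVmomi bindings; host and credentials are hypothetical).
    from pyVim.connect import SmartConnect, Disconnect

    # Connect and authenticate; SmartConnect returns the ServiceInstance managed object.
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator",
                      pwd="secret")

    content = si.RetrieveContent()   # ServiceContent data object reached via ServiceInstance
    print(content.about.fullName)    # basic information about the server
    Disconnect(si)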
All managed object types are listed in the frame at the left of this page. Click a name to display the reference documentation for the managed object.
To quickly find any entry, start typing its name in the Quick Index.
Once all the writes are performed, the document.close() method causes any output written to the output stream to be displayed.
Note: If a document already exists in the target, it will be cleared. If this method has no arguments, a new window (about:blank) is displayed.
| Parameter | Description |
|---|---|
| MIMEtype | Optional. The type of document you are writing to. Default value is "text/html" |
| replace | Optional. If set, the history entry for the new document inherits the history entry from the document which opened this document |
The open() method is supported in all major browsers.
Open an output stream, add some text, then close the output stream:
Open an output stream (in a new window; about:blank), add some text, then close the output stream:
On a Sunday morning early in January, about two dozen prominent physicists gathered behind closed doors at the California Institute of Technology to ponder the state of their craft.
American physicists were not exactly sitting on the sidelines last July when CERN announced the probable discovery of the long-sought Higgs boson, the key to understanding the origin of mass and life in the universe.
The United States contributed $531 million to building and equipping the Large Hadron Collider, the multibillion-dollar European machine with which the discovery was made. About 1,200 Americans work at CERN, including Joe Incandela from the University of California, Santa Barbara, who led one of the two teams making the July announcement.
But as science goes forward, American particle physicists are wondering what role, if any, they will play in the future in high-energy physics — the search for the fundamental particles and forces of nature — a field they once dominated.
“There is enormous angst in the field,” said Michael S. Turner, a physicist and cosmologist at the University of Chicago, who attended the Caltech meeting.
After canceling the Superconducting Super Collider, which would have been the world’s most powerful physics machine, in 1993, and shutting down Fermilab’s Tevatron in 2011, the United States no longer owns the tool of choice in physics, a particle collider.
Fermilab’s biggest project going forward is a plan to shoot a beam of neutrinos, ghostlike particles, 800 miles through the earth to a detector at the old Homestake gold mine in Lead, S.D., to investigate their shape-shifting properties.
The results could bear on one of the deep-seated and intractable problems in cosmology, namely why the universe is made of matter and not antimatter, but there is not enough money in the project’s budget to put the detector below ground, at the bottom of the mine, where it would be sheltered from cosmic rays and able to observe neutrinos from distant supernova explosions, instead of on the surface.
Americans who want to taste the thrills of the frontiers of high-energy physics have to cast their eyes east to CERN’s collider, which is set to dominate the field for the next 20 years. Or they might look west, to Japan, which is budgeting about $120 billion in stimulus money to help recover from the disaster at the Fukushima nuclear power plant after the earthquake and tsunami in 2011 and wants to use some of it to host the next big machine, the International Linear Collider, which would be 20 miles long and could manufacture Higgs bosons for precision study.
In February, in a ceremony at a physics conference in Vancouver, British Columbia, the team that had been designing the collider for the last decade handed over the plans to a new consortium, the Linear Collider Collaboration, directed by Lyn Evans, who built the Large Hadron Collider at CERN. Dr. Evans said the next big highlight of his career would be seeing construction start in the next couple of years in Japan.
How desperately does the United States want to participate in these projects, from which the next great advances in our understanding of the universe could come?
“Our issue is that Europe and Asia are contemplating or have made $10 billion investments in particle physics,” explained Jim Siegrist, associate director for high-energy physics at the Department of Energy, who says that kind of money is not going to be forthcoming in the United States. “How we compete is a problem for us.”
Physicists are hoping to have some answers by this summer when they convene in Minneapolis for Snowmass, a planning conference named after the Colorado resort where it used to be held until the place got too expensive. In the meantime there are only questions, like what is the country’s future relationship with CERN?
Read More: Here | <urn:uuid:61e31c54-c1e4-4d17-a5bf-fb1dde5b4db7> | 2.90625 | 822 | Truncated | Science & Tech. | 33.502669 |
Returns a new mem object referring to nwords (an int) of newly allocated and cleared memory. Each word is either 1, 2, or 4 bytes as specified by wordz (an int, default 1). Indexing of mem objects performs the obvious operations, and thus pointers work too.
Returns a copy of old. If old is an intrinsically atomic type such as an int or string, the new will be the same object as the old. But if old is an array, set, or struct, a copy will be returned. The copy will be a new non-atomic object (even if old was atomic) which will contain exactly the same objects as old and will be equal to it (that is ==). If old is a struct with a super struct, new will have the same super (exactly the same super, not a copy of it).
This function can be used to include data in a program source file which is out-of-band with respect to the normal parse stream. But to do this it is necessary to know up to what character in the file in question the parser has consumed.
In general: after having parsed any simple statement the parser will have consumed up to and including the terminating semicolon, and no more. Also, after having parsed a compound statement the parser will have consumed up to and including the terminating close brace and no more. For example:
static help = gettokens(currentfile(), "", "!")
;This is the text of the help message. It follows exactly after
the ; because that is exactly up to where the parser will have
consumed. We are using the gettokens() function (as described
below) to read the text.
!
static otherVariable = "etc...";
static s = [struct a = 1, b = 2, c = 3];
static v, k;
forall (v, k in s)
    printf("%s=%d\n", k, v);
del(s, "b");
printf("\n");
forall (v, k in s)
    printf("%s=%d\n", k, v);
Enters an internal event loop and never returns (but can be broken out of with an error). The exact nature of the event loop is system specific. Some dynamically loaded modules require an event loop for their operation.
Causes the interpreter to finish execution and exit. If no parameter, the empty string or NULL is passed the exit status is zero. If an integer is passed that is the exit status. If a non-empty string is passed then that string is printed to the interpreter's standard error output and an exit status of one used. This is implementation dependent and may be replaced by a more general exception mechanism. Avoid.
Opens the named file for reading or writing according to mode and returns a file object that may be used to perform I/O on the file. The mode string is the same as in C and is passed directly to the C library fopen function. If mode is not specified, "r" is assumed.
Formats a string based on fmt and args as per sprintf (below) and outputs the result to file. See sprintf. Changes to ICI's printf have made fprintf redundant and it may be removed in future versions of the interpreter. Avoid.
Reads a line of text from file and returns it as a string. Any end-of-line marker is removed. Returns NULL upon end of file. If file is not given the current value of stdin in the current scope is used.
Seps must be a string. It is interpreted as a set of characters which do not from part of the token. Any leading sequence of these characters is first skipped. Then a sequence of characters not in seps is gathered until end of file or a character from seps is found. This terminating character is not consumed. The gathered string is returned, or NULL if end of file was encountered before any token was gathered.
If seps is a string, it is interpreted as a set of characters, any sequence of which will separate one token from the next. In this case leading and trailing separators in the input stream are discarded.
forall (token in gettokens(currentfile()))
    printf("<%s>", token)
;This is my line of data.
printf("\n");
forall (token in gettokens(currentfile(), ':', "*"))
    printf("<%s>", token)
;:abc::def:ghi:*
printf("\n");
gsub performs text substitution using regular expressions. It takes the first parameter, matches it against the second parameter and then replaces the matched portion of the string with the third parameter. If the second parameter is a string it is converted to a regular expression as if the regexp function had been called. Gsub does the replacement multiple times to replace all occurrences of the pattern. It returns the new string formed by the replacement. If there is no match this is the original string. The replacement string may contain the special sequence "\&" which is replaced by the string that matched the regular expression. Parenthesized portions of the regular expression may be matched by using \n where n is a decimal digit.
Returns a string formed from the concatenation of elements of array. Integers in the array will be interpreted as character codes; strings in the array will be included in the concatenation directly. Other types are ignored.
Parses the code contained in the file named by the string into the scope. If scope is not passed the current scope is used. Include always returns the scope into which the code was parsed. The file is opened by calling the current definition of the ICI fopen() function so path searching can be implemented by overriding that function.
If start (an integer) is positive the sub-interval starts at that offset (offset 0 is the first element). If start is negative the sub-interval starts that many elements from the end of the string (offset -1 is the last element, -2 the second last etc).
Returns an array of all the keys from struct. The order is not predictable, but is repeatable if no elements are added or deleted from the struct between calls and is the same order as taken by a forall loop.
Returns a memory object which refers to a particular area of memory in the ICI interpreter's address space. Note that this is a highly dangerous operation. Many implementations will not include this function or restrict its use. It is designed for diagnostics, embedded systems and controllers. See the alloc function above.
Returns a file, which when read will fetch successive bytes from the given memory object. The memory object must have an access size of one (see alloc above). The file is read-only, and the mode, if passed, must be a read mode.
If x is an int or float, it is returned directly. If x is a string it will be converted to an int or float depending on its appearance; applying octal and hex interpretations according to the normal ICI source parsing conventions. (That is, if it starts with a 0x it will be interpreted as a hex number, else if it starts with a 0 it will be interpreted as an octal number, else it will be interpreted as a decimal number.)
Parses source in a new variable scope, or, if scope (a struct) is supplied, in that scope. Source may either be a file or a string, and in either case it is the source of text for the parse. If the parse is successful, the variables scope structure of the sub-module is returned. If an explicit scope was supplied this will be that structure.
If scope is not supplied a new struct is created for the auto variables. This structure in turn is given a new structure as its super struct for the static variables. Finally, this structure's super is set to the current static variables. Thus the static variables of the current module form the externs of the sub-module.
In the first case the file will eventually be closed by garbage collection, but exactly when this will happen is unpredictable. The underlying system may only allow a limited number of simultaneous open files. Thus if the program continues to open files in this fashion a system limit may be reached before the unused files are garbage collected.
Executes a new process, specified as a shell command line as for the system function, and returns a file that either reads or writes to the standard input or output of the process according to mode. If mode is "r", reading from the file reads from the standard output of the process. If mode is "w", writing to the file writes to the standard input of the process. If mode is not specified it defaults to "r".
Formats a string based on fmt and args as per sprintf (below) and outputs the result to the file or to the current value of the stdout variable in the current scope if the first parameter is not a file. The current stdout must be a file. See sprintf.
Returns a compiled regular expression derived from string This is the method of generating regular expressions at run-time, as opposed to the direct lexical form. For example, the following three expressions are similar:
except that the middle form computes the regular expression each time it is executed. Note that when a regular expression includes a # character the regexp function must be used, as the direct lexical form has no method of escaping a #.
The optional second parameter is a bit-set that controls various aspects of the compiled regular expression's behaviour. This value is passed directly to the PCRE package's regular expression compilation function. Presently no symbolic names are defined for the possible values and interested parties are directed to the PCRE documentation included with the ICI source code.
Returns a compiled regular expression derived from string that is case-insensitive. I.e., the regexp will match a string regardless of the case of alphabetic characters. Literal regular expressions that perform case-insensitive matching may also be constructed using the special PCRE notation for this purpose: the settings of PCRE_CASELESS, PCRE_MULTILINE, PCRE_DOTALL, and PCRE_EXTENDED can be changed from within the pattern by a sequence of Perl option letters enclosed between "(?" and ")". The option letters are i, m, s, and x respectively.
Returns the current scope structure. This is a struct whose base element holds the auto variables, the super of that hold the statics, the super of that holds the externs etc. Note that this is a real reference to the current scope structure. Changing, adding and deleting elements of these structures will affect the values and presence of variables in the current scope.
If a replacement is given, that struct replaces the current scope structure, with the obvious implications. This should clearly be used with caution. Replacing the current scope with a structure which has no reference to the standard functions also has the obvious effect.
Set the input/output position for a file and return the new I/O position, or -1 if an error occurred. The arguments are the same as for the C library's fseek function. If the file object does not support setting the I/O position, or the seek operation fails, an error is raised.
Files are, in general, system dependent. This is the only standard routine which opens a file. But on systems that support byte stream files, the function fopen will be set to the most appropriate method of opening a file for general use. The interpretation of mode is largely system dependent, but the strings "r", "w", and "rw" should be used for read, write, and read-write file access respectively.
Sort the content of the array using the heap sort algorithm with func as the comparison function. The comparison function is called with two elements of the array as parameters, a and b . If a is equal to b the function should return zero. If a is less than b , -1, and if a is greater than b , 1.
Return a formatted string based on fmt (a string) and args. Most of the usual % format escapes of ANSI C printf are supported. In particular; the integer format letters diouxXc are supported, but if a float is provided it will be converted to an int. The floating point format letters feEgG are supported, but if the argument is an int it will be converted to a float. The string format letter, s is supported and requires a string. Finally the % format to get a single % works.
Returns a short textual representation of any. If any is an int or float it is converted as if by a %d or %g format. If it is a string it is returned directly. Any other type will returns its type name surrounded by angle brackets, as in <struct>.
Returns a new structure. This is the run-time equivalent of the struct literal. If there are an odd number of arguments the first is used as the super of the new struct; it must be a struct. The remaining pairs of arguments are treated as key and value pairs to initialise the structure with; they may be of any type. For example:
Sub performs text substitution using regular expressions. It takes the first parameter, matches it against the second parameter and then replaces the matched portion of the string with the third parameter. If the second parameter is a string it is converted to a regular expression as if the regexp function had been called. Sub does the replacement once (unlike gsub). It returns the new string formed by the replacement. If there is no match this is original string. The replacement string may contain the special sequence "\&" which is replaced by the string that matched the regular expression. Parenthesized portions of the regular expression may be matched by using \ n where n is a decimal digit.
Returns the current super struct of struct, and, if replacement is supplied, sets it to a new value. If replacement is NULL any current super struct reference is cleared (that is, after this struct will have no super).
Converts between calendar time and arithmetic time. An arithmetic time is expressed as a signed float time in seconds since 0:00, 1st Jan 2000 UTC. The calendar time is expressed as a structure with fields revealing the local (including current daylight saving adjustment) calendar date and time. Fields in the calendar structure are:
Returns a representation of the call stack of the current program at the time of the call. It can be used to perform stack tracebacks and related debugging operations. The result is an array of structures, each of which is a variable scope (see scope) structure of successively deeper nestings of the current function nesting.
Blocks (waits) until an event indicated by any of its arguments occurs, then returns that argument. The interpretation of an event depends on the nature of each argument. A file argument is triggered when input is available on the file. A float argument waits for that many seconds to expire, an int for that many millisecond (they then return 0, not the argument given). Other interpretations are implementation dependent. Where several events occur simultaneously, the first as listed in the arguments will be returned. | <urn:uuid:f11a8cfa-8c7b-46c1-8813-a5e0a601b337> | 2.78125 | 3,144 | Documentation | Software Dev. | 55.215824 |
Okay, lemme do this for Upriver:
Radius of the sun: 7 × 10^8 m
Mass of the sun: 2 × 10^30 kg
Average density: 1410 kg m^-3
Density of iron: 8 × 10^3 kg m^-3
So we take a shell of iron of thickness D and calculate how heavy it is.
Volume is 4 pi R^2 D (when the thickness is much smaller than the radius of the Sun).
V = 4 pi R^2 D ≈ 6 × 10^18 D m^3 = A D
Knowing the density of iron and the volume with parameter D and the mass of the Sun we can get an estimate of how thick the iron layer can be:
D = M / (A × rho_Fe) = 2 × 10^30 / (6 × 10^18 × 8 × 10^3) ≈ 41 × 10^6 m
Well, here we see that the iron shell in the Sun is only 6% of the radius of the Sun.
I assume that the inside of the shell is filled with cheese :-) | <urn:uuid:e37e524f-6450-4108-bfe6-2d8a14ba626e> | 3 | 209 | Comment Section | Science & Tech. | 65.658846 |
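For anyone who wants to redo the arithmetic, a small Python sketch (added here, not part of the original post) gives the same order of magnitude:

    import math

    R_sun  = 7e8        # radius of the Sun, m
    M_sun  = 2e30       # mass of the Sun, kg
    rho_fe = 8e3        # density of iron, kg/m^3

    A = 4 * math.pi * R_sun**2      # shell surface area, roughly 6e18 m^2
    D = M_sun / (A * rho_fe)        # shell thickness if all the mass were iron
    print(D, D / R_sun)             # about 4.1e7 m, i.e. roughly 6% of the solar radius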
Cosmic strings are theoretical fault lines in the universe, defective links between different regions of space created in the moments after the Big Bang. And they might be theoretical no longer - distant quasars show the fingerprints of these strings.
Compared to cosmic strings, black holes seem downright sensible. These strings - no relation to the subatomic strings of theoretical physics - are one-dimensional objects, meaning they have length, but no height or width. They are defects in the fabric of the universe, a byproduct of the universe cooling in the first instants after the Big Bang. The easiest way to think about these strings is to see them as the cosmic equivalent of the cracks that form in ice over a frozen lake.
Of course, that doesn't capture the full measure of their one-dimensional weirdness. Since they have no width or height, they are incomprehensibly narrow, with a diameter that would make even a tiny photon look fat. They're also dense, as a string that's even a mile long would weigh considerably more than Earth. These strings expanded right along with the universe, ultimately stretching across the entire known universe in a more or less straight line, or forming massive rings many thousands of times bigger than our galaxy.
We've not yet directly observed these strings, but researchers at the University of Buffalo say they've found clear indirect proof. They studied 355 quasars - incredibly bright galaxies with super-massive black holes at their center - at the furthest corners of the observable universe. All quasars emit massive energy jets pointed in a particular direction, and through very careful study it's possible to figure out the directions of the jets.
183 of those quasar jets lined up to form a pair of enormous rings in the sky, suggesting two massive circular structures exist - or had existed - to orient the direction of the jets. The only known candidates for such colossal structures are cosmic strings, providing compelling indirect evidence for them. If we confirm the existence of cosmic strings, it will greatly improve our understanding of the formation of the earliest galaxies.
This isn't clinching proof - some scientists, like Arizona State's Tanmay Vachaspati, are skeptical cosmic strings that formed nanoseconds after the Big Bang could last long enough after the Big Bang to affect quasars in this way. But this new hypothesis provides testable predictions to further explore the existence of these strings, and these quasar rings might eventually prove to be for cosmic strings what Cygnus X-1 was for black holes. | <urn:uuid:6c4c6ca9-5d25-44b6-85b2-1a324d27b225> | 3.1875 | 511 | Nonfiction Writing | Science & Tech. | 39.987471 |
This article only skims the surface of Galois theory and should probably be accessible to a 17 or 18 year old school student with a strong interest in mathematics.
The binary operation * for combining sets is defined as the union of two sets minus their intersection. Prove the set of all subsets of a set S together with the binary operation * forms a group.
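For a small S the group axioms can be checked by brute force; the following Python sketch (an illustrative check, not a proof) does this for the operation * described above, i.e. the symmetric difference:

    from itertools import combinations

    S = {1, 2, 3}
    subsets = [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

    def star(a, b):
        return a ^ b                  # (A union B) minus (A intersect B)

    e = frozenset()                   # the identity element is the empty set
    assert all(star(a, b) in subsets for a in subsets for b in subsets)      # closure
    assert all(star(star(a, b), c) == star(a, star(b, c))
               for a in subsets for b in subsets for c in subsets)           # associativity
    assert all(star(a, e) == a for a in subsets)                             # identity
    assert all(star(a, a) == e for a in subsets)                             # each subset is its own inverse
    print("group axioms hold for all", len(subsets), "subsets")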
An environment for exploring the properties of small groups. | <urn:uuid:0d761ffe-41cb-43fa-928c-c98ab19f6ebd> | 3.09375 | 89 | Knowledge Article | Science & Tech. | 51.560565 |
Copyright © University of Cambridge. All rights reserved.
'Difference Dynamics' printed from http://nrich.maths.org/
This iterative process produces a chain of different sequences which is actually a sequence of sequences. Start with short sequences of small numbers so that you can easily produce a chain of sequences and notice patterns. Look out for sequences that have already occurred earlier in your chain.
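A small Python sketch of one possible version of the process. This assumes the rule is to replace each sequence by the absolute differences of neighbouring terms, wrapping around at the end; the rule itself is stated elsewhere on the problem page, so treat this purely as an illustration:

    def step(seq):
        n = len(seq)
        return [abs(seq[i] - seq[(i + 1) % n]) for i in range(n)]

    chain, seq = [], [3, 7, 2]        # a short sequence of small numbers
    while seq not in chain:           # stop as soon as a sequence repeats
        chain.append(seq)
        seq = step(seq)
    print(chain + [seq])              # the chain of sequences, ending at the first repeat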
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2001 September 21
Explanation: Black hole candidate XTE J1118+480 is known to roam the halo of our Milky Way Galaxy. This exotic system - thought to be a stellar mass black hole consuming matter from a companion star - was discovered only last year as a flaring celestial x-ray source. Suggestively termed a microquasar, recent radio and archival optical observations of its motion through the sky have now allowed its orbit to be calculated. Illustrated above, the black hole's present galactic location is indicated by the purple dot, with the Sun's position in yellow. A mere 6,000 light-years from the Sun now, XTE J1118+480's orbit is traced by the orange line, backtracked for some 230 million years into the past based on models of the Galaxy. Astronomers note this black hole's orbit about the galactic center, looping high above and below the Galaxy's plane of gas, dust,and stars, is similar to orbits of globular star clusters, ancient denizens of our Galaxy. It seems likely that XTE J1118+480 too has its origins in the early history and halo of the Milky Way.
Authors & editors:
Jerry Bonnell (USRA)
NASA Technical Rep.: Jay Norris. Specific rights apply.
A service of: LHEA at NASA/ GSFC
& Michigan Tech. U. | <urn:uuid:f42782d0-e533-4a20-8e2f-967437500a8b> | 3.59375 | 318 | Knowledge Article | Science & Tech. | 50.55 |
"Syntax" is a type for representing source code in Racket, which is a wrapper around S-expression (see a recent blog post for details). "Syntax value" and "syntax object" are all synonyms of this, and ni the ancient days of the
mzscheme language functions that deal with syntax used
syntax-value in the name. These days we use just "syntax" more often, and for a plural form we use "syntaxes".
An "S-expression" is either a primitive piece of data that can be typed in code (symbols, numbers, strings, booleans, etc -- in Racket you could also include other types), or a list of these things. An S-expression is therefore any nested structure of lists made of these primitive types at the fringe. Sometimes this includes vectors too (since they can be typed in using the
#(...) syntax) but more usually they're left out.
Finally, "datum" is another name for an S-expression, sometimes when you want to refer to the fact that it's a piece of data that has an input representation. You can see how R5RS introduces it:
<Datum> may be any external representation of a Scheme object [...]. This notation is used to include literal constants in Scheme code.
As for your questions:
What's the difference between s-expression and symbol?
A symbol is an S-expression; an S-expression may contain symbols.
What's the difference between s-expression and datum?
Nothing really. (Although some subtle intentions differences might be there.)
What's the difference between (syntax, syntax values and syntax object) from s-expression?
They are the representation of program syntax used by macros in racket -- they contain the S-expressions, but they add source location information, lexical context, syntax properties, and certificates. See that blog post for a quick introduction. | <urn:uuid:0ada049d-c720-4883-9c84-f0b4bac8523e> | 3.578125 | 409 | Q&A Forum | Software Dev. | 54.437352 |
The World's Highest Laboratory
The space station's finished. Now what?
- By Guy Gugliotta
- Air & Space magazine, March 2012
Scientists are interested in jatropha because it produces a fruit whose seeds yield an astonishingly pure oil that is an ideal source of biodiesel. “The problem is we don’t have commercial cultivars yet,” Vendrame says. “It’s a wild species and it’s like somebody found the first corn plant.” Jatropha may be a source of revenue for growers in the state of Florida, which, worried about winter freezes, citrus canker, and other threats to its groves, is looking for a profitable alternative to oranges and grapefruit. But jatropha’s early promise faded because of improper cultivation practices; in addition, there are too many varieties of the plant with too few desirable characteristics.
Vendrame is using his best plants on the station, and comparing the results with his ground samples, hoping to more quickly identify genes that produce the most oil. In traditional harvesting on Earth, many generations must be produced before a useful cultivar can be found. “Microgravity might accelerate the process,” he says. “We may be able to save 10 years.”
The station’s international partners are also planning Earth-focused activities. The Canada-based company UrtheCast has an agreement with the Russian space agency, Roscosmos, and aerospace company RSC Energia to launch and install a pair of Earth-observation cameras on the Russian side of the station this year. One camera will be fixed in place and point straight down, while a second, boom-mounted camera will be movable to focus on details, says UrtheCast president Scott Larson. Both feeds will be streamed on the Internet for free, but Larson says the company expects to make money by selling its raw data to other firms and organizations, and through advertising.
The European Space Agency has steadily expanded its station science agenda since the launch and installation of its Columbus laboratory in 2008. Columbus now has 150 projects under way and expects to move into experiments lasting much longer. “For us, it’s a steady evolution,” says Martin Zell, chief of ESA’s Astronauts and ISS Utilisation Department. “In our research into plant biology, immunology, neurophysiology, fluids and materials research, we are implementing several experiments where the new objectives are based on the results of previous experiments.” Like NASA, ESA is moving into studying the effects of prolonged spaceflight on the body.
In the third lab, the Japanese-built Kibo, experiments are focused on space medicine, biology, Earth observations, materials production, biotechnology, and communications research. The Japanese space agency, JAXA, has selected 19 candidate experiments to fly in 2012—15 in life sciences and four in materials sciences. Some of the more exotic involve studying the effects of microgravity on zebrafish, mouse embryos, and mammalian reproductive cells.
EVIDENCE IS MOUNTING that the odd behavior of cells and microbes in space might lead to shortcuts in fields as diverse as disease pathology and crop development. Because this is one area where research in microgravity might translate into useful science on Earth, biological experiments will be a key focus of the National Laboratory. Experiments on both the shuttle and the station have shown that microgravity alters gene expression in microbes and plant and animal cells, and researchers want to continue to use the station to gain insights into these changes.
Much of NASA’s optimism for the National Laboratory springs from an unusual 2006 experiment initiated by microbiologist Cheryl Nickerson of Arizona State University’s Biodesign Institute and NASA’s Mark Ott, a microbiologist at the Johnson Space Center in Houston. Ott had noted that astronauts’ immune systems appeared to weaken during spaceflight, “and I asked him whether we knew anything about the effects of spaceflight on microbial pathogens,” Nickerson recalls. “He said ‘Not really.’ ” It was not a farfetched question. Biologists have long known that microbes have an uncanny ability to thrive in extreme environments, such as ocean floor vents, cave ponds, or toxic waste dumps. Why not in space?
So Nickerson and Ott decided to fly cultures of salmonella bacteria aboard space shuttle Atlantis to see what happened. The cultures were placed in individual chambers that the shuttle crew activated for 24 hours. Back on Earth, the team tested the samples and found that, in space, several key salmonella genes manifested themselves differently, and that a particular protein acted as a master switch to turn up the infectivity of the bacteria. The group validated the work on another shuttle mission, showing that they could turn off the infectivity by manipulating salts in the cultures. | <urn:uuid:36225de9-4ca8-4365-86ee-7acfbb554466> | 2.9375 | 1,012 | Truncated | Science & Tech. | 30.710313 |
Physics in the Republic of Armenia
The Republic of Armenia is located in the mountainous region of the Caucasus. In 1991, the people of Armenia were amongst the first to take advantage of Gorbachev's reform movements to become an independent entity separate from the Soviet Union. The country is home to approximately 3 million Armenians with 99% literacy. Scientists in Armenia have distinguished themselves worldwide through their scientific accomplishments. Some of the most noteworthy facilities and institutions include the Observatory at Byurakan, the cosmic ray observatory on Mount Aragats (9000 ft. above sea level), the Yerevan Physics Institute, and the Armenian National Academy of Sciences.
Victor Ambartsumian, who was also one of the founding members of the Armenian National Academy of Sciences, founded the Byurakan Astrophysical Observatory. Ambartsumian was the first to propose the existence of active galactic nuclei (AGN) [1]. He was an elected or honorary member of 28 science academies around the world, including those of the US, France, the USSR, and others. Observations at the Byurakan Observatory led to the identification of Markarian galaxies: a type of galaxy with unusually strong emission at near-ultraviolet wavelengths. Astronomer Benjamin Eghishe Markarian first observed these in the 1960s.
The president of the Republic of Armenia established the Victor Ambartsumian Prize in 2009, to be awarded to outstanding scientists of any country and nationality who have made significant contributions to science. The Prize totals USD 500,000 and was awarded for the first time in 2010 to Prof. Michel Mayor (Observatory of Geneva, Switzerland) and his two team members for their important contributions to the study of the relationship between planetary systems and their host stars.
The Yerevan Physics Institute (YerPhi) is home to divisions of accelerator physics, experimental physics, theoretical physics, applied physics, and cosmic ray physics. Armenia had the highest-energy electron synchrotron (6 GeV) in the Soviet Union. With the construction of the 6 GeV CEBAF accelerator at Thomas Jefferson National Laboratory in Newport News, the team of physicists from Armenia became one of the most important external collaborating groups at that laboratory. Particle physicists from Armenia designed, tested, and performed the commissioning of the TOF system for the OLYMPUS spectrometer at DESY, and continue with the data analysis from the HERMES and H1 experiments. There are also robust groups from YerPhi on the CMS, ALICE, and ATLAS experiments at the LHC. The groups from Armenia work on the construction, performance optimization, and calibration of various ATLAS and CMS systems.
In addition to JLAB, DESY, and CERN, accelerator physicists from Armenia have also been collaborating on accelerator development at various Russian Federation institutes, including SRIERA (St. Petersburg), JINR (Dubna), MRTI, and ANSALOO-VEI (Moscow).
In the last few years there has been a global shortage of the 99Mo isotope used in diagnostic and therapeutic procedures, followed by a race to restore a stable supply of this isotope by moving away from production via highly enriched uranium in reactors and toward (gamma, n) reactions. Today, physicists at YerPhi are engaged in research and development efforts on better isotope production techniques. YerPhi has just completed negotiations with IBA of Belgium for the purchase of a cyclotron, the Cyclone 18/18. This is a first step toward developing a center for radioisotope production and research in nuclear medicine. Simultaneously, it is expected that two of the 18 MeV proton beams of this new accelerator can be used to jump-start a research program in radioactive ion beams.
Lightning storm detected from Mt. Aragats along with the direct observation of the secondary cosmic ray fluxes.
The Cosmic Ray Division (CRD) [2] of YerPhi is perhaps one of the most visible and active groups at YerPhi, partly due to its energetic director (Ashot Chilingarian). Cosmic ray research in Armenia has a long and accomplished history beginning in the early 1940s. The CRD is certainly one of the largest cosmic ray institutions in the world. Its original aim was particle astrophysics, specifically the study of the high-energy cosmic rays that bombard the Earth. The two research station sites (CRD sites) at Nor Amberd on the slopes of Mount Aragats were amongst the first permanent high-mountain research stations, built some 65 years ago [3]. There were early direct observations of cosmic rays made in Armenia that could not be made by satellites or balloons. Discoveries from Armenia were crucial to directly studying the particle fluxes of cosmic rays in the TeV-PeV energy region. Some of the significant past discoveries [3] from the CRD include the measurements of the energy spectrum and charge ratio of the horizontal muon flux, and the measurements of cosmic ray spectra in the "knee" region (10^14-10^16 eV) of the cosmic ray spectrum using the MAKET-ANI and GAMMA detectors in the 1980s. The observatory on Mount Aragats was known for studying the origin and acceleration of high-energy cosmic rays, but after 1991 the scarcity of resources for detectors caused a shift of interest [4].
After 1991, many of the scientific institutions in Armenia experienced difficulties in re-establishing themselves and in maintaining or developing the research infrastructure needed to continue doing forefront research at home with reduced levels of financial support. The difficulties included maintaining the accelerator facilities, paying membership dues to the International Astronomical Union and member fees at the LHC, and updating the local research infrastructure and personnel.
In 2009, at the encouragement of the then minister of economy, Nerses Yeritsyan, Professor Yuri Oganessian of the Flerov Laboratory of Nuclear Reactions in Dubna, Russia, organized an international committee of experts (InComEx) from the US, the UK, Germany, France, Bulgaria, Switzerland, and Russia to evaluate the scientific activities of YerPhi and to make recommendations to the government of Armenia regarding the future of the Yerevan Physics Institute. The InComEx group encouraged the government to support YerPhi and to recognize it as a great national resource. The photograph shows the meeting of Dr. Oganessian of Dubna with the President of Armenia, Serzh Sargsyan. The outcome was the founding of a new national laboratory of Armenia! The laboratory is named the Alikhanyan National Laboratory in recognition of the original founders of YerPhi, Abraham Alikhanov and Artem Alikhanian. Its budget has been doubled and enthusiastic activities continue.
In conclusion, the government of Armenia has realized the importance of science and new discoveries in creating an innovative economy and supports the new national laboratory of Armenia as a way to get there.
1. “Problems of Physics and Evolution of the Universe”, a collection of papers published on the occasion of Ambartsumian’s 70th birthday, edited by L. V. Mirzoyan, publishing house of the Armenian Academy of Sciences, Yerevan (1978).
3. “Cosmic Ray research in Armenia”, A. Chilingarian, R. Mirzoyan, and M. Zazyan, Advances in Space Research, Vol 44, 1183 (2009).
4. “Armenia detects space weather”, Daisy Yuhas, Symmetry, Vol. 7, Issue 5 (2010).
5. “Ground-based observations of thunderstorm correlated fluxes of high energy electrons, gammas, and neutrons”, A. Chilingarian, A. Daryan et al., Phys. Rev. D 82, 043009 (2010).
Ani Aprahamian is the Frank M. Freimann Professor of Physics at the University of Notre Dame
Disclaimer - The articles and opinion pieces found in this issue of the APS Forum on International Physics Newsletter are not peer refereed and represent solely the views of the authors and not necessarily the views of the APS. | <urn:uuid:ebe33043-8c65-4a19-8351-9ba2cbb10e60> | 2.765625 | 1,716 | Knowledge Article | Science & Tech. | 30.314772 |
The Cogs of Precognition
Scaling Up Expert Opinion
Astrobiology Magazine asked experts in online education to rank their wish-list of innovations. The question was posed to former Vice President of the UNext Corporation, Doug Ryan: "From the European Space Agency's list of science fiction inventions that should be made real, please pick two and discuss how you believe it would most dramatically change the world?"
Doug Ryan: "My feeling is that the inventions that would add the most to our future are not those that correct for human weaknesses, but those that would help us push beyond the limits of our greatest strengths. In particular, our creativity, imagination, and curiosity. Of all the inventions listed, I would find the most value in 'Instantaneous Communication' and 'Waldos.'
"Instant communication would allow real-time intellectual collaboration between anyone in the universe. Experts could assist and challenge each other from across the galaxy, increasing both the resources and the pace around any given problem."
Waldos: Telepresence Device
"As I understand them, 'waldos' would take the effect one step further by enabling people not to just interact, but to act in a real-time collaborative mode. Imagine an important new building being built on a distant planet. Using instant communication, architects and engineers from multiple worlds could collaborate on the design. Then, using 'waldos', the best metalworkers, electricians, and other tradesmen could work on the building's construction and completion."
"I'll take that over warp drive any day."
Astrobiology Magazine: Based on your background in online education, what do you think NASA should do to incorporate new models for training and communication?
Ryan: "One problem may be how to scale experts, i.e., how to share a limited number of experts with a far more numerous group of people seeking their expertise."
"Some forms of advanced video conferencing might help this somewhat, in terms of increasing the number of people who can observe the expert."
"However, it achieves this increase in scale only by resorting to a passive broadcast model that depends on discouraging individual interactions and questions. The better, albeit far more difficult, solution would be to incorporate the desired expertise into some form of interactive learning object with which participants could interact."
"My recommendations in order would be:
1) Focus on high fidelity simulations
2) Build flexibility into programs and systems so that training can take a different path for different users
3) Explore ways to integrate training into everyday task completion as opposed to segregated 'training sessions' that tend to discriminate against the people who most need training."
Douglas Ryan received a B.S. in engineering from Princeton University and M.B.A. from the University of Chicago's Graduate School of Business.
Mr. Ryan was co-producer on two independent films, released theatrically in the U.S., one of which won the Best Long Feature Film at the South by Southwest Festival. He is currently at Young & Rubicam in Chicago. Prior to that, he was Vice-President for UNext, an online education company that developed education courses in partnership with Stanford University, The University of Chicago, Columbia University, Carnegie Mellon University, and the London School of Economics and Political Science. He earlier also served as Vice President for the Netdox Company, a secure Internet messaging services company.
Which gadgets can unlock the next technological revolutions? What is the next big thing?
To propose answers to this question, the sixteen nations of the European Space Agency commissioned a project called "Innovative Technologies from Science Fiction for Space Applications" (ITSF). Their results were co-published with two supervisory foundations, the Swiss museum Maison d'Ailleurs and the astronautical society, or OURS Foundation. One aim was to discover what their study called the facts of 'hard science-fiction': literature that uses either established or carefully extrapolated science as its backbone.
Innovative Technologies from Science Fiction. Credit: ITSF/ESA
As Caltech physicist, author and visiting scholar for NASA's Exobiology Center, David Brin, described in his PBS interview for the special, Closer To Truth: "perhaps an alternative name could have been 'speculative history' because [hard science-fiction authors] deal in different pasts, alternate presents, extension of the human drama into the future...Einstein used the word gedanken experiment and he coined it, he said that just sitting on a streetcar in Bern, leaving the clock tower and imagining he was riding on a beam of light, was 50% of the work [of relativity]."
Augmented Science: Galileo's Ship
The history of drawing inspiration from speculative literature is deep with success stories.
As early as 1632, to advocate for his classical principle of relativity, Galileo used a fictional character called Salviati who, while locked in a closed room below a ship's deck, observes a small fish tank that remains quiescent and undisturbed unless the ship accelerates. In dialogue format, he answers all the common scientific arguments against the idea that the Earth moves.
"Jurassic Park" probably taught more people about DNA and what that means than most colleges in the country. --Robert Kuhn, PBS
Predating the lunar travel classics by H.G. Wells and Jules Verne were Cyrano de Bergerac's Comical History of the States and Empires of the Moon (1656), space travel in Voltaire's Micromégas (1752), and alien cultures in Jonathan Swift's Gulliver's Travels (1726). Even as liquid-propelled rockets were first being tested by Robert Goddard in the 1920s, technical proposals had already appeared for planetary landers (1928) and aerodynamically stabilized rocket fins (1929).
Perhaps the most detailed and famous publication was Sir Arthur C. Clarke's 1945 paper, "Can Rocket Stations Give World-wide Radio Coverage?", that laid down the principles of modern satellite communications and geostationary orbits [Wireless World, October 1945].
A half-century later, even a few hours of interruption in this global network today would seem catastrophic: crippled health care delivery, financial disruption including failed automated teller machines and credit card validations, grounded travellers for lack of airline weather tracking, and global TV blackouts. But in 1945, the idea of geostationary satellites had a different kind of reception, as Clarke wrote: "Many may consider the solution proposed [for extra-terrestrial relay services] too far-fetched to be taken seriously. Such an attitude is unreasonable, as everything envisaged here is a logical extension of developments in the last ten years..."
The rocks inside a crater on the Asteroid Eros. Numerous small impacts on the asteroid show brown boulders visible interior to the less exposed (white) lip of the crater. False-color for emphasis. Credit: NEAR Project, JHU APL, NASA
The European space study, appropriately timed for Clarke's "Space Odyssey" series, completed its first project phase in 2001. Altogether fifty fact sheets and technical dossiers were published to catalog the inventions that should be made real. In addition, more than two hundred technologies were outlined and graded for future feasibility studies. Ranging from astrobiology to propulsion, their complete 'what-if' list is available in broad categories online.
Examples Pushing the Envelope
One mission that has been described in the ESA study is soon to become closer to fact: a fantastic mission to a comet. Seventeen years ago, astrobiologist David Brin's "Heart of the Comet" extended Jules Verne's mythical tour of the solar system on a comet.
Verne got many of his science guesses right. For instance, although not well understood at the time, he correctly reasoned that, given the distance of his travellers from the Sun, a comet would resemble something more like an ice-ball than a fiery-hot world. He wrote: "The solidity of the ice was perfect; the utter stillness of the air at the time when the final congelation of the waters had taken place had resulted in the formation of a surface that for smoothness would rival a skating-rink; without a crack or flaw it extended far beyond the range of vision."
But the asteroid and cometary science planned for international missions is approaching the realm of the fantastic. In February 2001, the NEAR Shoemaker spacecraft successfully landed on the asteroid Eros. Its remarkable journey to soft-land on a peanut-shaped asteroid about 176 million kilometers (109 million miles) from Earth prompted Andrew Cheng, NEAR Project Scientist, to note: "On Monday, 12 February 2001, the NEAR spacecraft touched down on asteroid Eros, after transmitting 69 close-up images of the surface during its final descent. Watching that event was the most exciting experience of my life."
Icy-rock core of Halley's Comet
In May 2003, a Japanese probe [called Muses-C] lifted off on the world's first mission to collect samples from the surface of an asteroid, part of a four-year journey covering nearly 400 million miles.
On Jan. 2, 2004, the spacecraft called Stardust will fly within 75 miles of a cometary main body (called Wild-2), close enough to trap small particles from the coma, the gas-and-dust envelope surrounding the comet's nucleus. Stardust will be traveling at about 13,400 miles per hour and will capture comet particles traveling at the speed of a bullet fired from a rifle. Launched in February 1999, Stardust was designed to capture particles from Wild 2 and return them to Earth for analysis. The spacecraft already has collected grains of interstellar dust. It is the first U.S. sample-return mission since the last moon landing in 1972.
In the next five or so years, there will be multiple encounters of spacecraft with comets and asteroids. All the following missions are fully funded, though not all have been launched yet:
2001 Sept. 22: Deep Space One
2004 Jan. 1: Stardust (coma sample return)
2005 July 3: Deep Impact (big mass impact)
But according to David Brin, the most intriguing categories of his speculative histories are the ones that are either interrupted or pre-empted.
The rule on staying alive as a forecaster is to give them a number or give them a date, but never give them both at once. -- Jane Bryant Quinn, US financial columnist
Brin explained: "I think the most powerful science fiction stories are not those that accurately predict the future, but, rather, those that have prevented futures, the self-preventing prophecy that came across so chilling, and so many people read it and were so moved, that the very scenario that might have plausibly happened didn't happen, the two that really prevented the futures they described, "1984," by George Orwell, and probably the greatest science fiction author who ever lived, Karl Marx's "Das Kapital," which utterly prevented the scenario that it described".
This year offers a case of what seems to be science fiction as astronomical fact: the closest approach between Mars and the Earth in 73,000 years. Four current Mars missions hope to take advantage of the confluence, as this summer the Red Planet will appear brighter than Jupiter as the brightest object in the night sky. As a timely prelude of things to come, the moon will eclipse Mars tonight in North America for up to 90 minutes. But more than a hundred years ago, in 1894--one of the last times such dramatic astronomical events gripped the visual imagination of authors-- many of the modern concepts about intelligent life elsewhere first took shape. The celestial mechanics of the night sky translated to a cultural picture of what life elsewhere might resemble.
For instance, the idea that Mars might have a humanoid civilization is relatively modern, but needed both an event and a real-world, technological boost from telescope builders. The idea of Mars as a home for life first needed astronomers to describe what appeared to them as an elaborate martian canal system. This lineage continued from the time astronomer Percival Lowell began advocating for the canals on Mars until H.G. Wells further propagated those civilizations in his classic "War of the Worlds". In an interview on the ESA project for Radio Netherlands, NASA astrobiologist Chris McKay pointed out this lineage--and that science and our cultural ideas about astrobiology are intertwined with these events: "When people first started pointing telescopes at Mars," McKay explained, "they noticed seasonal changes very much like on Earth. Then Percival Lowell reported seeing 'canals' on Mars and created an elaborate story that they had been made by a dying Martian civilization."
From Galileo's ship to Einstein's thought experiments about travelling on a light beam, the technical dossier of 'what will be the next big thing?' continues to be a relevant question for both speculative historians and science planners alike.
Related Web Pages
Long, Strange Trips
PBS: Is Science Fiction Science? Michael Crichton, David Brin, Octavia Butler
Search for Life in the Universe: Part I
A Perfect World I: Tyson
A Perfect World II: Richardson
A Perfect World III: Goldin
A Perfect World IV: Venter
A Perfect World V: Hendricks
A Perfect World VI: Fuller | <urn:uuid:49ca766c-465d-4c95-8650-9c7b7f3ad657> | 2.796875 | 2,773 | Content Listing | Science & Tech. | 38.001492 |
The giant wetas are the world’s heaviest insects. The heaviest ever recorded was a female that weighed 71g (2.5oz). That's three times the weight of an average house mouse. In fact, wetas are the insect equivalent of mice. They evolved in the small rodent niche because in New Zealand there were no mice to compete with and no nocturnal mammalian predators to hunt them.
Scientific name: Deinacrida
Giant weta are several species of weta in the genus Deinacrida of the family Anostostomatidae. Giant weta are endemic to New Zealand and are examples of island gigantism.
There are eleven species of giant weta, most of which are larger than other weta, despite the latter already being large by insect standards. Large species can be up to 10 cm (4 in) not inclusive of legs and antennae, with body mass usually no more than 35 g. One captive female reached a mass of about 70 g (2.5 oz.), making it one of the heaviest documented insects in the world and heavier than a sparrow. This is, however, unnatural, as this individual was unmated and retained an abnormal number of eggs. The largest species of giant weta is the Little Barrier Island giant weta, also known as the wetapunga. One example reported in 2011 weighed 71 g, and a 72 g specimen has been recorded.
Giant weta tend to be less social and more passive than other weta. Their genus name, Deinacrida, is Greek for "terrible grasshopper". They are found primarily on New Zealand offshore islands, having been almost exterminated on the mainland islands by introduced mammalian pests.
Defining "Deep sea"
pjhe at soc.soton.ac.uk
Tue Aug 10 07:41:20 EST 2004
For my book The Biology of the Deep Ocean (OUP) I took "deep ocean" to
refer to any bit of the ocean where the bottom depth was greater than 200m.
Thus it included everything from the surface to the sea floor except over
the continental shelves. This is very convenient for descriptive and
discussion purposes and makes particular sense when dealing with the
pelagic fauna, many of which come and go to and from the upper 200m.
I don't believe there is a "standard" definition, and I don't think any
reader will quibble with whatever view you take, as long as it is not too
At 16:23 02/08/04 +0100, Brad Buran wrote:
>I am currently preparing a manuscript that examines the structure of the
>inner ears in several deep-sea species. In the introduction, I am
>attempting to define, or at least describe, what the deep sea is (as
>contrasted with shallower waters). Since this manuscript is for a
>general morphology journal rather than an oceanography journal, I feel
>that it is essential to give the readers some background on the deep sea.
>Older books appear to define the deep sea as anything below 1,000 m
>beneath the surface. However, at several recent conferences I have been
>at, people appear to refer to shallower depths (such as 250 m) as the
>deep sea. Is anyone aware of a standard definition or a conventional
>description that is used by deep sea researchers?
>Thank you for your time,
Lecture #26, April 22
READ SNOW CHAPTER 13 PAGES 286-301
LOOK OVER THOUGHT QUESTIONS #1,2,8,9,10.
LIGHTNING BOLT HURLING ZEUS
Unlike Venus, the planet Jupiter has just the right name. Not only
is it the largest of the planets, but it "hurls" the greatest
lightning bolts between Io and its poles.
The high rotational rate of Jupiter results from the conservation of
angular momentum. This rapid rotation has produced an oblate shape,
with an equatorial diameter some 9000 km greater than the polar
diameter.
Poseidon (Neptune) also could be associated with the planet, as huge
tides are raised in its atmosphere by Io. The volcanoes of Io are also
signs of the activities of that god of the sea and earthquakes.
Not only do those lightning bolts produce auroras in Jupiter's
atmosphere, but they generate radio bursts, which sometimes produce
static on our TV sets.
MAGNETIC FIELDS OF PLANETS
Magnetic fields are produced whenever electrons flow in a circular
current, such as in wires wrapped around an iron plug as in an
electromagnet. Both charged particles and circular motion are thus
necessary to produce magnetic fields. Venus has the same hot interior
as the Earth, but because it rotates so slowly, once every 243 days,
it cannot generate a magnetic field. On the other hand, rapidly
rotating Jupiter produces the strongest field of all the planets.
RESONANCES AT SATURN
The great gap in the rings of Saturn between the A and B rings,
known as Cassini's division, lies at a distance of about 120,000 km
from Saturn. Material that would be in that gap would revolve with a
period of 0.47 days around Saturn. The Saturnian moon Mimas, which
has a huge impact crater on its surface, has a period of 0.94 days.
The exact 1:2 resonance produces a gap similar to the Kirkwood gaps
in the asteroid belt.
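As a worked check of that resonance (added here for illustration; Mimas's orbital radius of roughly 185,000 km is assumed and is not given in the notes), Kepler's third law relates the orbital radius to the period:

$$\frac{a_{\rm gap}}{a_{\rm Mimas}} = \left(\frac{P_{\rm gap}}{P_{\rm Mimas}}\right)^{2/3} = \left(\frac{0.47}{0.94}\right)^{2/3} \approx 0.63, \qquad a_{\rm gap} \approx 0.63 \times 185{,}000\ {\rm km} \approx 117{,}000\ {\rm km},$$

which agrees with the roughly 120,000 km quoted above for Cassini's division.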
This video shows the movement of blue straggler stars in globular clusters over time. Blue straggler stars are blue, bright stars, with a higher mass than the average for a cluster, and they are expected to sink towards the centre of a star cluster over time. Those closest to the cluster core are the first to migrate inwards, with more distant blue stragglers progressively moving inwards over time.
A new study using the NASA/ESA Hubble Space Telescope and the MPG/ESO 2.2-metre telescope at the ESO La Silla Observatory has shown that not all globular clusters evolve at the same rate. While all globular clusters are old (over 10 billion years), the stellar distribution within some remains youthful, with the blue straggler stars spread throughout the cluster. Others have aged prematurely, with the stars all located in the centre.
NASA, ESA, L. Calçada, F. Ferraro (University of Bologna) | <urn:uuid:0d852388-50c3-4313-bcf1-87781f49cb7d> | 3.640625 | 208 | Truncated | Science & Tech. | 44.987821 |
ALABAMA BEACH MOUSE
Photo Credit: Nick R. Holler
SCIENTIFIC NAME: Peromyscus polionotus ammobates (Bowen)
DESCRIPTION: Smallest (adults, total length = 122-153 mm [4.8-6.0 in.]; weights = 10.0-17.0 g [0.35-0.60 oz.]; pregnant females reaching 22-25 g [0.78-0.88 oz.]) species of Peromyscus in North America (Hall 1981b). Tail short, usually 55-65 percent of body length. Males generally smaller than females. Brown to pale gray above, with pure white undersides and feet. A dark brown mid-dorsal stripe is common. Tail bicolored, with variable (10-80 percent of tail length) dark brown stripe on dorsal surface and pure white underneath (Howell 1939; Hall 1981b).
DISTRIBUTION: Historic distribution was along the coastal dunes of Baldwin County, Alabama, from the western tip of Fort Morgan Peninsula eastward to the Perdido Bay inlet, including Ono Island. The type locality was a sand bar immediately west of Perdido Key inlet (Alabama Point, Bowen 1968). Type locality has been heavily developed and no longer exhibits natural characteristics. Because of extensive development throughout the Alabama Gulf Coast, the present-day distribution of the Alabama beach mouse is greatly reduced (Holliman 1983). Active populations are known to exist in areas of public ownership at Fort Morgan and within the Perdue Unit of the Bon Secour National Wildlife Refuge (Swilling and Wooten 2002). Discontinuous occupation of dune and scrub habitat between these two sites also occurs. Have been re-established at Gulf State Park. Trapping and visual surveys suggest extirpation from all areas east of Gulf State Park.
HABITAT: Typically includes primary, secondary, and scrub dunes of the coastal strand community (Bowen 1968, Rave and Holler 1992). Densities often greatest in sparsely vegetated areas within the primary dune zone. Recent research indicated that scrub habitat is more important than previously thought. Recognition of the value of this habitat as refugia from hurricanes and other storm events has prompted formal redesignation of the Critical Habitat limit for this subspecies. Only rarely found associated with human dwellings.
LIFE HISTORY AND ECOLOGY: Monogamous; pair bonding strong and parental cooperation in rearing has been noted (Blair 1951, Margulis 1997, Swilling and Wooten 1992). Litter sizes range from two to eight (mode = four) (Caldwell and Gentry 1965, Smith 1966). Gestation period averages 28 days with a postpartum estrus common. Reproduction occurs throughout the year, but typically slows during summer and peaks during late fall/early winter in correlation with availability of forage seeds. A semifossorial, nocturnal rodent that digs distinctive burrows in sandy soils. Burrows typically consist of an entrance tube up to one meter (three feet) deep leading to one or more chambers (Hayne 1936, Smith 1966). An escape tunnel is normally present from the nest chamber to just below the surface. Nests of dried grasses and other fibers are found in the central chamber. Burrow openings are frequently located within vegetation. A fan-shaped plume of expelled sand is characteristic of active burrows. Entrance tunnels are blocked several centimeters (three to five inches) below surface by sand plugs, presumably for predator defense. Granivorous-omnivorous, with a majority of diet being seasonal seeds (Smith 1966, Gentry and Smith 1968, Moyers 1996). Wind-deposited seeds such as sea oats and bluestem important components of diet; acorns eaten when available. Also consumes a variety of animal foods, including both insects and vertebrates. Insects reported in diet include beetles, leaf hoppers, true bugs, and ants. Nocturnal, with daytime activity rare; nightly movements directly affected by weather conditions. Radio tracking indicates activity throughout the night, with peaks occurring shortly after dusk and again after midnight (Lynn 2000). Capable of dispersing over five kilometers (3.1 miles) (Smith 1966) and commonly traverse 0.5 kilometers (0.31 miles) of habitat per night, but most observations indicate that individuals settle within a few hundred meters (200-1,000 feet) of their natal sites. Juveniles disperse an average of 160 meters (500 feet), effectively one home range, away from the natal site (Swilling and Wooten 2002). Dispersal distances for juvenile males and females not reported to differ. Home range size varies according to season and reproductive state. Average values reported for Alabama beach mice were 4,086-5,512 square meters (43,981-59,330 square feet) from trapping data and 6,783-7,000 square meters (73,011-75,347 square feet) from telemetry data, but ranges as small as 389 square meters (4,187 square feet) and as large as 29,330 square meters (315,715 square feet) have been observed (Lynn 2000). Home range sizes do not differ significantly between males and females. In general, populations show little evidence of intraspecies competition with increasing densities yielding increased compaction of home ranges. This combination of tolerance and dispersal results in the formation of spatial "neighborhoods" within populations. For Alabama beach mice, approximate size of these spatial units is 550 meters (1,800 feet; linear) with occupancy by 40-70 mice. Average life span in natural populations less than nine months although common to encounter mice more than one year of age. Captures of mice known to be two years old have been reported and captive mice have reached four or more years of age.
Preyed upon by the red and gray fox, great horned owl, great blue heron, weasel, striped skunk, raccoon, various snakes including coach-whip and pygmy and eastern diamondback rattlesnakes, and domestic dogs and cats.
BASIS FOR STATUS CLASSIFICATION: Habitat loss and fragmentation associated with residential and commercial real estate development is the single most important factor contributing to imperiled status. Existing or proposed beachfront development will substantially alter all Alabama beach mouse habitat not in public ownership. Reduction of available habitat and isolation of the remaining populations substantially increase vulnerability to the effects of tropical storms, weather cycles, predation, and other environmental factors. Substantial disagreement exists as to the current status and appropriate management protocol for the Alabama beach mouse. Various researchers have argued it is in immediate jeopardy of range-wide extinction if habitat loss is allowed to continue. This position is supported by evidence of widespread extirpation from all developed areas in the eastern portion of the historic distribution. Similar levels of development are occurring throughout the Fort Morgan Peninsula and real estate development on all private areas is proceeding rapidly. In addition, Population Viability Analyses indicate that extinction of even the largest remaining populations is likely within 50 years if current trends continue (Oli et al. 2001). Listed as endangered by the U.S. Fish and Wildlife Service in 1986.
Author: Michael C. Wooten | <urn:uuid:e77771f5-33e5-42dc-8c93-6cf4fd674434> | 3.078125 | 1,517 | Knowledge Article | Science & Tech. | 38.617562 |
[Haskell-beginners] Bit arithmetic in Haskell
byorgey at seas.upenn.edu
Tue Dec 9 08:02:46 EST 2008
On Tue, Dec 09, 2008 at 05:30:28PM +0600, Artyom Shalkhakov wrote:
> I'm trying to do some bit arithmetic. Here's the function:
> > import Data.Bits
> > import Data.Word
> > g :: Word32 -> [Word32]
> > g x = [(x `shiftR` 24) .&. 0xFF,
> > (x `shiftR` 16) .&. 0xFF,
> > (x `shiftR` 8) .&. 0xFF,
> > x .&. 0xFF]
> This function should give bytes for the given number, like this:
> g 255 -> [0,0,0,255]
This is the answer I get when I evaluate (g 255).
> g 256 -> [0,0,1,255]
This is incorrect -- the bytes for 256 are [0,0,1,0], which is
correctly computed by g. [0,0,1,255] would be 1*256 + 255 = 511, and
giving 511 as input to g indeed results in [0,0,1,255].
> g 65535 -> [0,0,255,255]
When I evaluate (g 65535) this is what I get, too.
In short, it seems to me that g works perfectly. If it doesn't work
for you, can you give specific examples of the output it should give,
and the output you get instead?
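[Editor's note: for readers double-checking the arithmetic above, a small helper that reassembles the bytes confirms that g is self-consistent. This sketch is not part of the original thread, and the name fromBytes is invented here.]

import Data.Bits
import Data.Word

-- Reassemble the four bytes produced by g back into a single Word32,
-- shifting the accumulator left by 8 bits and OR-ing in each byte.
fromBytes :: [Word32] -> Word32
fromBytes = foldl (\acc b -> (acc `shiftL` 8) .|. (b .&. 0xFF)) 0

-- In GHCi (with g defined as in the quoted message):
--   fromBytes [0,0,1,255]  == 511     -- True
--   fromBytes (g 256)      == 256     -- True, since g 256 == [0,0,1,0]
--   fromBytes (g 65535)    == 65535   -- True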
May 22, 1995: Peering into the heart of two recently exploded double-star systems, the Hubble telescope has surprised researchers by finding that the white dwarf stars at the center of the fireworks are cooler than expected and spin more slowly than previously thought.
Each dwarf - a dense, burned-out star that has collapsed to the size of Earth - is in a compact binary system, called a cataclysmic variable, where its companion is a normal star similar to, but smaller than, the Sun. The stars are so close together that the entire binary system would fit inside the Sun. Their closeness allows gas to flow from the normal star onto the dwarf, where it swirls into a pancake-shaped disk [see illustration]. When the disk of gas periodically collapses onto the white dwarf, it unleashes a burst of kinetic energy, called a dwarf nova outburst. Once dumped onto the dwarf's surface, hydrogen accumulates until it undergoes thermonuclear fusion, which eventually triggers an explosion.