Science Fair Project Encyclopedia
Gilbert N. Lewis
His family moved to Lincoln, Nebraska when he was 9. He was homeschooled until age 9, attended public school from age 9 to 14, and then went to the University of Nebraska. Three years later he transferred to Harvard University, where he showed an interest in economics but concentrated in chemistry, earning his B.A. in 1896 and his Ph.D. in 1899. His first published work, a study of the thermochemical and electrochemical properties of amalgams, was based on his doctoral research and was published in 1898.
After earning his Ph.D., he stayed on as an instructor for a year before taking a traveling fellowship, studying under the physical chemist Wilhelm Ostwald at Leipzig and Walther Nernst at Göttingen. He then returned to Harvard as an instructor for three more years, and in 1904 left to become superintendent of weights and measures for the Bureau of Science of the Philippine Islands in Manila. The next year he returned to Cambridge when the Massachusetts Institute of Technology (MIT) appointed him to a faculty position, where he had the chance to join a group of outstanding physical chemists under the direction of Arthur Amos Noyes. He rose quickly in rank, becoming assistant professor in 1907, associate professor in 1908, and full professor in 1911. He left MIT in 1912 to become professor of physical chemistry and dean of the College of Chemistry at the University of California, Berkeley.
In 1916, he formulated the idea that a covalent bond consists of a shared pair of electrons, and coined the term odd molecule for cases in which an electron is not shared. His ideas on chemical bonding were expanded upon by Irving Langmuir and became the inspiration for Linus Pauling's studies on the nature of the chemical bond.
In 1923, he formulated the electron-pair theory of acid-base reactions. In the so-called Lewis theory of acids and bases, a "Lewis acid" is an electron-pair acceptor and a "Lewis base" is an electron-pair donor.
Students of chemistry learn about a notation system for the valence electrons which is known as the Lewis dot structure.
Based on work by J. Willard Gibbs, it was known that chemical reactions proceed to an equilibrium determined by the free energy of the substances taking part. Lewis spent 25 years determining the free energies of various substances. In 1923 he and Merle Randall published the results of this study, formalizing chemical thermodynamics.
Lewis was the first to produce a pure sample of deuterium oxide (heavy water) in 1933. By accelerating deuterons (deuterium nuclei) in Ernest O. Lawrence's cyclotron, he was able to study many of the properties of atomic nuclei.
In the last years of his life, he established that phosphorescence of organic molecules involves an excited triplet state (a state in which electrons that would normally be paired with opposite spins are instead excited to have their spin vectors in the same direction) and measured the magnetic properties of this triplet state.
He died at age 70 of a heart attack while working in his laboratory in Berkeley.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Securing Virtual Private Networks (VPN), Page 2
Asymmetric encryption, or public key encryption, depends on a pair of keys called the public key and the private key; hence the name. The keys are selected such that if data is encrypted with one key, it can be decrypted only with the other, and vice versa. Of the two keys, one is given to everybody and called the public key; the other is kept secret for decrypting and called the private key. By way of analogy, our e-mail account has a public e-mail address that we give to anyone we like, but we tell the password to no one.
Suppose a broker named Linda gets a request mail from James Anderson asking her to buy some stock shares for his company. She makes all the arrangements and sends a confirmation mail to James. Finally she sends him a bill for payment, at which point James flatly denies ever having sent Linda a mail about any stock shares. What can Linda do? She is in trouble because she has no way to prove that James was the actual sender.
The solution is provided by public key encryption: if James encrypts (signs) his request with his own private key, anyone holding his public key, including Linda, can decrypt it. Since only James holds that private key, a message his public key successfully decrypts can have come from none other than James Anderson, and he is caught. This is source authentication.
If we apply a hashing scheme such as MD5 to our data and generate a hash value for it at the source computer, then send the hash along with the data, the destination computer can compute its own hash of the received data. If the hash generated at the destination matches the one received from the source, data integrity is preserved; in other words, the data has reached its destination without any change or loss. When such a hash is encrypted with the sender's private key and sent with the message, it is called a digital signature.
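The integrity check described above can be sketched in a few lines of Java. The message text below is invented for illustration, and MD5 is used only because the article names it; MD5 is no longer considered collision-resistant, so modern systems would use SHA-256 or stronger:

```java
import java.math.BigInteger;
import java.security.MessageDigest;

public class IntegrityCheck {
    // Compute an MD5 digest and render it as 32 hex characters.
    static String md5Hex(byte[] data) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(data);
        return String.format("%032x", new BigInteger(1, digest));
    }

    public static void main(String[] args) throws Exception {
        byte[] sent = "wire transfer: $500 to account 42".getBytes("UTF-8");
        String sourceHash = md5Hex(sent);          // computed at the source

        byte[] received = sent.clone();            // arrives unchanged
        System.out.println(sourceHash.equals(md5Hex(received)));

        received[0] ^= 1;                          // a single flipped bit...
        System.out.println(sourceHash.equals(md5Hex(received)));
    }
}
```

The first comparison succeeds (integrity preserved); the second fails, showing that even a one-bit change yields a different hash.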
IPSec provides the following security services:
- Data Integrity
- Data origin authentication
- Replay prevention
- Limited traffic flow confidentiality
Replay prevention means that an attacker who intercepts your traffic cannot simply resend captured messages later. Without it, somebody who got hold of your keys, or of your account's user name and password, could replay your messages, learn your important business transactions, and act with full authority to make further deals in your name. IPSec counters this with per-packet sequence numbers that let the receiver reject duplicates.
IKE is the mechanism in IPSec by which keys are exchanged. It is a hybrid protocol that implements the Oakley and SKEME key exchanges inside the ISAKMP framework. While IKE can be used with other protocols, its initial implementation is with the IPSec protocol. IKE provides authentication of the IPSec peers, negotiates IPSec keys, and negotiates IPSec security associations. The main features of IKE are as follows:
- Negotiates policy to protect communication
- Authenticated Diffie-Hellman key exchange
- Negotiates (possibly multiple) security associations (SA) for IPSec.
Diffie-Hellman is a public-key cryptography protocol that allows two parties to establish a shared secret over an unsecured communication channel. Diffie-Hellman is used within IKE to establish session keys. 768-bit and 1024-bit Diffie-Hellman groups are supported.
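As a hedged illustration of the key-agreement idea (not of IKE itself), the JDK's standard KeyPairGenerator and KeyAgreement APIs can show two parties arriving at the same shared secret over Diffie-Hellman without ever transmitting it. The 2048-bit group size and the party names are illustrative choices:

```java
import javax.crypto.KeyAgreement;
import javax.crypto.interfaces.DHPublicKey;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;

public class DhDemo {
    public static void main(String[] args) throws Exception {
        // One party generates a DH key pair over a 2048-bit group.
        KeyPairGenerator kpgA = KeyPairGenerator.getInstance("DH");
        kpgA.initialize(2048);
        KeyPair alice = kpgA.generateKeyPair();

        // The other party generates a pair over the SAME group parameters.
        KeyPairGenerator kpgB = KeyPairGenerator.getInstance("DH");
        kpgB.initialize(((DHPublicKey) alice.getPublic()).getParams());
        KeyPair bob = kpgB.generateKeyPair();

        // Each side combines its own private key with the peer's public key.
        KeyAgreement kaA = KeyAgreement.getInstance("DH");
        kaA.init(alice.getPrivate());
        kaA.doPhase(bob.getPublic(), true);

        KeyAgreement kaB = KeyAgreement.getInstance("DH");
        kaB.init(bob.getPrivate());
        kaB.doPhase(alice.getPublic(), true);

        // Both derive the identical shared secret; only public keys crossed the wire.
        System.out.println(Arrays.equals(kaA.generateSecret(), kaB.generateSecret()));
    }
}
```

IKE wraps exactly this exchange in authentication (to stop man-in-the-middle attacks) and uses the resulting secret to derive session keys.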
A Security Association (SA) is the set of agreed-upon parameters for VPN communication; establishing it is done by IKE. The secret key exchange is the main process, since the security of the data subsequently delivered depends on it.
ISAKMP + Oakley is the IKE policy that we define to start the encryption process. The Internet Security Association and Key Management Protocol (ISAKMP) is a protocol framework that defines payload formats, the mechanics of implementing a key exchange protocol, and the negotiation of a security association. Oakley is a key exchange protocol that defines how to derive authenticated keying material. SKEME is a key exchange protocol that defines how to derive authenticated keying material, with rapid key refreshment.
MD5 (Message Digest 5) is a hash algorithm used to authenticate packet data. HMAC is a variant that provides an additional level of hashing. The Data Encryption Standard (DES) is used to encrypt packet data; IKE implements the 56-bit DES-CBC with Explicit IV standard. The Authentication Header (AH) provides data integrity and source authentication, whereas the Encapsulating Security Payload (ESP) provides confidentiality.
The National Weather Service map for Nov. 2, 2012 showed two areas of low pressure over eastern Canada, near Quebec.
One of those low pressure areas is associated with the remnants of Sandy, whose massive cloud cover continues to linger over a large area.
A visible image from NOAA's GOES-13 satellite at 1:31 p.m. EDT on Nov. 2, 2012 showed the remnant clouds from Sandy still linger over the Great Lakes east to New England.
In Canada, Sandy's clouds stretch from Newfoundland and Labrador west over Quebec, Ottawa and Toronto. The GOES image was created by NASA's GOES Project at the NASA Goddard Space Flight Center, Greenbelt, Md.
By Monday, Nov. 6, the National Weather Service map projects that the low pressure area associated with Sandy's remnants will be offshore.
Rob Gutro | Source: EurekAlert!
Further information: www.nasa.gov
More articles from Earth Sciences:
Tracking the Earth’s Mantle
24.05.2013 | Syracuse University
Strong earthquake at exceptional depth
24.05.2013 | Helmholtz-Zentrum Potsdam - Deutsches GeoForschungsZentrum GFZ
This morning at 05:45 CEST, the earth trembled beneath the Sea of Okhotsk in the northwest Pacific. The quake, with a magnitude of 8.2, took place at an exceptional depth of 605 kilometers.
Because of the great depth of the earthquake a tsunami is not expected and there should also be no major damage due to shaking.
Professor Frederik Tilmann of the GFZ German Research Centre for Geosciences: "The epicenter is exceptionally deep, far below the earth's crust in the mantle. Such strong ...
The Ring Nebula's distinctive shape makes it a popular illustration for astronomy books. But new observations by NASA's Hubble Space Telescope of the glowing gas shroud around an old, dying, sun-like star reveal a new twist.
"The nebula is not like a bagel, but rather, it's like a jelly doughnut, because it's filled with material in the middle," said C. Robert O'Dell of Vanderbilt University in Nashville, Tenn.
He leads a research team that used Hubble and several ground-based telescopes to obtain the best view yet of ...
New indicator molecules visualise the activation of auto-aggressive T cells in the body as never before
Biological processes are generally based on events at the molecular and cellular level. To understand what happens in the course of infections, diseases or normal bodily functions, scientists would need to examine individual cells and their activity directly in the tissue.
The development of new microscopes and fluorescent dyes in ...
A fried breakfast food popular in Spain provided the inspiration for the development of doughnut-shaped droplets that may provide scientists with a new approach for studying fundamental issues in physics, mathematics and materials.
The doughnut-shaped droplets, a shape known as toroidal, are formed from two dissimilar liquids using a simple rotating stage and an injection needle. About a millimeter in overall size, the droplets are produced individually, their shapes maintained by a surrounding springy material made of polymers.
Droplets in this toroidal shape made ...
Fraunhofer FEP will present a novel roll-to-roll manufacturing process for high-barrier and functional films for flexible displays at the SID Display Week 2013 in Vancouver, the international showcase for the display industry.
Displays that are flexible and paper-thin at the same time? What might still seem like science fiction will be a major topic at the SID Display Week 2013 currently taking place in Vancouver, Canada.
High manufacturing cost and a short lifetime are still a major obstacle on ...
Overview of energetic particle hazards during prospective manned missions to Mars
McKenna-Lawlor, Susan and Gonçalves, P. and Keating, A. and Reitz, G. and Matthiä, D. (2011) Overview of energetic particle hazards during prospective manned missions to Mars. Planetary and Space Science, 63-64, pp. 123-132. Elsevier. DOI: 10.1016/j.pss.2011.06.017.
Full text not available from this repository.
A scenario for an initial manned mission to Mars involves transits through the Van Allen Radiation Belts, a 30 day ‘short surface stay’ and a 400 day Cruise Phase (to/from the planet). The contribution to the total dose incurred through transiting the belts is relatively small and manageable. Estimates of the particle radiation hazard incurred during a 30 day stay on the surface (using ESA's Mars Energetic Radiation Environment Models dMEREM and eMEREM) indicate that the dose is not expected to be particularly challenging health-wise due to the shielding effect provided by the Martian atmosphere and the body of the planet. This is in accord with estimations obtained using the Langley HZETRN code. Estimates of GCR exposure in free space during the minimum phase of Solar Cycle 23 determined using the CREME2009 model are in reasonable agreement with published results obtained using HZETRN (which they exceed by about 10%). The Cruise Phase poses a significant radiation problem due to the cumulative effects of isotropic Galactic Cosmic Radiation over 400 days. The occurrence during this period of a large Solar Energetic Particle (SEP) event, especially if it has a hard energy spectrum, could be catastrophic health-wise to the crew. Such particle events are rare but they are not currently predictable. An overview of mitigating strategies currently under development to meet the radiation challenge is provided and it is shown that the health problem posed by energetic particle radiation is presently unresolved.
|Title:||Overview of energetic particle hazards during prospective manned missions to Mars|
|Journal or Publication Title:||Planetary and Space Science|
|In Open Access:||No|
|In ISI Web of Science:||Yes|
|Page Range:||pp. 123-132|
|Keywords:||Mars, Galactic cosmic radiation, Solar energetic particles, Manned missions|
|HGF - Research field:||Aeronautics, Space and Transport|
|HGF - Program:||Raumfahrt|
|HGF - Program Themes:||R FR - Forschung unter Weltraumbedingungen|
|DLR - Research area:||Raumfahrt|
|DLR - Program:||R FR - Forschung unter Weltraumbedingungen|
|DLR - Research theme (Project):||R - Vorhaben Strahlenbiologie|
|Institutes and Institutions:||Institute of Aerospace Medicine > Radiation Biology|
|Deposited By:||Kerstin Kopp|
|Deposited On:||06 Dec 2011 12:40|
|Last Modified:||26 Mar 2013 13:33|
Forests.org News Archive
Non-profit forests news links and archive of materials no longer on web provided on these terms to help find solutions and for posterity
Report: Large-scale forest biomass energy not sustainable
Large-scale production could sacrifice forest ecosystem integrity and actually lead to higher greenhouse gas emissions
Large-scale use of forest biomass for energy production may be unsustainable and is likely to increase greenhouse gas emissions in the long run, according to a new study.
The research was done by the Max-Planck Institute for Biogeochemistry in Germany, Oregon State University, and other universities in Switzerland, Austria and France. The work was supported by several agencies in Europe and the U.S. Department of Energy.
The results show that a significant shift to forest biomass energy production would create “a substantial risk of sacrificing forest integrity and sustainability with no guarantee that it would mitigate climate change,” according to the researchers.
Early assumptions that biomass energy production would be greenhouse-neutral, or even reduce greenhouse emissions “are based on erroneous assumptions,” the researchers said, adding that large-scale biomass energy production would have negative impacts on forest ecosystems, including ...
The height of a hill (in feet) is given by , where is the distance north, is the distance east of South Hadley.
(a) Where is the top of the hill located?
So . Then what? Set it equal to ?
(b) How high is this hill?
This is the magnitude of ?
(c) How steep is the slope (in feet per mile) at a point mile north and one mile east of South Hadley? In what direction is the slope steepest at that point?
Plug the values into the gradient and find its magnitude?
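The inline formulas were stripped out of this page, so the specific height function cannot be recovered; for a generic height function h(x, y), though, the standard procedure the questions point toward is:

```latex
% (a) The summit is a critical point of h: set the gradient to zero
%     and solve the resulting pair of equations simultaneously.
\nabla h = \left( \frac{\partial h}{\partial x}, \frac{\partial h}{\partial y} \right) = \mathbf{0}
\quad\Longrightarrow\quad
\frac{\partial h}{\partial x} = 0, \qquad \frac{\partial h}{\partial y} = 0

% (b) The height of the hill is h itself evaluated at that critical
%     point (x_0, y_0); it is not a gradient magnitude, since the
%     gradient vanishes at the summit.
h_{\mathrm{top}} = h(x_0, y_0)

% (c) At any other point (x_1, y_1), the slope is the magnitude of
%     the gradient there, and the slope is steepest in the direction
%     of the gradient itself.
\text{slope} = \left\lVert \nabla h(x_1, y_1) \right\rVert,
\qquad
\hat{u}_{\text{steepest}} = \frac{\nabla h(x_1, y_1)}{\left\lVert \nabla h(x_1, y_1) \right\rVert}
```

So in (a) yes, set the gradient equal to zero; in (b) the answer is h at the summit, not a magnitude (the gradient is zero there); in (c) the proposed approach is right: evaluate the gradient at the given point, take its magnitude for the slope and its direction for the steepest ascent.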
Ghosts of Tsunamis Past
Brian Atwater is one of those people whose surname coincidentally fits his or her line of work. As a geologist for the U.S. Geological Survey, he studies earthquakes and tsunamis of the past few thousand years. Doing this often requires him to get wet along shorelines, tidal marshes, and river deltas to investigate the residue of these catastrophes, buried where land meets sea.
For tsunami researchers, witnessing an event as wide-reaching and destructive as last winter’s Indian Ocean tsunami is exceedingly rare. This makes Atwater’s soggy forays into geologic history quite valuable. By unearthing sediment deposits tsunamis leave behind, scientists can study the waves’ origins, extent, and frequency. Such work helps avert surprises from locations that have the geological apparatus to produce a tsunami, but haven’t produced one in written history, at least.
History in Layers
Science Bulletins met up with Atwater at the Copalis River estuary on the Pacific coast of Washington State. The estuary is one of dozens that sit above an enormous fault plane that slants beneath the Pacific coast. Called the Cascadia subduction zone, the fault stretches 1,100 km from Vancouver Island, B.C., to northern California. One tectonic plate descends beneath another here. As they abrade, the overriding continental plate sticks and warps atop the subducting one. Strain builds over time. When the plates suddenly slip free, an earthquake occurs.
At this moment, the continental plate springs upward, and can launch a massive volume of water: a tsunami. Thus “unstuck,” the plate settles, now lower in elevation than it was prior to the earthquake.
Despite the fault’s existence, written records from the Pacific Northwest coast, which began about 200 years ago, are silent on the subject of large earthquakes and tsunamis generated from them. “I first went to the Copalis River in the spring of 1986,” says Atwater. “At that time, very few scientists believed that large earthquakes and tsunamis could happen here, and nobody had demonstrated that they had.” But Atwater became one of the first researchers to find geologic proof.
Atwater explains as he hacks into a marshy bank of the Copalis River with a World War II folding shovel. “What I’m unearthing is a record of a catastrophe from 300 years ago,” he says. Atwater points to the lowest of three distinct bands of sediment stacked in the bank. “Around 1700, this salt marsh, represented by the soil here, was up at the level of the present marsh above us. But then the land abruptly dropped a meter or two during an earthquake.” He traces the 10 cm thick layer of sand above the ancient marsh. “Then comes the tsunami, and lays down a sheet of sand. The sea was free to come in because the continental plate had dropped, so the ocean then laid down this top layer of mud.”
In 1986, Atwater surveyed a sand sheet that he suspected a tsunami washed into Willapa Bay, Washington. Sand deposits had been associated with only one tsunami previously, the 1960 event in the southeastern Pacific that affected Chile and Japan. Jody Bourgeois, a sedimentologist at the University of Washington, investigated further. “We started with little to go on,” she recalls. “We had to show that the sand layer was from a surge of a tsunami wave, and not from a high tide or a storm.” A team of colleagues mapped the sand and land-level changes along coastal Washington State and compared the results with deposits and eyewitness accounts of the 1960 tsunami. The picture began to come together.
Atwater and other researchers later found key clues in eerie stands of Western red cedars bordering the Copalis River and three other estuaries in the state. “We call it a ghost forest because the trees have been standing dead for centuries,” Atwater says. It’s a sign not of a tsunami, but of an earthquake capable of causing one. “After the earthquake drops the continental plate, saltwater can come in at high tide and routinely cover the forest floor, killing the trees,” he explains.
A Restless Record
At the start of World War II, a Japanese geographer looking at municipal documents from Japan’s Pacific coast noted a mention of a destructive tsunami on the evening of January 26, 1700. In 1996, Japanese researchers proposed that this event and the one that had affected the Pacific Northwest were one and the same. Tree-ring dating of the Western red cedars corroborated this date: the trees had all died at once, somewhere between August 1699 and May 1700. Recent analysis of flooding and other damage from the Japanese documents show that the tsunami’s parent earthquake was a gargantuan magnitude 9.
The mounting evidence since 1986 has convinced Earth scientists that the Pacific Northwest’s 1700 earthquake was just the most recent tsunami-generating quake of a surprisingly fitful fault. Additional deposit data have disclosed seven great Cascadia earthquakes over the past 3,500 years, with an average interval of 500 years.
As the geologic record unfolds, other researchers such as Ruth Ludwin, a seismologist at the Pacific Northwest Seismograph Network, are digging up oral traditions of Northwest Coast native communities that existed previous to written records. “It turns out there are stories amongst those tribes that are consistent with historical earthquakes and tsunamis on the coast of Cascadia,” says Bourgeois.
As the continental plate sluggishly gathers strain at Cascadia, there is no doubt that another massive earthquake and tsunami will roil the Pacific Northwest. When is unclear. To find out how science is reducing the risk of surprise and widespread damage, follow the essays about tsunami computer modeling and measurement in real time.
Natural Resources Canada: Giant Megathrust Earthquakes
A stellar introduction to the geology of these tsunami-producing disasters.
The Orphan Tsunami of 1700
Exhaustive historic Japanese maps and writings reveal the mystery of the 1700 tsunami.
Cascadia Megathrust Earthquakes in Pacific Northwest Indian Legend
Ruth Ludwin's research into native stories about the area's large historical earthquakes and tsunamis.
More About This Resource...
Supplement a study of earth science with a classroom activity drawn from this Science Bulletin essay.
- Ask students what they know about earthquakes. What causes them? Have scientists identified all the locations that have the potential for a sizable earthquake?
- Have them read the essay (either online or a printed copy).
- Have them write a brief reaction to the article, focusing on what they learned about the limits of written history when it comes to identifying fault planes.
to fling the limbs and body, as in making efforts to move; to struggle, as a horse in the mire, or as a fish on land; to roll, toss, and tumble; to flounce. They have floundered on from blunder to blunder. (Sir W. Hamilton)
The common English flounder is Pleuronectes flesus. There are several common American species used as food, such as the smooth flounder (P. glabra); the rough or winter flounder (P. americanus); the summer flounder, or plaice (Paralichthys dentatus), of the Atlantic coast; and the starry flounder (Pleuronectes stellatus).
The pattern of seasonal temperature odds across northern Australia is a result of recent warm conditions in the Indian Ocean and an increasing level of warmth in the Pacific. The Pacific has had the greater influence on this occasion.

The chance that the average July-September maximum temperature will exceed the long-term median maximum temperature ranges from 60 to 70% across most of the southern halves of both the NT and Queensland. In the southeast inland of Queensland the chance approaches 75%.

This means that for every ten years with ocean patterns like the current, about six or seven years would be expected to be warmer than average during the September quarter over this broad zone stretching west-east across northern Australia, with about three or four years being cooler.

The chance of a higher than normal seasonal average is between 45 and 60% in the far north.

Outlook confidence is related to how consistently the Pacific and Indian Oceans affect Australian temperatures. During the September quarter, history shows this effect on maximum temperatures to be moderately consistent in both the NT and Queensland (see background information).

The outlook for mean minimum temperatures over July-September shows the chance of a seasonal average above the long-term median minimum temperature is between 40 and 60% over northern Australia.

History shows the oceans' effect on minimum temperatures in the July to September period to be moderately consistent over Queensland and the east of the NT. Elsewhere the effect is only weakly or very weakly consistent.
I want to implement a FIFO (first in, first out) approach in Java. The requirements are as follows:

1) I receive string data in chunks after some time interval.
2) I want to write it into a FIFO buffer of fixed size, i.e., only a fixed number of items can be stored (say 5).
3) Then I will pick items one by one from the buffer for further processing.
4) After I pick an item from the buffer it should be deleted, and the space can be reused for further data.

Can anybody guide me on how to achieve this? Thanks in advance.
"JavaRanch, where the deer and the Certified play" - David O'Meara
Joined: Mar 05, 2001
Thanks Cindy, but can you elaborate on this? My understanding of BufferedInputStream is that it lets me take input faster. How do I achieve a FIFO approach with a fixed set of values?
Hi, you need an array to put the input into (the size of the array is your fixed size) and two indexes: one for the position to write to and one for the position to read from. Then you also have to check some conditions: is there space to write? Is there something to read? Use wait() and notify()/notifyAll() here, and note that the consumer and producer must be different threads in this scenario to prevent deadlocks. Perhaps this is a little more than you were looking for, but this is FIFO as I understand it.
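A minimal sketch of the approach in the reply above, assuming string payloads and a capacity of 5 as in the original question (the class and method names here are invented for illustration):

```java
// Bounded FIFO buffer: an array, two indexes, and wait()/notifyAll().
class BoundedFifo {
    private final String[] buf;
    private int readIdx = 0, writeIdx = 0, count = 0;

    BoundedFifo(int capacity) { buf = new String[capacity]; }

    public synchronized void put(String s) throws InterruptedException {
        while (count == buf.length) wait();   // block while the buffer is full
        buf[writeIdx] = s;
        writeIdx = (writeIdx + 1) % buf.length;
        count++;
        notifyAll();                          // wake any waiting consumer
    }

    public synchronized String take() throws InterruptedException {
        while (count == 0) wait();            // block while the buffer is empty
        String s = buf[readIdx];
        buf[readIdx] = null;                  // delete the item; slot is reusable
        readIdx = (readIdx + 1) % buf.length;
        count--;
        notifyAll();                          // wake any waiting producer
        return s;
    }

    public static void main(String[] args) throws Exception {
        BoundedFifo fifo = new BoundedFifo(5);
        // Producer thread feeds 8 chunks through a 5-slot buffer.
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 8; i++) fifo.put("chunk-" + i);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        // Consumer (main thread) drains them in arrival order.
        for (int i = 1; i <= 8; i++) System.out.println(fifo.take());
        producer.join();
    }
}
```

In later JDKs the same behaviour comes ready-made as java.util.concurrent.ArrayBlockingQueue, whose put() and take() methods block in exactly this way.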
docs.oracle.com wrote: public void print(String s)
Print a string. If the argument is null then the string "null" is printed. Otherwise, the string's characters are converted into bytes according to the platform's default character encoding, and these bytes are written in exactly the manner of the write(int) method.
s - The String to be printed
Ritesh raushan wrote: but I didn't understand... this is a has-a relationship, but where is the object of the PrintStream class created?
What is it that you did not understand? It simply says that the System class has an object "out" of the PrintStream class. You have already shown the code there. If you are worried about the initialization of "out": it is a static field that the Java runtime initializes at startup.
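The has-a relationship is easy to see by constructing a PrintStream of your own; this sketch also demonstrates the null-handling quoted from the javadoc above (the ByteArrayOutputStream sink is just an illustrative choice):

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class HasADemo {
    public static void main(String[] args) {
        // System.out is simply a PrintStream instance held by the System
        // class (a has-a relationship); we can build our own PrintStream
        // over any OutputStream in the same way.
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        PrintStream out = new PrintStream(sink);

        out.print((String) null);   // per the javadoc, the string "null" is printed
        out.flush();
        System.out.println(sink.toString());

        // The familiar call is just a method on the PrintStream object
        // that System happens to hold:
        System.out.println("printed via System.out");
    }
}
```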
I was using while (true) to control some code that was supposed to run for a designated amount of time. I noticed the processor load spike by 40-60%. As there was no input to slow down execution, it was tearing through the code in the while loop and repeating as quickly as possible. I looked up a way to slow it down and discovered Sleep(milliseconds) in Windows.h. Is there either: a) a better way to go about controlling the rate of execution; or b) a cross-platform variant of Sleep(time)?
Generally sleep() is not recommended. The alternative is to use events and signals to control the flow of a program. The reason for this is that programs should not be 'hanging' themselves but instead allow the user to do other tasks while the program is waiting for some new event to process.
while(true) is a brutal way to pause a program as you've seen. Depending on how you've implemented your 'timer' the actual sleep time could have been dependent on processor speed and the actual hardware you use instead of the equivalent seconds you programed it for.
All this said, there are still applications for a sleep() function, and it is much better than most while variants. Since sleep functions make use of the system time they are OS-specific, but most operating systems do have some sort of sleep() function; it's just a matter of finding the right header.
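The thread is about C++, but the advice in the replies is language-neutral; as a hedged sketch in Java, a fixed-rate task is better expressed through a scheduler than a spin loop (C++11 later standardized std::this_thread::sleep_for for the portable-sleep half of the question). The tick count and period below are arbitrary illustrative choices:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RateControl {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch done = new CountDownLatch(5);

        // The scheduler runs the task every 50 ms; between runs the
        // thread sleeps instead of spinning at 100% CPU.
        ses.scheduleAtFixedRate(() -> {
            System.out.println("tick " + done.getCount());
            done.countDown();
        }, 0, 50, TimeUnit.MILLISECONDS);

        done.await();        // main thread blocks until 5 ticks have run
        ses.shutdownNow();
        System.out.println("done");
    }
}
```

The same structure (a timer or condition variable driving the work, rather than a hot loop) is what the earlier reply means by "events and signals".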
Review of existing Red Fox, Feral Cat, Feral Rabbit, Feral Pig and Feral Goat control in Australia. II. Information Gaps
Ben Reddiex, David M. Forsyth.
Department of the Environment and Heritage, 2004
7. Results and Discussion (continued)
The first stage of this review (Reddiex et al. 2004) showed that there was little reliable knowledge about the benefits of feral rabbit control for native species and ecological communities. In contrast, there is some evidence of the impacts of rabbits on native species and ecological communities for rangelands and higher rainfall areas (see Williams et al. 1995). Feral rabbits are believed to impact on native fauna via direct competition for resources and through behavioural interactions such as exclusion of native animals from feeding areas (Williams et al. 1995). However, few studies have experimentally investigated these potential impacts (but see Robley et al. 2002).
There are reliable methods for estimating the relative abundance (i.e., spotlight counts; Caley and Morley 2002) and absolute abundance of feral rabbits (i.e., mark-recapture; Twigg et al. 2000) in most habitat types, and the effectiveness and costs of control are well known (Williams et al. 1995).
There is limited information on the benefits of feral rabbit control for native species and ecological communities. Studies that have investigated the impacts of feral rabbits on pasture composition and biomass have largely focused on modified agricultural landscapes where few threatened native species are present (e.g., Gooding 1955; Myers and Poole 1963; Croft et al. 2002). The impact of feral rabbits on native plant species has largely been inferred from exclosure studies (e.g., Lange and Graham 1983; Leigh et al. 1989; Henzell 1991). The main limitation of such studies when attempting to infer benefits of feral rabbit control is that eradication is not feasible in mainland areas of Australia (i.e., exclosures have feral rabbit densities that are not possible via conventional control).
In rangelands, the current replacement rate of many shrubs and trees is insufficient to prevent their loss in the long-term. Lange and Graham (1983) studied feral rabbit browsing of arid zone acacia (Acacia spp.) seedlings when feral rabbits were at low densities, and found that only seedlings that were protected from feral rabbits and sheep showed good growth. Several other studies have indicated that feral rabbits may prevent regeneration of many shrub and tree species (e.g., Johnson and Baird 1970; Friedel 1985; Auld 1990; Henzell 1991). In the Gammon Range National Park in South Australia, Henzell (1991) reported that feral rabbits were a critical factor in determining mulga regeneration because they killed nearly all of the seedlings, and Foran et al. (1985) found the same response for Acacia kempeana seedlings. In a replicated field experiment, Mutze et al. (1997) reported that feral rabbit control resulted in higher levels of recruitment of the arid zone shrubs of moderate palatability in South Australia. However, it is extremely difficult to undertake field experiments to assess the benefits of feral rabbit control for regeneration in rangelands as germination and establishment of vegetation in rangelands may only occur at time intervals of 5-50 years, mainly as a response to rainfall (Ireland and Andrew 1992; Williams et al. 1995).
In the Coorong National Park in South Australia, Cooke (1987) reported that feral rabbits prevented regeneration of Acacia longifolia and the sheoak Allocasuarina verticillata. In Kosciusko National Park, where feral rabbits were excluded, two new species of forbs were found in seven years, but where feral rabbits were present there was a loss of nine forb species (Leigh et al. 1987). An exclosure study in the mallee in western Victoria found 17 indigenous species of ground layer plants inside feral rabbit exclosures after 2 years that were not present outside (Cochrane and McDonald 1966). However, other herbivores were present in the study area.
The benefits of a reduction in feral rabbit densities resulting from RHD have been monitored at a number of sites (>10) across Australia (Sandell and Start 1999). Despite most of the sites only being monitored for two years post-RHD, all but one of the sites found evidence of native vegetation recovery as a result of reduced feral rabbit abundance (Sandell and Start 1999). The structure of vegetation has been reported to have improved due to regeneration of native trees and shrubs; however, floristic changes have been variable and dependent on climatic factors (the results for most of these sites are not available). Sandell (2002) found no evidence of widespread germination of woody seedlings, which is not surprising given the episodic nature of such regeneration in many environments.
Feral rabbits are a known or perceived threat for 84 species listed under the EPBC Act (Table 1): 13 mammals, 13 birds, 1 fish, 1 amphibian, 2 reptiles, and 54 plant species. Few of these species were identified in the above overview. The 54 plant species listed under the EPBC Act for which feral rabbits are a known or perceived threat appear to have that status because feral rabbits have been observed feeding on those species, because browse on those species has been attributed to feral rabbits, or because the species have shown a positive response in areas where feral rabbits are excluded. Hence, there is limited reliable information on the benefits of conventional feral rabbit control for nearly all of the species listed in the EPBC Act for which feral rabbits have been identified as a threat.
We consider that the greatest priority is understanding the benefits of feral rabbit control for native plant species/communities. The next priority would be to determine the indirect impact of feral rabbits on native fauna species.
We advocate an experiment that assesses the functional relationship between feral rabbit density and damage to a combination of native species for which feral rabbits are a known key threatening process (Environment Australia 1999b) and other common native species that feral rabbits may impact upon. Our preferred experimental design is a response surface experiment (Mead 1988), and uses large-scale enclosures to assess the impact of feral rabbit density on native plant species diversity and composition, including seedling survival of planted shrub/tree species. We believe that the alternative approach of comparing vegetation response between feral rabbit control programs and paired non-control areas is less desirable due to potential difficulties in maintaining the desired treatments over extended periods of time and over a large scale, and limited control of other herbivores. The proposed enclosures have the advantage of enabling accurate assessment of feral rabbit densities and therefore relationship to damage, but are also large enough to simulate broad acre conditions (note that enclosures could not be used to simulate broad acre conditions for feral goats and feral pigs).
We suspect that the benefits of differing feral rabbit densities on native vegetation will vary between rangelands and high-rainfall areas (Williams et al. 1995). We therefore suggest conducting the following experiment at sites in each of these two ecosystems. However, we encourage the adoption of this design at as many sites as possible throughout the feral rabbit range. Where possible sites should be selected where published information is available on the dynamics of feral rabbit populations, including changes in abundance of feral rabbits following conventional control, and their associated impacts on native vegetation.
The experimental design would be the same for both ecosystems. The experiment should use a randomised design (see Figure 4), with different feral rabbit densities as the treatments at each site. There should be a minimum of four treatments (i.e., enclosures) at each site.
Figure 4. Experimental design for understanding the relationship between feral rabbit density and damage to native plant species in two ecosystems.
Recommended feral rabbit densities should represent typical feral rabbit densities for the regions studied and for the prevailing environmental conditions, but should include a low density and low-medium density representative of sustained conventional control of feral rabbits, and a medium-high and high density which is representative of uncontrolled feral rabbit populations. Each enclosure should include a number of relatively small exclosures that act as experimental controls. The experiment aims to examine the relationship between feral rabbit control and damage; the densities within treatments should therefore be maintained in a way that reflects management practice. We suggest the following regimes: low density - remove 90-95% of feral rabbits once per year (small populations may be prone to extinction in enclosures, and may require intensive management/reintroduction); low-medium density - remove 70-80% of feral rabbits once per year; medium-high density - remove 40-50% of feral rabbits once per year; and high density - no removal.
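As an illustrative sketch only, the four removal regimes can be projected over time. The removal fractions come from the regimes suggested above; the starting density and the 50% within-year rate of increase are hypothetical, since the report does not specify population dynamics.

```python
# Fraction of feral rabbits removed once per year under each
# treatment (midpoints of the ranges suggested above).
REMOVAL = {
    "low": 0.925,          # 90-95% removed
    "low-medium": 0.75,    # 70-80% removed
    "medium-high": 0.45,   # 40-50% removed
    "high": 0.0,           # no removal
}

def project_density(start, removal, years, growth=0.5):
    """Project enclosure density under an annual removal regime.

    `growth` is a hypothetical within-year rate of increase; the
    report does not model population dynamics."""
    density = start
    for _ in range(years):
        density *= 1 + growth      # within-year increase (assumed)
        density *= 1 - removal     # single annual removal event
    return density

# Starting from a hypothetical index of 100, project four years:
for name, frac in REMOVAL.items():
    print(name, round(project_density(100, frac, years=4), 1))
```

Even a toy projection like this makes the design's point: the treatments diverge quickly, which is what allows the density-damage relationship to be estimated.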
The reliability of the inferences increases with the number of sites and the number of replicates within each site. However, as long as there are at least five sites in each region there can be a minimum of one experiment in each site (i.e., no replication within sites). Sites should be selected so that they include the plant species predicted to respond to feral rabbit control (either in abundance and/or condition) and where possible include EPBC Act listed species for which feral rabbits are a known or perceived threat (Table 1). All treatments within a site would be undertaken on adjacent areas (see Figure 4), and all treatments should have similar soil types and vegetation composition and structure at the commencement of the study.
The size of each treatment enclosure should be at least 4 ha, but if resources permit we encourage the size of each enclosure to be increased (we have costed this experiment based on 4 ha enclosures). Feral rabbits generally do not forage far from their warrens. Wood et al. (1987) reported an inverse relationship between distance from warrens and the intensity of feeding, with 800kg/ha of forage removed <12m from the warren, 220kg/ha 25 m from the warren and 150kg/ha at 100 m from the warren. Feral rabbit home range differs markedly from one environment to another (range 0.05-4.70 ha; Myers et al. 1994). Each enclosure will be fenced in a manner that prevents feral rabbits from moving outside their intended enclosure, and they will be fenced to a height (c. 1.8 m) that prevents entry of other herbivores that are likely to affect the species of interest (e.g., Henzell 1991; Grice and Barchia 1992). Several fence designs achieve these requirements (review in Long and Robley 2004). Predator control will need to be undertaken around all enclosures throughout the duration of the experiment (predator control has been included in the experiment costing).
Prior to the treatments being imposed, the vegetation within each enclosure should be sampled. Monitoring protocols for assessing grassland species composition and biomass are widely available (e.g., the dry-weight-rank technique, Mannetje and Haydock 1963; the modified step-point sampling technique, Cunningham 1975). Monitoring of plant composition, condition and biomass should be undertaken quarterly as there is pronounced seasonal variation in many grassland systems.
The impact of feral rabbit densities on the survival of shrubs/tree species would be assessed through monitoring the survival of planted seedlings in all enclosure treatments. As mentioned above, germination and establishment of vegetation in rangelands may only occur at time intervals of 5-50 years, mainly as a response to rainfall (Ireland and Andrew 1992; Williams et al. 1995). Therefore, it is unlikely that establishment of shrub/tree species will occur naturally in the enclosures during the timeframe of the study. The shrub/tree species selected will act as a proxy for species that feral rabbits are believed to prevent regeneration of (e.g., Acacia spp.; Williams et al. 1995) and therefore be similar in palatability and structure. However, we have also costed the addition of a simulated rainfall treatment to this experiment that may enable natural regeneration to occur (i.e., doubling the number of enclosures at each site). This would involve replicating the above enclosures at each site and randomly selecting one block to be irrigated.
The abundance of feral rabbits should be monitored throughout the study (quarterly) using mark-recapture methods. Enclosures should also be inspected at least every fortnight to ensure the fence has not been breached by feral rabbits or other herbivores, and to check for potential incursion routes (e.g., overhanging branches that may fall on fences, or proximity to creeks that might erode the fence).
Other covariates should be monitored at each site. For example, rainfall is thought to be important for the germination of some seeds. Hence, a response of feral rabbit control might not occur until a threshold soil moisture has been exceeded. Other covariates might include the abundance of small native herbivores that may enter the enclosures, presence of disease (e.g., myxomatosis and RHD), and temperature.
This design will enable benefit-cost analyses to be undertaken as it will provide a relationship between incremental pest density and incremental damage, without which cost-benefit analyses are tenuous (Fleming et al. 2001).
How long should the experiments run for? The answer will depend on the plant species monitored and the environmental conditions that occur during the experiment. And there is always the possibility of 'demonic intrusion' (e.g., destruction of enclosure fences) ruining even the best design. However, we believe that there should be at least 1 sample in all treatments prior to the commencement of the study to gather accurate baseline information on the response variables that are to be assessed and at least four years of monitoring before the experiment is reviewed to enable sampling of different seasonal conditions. This design also enables the treatments to be reversed (i.e., feral rabbit densities changed between enclosures). The key relationship is that between feral rabbit density and damage. We expect that at least five sites are needed to provide a reasonable confidence interval around this relationship in each ecosystem.
Until study sites are identified, the cost of the experiment can only be considered indicative (Table 4). We estimate that the start-up costs of the experiment for one ecosystem with five sites will be (including overheads) $490K (excludes the simulated rainfall treatment). The annual ongoing cost will be $320K.
|Item|Start-up (year 1) costs ($000)|Ongoing (year 2 and beyond) costs ($000)|Final year costs ($000)|
a) Excludes simulated rainfall treatment
b) Includes simulated rainfall treatment
1 Assumes 100% overheads, but not all organisations charge overheads.
2 Irrigation costs will depend upon the location of sites.
Systems of Three Variables
- Solve this system. And here we have three equations with three unknowns.
- And just so you have a way to visualize this,
- each of these equations would actually be a plane in three dimensions.
- And so you're actually trying to figure out where three planes in three dimensions intersect.
- I won't go into the details here, I'll focus more on the mechanics,
- but you can imagine if I were to draw a three dimensional space over here.
- Now all of a sudden we'll have an x, y, and z axes.
- So you can imagine that maybe this first plane
- and I'm not drawing it the way it might actually look.
- It might look something like that. (I'm just drawing part of the plane.)
- And maybe this plane over here.
- It intersects right over there and it comes popping out like this and then it goes behind it like that.
- It keeps going in every direction, I'm just drawing part of the plane.
- And maybe this plane over here, maybe it does something like this.
- Maybe it intersects over here and over here.
- And so it pops out like that and then it goes below it like that
- and then it goes like that. I'm just doing this for visualization purposes.
- And so the intersection of this plane - the x, y and z coordinates that would satisfy
- all three of these constraints the way I drew them - would be right over here.
- So that's what we're looking for. And a lot of times these three equations with three unknown systems
- will be inconsistent. You won't have a solution here, because it's very possible to have three planes
- that all don't intersect in one place. A very simple example of that is
- well, one, they could all be parallel to each other, or they could intersect each other but maybe they
- intersect each other in kind of a triangle, so maybe one plane looks like that, then another plane maybe
- pops out like that, goes underneath. And then maybe the third plane cuts in.
- It does something like this: where it goes into that plane
- and keeps going out like that, but it intersects this plane over here.
- So you see it kind of forms a triangle and they don't all intersect in one point
- so in this situation, you would have an inconsistent system. So with that out of the way, let's try to
- actually solve this system. And the trick here is to try to eliminate one variable at a time from all
- of the equations, making sure that you have the information from all three equations here
- so what we're going to do is we could maybe - it looks like the easiest to eliminate
- since we have a positive y and a negative y and then another positive y
- it seems like we can eliminate the Ys.
- We can add these two equations and come up with another equation
- that will only be in terms of x and z. And then we could use these two equations
- to come up with another equation that will only be in terms of x and z.
- But it will have all of the x and z constraint information embedded in it because
- we're using all three equations. So let's do that. So first let's add these two equations right over here.
- So we have x plus y minus three z is equal to negative ten.
- And x minus y plus two z is equal to three. So over here if we want to eliminate y, we can literally
- just add these two equations. So on the left hand side, x plus x is two x. Y plus negative y cancels out
- And then negative three z plus two z - that gives us just a negative z
- and then we have negative ten plus three, which is negative seven.
- So using these two equations we got
- two x minus z is equal to negative seven - just adding these two equations.
- Now let's do these two equations. And we can reuse this equation as long as
- we're using new information here. Now we're using the extra constraint of this bottom equation.
- So we have x minus y plus two z is equal to three.
- And we have two x plus y minus z is equal to negative six.
- If we want to eliminate the Ys, we can just add these two equations.
- So x plus two x is three x. Negative y plus y cancels out. Two z minus z - well that is just z.
- And that is going to be equal to three plus negative six, which is negative three.
- So if I add these two equations, I get three x plus z is equal to negative three. Now I have a system
- of two equations with two unknowns. This is a little bit more traditional of a problem. So let me write
- them over here. So we have two x minus z is equal to negative seven. And then we have three x plus z
- is equal to negative three and the way this problem is set up, it gets pretty simple pretty fast, because
- if we just add these two equations, the Zs cancel out. Otherwise if it didn't happen so naturally, we'd
- have to multiply one of these equations, or maybe both of them, by some scaling factor.
- But we can just add these two equations up.
- On the left hand side, two x plus three x is five x. Negative z plus z cancels out.
- Negative seven plus negative three - that is equal to negative ten.
- Divide both sides of this equation by five and
- we get x is equal to negative two. Now we can substitute back to find the other variables.
- Maybe we can substitute back into this equation to figure out what z must be equal to.
- So we have two times x. Two times negative two minus z is equal to negative seven.
- Or negative four minus z is equal to negative seven.
- We can add four to both sides of this equation and then we get
- negative z is equal to negative seven plus four, which is negative three.
- Multiply or divide both sides by negative one and you get z is equal to three. And now we can go and
- substitute back into one of these original equations. So we have x. We know x is negative two.
- So we have negative two plus y, minus three times z.
- Well, we know z is three (so minus three times three)
- should all be equal to negative ten. And now we just solve for y.
- So we get negative two plus y minus nine is equal to negative ten. And so negative two minus nine,
- that's negative eleven. So we have
- y minus eleven is equal to negative ten. And then we can add eleven to both
- sides of this equation. And we get y is equal to negative ten plus eleven, which is one.
- So we're done!
- We've got x is equal to negative two. Z is equal to three and y is equal to one.
- Now I can actually go back and check it.
- Verify that this x, y and z works for all three constraints
- that this three dimensional coordinate lies on all three planes.
- So let's try it out. We've got x is negative two, z is three, y is one.
- So if we substituted - let me do it into each of them - so in this first equation
- that means that we have negative two plus one (remember y was equal to one).
- Let me write it over here - y is equal to one, x is equal to negative two, z is equal to three.
- That was the result we got. Yup, that's the result we got.
- So when we test it into this first one, you have negative two plus one minus three times three.
- So minus nine. This should be equal to negative ten. And it is.
- Negative two plus one is negative one, minus nine is negative ten.
- So it works for the first one. Let's try it for the second equation right over here.
- So we have negative two minus y (so, minus one) plus two times z (so, z is three, so two times three)
- So, plus six needs to be equal to three.
- So this is negative three plus six, which is indeed equal to three.
- So this satisfies the second equation. And then we have the last one right over here!
- We have two times x, so two times negative two, which is negative four. Negative four.
- Plus y, so plus one. Minus z, so minus three. Minus three.
- Needs to be equal to negative six. Negative four plus one is negative three,
- and then you subtract three again. It equals negative six.
- So it satisfies all three equations, so we can feel pretty good about our answer.
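The elimination worked through in the video can be checked in a few lines of code; the comments mirror the transcript's steps.

```python
# The system from the video:
#    x + y - 3z = -10
#    x - y + 2z =   3
#   2x + y -  z =  -6

# eq1 + eq2 eliminates y:      2x - z = -7
# eq2 + eq3 eliminates y:      3x + z = -3
# Adding those eliminates z:   5x = -10
x = -10 / 5           # x = -2
z = 2 * x + 7         # back-substitute into 2x - z = -7  ->  z = 3
y = -10 - x + 3 * z   # back-substitute into x + y - 3z = -10  ->  y = 1

print(x, y, z)  # -> -2.0 1.0 3.0

# Check the point against all three original planes, as the video does:
assert x + y - 3 * z == -10
assert x - y + 2 * z == 3
assert 2 * x + y - z == -6
```

The three assertions are exactly the verification step at the end of the transcript: the coordinate (-2, 1, 3) lies on all three planes.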
void *dlopen(const char *file, int mode);
The dlopen() function shall make an executable object file specified by file available to the calling program. The class of files eligible for this operation and the manner of their construction are implementation-defined, though typically such files are executable objects such as shared libraries, relocatable files, or programs. Note that some implementations permit the construction of dependencies between such objects that are embedded within files. In such cases, a dlopen() operation shall load such dependencies in addition to the object referenced by file. Implementations may also impose specific constraints on the construction of programs that can employ dlopen() and its related services.
A successful dlopen() shall return a handle which the caller may use on subsequent calls to dlsym() and dlclose(). The value of this handle should not be interpreted in any way by the caller.
The file argument is used to construct a pathname to the object file. If file contains a slash character, the file argument is used as the pathname for the file. Otherwise, file is used in an implementation-defined manner to yield a pathname.
If the value of file is 0, dlopen() shall provide a handle on a global symbol object. This object shall provide access to the symbols from an ordered set of objects consisting of the original program image file, together with any objects loaded at program start-up as specified by that process image file (for example, shared libraries), and the set of objects loaded using a dlopen() operation together with the RTLD_GLOBAL flag. As the latter set of objects can change during execution, the set identified by handle can also change dynamically.
Only a single copy of an object file is brought into the address space, even if dlopen() is invoked multiple times in reference to the file, and even if different pathnames are used to reference the file.
The mode parameter describes how dlopen() shall operate upon file with respect to the processing of relocations and the scope of visibility of the symbols provided within file. When an object is brought into the address space of a process, it may contain references to symbols whose addresses are not known until the object is loaded. These references shall be relocated before the symbols can be accessed. The mode parameter governs when these relocations take place and may have the following values:
Any object loaded by dlopen() that requires relocations against global symbols can reference the symbols in the original process image file, any objects loaded at program start-up, from the object itself as well as any other object included in the same dlopen() invocation, and any objects that were loaded in any dlopen() invocation and which specified the RTLD_GLOBAL flag. To determine the scope of visibility for the symbols loaded with a dlopen() invocation, the mode parameter should be a bitwise-inclusive OR with one of the following values:
If neither RTLD_GLOBAL nor RTLD_LOCAL are specified, then an implementation-defined default behavior shall be applied.
If a file is specified in multiple dlopen() invocations, mode is interpreted at each invocation. Note, however, that once RTLD_NOW has been specified all relocations shall have been completed rendering further RTLD_NOW operations redundant and any further RTLD_LAZY operations irrelevant. Similarly, note that once RTLD_GLOBAL has been specified the object shall maintain the RTLD_GLOBAL status regardless of any previous or future specification of RTLD_LOCAL, as long as the object remains in the address space (see dlclose() ).
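On POSIX systems, Python's ctypes module is a thin wrapper over dlopen(), which makes these semantics easy to demonstrate. This is a sketch assuming a Unix-like system where libc symbols such as strlen are loaded at program start-up.

```python
import ctypes

# CDLL(None) corresponds to dlopen(0, ...): a handle on the global
# symbol object -- the program image plus objects loaded at start-up.
# RTLD_GLOBAL makes the loaded object's symbols available for
# relocation of subsequently loaded objects, as described above.
handle = ctypes.CDLL(None, mode=ctypes.RTLD_GLOBAL)

# Attribute access performs a dlsym()-style lookup: strlen comes from
# libc, loaded at program start-up, so it is visible via this handle.
strlen = handle.strlen
strlen.argtypes = [ctypes.c_char_p]
strlen.restype = ctypes.c_size_t

print(strlen(b"dlopen"))  # -> 6
```

Note that, as the text above states, the set of symbols reachable through this global handle changes dynamically as further objects are loaded with RTLD_GLOBAL.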
Symbols introduced into a program through calls to dlopen() may be used in relocation activities. Symbols so introduced may duplicate symbols already defined by the program or previous dlopen() operations. To resolve the ambiguities such a situation might present, the resolution of a symbol reference to symbol definition is based on a symbol resolution order. Two such resolution orders are defined: load or dependency ordering. Load order establishes an ordering among symbol definitions, such that the definition first loaded (including definitions from the image file and any dependent objects loaded with it) has priority over objects added later (via dlopen()). Load ordering is used in relocation processing. Dependency ordering uses a breadth-first order starting with a given object, then all of its dependencies, then any dependents of those, iterating until all dependencies are satisfied. With the exception of the global symbol object obtained via a dlopen() operation on a file of 0, dependency ordering is used by the dlsym() function. Load ordering is used in dlsym() operations upon the global symbol object.
When an object is first made accessible via dlopen() it and its dependent objects are added in dependency order. Once all the objects are added, relocations are performed using load order. Note that if an object or its dependencies had been previously loaded, the load and dependency orders may yield different resolutions.
The symbols introduced by dlopen() operations and available through dlsym() are at a minimum those which are exported as symbols of global scope by the object. Typically such symbols shall be those that were specified in (for example) C source code as having extern linkage. The precise manner in which an implementation constructs the set of exported symbols for a dlopen() object is specified by that implementation.
If file cannot be found, cannot be opened for reading, is not of an appropriate object format for processing by dlopen(), or if an error occurs during the process of loading file or relocating its symbolic references, dlopen() shall return NULL. More detailed diagnostic information shall be available through dlerror() .
No errors are defined.
The following sections are informative.
dlclose(), dlerror(), dlsym(), the Base Definitions volume of IEEE Std 1003.1-2001, <dlfcn.h>
Assessing the ecological importance of clouds has substantial implications for our basic understanding of ecosystems and for predicting how they will respond to a changing climate. This study was conducted in a coastal Bishop pine forest ecosystem that experiences regular cycles of stratus cloud cover and inundation in summer. The study concludes that clouds are important to the ecological functioning of these coastal forests, providing summer shading and cooling that relieve pine and microbial drought stress as well as regular moisture inputs that elevate plant and microbial metabolism.
Mariah S. Carbone, A. Park Williams, Anthony R. Ambrose, Claudia M. Boot, Eliza S. Bradley, Todd E. Dawson, Sean M. Schaeffer, Joshua P. Schimel, Christopher J. Still
Global Change Biology, November 7, 2012 (online)
UCSB press release (includes video)
Following is a sample of the media coverage of this study:
Red Orbit: Climate Change Could Affect Entire Forest Ecosystems
Apr. 9, 2008: The Texas Petawatt laser reached greater than one petawatt of laser power on Monday morning, March 31, making it the highest-powered laser in the world, said Todd Ditmire, a physicist at The University of Texas at Austin. The Texas Petawatt is the only operating petawatt laser in the United States.
Ditmire says that when the laser is turned on, it has the power output of more than 2,000 times the output of all power plants in the United States. (A petawatt is one quadrillion watts.) The laser is brighter than sunlight on the surface of the sun, but it only lasts for an instant, a 10th of a trillionth of a second (0.0000000000001 second).
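A pulse that powerful can still carry a modest total energy precisely because it is so brief. Multiplying the article's two figures gives a rough order of magnitude; this is illustrative arithmetic, not an official specification of the laser.

```python
power_watts = 1.0e15    # one petawatt, the quoted threshold
duration_s = 1.0e-13    # "a 10th of a trillionth of a second"

# Energy = power x time
energy_joules = power_watts * duration_s
print(energy_joules)    # on the order of 100 J
```

So the extreme brightness comes from concentrating a quite ordinary amount of energy into an extraordinarily short instant.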
Ditmire and his colleagues at the Texas Center for High-Intensity Laser Science will use the laser to create and study matter at some of the most extreme conditions in the universe, including gases at temperatures greater than those in the sun and solids at pressures of many billions of atmospheres.
This will allow them to explore many astronomical phenomena in miniature. They will create mini-supernovas, tabletop stars and very high-density plasmas that mimic exotic stellar objects known as brown dwarfs.
"We can learn about these large astronomical objects from tiny reactions in the lab because of the similarity of the mathematical equations that describe the events," said Ditmire, director of the center.
Such a powerful laser will also allow them to study advanced ideas for creating energy by controlled fusion.
The Texas Petawatt was built with funding provided by the National Nuclear Security Administration, an agency within the U. S. Department of Energy.
Florida is the state where the most lightning flashes strike the ground. However, the number of injuries and deaths caused by lightning do not always occur in the same place. For example, Colorado is ranked 24th in the number of lightning strikes but is ranked 10th in the number of deaths caused by lightning. This is because a lot of people hike and camp in the exposed, lightning-prone mountains in the western half of the state.
Remember, if you hear thunder, GO INDOORS and remain there at least 30 minutes after the LAST clap of thunder is heard.
To discover more about lightning, go to JetStream - an Online School for Weather.
The fuel in all the 3 Units is thought to have at least partially melted down despite pumping sea water and boric acid into the Units
The crisis at the three Fukushima Daiichi nuclear power stations did not come from buildings collapsing due to the March 11 earthquake of magnitude 9 but from power failure following the quake. The tsunami knocked out the generators that produced the power. Lack of power in turn caused the cooling systems of the reactors to fail.
The Fukushima nuclear reactor 1 went critical in March 1971 and is a 460 MW reactor. Unit-2 and Unit-3 are 784 MW each and went critical in July 1974 and March 1976 respectively. All three are Boiling Water Reactors (BWR) and use demineralised water for cooling the nuclear fuel.
The fuel, in the form of pellets, is kept inside a casing called cladding. The cladding is made of zirconium alloy, and it completely seals the fuel. Fuel pins in the form of bundles are kept in the reactor core. Heat is generated in the reactor core through a fission process sustained by chain reaction.
The fuel bundles are placed in such a way that the coolant can easily flow around the fuel pins. The coolant never comes in direct contact with the fuel as the fuel is kept sealed inside the zirconium alloy cladding. The coolant changes into steam as it cools the hot fuel. It is this steam that generates electricity by driving the turbines.
Not all of the heat produced by nuclear fission is used for producing electricity. The efficiency of a power plant, nuclear included, is not 100 per cent; for a nuclear power plant it is 30-35 per cent. “About 3 MW of thermal energy is required to produce 1 MW of electrical energy. Hence for the 460 MW Unit-1, 1,380 MW of thermal energy is produced,” said Dr. K.S. Parthasarathy, former Secretary, Atomic Energy Regulatory Board, Mumbai. “This heat has to be removed continuously.”
In the case of the Fukushima units, demineralised water is used as coolant. Uranium-235 is used as fuel in Unit-1 and Unit-2, and MOX (a mixture of oxides of Uranium-Plutonium-239) is used as fuel in Unit-3.
Since a very high amount of heat is generated, the flow of the coolant should never be disrupted. But on March 11, pumping of the coolant failed as even the diesel generator failed after an hour's operation.
Though the power producing fission process was stopped by using control rods that absorbed the neutrons immediately after the quake, the fuel still contains fission products such as iodine-131 and caesium-137 and activation products such as plutonium-239.
“These radionuclides decay at different timescales, and they continue to produce heat during the decay period,” Dr. Parthasarathy said.
The heat produced by radioactive decay of these radionuclides is called “decay heat.”
“Just prior to the shut down of the reactor the decay heat is 7 per cent. It reduces exponentially, to about 2 per cent in the first hour. After one day, the decay heat is about 1 per cent. Then it reduces very slowly,” he said.
While the uranium fission process can be stopped and heat generation can be halted, there is no way of stopping radioactive decay of the fission products.
Apart from the original heat, the heat produced continuously by the fission products and activation products has to be removed even after the uranium fission process has been stopped.
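Taken together, the quoted numbers give a rough sense of scale. A minimal sketch (illustrative only — it simply combines the article's round figures):

```python
# Back-of-the-envelope estimate of post-shutdown heat removal, using
# the article's round figures (not a reactor-physics calculation).

ELECTRICAL_MW = 460          # Fukushima Unit-1 electrical output
THERMAL_PER_ELECTRICAL = 3   # "about 3 MW thermal per 1 MW electrical"

thermal_mw = ELECTRICAL_MW * THERMAL_PER_ELECTRICAL  # 1,380 MW thermal

# Decay heat as a fraction of full thermal power, per Dr. Parthasarathy
decay_fraction = {
    "just before shutdown": 0.07,  # ~7%
    "after one hour":       0.02,  # ~2%
    "after one day":        0.01,  # ~1%
}

for when, frac in decay_fraction.items():
    print(f"{when}: ~{thermal_mw * frac:.0f} MW of heat still to remove")
```

Even a day after shutdown, roughly 1% of 1,380 MW — on the order of 14 MW — must still be carried away continuously, which is why loss of cooling is so serious.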
Inability to remove this heat led to a rise in coolant temperature. According to the Nature journal, when the temperature reached around 1,000 degree C, the zirconium alloy that encased the fuel (cladding) probably began to melt or split apart. “In the process it reacted with the steam and created hydrogen gas, which is highly volatile,” Nature notes.
Though the pressure created by the hydrogen gas was reduced by controlled release, the massive build-up of hydrogen led to the explosions that blew off the roofs of the secondary confinement (the outer buildings around the reactor) in all three units (Unit-1, Unit-2 and Unit-3). The reactor core sits inside the primary containment.
But the real danger arises from fuel melting. This would happen following the rupture of the zirconium casing. “If the heat is not removed, the zirconium cladding along with the fuel would melt and become liquid,” Dr. Parthasarathy explained. The government has said that fuel rods in Unit-3 were likely already damaged.
Effect of melted fuel
Melted fuel is called “corium.” Since melted fuel is at a very high temperature it can even “burn through the concrete containment vessel.”
According to Nature, if enough melted fuel gathers outside the fuel assembly it can “restart the power-producing reactions, and in a completely uncontrolled way.”
What may result is a “full-scale nuclear meltdown.”
Pumping of sea-water is one way to reduce the heat and avoid such catastrophic consequences. The use of boric acid, which is an excellent neutron absorber, would reduce the chances of nuclear reactions restarting even if the fuel is found loose inside the reactor core. Both these measures have been resorted to in all three Units. Despite these measures, the fuel rods were found exposed in Unit-2 on two occasions.
Fate of reactor core
While the use of sea-water can prevent fuel melt, it makes the reactor core completely useless due to corrosion.
The case of Unit-4 is different from that of the other three units. Unlike Unit-1, 2 and 3, Unit-4 was under maintenance: its core had been taken out, and the spent fuel rods were kept in the cooling pond.
Following a decrease in the water level, the storage pond caught fire on March 15, possibly due to a hydrogen explosion. The radioactivity was released directly into the atmosphere.
Spent fuel fate unknown
It is not known if the integrity of the cladding has already been compromised and the fuel exposed. Since the core of a Boiling Water Reactor (BWR) is removed only about once a year, the number of spent rods in the pond will be large.
If the fuel is indeed exposed, the possibility of fuel melt is very likely. Though the fuel will be at a lower temperature than found inside a working reactor, there are chances of the fuel melting.
Since it does not have any containment, unlike the fuel found inside a reactor, the consequences of a fuel melt would be really bad. Radioactivity is released directly into the atmosphere. Radioactivity of about 400 milliSv/hour was reported at the site immediately after the fire.
The molar volume is equal to the atomic weight divided by the density.
The molar volume is also known as the atomic volume.
The standard SI unit is m3 per mole. Normally, however, molar volume is expressed in units of cm3. To convert quoted values to m3, divide by 1000000.
The molar volume depends upon density, phase, allotrope, and temperature. Values here are given, where possible, for the solid at 298 K.
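A minimal sketch of the definition above; the aluminium values are approximate handbook numbers, not taken from this page:

```python
def molar_volume(atomic_weight, density):
    """Molar (atomic) volume in cm^3/mol = atomic weight (g/mol) / density (g/cm^3)."""
    return atomic_weight / density

# Aluminium near 298 K, using approximate handbook values
v_cm3 = molar_volume(26.98, 2.70)
v_m3 = v_cm3 / 1_000_000          # divide by 1,000,000 to convert cm^3 to m^3
print(f"{v_cm3:.2f} cm^3/mol = {v_m3:.2e} m^3/mol")  # ~9.99 cm^3/mol
```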
11.28.12 - Forecasters could soon be better able to predict how intense tropical cyclones like Hurricane Sandy will be by analyzing relative-humidity levels within their large-scale environments, finds a new NASA-led study.
05.03.12 - With 2,378 spectral eyes measuring our atmosphere, the Atmospheric Infrared Sounder could be called a "monster" of weather and climate research.
12.14.06 - Two new NASA-funded studies of ozone in the tropics using NASA satellite data not previously available are giving scientists a fuller understanding of the processes driving ozone chemistry and its impacts on pollution and climate change.
08.28.06 - The 2005 hurricane season will long be remembered both for the record-breaking number of storms and a devastating hurricane named Katrina.
08.24.05 - NASA and the NOAA today outlined research that has helped to improve the accuracy of medium-range weather forecasts.
05.29.03 - As a scientist and an artist, Graeme Stephens strives to paint an accurate picture of clouds.
04.29.03 - Your weatherperson’s job just got a little easier, thanks to new data available from advanced weather instruments aboard NASA’s Aqua satellite.
04.22.02 - Aqua Spacecraft Launched, Ready To Study Earth's Water Cycle
04.22.02 - Aqua will make measurements of the Earth at the same time, all the time.
Northern Crayfish Frog
The northern crayfish frog grows to a length of nearly four inches. It has dark round spots surrounded by light borders on its chin, and the back is noticeably humped. This somewhat stubby frog lives in the southern half of Illinois and is closely associated with the hardpan clay soils south of the Shelbyville Moraine. This nocturnal frog spends the daytime hiding in crayfish or other animal burrows or under boards or logs in wet prairies, pastures or golf courses. It lays its eggs from early March to mid-April in flooded fields, farm ponds and small lakes. The call of the crayfish frog carries a considerable distance. It resembles a deep, roaring snore.
[Location: Location_Category='CONTINENT', Location_Type='ANTARCTICA', Detailed_Location='SOUTH VICTORIA LAND']
New Zealand International Transantarctic Scientific Expedition (NZ ITASE) - Climate variability measured from ice cores taken along the Victoria Land Coast
Entry ID: K049_1999_2008_NZ_1
Abstract: The climate of the Victoria Land Coast is created by the interacting influences of the Dry Valleys, East Antarctic Ice Sheet and the Ross Sea. Slight changes can significantly alter local weather patterns and as such a climate record of the area provides ideal opportunities to study rapid, high frequency climatic variations. International polar ice coring programmes (e.g. GISP and Vostok) have ... provided powerful new insights into Earth's climate back 400,000 years, from the diverse inventory of atmospheric information stored both within the ice and trapped air bubbles. To understand and predict the local response to anthropogenically induced global warming seen in these "global" ice cores, the focus of ice core research in Antarctica is moving to the acquisition of high-resolution regional paleoclimatic archives of annual-scale that overlap with and extend the instrumental records of the last 40 years back several thousand years. This has been a key motivation behind the US-led International Transantarctic Scientific Expedition (ITASE) of which New Zealand is a member.
The New Zealand project's objective is to recover a series of ice cores from glaciers along a 14-degree latitudinal transect of the climatically sensitive Victoria Land coastline and thereby directly contribute a critical dataset to ITASE. The NZ ITASE sites (including two sites at Victoria Lower Glacier, Baldwin Glacier, Wilson Piedmont Glacier, Polar Plateau, Evans Piedmont Glacier, Mt Erebus Saddle, Whitehall Glacier, Skinner Saddle and Gawn Ice Piedmont Glacier with future sites planned at Beardmore Glacier, Roosevelt Island, and coastal sites in West Antarctica) have been chosen to capture and quantify the steep climate gradients from the Scott Coast to the Polar Plateau, the local climate system of the McMurdo Dry Valleys, and the effect of altitude within the Transantarctic Mountains. Coastal sites are especially climate sensitive and show potential to archive local, rapid climate change events that are subdued or lost in the 'global' inland ice core records.
Investigations and datasets include: GPR/GPS surveying to map bedrock topography, internal glacial structure and glacier topography; firn and ice cores to quantify the variability of the climate record, with analysis of temperature, crystal structure, crystal geometry, snow density, melt, dust/tephra occurrence, gas content, porosity, and gas bubble size and geometry; snow profiles analysed for ion content, isotopic ratios, dust content, beta radioactivity, chemical properties and mineralogy to build transfer functions with the meteorological record; borehole temperature and light penetration; submergence velocity measurements to analyse the mass balance of the glaciers; and meteorological data and ablation measurements.
Start Date: 1999-11-24
Paleo Temporal Coverage
Latitude Resolution: 1:1 to 1:3 million
Longitude Resolution: 1:1 to 1:3 million
BIOSPHERE > TERRESTRIAL ECOSYSTEMS > WETLANDS
TERRESTRIAL HYDROSPHERE > SURFACE WATER > LAKES
TERRESTRIAL HYDROSPHERE > SURFACE WATER > WETLANDS
BIOSPHERE > AQUATIC ECOSYSTEMS > LAKES
BIOSPHERE > TERRESTRIAL ECOSYSTEMS > WETLANDS > ESTUARINE WETLANDS
BIOSPHERE > TERRESTRIAL ECOSYSTEMS > WETLANDS > LACUSTRINE WETLANDS
BIOSPHERE > TERRESTRIAL ECOSYSTEMS > WETLANDS > MARINE
BIOSPHERE > TERRESTRIAL ECOSYSTEMS > WETLANDS > MARSHES
BIOSPHERE > TERRESTRIAL ECOSYSTEMS > WETLANDS > PALUSTRINE WETLANDS
BIOSPHERE > TERRESTRIAL ECOSYSTEMS > WETLANDS > PEATLANDS
BIOSPHERE > TERRESTRIAL ECOSYSTEMS > WETLANDS > RIPARIAN WETLANDS
BIOSPHERE > TERRESTRIAL ECOSYSTEMS > WETLANDS > SWAMPS
BIOSPHERE > AQUATIC ECOSYSTEMS > WETLANDS
BIOSPHERE > AQUATIC ECOSYSTEMS > WETLANDS > ESTUARINE WETLANDS
BIOSPHERE > AQUATIC ECOSYSTEMS > WETLANDS > LACUSTRINE WETLANDS
BIOSPHERE > AQUATIC ECOSYSTEMS > WETLANDS > MARINE
BIOSPHERE > AQUATIC ECOSYSTEMS > WETLANDS > MARSHES
BIOSPHERE > AQUATIC ECOSYSTEMS > WETLANDS > PALUSTRINE WETLANDS
BIOSPHERE > AQUATIC ECOSYSTEMS > WETLANDS > PEATLANDS
BIOSPHERE > AQUATIC ECOSYSTEMS > WETLANDS > RIPARIAN WETLANDS
BIOSPHERE > AQUATIC ECOSYSTEMS > WETLANDS > SWAMPS
Access Constraints: GLWD is available for non-commercial scientific, conservation, and educational purposes. Any modification of the original data by users must be noted. By submitting the download request you agree to the regulations of the data disclaimer (PDF format, 15k). See:
Use Constraints: Credit the following and fill out the short form at
Lehner, B. and P. Döll (2004): Development and validation of a global database of lakes, reservoirs and wetlands. Journal of Hydrology 296/1-4: 1-22.
Data Set Progress
Distribution Media: online, www
Distribution Size: 46 MB
Distribution Format: shapefiles
Email: bernhard.lehner at wwfus.org
Conservation Science Program
Province or State: DC
Postal Code: 20037
Email: doell at usf.uni-kassel.de
Center for Environmental Systems Research Kurt-Wolters-Strasse 3 Room 2208
Postal Code: 34109
Lehner, B. and P. Döll (2004): Development and validation of a global database of lakes, reservoirs and wetlands. Journal of Hydrology 296/1-4: 1-22.
Birkett, C.M., Mason, I.M. (1995): A new global lakes database for a remote sensing program studying climatically sensitive large lakes. Journal of Great Lakes Research 21(3): 307-318.
ICOLD (International Commission on Large Dams) (1998): World Register of Dams. 1998 book and CD-ROM, ICOLD, Paris.
Loveland, T.R., Reed, B.C., Brown, J.F., Ohlen, D.O., Zhu, J., Yang, L. and Merchant, J.W. (2000): Development of a global land cover characteristics database and IGBP DISCover from 1-km AVHRR data. International Journal of Remote Sensing 21(6/7): 1303-1330. http://edcdaac.usgs.gov/glcc/glcc.html
Vorosmarty, C.J., Sharma, K.P.,Fekete, B.M., Copeland, A.H., Holden, J., Lough, J.A. (1997): The storage and aging of continental runoff in large reservoir systems of the world, Ambio 26(4): 210-219.
WCMC (World Conservation Monitoring Centre) (1993): Digital wetlands data set. Cambridge, U.K.
Creation and Review Dates
DIF Creation Date: 2004-09-22
Last DIF Revision Date: 2004-09-23
The State of Earth’s Terrestrial Biosphere: How is it Responding to Rising Atmospheric CO2 and Warmer Temperatures?
One of the potential consequences of the historical and ongoing rise in the air’s CO2 content is global warming, which phenomenon has further been postulated to produce all sorts of other undesirable consequences. The United Nations’ Intergovernmental Panel on Climate Change, for example, contends that current levels of temperature and changing precipitation patterns (which they believe are mostly driven by the modern rise in atmospheric CO2) are beginning to stress Earth’s natural and agro-ecosystems now by reducing plant growth and development.
And looking to the future, they claim that unless drastic steps are taken to reduce the ongoing rise in the air’s CO2 content (e.g., scaling back on the use of fossil fuels that, when consumed, produce CO2), the situation will only get worse – that crops will fail, food shortages will become commonplace, and many species of plants (and the animals that depend on them for food) will be driven to extinction.
Such concerns, however, are not justified. In the ensuing report we present a meta-analysis of the peer-reviewed scientific literature, examining how the productivities of Earth’s plants have responded to the 20th and now 21st century rise in global temperature and atmospheric CO2, a rise that climate alarmists claim is unprecedented over thousands of years (temperature) to millions of years (CO2 concentration).
History---Cause---Effects on Different Parts of the World---Case Studies of South East Asia Countries
Cause of El Nino
The Southern Oscillation
This natural marvel, El Nino, could be related to a shift in air movement over the tropical Pacific Ocean. Changes in wind direction alter the circulation and temperature of the ocean, which in turn further disrupt air movement and ocean currents. This episode is the largest irregularity in the year-to-year fluctuation of the oceanic and atmospheric systems, and is probably caused by the interaction of the two. It is most likely related to the Southern Oscillation, an irregular oscillation of atmospheric mass between the Indonesian low-pressure system and the Easter Island high-pressure system, whose period is several years.
Figure (a) shows the normal conditions, and figure (b) shows the abnormal conditions during El Nino
See also the
Browse High School Functions
Stars indicate particularly interesting answers or good places to begin browsing.
Selected answers to common questions:
Composition of functions.
Domain and range.
Inverse of a function.
- All About Functions [11/06/1996]
Could you please explain functions?
- Defining 'Undefined' [09/15/2003]
If a function is 'undefined at x', does this refer only to vertical
asymptotes, or to other discontinuities as well?
- Definition of the Signum Function [05/31/2000]
Can you give me a simple definition of the signum function, and any
practical examples of its usage?
- Exp, Log, and Ln Functions Explained [7/31/1996]
What is the exp function? When is it needed? Also, how do I calculate Log
and Ln functions with basic arithmetic and logic?
- Function Machine [10/26/1996]
How do you find the domain and range of a function?
- Function Tests [02/19/1997]
What is the reasoning behind the vertical and horizontal line tests?
- Interval Notation [4/1/1996]
I need to learn about interval notation in terms of domain and ranges.
- Mapping Functions in the Real World [3/20/1995]
What is the purpose of learning to map a function? What is it used for in
the real world?
- Rational Inequality [10/09/2001]
Solve this rational inequality and give an answer in interval notation:
-5/(3h+2) greater than or equal to 5/h.
- Sometimes, Always, or Never True? [02/12/2002]
Is this statement always, sometimes, or never true: f(g(x))=g(f(x)) ?
- What Are Quadratic Functions? [02/27/2003]
What is the difference between a quadratic function and a quadratic equation?
- What is a Function? [06/14/2001]
I've read many definitions and I've asked many teachers, but I still
don't completely understand.
- Why is Zero the Limit? [02/25/2002]
Why is zero called the limit of the terms in the sequence? Why does the limit of 1 over n, as n approaches infinity, equal zero?
- x Factorial and the Gamma Function [05/29/1998]
What is x! when x is 0, negative, or not a whole number?
- 2^4 = 16 AND 4^2 = 16 [10/29/2001]
Can you think of any other pair of unequal numbers that share the same
relation as 2 and 4 in the above example? What was your strategy?
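One way to explore the question above is a brute-force search (a sketch; the search bound of 100 is arbitrary):

```python
# Search for unequal positive integers a < b with a**b == b**a.
pairs = [(a, b)
         for a in range(1, 100)
         for b in range(a + 1, 100)
         if a ** b == b ** a]
print(pairs)  # [(2, 4)] -- within this range, 2 and 4 are the only such pair
```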
- 2^x = x^2 [02/13/2002]
Find the real value without graphing.
- Absolute Value and Continuity of Functions [09/15/2004]
I know that the absolute value of a continuous function is also
continuous. Is the opposite true? That is, if the absolute value of
a function is continuous, is the function continuous?
- Algebraically Equivalent Functions [06/27/2002]
If a function can be manipulated so that it can't have a denominator
equal to zero (and thus be undefined for that value), why is the
original function still considered undefined at that value?
- Approaching Zero and Losing the Plot [11/11/2010]
Looking near the origin at plots of y = x^n for ever tinier n, a student wonders why y
= x^0 does not equal zero. By emphasizing two different limits, Doctor Ali gets the
student back into line -- specifically, y = 1.
- Are All Functions Equations? [07/16/2001]
When my x's are not continuous, would I still have a function since the
vertical line test might in fact not touch a point at all?
- Assigning Random Numbers [05/16/2000]
I am using a programming language and have a random number generator that
can generate a random number of 0, 1, or 2. How can I assign those three
values to 4, 12, and 14?
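A lookup table is one common approach to the question above (sketched here in Python, since the asker's language is not named):

```python
import random

values = [4, 12, 14]   # index 0, 1, 2 -> 4, 12, 14

def random_value():
    # random.randint(0, 2) stands in for a generator returning 0, 1, or 2
    return values[random.randint(0, 2)]

print(random_value())  # one of 4, 12, or 14
```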
- Asymptote of a Function [06/02/2002]
Determine the value of A so that y = (Ax+5)/(3-6x) has a horizontal
asymptote at y = -2/3.
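For the asymptote question just above: as |x| grows, y = (Ax+5)/(3-6x) approaches A/(-6), so a horizontal asymptote at y = -2/3 requires A = 4. A quick numeric check (a sketch, not Dr. Math's published answer):

```python
def y(x, A=4):
    return (A * x + 5) / (3 - 6 * x)

# For large |x| the ratio approaches A / (-6) = -2/3
print(y(1e9))   # approximately -0.6667
print(-2 / 3)
```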
- Big O Notation and Polynomials [04/12/2001]
Given the function f(x) = (x^3 - (4x^2) + 12)/(x^2 + 2), how can I find a polynomial function g(x) such that f(x) = O(g(x)) and g(x) = O(f(x))?
- Big O, Omega, and Sigma [09/19/2001]
I cannot understand how something can be both Big O and Omega (aka Big
Theta). A general explanation of O/Omega/Theta would be helpful.
- Brackets or Parentheses? [01/07/1997]
When using interval notation to describe when a function is increasing
and decreasing, how do I know whether to use brackets or parentheses?
- Calculus of Piecewise Functions [06/07/2003]
Can I take the integral or derivative of a piecewise function like the
floor function [u] or the absolute value function |u| and still notate
it in concise form, |U| or [U]?
- Can f'(-1) Equal Zero and f''(-1) Not Equal Zero? [03/23/2004]
Is it possible to have a derivative of zero and then have a double
derivative that is not zero at that same x value? How?
- Cases Where the Newton-Raphson Method Fails [06/30/2005]
Why does the Newton-Raphson method work for some functions but not for others?
- Catenary Curve [03/30/1999]
Find the vertex of a catenary curve.
- Chaotic Functions [10/30/2000]
Can you give some mathematical examples of chaos theory?
- Circular Functions [01/27/2001]
How do you define circular functions? Can you give me an example?
- Closed Form Solutions [09/16/1997]
What is the exact mathematical definition of a closed form solution?
- Coconuts, Forwards and Backwards [02/02/2010]
Doctor Greenie answers a chestnut about repeated division and
remainders, first working the question forwards before using the
inverse of a function to solve the same problem backwards much more quickly.
- Composing Functions [12/02/1998]
I'm trying to find f-of-g where f(x) = 2x and g(x) = 3x^2 + 1. What
happens when you compose two functions?
- Composite Functions [4/5/1996]
1) fog(x) = 7x + 3; gof(x) = 7x - 3; f(0) = 1; g(0) = .....
- Composite Functions [01/11/1998]
My students can't understand composite functions.
- Composite Functions Using Logarithms [3/10/1996]
Suppose f and g are functions defined by f(x) = x+2 and
g(x) = x. Find all x > -2 for which:
3^[g(x)*logbase3 f(x)] = f(x).
- Composition Functions with Added x Value [05/13/2001]
If x = 1, evaluate g(f(f(x))). I'm confused with this added value of x = 1.
- Composition of Functions [07/23/1999]
How do I find f(g(x)) if f(x) = x+2 and g(x) = 3x-1?
- Connecting the Dots [02/02/1998]
How do you know whether or not to connect the dots when graphing a real-
Hi. can someone please get me started on answering this question.
Elizabeth started with 30 tubs of yeast. At the end of each day, she used a small quantity of yeast to make two additional tubs of yeast, from each tub of yeast that she had. She then sold 50 tubs of yeast.This routine was repeated each day.
A: give the difference equation for generating the number of tubs of yeast at the start of each day.
B: list the number of tubs for the first four days.
C: find the number of tubs of yeast needed at the very start of this procedure so that the number of tubs at the start of each day remained constant.
Thanks in advance
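One reading of the problem is that each existing tub spawns two more (tripling the count) before 50 are sold, giving T(n+1) = 3·T(n) - 50. A sketch under that assumption:

```python
def next_day(tubs):
    # triple (each tub spawns two more), then sell 50
    return 3 * tubs - 50

tubs = 30
history = [tubs]
for _ in range(3):
    tubs = next_day(tubs)
    history.append(tubs)
print(history)        # [30, 40, 70, 160] -- tub counts for the first four days

# Part C: the count stays constant when T = 3T - 50, i.e. T = 25
print(next_day(25))   # 25
```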
The Sun - in the prime of its life.
Source: SOHO (ESA & NASA)
Average Distance from the Earth: 149.6 million km (1.0 AU)
Size (Equatorial Diameter): 1 392 000 km (109 x that of Earth)
Mass: 1.989 x 10^30 kg (333 000 x that of Earth)
Rotation Period: 27.28 days (Synodic Period), 25.38 days (Sidereal Period)
Temperature: 5 500°C (surface), 15 000 000°C (core)
Surface Gravity: 273 m/s^2 (27.9 x that of Earth)
The Sun is the star at the centre of our Solar System. It is the largest object in the Solar System, containing 99.86% of its total mass. Due to its great mass, the Sun's gravity dominates the Solar System and holds all of the planets in orbit. The energy it produces through nuclear fusion provides the light and heat for our family of planets.
The Sun is located in an outer spiral arm of the Milky Way Galaxy, approximately 28,000 light-years from the Galaxy's centre. The Sun goes around the Galaxy once every 220 million years. This length of time is known as a Cosmic Year. The Sun's path around the Galaxy is not flat, but goes up and down like a merry-go-round, passing through the plane of the Galaxy every 30 million years.
The Sun is an average yellow, main sequence star of spectral type G2 V. It is middle-aged, at least 4.6 billion years old with an expected lifespan of 10 - 12 billion years.
The Sun is a massive ball of very hot gasses held together by gravity. It is composed mostly of hydrogen (70% by mass) and helium (28%), as well as small amounts of other elements such as oxygen, nitrogen and carbon. It has no solid surface but its atmosphere has several layers.
The Inner Core has only 1.5% of the Sun's volume but half of its mass, and it is here that the nuclear fusion reactions which power the Sun occur. Deep within the Sun hydrogen atoms are fused to form helium, a process which releases incredible amounts of energy.
The energy produced in the inner core must travel out through the Sun's interior. During this journey, which can take millions of years, the energy is transformed into the light and heat that is essential for life on Earth.
The heat and light finally escape from a thin shell called the Photosphere. It is this region of the Sun we can observe and it is sometimes referred to as the surface. The photosphere has a granulated appearance arising from convection currents, like boiling water in a saucepan. There are dark, cooler patches on the photosphere called sunspots, and sometimes large eruptions, called solar flares, send energy and material out into space. Both are associated with the Sun's magnetic field.
Surrounding the photosphere are two more regions. The first is a thin shell called the chromosphere. The other is the outermost part of the Sun's atmosphere called the corona which extends far out into space and is very hot. Both of these regions were first seen during total solar eclipses.
Follow instructions to fold sheets of A4 paper into pentagons and assemble them to form a dodecahedron. Calculate the error in the angle of the not perfectly regular pentagons you make.
Her walk needs to start and end at the same place, and she needs
to be able to see every part of the planet's surface at some stage
during her walk. Investigate the possible paths she could take. The
challenge is to find the shortest path you can!
One way of investigating and recording this could be to create a
net of a dodecahedron, and draw the path on the net, being careful
to consider which faces will join when the net is folded up.
Here are two nets you could use, but you may find it easier to
visualise an efficient path using a different one.
Hydrogen has several isotopes and one of them, deuterium, exists quite naturally in water to form D2O. In previous experiments and several papers by Gilbert Lewis, it has been found that life is hindered in the presence of D2O. While this may be true, my PI Steve Koch wondered if life had found a use for it, because naturally occurring water has about a 17mM (millimolar) concentration of deuterium.
To put that number into perspective, when I do a typical polymerase chain reaction of DNA I add 10mM of each base of DNA (which is less than the amount of naturally occurring deuterium) to create millions of copies of a DNA template from an amount that is 1000x less than what the reaction yields. In fact, most chemicals in most of my buffers are on the order of the amount of naturally occurring deuterium.
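The ~17 mM figure can be sanity-checked from first principles. A minimal sketch; the 55.5 mol/L molarity of water and the ~156 ppm D/H natural abundance are assumed standard values, not from the post:

```python
# Back-of-the-envelope check on the ~17 mM figure for natural deuterium in water.
# Assumed inputs: pure water is ~55.5 mol/L, each molecule has 2 hydrogens,
# and the natural D/H abundance is ~156 ppm (VSMOW standard).

WATER_MOLARITY = 55.5      # mol/L
H_PER_MOLECULE = 2
D_FRACTION = 156e-6        # deuterium atoms per hydrogen atom

d_molarity_mM = WATER_MOLARITY * H_PER_MOLECULE * D_FRACTION * 1000
print(f"~{d_molarity_mM:.1f} mM deuterium")  # ~17.3 mM
```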
So you can see it isn’t a stretch to think that nature has found a use for D2O, since it is quite abundant and life has been constantly evolving for billions of years. I want to test this hypothesis in a variety of different organisms:
- Tobacco Seeds – to act as a foil to Lewis’ experiments, in which he grew tobacco seeds in pure D2O.
- Mustard Seeds – from what I’m told mustard seeds are the powerhouse of the botanical genetics world much like Drosophila and S. cerevisiae are in their respective genetic fields.
- Escherichia coli – another molecular biological powerhouse that is very easy to grow and may be easy to see results with. We just got the facilities to be able to grow E. coli and damn it I want to use them!
- Saccharomyces cerevisiae (Yeast) – I know a guy who grows yeast for his experiments and I’m sure it wouldn’t be a stretch to get him to do so in deuterium-depleted water.
So the idea would be to try to grow these in regular water and in deuterium-depleted water (no D2O), and in the case of E. coli and yeast, perhaps in pure D2O, because I don’t think those experiments have been carried out yet. Hopefully I will be able to conclusively state whether or not life has developed a need/use for D2O, which would be a very interesting discovery indeed!
Hydrolysis, chemical decomposition of a substance by water. The hydrogen and oxygen atoms of water combine with the atoms or groups of atoms of the hydrolyzed substance to form new compounds. Hydrolysis is speeded by heat and pressure or by mixing an acid or base with the water. Hydrolysis is important in the manufacture of many substances. Corn syrup is produced by acid hydrolysis of corn starch. Soap is made by alkaline hydrolysis of fats. Other substances made by hydrolysis are fatty acids, alcohols, and glucose.
Daniel Song, a doctoral student at the University of Pennsylvania, writes from Hovsgol National Park, Mongolia, where he is studying how plants and pollinators form interaction networks.
Tuesday, June 21
Next time you grab a snack or sit down for a meal, take a minute to think about what you’re eating; chances are plants and insect pollinators were involved. Tomatoes, almonds, apples and coffee are just a few examples of the hundreds of foods consumed daily by people around the world that are insect-pollinated. How do pollinators behave in natural habitats? What goes into the decision to pollinate a certain flowering species? What is it about the flowers that attract pollinators? Especially in light of colony collapse disorder, it is ever more important that we study how natural plant communities maintain their pollination services.
Our field site in the Dalbay Valley is interesting in that it has, in close proximity, two drastically different areas: the valley floor and the upper slope. The two areas differ in almost every way: plants, soil moisture, air temperature and grazing pressure, to name just a few. Using this natural divide, I can compare and contrast pollination activity in two ecologically distinct areas. As for the pollinators, there is a diverse collection of insect pollinators buzzing around: butterflies, moths, hoverflies and bumblebees, among others.
In this beautiful backdrop, I spend my summers in northern Mongolia studying floral visual cues and pollinators. My dissertation work is divided into two parts: measuring floral and pollinator traits and monitoring pollinator visitation to flowers. The traits I am looking to measure are ones that are relevant in the act of pollinating. Take, for example, two traits I am measuring: the depth of the flower (corolla tube depth) and bumblebee tongue (proboscis) length. One reason for pollinators to visit flowers is to extract energy in the form of sugary, caloric nectar. The nectar typically sits at the base of the flower, and to reach it the bumblebee has to unfurl its tongue to taste the flower’s sweet reward. If the depth of the flower is longer than the tongue of the bumblebee, it’s unlikely that the bumblebee would visit that flower to get nectar. Corolla tube depth can, in an overly simplistic case, explain why certain bumblebees visit, or do not visit, certain flowers.
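The corolla-depth versus proboscis-length matching rule can be sketched as a toy predicate; the species labels and lengths below are invented purely for illustration:

```python
# Toy trait-matching sketch: a bee is predicted to visit a flower only if
# its tongue can reach the nectar. Names and lengths (mm) are invented.
bees = {"short-tongued bee": 6.0, "long-tongued bee": 12.0}
flowers = {"shallow flower": 4.0, "deep flower": 10.0}

def can_reach(proboscis_mm, corolla_depth_mm):
    return proboscis_mm >= corolla_depth_mm

for bee, tongue in bees.items():
    visits = [f for f, depth in flowers.items() if can_reach(tongue, depth)]
    print(f"{bee} -> {visits}")
```

In this caricature only the long-tongued bee reaches the deep flower, which is the simple pattern the field observations test against.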
What connects the floral traits and the pollinator traits to each other is the monitoring of pollinator visitation to the flowers. The observations are painstaking and tedious but provide the key to the lock. I set up several four-square-meter plots upslope and on the valley floor to monitor pollinator visitation and flower production daily. Recording pollinator visitation to the flowers allows us to link their respective traits. This allows us to see if any repeating patterns emerge, as in the example with the corolla depth and proboscis: longer-tongued bees exclusively visit deeper flower tubes.
I will spend a full 11 summer weeks at the field site to capture the beginning and end of pollination activity as well as flower production. As the climate changes, plant and animal communities may respond in unpredictable ways. Natural pollination services (involving both the flower and pollinator) need to be studied now to anticipate how one of our most precious natural commodities will be affected. | <urn:uuid:affcbc56-d352-4351-b046-43eaf7fb7aae> | 3.90625 | 699 | Personal Blog | Science & Tech. | 34.42493 |
The Cherenkov Effect is caused by high-energy beta particles moving at velocities faster than the speed of light in water.
The effect causes the blue glow in nuclear reactors (right).
This process is similar to the sonic boom heard when an airplane exceeds the speed of sound.
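The Cherenkov threshold itself is easy to compute: a charged particle radiates only when its speed exceeds c/n. A short textbook calculation (standard constants, not taken from this page) for an electron in water:

```python
# Cherenkov threshold for an electron in water: v must exceed c/n.
N_WATER = 1.33          # refractive index of water
ELECTRON_MC2 = 0.511    # electron rest energy, MeV

beta_threshold = 1.0 / N_WATER                        # v/c at threshold
gamma = 1.0 / (1.0 - beta_threshold ** 2) ** 0.5      # Lorentz factor
ke_threshold = (gamma - 1.0) * ELECTRON_MC2           # kinetic energy, MeV
print(f"threshold kinetic energy: {ke_threshold:.2f} MeV")  # 0.26 MeV
```

Beta particles above roughly a quarter of an MeV therefore glow in water, which is why reactor pools shine blue.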
Pioneer 2 re-entered Earth's atmosphere over Northwest Africa.
This was the last of the U.S. Air Force Project Able Probes.
This was America's third attempt to reach the Moon. | <urn:uuid:fe49bb92-57d4-47d8-9285-50bca8e1af5e> | 2.84375 | 105 | Knowledge Article | Science & Tech. | 70.180347 |
The exact causes of the long- and short-term Paleogene warm episodes remain enigmatic. Several pieces of geochemical evidence, including changes in the mean ocean δ13C and alkalinity, point toward greenhouse forcing (Shackleton, 1986; Kennett and Stott, 1991; Zachos et al., 1993; Thomas and Shackleton, 1996). Samples recovered during Leg 198 will help constrain the nature and causes of these warm episodes.

LPTM
In terms of the rate and degree of warming, the LPTM is unprecedented in Earth history (Fig. F6). The deep-sea and high-latitude oceans warmed by 4°C and 8°C, respectively. The carbon isotopic composition of the ocean decreased by 3–4‰ coeval with the warming event, suggesting a massive perturbation to the global carbon cycle (Fig. F7) (Kennett and Stott, 1991; Bains et al., 1999). The large magnitude and rate (~3–4‰ in ~5 k.y.) of the carbon isotope excursion (CIE) is consistent with the sudden injection of a large volume of methane from clathrates stored in continental slope sediments (Dickens et al., 1995, 1997). Much of this methane would have quickly converted to CO2, stripping O2 from deep waters, contributing to the major extinction event of benthic foraminifers (Thomas, 1990), and lowering alkalinity. The result should be a sharp rise in the level of the lysocline and CCD (Dickens, 2000). Both CO2 and CH4 would also have immediately contributed to greenhouse warming.
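The clathrate argument can be illustrated with a textbook two-endmember isotope mass balance. The numbers below (exchangeable carbon mass, endmember δ13C values) are round assumed values for illustration, not Leg 198 results:

```python
# Two-endmember mixing: (M0*d0 + Ma*da) / (M0 + Ma) = d_final, solved for Ma.
M_OCEAN = 38000.0    # GtC of exchangeable carbon (assumed round number)
D_INITIAL = 0.0      # permil, pre-excursion mean delta13C (assumed)
D_FINAL = -3.0       # permil, after a ~3 permil excursion
D_METHANE = -60.0    # permil, typical biogenic methane

m_added = M_OCEAN * (D_INITIAL - D_FINAL) / (D_FINAL - D_METHANE)
print(f"~{m_added:.0f} GtC of methane-derived carbon")  # ~2000 GtC
```

Because methane carbon is so strongly depleted in 13C, a comparatively modest mass of clathrate-derived carbon suffices to shift the whole ocean's isotopic composition by a few permil.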
The Leg 198 depth transect will help us determine (1) the magnitude of the tropical Pacific sea-surface and deep-water temperature increase during the LPTM; (2) whether or not the Pacific lysocline and CCD shoaled during the CIE, whether or not bottom-water oxygenation decreased, and how these changes fit with geochemical models of clathrate release; (3) the response of planktonic and benthic populations to the LPTM in the subtropical Pacific; and (4) whether or not there is a change in the distribution of bottom-water carbon isotopes prior to and/or during the LPTM signaling possible circulation changes.

Paleogene Deep-Water Circulation
Several investigators have suggested that early Cenozoic global warming would have altered deep-ocean circulation patterns by reducing the density of surface waters in high latitudes (Kennett and Shackleton, 1976; Wright and Miller, 1993; Zachos et al., 1993). This, in turn, would permit increased downwelling of highly saline but warmer waters in subtropical oceans. Such reversals or switches in circulation probably occurred suddenly rather than gradually. In fact, it has been suggested that a sudden change in intermediate-water circulation patterns may have occurred just prior to the LPTM, possibly triggering the dissociation of clathrates (Bralower et al., 1997a). There may have been additional, abrupt warming intervals in the late Paleocene and early Eocene (Thomas and Zachos, 1999; Thomas et al., 2000). These "hyperthermals" were characterized by changes in the assemblage composition of benthic foraminifers corresponding to negative shifts in planktonic and benthic foraminiferal δ18O and δ13C values. The ultimate cause of the hyperthermals may be similar to the LPTM, driven by the release of greenhouse gas.
Leg 198 samples will be used to assess regional and global circulation changes during the Paleogene. Major changes in the sources of waters bathing Shatsky Rise might be reflected in the spatial and vertical distribution of carbon isotope ratios in bottom waters as well as in benthic foraminiferal assemblage patterns. Several studies have shown that throughout the late Paleocene and early Eocene, the most negative deep-ocean carbon isotope values were consistently recorded by benthic foraminifers from Shatsky Rise (Miller et al., 1987b; Pak and Miller, 1992; Corfield et al., 1992). Such a pattern is similar to that in the modern ocean, implying older, nutrient-enriched waters in the Pacific, and younger, nutrient-depleted waters in the high latitudes. Although Site 577 is discontinuous across the Paleocene/Eocene boundary, isotope data from Site 865 on Allison Guyot in the equatorial Pacific suggest a possible reduction, if not reversal, in the δ13C gradient between the shallow Pacific and the rest of the ocean (Bralower et al., 1995). If true, this would be consistent with increased production of intermediate waters in low latitudes. In summary, Leg 198 samples will help address whether there is evidence of warmer, more saline deep waters at times during the Paleogene and how export production in the Pacific changed from the Paleocene to the Eocene.

Eocene–Oligocene Paleoceanography
The Eocene–Oligocene represents the final transition from a "greenhouse" to an "icehouse" world. Although this transition occurred over a period of 18 m.y., stable isotopic records reveal that much of the cooling occurred over relatively brief intervals in the late early Eocene (~50–51 Ma) and earliest Oligocene (~33 Ma) (Fig. F6) (e.g., Kennett, 1977; Miller et al., 1987a; Stott et al., 1990; Miller et al., 1991; Zachos et al., 1996). Furthermore, small, ephemeral ice sheets were probably present on Antarctica sometime after the first event (Browning et al., 1996). The first large permanent ice sheets became established much later, most likely during the early Oligocene event (Zachos et al., 1992a). Current reconstructions of ocean temperature and chemistry for the Eocene and Oligocene, however, are based primarily on pelagic sediments collected in the Atlantic and Indian Oceans (Miller et al., 1987a; Zachos et al., 1992b, 1996). Very few sections suitable for such work have been recovered from the Pacific (Miller and Thomas, 1985; Miller and Fairbanks, 1985). As a consequence, we still lack a robust understanding of how global ocean chemistry and circulation evolved in response to high-latitude cooling and glaciation.
Leg 198 sections across the Eocene–Oligocene transition will provide a vertical depth transect of ocean chemistry and temperature changes during this important climatic transition. These sections will allow us to determine whether the basin-to-basin deep carbon isotope gradient changed during the Eocene–Oligocene transition in response to high-latitude cooling and glaciation, and how the lysocline/CCD in the Pacific responded to the rapid high-latitude cooling/glaciations.
This monograph was reprinted by the American Crystallographic Association, successor to the American Society for X-Ray and Electron Diffraction, May, 1966.
This Monograph has two aspects. On the one hand, it is presented as a contribution to the study of structure factors — or Fourier transforms — of atomic groupings which occur frequently in a wide variety of crystals, both organic and inorganic. Thus special attention is given to such cases as tetrahedral, octahedral and hexagonal arrays of like atoms. A section on the structure factors of small crystals is also included.
On the other hand, it is presented as a contribution to the x-ray analysis of megamolecular crystals as yet studied by few but destined, it would seem, to play an important role in the crystallography of the future. Megamolecular crystals confront crystallography with a new problem, since the structure of the molecules and indeed, to some extent, even the composition of the molecules is unknown. It is the belief of the Author that a systematic study of what may be called the language of structure factors is a necessary preliminary to the interpretation of the intensity maps of crystals made up of megamolecules of unknown structure. In the sequel the structure factors of distributions of different structural types are recorded. Such mathematical facts provide material for the study of the relationship between distributions and their structure factors — the fundamental theme throughout this Monograph.
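The monograph's central object can be illustrated numerically. Below is a minimal sketch (not taken from the monograph itself) of the structure-factor sum F(hkl) = Σj fj exp[2πi(hxj + kyj + lzj)] for a toy tetrahedral array of identical atoms, one of the groupings the text singles out:

```python
import cmath

# Toy structure-factor sum for four identical atoms (f_j = 1) placed at
# fractional coordinates forming a regular tetrahedron in the unit cell.
TETRAHEDRON = [(0, 0, 0), (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]

def structure_factor(h, k, l, positions=TETRAHEDRON):
    return sum(cmath.exp(2j * cmath.pi * (h * x + k * y + l * z))
               for x, y, z in positions)

print(f"|F(111)| = {abs(structure_factor(1, 1, 1)):.3f}")  # 4.000: in phase
print(f"|F(100)| = {abs(structure_factor(1, 0, 0)):.3f}")  # 0.000: cancels
```

The pattern of strong and extinguished reflections is exactly the "language of structure factors" the preface describes.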
The Marine Biological Laboratory
Woods Hole, Mass., August 1, 1945
Appendix. Fourier Series and Fourier's Integral Theorem | <urn:uuid:5435ddbe-02e0-4168-9373-7130d1226fe9> | 3.03125 | 321 | Academic Writing | Science & Tech. | 25.263875 |
Simple Sequence Repeat (SSR) - a small segment of DNA, usually 2 to 5 bp in length, that repeats itself a number of times. Useful SSRs usually repeat the core motif 9-30 times. Some of the major core motifs that we use in the development of SSR markers for soybean include ATT, AT, CTT, and CT.
Polymerase Chain Reaction (PCR) - an in vitro method for producing the large amounts of a specific fragment of DNA necessary for analysis. Basically, a reaction that Xeroxes DNA, making millions of exact copies of the same fragment. Step over here for more information about PCR.
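The exponential copying can be sketched in a few lines; the per-cycle efficiency parameter is an illustrative assumption (real reactions are less than perfectly efficient and eventually plateau):

```python
# Idealized PCR arithmetic: each cycle multiplies the target by
# (1 + efficiency); perfect doubling means efficiency = 1.
def pcr_copies(template_copies, cycles, efficiency=1.0):
    return template_copies * (1 + efficiency) ** cycles

print(f"{pcr_copies(100, 30):.3e}")       # 100 * 2**30 ~ 1.074e+11 copies
print(f"{pcr_copies(100, 30, 0.9):.3e}")  # with 90% per-cycle efficiency
```

Thirty cycles turn a hundred template molecules into on the order of a hundred billion copies, which is why so little starting material is needed.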
SSR Marker Development
The first step in the process of creating a useful Simple Sequence Repeat (SSR) DNA marker is the construction of a DNA library in which small pieces of soybean DNA are inserted into a cloning vector. We use the older and quite well-known soybean cultivar ‘Williams’ as the source of soybean DNA for our DNA libraries. The cloning vector we use is a plasmid vector, pBluescript +KS. Each individual plasmid, containing a different piece of soybean DNA, is then "transformed" or inserted into E. coli cells (look here for a graphical explanation of this). The plasmid vector with the inserted soybean DNA multiplies many times within the E. coli cell. The resulting collection of E. coli cells, each containing a plasmid with a different piece of soybean DNA, is referred to as a plasmid library. Once the library is constructed, it is screened for plasmids that contain soybean DNA with a desired SSR motif such as (ATT)n, (AT)n, (CT)n, (CTT)n, etc. Plasmid clones that are determined to contain the desired motif are then isolated so that the DNA sequence of the entire soybean insert can be determined. DNA sequence determination is performed on a Perkin-Elmer ABI 377 Automated DNA Sequencer. The raw sequence data from each plasmid insert is end-trimmed and analyzed using Perkin-Elmer ABI Auto Assembler software. The determination of DNA sequence is important for two reasons 1) it verifies the presence of the SSR in the soybean insert and 2) it provides the exact DNA sequence on either side of the SSR, which is necessary to construct primers. The DNA sequences from each new SSR-containing soybean insert are checked against each other and against all previously sequenced clones to eliminate duplicates. Clones that are unique i.e., that we have not previously identified and that possess an SSR of sufficient length, are advanced to the next step of the SSR marker development process. This is the selection of PCR primers to the regions flanking the SSR.
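The motif-screening step can be illustrated with a short sketch. This is not the lab's actual screening software, just a regex scan for perfect tandem repeats of the core motifs listed above:

```python
import re

# Illustrative sketch: scan a sequence for perfect tandem repeats of the
# soybean SSR core motifs, requiring at least 9 copies (a "useful" SSR).
MOTIFS = ["ATT", "AT", "CTT", "CT"]

def find_ssrs(seq, min_repeats=9):
    """Return (motif, start, repeat_count) for each perfect SSR found."""
    hits = []
    for motif in MOTIFS:
        pattern = f"(?:{motif}){{{min_repeats},}}"   # e.g. (?:ATT){9,}
        for m in re.finditer(pattern, seq):
            hits.append((motif, m.start(), len(m.group()) // len(motif)))
    return hits

demo = "GCGC" + "ATT" * 12 + "TTGACC"
print(find_ssrs(demo))  # [('ATT', 4, 12)]
```

Real screening must additionally handle imperfect repeats and compound motifs, but the core idea is this kind of pattern scan over the insert sequence.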
We use the primer selection software Oligo 5.0 to identify optimal PCR primers, which generally are 15-31 DNA bases in length and produce products ranging in length from 90 to 300 basepairs (bp), depending on the length of the included SSR. Primers are also selected and optimized for a 47°C annealing temperature, which is the standard annealing temperature for our PCR reactions with soybean SSR markers. Head this way to see our PCR protocols. The PCR primers are then synthesized by a local DNA synthesis firm.
Once the primers are synthesized, a number of additional tests are required before they can be utilized to produce a useful soybean SSR marker. To test their effectiveness, the primers are used in PCR amplifications both of their original plasmid, from which the DNA sequence was determined, and of Williams soybean DNA. The PCR reactions are 32P-radiolabeled and then separated on denaturing polyacrylamide DNA sequencing gels. Primers that perform well, by producing a single clean product with both the plasmid and Williams soybean DNA, are advanced for further testing. This second test involves a broader array of soybean DNA that includes 12 genotypes: the diverse soybean cultivars Clark, Harosoy, Jackson, Williams, Archer, Amsoy, Fiskeby, Minsoy, Noir 1, Tokyo, the experimental line A81-356022, and G. soja (wild soybean) PI 468.916. Primers that produce discrete single products that vary in size among the 12 soybean genotypes, i.e., are polymorphic, are considered useful markers and are assigned a name designated with an S followed by the core motif and then the primer number. For example, the 586th soybean SSR primer with an ATT motif would be named Satt586, as shown below. In the past, primer sets that were determined to be polymorphic in one or more mapping populations [USDA/Iowa State (A81-356022 x G. soja PI 468.916), the University of Utah (Minsoy x Noir 1), and/or the University of Nebraska (Clark x Harosoy)] were mapped in one or more of these populations by collaborator K. Gordon Lark, Department of Biology, University of Utah, or Alex Kahler, Biogenetic Services, Inc., Brookings, SD. Currently, new markers are being positioned on the University of Utah Minsoy x Noir 1 map by K. Gordon Lark and associates.
This image shows an autoradiograph of plasmid clone 339A2, with discrete PCR products that are quite polymorphic among the 12 genotypes. As shown, each polymorphism results from a differing ATT SSR repeat length. This locus was designated Satt586.
Another method of determining SSR length polymorphism is to use fluorescent tags on the upper, or forward, primer. This tag can then be detected when the sample is run on an automated sequencer, as shown below.
Global Warming in a Nutshell
Occasionally it's good to step back from the details of global warming science and offer non-technical visitors a "Global Warming 101" perspective, sort of like The Big Picture, but starting from the very beginning and touching on many aspects of this broad topic. This article was revised and re-posted from Larry's website. The figures supplement the main text with key data, but they are mostly independent and reading the figures is not necessary for understanding the text, and vice versa.
The Greenhouse Effect
The Earth is a giant rock, hurtling through space in its orbit around the sun. It would be a frozen lifeless rock like the moon if not for the thin layer of atmosphere that traps solar energy and insulates the Earth's surface, like a transparent blanket.
The way the atmosphere traps solar energy is called (somewhat inaccurately) the Greenhouse Effect, because the effect is similar to a greenhouse or a closed car heating up in the sun. Sunlight comes in through a transparent window and is absorbed by whatever it hits, heating up the interior. Some of that heat is trapped inside, partly because glass is less transparent to heat than it is to light, and the temperature increases.
In the atmosphere, sunlight is absorbed by the Earth's surface or rooftops or whatever, and that energy is radiated as heat (infrared energy) back toward space. Most of that heat doesn't make it to space, because it gets absorbed by certain gases in the atmosphere, mainly water vapor, carbon dioxide, and methane.
Normally this is a good thing, because without the heat trapped in the atmosphere by "greenhouse gases", our planet would be frozen. But it turns out that too much of a good thing is a bad thing. If extra carbon dioxide that is not part of the natural carbon cycle is added to the atmosphere, then extra heat is trapped that would otherwise escape to space, and the atmosphere gets warmer.
So in a nutshell, Global Warming is an increase in the Earth's overall average temperature caused by adding extra carbon dioxide and other greenhouse gases to the atmosphere that absorb and trap heat.
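The "transparent blanket" argument can be quantified with the textbook zero-dimensional energy balance: without greenhouse absorption, Earth's surface would sit near the bare effective radiating temperature. Standard constants are used below:

```python
# Textbook zero-dimensional energy balance: absorbed sunlight equals heat
# radiated away at sigma * T**4.
SOLAR_CONSTANT = 1361.0   # W/m^2 at Earth's orbit
ALBEDO = 0.30             # fraction of sunlight reflected back to space
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/m^2/K^4

absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4      # averaged over the sphere
t_effective = (absorbed / SIGMA) ** 0.25          # radiating temperature, K
print(f"effective temperature: ~{t_effective:.0f} K")       # ~255 K (~ -19 C)
print(f"greenhouse warming:    ~{288 - t_effective:.0f} K")  # surface is ~288 K
```

The roughly 33 K gap between the calculated radiating temperature and the observed surface temperature is the natural greenhouse effect the text describes; adding greenhouse gases widens it.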
Anand Jagota's Group Web Site
Deposition of CNTs on Surfaces
We are developing techniques for controlled deposition of carbon nanotubes from an aqueous suspension onto a substrate. We typically work with DNA-CNT dispersions and deposit onto silicon wafers coated with an organic self-assembled monolayer. We find that two distinct phenomena can occur. Nanotubes appear to deposit by hopping over an electrostatic potential barrier. Under some conditions, they deposit randomly and are aligned by a passing meniscus; under other conditions, they appear to deposit already aligned, and we have hypothesized that this is due to the formation of a liquid crystal sheet.
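The activated-hopping picture implies an exponential dependence of the deposition rate on barrier height. A toy Arrhenius/Kramers-style sketch follows; the barrier heights and attempt rate are illustrative assumptions, not values fitted to our data:

```python
import math

# Toy activated-hopping rate: deposition falls off exponentially with the
# electrostatic barrier height, measured here in units of kT.
def deposition_rate(barrier_in_kT, attempt_rate_hz=1.0e3):
    """Relative rate of hopping over a barrier of the given height."""
    return attempt_rate_hz * math.exp(-barrier_in_kT)

for barrier in (5, 10, 15):
    print(f"barrier {barrier:2d} kT -> rate {deposition_rate(barrier):.2e} Hz")
```

Each additional 5 kT of barrier suppresses the rate by more than two orders of magnitude, which is why modest changes in surface charge can switch deposition on or off.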
Drop of DNA-CNT dispersion on a hydrophobic silicon substrate.
Deposition of DNA-CNT onto a hydrophobic surface followed by re-alignment by a meniscus.
Interaction potential between DNA-CNT and a surface: deposition is modeled as activated hopping over the electrostatic potential barrier.
(Constantine Khripin and Ming Zheng)
The figure below shows how we view the other possibility, that DNA-CNT rods form an elastic liquid crystal sheet.
What’s this? It’s the phase plane plot of a differential equation that describes the behavior of a sheet when matter can flow in and out of it to minimize free energy. It arises out of our work to understand how sheets of carbon nanotubes in solution might behave if subjected to external constraints and moments.
( Constantine Khripin & Tian Tang)
“Deformation of a liquid crystal with coupling between elasticity and concentration”, Constantine Khripin, Anand Jagota, and Tian Tang, Journal of Physical Chemistry C, accepted (2007).
610 758 4396 | <urn:uuid:2560344d-fe5d-4f3a-8258-07f2c02e0e6e> | 2.734375 | 374 | Academic Writing | Science & Tech. | 33.593874 |
Will we turn the ocean into an acid lake?
As everybody knows (or should know!), we are now putting significant amounts of CO2 into the atmosphere, though our emissions take place within a much more complex natural carbon cycle, which includes exchanges between the atmosphere and the ocean.
The CO2 exchanges between the atmosphere and the surface ocean (roughly 90 billion tonnes of carbon each way per year on the above graph) do not owe much to the existence of marine life: if we removed all fish and all whales, it would certainly cause some inconveniences, but not a slowdown of the CO2 exchanges between the atmosphere and the surface ocean! These exchanges are essentially a consequence of large-scale marine currents, which cool water masses in some places (like the water of the Gulf Stream shifting toward the North Pole) and heat the surface ocean in other places (for example the water of the Labrador Current shifting away from the North Pole).
Indeed, CO2 dissolves better in cold water than in warm water (if one wonders why, the best answer is probably "because that's how it is"!), so where water is cooling, CO2 passes from the atmosphere into the ocean, and where water is warming, CO2 passes from the water into the air. This explains why, in a globally warming climate, the oceanic sink will tend to weaken: absorption of CO2 will be lower because the cooling water masses remain a little warmer than before (on average), while emissions from the warming water, which reaches slightly higher temperatures, will be larger.
But once absorbed by seawater, this CO2 will not remain as such for the most part, any more than the CO2 "absorbed" by continental ecosystems remains in the form of CO2. Once dissolved in seawater, part of the CO2 reacts with water to form hydrogen carbonate ions (once named bicarbonate ions), HCO3-, and then carbonate ions, CO3--.
The successive reactions are thus as follows:

CO2 + H2O → H2CO3
H2CO3 → H+ + HCO3-
HCO3- → H+ + CO3--
Actually, like any chemical reaction, these can happen either way (it depends on the initial conditions, and for given initial conditions one direction is of course favored). One should then rather write:

CO2 + H2O ⇌ H2CO3 ⇌ H+ + HCO3- ⇌ 2 H+ + CO3--
Well, old memories from the chemistry class might allow the reader to remember that a compound that "produces" H+ ions is named... an acid. What the above reactions mean is that the dissolution of CO2 in seawater makes this water more acidic. This property explains why, in old times, CO2 was called "carbonic acid" (it is, for example, the expression used by Arrhenius in his premonitory article on how the industrial era would bring a global climate change).
What the chemical reactions above don't say, however, is that if we have more CO2 in the air, we will get more in the ocean. We are not dealing with chemistry here, but with thermodynamics (ouch! more and more barbarian expressions!). In other words, if there is more CO2 in the air, it "penetrates" the underlying water in larger quantities.
From all that precedes, one can thus deduce that if we increase the atmospheric CO2 concentration, which is exactly what we are doing right now, not only will we change the climate, but we will also make the ocean more acidic.
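Because pH is a logarithmic scale, seemingly small pH shifts correspond to large relative changes in hydrogen-ion concentration:

```python
# pH is -log10 of the hydrogen-ion activity, so a drop of x pH units
# multiplies [H+] by 10**x.
def h_ion_ratio(ph_drop):
    return 10.0 ** ph_drop

print(f"{h_ion_ratio(0.3):.2f}")  # 2.00: a 0.3-unit drop doubles [H+]
print(f"{h_ion_ratio(0.5):.2f}")  # 3.16
print(f"{h_ion_ratio(0.8):.2f}")  # 6.31
```

So the 0.5 to 0.8 pH-unit changes discussed below mean roughly a three- to six-fold increase in hydrogen-ion concentration.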
Intercomparison of past pH changes of the surface ocean (same colour code as for the graph on the left). The "vertical" curves give the surface-ocean pH loss depending on the rate of atmospheric CO2 increase. For example, an increase of the CO2 concentration to 2000 ppm over a couple of million years generates a decrease of the pH limited to 0.5 units, but if it happens in a couple of centuries, the ocean will become more acidic, losing 0.8 pH units.
A - variation of the acidity of the surface ocean between glacial ages and interglacial ages (such as today): pH ±0.1 over several thousand years
B - variation of the acidity of the surface ocean during the last 300 million years: pH ±0.5
C - variation of the acidity of the surface ocean during the last century, as a result of the CO2 emissions already made: pH ±0.1
D - possible variation of the pH as a result of future emissions, in the "high" case: with an extra 1000 to 2000 ppm of CO2 in a couple of centuries, the ocean would lose 0.5 pH units or more. Such an evolution would be without precedent for millions (tens of millions? hundreds of millions?) of years, because the rate of increase of CO2 in the air would be unprecedented. Such an evolution, over several centuries, is not impossible if the continental CO2 sinks turn into a source.
Source: Caldeira & Wickett, Nature, 2003
One can see on the above graph that with a CO2 concentration in the air reaching 1000 ppm, which is definitely at the high end of the bracket of emission scenarios, the ocean's acidity could increase by more than 0.5 pH units (more exactly, the pH would decrease by 0.5 units). This would have significant consequences for marine life, even with no associated climate change. As usual, some consequences can already be "theoretically" identified, and others will come as surprises, because nobody will have thought of them before they happen (which is normal when a situation is new).
Among the marine organisms that will feel a change in the acidity of the surface ocean are corals, because these animals build up a skeleton made of calcium carbonate (limestone), and the chemical reaction leading to the formation of calcite becomes harder to perform when the water gets more acidic. May we get back to some simple chemical equations? Then here is what normally happens when the coral builds up its skeleton:

Ca++ + CO3-- → CaCO3
In other terms, the coral uses calcium ions dissolved in seawater and combines them with carbonate ions to "produce" calcium carbonate, which is nothing else than the chemical name of limestone.
We have seen above that the CO2 that dissolves into seawater generates "descendants": hydrogen carbonate (HCO3-) and carbonate (CO3--) ions, and that the respective proportions of each compound at equilibrium depend on the conditions at the time. In the tropics (where corals live), hydrogen carbonate ions represent 85% of the dissolved carbon, and carbonate ions 15%. If the acidity increases, these respective proportions will change; more precisely, the share of the carbonates decreases (graph below).
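The shift of carbonate speciation with pH can be sketched with assumed seawater dissociation constants (pK1* ≈ 6.0, pK2* ≈ 9.1 near 25°C; the real values depend on temperature, salinity, and pressure):

```python
# Sketch of carbonate speciation vs. pH with assumed seawater constants.
PK1, PK2 = 6.0, 9.1

def carbonate_fraction(ph):
    """Fraction of dissolved inorganic carbon present as CO3--."""
    h = 10.0 ** -ph
    k1, k2 = 10.0 ** -PK1, 10.0 ** -PK2
    co2 = 1.0                 # relative concentration of dissolved CO2
    hco3 = co2 * k1 / h       # [HCO3-]/[CO2] = K1/[H+]
    co3 = hco3 * k2 / h       # [CO3--]/[HCO3-] = K2/[H+]
    return co3 / (co2 + hco3 + co3)

for ph in (8.1, 7.8, 7.3):
    print(f"pH {ph}: CO3-- is {100 * carbonate_fraction(ph):.1f}% of carbon")
```

With these illustrative constants, a drop of just 0.3 pH units roughly halves the carbonate share, which is the lever acting on calcifying organisms.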
Well, it is only carbonate ions that are used by marine organisms that need to build up a shell or a limestone skeleton, and we can quite reasonably consider that if carbonates are less available in seawater, all the life concerned will have a harder time. Indeed, laboratory experiments show that with a doubling of CO2 compared to today, calcification (the "production" of limestone by the living organisms that do so) slows down by 10 to 40%, and this applies both to corals and to the phytoplankton algae that have a shell.
What the above graph suggests is that if the ocean pH decreases by more than 0.8 units (turning the sea nearly acidic), we would make life very difficult for all marine organisms that need to produce limestone, and that includes, apart from corals, all molluscs (oysters, mussels, etc.), all crustaceans, and a large fraction of the phytoplankton. Fortunately, we are not there for the moment, and given the amount of CO2 in the air required to obtain such a drop of the pH, we can reasonably consider that we will not be there any time soon. But if we cross the threshold at which the terrestrial "carbon sink" reverses (and not that of the ocean), which might happen in several decades if we prolong the trends, it is not impossible that such a drop of the pH happens much later on, making the ocean our remote descendants will face very different from what it is today.
THE Hubble Space Telescope could soon be joined by a modest relative that will peer into the cosmos on behalf of Britain's schoolchildren, students and amateur astronomers. Space Innovations, a satellite company in Newbury, Berkshire, is about to begin a feasibility study for a project dubbed the Humble Space Telescope.
Humble is the brainchild of Michael Martin-Smith, a Hull doctor and keen amateur astronomer. His plan is to fly a 20-centimetre telescope, similar to those owned by some amateurs, on a tiny satellite. "Hubble is for big science," he says. "But there is an awful lot that amateurs could do with their own telescope." Schoolchildren and university students would be among those given observing time.
The plan was originally floated as part of a proposal to build a national science centre in Derby, submitted to the Millennium Commission, which distributes money raised by Britain's National Lottery. That bid failed, but ...
Interspecies Feshbach resonances
Feshbach resonances are an extremely valuable tool for ultracold-atom experiments. They occur when the colliding atoms can couple to a bound molecular state belonging to a different atomic asymptote. This situation can be artificially brought about by applying a homogeneous magnetic field, which introduces a Zeeman energy due to the magnetic moment of the atoms. When the magnetic moment of a bound molecular state differs from the combined magnetic moment of the colliding atoms, the Zeeman energies of the two states differ and they can be shifted relative to each other (left image). At the magnetic field corresponding to energetic degeneracy, the scattering length of the system changes dramatically and can be tuned to any value between plus and minus infinity (right image).
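Near an isolated resonance, the scattering length follows the standard single-resonance parametrization a(B) = a_bg [1 − Δ/(B − B0)]. A short sketch follows; only the 1067 G resonance position is taken from this page, while the background scattering length and width Δ are hypothetical illustrative values:

```python
# Textbook single-resonance formula for the scattering length near a
# Feshbach resonance. a_bg and delta are hypothetical; B0 = 1067 G is the
# broad Li-Rb resonance position reported on this page.
def scattering_length(B, a_bg=-20.0, B0=1067.0, delta=10.0):
    """Scattering length (in units of a_bg) at magnetic field B (gauss)."""
    return a_bg * (1.0 - delta / (B - B0))

for B in (1050.0, 1062.0, 1066.0, 1080.0):
    print(f"B = {B:6.1f} G -> a = {scattering_length(B):8.1f}")
```

The divergence as B approaches B0 is what gives experimenters a knob to dial the interaction strength through any value, positive or negative.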
Optical Dipole Trap
In order to apply a homogeneous magnetic field, the atoms need to be loaded into a trap that does not use the magnetic field coils. We therefore use an optical dipole trap based on a 10 W Yb:YAG fiber laser running at 1064 nm that is focused onto the atomic clouds. Since the light is far red-detuned from the atomic transitions, the atoms experience only a potential energy proportional to the light intensity, without scattering photons. The magnetic field coils can then be used to apply the magnetic field. We use a horizontally crossed beam geometry in order to localize the atoms in the center of the magnetic field coils and keep the clouds small, since field gradients across the cloud, which reduce the magnetic field resolution, need to be considered. The image shows fermionic Li atoms inside a single-beam dipole trap with a long aspect ratio (top) and the crossed geometry, where most atoms sit in the crossing volume (bottom).
Feshbach resonances do not necessarily appear in all spin combinations. We therefore need to prepare the atoms in specific states, usually the absolute ground state, e.g. |1,+1> for bosonic rubidium and lithium and |1/2,+1/2> for fermionic lithium. This can be done very efficiently by coupling the initial state, e.g. |2,+2>, to the desired lower state |1,+1> with a radio-frequency or microwave field. In the case of 7Li the frequency is about 803 MHz. By ramping this frequency across the resonance, nearly all atoms can be transferred into the absolute ground state.
The transfer efficiency can be analyzed by a Stern-Gerlach experiment in which a magnetic field gradient is applied during free fall. The different magnetic moments then cause the cloud of atoms to split into the different magnetic substates. (a) A cloud of pure |2,+2> Li without any sweep. (b) After a sweep, nearly 100% of the atoms are in the |1,+1> state. (c) Applying the same sweep again restores the initial population distribution.
6Li-87Rb Feshbach Resonances
On July 5th, 2007, we observed Feshbach resonances in the Fermi-Bose system for the first time. This was achieved by monitoring the atom numbers after a fixed storage time at different magnetic fields. In the vicinity of a Feshbach resonance, the collision rate increases, thereby also increasing inelastic three-body recombination. For our conditions, Li-Rb-Rb three-body collisions are the dominant loss channel. We found a narrow (a) and a broad (b) Feshbach resonance in the magnetic field range from 0 to 1200 G for 6Li and 87Rb atoms in their respective hyperfine ground states |F,mF> = |1/2,+1/2> and |1,+1>. The magnetic field values where they occur represent important benchmarks for an accurate determination of the interspecies interaction potentials. The broad resonance at 1067 G can be used to accurately control the interspecies scattering length.
7Li-87Rb Feshbach Resonances
The first Feshbach resonances in the Bose-Bose system were observed on March 23rd, 2008. Five loss features were observed in total, four of which could be unambiguously assigned to heteronuclear interspecies Feshbach resonances of the 7Li-87Rb system. Both atoms are in their absolute ground state |F,mF> = |1,+1>. Of particular interest is the extremely broad s-wave Feshbach resonance located at 649 G, whose tuning width of almost 200 G makes very precise control of the scattering parameters possible.
Catalytic Enhancement of 6Li p-wave Feshbach Resonances
In the Fermi-Bose mixture, the fermions are spin-polarized and do not interact through s-wave collisions due to the Pauli exclusion principle. The next higher partial wave, the p-wave, is allowed, but due to its centrifugal barrier these collisions are highly suppressed at ultracold temperatures. Nonetheless, a p-wave resonance in the Li subsystem at 158 G makes it possible to introduce strong Li-Li collisions through the p-wave channel. The presence of Rb inside the trap introduces another loss mechanism through inelastic Li-Li-Rb collisions (right), thereby enhancing the visibility of the p-wave resonance compared to a pure Li cloud, where only Li-Li-Li collisions can occur (left).
Based on our measurements of the background scattering lengths as well as the positions of the Feshbach resonances, the interaction potentials were reconstructed to a high degree of precision. These calculations were performed in the groups of Eberhard Tiemann (U Hannover) and Alejandro Saenz (HU Berlin). These potentials form the basis for scattering-length tuning curves as well as for photoassociation schemes to produce deeply bound polar molecules.
The storm classification table below has been developed by scientists at the University of Virginia for assessing the relative power of northeaster storms in the west Atlantic. The severity of such storms depends on wave height, which in turn depends not only on wind strength but also on its duration and fetch, all necessary conditions for reaching a 'fully developed' sea state. Note that wind speed itself is absent from this table.
|Wave height (m)|| || || || || |
|Average duration (hr)||8||18||34||63||96|
|Relative frequency||49.7% (2yr)||25.2% (4yr)||22.1% (5yr)||2.4% (40 yr)||0.1% (1000yr)|
|Beach erosion||minor||modest||across beach||severe||extreme|
|Dune erosion||none||minor||significant||erosion and ...|
|Overwash||none||none||none||severe on low- ...|
Notes: a relative frequency of 50% means one such storm every two years; 0.1% means one such storm every 1000 years.
Adapted from Cornelia Dean, 'Against the Tide: The Battle for America's Beaches', Columbia University Press, 1999.
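The notes beneath the table reduce to a single rule of thumb: a storm with relative frequency f percent recurs roughly every 100/f years. A quick consistency check on the table's frequency row (my own sketch, not part of the original scale):

```python
# Convert the table's "relative frequency" entries into approximate
# return periods: a relative frequency of f percent corresponds to
# one storm roughly every 100 / f years (50% -> 2 yr, 0.1% -> 1000 yr).

def return_period_years(freq_percent):
    return 100.0 / freq_percent

# Table values and the return periods quoted alongside them.
for f, quoted in [(49.7, 2), (25.2, 4), (22.1, 5), (2.4, 40), (0.1, 1000)]:
    print(f, round(return_period_years(f)), quoted)
```

The 2.4% entry works out to about 42 years rather than the quoted 40, which is consistent with the table rounding to convenient values.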
Structures Only Found in Plant (and Some Algae and Fungus) Cells
Now we move on to the fun stuff that you only wish
you had in your cells. It's OK to be jealous.
Like the cell walls in bacteria, fungi, algae, and some Archaea (single-celled organisms), plant cell walls are found just outside the plasma membrane and provide structure and protection to the cells.
The Cell Wall: Not Yours.
Transmission electron micrograph source: Paungfoo-Lonhienne et al. PLoS ONE. 2010; 5(7).
The cell wall prevents plant cells from bursting (lysing) when too much water moves into the cell across the membrane. As water pushes against the cell wall from the inside, plant cells become large and firm because pressure, known as turgor pressure, builds up against the inside of the cell wall.
You have experienced the presence of turgor pressure when you have broken a piece of crisp celery. In the same way, if you have ever tried to break a piece of wilted celery, you also know what effect the absence of turgor pressure can have on a plant. Plant cell walls also have small openings, called plasmodesmata, that allow cells to communicate with adjacent cells. Plant cell walls are composed primarily of a carbohydrate called cellulose, while fungal cell walls are made of a carbohydrate called chitin.
Large Central Vacuole
Most plant cells have a large membrane-bound sac called a vacuole. In many types of plant cells, this vacuole can occupy between 30% and 80% of the total volume of the cell, making it the largest single cellular structure. The main function of the large central vacuole is to help the cell maintain water pressure, aka turgor pressure, on the cell wall. Water molecules flow into the central vacuole, which, like a big balloon inside a cardboard box, fills up and pushes outward on the cell wall.
Helpful tip: Vacuole sounds a lot like vacant, or vacuum, and this is a good way to think about this organelle. It mostly contains a water-based solution with some organic compounds and enzymes.
Many plant and algae cells contain small, green organelles called chloroplasts.
Here is what our green friend looks like:
Transmission electron micrograph source: Wikimedia Commons
These organelles are responsible for converting the energy from the sun into chemical energy, usually in the form of glucose, through the elaborate process of photosynthesis. Chloroplasts are green because they contain large amounts of the green pigment chlorophyll bound to proteins embedded in internal stacks of membranes called thylakoids. Sunlight is captured by chlorophyll molecules, and its energy is transferred throughout the thylakoid membranes. This energy is used to strip carbon from carbon dioxide in the air to make sugar. Generally, in any given plant, only some of the cells will contain chloroplasts.

Brain Snack
The chloroplast was also a free-living bacterial cell at one point, and its genome is much larger than that of the mitochondrion.
The physical ephemeris of a solar system body refers to its aspect as seen from the Earth: its apparent magnitude, the angular size of its disk, its apparent degree of illumination, the orientation of its pole, and the positions of its sub-solar and sub-Earth points. This information divides into illumination data and rotation data. Illumination data depends on a model of the body's reflectivity as a function of angle of illumination, while rotation data depends on a model of the body's rotation; both depend on the Earth-Sun-body geometry. Illumination information includes the object's apparent phase, magnitude, and the angular dimensions of its disk. Rotation information includes the instantaneous positions of the sub-solar and sub-Earth points on the object's surface and the apparent position angle of the object's axis of rotation.
In MICA's physical ephemeris tabulations, longitudes and latitudes are planetographic, and position angles are measured on the sky eastward from true north (the direction to the true, Celestial Ephemeris Pole of date). The illumination and rotation data, respectively, are the same as those found on the left-hand and right-hand pages of the 'Ephemeris for Physical Observations' section in section E of The Astronomical Almanac. All of MICA's physical ephemerides are available using either a geocentric or topocentric origin. Except for the Moon, and Venus and Mars when near the Earth, the geocentric physical ephemeris of an object is indistinguishable from the topocentric physical ephemeris. Rotation data is available for the Sun, Moon and major planets. Illumination data is available for the Moon and major planets, but not the Sun.
The MICA physical ephemerides of the planets have been calculated using the basic physical data (directions of the north poles of rotation, the prime meridians, and the size and shapes of the major planets) contained in Seidelmann (2002). Expressions for the apparent visual magnitudes of the major planets (except Mercury and Venus) are from Harris (1961). Values for V(1,0), the magnitude of a planet as seen from 1 AU and at a phase angle of 0°, are given on page E88 of The Astronomical Almanac. The MICA Version 2.0 expressions for the magnitudes of Mercury and Venus are based on the parameters given in Hilton (2003) and are the same as used in the 2005 and 2006 editions of The Astronomical Almanac. The MICA 2.0 Mercury and Venus magnitudes differ slightly from the 2004 edition (and earlier editions) of The Astronomical Almanac, which used the earlier Harris (1961) expressions. A useful discussion on the calculation of physical ephemerides is also contained in Hilton (1992).
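The V(1,0) values mentioned above enter the apparent magnitude through the standard relation m = V(1,0) + 5·log10(r·d) + Δm(phase), where r and d are the Sun-planet and Earth-planet distances in AU. A minimal sketch of that relation (the function name, the example V(1,0) value, and the zero phase correction are my own illustrative choices, not taken from MICA):

```python
import math

# Standard apparent-magnitude relation used in physical ephemerides:
#   m = V(1,0) + 5*log10(r*d) + dm(phase)
# r = heliocentric distance (AU), d = geocentric distance (AU),
# dm(phase) = planet-specific phase correction, supplied by the caller.
# The example V(1,0) = -1.52 below is illustrative.

def apparent_magnitude(v10, r_au, d_au, phase_correction=0.0):
    return v10 + 5.0 * math.log10(r_au * d_au) + phase_correction

# At r = d = 1 AU and zero phase angle, m reduces to V(1,0) by definition.
print(apparent_magnitude(-1.52, 1.0, 1.0))
```

Doubling either distance adds 5·log10(2) ≈ 1.5 magnitudes, which is why the distance term dominates the brightness variation of the outer planets.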
Every so often, someone (generally not a practicing scientist) suggests that it's time to replace science with something better. The desire often seems to be a product of either an exaggerated sense of the potential of new approaches, or a lack of understanding of what's actually going on in the world of science. This week's version, which comes courtesy of Chris Anderson, the Editor-in-Chief of Wired, manages to combine both of these features in suggesting that the advent of a cloud of scientific data may free us from the need to use the standard scientific method.
It's easy to see what has Anderson enthused. Modern scientific data sets are increasingly large, comprehensive, and electronic. Things like genome sequences tell us all there is to know about the DNA present in an organism's cells, while DNA chip experiments can determine every gene that's expressed by that cell. That data's also publicly available—out in the cloud, in the current parlance—and it's being mined successfully. That mining extends beyond traditional biological data, too, as projects like WikiProteins are also drawing on text-mining of the electronic scientific literature to suggest connections among biological activities.
There is a lot to like about these trends, and little reason not to be enthused about them. They hold the potential to suggest new avenues of research that scientists wouldn't have identified based on their own analysis of the data. But Anderson appears to take the position that the new research part of the equation has become superfluous; simply having a good algorithm that recognizes the correlation is enough.
The source of this flight of fancy was apparently a quote by Google's research director, who repurposed a cliché that most scientists are aware of: "All models are wrong, and increasingly you can succeed without them." And Google clearly has. It doesn't need to develop a theory as to why a given pattern of links can serve as an indication of valuable information; all it needs to know is that an algorithm that recognizes specific link patterns satisfies its users. Anderson's argument distills down to the suggestion that science can operate on the same level—mechanisms, models, and theories are all dispensable as long as something can pick the correlations out of masses of data.
I can't possibly imagine how he comes to that conclusion. Correlations are a way of catching a scientist's attention, but the models and mechanisms that explain them are how we make the predictions that not only advance science, but generate practical applications. One only needs to look at a promising field that lacks a strong theoretical foundation—high-temperature superconductivity springs to mind—to see how badly the lack of a theory can impact progress. Put in more practical terms, would Anderson be willing to help test a drug that was based on a poorly understood correlation pulled out of a datamine? These days, we like our drugs to have known targets and mechanisms of action and, to get there, we need standard science.
Anderson does provide two examples that he feels support his position, but they actually appear to undercut it. He notes that we know quantum mechanics is wrong on some level, but have been unable to craft a replacement theory after decades of work. But he neglects to mention two key things: without the testable predictions made by the theory, we'll never be able to tell how precisely it is wrong and, in those decades where we've failed to find a replacement, the predictions of quantum mechanics have been used to create the modern electronics industry, with the data cloud being a consequence of that.
If anything, his second example is worse. We can now perform large-scale genetic surveys of the life present in remote environments, such as the far reaches of the Pacific. Doing so has informed us that there's a lot of unexplored biodiversity on the bacterial level; fragments of sequence hint at organisms we've never encountered under a microscope. But as Anderson himself notes, the only thing we can do is make a few guesses as to the properties of the organisms based on who their relatives are, an activity that actually requires a working scientific theory, namely evolution. To do more than that, we need to deploy models of metabolism and ecology against the bacteria themselves.
Overall, the foundation of the argument for a replacement for science is correct: the data cloud is changing science, and leaving us in many cases with a Google-level understanding of the connections between things. Where Anderson stumbles is in his conclusions about what this means for science. The fact is that we couldn't have even reached this Google-level understanding without the models and mechanisms that he suggests are doomed to irrelevance. But, more importantly, nobody, including Anderson himself if he had thought about it, should be happy with stopping at this level of understanding of the natural world.
Exercise 1: Write a complete program that reads a string of characters from the standard input and inserts them in the stack. Print the size of the stack once it has been filled. Then print to the screen all the vowels from the input in reverse order.
Use classes and functions to do the above program.
Exercise 2: Write a program that takes marks of 10 students and store them in the stack then take out these from the stack and print the average of all the marks of 10 students.
Hint: make sure that every time you do a push operation you check that the stack is not full, and in a pop operation you check that the stack is not empty.
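One possible sketch for Exercise 2, with the full/empty checks the hint asks for. The class name, the fixed capacity, and the sample marks are my own choices for illustration, not part of the assignment:

```python
# A bounded stack with the hint's safety checks: push verifies the stack
# is not full, pop verifies it is not empty.

class Stack:
    def __init__(self, capacity=10):
        self.capacity = capacity
        self.items = []

    def is_full(self):
        return len(self.items) >= self.capacity

    def is_empty(self):
        return len(self.items) == 0

    def push(self, item):
        if self.is_full():              # hint: check before pushing
            raise OverflowError("stack is full")
        self.items.append(item)

    def pop(self):
        if self.is_empty():             # hint: check before popping
            raise IndexError("stack is empty")
        return self.items.pop()

# Exercise 2: store 10 marks, take them back out, and print the average.
marks = [72, 85, 60, 91, 78, 66, 88, 54, 79, 83]   # sample data
s = Stack(10)
for m in marks:
    s.push(m)

total, count = 0, 0
while not s.is_empty():
    total += s.pop()
    count += 1
print("average:", total / count)
```

Note that the order in which marks come off the stack does not matter here, since addition is commutative.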
Explore patterns based on a rhombus. How can you enlarge the pattern - or explode it?
Learn how to use advanced pasting techniques to create interactive ...
Learn how to use increment buttons and scroll bars to create interactive Excel resources.
Learn how to make a simple table using Excel.
Investigate factors and multiples using this interactive Excel spreadsheet. Use the increment buttons for experimentation and ...
This investigation uses Excel to optimise a characteristic of ...
Prove that the area of a quadrilateral is given by half the product of the lengths of the diagonals multiplied by the sine of the angle between the diagonals.
Can you convince me of each of the following: If a square number is multiplied by a square number the product is ALWAYS a square number? ...
Find b where 3723(base 10) = 123(base b).
A quadrilateral changes shape with the edge lengths constant. Show that the scalar product of the diagonals is constant. If the diagonals are perpendicular in one position, are they always perpendicular?
Show that for natural numbers x and y, if x/y > 1 then x/y > (x+1)/(y+1) > 1. Hence prove that the product for i = 1 to n of [(2i)/(2i-1)] tends to infinity as n tends to infinity.
In this 'mesh' of sine graphs, one of the graphs is the graph of the sine function. Find the equations of the other graphs to reproduce the pattern.
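The base-conversion problem above ("Find b where 3723(base 10) = 123(base b)") can be checked by brute force, since 123 in base b denotes 1·b² + 2·b + 3. A small sketch (my own, offered as a check rather than the intended pencil-and-paper solution):

```python
# Search for the base b in which the digits 1, 2, 3 denote 3723:
#   123 (base b) = 1*b**2 + 2*b + 3 = 3723.

def find_base(target=3723, digits=(1, 2, 3)):
    for b in range(4, target):        # the base must exceed the largest digit
        value = 0
        for d in digits:
            value = value * b + d     # Horner evaluation of the digit string
        if value == target:
            return b
    return None

print(find_base())   # the quadratic b**2 + 2*b + 3 = 3723 has root b = 60
```

Solving the quadratic directly gives the same answer: b = (-2 + sqrt(4 + 4·3720)) / 2 = 60.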
Is this a fair game? How many ways are there of creating a fair game by adding odd and even numbers?
Interior angles can help us to work out which polygons will tessellate. Can we use similar ideas to predict which polygons combine to create semi-regular solids?
Photocopiers can reduce from A3 to A4 without distorting the image. Explore the relationships between different paper sizes that make ...
Take three whole numbers. The differences between them give you three new numbers. Find the differences between the new numbers and keep repeating this. What happens?
Can you devise a fair scoring system when dice land edge-up or corner-up?
The problem is: how did Archimedes calculate the lengths of the sides of the polygons, which required him to be able to calculate square roots?
Use cunning to work out a strategy to win this game.
Analysis of this problem is fascinating because it draws together a heady mix of theoretical and numerical probability along with estimates and approximations.
This article discusses what happens, and why, if you generate chains of sequences, getting the next sequence from the differences between the adjacent terms in the sequence before it, e.g. (7, 2, 8, 3) maps to (5, 6, 5, 4).
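The example (7, 2, 8, 3) → (5, 6, 5, 4) is consistent with taking absolute differences of adjacent terms cyclically (the last term is differenced with the first), a construction often called a Ducci sequence. A small sketch under that assumption:

```python
# One step of the difference map described above, taken cyclically with
# absolute values: each term becomes |a[i] - a[i+1]|, wrapping around.

def diff_step(seq):
    n = len(seq)
    return tuple(abs(seq[i] - seq[(i + 1) % n]) for i in range(n))

print(diff_step((7, 2, 8, 3)))   # (5, 6, 5, 4), as in the article

# Iterating the map on a 4-tuple of integers reaches the all-zero tuple.
chain = [(7, 2, 8, 3)]
while chain[-1] != (0, 0, 0, 0):
    chain.append(diff_step(chain[-1]))
print(chain)
```

For this example the chain collapses in three steps: (7,2,8,3) → (5,6,5,4) → (1,1,1,1) → (0,0,0,0).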
These models have appeared around the Centre for Mathematical Sciences. Perhaps you would like to try to make some similar models of your own.
November 6, 2009
A priority queue is a data structure that permits insertion of a new element and retrieval of its smallest member; we have seen priority queues in two previous exercises. Priority queues permit sorting by inserting elements in random order and retrieving them in sorted order. Heapsort uses the heap data structure to maintain a priority queue. The heap is a tree embedded in an array, with the property that the item at each index i of the array is less than the children at indices 2i and 2i+1.
The key to understanding heapsort is a function we call heapify that gives the sub-array A[i .. n] the heap property if the sub-array A[i+1 .. n] already has the property. Heapify starts at the ith element of the array and swaps each element with its smallest child, repeating the operation at that child, stopping at the end of the array or when the current element is smaller than either of its children. Then heapsort works in two phases; the first phase forms an initial heap by calling heapify on each element of the array from n/2 down to 1, then a second phase extracts the elements in order by repeatedly swapping the first element with the last, re-heaping the sub-array that excludes the last element, and recurring with the smaller sub-array that excludes the last element.
Your task is to write a function that sorts an array using the heapsort algorithm, using the conventions of the prior exercise. When you are finished, you are welcome to read or run a suggested solution, or to post your solution or discuss the exercise in the comments below.
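A sketch in Python following the description above: a 1-indexed array, children at 2i and 2i+1, and min-heap ordering. One detail worth noting: with a min-heap, repeatedly swapping the root to the end leaves the array in descending order, so this sketch reverses at the end to return ascending order.

```python
def heapify(a, i, n):
    """Restore the heap property for a[i..n] (1-indexed), assuming
    a[i+1..n] already satisfies it, by sifting a[i] down."""
    while 2 * i <= n:
        child = 2 * i
        if child + 1 <= n and a[child + 1] < a[child]:
            child += 1                  # pick the smaller of the two children
        if a[i] <= a[child]:
            break                       # heap property holds; stop sifting
        a[i], a[child] = a[child], a[i]
        i = child

def heapsort(xs):
    a = [None] + list(xs)               # pad so the heap is 1-indexed
    n = len(xs)
    for i in range(n // 2, 0, -1):      # phase 1: build the initial heap
        heapify(a, i, n)
    for end in range(n, 1, -1):         # phase 2: extract minima to the back
        a[1], a[end] = a[end], a[1]
        heapify(a, 1, end - 1)
    return a[1:][::-1]                  # descending -> ascending

print(heapsort([4, 7, 1, 9, 3, 3, 0]))   # [0, 1, 3, 3, 4, 7, 9]
```

Both phases run in O(n log n) time, and the sort uses O(1) extra space beyond the working copy of the input.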
An estimator is called unbiased if its expected value is equal to the true parameter value, regardless of the true parameter's value. Is the Bayes estimate unbiased?
- By part (f), E(θbayes) = θ* if and only if 1-2θ* = 0, or θ* = 0.5. Therefore, the Bayes estimate is NOT unbiased. In fact, it is biased towards central values like 0.5 rather than extreme ones such as 0 or 1. This reflects the uniform prior; even this choice makes an assumption about the data. However, notice from part (f) that as N -> infinity, the Bayes estimate does approach the true value. In general, Bayes estimators are biased, but as long as the chosen prior is "reasonable", they will eventually converge to the true value when enough data is given. In statistics, there is a division between "frequentist" and "Bayesian" philosophies and techniques. Bayesian statistics tends to require stronger assumptions, but can be more powerful as a result.
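The condition 1 − 2θ* = 0 quoted from part (f) matches the textbook case of estimating a Bernoulli parameter under a uniform prior, where the posterior-mean estimate after k successes in N trials is (k+1)/(N+2). Assuming that setup (it is not shown in this excerpt), the bias can be computed exactly:

```python
from math import comb

# Assumed setup, consistent with the 1 - 2*theta condition above but not
# stated explicitly in the excerpt: N Bernoulli(theta) trials, uniform
# prior, posterior-mean estimate theta_hat = (k + 1) / (N + 2).

def expected_estimate(theta, N):
    """E[theta_hat], computed exactly over the Binomial(N, theta) pmf."""
    return sum(comb(N, k) * theta**k * (1 - theta)**(N - k) * (k + 1) / (N + 2)
               for k in range(N + 1))

N = 20
for theta in (0.1, 0.5, 0.9):
    bias = expected_estimate(theta, N) - theta
    print(theta, round(bias, 4))   # closed form: bias = (1 - 2*theta)/(N + 2)
```

The bias is positive below 0.5 and negative above it (the pull toward central values noted in the answer), vanishes exactly at θ = 0.5, and shrinks like 1/(N+2) as N grows, matching the convergence claim.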
We can see several types of galaxies, differing, for example, in their form, dimensions, brightness, mass, stellar content, and, finally, in the distribution of their energy emission across the different bands of the electromagnetic spectrum.
The principal classification, called the Hubble sequence, is based on form and divides galaxies into elliptical, spiral, and irregular types.
Elliptical galaxies are regular systems of approximately spherical form, with little dust and interstellar gas and a very dense nucleus, whose surface brightness decreases from the centre towards the periphery. Their shape ranges from circular, called E0, to extremely flattened, described as E7. Elliptical galaxies are made up mostly of red (Population II) stars which, according to the theory of stellar evolution, are very ancient.
Stars in fact change colour as they age: in the first part of their life they appear blue, then become more yellow-red.
Spiral galaxies appear as systems rich in interstellar gas and dust, built on a central bulge surrounded by a disc from which extend bright spiral filaments, called arms, sites of intense star formation.
We can divide spiral galaxies into two classes: normal spirals (S), with a central, roughly spherical nucleus and spiral arms, and barred spirals (SB), which differ from the normal ones by a bar-shaped structure running through the nucleus.
Irregular galaxies are also rich in interstellar gas and dust, and are usually less massive than spiral and elliptical galaxies. They are called "irregular" because their appearance shows no symmetry. They typically host young stars, i.e. Population I stars.
conic section or conic (kŏnˈĭk), curve formed by the intersection of a plane and a right circular cone (conical surface). The ordinary conic sections are the circle, the ellipse, the parabola, and the hyperbola. When the plane passes through the vertex of the cone, the result is a point, a straight line, or a pair of intersecting straight lines; these are called degenerate conic sections. There are many examples of the conic sections, both in nature and in technology. The orbits of planets and satellites are elliptical, and parallel reflectors (e.g., in telescopes) are parabolic in shape.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
Meanwhile the carbon dioxide content of the oceans will have doubled. This raises an incidental question about the welfare of sea organisms. We know that an increase in carbon dioxide concentration increases the acidity of water, and that many marine animals are extremely sensitive to changes in acidity. However, if the carbon dioxide content of the air were to increase sevenfold, the pH of sea water would not change by more than 0.5 from its present value. Thus changes in carbon dioxide concentration, which have such a profound effect on climate, will probably not disturb future marine life. Perhaps only man will be uncomfortable.
We shall be able to test the carbon dioxide theory against other theories of climatic change quite conclusively during the next half-century. Since we now can measure the sun's energy output independent of the distorting influence of the atmosphere, we shall see whether the earth's temperature trend correlates with measured fluctuations in solar radiation. If volcanic dust is the more important factor, then we may observe the earth's temperature following fluctuations in the number of large volcanic eruptions. But if carbon dioxide is the most important factor, long-term temperature records will rise continuously as long as man consumes the earth's reserves of fossil fuels.
News Stories relating to "rain"
Wednesday, February 6, 2013
Researchers have discovered the presence of significant numbers of living microorganisms--mostly bacteria--in the middle and upper troposphere, the part of the atmosphere four to six miles above the Earth's surface. In other words, the clouds are filled with germs! Do they fall...
Tuesday, April 3, 2012
When plants talk to each other, what do they say? Some of them compare notes on how to survive a drought, and plants that have been subjected to a previous period of drought learn to deal with the stress thanks to their memories of the experience.
Wednesday, February 9, 2011
NASA's new GBM telescope has detected beams of antimatter produced above thunderstorms on Earth by energetic processes similar to those found in particle accelerators. Scientists think the antimatter particles are formed in a terrestrial gamma-ray flash (TGF), a brief burst produced inside ...
Tuesday, February 8, 2011
A new, detailed record of rainfall fluctuations in ancient Mexico that spans more than 12 centuries will help us understand the role drought played in the rise and fall of pre-Hispanic civilizations. If there's a...
Wednesday, October 20, 2010
There are lots of places to get power (including the power to fight a war-- NOTE: Subscribers can still listen to this show), besides greenhouse gas emitting fossil fuels: wind turbines, solar panels, ethanol. How about out of thin air? And a group of researchers thinks that if economists and investors really want to end the recession, they...
Friday, February 8, 2008
We once reported that it rains MORE on weekends. Now scientists say it rains LESS on weekends. But they all agree on the reason: air pollution.
A new NASA study has found that summer storms in the Southeastern US occur more often midweek than on weekends. They think the cause of this is air pollution created by traffic exhaust and other...
Wednesday, August 15, 2007
Recent extreme weather includes a tornado that hit Brooklyn. Sometimes it seems that storms are worse over cities, especially on the weekends, and scientists say this is TRUE. In fact, climate change caused by greenhouse gas emissions produced by humans has been changing global rainfall patterns over the entire 20th century.
Tuesday, February 19, 2002
New satellite data shows that tiny airborne particles are changing rainfall patterns around the world. These man-made particles, mostly from burning fossil fuels, make it more difficult for clouds to form and less likely to rain if they do form.
Daniel Rosenfeld of the Hebrew University of Jerusalem says that because they block sunlight...
Determination of Monthly Mixed Layer Depth Fields
In order to define monthly mixed layer depth (MLD), a weighted average based on two sources of MLD information was created, one source based on observations and the other based on a numerical ocean model. The first was the MLD product offered by the National Ocean Data Center (NODC). Specifically, the MLD fields computed via potential density at 1°×1° from gridded temperature/salinity (T/S) (Levitus and Boyer 1994a; Levitus et al. 1994) were used. This product is available online. The second source was Fleet Numerical Meteorology and Oceanography Center (FNMOC) model mixed layer output at a resolution of 2.5°×2.5° (Clancy and Sadler 1992). Using daily FNMOC fields from March through December 1995 and January and February, 1996, monthly means were computed and then gridded to the same resolution as the NODC fields.
The T/S observations required for the NODC MLD product are highly non-uniformly distributed over the globe, and much of the ocean is completely unsampled (see Levitus and Boyer 1994a for methodology of filling the global 1°×1° grid). As a result, the MLD fields contain unrealistic spatial distributions, horizontal gradients, and magnitudes. This problem with definition of MLD from gridded T/S is known, and a developing approach is to define MLD from individual hydrographic profiles and to grid resultant MLD estimates only where observations exist (Monterey, G., Pacific Fisheries Environmental Laboratory, Pacific Grove, Calif., personal communication.). However, such MLD fields are not currently available. Therefore, a weighting function for the NODC MLD fields was defined based on observation density. Specifically, we used the monthly average number of salinity observations at NODC levels within the upper 50 m. Based on mapped observation density, a cutoff of 75 was chosen to define where salinity was well sampled and thus where the NODC MLD fields had a sufficient observational base. Above this cutoff, the weighting for NODC MLD was 1 (~7% of the grid points). Below the cutoff, the weighting for NODC MLD was the average number of observations divided by 75. Lastly, because some NODC MLD values are extremely and unrealistically deep where few observations exist, zero weighting was assigned where NODC MLD was > 400 m. This weighting procedure retained NODC MLD estimates in relatively well-observed regions and relied on the model (FNMOC) MLD estimates for poorly observed regions (in proportion to the paucity of observations).
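The weighting scheme described above can be summarized per grid point as: NODC weight = min(1, n_obs/75), forced to zero where the NODC MLD exceeds 400 m, with the remainder of the weight given to the FNMOC model. A minimal sketch (function and variable names are mine, not from the paper):

```python
# Blend the observation-based (NODC) and model-based (FNMOC) MLD
# estimates at one grid point, following the weighting rules above:
#   w = min(1, n_obs / 75), and w = 0 where NODC MLD > 400 m.

def blend_mld(mld_nodc, mld_fnmoc, n_obs, obs_cutoff=75.0, depth_cutoff=400.0):
    if mld_nodc > depth_cutoff:          # unrealistically deep NODC value
        w = 0.0
    else:
        w = min(1.0, n_obs / obs_cutoff)
    return w * mld_nodc + (1.0 - w) * mld_fnmoc

print(blend_mld(50.0, 60.0, 150.0))    # well observed: NODC dominates
print(blend_mld(80.0, 100.0, 37.5))    # half the cutoff: equal weights
print(blend_mld(500.0, 120.0, 90.0))   # NODC too deep: model value used
```

The blend therefore relies on the model exactly in proportion to the paucity of observations, as the text states.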
Following this definition of the weighted average MLD product, there still remained grid points where neither input data set provided information. Missing grid points within the latitude range 65° N to 65° S were filled with a combination of spatial and temporal averaging (±2 months and 5° of latitude/longitude). Any points not filled by this procedure were filled with the mean of all valid monthly MLD values for that grid point. Finally, a 5°×5° median filter was applied to the monthly MLD fields to smooth the boundaries where missing data were filled in the last step. | <urn:uuid:7b467fc4-3073-44f1-88ff-7923c19f037b> | 2.90625 | 686 | Academic Writing | Science & Tech. | 42.948417 |
From Universe Today (yes, I know it is linked elsewhere on BAUT, but I thought it worth repeating here)
A new look at data from the Mars Viking landers concludes that the two landers may have found the building blocks of life on the Red Planet back in 1976 after all. The surprise discovery of perchlorates by the Phoenix mission on Mars 32 years later could mean that the way the Viking experiment was set up would actually have destroyed any carbon-based chemical building blocks of life – the very thing the experiment was designed to find.
“This doesn't say anything about the question of whether or not life has existed on Mars, but it could make a big difference in how we look for evidence to answer that question,” said Chris McKay of NASA's Ames Research Center. McKay coauthored a study published online by the Journal of Geophysical Research – Planets, reanalyzing the results of Viking's tests for organic chemicals in Martian soil.
The Viking lander scooped up some soil, put it in a tiny oven and heated the sample. The only organic chemicals identified in the Martian soil from that experiment were chloromethane and dichloromethane — chlorine compounds interpreted at the time as likely contaminants from cleaning fluids used on the spacecraft before it left Earth. But those chemicals are exactly what the new study found when a little perchlorate — the surprise finding from Phoenix — was added to desert soil from Chile containing organics and analyzed in the manner of the Viking tests.
I enjoy a lively discussion. Perhaps that was my objective, rather than my original position regarding the direction and source of the force which causes the tides.
Each particle of each body "attracts" every other particle, so the actual number of forces may be enormous. If we assume both bodies to be emitting separate forces (which they do), not just one force between them (which the formula implies), then the total force between two bodies becomes the vector sum of the many individual forces.
Those two resultant forces are exactly equal in magnitude and exactly opposite in direction. Therefore they are calculated as if they were one force.
Because the two (or many) forces are calculated from different properties of the two bodies, and because there has always been discussion regarding the difference between gravitational mass and inertial mass, I'm wondering whether we shouldn't treat a body as both a gravitational and an inertial mass and be mindful of the separate forces.
Perhaps the formula for gravitational force could be rewritten to include those ideas.
Hopefully you can see how I've made the total force of gravity between two bodies proportional to the sum of the two resultant forces, and inversely proportional to the square of the single distance between them. What that does is change our presently accepted value of G = 6.6726e-11 (mks units) to G = 3.3363e-11 (mks units).
The physics book says that "attract" means that a body exerts a force on another body that is directed back to the first body along the line joining the two bodies.
But the law of gravity says that any two bodies "attract" each other.
Therefore, there are separate forces exerted by each of the two bodies.
Gravity is not unlike russ_watters' "skateboarders", IF BOTH PULL.
There is only one rope or line of force. The action and reaction to each force is equal and opposite.
When you talk about a tide on a celestial body, that body becomes the reference frame, and the force comes from within that body.
Yes it is true the other bodies are doing the same thing, it's just a matter of which one is the reference.
The Earth's ocean tides are caused when the Earth's oceans "attract" or gravitate towards the Moon and Sun. Those waters are the source of the gravitational force which moves them.
I have a copy of Newton's Principia (translation) here on the desk. Newton used the term "mutual attraction" more than once. He also stressed the fact that the force of gravity diminishes the farther away from the source you go.
And to perhaps clarify the direction he said this:
Please note where Newton said "each towards the other". This is my argument that the tides gravitate towards the Moon and Sun. Yes, the Moon and Sun are cute and "attractive", but the tides gravitate or attract themselves to those other cute, attractive bodies. Any body has the force of gravity, and it uses that force to bootstrap or move itself unless it is restrained in some manner, such as the apple restrained by the tree.
Newton also said:
Can you see in that theorem that the Moon "gravitates" — that it is the source of the gravitational force by which it bootstraps, or causes itself to be moved or "drawn off"?
Check me out @ Youtube.com/aemind for more Mind Empowering videos.
I want to talk about Flexible Electrons in today’s Mind Updates video.
Even though scientists from Northwestern University had discovered a way to make electrons flexible, they could never really make them as flexible as they would need to be in order to be used in everyday consumer electronics.
Not until now, that is. Scientists have been able to create technology that makes electrons able to flex up to 200 times their normal size. For the full article, check this out: http://www.aemind.com/flexelec
Essentially, what this means is that electronic devices are going to be able to be made much thinner and more flexible. Imagine having an iPhone or Droid phone that is super thin and that you can do the "pencil wiggle" with.
Let me know what you think about this discovery down below.
Check out Yesterday’s Mind Update video | <urn:uuid:59e56725-1acc-42a9-87eb-7c6647367721> | 3.0625 | 204 | Truncated | Science & Tech. | 56.018246 |
Vacuum expectation value
In quantum field theory the vacuum expectation value (also called condensate or simply VEV) of an operator is its average, expected value in the vacuum. The vacuum expectation value of an operator O is usually denoted by ⟨O⟩. One of the best known examples of an observable physical effect that results from the vacuum expectation value of an operator is the Casimir effect.
- The Higgs field has a vacuum expectation value of 246 GeV (Amsler, C.; Doser, M.; Antonelli, M.; Asner, D.; Babu, K.; Baer, H.; Band, H.; Barnett, R. et al. (2008). "Review of Particle Physics". Physics Letters B 667: 1. doi:10.1016/j.physletb.2008.07.018). This nonzero value underlies the Higgs mechanism of the Standard Model.
- The chiral condensate in Quantum chromodynamics, about a factor of a thousand smaller than the above, gives a large effective mass to quarks, and distinguishes between phases of quark matter. This underlies the bulk of the mass of most hadrons.
- The gluon condensate in Quantum chromodynamics may also be partly responsible for masses of hadrons.
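The 246 GeV figure quoted above is fixed by the measured strength of the weak interaction. As a sketch (these are standard textbook relations, not taken from this article), the Higgs potential, its minimum, and the connection to the Fermi constant G_F are:

```latex
V(\phi) = -\mu^2\,\phi^\dagger\phi + \lambda\,(\phi^\dagger\phi)^2,
\qquad
\langle\phi\rangle = \frac{v}{\sqrt{2}}, \quad
v = \sqrt{\frac{\mu^2}{\lambda}},
\qquad
v = \left(\sqrt{2}\,G_F\right)^{-1/2} \approx 246\ \text{GeV}.
```

Minimizing V with respect to φ†φ gives φ†φ = μ²/(2λ), i.e. a nonzero expectation value whenever μ² > 0.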
The observed Lorentz invariance of space-time allows only the formation of condensates which are Lorentz scalars and have vanishing charge. Thus fermion condensates must be of the form ⟨ψ̄ψ⟩, where ψ is the fermion field. Similarly a tensor field, Gμν, can only have a scalar expectation value such as ⟨GμνGμν⟩.
- Wightman axioms and Correlation function (quantum field theory)
- vacuum energy or dark energy
- Spontaneous symmetry breaking
When designing Wrapfs, we concentrated on the following aspects:
The first four points are discussed below. Performance is addressed in detail in Section 5.
Without changing a given Vnode interface, we have identified three items that file system developers want to inspect or manipulate: file data, names, and attributes. Changing file data is the most obvious (e.g., encryption).
Changing file names as part of, say, encryption is also desired. For example, a file system may refuse to create files that contain rarely used characters such as whitespace and other non-printable characters, or visually confusing names such as ``...'' (three dots) as these are sometimes used by intruders to obscure their tracks. More advanced use can be made by inspecting file names and selectively manipulating them. For example, a file system that adds immutable file support will need to have a list of file names to consider untouchable. Several file systems can place auxiliary information (such as access keys) in files that are hidden from normal view (beyond the obvious ``dot'' files) and are used only internally by the file system.
Working with file attributes promises some of the most interesting file systems, as attributes fundamentally reflect existing Unix file access control. For example, a file system can perform UID or GID mapping based on the file's ownership. It can exploit seldom used mode bits: the setuid bit on directories can be used to indicate immutable directories. Attempting to modify setuid binaries without prior authentication can be prevented. For every given file F, if a file named .acl.F exists, the file system can read the contents of that file and interpret them as additional access grants or revocations to the file F.
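The `.acl.F` idea above can be sketched at user level (illustrative Python, not kernel code; the one-entry-per-line grant format is invented purely for the example — the paper does not specify one):

```python
import os

def extra_permissions(path):
    """If a companion '.acl.<name>' file exists beside `path`, return its
    contents as a list of extra grant/revoke entries, one per line.

    The '.acl.' naming convention is the one suggested in the text; the
    entry format is illustrative only.
    """
    directory, name = os.path.split(path)
    acl_path = os.path.join(directory, ".acl." + name)
    if not os.path.exists(acl_path):
        return []  # no companion file: no extra grants or revocations
    with open(acl_path) as f:
        return [line.strip() for line in f if line.strip()]
```

In the real design this check would live inside the file system's permission-checking vnode operation, where the `.acl.F` file could also be hidden from directory listings.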
Advanced developers may wish to combine changing various aspects of files with additional coding inside specific vnode operations. For example, a file system may wish to track removal of vital files such as logs that can be used to analyze attackers' actions. Attempts to remove the files can be transparently translated into file renaming. The name chosen can be a special name that is hidden from normal view: it will not get listed with the rest of the files in the directory, but be available on the underlying file system. A generalized scheme can include file versioning.
This list of examples is not exhaustive. It should be considered only a hint of what can be accomplished with level of flexibility that Wrapfs offers.
The API for the Wrapfs developer is summarized in Table 1 and is described here. It consists of six calls to encode and decode file data, names, or attributes. Since it may be necessary to perform more sophisticated operations in these calls, they are passed additional information such as the current vnode, VFS, user credentials, etc.
In order to simplify the manipulation of file data, and to enable MMAP operations (necessary for executing binaries), we perform data manipulations in a size that is native to the operating system, usually 4KB or 8KB. Another compelling reason for manipulating only whole pages is that some file data changes may require it. Some encryption algorithms work on blocks of data of a known fixed size such that bytes within the block depend on preceding bytes[22,27]. It was therefore important to confine users of Wrapfs to manipulating a fixed size data buffer.
To keep Wrapfs simple, we decided that the data encoding and decoding calls will return a buffer of the same size as the one passed to it. This design decision excludes the possibility of using algorithms such as compression and decompression, because such algorithms change the size of their input data, making file offset calculations costly. Supporting such algorithms would have complicated Wrapfs considerably. Therefore we left this support out of the first implementation of Wrapfs.
We decided that all Vnode calls that write file data will call the function encode_data before writing the data to the lower level file system. Then, all Vnode calls that read file data will call the function decode_data after reading the data from the lower level file system. In a similar fashion, all Vnode functions that manipulate file names or attributes have the appropriate encode or decode function called in the right places.
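As an illustration of this page-oriented discipline (a Python sketch, not the actual Wrapfs C API), a trivial XOR cipher can stand in for any encoding that maps a page to a same-size page:

```python
PAGE_SIZE = 4096  # a typical native page size; Wrapfs assumes 4KB or 8KB

def encode_data(page: bytes, key: int = 0x5A) -> bytes:
    """Called before writing a page to the lower level file system."""
    assert len(page) == PAGE_SIZE, "hooks operate on whole pages only"
    return bytes(b ^ key for b in page)

def decode_data(page: bytes, key: int = 0x5A) -> bytes:
    """Called after reading a page from the lower level file system."""
    assert len(page) == PAGE_SIZE
    return bytes(b ^ key for b in page)
```

Because XOR is its own inverse and preserves length, `decode_data(encode_data(p)) == p`, matching the requirement that encoding return a buffer of the same size — and showing why compression, which changes the size of its input, would not fit this interface.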
The user of Wrapfs who wishes to manipulate files and their names or attributes need not worry about which Vnode functions use them, how directory reading (readdir) is accomplished, or about holding and releasing locks, updating reference counts, or caching. The file system developer only needs to fill in the relevant encoding and decoding functions. Wrapfs takes care of all these operating system internals.
We are making the full sources to Wrapfs publicly available. This way it is possible for file system developers to modify every aspect of a prototype, not just through the six API calls. This also allows the security community to validate and improve the templates.
There are three important issues relating to the extension of the Wrapfs API to user-level: mount points, caching, and ioctls.
Wrapfs supports two ways of mounting a file system: a regular mount and an overlay mount. In a regular mount two pathnames are given: one for the mount point (say /mnt), and one for the directory to stack on (the mounted directory /usr). For example mount -t wrapfs /mnt /usr. After the mount is complete, there are two ways to access the mounted-on file system. Access via the mounted-on directory (/usr) yields the lower level files without going through Wrapfs. However, access via the mount point (/mnt) will go through Wrapfs first. This mount style exposes the mounted directory to user processes, and is useful for debugging purposes and for backups to proceed faster. (That users can bypass the mount point is a general property of stacking, not one brought on by Wrapfs). For example, in an encryption file system, a backup utility can backup files faster and safer if it uses the lower file system's files (ciphertext), rather than the ones through the mount point (cleartext).
The second mount style, an overlay mount, is accomplished using mount -t wrapfs -O /usr. Here, Wrapfs is mounted directly on top of /usr. Accessing files such as those in /usr/ucb must go through Wrapfs. There is no easy way to get to the original file system's files under /usr without passing through Wrapfs first. This mount style makes backups and debugging more difficult, but has the advantage of hiding the lower level file system from user processes.
We consider an overlay mount more secure and thus made it the default mount style in Wrapfs. A sophisticated attacker might be able to overlay another file system whose purpose would be to bypass several layers and get directly into the lowest level file system. Such an attack requires root privileges, source access to all of file systems currently mounted, and understanding of kernel internals. The attacker would have to carefully follow kernel data structures to reach the ones representing the lowest level file system. This attack is therefore no easier than kernel memory manipulation via /dev/kmem.
An important point that relates to the mount style is that of caching. Most file systems cache pages to improve performance. When a stackable file system is used on top of, say, UFS, both layers may cache pages independently. Cache incoherency could result if pages at different layers are modified independently, but that could only occur in regular mounts; overlay mounts do not let user processes modify data pages at the lower layers. A mechanism for cache synchronization through a centralized cache manager was proposed by Heidemann, but that solution involved modifying the rest of the operating system and other file systems.
We decided that Wrapfs will perform its own caching, and may cache pages at the lower layer depending on the mount style. If the mount style was regular, Wrapfs caches pages also at the lower layer, because this improves performance when accessing files directly through the lower layer; in fact there is no way to avoid caching pages at the lower layer in a regular mount style, because processes can access files directly through the lower level file system. We also decided that the higher the layer is, the more authoritative it would be. For example, when writing to disk, cached pages for the same file in Wrapfs would overwrite their UFS counterparts. This policy matches the most common case of cache access, through the uppermost layer.
If an overlay mount style was used, Wrapfs does not cache pages at the lower layer. This cuts memory usage for pages by half, and performance is still very good, as pages are served off of the upper (Wrapfs) layer, where pages are always cached.
The third important user-level issue relates to the ioctl(2) system call. Ioctls have been used for years as simple means to extend the API of a file system beyond that which system and Vnode calls offer. Wrapfs allows its user to define new ioctl codes and implement their associated actions. Two ioctls are already defined: one to set a debugging level, and one to query it. Wrapfs comes with many debugging traces that can be turned on or off at run time by a root user. Other possible ioctls that can be implemented by specific file systems include passing and retrieving additional information to and from the file system. An encryption file system (such as the one described in Section 4.9) might use an ioctl mechanism to set encryption keys. | <urn:uuid:4d28475c-e388-45b6-b46c-ac097435f7e4> | 2.734375 | 1,922 | Documentation | Software Dev. | 40.365984 |
<programming> A data type composed of multiple elements. An aggregate can be homogeneous (all elements have the same type) e.g. an array, a list in a functional language, a string of characters, a file; or it can be heterogeneous (elements can have different types) e.g. a structure. In most languages aggregates can contain elements which are themselves aggregates. e.g. a list of lists.
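In Python terms (an illustrative mapping — the entry itself is language-neutral), the distinction looks like this:

```python
from dataclasses import dataclass

# Homogeneous aggregate: every element has the same type.
scores = [88, 92, 75]

# Heterogeneous aggregate (analogous to a C structure):
# fields of different types grouped under one name.
@dataclass
class Employee:
    name: str
    age: int
    salary: float

# Aggregates can contain aggregates: a list of lists.
matrix = [[1, 2], [3, 4]]
```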
See also union.
Earth is the third planet from the Sun, and, as far as we know, where all humans live.
- Mass: 5.9736×10²⁴ kg
- Equatorial Radius: 6378.1 km
- Mean Density: 5515 kg/m³
- Length of Day: 24.0000 hours
- Period of Revolution about Sun: 365.256 days
- Acceleration due to Gravity: 9.81 m/s²
- Mean Orbital Velocity: 29.78 km/s
- Inclination of Axis: 23.45°
- Mean Distance from the Sun: 1 AU
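The tabulated values are mutually consistent, as a quick back-of-the-envelope check shows (illustrative Python; the gravitational constant G and the length of an AU in metres are standard values not listed above):

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2 (standard value, not from the table)
M = 5.9736e24        # kg, mass from the table
R = 6.3781e6         # m, equatorial radius from the table
AU = 1.496e11        # m (standard value)
YEAR = 365.256 * 86400  # s, period of revolution from the table

g = G * M / R**2               # surface gravity, ≈ 9.8 m/s²
v = 2 * math.pi * AU / YEAR    # mean orbital speed (circular-orbit approximation), ≈ 29.8 km/s
```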
Earth's atmosphere is mainly made up of Nitrogen (78%). The other major atmospheric gas is Oxygen (21%) with very small amounts of Argon, Carbon Dioxide, Neon, Helium, Krypton and Hydrogen. Earth's weather is relatively calm by solar system standards - wind speeds vary from 0 to 100m/s (compared with Neptune's 560m/s wind speed).
As far as we know, Earth is the only planet in the solar system that supports life, although it is believed there are other candidates, such as Europa. Earth is suitable for living things because of its molten core, which causes a magnetic field around the Earth that protects the surface from harmful radiation and particles coming from the Sun, and because of its abundant water.
Earth is in possession of a rather large satellite, called the Moon.
See? It's Mostly Harmless.
The h2g2 Tour of the Solar System
Take the h2g2 Shuttle for your whistle-stop tour of the Solar System. | <urn:uuid:724ce15c-e412-477f-988f-06d9112f3002> | 3.421875 | 347 | Knowledge Article | Science & Tech. | 73.556131 |
Create reusable content
You can make elements on a screen into user controls that can be used in multiple pages throughout a prototype. For example, you can create a navigation bar, and then make the navigation bar into a user control so that you can reuse the control throughout your prototype.
For more information about creating UserControls, see Create an application flow.
You might also want to create a library of UserControls so that you can easily reuse them throughout multiple projects. You can create a Microsoft Silverlight or Windows Presentation Foundation (WPF) control library.
To create a new control library
On the File menu, click New Project, and then select either Silverlight or WPF.
If you create a Silverlight project, click Silverlight Control Library. If you create a WPF project, click WPF Control Library.
In the Name box, type a name for the project.
In the Location box, type the name of or browse to the folder where you want to store the project. By default, this is a folder named "Blend Projects" in your "My Documents" folder.
On the Language menu, select a programming language (Visual C# or Visual Basic).
For more information about creating projects in Expression Blend, see Create a new project.
Now that you have created a new control library, you can create new user controls to include in the library. Create any new user controls that you want for your project.
For more information about creating new user controls, see Create a new user control in your project.
After you have created the user controls that you want to include in your control library, build the project by pressing CTRL+SHIFT+B.
For more information, see Build a project.
Now that you have created and built your control library, you can add a reference to the library from inside your project, and reuse the controls from the library in your new project.
For more information, see Add or remove a reference. | <urn:uuid:b845d177-2172-499b-b58d-ab23ae262735> | 2.8125 | 407 | Documentation | Software Dev. | 51.050759 |
This is a clear and concise treatise on phase transition, written by Leo Kadanoff, one of the leading figures today in condensed matter physics. Even if you don't understand the mathematics, the written description of phase transition is very well presented, so I highly recommend this for everyone to read.
Abstract: This paper looks at the early theory of phase transitions. It considers a group of related concepts derived from condensed matter and statistical physics. The key technical ideas here go under the names of "singularity", "order parameter", "mean field theory", and "variational method".
In a less technical vein, the question here is how can matter, ordinary matter, support a diversity of forms. We see this diversity each time we observe ice in contact with liquid water or see water vapor, "steam", come up from a pot of heated water. Different phases can be qualitatively different in that walking on ice is well within human capacity, but walking on liquid water is proverbially forbidden to ordinary humans. These differences have been apparent to humankind for millennia, but only brought within the domain of scientific understanding since the 1880s.
A phase transition is a change from one behavior to another. A first-order phase transition involves a discontinuous jump in some statistical variable of the system. The discontinuous property is called the order parameter. Each phase transition has its own order parameter, and these range over a tremendous variety of physical properties. These properties include the density of a liquid-gas transition, the magnetization in a ferromagnet, the size of a connected cluster in a percolation transition, and a condensate wave function in a superfluid or superconductor. A continuous transition occurs when that jump approaches zero. This note is about statistical mechanics and the development of mean field theory as a basis for a partial understanding of this phenomenon.
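The mean-field idea Kadanoff discusses can be made concrete with the textbook self-consistency equation for a ferromagnet, m = tanh(K·m), where K = zJ/(k_BT) is a reduced coupling (this standard example is my addition, not from the abstract): below the critical value K = 1 the only solution is m = 0, while above it a nonzero magnetization — the order parameter — appears.

```python
import math

def mean_field_magnetization(K, iters=2000):
    """Solve the mean-field self-consistency equation m = tanh(K*m)
    by fixed-point iteration from m0 = 0.5.

    K = 1 is the mean-field critical point: for K < 1 the iteration
    collapses to m = 0; for K > 1 it converges to a nonzero root.
    """
    m = 0.5
    for _ in range(iters):
        m = math.tanh(K * m)
    return m
```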
Characteristics of Fluids
by Ron Kurtus (revised 26 March 2007)
The states of matter are solid, liquid, gas and plasma. Fluids are a subset of the states of matter, comprising liquids, gases and plasmas. They are grouped together because they share common properties that are distinct from those of solids. A fluid does not have a specific shape as a solid does. Instead, fluids take the shape of their containers. They also will flow or pour when under the influence of a force such as gravity.
Questions you may have include:
- What is the natural shape of fluids?
- How do fluids take the shape of their containers?
- How do fluids flow?
This lesson will answer those questions.
Solids have specific shapes because the molecular forces holding particles in place are stronger than the kinetic energy of the molecules. Usually, the molecules just vibrate in place, with little or no other movement.
On the other hand, fluids exist at higher temperatures and thus their particles have greater kinetic energy. The shape of a fluid adapts to its environment or container.
A liquid in space will form the natural shape of a sphere. This is because the attraction between its atoms or molecules is greater than the forces from their kinetic energy moving outward. A sphere is a shape with the smallest surface area for a given volume of material.
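The claim that the sphere has the smallest surface area for a given volume is easy to check numerically against another shape (illustrative Python; here a sphere and a cube of equal unit volume):

```python
import math

V = 1.0

# Sphere of volume V: r = (3V / 4π)^(1/3), surface area = 4π r²
r = (3 * V / (4 * math.pi)) ** (1 / 3)
sphere_area = 4 * math.pi * r ** 2   # ≈ 4.84 for V = 1

# Cube of volume V: side = V^(1/3), surface area = 6 side²
side = V ** (1 / 3)
cube_area = 6 * side ** 2            # = 6 for V = 1
```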
A liquid sphere or drop of liquid—such as water—that is falling toward the Earth through the atmosphere will be a slightly flattened sphere, due to the air resistance.
If you spill some water on the floor, it will splash and spread out on the floor. Liquids like thin oil will spread out even more than water on the floor.
The molecules in a gas have more energy than when the material is in the liquid state, such that they overcome the molecular forces. A gas in space or in the atmosphere will continually spread in a shapeless form.
A gas that is heavier than air may gravitate toward the floor, where it then spreads out.
The rate that the gas expands is a function of its temperature or kinetic energy of its particles.
A plasma is an ionized gas, usually at extremely high temperatures. That means some of its electrons have been stripped off. Plasmas have most of the same properties as gases.
Shape in container
Under the influence of gravity, a fluid will take the shape of its container, provided the volume of the container is greater than or equal to the volume of the fluid.
If you pour a liquid into a container, it will take the shape of the container, provided none overflows. Under the influence of gravity, a liquid will stay in an open container, such as a cup.
If the container is filled to the top, the volume of the liquid will equal the volume of the container. This fact has been used to measure the volume of an irregularly shaped container or flask.
A gas will take the shape of its container too. If the container is open and the gas is heavier than air, it will stay in the container for a while.
For example, Chlorine gas (Cl2) is one of the few gases that is colored. If you pour it into a container, you will see the light green gas take the shape of the container. But the high energy of the gas molecules will result in it slowly dissipating into the air.
Usually, gases are put in closed containers. Since gases tend to spread, and since the rate of spreading is proportional to the temperature of the gas or the kinetic energy of its particles, there is a constant pressure on the walls of the container in all directions. This pressure increases with increased temperature or reduced volume of the container.
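The pressure behavior described here is captured by the ideal gas law, P = nRT/V (a standard relation, not stated in the article). A short check that pressure rises with temperature and falls with volume:

```python
R = 8.314  # J / (mol K), universal gas constant

def pressure(n_mol, T_kelvin, V_m3):
    """Ideal-gas pressure in pascals for n_mol moles at temperature T in volume V."""
    return n_mol * R * T_kelvin / V_m3

# One mole near room temperature in 25 litres is roughly atmospheric pressure.
p0 = pressure(1.0, 300.0, 0.025)   # ≈ 1e5 Pa
```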
Because of their high temperature, plasmas are seldom placed where they could take the shape of the container.
The major feature of a fluid is that it flows when acted upon by some force. This makes a fluid different than a solid, which may be distorted by a force but will not start to flow. Typically, the force is that of gravity, but other forces can also apply.
Fluids under the influence of gravity will flow or can be poured. You certainly have poured liquids from one container to another. Gases also can be poured. Since plasmas are typically very hot, they are seldom poured.
Although, you cannot see carbon dioxide (CO2), you can demonstrate pouring it from one jar to another. This is shown by using dry ice to fill a jar with CO2 and then pouring it into a jar containing a burning candle. The candle flame will be snuffed out as the invisible CO2 is poured into the jar.
The forces caused by the acceleration, deceleration or change in direction of a moving container can cause the fluid to flow or change its shape.
The force of wind on a body of water will cause the water to flow, as well as to create surface waves.
A fluid is a subset of the states of matter, consisting of liquids, gases and plasmas. They have common properties that are distinct from solids. Fluids do not have a specific shape as do solids. Instead, fluids take the shape of their containers. They also will pour when under the influence of a force such as gravity.
Walk with fluid grace
Resources and references
Fluid Mechanics by Ira M. Cohen and Pijush K. Kundu, Academic Press (2004) $74.95
Vectors, Tensors and the Basic Equations of Fluid Mechanics by Rutherford Aris, Dover Publications (1990) $14.95
Fundamentals of Fluid Mechanics by Bruce R. Munson, Donald F. Young, Theodore H. Okiishi; Wiley (2001) $37.95
What do you think?
Do you have any questions, comments, or opinions on this subject? If so, send an email with your feedback. I will try to get back to you as soon as possible.
Characteristics of Fluids | <urn:uuid:b04d95f3-6fb6-4aaa-b4a3-010dc6dab246> | 4.28125 | 1,374 | Truncated | Science & Tech. | 55.848482 |
Science Fair Project Encyclopedia
Intuitively, a space is complete if it "doesn't have any holes", if there aren't any "points missing". For instance, the rational numbers are not complete, because √2 is "missing" even though you can construct a Cauchy sequence of rational numbers that converge to it. (See the examples below.) It is always possible to "fill all the holes", leading to the completion of a given space, as will be explained below.
The space Q of rational numbers, with the standard metric given by the absolute value, is not complete. Consider for instance the sequence defined by x_1 := 1 and x_{n+1} := x_n/2 + 1/x_n. This is a Cauchy sequence of rational numbers, but it does not converge towards any rational limit; in fact, it converges towards the irrational number √2, the square root of two.
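The sequence above is in fact Newton's iteration for x² = 2, so it converges very quickly. A short numerical illustration (Python floats standing in for the rationals):

```python
x = 1.0
for _ in range(10):
    x = x / 2 + 1 / x   # x_{n+1} = x_n/2 + 1/x_n

# Every iterate is rational when computed in exact arithmetic
# (1, 3/2, 17/12, 577/408, ...), yet the limit sqrt(2) is not:
# the sequence is Cauchy in Q but has no limit there.
```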
The open interval (0,1), again with the absolute value metric, is not complete either. The sequence (1/2, 1/3, 1/4, 1/5, ...) is Cauchy, but does not have a limit in the space. However the closed interval [0,1] is complete; the sequence above has the limit 0 in this interval.
The space R of real numbers and the space C of complex numbers (with the metric given by the absolute value) are complete, and so is Euclidean space R^n. Other normed vector spaces may or may not be complete; those which are, are the Banach spaces.
If S is an arbitrary set, then the set S^ℕ of all sequences in S becomes a complete metric space if we define the distance between the sequences (x_n) and (y_n) to be 1/N, where N is the smallest index for which x_N is distinct from y_N, or 0 if there is no such index. This space is homeomorphic to the product of a countable number of copies of the discrete space S.
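This metric on sequence space can be sketched directly (illustrative Python, operating on finite prefixes standing in for infinite sequences):

```python
def seq_distance(x, y):
    """Distance 1/N, where N is the smallest (1-based) index at which the
    sequences differ, and 0 if they agree on their whole common prefix."""
    for i, (a, b) in enumerate(zip(x, y), start=1):
        if a != b:
            return 1.0 / i
    return 0.0
```

Sequences agreeing on a long initial segment are close, which is what makes this the product topology of countably many copies of the discrete space S.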
A subspace of a complete space is complete if and only if it is closed.
If X is a set and M is a complete metric space, then the set B(X,M) of all bounded functions f from X to M is a complete metric space. Here we define the distance in B(X,M) in terms of the distance in M as
- d(f,g) = sup { d(f(x), g(x)) : x ∈ X }.
For any metric space M, one can construct a complete metric space M′ (which is also denoted as M with a bar over it), which contains M as a dense subspace. It has the following universal property: if N is any complete metric space and f is any uniformly continuous function from M to N, then there exists a unique uniformly continuous function f′ from M′ to N which extends f. The space M′ is determined up to isometry by this property, and is called the completion of M.
The completion of M can be constructed as a set of equivalence classes of Cauchy sequences in M. For any two Cauchy sequences (xₙ) and (yₙ) in M, we may define their distance as
- d(x,y) = limₙ d(xₙ, yₙ).
(This limit exists because the real numbers are complete.) This is only a pseudometric, not yet a metric, since two different Cauchy sequences may have the distance 0. But "having distance 0" is an equivalence relation on the set of all Cauchy sequences, and the set of equivalence classes is a metric space, the completion of M. The original space is embedded in this space via the identification of an element x of M with the equivalence class of sequences converging to x (i.e. the equivalence class containing the sequence with constant value x). This defines an isometry onto a dense subspace, as required.
Cantor's construction of the real numbers is a special case of this; the real numbers are the completion of the rational numbers using the ordinary absolute value to measure distances. Using the p-adic absolute values on the rationals instead yields other incomplete metric spaces, whose completions are the p-adic numbers.
If this completion procedure is applied to a normed vector space, one obtains a Banach space containing the original space as a dense subspace, and if it is applied to an inner product space, one obtains a Hilbert space containing the original space as a dense subspace.
Topologically complete spaces
Note that completeness is a property of the metric and not of the topology, meaning that a complete metric space can be homeomorphic to a non-complete one. An example is given by the real numbers, which are complete but homeomorphic to the open interval (0,1), which is not complete. Another example is given by the irrational numbers, which are not complete as a subspace of the real numbers but are homeomorphic to ℕ^ℕ (a special case of an example in the Examples above).
In topology one considers topologically complete (or completely metrizable) spaces, spaces for which there exists at least one complete metric inducing the given topology. Completely metrizable spaces can be characterized as those spaces which can be written as an intersection of countably many open subsets of some complete metric space. Since the conclusion of the Baire category theorem is purely topological, it applies to these spaces as well.
It is also possible to define the concept of completeness for uniform spaces using Cauchy nets instead of Cauchy sequences. If every Cauchy net has a limit in X, then X is called complete. One can also construct a completion for an arbitrary uniform space similar to the completion of metric spaces. The most general situation in which Cauchy nets apply is Cauchy spaces; these too have a notion of completeness and completion just like uniform spaces.
As I write this, the engine room is carefully adjusting the ship's ballast. This is an interesting physics problem for all you scholars out there. As we burn fuel, we transform the heavy liquid into a gas exhaust that leaves the confines of the ship. Therefore, we're getting lighter.
As we get lighter, we sit higher in the water, losing some stability. How do you correct for that? Can you think of a way to do it? Well, this is actually a topic I overheard Captain George Silva discussing with Chief Engineer Jeff Little and First Engineer Jim Schubert.
If you guessed that the best way to offset the weight of the fuel lost is to replace it with salt water, you're right! Underneath the lowest decks of the ship are the fuel containers and ballast containers. They run all the way from the bow to the stern and all the way from port to starboard. As you may imagine, it pays to have many tanks of different sizes in many locations. But how do you know when to fill certain tanks?
"Well, there's actually a stability program, that takes into account different factors associated with the stability of the ship," said Jim. There are many things to consider, including how far the ship could tip before it would roll over, bow vs. stern trim, port vs. starboard list, and the comfort of passengers (to prevent seasickness).
In addition to all those, this program is designed specifically for Atlantis. It factors in the location and weight of critical equipment such as the cranes, A-frame, and Alvin. That's why it's also important for scientists to provide weights and locations of any significant equipment they add to the ship. As fuel is burned, the computer program factors in all of these issues and decides which ballast tanks to fill.
"Tanks are either completely filled or empty so as to prevent the water from sloshing and upsetting the balance," Jim continued. As we've been traveling and burning fuel, we've been filling the ballast tanks. So now, we have a fair amount of water and less fuel.
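The bookkeeping the engineers describe boils down to a mass balance: the seawater taken on must weigh the same as the fuel burned. Here is a rough sketch; all densities and volumes are illustrative guesses, not Atlantis's actual figures:

```python
# Mass of ballast water needed to replace burned fuel, so the ship's
# total weight (and hence its position in the water) stays the same.
FUEL_DENSITY = 0.85       # tonnes per cubic metre (typical diesel, assumed)
SEAWATER_DENSITY = 1.025  # tonnes per cubic metre (assumed)

fuel_burned_m3 = 40.0                       # hypothetical volume of fuel burned
fuel_mass = fuel_burned_m3 * FUEL_DENSITY   # tonnes of weight lost
ballast_m3 = fuel_mass / SEAWATER_DENSITY   # seawater volume restoring that weight

print(f"Burned {fuel_mass:.1f} t of fuel; take on {ballast_m3:.1f} m^3 of seawater")
```

Because seawater is denser than fuel, the replacement water occupies less volume than the fuel it offsets.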
How do you keep the tanks from corroding in the salt water? "The ballast tanks are concrete-lined and they have sacrificial zincs," explained Jeff. These zincs are common on ships in places that come in contact with salt water. The idea is that the corrosion takes place on the zinc instead of on the steel.
"It's also important to remember that the fuel and water are in separate tanks, so there is no mixing of fuel and water."
When we come into San Diego, the first order of business will be refueling at a fuel dock. This is an island-like platform with large containers of diesel fuel.
"We can take on 20,000 to 30,000 gallons of fuel per hour comfortably," said Jeff. "So, it'll take us about 4 to 5 hours to completely fill up."
This is no small task, because as the fuel is being added, the ballast tanks must be carefully pumped out. In this way, the ship will remain in the same position in the water. The ballast water has an interesting history.
"We need to exchange our ballast water in the open ocean, outside of 200 nautical miles, according to maritime regulations," explained Jeff.
Can you think why it is not permissible to dump water from one port (say Manzanillo, Mexico) into another port (like San Diego, California)? This is not an easy question, but the answer lies in the microscopic organisms living in the water. Atlantis and other ships that travel between ports could potentially carry organisms to new locations, introducing them to environments where they didn't exist before.
These organisms could thrive in their new environment, become invasive, and out-compete the organisms that already lived there (we call these native species). Since ballast water is one of the primary ways this could happen, Atlantis and other ships are required to empty their ballast tanks and take on new water in the open ocean, which is believed to contain relatively few potentially invasive organisms. There are several other ways to minimize introducing new organisms, but this one seems to work well.
Isn't that fuel just going to sit in the tanks in Atlantis on the dock for a while?
"Right," said Jeff. "We add two things to the fuel. One is a treatment for the reduction of smoke when we burn it. The other is to control biological growth. If you let it sit for too long, this fungus will begin to grow."
Where will this fuel be taking Atlantis? Well, after we are unloaded, safely back ashore, Atlantis will begin preparing for its next adventure to Easter Island, off the coast of South America.
DIVERSITY & ENDEMISM
Although research into the flora of the Horn of Africa is still ongoing, the best possible estimates are that there are about 5,000 species of vascular plants in the region, just over half of which – about 2,750 species – are endemic. There are strong concentrations of endemic species in northern Somalia and in the Socotra Archipelago.
Socotra also has a relatively high level of generic endemism, with 13 of the hotspot's nearly 60 endemic genera confined to the archipelago. Furthermore, the Horn of Africa is home to two endemic plant families: Barbeyaceae, which is represented by a single, relatively widespread evergreen species, Barbeya oleoides, and Dirachmaceae, represented by two threatened species, Dirachma socotrana (VU), on Socotra, and D. somalensis (EN) in central Somalia.
For thousands of years, several native tree species have provided the raw materials for some of the Horn of Africa's most important commodities, including frankincense (from Boswellia sacra in Somalia, Yemen and Oman, and B. frereana in Somalia), myrrh (from the widespread Commiphora myrrha and C. guidottii in Somalia and eastern Ethiopia) and dragon's blood or cinnabar (from Dracaena cinnabari, EN, found on Socotra). All three are gum-resins obtained from these trees. Dragon's blood is used as a medicine and dye. The production of frankincense and myrrh is still a major economic activity in Somalia and, to some extent, in Ethiopia and northern Kenya.
Among the hotspot's other notable plant species is the spectacular cucumber tree (Dendrosicyos socotrana, VU), found only on Socotra, which has a massive water-storing trunk and tendrils on its branches. The daban or Bankoualé palm (Livistona carinensis, VU) is interesting in that the other 30 or so species of Livistona occur in Southeast Asia and Australia. The daban, which is harvested for use in the construction of homes and drainage pipes, is now found only in a few isolated localities in northeastern Somalia, Djibouti and southern Yemen. The Yeheb nut (Cordeauxia edulis, VU), an evergreen shrub or small tree with yellow flowers and edible, highly nourishing seeds, is found in the dry bushlands of eastern Ethiopia and central Somalia, usually in areas of deep sand. It has been touted as a potential food crop for arid areas, but has proven difficult to cultivate.
Hundreds of new species have been discovered in Somalia alone in the last 20 years, most notable among them the Somali cyclamen (Cyclamen somalense). Known only from a small area in northern Somalia, the plant was a surprising discovery in tropical Africa, as the genus Cyclamen is otherwise found only in the Mediterranean region.
Of the 697 bird species regularly recorded in the hotspot, 24 are endemic. Seven of these species are found only in Somalia, including a bushshrike, the Bulo Burti boubou (Laniarius liberatus, CR), which was described (and is still known only) from a single individual that was released (hence the specific name liberatus) after comprehensive study. Another six species are confined entirely to Socotra, including the golden-winged grosbeak (Rhynchostruthus socotranus), the only representative of its genus. Four Endemic Bird Areas, as defined by BirdLife International, fall entirely within the hotspot.
One of the most notable endemic bird species in the hotspot is the Warsangli linnet (Carduelis johannis, EN), locally common in high, steep escarpments along the Gulf of Aden in northern Somalia. Another important flagship species is the Djibouti francolin (Francolinus ochropectus, CR), which is found only in two sites in Djibouti, Forêt de Day, which is thought to be the only viable site for this imperiled species, and the nearby Mabla Mountains.
Nearly 220 mammal species are found in the Horn of Africa, although only about 20 are endemic to the hotspot. The most notable endemics are several antelope species, including the beira (Dorcatragus megalotis, VU), dibatag (Ammodorcas clarkei), Speke's gazelle (Gazella spekei) and silver dikdik (Madoqua piacentinii, VU). The beira is confined to dry and inhospitable hills and mountains of northern Somalia, eastern Ethiopia and Djibouti, where it can survive without water. The slender dibatag, with its characteristic erect tail and long neck, is found in the bushlands of eastern Ethiopia and adjoining lowlands of northern and central Somalia. Both species have suffered from uncontrolled hunting and habitat degradation. The hotspot also has an endemic species of wild ass, the Somali wild ass (Equus africanus somaliensis, CR), while the desert warthog (Phacochoerus aethiopicus), a distinct species from the common warthog (P. africanus), is found mainly in eastern Ethiopia, Somalia and northern Kenya.
Five monotypic mammal genera are endemic to the hotspot, including the aforementioned beira and dibatag, as well as three small mammals: the Somali pygmy gerbil (Microdillus peeli), the ammodile (Ammodillus imbellis, VU) and Speke's pectinator (Pectinator spekei).
The hamadryas or sacred baboon (Papio hamadryas), which was held sacred in ancient Egypt and often mummified, is today endemic to the arid Horn, living on hillsides and escarpments bordering the southern Red Sea and the Gulf of Aden.
The Horn of Africa's highest levels of endemism occur among reptiles, with more than 90 of around 285 species found nowhere else. The hotspot's six endemic reptile genera include Haackgreerius, a monotypic genus of skink found in Somalia, and Aeluroglena, which is represented by a single species of snake, A. cucullata. Half of the endemic genera are restricted to Socotra, including the two Haemodracon gecko species and two snake genera, Ditypophis and Pachycalamus, represented by single species.
Unlike the reptiles, amphibians are relatively poorly represented in the arid Horn, with nearly 30 species recorded, of which at least six are endemic. There is only a single endemic genus, Lanzarana, which is represented by one species, Lanza's frog (L. largeni) of Somalia. Despite suitable habitats, no amphibians are known to exist on Socotra.
There are an estimated 100 species of freshwater fish in the Horn of Africa, about 10 of which are endemic. These endemics include three cave-dwelling species (each the only representative of an endemic genus) found only in Somalia, two of which – the Somalian blind barb (Barbopsis devecchii, VU) and Somalian cavefish (Phreatichthys andruzzii, VU) – are blind. No native freshwater fishes are known with certainty from Socotra, but populations of Aphanius dispar have been introduced to some waters as part of an anti-malaria program.
This tutorial series will run for five consecutive weeks. In this installment, Go expert Mark Summerfield explains how to set up Go, and then he provides two examples of Go programs that are explained in depth. The programs provide a partial overview of Go's key features and some of its key packages. The following weeks' installments will show the remaining key features and dig into many aspects that make Go a uniquely interesting language, especially for C, C++, and Java programmers.
As explained in this week's editorial, Go has many unique features and might be described as C for the 21st century. Given that one of the language's designers is Ken Thompson, the languages do indeed share a common ancestor. —Ed.
Go programs are compiled rather than interpreted. Compilation is very fast, dramatically faster than compilation in some other languages, most notably C and C++.
The standard Go compiler is called gc and its toolchain includes programs such as 8g for compiling, 8l for linking, and godoc for viewing the Go documentation. (These are 6l.exe, and so forth, on Windows.) The strange names follow the Plan 9 operating system's compiler naming conventions, where the digit identifies the processor architecture ("5" for ARM, "6" for AMD64, including Intel 64-bit processors, and "8" for Intel 386). Fortunately, we don't need to concern ourselves with these tools, since Go provides the high-level go build tool that handles the compiling and linking for us.
All the Go code in this article has been tested using gc on Linux, Mac OS X, and Windows using Go 1. The Go developers intend to make all subsequent Go 1.x versions backward compatible with Go 1, so the text and examples here should be valid for the entire 1.x series.
To download and install Go, visit golang.org/doc/install.html, which provides instructions and download links. Go 1 is available in source and binary form for FreeBSD 7+, Linux 2.6+, Mac OS X (Snow Leopard and Lion), and Windows 2000+, in all cases for Intel 32-bit and AMD 64-bit processor architectures. There is also support for Linux on ARM processors. Prebuilt go packages are available for the Ubuntu Linux distribution, and may be available for other Linux distros by the time you read this.
Programs built with gc use a particular calling convention. This means that programs compiled with gc can be linked only to external libraries that use the same calling convention—unless a suitable tool is used to bridge the difference. Go comes with support for using external C code from Go programs in the form of the cgo tool, and at least on Linux and BSD systems, both C and C++ code can be used in Go programs using the SWIG tool.
In addition to gc there is also the gccgo compiler. It is a Go-specific front end to gcc, available for gcc from version 4.6. Like gc, gccgo may be available prebuilt for some Linux distributions. Instructions for building and installing gccgo are given on the main Go website.
The Go Documentation
Go's official website hosts the most up-to-date Go documentation. The “Packages” link provides access to the documentation on all the Go standard library's packages—and to their source code, which can be very helpful when the documentation itself is sparse. The “Commands” link leads to the documentation for the programs distributed with Go (such as the compilers, build tools, etc.). The “Specification” link leads to an accessible, informal, and quite thorough Go language specification. And the “Effective Go” link leads to a document that explains many best practices.
The website also features a sandbox in which small (somewhat limited) Go programs can be written, compiled, and run, all online. This is useful for beginners for checking odd bits of syntax. The Go website's search box searches only the Go documentation; to search for Go resources generally, visit http://go-lang.cat-v.org/go-search.
The Go documentation can also be viewed locally, for example, in a web browser. To do this, run Go's godoc tool with a command-line argument that tells it to operate as a web server. Here's how to do this in a Unix or Windows console, presuming godoc is in your PATH:
$ godoc -http=:8000
The port number used here is arbitrary—simply use a different one if it conflicts with an existing server.
To view the served documentation, open a web browser and give it a location of http://localhost:8000. This will present a page that looks very similar to the golang.org website's front page. The "Packages" link will show the documentation for Go's standard library, plus any third-party packages that have been installed under GOPATH. If GOPATH is defined (e.g., for local programs and packages), a link will appear beside the "Packages" link through which the relevant documentation can be accessed. (The GOROOT and GOPATH environment variables are discussed later in this article.)
Editing, Compiling, and Running
Go programs are written as plain text Unicode using the UTF-8 encoding. Most modern text editors can handle this automatically, and some of the most popular may even have support for Go color syntax highlighting and automatic indentation. If your editor doesn't have Go support, try entering the editor's name in the Go search engine to see if there are suitable add-ons. For editing convenience, all of Go's keywords and operators use ASCII characters; however, Go identifiers can start with any Unicode letter followed by any Unicode letters or digits, so Go programmers can freely use their native language.
To get a feel for how to edit, compile, and run a Go program I'll start with the classic “Hello World” program—although we'll make it a tiny bit more sophisticated than usual.
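The listing below is a minimal sketch in that spirit; it is not the book's exact program, just an assumed stand-in that greets the world, or whatever name is passed on the command line:

```go
// hello.go - greets the world, or the name given as the first argument.
package main

import (
	"fmt"
	"os"
)

// greeting builds the message for the given name.
func greeting(name string) string {
	return "Hello, " + name + "!"
}

func main() {
	name := "World"
	if len(os.Args) > 1 {
		name = os.Args[1]
	}
	fmt.Println(greeting(name))
}
```

Build and run it with go build hello.go (producing an executable), or compile and run in one step with go run hello.go.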
If you have installed Go from a binary package or built it from source and installed it as root or Administrator, you should have at least one environment variable, GOROOT, which contains the path to the Go installation, and your PATH should now include %GOROOT%\bin. To check that Go is installed correctly, enter the following in a console:
$ go version
If you get a "command not found" or "'go' is not recognized..." error message, it means that Go is not in your PATH.
Trees are vital to the existence of life on this planet. It is impossible to imagine life without them: trees play an important role in the food cycle we all depend on.
Our existing forests, and the trees we plant in our lifetimes, work together to make this planet a better place to live and flourish; saving trees and growing more of them is therefore essential for future generations.
Oxygen: no life on this planet can survive without oxygen, so "no trees, no life" is an unavoidable fact. A fully grown, flourishing tree produces as much oxygen in a season as ten average people inhale in a year. A stand of many trees (a forest) thus helps to filter the air we breathe and produces the oxygen life needs to survive.
Cleaning the soil: human civilization flourished once we learned cultivation. Here too trees played a major role, helping to make land fertile so that humans could practice agriculture and grow the food they needed. Trees absorb harmful chemicals and pollutants that have entered the soil; they can store these pollutants or change them into natural fertilizers that make soils more fertile, a process called phytoremediation. Trees also filter sewage and farm chemicals, reduce the effects of animal wastes, and clean water running into rivers.
Trees slow storm water runoff: flash flooding can be reduced by a forest or by planting trees. A tree planted and left to grow wild can intercept more than 1,000 gallons of water annually when fully grown. This slowing of runoff helps recharge underground water-holding aquifers.
Trees absorb carbon dioxide: animals breathe in oxygen and give out carbon dioxide, which in excess is harmful to life and a cause of global warming. Trees keep this in balance by absorbing carbon dioxide, using it to produce their food, and giving out precious oxygen in return.
Trees help to control noise pollution, act as natural windbreaks, and serve as nature's air conditioning by providing us with shade and cool.
To save trees we must act before it is too late, and laws now exist to help. In many countries, such as the United States and the United Kingdom, it is mandatory to carry out tree surveys before town planning and any construction work; these surveys establish the condition of trees and the life that depends on them. You can check http://www.arbtech.co.uk/ to learn what tree surveys are and how they can help save trees and the environment.
The form element contains all the component elements of an HTML online form. Forms are a way of capturing user input whether they be personal details, user preferences or an order for goods or services. This input is entered into the form and then the information is sent, usually over the Internet, for whatever action is required in response to the information given.
<form> tag has attributes which tell the client computer where and how to send the information.
- The action attribute contains the address to which the information in the form needs to be sent and is normally a URI.
- The method attribute contains the method used to send the form information to the address given in the action attribute and can have the values get or post.
Using get, the form data is sent via the address bar and appears as the form action value followed by a question mark followed by the form data.
Using post, the form information is sent in a message containing a list of control name and control value pairs.
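As an illustration (the URL and control names here are invented for the example), a small search form using the get method:

```html
<form action="/search" method="get">
  <input type="text" name="q" />
  <input type="submit" value="Search" />
</form>
```

Submitting this form with "html forms" typed into the text box sends the browser to /search?q=html+forms; changing method to post would instead place the q=html+forms pair in the body of the request.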
- CORE AND LANGUAGE ATTRIBUTES - class, id, title, style, dir, lang
- MOUSE AND KEYBOARD EVENT ATTRIBUTES - onclick, ondblclick, onmousedown, onmouseup, onmouseover, onmouseout, onmousemove, onkeydown, onkeyup, onkeypress
- action - REQUIRED - the address to which the form data needs to be sent.
- method - the method used to send the form data to wherever it needs to go - can have the values get or post.
- enctype - specifies the content type to be used when the form is submitted using the post method.
- accept-charset - specifies the character encoding set or sets of the form input that the processing address can expect.
- accept - gives a list of content types that will be accepted by the form processing address.
- name - gives the form element a name so it can be referred to from a style sheet or found by a script. NOTE: The id attribute should be used to name elements in preference to this attribute.
- onsubmit - defines an event which occurs when the form is submitted.
- onreset - defines an event that occurs when the form is reset.
This study deals with the preparation of substrates suitable for surface-enhanced Raman spectroscopy (SERS) applications by sputtering deposition of a gold layer on polytetrafluorethylene (PTFE) foil. The time of sputtering was investigated with respect to the surface properties. The ability of the PTFE-Au substrates to enhance Raman signals was investigated by immobilization of biphenyl-4,4'-dithiol (BFD) from solutions of various concentrations. BFD was also used for the preparation of sandwich structures with Au or Ag nanoparticles by two different procedures. The results showed that PTFE can be used for the fabrication of an easy-to-handle SERS-active substrate at low cost. This substrate was sufficient for the measurement of the SERS spectrum of BFD even at a concentration of 10⁻⁸ mol/l.
Surface-enhanced Raman scattering (SERS) has great potential as an analytical technique. It is based on the surface enhancement of the Raman signals of a molecule situated on a metal surface, and it is currently used for the detection of various analytes at low concentration. In general, two traditional mechanisms are used to describe the overall SERS effect: the electromagnetic and the chemical [1,2] enhancement mechanism. The electromagnetic mechanism lies in the enhancement of the local electromagnetic field of the incident radiation applied to a molecule adsorbed on, or situated in close proximity to, a rough metal surface. In order to obtain optimal enhancement of the Raman signals of the molecule, it is necessary to use nanostructured surfaces or nanoparticles of noble metals with suitable physical parameters such as size, shape, and degree of aggregation. Many different types of SERS substrates meeting these requirements have been developed, including roughened electrodes [4,5], noble metal colloidal nanoparticles [6,7], silver island films [8,9], metal films over nanostructured surfaces [10,11], acid-etched metal foils, and lithographically produced nanoparticle arrays [13,14]. Plastic substrates are also known. Polymers have commonly been used to improve the mechanical stability of nanoparticles and the reproducibility of signals via embossed surfaces and lithographic techniques [15,17]. Polytetrafluorethylene (PTFE) is a polymer with broad potential applications in microelectronics. Further advantages of this material are its high thermal stability and its low degradation on exposure to a focused laser beam. PTFE foil has great surface roughness, which improves the adhesion of the sputtered gold overlayer and can benefit the electromagnetic mechanism. The gold overlayer can suppress the Raman background signal of the PTFE substrate.
Within the experiments described in this study, we have prepared suitable SERS-active substrates from synthetic polymer foils of PTFE by deposition of Au layers on their surface inside a plasma discharge. The electromagnetic enhancement mechanism was tested on the bare PTFE-Au surface and on sandwich structures. The fabrication of the sandwich structures was realized by incorporating a self-assembled monolayer of dithiols between the PTFE-Au surface and a layer of Au or Ag nanoparticles.
Preparation of gold layer on PTFE foil
The gold layers were sputtered on PTFE foils (2 cm in diameter) with a thickness of 50 μm. The time of sputtering ranged from 10 to 150 s; the deposition parameters were described elsewhere. A microbalance was used for gravimetric determination of the amount of gold sputtered on the polymeric substrate. The continuity of the sputtered gold layer was determined by measuring its resistance with a picoammeter (Figure 1).
Figure 1. Dependence of the thickness of gold layer on time of sputtering (dash line) and resistance values of this gold layer (solid line).
The influence of the time of sputtering (t = 10, 20, 30, 50, 80, 150 s) and of the concentration (c = 10⁻², 10⁻³, 10⁻⁴, 10⁻⁵, 10⁻⁶, 10⁻⁷, 10⁻⁸ mol/l) of the bifunctional compound (biphenyl-4,4'-dithiol) on the intensity of the SERS signals was then studied. For the study of sputtering time, gold layers were modified with biphenyl-4,4'-dithiol in methanol solution (10⁻² mol/l). The PTFE foil with the gold layer was placed into the methanol solution for 12 h. After that the foil was taken out of the solution, washed with pure methanol, and dried in air. The study of the concentration dependence was similar.
Preparation of nanoparticles
Gold nanoparticles (AuNPs) were obtained by citrate reduction of K[AuCl4] as described elsewhere. Silver nanoparticles (AgNPs) were obtained using a similar process of AgNO3 reduction published by Smitha et al. The prepared nanoparticles were characterized by TEM and UV-Vis absorption spectroscopy. UV-Vis absorption spectroscopy was carried out using a Varian spectrophotometer, model Cary 400 SCAN, from 200 to 800 nm. The transmission electron microscopy (TEM) images were recorded using a JEOL microscope, model JEM-1010, with an accelerating voltage of 100 kV.
Preparation of sandwich structures
The sandwich structures were prepared by two procedures. In the first one (Figure 2a), the gold-coated foils were modified with silver or gold nanoparticles previously covered with biphenyl-4,4'-dithiol. 1 ml of nanoparticle suspension was added drop-wise, with intensive stirring, to 1 ml of biphenyl-4,4'-dithiol solution with a concentration of 5 × 10⁻² mol/l. The obtained mixture was purified by centrifugation. The PTFE foil with the gold layer was placed into the nanoparticle solution for 12 h. After that the foil was removed from the solution, washed with pure methanol, and dried in air.
Figure 2. (a) Preparation of sandwich structures using modified nanoparticles. (b) Preparation of sandwich structures using modified gold layer.
In the second procedure (Figure 2b), a PTFE foil with a gold layer modified by biphenyl-4,4'-dithiol was prepared. This modified foil was then placed into 2 ml of the nanoparticle solution for 12 h. After that the foil was removed from the solution, washed with pure methanol, and dried in air.
Raman spectra were measured on a DeltaNu Advantage NIR spectrograph with 785 nm laser excitation at 100 mW, in the range 100 to 2000 cm⁻¹ with a spectral resolution of 4 cm⁻¹. The integration time was 20 s, and the resulting spectra are averages of five measurements. The surface was brought into focus with the NuScope under manual adjustment; the field of view was approximately 800 μm at 100× magnification. All measurements were carried out at two different places on each side of the PTFE foil.
Results and discussion
Properties of prepared gold layers on PTFE foil
The results of the measurements on the prepared gold layers on PTFE foils are shown in Figure 1. The thickness of the gold layer was calculated from the mass difference of the foils before and after the sputtering procedure. It is clear from these data that the thickness is a linear function of sputtering time. The resistance is related to the continuity of the gold layer; for short sputtering times the resistance values are very high and the layer is discontinuous, while for longer times the resistance drops to low values, meaning that the layer becomes continuous.
Preparation of nanoparticles
The prepared, spherically shaped AuNPs, electrostatically stabilized with citrate, had an average diameter of about 15 nm; for the AgNPs it was about 45 nm. The wavelengths of their surface plasmon absorbance maxima (AuNPs at 520 nm and AgNPs at 430 nm) correspond well with the average diameters estimated by TEM [21,23] (Figure 3).
Figure 3. (a) UV-Vis absorption spectra of the AuNPs and TEM image (in excision); (b) UV-Vis absorption spectra of the AgNPs and TEM image (in excision).
SERS measurements on PTFE foils
We chose biphenyl-4,4'-dithiol (BFD; Figure 4a) as the compound for immobilization on the PTFE foil with a gold layer (PTFE-Au) because it can be linked into sandwich structures. In contrast to other commercially available dithiols (e.g., ethane-1,2-dithiol, hexane-1,6-dithiol), BFD has a rigid structure that enforces an orientation on the surface in which binding through both thiol groups is very improbable.
Figure 4. (a) Raman spectrum of pure BFD; (b) SERS spectrum of immobilized BFD (c = 10⁻² mol/l) on the PTFE-Au; (c) SERS spectrum of immobilized BFD (c = 10⁻⁸ mol/l) on the PTFE-Au with AgNP (prepared as in Figure 2b); (d) SERS spectrum of immobilized BFD (c = 10⁻⁸ mol/l) on the PTFE-Au with AuNP (prepared as in Figure 2b); (e) Raman spectrum of pure PTFE.
For evaluating the dependence of the quality of the SERS spectra on sputtering time, we chose the band at 1078 cm⁻¹ because of its high intensity and the ease of baseline correction. The dependence of the area of this signal on sputtering time (Figure 5) showed that the maximal SERS signal intensity was achieved with 30 s of sputtering (Figure 4b). According to Figure 1, at this time the layer is changing from discontinuous to continuous (see the resistance). Accordingly, for shorter sputtering times the gold layer is so discontinuous that the surface enhancement of the Raman signals is very small, while for longer sputtering times the layer is too continuous and lacks the optimal roughness, which again leads to small enhancement. The analytical enhancement factor was calculated from the ratio of the band intensity (1078 cm⁻¹) of pure BFD solution (c = 1 × 10⁻² mol/l) in CHCl3 to that of BFD (c = 1 × 10⁻⁸ mol/l) immobilized on PTFE-Au without and with nanoparticles (Table 1).
Figure 5. SERS spectra of BFD on PTFE-Au for sputtering times of (a) 30 s, (b) 20 s, (c) 50 s, (d) 80 s, (e) 150 s, and (f) 10 s. Spectra are offset along the y-axis.
Table 1. The analytical enhancement factor of the surface for immobilized BFD (calculated for c = 1 × 10⁻⁸ mol/l); sandwich structures were prepared according to Figure 2b
In the second step, we investigated the effects of the BFD solution concentration, the type of metal nanoparticles, and the immobilization procedure on the SERS signal intensities of PTFE-Au prepared by 30 s of sputtering. The results (Table 2) show that the maximum intensity of the selected band was achieved at a BFD concentration of 10⁻⁶ mol/l. The sandwich structure prepared according to the procedure shown in Figure 2b enhanced the signal even at lower concentrations, so we obtained a SERS spectrum of AgNPs covered with BFD even at 10⁻⁸ mol/l (Tables 1, 2; Figure 4c). The signal at 726 cm⁻¹ (spectra 4b, 4c, 4d, and 4e) corresponds to the deformation vibration of the CF2 group of PTFE. The other preparation procedure (Figure 2a) yielded similar spectra, and the influence of the type of metal nanoparticles was negligible, as judged from the identical SERS spectral structure. We propose that this is because the amount of BFD immobilized on the nanoparticles is similar.
Table 2. The dependence of the area of the selected peak at 1078 cm⁻¹ in the SERS spectra on the concentration of BFD
It was found that the enhancement of the Raman signals of BFD is independent of which side of the PTFE foil is measured, owing to the transparency of the foil and the very thin layer of sputtered gold. Furthermore, the reproducibility of the foil preparation is very high, but that of the BFD- and NP-modified foils is lower (RSD = 20%).
In summary, we have demonstrated that a SERS-active substrate with suitable properties can be prepared by sputter deposition of a gold layer on PTFE foil. Such a foil is cheap, easy to handle, and allows measurement from both sides. The optimum sputtering time was found to be 30 s, and the maximum SERS signal intensity was achieved at a BFD concentration of 10⁻⁶ mol/l. Using sandwich structures with nanoparticles, we were able to obtain a signal even at 10⁻⁸ mol/l. This substrate had the highest analytical enhancement factor (6.73 × 10⁶).
BFD: biphenyl-4,4'-dithiol; PTFE: polytetrafluoroethylene; SERS: surface-enhanced Raman spectroscopy; TEM: transmission electron microscopy.
The authors declare that they have no competing interests.
PŽ was responsible for the synthesis and characterization of the nanomaterials (AuNPs, AgNPs and sandwich structures), wrote the manuscript, and participated in the interpretation of the experimental data. PŘ and VP were responsible for recording the SERS spectra and interpreting these data. JS and VŠ carried out part of the preparation and characterization of the PTFE-Au substrates. VK supervised the work and participated in the discussion of results and manuscript revision. All authors read and approved the final manuscript.
The financial support from the Ministry of Education of the Czech Republic MŠMT 6046137307, the GACR Foundation No. 203/09/0675 and GAAV CR Foundation KAN200100801 is gratefully acknowledged.
Rev Mod Phys 1985, 57:783.
Chem Soc Rev 1998, 27:241.
J Raman Spectrosc 2009, 40:183.
J Electroanal Chem 1977, 84:1.
J Am Chem Soc 1977, 99:5215.
Appl Spectrosc 1998, 52:175.
J Phys Chem 1993, 99:2101.
J Phys Chem A 2000, 104:9500.
Litorja M, Haynes LC, Haes JA, Jensen RT, Van Duyne RP: Surface-enhanced Raman scattering detected temperature programmed desorption: optical properties, nanostructure, and stability of silver films over SiO2 nanospheres. J Phys Chem B 2001, 105:6907.
Appl Spectrosc 1989, 43:1325.
Anal Chem 1991, 63:2393.
J Phys Chem B 2003, 107:7426.
J Phys Chem C 2009, 113:17296.
Adv Mater 2008, 20:4862.
Surf Interface Anal 2007, 39:79.
J Chem Phys 2006, 124:8.
J Appl Polym Sci 2006, 99:1698.
Tetrahedron Lett 2008, 49:6448.
Spectrochim Acta A 2008, 71:186.
If 4 million cars were taken off the road in a single year, stopping 9 billion kilograms of carbon dioxide being discharged, most environmentalists would whoop with joy. But what if the same saving came from planting genetically modified crops?
This is the claim of an annual audit of GM crops by the International Service for the Acquisition of Agri-Biotech Applications (ISAAA), which is funded largely by the GM industry.
The audit, published on 18 January, bases its estimate on GM planting in 2005 in the US, Canada and Argentina. Graham Brookes of PG Economics in Dorchester, UK, who supplied the data, says 85 per cent of the savings come from the fact that farmers growing weedkiller-resistant GM crops don't have to plough their fields to get rid of weeds, so organic matter in the soil is not exposed to the atmosphere. This, according to the Intergovernmental Panel on Climate Change ...
Stopping the Earth's rotation
Name: Adam L Chorak and Janette L Gubala
What would happen if the Earth's rotation stopped?
The biggest changes would be the climate. One side would be very hot and
the other very cool and dark. There would be lots of other related changes.
There are very few ways that we could slow down the rotation. The major
forces acting to do this are related to tidal gravitational forces from the
Moon and the Sun. We should not expect this change soon.
Samuel P Bowen
All our toilets would flush straight down instead of swirling? Sorry, I
could not resist. That is like asking what would happen if the Moon
disappeared, or the Sun stopped burning, or fire became cold instead of hot.
What would happen if the Earth's tides and weather went totally bananas?
If the Sun no longer rose and set? If geostationary satellites started
orbiting overhead every 24 hours, I could not watch my MTV!
Update: June 2012
Professor Rodríguez-Iturbe discusses new fish diversity model
Posted May 7, 2008; 12:55 p.m.
This interview is based on a paper published in the journal Nature on May 8, 2008. Read more
Video Closed Captions
I am a hydrologist, which basically means that I deal with the movement
of water in landscapes.
The dynamics of water have a commanding effect on the biodiversity
of fish in river networks.
River networks are fractals. And fractals are this type of mathematical
construct in which the parts and the whole cannot be distinguished
from one another. River networks are fractals. Lightning is fractal.
Trees are fractals. Clouds are fractals.
To the untrained eye they may look very different but there is an
enormous amount of unity in this infinite diversity.
In a river basin the channel network fragments the space. And its character
as a fractal implies, among other things,
that the laws that govern the structure of that network are the same
regardless of whether the river basin is small or large
or is in Venezuela or is in the United States or is in Africa. It is very
different than in a savanna.
In a savanna -- seeds, animals, people move through a space that is
What we have done, really, is a mathematical model implemented in
We merged different sets of existing data from the Mississippi-Missouri
river basin. It is a fantastically simple model.
But it predicts wonderfully well all of the biodiversity characteristics that
we are interested in studying.
From the practical side, it provides us a link to the changes in biodiversity
one can expect from external things like climate change.
Another aspect is the impact that manmade structures like dams will
have in biodiversity.
From the science point of view, river basins are crucial depositories of
biodiversity, of energy resources, and human populations.
We need to understand how the different dynamics that act on them
influence each other.
Anales del Instituto de la Patagonia
Online version ISSN 0718-686X
CARDENAS M, Carlos; JOHNSON G, Erling and CARVALLO B, Rubén. Surface and subglacial topography near the O'Higgins Base Station in the north of the Antarctic Peninsula. Anales Instituto Patagonia (Chile) [online]. 2011, vol.39, n.2, pp. 97-101. ISSN 0718-686X. http://dx.doi.org/10.4067/S0718-686X2011000200008.
In January 2009, Radio Echo Sounding (RES) measurements were made during a campaign in the northern part of the Antarctic Peninsula, in the surroundings of the Chilean Base Bernardo O'Higgins (63° 19' S; 57° 53' W). The system has three main components: a transmitter, a receiver, and a data acquisition system. The transmitting antenna carries radio-frequency signals generated by the transmitter; these penetrate the ice and return to the receiver after reflection from an internal target or the bedrock. The records are stored in a data acquisition system for post-processing purposes. All radar data collected were georeferenced with post-processed GPS measurements. From the radar and GPS data we obtain the surface and subglacial topography.
Keywords: radar; surface; subglacial topography.
Web edition: April 26, 2004
Print edition: May 1, 2004; Vol.165 #18 (p. 287)
I know some people who carefully shield their bodies from the sun with sunscreen and clothing, and their skin is extremely pale. But if tanning acts as a protector ("Sunny Solution: Lotion speeds DNA repair, protects mice from skin cancer," SN: 3/6/04, p. 147: http://www.sciencenews.org/articles/20040306/fob2.asp), is it actually safer to maintain a "healthy" tan?
Beverly Hills, Calif.
Scientists continue to debate this question vigorously. Some say any tanning indicates skin-cell damage, but others disagree. —J. Travis
I would guess that a rock measuring 1 kilometer across, landing near New Zealand 500 years ago, would have done much more than create a tsunami 300 to 500 feet high ("Killer Waves," SN: 3/6/04, p. 152: http://www.sciencenews.org/articles/20040306/bob8.asp). Was the object one km across before encountering Earth's atmosphere?
North Coventry, Pa.
Yes. The object's estimated size is before it hit the atmosphere. The damage it inflicted is indicated by the large crater it left on the ocean floor. —S. Perkins
I can think of a place other than the moon where NASA could develop a closed life-support system for staging rehearsals of manned Mars exploration ("A New Flight Plan: Back to the moon," SN: 3/13/04, p. 170: http://www.sciencenews.org/articles/20040313/bob9.asp). Why not Earth? Advantages would include a protective atmosphere, a day length closer to the Martian sol, bone-and-muscle-friendly gravity, and easy access to mechanical and medical resources. The cost would be much less than that of a moon base, and crew rotation would involve motor vehicles rather than launch vehicles. Why not keep the rehearsal safe and relatively cheap?
Jeffry D. Mueller
"Born to Heal: Screening embryos to treat siblings raises hopes, dilemmas," SN: 3/13/04, p. 168: http://www.sciencenews.org/articles/20040313/bob8.asp) quotes a pediatrician as saying, "we're moving to selection on the basis of a trait that is of no benefit to the child to be born." I disagree. The child to be born would have the benefit of a healthy older sibling. Even saving the parents from the trauma of a dying child is a benefit to the new child.
Jay M. Pasachoff
Narrator: This is Science Today. Recent images of dust storms on Mars are helping astronomers study its changing surface and weather conditions and prepare for NASA's mission to send rovers to explore the planet's surface in 2004. Meanwhile, biochemist Mark Thiemens of the University of California, San Diego has developed a way to chemically interpret the make-up of Martian meteorites. This can help scientists unravel the history of the red planet - including signs of past life.
Thiemens: The meteorites we get and analyze have come from different times in Martian history, so by looking at those, one has sort of a snapshot of what happened over time in the Martian atmosphere.
Narrator: Thiemens says the Mars exploration rovers will provide researchers with new samples.
Thiemens: We can certainly continue in analysis of other of these Martian meteorites that come from different times, but we really need return samples - carefully controlled and from areas where you might really get at the information you need. That you can go down to the precision and determine where your samples come from, rather than random events.
Science Today, I'm Larissa Branin.
The spruces can be recognized as a genus by the sharp, pointed needles, nearly square in cross-section (not strongly flattened) and attached to the twigs by sterigmata.

The terminal branches of mature trees of Picea abies often hang down, giving them a distinctive appearance which, together with the very large cones, makes this an easily recognized species. Smaller trees without cones may be mistaken for

Picea abies is native to Eurasia. It is planted in Wisconsin, mostly around homes, and it occasionally escapes to adjacent sites. It escapes more frequently than is indicated by the map, but is not a problem species.
Promethium: geological information
It appears that no Pm is known to exist in the Earth's crust other than in very small quantities in uranium ores, where it is present as a uranium decay product.
Abundances of promethium in various environments
In this table of abundances, values are given in units of ppb (parts per billion; 1 billion = 10⁹), both in terms of weight and in terms of numbers of atoms. Values for abundances are difficult to determine with certainty, so all values should be treated with some caution, especially for the less common elements. Local concentrations of any element can vary from those given here by an order of magnitude or so, and values in various literature sources for the less common elements do seem to vary considerably.
The chart above shows the log of the abundance (on a parts-per-billion scale) of the elements by atomic number in our Sun. Notice the "sawtooth" effect, where elements with even atomic numbers tend to be more strongly represented than those with odd atomic numbers.
Thermohaline Circulation: The Global Ocean Conveyor
The world has several oceans, the Pacific, the Atlantic, the Indian, the Arctic, and the Southern Ocean. While we have different names for them, they are not really separate. There are not walls between them. Water is able to move freely between oceans. They are all connected in one global ocean.
If you visit a shoreline and watch the ocean, you will see water on the move. Waves crash on the beach, tides move water back and forth twice a day, and longshore and rip currents can transport unobservant swimmers far away from their beach towels. These are some of the small-scale ways that seawater moves. Seawater moves in larger ways too. There is a large-scale pattern to the way that seawater moves around the world ocean. This pattern is driven by changes in water temperature and salinity that change the density of water. It is known as the Global Ocean Conveyor or thermohaline circulation. It affects water at the ocean surface and all the way to the deep ocean. It moves water around the world.
The Global Ocean Conveyor moves water slowly, 10 cm per second at most, but it moves a lot of water: one hundred times the flow of the Amazon River is being transported by this huge, slow circulation pattern. The water moves mainly because of differences in relative density. Water that is more dense sinks below water that is less dense. Two things affect the density of seawater: temperature and salinity.
Cold water is denser than warm water.
- Water gets colder when it loses heat to the atmosphere, especially at high latitudes.
- Water gets warmer when it is heated by incoming solar energy, especially at low latitudes.
Saltier water is denser than less salty water.
- Water gets saltier if rate of evaporation is high.
- Water gets less salty if there is an influx of freshwater either from melting ice or precipitation and runoff from land.
In the Atlantic, the circulation of seawater is driven mainly by temperature differences right now. Water heated near the equator travels at the surface of the ocean north into high latitudes where it loses some heat to the atmosphere (keeping temperatures in Northern Europe and North America relatively mild). The cooled water sinks to the deep ocean and travels the world ocean, possibly not surfacing for hundreds or even as much as a thousand years.
There is concern that as the Arctic warms and more sea ice melts, the influx of freshwater will make the seawater at high latitudes less dense. The less dense water will not be able to sink and circulate throughout the world. This may stop the global ocean conveyor and change the climate of the European and North American continents.
General relativity is a theory which describes space, time and the gravitational field in terms of a Lorentzian metric g. A complete understanding of the gravitational field requires an understanding of the matter sources which generate it. In the Einstein equations, G_{ab} = 8π T_{ab}, the left hand side depends only on g and is a feature of the geometry alone. On the other hand the right hand side, the energy-momentum tensor, depends not only on the metric but also on some matter fields. The right hand side of Einstein's equations seems to have suffered from bad press coverage from an early stage. Einstein himself is often quoted as having said that the left hand side of his equations is made of marble while the right hand side is made of wood. I do not have a source for this quote – if anyone reading this does I would be grateful to hear about it. In this post I want to suggest treating that humble right hand side with more respect. If I lived in a palace made of marble with beautiful wooden furniture then I might be more impressed by the marble than by the wood. I would nevertheless do my best to prevent little boys from carving their initials into the furniture with penknives or the cat (much as I love cats) from using it as an accessory for the care of its claws.
The left hand side of the Einstein equations is universal within general relativity – it is always the same, no matter which type of physical situation is to be described. On the other hand the nature of the matter fields depends very much on what physical situation is to be described and what aspects of it are to be included in the description. It is necessary to make a choice of matter model. What is remarkable is that there is a large variety of choices which, in conjunction with the Einstein equations, lead to a consistent closed system of equations which bears no traces of the fact that other physical effects have been omitted. In fact there are three related choices which have to be made to set up the mathematical model in any given case. The first is the matter fields themselves – what kind of geometrical objects are they? The second is the expression which defines the energy-momentum tensor in terms of the matter fields and the metric. The third is the system of equations of motion which describe the dynamics of the matter. Note that in general the energy-momentum tensor depends explicitly on the metric. It is not possible to define an energy-momentum tensor unless the spacetime geometry is given. The same is true in the case of the equations of motion of the matter. They also contain the metric explicitly. Without the metric even the nature of the matter fields themselves can become ambiguous. Which positions should we choose for the indices of a tensor occurring in the description of the matter fields? From a physical point of view it is clear why the metric is necessary in so many ways. The mathematical model must be given a physical interpretation which involves the consideration of measurements. In the absence of a given geometry there is no way to talk about measurements.
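A standard concrete example of these three choices (not spelled out in the text above) is the perfect fluid. The matter fields are the energy density ρ, the pressure p and the unit timelike four-velocity u^α; the energy-momentum tensor is

```latex
T_{\alpha\beta} = (\rho + p)\,u_\alpha u_\beta + p\,g_{\alpha\beta},
\qquad g_{\alpha\beta}\,u^\alpha u^\beta = -1 .
```

Note that the metric enters explicitly, both in the pressure term and in the normalization of u^α. Splitting the divergence-free condition along and orthogonal to u^α gives the relativistic continuity and Euler equations:

```latex
\nabla^\alpha T_{\alpha\beta} = 0
\;\Longrightarrow\;
\begin{cases}
u^\alpha \nabla_\alpha \rho + (\rho + p)\,\nabla_\alpha u^\alpha = 0,\\[4pt]
(\rho + p)\,u^\alpha \nabla_\alpha u_\beta
  + \bigl(\delta_\beta{}^{\gamma} + u_\beta u^\gamma\bigr)\nabla_\gamma p = 0 .
\end{cases}
```

For a fluid with an equation of state p = p(ρ) these equations are equivalent to the divergence-free property (where ρ + p ≠ 0); for other matter models the equations of motion say strictly more.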
When a matter model has been chosen the basic equations which are to be solved are the Einstein-matter equations, i.e the Einstein equations coupled to the equations of motion for the chosen type of matter. The unknowns are the metric and the basic matter fields. For any reasonable choice of matter fields the energy-momentum tensor has zero divergence as a consequence of the equations of motion. However the equations of motion in general contain more information than the divergence-free property of the energy-momentum tensor. For more discussion of these things together with examples see Chapter 3 of my book. I emphasize that solving the equations describing the physical situation within the given model means solving both the Einstein equations and the equations of motion of the matter. This is too often neglected in the literature. A particular danger occurs when the solutions under consideration are of low regularity. If the Einstein equations do not make sense pointwise then it should be checked that they hold in the sense of distributions. For solutions which lack regularity on a hypersurface this is expressed in the junction conditions and it is common in the literature to check that they hold. The equations of motion should also be satisfied in the sense of distributions and this is often ignored. When I use the phrase ‘in the sense of distributions’ here this is just a shorthand since the equations are nonlinear. The correct statement is that it is necessary to think carefully about the sense in which the equations are satisfied.
An example may help to make the importance of the issue clear. At the GR12 conference in Boulder in 1989 there was a heated discussion of the question of whether colliding plane waves can give rise to spontaneous creation of matter. (I emphasize that this discussion was in a purely classical context. Quantum theory was not being taken into account.) This kind of creation of matter sounds ridiculous from a physical point of view. Nevertheless people exhibited 'solutions' which showed this type of effect. Their mistake was that they had only verified those things which I said above were usual in the literature. They had not considered whether the equations of motion of matter were satisfied. If the equations of motion are ignored, it is not surprising that arbitrary things can happen.
You can estimate your personal carbon footprint, or the carbon footprint for your household, at several Web sites.
Your answer will depend on the size of your home, how much you drive, whether you fly frequently and many other factors.
The answer can also depend (and this won't surprise the parents of teenagers) on how the question is asked.
Climatecrisis.net, the official Web site of the movie "An Inconvenient Truth," asks about travel, about utility bills and about whether any energy used is renewable.
Advocacy group The Nature Conservancy, at nature.org, asks similar questions, but adds questions about how often you eat meat or organic foods and whether recycling and composting is part of your life.
CarbonFootprint.com, a British Web site, has a more comprehensive calculator that lets you figure how much driving or riding a motorcycle or flying by plane affects your footprint. It will even let you calculate the impact of traveling a given distance by train.
According to the site, a trans-continental roundtrip of 6,000 miles by rail would create 1.1 tons of carbon dioxide. The same distance by plane would create 1.5 tons.
WHAT IS THE DIFFERENCE BETWEEN CLIMATE CHANGE AND GLOBAL WARMING?
Climate change is the shift in long-term, global weather patterns due to human action; it’s not exclusive to warming or cooling.
Climate change includes any change resulting from different factors, like deforestation or an increase in greenhouse gases. Global warming is one type of climate change, and it refers to the increasing temperature of the surface of Earth. According to NASA, the term global warming gained popular use after geochemist Wallace Broecker published a 1975 paper titled Climatic Change: Are We on the Brink of a Pronounced Global Warming?
Since 1880, the average surface temperature of the Earth has increased by roughly 0.9 degrees Fahrenheit, but the recent rate of increase is faster than that long-term average, and it varies by region. If you're closer to the equator, temperatures are increasing more slowly. The fastest increase in the United States is in Alaska, where average temperatures have been increasing by more than 3 degrees Fahrenheit per century. A graph of average global temperatures by year is available on the NASA website.
HOW GREENHOUSE GASES RELATE TO CLIMATE CHANGE
Greenhouse gases are those thought to contribute to the greenhouse effect, an overall warming of the Earth as atmospheric gases trap electromagnetic radiation from the sun that would otherwise have been reflected back out into space.
Noteworthy greenhouse gases are methane, nitrous oxide, carbon dioxide, hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), and sulfur hexafluoride (SF6). These gases are thought to affect the climate directly and indirectly, even though they constitute only a small fraction of the blanket of gases that make up the atmosphere.
Currently, the composition of the atmosphere is mostly nitrogen and oxygen, with just 0.033 percent carbon dioxide and all other gases accounting for even less.
WHICH GASES CONTRIBUTE THE MOST?
According to 2010 models cited by NASA, 20 percent of the greenhouse effect is attributed directly to carbon dioxide and 5 percent to all other greenhouse gases. The remaining 75 percent is thought to be due to water vapor and clouds, which are naturally occurring. However, even though carbon dioxide and the other greenhouse gases are such a small percentage of the total gas in the atmosphere, they affect when, where and how clouds form, so greenhouse gases have some bearing on all 100 percent of the greenhouse effect. Carbon dioxide is thought to modulate the overall climate, like an atmospheric thermostat.
Some greenhouse gases are produced in natural processes, like forest fires, animal manure and respiration, or volcanic eruptions. However, the majority of new greenhouse gases are produced from industrial processes and energy production.
The four largest human sources of U.S. greenhouse gases in 2009 were energy, non-fuel use of fossil fuels, natural gas production, and cement manufacture, in descending order. Non-fuel, greenhouse gas-producing applications of fuels include the production of industrial goods like asphalt, lubricants, waxes and other products. Emissions related to cement manufacture happen when limestone (calcium carbonate) is reacted with silica to make clinker, the lumps that are ground to make cement. (Emissions of Greenhouse Gases in the United States 2009: Independent Statistics & Analysis.)
Visual C++ Network: When does 'send()' return?
Q: When does 'send()' return?
A: As indicated in the FAQ "Does one call to 'send()' result in one call to 'recv()'?", there are some subtleties of the 'send()' function as well that need to be understood.
If no error has occurred, 'send()' returns the number of bytes sent, indicating success. However, this does not mean that the data has been received by the receiving side. To be clear, it does not even mean that the data is on the wire. In general, a successful 'send()' simply means that the data has been passed to the lower Winsock layer for processing. Since TCP guarantees delivery of data, it can be assumed that data handed down for processing will be delivered, as long as the connection is not abnormally terminated and there is no out-of-resource condition on the receiving side.
So, when is the data handed over for processing? The data to be sent is always buffered at an intermediate level. This buffer's size is defined by the 'SO_SNDBUF' option setting, which is 8 KB by default. If this buffer is full, or if its size has been set to 0, the application buffer passed to 'send()' will be locked (so that it cannot be paged out) and the 'send()' call will block for as long as the buffer is locked. In the case of an overlapped 'send', the call will instead return with the error 'WSA_IO_PENDING'. Once the application buffer is locked, it is fetched directly by TCP for processing. The blocking 'send()' call then succeeds and, in the case of overlapped sockets, a completion is posted.
Having said the above, one thing to remember is that 'send()' never guarantees that all the data submitted in the buffer will be sent by TCP in one call. Hence, it is important to check the return value of 'send()' to see whether all of the data submitted was sent successfully. If the return value is less than the length of the buffer (passed as the third argument), then 'send()' should be called again, and so on, until all the data has been sent. Note that this does not apply if an overlapped 'send' is being used. In that case, all the data submitted will be sent, which allows the user to post multiple 'send' calls without worrying about one of them completing after only partially sending the data.
Thanks to Mr. Andreas Masur for his help.
You may also attempt to solve it yourself as a mathematical exercise. Using your own senses, you can see that the illustration is a 7x7 grid with three red squares at the positions (3,4), (5,2), and (5,5), depending on how you define your coordinate system. If this drawing were on the $xy$ plane and your camera were on the $z$-axis at a height $h$, pointing at the origin $(0,0,0)$, with the $x$-axis horizontal and the $y$-axis vertical in your projected image, then what would the camera see?
Now think about how the camera views the image looking at it from different positions. You will have to find the six coordinates describing the camera:
its position in space, $x, y, z$
its orientation in space, however you choose to define it
Try to work out the problem. If you have trouble with it, look at Stack Overflow again, or search for more pages about projective transforms, coordinate transforms, 3-D rendering, ray tracing, or even the basics of OpenGL (the Open Graphics Library), which will help you understand the basics of raytracing and projective transformations. Many of those pages present matrix representations of the coordinate transforms, which may help you if you understand matrices. If you don't, try to solve it with separate linear transformation equations.
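As a concrete starting point, here is a minimal sketch of the simplest case: the camera sits at $(0, 0, h)$ looking straight down at the origin, and a pinhole projection maps each grid point on the $z = 0$ plane to image coordinates. The function name, the focal length f, and the choice h = 10 are all illustrative assumptions, not part of the original exercise.

```python
# Pinhole projection of points on the z = 0 plane as seen from a
# camera at (0, 0, h) looking down the z-axis at the origin.
def project(x, y, z, h, f=1.0):
    depth = h - z                     # distance along the viewing axis
    return (f * x / depth, f * y / depth)

# The three red squares at grid positions (3, 4), (5, 2), (5, 5):
for gx, gy in [(3, 4), (5, 2), (5, 5)]:
    u, v = project(gx, gy, 0.0, h=10.0)
    print(f"({gx}, {gy}) -> ({u:.2f}, {v:.2f})")
```

Tilting or moving the camera off-axis brings in the three orientation coordinates discussed above, which is where the matrix form of the transform becomes convenient.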
Why geomagnetic storms affect telegraphs, telephones, power grids and pipelines
The Space Environment Center website has information on solar conditions and the effects of the solar wind, under the headings of Geomagnetic Storms, Solar Radiation and Radio Blackouts. A surprising number of things are affected by solar weather. Solar Radiation is just that: ionizing particles that can be a hazard to high-flying aircraft. Radio Blackouts are apparently the result of an intensification of the lower ionospheric layers that strongly absorb radio waves on the sunlit side. Geomagnetic Storms are fluctuations in the geomagnetic field, which induce electrical currents affecting various activities.
Information from the site told me that geomagnetic fluctuations are reported by a number called Kp, which ranges from 0 to 9 depending on the strength of the fluctuations in the last three-hour period. Kp has to reach about 5 before minor disturbances are possible. The larger the value of Kp, the further south the aurora can be seen. For Kp = 3 or less, the aurora is restricted to Canada, but for Kp = 9, it can be observed further south than Colorado. Numbers G1-G5 are assigned to various intensities of geomagnetic disturbance from minor to extreme, with similar indexes S1-S5 and R1-R5 for the solar and radio disturbances. All these numbers and indices make it seem quite scientific and give people something to talk about, however small the information content. I was unable to find the current values of these indices on the website, but they must be there somewhere.
A steady magnetic field is almost without effect. It turns the compass needle, but that is about it. Most of the geomagnetic field, of about a gauss, is steady and changes very slowly with time. This part is due to currents deep within the earth, and can be described as due to a dipole not quite at the center of the earth and not quite aligned with the axis of rotation. The magnetic "pole" in Canada is a south pole--the flux enters here--and the north poles of compass needles point roughly towards it. The intensity of this field decreases as the cube of the distance from the center of the earth, and interacts with the interplanetary magnetic fields to guide the charged particles of the solar wind as they approach the earth, deflecting them so that they do not reach the surface.
There is a strong electric current in the upper atmosphere circling the magnetic pole called the auroral electrojet, with a strength of millions of amperes. The strength and position of this jet are affected by incoming solar radiation. The variation in these factors causes a change in the magnetic field produced by the electrojet, and this is at least one source of geomagnetic field variations. A change in magnetic field produces an electric field, which permeates all nonconducting materials and penetrates deeply into poorly conducting materials, such as the earth's crust. This means that different points are at different electric potentials. If these points are connected by a conducting path, an electric current will flow.
A telegraph line was a conductor grounded at its ends, a perfect device for detecting earth currents. There may be actual earth currents, if the earth has a finite conductivity, but the currents in the telegraph wire are really not earth currents; they are a current induced in the line itself. Nevertheless, the phenomenon was called "earth currents" and proved bothersome. However, it was often possible to operate a telegraph line on these earth currents, without a battery, quite successfully. Sometimes the voltages were high enough to cause severe damage to the apparatus or danger to the operator. Such high voltages were bypassed by "protectors" that offered a path to ground. These protectors were sometimes called "lightning arrestors" when the high voltages were thought to be due to lightning. Of course, a nearby lightning stroke, or one to a wire, does indeed cause a voltage spike, but the results are often disastrous in this case, and rather rarer than earth currents. The earth currents are greatest at the times of rapid geomagnetic fluctuations, of course. Telephone circuits are also susceptible to these effects, and must be provided with protectors. With a two-wire circuit, it should be noted, the earth currents are common mode. The acronym-lovers call the currents GIC, for geomagnetically induced currents; it really should be geomagnetically induced electric fields.
There is a strong vertical electrical field near the surface of the earth, of some 100V/m, but the very low conductivity of the atmosphere means that the currents are very small. Near thunderstorms, this field can become quite large, eventually causing an electrical discharge as the air breaks down electrically. Electric current is a response to an electric field by a conducting medium.
One can also picture the changing magnetic field directly inducing a current in a wire. This actually requires a magnetic field linking a loop, a fully metallic loop or one with an earth return. In the previous picture, the wire and the earth were merely parts of one conductor in the electric field produced by the magnetic disturbance. It might be interesting to investigate some actual cases to find out the relative importance of the two points of view. There is little doubt that both occur.
These days, geomagnetic storms appear to be a major hazard to power distribution systems. On 13 March 1989, at the peak of the 11-year solar activity (sunspot) cycle 22 that began in 1986, the entire Hydro-Québec power grid went down during a severe geomagnetic storm. The hazards of high winds, ice loading and lightning to transmission lines were well known, but the geomagnetic hazard seems to have been overlooked. It appears that high currents between grounding points caused protective devices to operate, and transformers to fail. The large direct currents through transformers would saturate the cores, and the reduced impedance would allow excessive currents that would burn out the transformers (I deduce that this is the explanation).
A power system that goes totally down is subject to extreme stress on restarting. All thermostatically controlled load is switched on at once, and the load on the system at restoration can be 600% of the normal load.
The author of the paper on the effects of geomagnetic storms on power systems on the website said that the effects are particularly strong in regions underlain by igneous rocks, which have a low electrical conductivity. He says that the "high resistivity causes more of the current to flow through the wires," an explanation that I doubt, even if the effect is real. Canada is largely on a shield of old igneous rock. I think the current in each conductor would depend on its resistance, the potential between grounding points being the same. This may be the old concept of "earth currents" again.
Pipelines are low-resistance paths buried in the ground, and indeed earth currents of hundreds of amperes have been observed in them. Should these currents cause a problem, it would seem easy to prevent it with insulated joints. The pipes are continuously grounded, unless the protective coating of the pipe is insulating. Pipelines are affected by such a number of interesting phenomena from widely different fields (see, for example, Hydrates) that this might make a good topic for a paper.
Composed by J. B. Calvert
Created 25 November 2000
At a toasty 3,700 degrees Fahrenheit, the planet is hot enough to liquefy steel. And there’s not much relief from the scorching heat: Researchers at MIT and other institutions say the planet may lack reflective surfaces such as ice caps, instead absorbing most of the heat from its parent star — much as Earth’s dark oceans trap heat from the sun.
Data from the Spitzer Space Telescope reveals that 55 Cancri e is very dark, and that its sun-facing side is blistering hot. Image: NASA/JPL-Caltech
How Corals Succumb to Sedimentation
Weber, M., de Beer, D., Lott, C., Polerecky, L., Kohls, K., Abed, R.M.M., Ferdelman, T.G. and Fabricius, K.E. 2012. Mechanisms of damage to corals exposed to sedimentation. Proceedings of the National Academy of Sciences USA 109: 10.1073/pnas.1100715109.
In an effort to find the true mechanisms of death by sedimentation, Weber et al. made numerous microsensor measurements in mesocosm experiments and in naturally accumulated sediment on corals. And what did they learn?
The eight researchers - hailing from Australia, Germany, Italy and Oman - found that (1) in organic-rich sediments, pH and oxygen started to decrease as soon as the sediment accumulated on the coral, that (2) "the combination of anoxia and low pH led to colony death within 24 hours," and that (3) "when hydrogen sulfide was added after 12 hours of anoxia and low pH, colonies died after an additional three hours." And based on these observations, they suggest that (4) "sedimentation kills corals through microbial processes triggered by the organic matter in the sediments, namely respiration and presumably fermentation and desulfurylation of products from tissue degradation." Put another way, they say that "first, increased microbial respiration results in reduced O2 and pH, initiating tissue degradation," and that "subsequently, the hydrogen sulfide formed by bacterial decomposition of coral tissue and mucus diffuses to the neighboring tissues, accelerating the spread of colony mortality."
Weber et al. conclude that "the organic enrichment of coastal sediments is a key process in the degradation of coral reefs exposed to terrestrial runoff." And we suggest that striving to mitigate this problem, as well as a number of other localized assaults on reef environments will prove far more effective than focusing on the more nebulous and tenuous global concern of anthropogenic CO2 emissions. Therefore, we say: think locally and act locally, for we all are stewards of our own backyards.
Burke, L., Reytar, K., Spalding, M. and Perry, A. 2011. Reefs at Risk Revisited. World Resources Institute, Washington, D.C.
Howarth, R., Chan, F., Conley, D., Garnier, J., Doney, S.C., Marino, R. and Billen, G. 2011. Coupled biogeochemical cycles: Eutrophication and hypoxia in temperate estuaries and coastal marine ecosystems. Frontiers in Ecology and the Environment 9: 18-26.
Philipp, E. and Fabricius, K. 2003. Photophysiological stress in scleractinian corals in response to short-term sedimentation. Journal of Experimental Marine Biology and Ecology 287: 57-78.
One thing that often confuses new users to the Unix / Linux shell is how to do (even very simple) maths. In most languages, x = x + 1 (or even x++) does exactly what you would expect. The Unix shell is different, however. It doesn't have any built-in mathematical operators for variables. It can do comparisons, but maths isn't supported, not even simple addition.

Following the Unix tradition ("do one thing, and do it well") to the extreme: because the expr and bc utilities can do maths, there is absolutely no need for sh to re-invent the wheel.
Yes, I agree. This is frustrating. If I've got one gripe against shell programming, then this is it.

Addition and Subtraction

So how do we cope? There are basically two ways, depending on whether we choose expr or bc:

#!/bin/sh
echo "Give me a number: "
read x
echo "Give me another number: "
read y
###### Here's where we have the two options:
# The expr method:
exprans=`expr $x + $y`
# The bc method:
bcans=`echo $x + $y | bc`
###### Did you see the difference?
echo "According to expr, $x + $y = $exprans"
echo "According to bc, $x + $y = $bcans"
As you can see, the language is slightly different for the two commands; expr parses an expression passed to it as arguments (expr something function something), whereas bc takes the expression as its input (echo something function something | bc). Also, for expr, you must put spaces around the arguments: "expr 1+2" doesn't work; "expr 1 + 2" works.

Multiplication is a little awkward, too; the * asterisk, which traditionally denotes multiplication, is a special character to the shell (it means "every file in the current directory"), so we have to delimit it with a backslash: "*" becomes "\*".

#!/bin/sh
echo "Give me a number: "
read x
echo "Give me another number: "
read y
###### Here's where we have the two options:
# The expr method:
exprans=`expr $x \* $y`
# The bc method:
bcans=`echo $x \* $y | bc`
###### Did you see the difference?
echo "According to expr, $x * $y = $exprans"
echo "According to bc, $x * $y = $bcans"
The other thing to note here is the backtick (`). This grabs the output of the command it surrounds and passes it back to the caller. So while typing expr 1 + 2 at the command line prints the answer to the screen:

steve@nixshell$ expr 1 + 2
3
steve@nixshell$

enclosing the command in backticks, as in

x=`expr 1 + 2`

sets the variable $x to the output of the command. Therefore, x=`expr 1 + 2` is equivalent to (but of course more flexible than) x=3.

One last thing about assigning values to variables: whitespace MATTERS. Don't put spaces around the = sign. "x = 3" won't work; "x=3" does.
Update: 17 Feb 2007 : Division, and Base Conversion

As noted by Constantin, the "scale=x" function can be useful for defining precision, since bc sometimes seems to downgrade it: "echo 5121 / 1024 | bc" claims that the answer is "5", which isn't quite true (it is 5120/1024 that equals exactly 5), whereas echo "scale = 5 ; 5121 / 1024" | bc produces an answer to 5 decimal places (5.00097).
Another important note I would like to add is that bc is great at converting between bases.

Convert Decimal to Hexadecimal:

steve@nixshell$ bc
obase=16
12345
3039

This tells us that 12345 is represented in hex (base 16) as "0x3039".

Similarly, we can convert back to decimal (in fact, we can use bc to convert any base to any other base):

steve@nixshell$ bc
ibase=16
3039
12345

Or we can convert from binary:

steve@nixshell$ bc
ibase=2
01010110
86
steve@nixshell$

... which tells us that 01010110 (binary) is 86 in decimal. We can get that in hex, like this:

steve@nixshell$ bc
obase=16
ibase=2
01010110
56
steve@nixshell$

Here we tell bc that the input base is 2 (binary) and the output base is 16 (hex). So, 01010110 (base 2) = 56 (hex) = 86 (decimal).

Note that the order matters a lot: if we'd said "ibase=2; obase=16", that would be interpreted differently from "obase=16; ibase=2", because once ibase=2 has been set, the digits of the following "obase=16" are themselves read in base 2.
I hope that this article will help some people out with some of the more frustrating aspects of shell programming. Please, let me know what you think.
This article by Alex Goodwin, age 18, of Madras College, St Andrews, describes how to find the sum of 1 + 22 + 333 + 4444 + ... to n terms.

If a number N is expressed in binary by using only 'ones', what can you say about its square (in binary)?

What is the sum of: 6 + 66 + 666 + 6666 + ... + 666666666...6 where there are n sixes in the last term?

Watch the video to see how to sum the sequence. Can you adapt the method to sum other sequences?

This is an interactivity in which you have to sort into the correct order the steps in the proof of the formula for the sum of a geometric series.

Cellular is an animation that helps you make geometric sequences composed of square cells.

Evaluate these powers of 67. What do you notice? Can you convince someone what the answer would be to (a million sixes followed by a seven) squared?

When is a Fibonacci sequence also a geometric sequence? When the ratio of successive terms is the golden ratio!

In the limit you get the sum of an infinite geometric series. What about an infinite product (1+x)(1+x^2)(1+x^4)... ?

Explore what happens when you draw graphs of quadratic equations with coefficients based on a geometric sequence.

The interval 0 - 1 is marked into halves, quarters, eighths ... etc. Vertical lines are drawn at these points, heights depending on positions. What happens as this process goes on indefinitely?

A circle is inscribed in an equilateral triangle. Smaller circles touch it and the sides of the triangle, the process continuing indefinitely. What is the sum of the areas of all the circles?

If you continue the pattern, can you predict what each of the following areas will be? Try to explain your prediction.

Generalise the sum of a GP by using derivatives to make the coefficients into powers of the natural numbers.

Make a poster using equilateral triangles with sides 27, 9, 3 and 1 units assembled as stage 3 of the Von Koch fractal. Investigate areas & lengths when you repeat a process infinitely often.

What is the total area of the triangles remaining in the nth stage of constructing a Sierpinski Triangle? Work out the dimension of this fractal.

Each week a company produces X units and sells p per cent of its stock. How should the company plan its warehouse space?
Ever wonder how much daylight you can gain or lose just by getting in your car and driving either West or East?
Here’s how to figure it out. The Earth’s circumference is about 25,000 miles (40,000 km) at the equator. So if you start out at sunrise and drive 1,000 miles (1,600 km) Westward during the daylight hours, you’ll get almost an extra hour of daylight. On the other hand, if you go East, you’ll lose that much. 1,000 miles is pretty much the maximum you can go in about 12 hours, and that’s going pretty fast (about 80 mph, or 130 kph).
But there’s a trick to stealing extra daylight.
Get away from the equator. The higher (it’s my northern hemisphere bias; sorry, Aussies!) your latitude is right now, the better you’re going to do. There are two reasons:
- It’s summer in the northern hemisphere, so you start with more daylight hours. This gives you more time to travel, and the farther you go, the more daylight you can steal.
- (And this is the big reason.) The distance to go around the Earth is much shorter.
Seriously. In math-ese, the latitudinal circumference of the Earth is the equatorial circumference times the cosine of your latitude. In a more reasonable format, here’s a table for you:
| Latitude (°) | Circumference (mi) | Circumference (km) | Extra light per 1,000 mi |
|---|---|---|---|
| 0° | 25,000 mi | 40,000 km | 58 minutes |
| 10° | 24,600 mi | 39,400 km | 59 minutes |
| 20° | 23,500 mi | 37,600 km | 1 hour 1 minute |
| 30° | 21,700 mi | 34,600 km | 1 hour 6 minutes |
| 40° | 19,200 mi | 30,600 km | 1 hour 15 minutes |
| 50° | 16,100 mi | 25,700 km | 1 hour 29 minutes |
| 60° | 12,500 mi | 20,000 km | 1 hour 55 minutes |
| 70° | 8,600 mi | 13,700 km | 2 hours 47 minutes |
| 80° | 4,300 mi | 6,900 km | 5 hours 34 minutes |
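The table follows directly from the cosine formula above; a quick sketch (the function name is mine) that reproduces the entries:

```python
import math

def extra_daylight_minutes(latitude_deg, miles_driven=1000, equator_mi=25000):
    """Minutes of daylight gained driving west (or lost driving east)."""
    circumference = equator_mi * math.cos(math.radians(latitude_deg))
    # The Sun sweeps the whole circle of latitude in 24 hours, so the
    # fraction of the circle you cover is the fraction of 24 h you gain.
    return 24 * 60 * miles_driven / circumference

for lat in (0, 30, 60):
    print(f"{lat:2d} deg: {extra_daylight_minutes(lat):.0f} minutes per 1,000 mi")
```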
I’m currently up at a latitude just north of 45° on a road trip, marveling at the fact that going East vs. going West gives me a difference of three hours of daylight. Imagine being up inside the Arctic Circle; it’s actually possible to outrun the Sunset!
And now you, too, can lengthen or shorten your days — at will — just by getting in your car.
We recently wrote about how hydrogen production is a costly endeavor for our water supply, as well as the electric grid, effectively making traditional methods of manufacturing a near-impossibility. But Bruce E. Logan, professor of environmental engineering at Penn State, has developed a technique that could change that.
Logan suggests using microbial fuel cells that run on cellulose to produce the hydrogen from natural processes rather than converting it to ethanol. By using bacteria in a microbial cell with acetic acid (vinegar), electricity, about 0.3 volts worth, was produced. The bacteria consumed the acid, releasing electrons and protons, which were captured by a cathode and anode rig, which allowed for current. When they added 0.2 volts into the mix, hydrogen gas was produced. Admittedly the amounts produced were very small, but the efficiencies here are large and they are quick to point out that "this process produces 288 percent more energy in hydrogen than the electrical energy that is added to the process."
On top of that, they are seeing between 23-56% efficiency at extracting hydrogen from sugar-based crops, which, for a new technology, is impressive given that conventional hydrogen production methods are only at 70% efficiency, with little likelihood of increasing further. Logan is also developing systems to harness bacteria-produced electricity directly from animal wastewater and further using the byproducts to generate even more energy.
Given that the typical hydrogen economy has, until now, been based on massive consumption of (likely) dirty electricity, this new work may actually make hydrogen part of a larger sustainable future.
Image credit Zina Deretsky of the NSF.
Story Via Physorg
The spin of a molecule (orange) changes and deforms the nanotube (black) mounted between two electrodes (gold).
(Figure: C. Grupe/KIT)
Carbon nanotubes and magnetic molecules are considered building blocks of future nanoelectronic systems. Their electric and mechanical properties play an important role. Researchers at Karlsruhe Institute of Technology and French colleagues from Grenoble and Strasbourg have now found a way to combine both components on the atomic level and to build a quantum mechanical system with novel properties. The work is reported in the print version of the journal Nature Nanotechnology (DOI: 10.1038/nnano.2012.258).
In their experiment the researchers used a carbon nanotube that was mounted between two metal electrodes, spanned a distance of about 1 µm, and could vibrate mechanically. Then, they applied an organic molecule with a magnetic spin due to an incorporated metal atom. This spin was oriented in an external magnetic field.
“In this setup, we demonstrated that the vibrations of the tube are influenced directly when the spin flips parallel or antiparallel to the magnetic field,” explains Mario Ruben, head of the working group at KIT. When the spin changes, the resulting recoil is transferred to the carbon nanotube and the latter starts to vibrate. Vibration changes the atomic distances of the tube and, hence, its conductance that is used as a measure of motion.
The strong interaction between a magnetic spin and mechanical vibration opens up interesting applications apart from determining the states of motion of the carbon nanotube. It is proposed to determine the masses of individual molecules and to measure magnetic forces within the nano-regime. Use as a quantum bit in a quantum computer might also be feasible.
According to supplementary information published in the same issue of Nature Nanotechnology, such interactions are of high importance in the quantum world, i.e. in the range of discrete energies and tunnel effects, for the future use of nanoscopic effects in macroscopic applications. Combining spin, vibration, and rotation on the nanoscale in particular may result in entirely new applications and technologies.
.NET offers many useful features like the Base Class Library [BCL], the Just-in-Time compiler [JIT], intermediate code known as MSIL, code access security, and the Garbage Collector, aka GC. GC is responsible for memory management and is credited with the success of the .NET platform, as well as with some really strange stories.
Basics of Memory Allocation
In .NET, value types are stored on the stack and reference types [objects] are stored on the heap. On deeper examination, you will find that pointers to an object are stored on the stack while the actual object memory is allocated on the heap. In .NET, heap-based memory is managed by the GC. The GC frees the programmer from keeping count of elements stored in an array, reference counting, etc., and provides protection from common memory leaks and buffer overrun problems.
Figure 1: Memory Allocation
GC Roots – A brief explanation
GC roots represent memory locations always reachable from the program. There are four types of GC roots: local variables in a method, static variables, managed objects passed to COM, and objects that implement finalizers. Reachability analysis is used periodically to examine GC roots and to mark objects not required in future.

All objects for which we have an active reference in GC roots will be marked as "live", and objects for which GC cannot find a reference will be marked as "ready for collection". Objects marked as ready for collection are removed from memory during the GC sweep process. [GC operates on the "mark and sweep" principle; more on this will be covered in future posts.]
GC divides available memory into different generations, Generation 0, 1 and 2, as explained below.
Figure 2: Managed Memory
Objects always start in Generation 0 of the GC. All objects that survive one cycle of Generation 0 collection get promoted to Generation 1, and all objects that survive one cycle of Generation 1 collection get promoted to Generation 2. A garbage collection of Generation 2 also collects Generations 1 and 0, and is known as a full collection.
GC Mode of operations: Concurrent Vs Synchronous
GC supports a concurrent (workstation) as well as a synchronous (server) mode of operation. Usually, concurrent mode is used in desktop applications and synchronous mode is used in server applications like ASP.NET. In concurrent mode, GC will avoid stopping the application while garbage collection is in progress. In synchronous mode, GC will suspend the application while garbage collection is in progress. The mode in which GC operates has a direct impact on application performance. The CLR supports Workstation GC with concurrent GC off [the default option], Workstation GC with concurrent GC on, and Server GC. The GC mode can be set in the configuration file of the application.
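For example, in a .NET Framework application's app.config, the server and concurrent modes are toggled under the runtime element; a minimal sketch (the enabled values shown are illustrative choices, not defaults for every workload):

```xml
<configuration>
  <runtime>
    <!-- Use server (synchronous) GC; the default is workstation GC. -->
    <gcServer enabled="true"/>
    <!-- Turn concurrent collection off or on for workstation GC. -->
    <gcConcurrent enabled="false"/>
  </runtime>
</configuration>
```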
In future posts, we will delve deeper into the generational logic of the GC and how memory is managed. We will also explore the impact of application-level code on GC behavior.
1. http://www.codeproject.com/KB/showcase/IfOnlyWedUsedANTSProfiler.aspx
2. http://en.wikipedia.org/wiki/Heap_(data_structure)
3. http://msdn.microsoft.com/en-us/magazine/bb985010.aspx
4. http://vineetgupta.spaces.live.com/blog/cns!8DE4BDC896BEE1AD!1104.entry
5. http://www.simple-talk.com/dotnet/.net-framework/understanding-garbage-collection-in-.net/
Tags: .NET, software design
CALL me a converted skeptic. Three years ago I identified problems in previous climate studies that, in my mind, threw doubt on the very existence of global warming. Last year, following an intensive research effort involving a dozen scientists, I concluded that global warming was real and that the prior estimates of the rate of warming were correct. I’m now going a step further: Humans are almost entirely the cause. http://www.nytimes.c...?pagewanted=all
Climate 'Weirdness' Throws Ecosystems 'Out Of Kilter'
Science journalist Michael Lemonick doesn't want to be a doomsday prophet, but he does want to be realistic about the threat of climate change. "Since I started writing about climate change all the way back in 1987, we've known what the cause is, we've known what the likely outcome is, and we've had time to act — and essentially we haven't acted," he tells Fresh Air's Dave Davies.
Lemonick is the co-author of a new book, Global Weirdness: Severe Storms, Deadly Heat Waves, Relentless Drought, Rising Seas, and the Weather of the Future. The book, published by the nonprofit research organization Climate Central, details the effects of climate change and greenhouse gases in ocean acidity, existing ecosystems, disruptions to food supply and rising sea levels. Lemonick says sea level has risen by about eight inches overall worldwide since around 1900, and the waters are expected to rise an estimated three feet by 2100.
"Sometimes we forget that the damage in New Orleans in 2005 from Hurricane Katrina came not from wind or rain, but from the storm surge [that caused flooding] ahead of that storm," Lemonick says. If sea levels rise as expected, "all of those storm surges are going to be starting from a level three feet higher, which means that they have much greater potential to drive inland, to wash over barrier islands, and to really inundate the coast. ... Many, many millions of people and trillions of dollars of infrastructure are in serious danger, if those projections are correct."
On how scientists calculate temperatures from hundreds of thousands of years ago
On carbon dioxide making the oceans more acidic
On how long carbon dioxide stays in the atmosphere
On the effect climate change will have on infectious diseases
On the effect of climate change on animal populations
The truncate() and ftruncate() functions cause the regular file named by path or referenced by fd to be truncated to a size of precisely length bytes.
If the file previously was larger than this size, the extra data is lost.
If the file previously was shorter, it is extended, and
the extended part reads as null bytes ('\0').
The file offset is not changed.
If the size changed, then the st_ctime and st_mtime fields
(respectively, time of last status change and
time of last modification; see stat(2))
for the file are updated,
and the set-user-ID and set-group-ID permission bits may be cleared.
With ftruncate(), the file must be open for writing; with truncate(),
the file must be writable.
On success, zero is returned.
On error, -1 is returned, and errno is set appropriately.
For truncate(), the following errors can occur:

EACCES
Search permission is denied for a component of the path prefix, or the named file is not writable by the user.
EFAULT
The argument path points outside the process's allocated address space.
EFBIG
The argument length is larger than the maximum file size. (XSI)
EINTR
While blocked waiting to complete, the call was interrupted by a signal handler; see signal(7).
EINVAL
The argument length is negative or larger than the maximum file size.
EIO
An I/O error occurred updating the inode.
EISDIR
The named file is a directory.
ELOOP
Too many symbolic links were encountered in translating the pathname.
ENAMETOOLONG
A component of a pathname exceeded 255 characters, or an entire pathname exceeded 1023 characters.
ENOENT
The named file does not exist.
ENOTDIR
A component of the path prefix is not a directory.
EPERM
The underlying file system does not support extending a file beyond its current size.
EROFS
The named file resides on a read-only file system.
ETXTBSY
The file is a pure procedure (shared text) file that is being executed.
For ftruncate() the same errors apply, but instead of things that can be wrong with path, we now have things that can be wrong with the file descriptor, fd:

EBADF
fd is not a valid descriptor.
EBADF or EINVAL
fd is not open for writing.
EINVAL
fd does not reference a regular file.
Conforming to: 4.4BSD, SVr4, POSIX.1-2001 (these calls first appeared in 4.2BSD).
The above description is for XSI-compliant systems.
For non-XSI-compliant systems, the POSIX standard allows
two behaviors for ftruncate() when length exceeds the file length
(note that truncate() is not specified at all in such an environment):
either returning an error, or extending the file.
Like most Unix implementations, Linux follows the XSI requirement
when dealing with native file systems.
However, some nonnative file systems do not permit truncate() and ftruncate()
to be used to extend a file beyond its current length:
a notable example on Linux is VFAT.
Nitrogen in the air
Nitrogen is required by all living organisms for the synthesis of proteins, nucleic acids and other nitrogen-containing compounds. The Earth’s atmosphere contains almost 80% nitrogen gas. It cannot be used in this form by most living organisms until it has been fixed, that is, reduced (combined with hydrogen) to ammonia.
The nitrogen cycle is a series of processes that convert nitrogen gas to organic substances and back to nitrogen in nature. It is a continuous cycle that is maintained by the decomposers and nitrogen bacteria. The nitrogen cycle can be broken down into four types of reaction, and micro-organisms play roles in all of these.
The solar system may be older than we think it is. This claim is based on new studies of rubidium in massive stars.
One way to work out the age of the solar system is to use radioactive dating of the heavy element rubidium in meteorites, assuming that the meteorites started out with the same level of the metal as the early solar nebula, from which our solar system formed. If the amount of rubidium in the solar nebula was higher than previously thought, then we could be underestimating the age of the solar system. Now researchers have found a clue that this might be the case.
Theoretical models predict that rubidium forms in stars between four and eight times as massive as the sun. Until now, though, nobody had seen rubidium in such stars. "Astronomers were extremely frustrated that this hadn't been confirmed," says Pedro García-Hernández at the European ...
So, believe it or not, we have our first named storm of the year, and we still have a few weeks before the official start of tropical season.
Subtropical Storm Andrea was named yesterday (Wednesday, May 9) afternoon, and has since been downgraded to a tropical depression. It is very interesting that they decided to name this subtropical system... Pretty soon, all year will be tropical season...
The difference between extratropical and tropical storm systems is that extratropical storms are cold-core systems, while tropical storms are warm-core.
So, what does this mean:
Extratropical cyclones develop from your average frontal storm systems (caused by temperature gradients), whereas tropical storms develop from rising air motion created over warm waters. A subtropical storm is a combination of the two, or an in-between stage.
Taking a look at Subtropical Storm Andrea (the storm that developed this week), the strongest winds and swells were from the frontal system draped off the coast and its interaction with high pressure in the northeast. As that system sat offshore, a defined center of circulation developed off the southeast coast, right over the warm Gulf Stream waters.