Dataset schema (per-record metadata fields):

  text     large_string   lengths 148 to 17k
  id       large_string   lengths 47 to 47
  score    float64        2.69 to 5.31
  tokens   int64          36 to 7.79k
  format   large_string   13 classes
  topic    large_string   2 classes
  fr_ease  float64        20 to 157
North Pole Environmental Observatory (NPEO) Oceanographic Mooring Data, 2001-2002

This data set, acquired with an oceanographic bottom-anchored mooring, includes sea-ice draft and depth data, conductivity, temperature, pressure, salinity, and ocean current measurements such as speed and direction. Each mooring contains vertically distributed instruments, which measure ocean properties at fixed depths and record data internally. These data are retrieved annually when the mooring is recovered.

Located on the Pole Abyssal Plain about 50 kilometers from the North Pole, the 2001 mooring site was chosen because it provided a suitable landing site for the supporting aircraft. Ocean depth was approximately 4300 m. Data were recorded half-hourly to hourly from 1 April 2001 through 27 April 2002.

The North Pole Environmental Observatory (NPEO) is a year-round, automated scientific observatory, deploying various instruments each April in order to learn how the world's northernmost sea helps regulate global climate. It consists of a set of unmanned scientific platforms that record oceanographic, cryospheric, and atmospheric data throughout the year. More information about the project is available at the project Web site, North Pole Environmental Observatory.

Data are in ASCII text format and are available via FTP.

The following example shows how to cite the use of this data set in a publication. For more information, see our Use and Copyright Web page.

Morison, J., K. Aagaard, R. Moritz, M. McPhee, A. Heiberg, M. Steele, and R. Andersen. 2005. North Pole Environmental Observatory (NPEO) Oceanographic Mooring Data, 2001-2002. [indicate subset used]. Boulder, Colorado USA: National Snow and Ice Data Center.
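Because the files are plain ASCII served over FTP, a download-and-parse pass is straightforward. A minimal Java sketch; the host, path, and whitespace-delimited record layout below are placeholders, not the data set's actual FTP location or format:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class MooringFetch {
    public static void main(String[] args) throws Exception {
        // Placeholder URL -- substitute the data set's real FTP host and path.
        URL url = new URL("ftp://example.org/pub/npeo/mooring_2001.txt");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                // Records are assumed whitespace-delimited; adjust per the data documentation.
                String[] fields = line.trim().split("\\s+");
                System.out.println(fields.length + " fields: " + line);
            }
        }
    }
}
```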
<urn:uuid:85f003ba-56cc-46cc-b6a8-b9178930b1ab>
3.046875
361
Knowledge Article
Science & Tech.
35.500763
Tonight, in the final televised debate ahead of the election, the three main party leaders will talk about the economy, the recession, public sector debt, spending or cuts, and more. All will use statistics to back up their points or to pull apart their opponents' arguments. But how can we work out whether to believe the figures, and what do they really mean?

Did aliens help prehistoric Britons found the ancient Woolworths civilisation? And what does tying your shoe laces have to do with DNA? Find out with this year's popular lectures organised by the London Mathematical Society. Matt Parker of Queen Mary, University of London, will explore how seemingly incredible results can actually be meaningless random patterns, and Dorothy Buck of Imperial College, London, will look at how mathematical knot theory helps to understand DNA.

Being killed in a peacekeeping mission apparently depends on your nationality, at least if you're a soldier in the Spanish army. On 1 February 2010, the Colombian soldier John Felipe Romero, serving in the Spanish army, was killed in a terrorist attack in Afghanistan. It was then made public that, so far, 43% of the Spanish troops killed in attacks by local forces in Afghanistan and Lebanon have been foreigners. This is in striking contrast to the fact that foreign nationals make up only 7% of the Spanish army as a whole.

Researchers from the University of Maryland have devised a new kind of random number generator that is cryptographically secure, inherently private and, most importantly, certified random by the laws of physics. Randomness is important, particularly in the age of the Internet, because it guarantees security. Valuable data and messages can be encrypted using long strings of random numbers to act as "keys", which encode and decode the information. Randomness implies unpredictability, so if the key is truly random, it's next to impossible for an outsider to guess it.

One advantage of the UK voting system is that nobody could possibly fail to understand how it works. However, the disadvantages are well known. Differently sized constituencies mean that the party in government doesn't necessarily have the largest share of the vote. The first-past-the-post system turns the election into a two-horse race, which leaves swathes of the population unrepresented, forces tactical voting, and turns election campaigns into mud-slinging contests. There are many alternative voting systems, but is there a perfect one? The answer, in a mathematical sense, is no.
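The Spanish-army figures are a textbook case for a quick significance check: under the naive assumption that each soldier killed is an independent draw with a 7% chance of being a foreign national, how surprising is a 43% share? The casualty count below is an illustrative guess, not a figure from the article; a Java sketch of the exact binomial tail probability:

```java
public class BinomialTail {
    public static void main(String[] args) {
        int n = 14;        // assumed total deaths (illustrative, NOT from the article)
        int k = 6;         // roughly 43% of 14
        double p = 0.07;   // share of foreign nationals in the army
        double tail = 0.0;
        // P(X >= k) = sum over i = k..n of C(n,i) p^i (1-p)^(n-i)
        for (int i = k; i <= n; i++) {
            tail += choose(n, i) * Math.pow(p, i) * Math.pow(1 - p, n - i);
        }
        System.out.printf("P(X >= %d) = %.6f%n", k, tail);
        // A tiny result (~2e-4 here) says the observed split is very unlikely
        // under the independence assumption -- which is itself questionable.
    }

    static double choose(int n, int k) {
        double c = 1.0;
        for (int i = 1; i <= k; i++) c = c * (n - k + i) / i;
        return c;
    }
}
```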
<urn:uuid:f522fd61-f12e-4e67-981a-fe2d8a7b7180>
2.78125
497
Content Listing
Science & Tech.
37.970647
The Role of Ocean Phenomena

The sea's influence on climate is periodically highlighted by two ocean phenomena that exert dramatic influences on weather patterns across the United States and many other countries. These phenomena are called El Niño and La Niña.

During an El Niño event, which usually lasts about a year and recurs every two to seven years, east-to-west trade winds in the tropical Pacific weaken or reverse direction. The change in winds causes ocean currents to flow eastward, transferring warm water from the western Pacific to the central and eastern Pacific. A low-pressure air mass—the type of air mass in which stormy weather develops—builds over the warm waters of the central and eastern Pacific. This air system carries heavy rainfall to the Pacific coast of South America. At the same time, a high-pressure system forms over the cool western Pacific and may lead to drought conditions in Southeast Asia.

Because the changes in air pressure associated with El Niño disrupt the normal circulation of the atmosphere, weather patterns in other parts of the world are also altered. In the United States, for example, El Niño events usually result in milder winters in the Midwest, heavy rains in the South, and dry conditions in the Pacific Northwest. Meteorologists said the El Niño of 1997-1998 led to severe flooding, landslides, and several deaths in North Carolina, Tennessee, and California.

A La Niña event often develops after an El Niño. La Niña is the climatic opposite of El Niño and occurs when strong trade winds push warm surface water westward, exposing lower cool waters in the east. As a result, a La Niña episode is characterized by cooler-than-normal water in the central and eastern Pacific and warmer-than-normal water in the western Pacific. This situation can lead to severe storms in Southeast Asia and drought in South America. Meteorologists said a La Niña that occurred in 1998-1999 also brought heavy rain and snow to the upper Midwest and the Pacific Northwest.

Oceanographers noted that conditions typical of La Niña continued into 2001, well beyond the one-to-two-year length of a typical La Niña episode. They said this was probably due to the development of a long-term ocean condition called the Pacific Decadal Oscillation. Many scientists believe that this condition, which is characterized by cold waters off the Pacific coasts of North and South America, can last 20 to 30 years and may recur every few decades. They said this cold-water phase might cause harsh winter weather across the Midwest and Northeast for years to come.
<urn:uuid:4e160463-1043-4c1d-b17f-a7319cd0e577>
3.921875
522
Knowledge Article
Science & Tech.
40.855646
Dragon is a free-flying, reusable spacecraft developed by SpaceX under NASA's Commercial Orbital Transportation Services (COTS) program. Initiated internally by SpaceX in 2005, the Dragon spacecraft is made up of a pressurized capsule and an unpressurized trunk used for Earth-to-LEO transport of pressurized cargo, unpressurized cargo, and/or crew members.

In May 2012, SpaceX made history when its Dragon spacecraft became the first commercial vehicle to successfully attach to the International Space Station. Previously, only four governments — the United States, Russia, Japan and the European Space Agency — had achieved this challenging technical feat. SpaceX has now begun regular missions to the Space Station, completing its first official resupply mission in October 2012.

The Dragon spacecraft comprises three main elements: the Nosecone, which protects the vessel and the docking adaptor during ascent; the Spacecraft, which houses the crew and/or pressurized cargo as well as the service section containing avionics, the reaction control system (RCS), parachutes, and other support infrastructure; and the Trunk, which provides for the stowage of unpressurized cargo and supports Dragon's solar arrays and thermal radiators.

In December 2008, NASA announced the selection of SpaceX's Falcon 9 launch vehicle and Dragon spacecraft to resupply the International Space Station (ISS) after the Space Shuttle's retirement. The $1.6 billion contract represents a minimum of 12 flights, with an option to order additional missions for a cumulative total contract value of up to $3.1 billion.

Though designed to address cargo and crew requirements for the ISS, as a free-flying spacecraft Dragon also provides an excellent platform for in-space technology demonstrations and scientific instrument testing. SpaceX is currently manifesting fully commercial, non-ISS Dragon flights under the name "DragonLab". DragonLab represents an emergent capability for in-space experimentation.
<urn:uuid:ed5edb21-79d4-4fca-8142-29210df3cecf>
3.40625
392
Knowledge Article
Science & Tech.
24.872104
28 November - 4 December 2011

Douglas-fir (Pseudotsuga menziesii) forest & clearcut. Credit & Copyright: Bruce G. Marcot, Ph.D.

Explanation: Last week we explored how sea level rise can cause the demise of a coastal forest. This week we are well inland ... in the Cascade Mountains of southern Washington state, USA ... and exploring a different kind of disturbance: conditions on the edge of a forest stand. The above photos -- both stitched panoramas -- are two views of this forest edge. What created this edge is obvious; the private land to the right was very recently clear-cut logged.

What results from this kind of large patch cutting? Look at the top photo, above. It is a striking illustration of how such an abrupt edge can alter microclimate, with deep shade and protection in the forest interior to the far left and full sunlight and exposure to the far right. I recorded a temperature difference of some 8 degrees between the forest interior and the open clear-cut. In ecology, this is called a "high-contrast edge." This has great implications for the kinds of plants that can grow (or regrow), and the insects and vertebrate animals that can occupy each of these kinds of habitats. Many wildlife species closely associated with forest cover, including Northern Spotted Owls, generally will not occur in the clear-cut space for most of their life functions, such as breeding and nesting.

Now look at the bottom photo. It illustrates that, along this kind of high-contrast edge, you typically find a great deal of woody debris on the forest floor ... and it piles up because of forestry practices that remove the merchantable timber but leave the "slash" behind ... and also because of "blow-down" from wind, ice, and storms that hammer the edge of the forest. In fact, the trees left along the edge grew up in the protection of their neighbors in the dense forest ... but once exposed, they are vulnerable to wind breakage because they are not "wind-firm." Watch the brief time-lapse movie I made of how these trees flex and bend even in low wind (in stronger winds, especially under a snow or ice load, these trees may break or blow down, causing the edge to "creep" further into the forest).
<urn:uuid:37bea5eb-c21b-4e96-987f-52576ca9c7f6>
3.71875
577
Nonfiction Writing
Science & Tech.
60.753054
Yes, I do remember. Of course, it's true, but it also depends on how you look at it. If you include the earth in the system, then momentum is conserved, but that's not saying much of use. If a fast car hits a solid wall and comes to a dead stop, there's nothing to be gained by claiming momentum is conserved in the car-earth system. OK, momentum is not conserved, because of how we define our system.

If a car hits a second (stopped) car and the two, fused together, hit a wall, can we say momentum is conserved? In the first collision, yes; in the second collision, and overall, no. What if the cars are in the process of fusing (i.e. the first collision is not over) when the second collision begins, as would be the case if the stopped car were parked an inch from the wall? Momentum is only conserved up until the imposition of an external constraint force. But wait... the frictional force of the tires against the road is an external force. Put the parked car a mile away from the wall and the pair of fused cars will never reach it. Momentum was not conserved in the first place, after all. Now, what about having the parked car butted up against the wall, but have the wall just be a flimsy fence that breaks away? The head swims.

In this model, I say yes, momentum is conserved in slab collisions - more or less - and that's correct dynamics for this configuration. Only 'more or less' because there are connections that consume energy when they break, and that just happens to be in the same small region of time and space as the collision. These connections are then analogous to the breakaway wall in the example above. If the slabs were just suspended in space at equal distances, zero g and without connections, no question the collisions would be between individual slabs, not 'blocks', and momentum is conserved. Add weak connections, not enough to hold the slabs against gravity: momentum not conserved, technically, but in the limit of very weak connections, yes. Make them stronger, until strong enough to arrest immediately in gravity: no conservation at all.

If one takes the collision between slabs as instantaneous, then no distance is traveled during the collision. The bodies emerge from the collision together at the new, reduced velocity. Because there is no displacement, forces of constraint can do no work during the collision; connections are broken after subsequent displacement. With respect to individual bodies, momentum is conserved. In actuality, this simulator uses a thin skin around bodies to allow some interpenetration and finite collision distances, which is more realistic than a purely analytical computation based on an idealized assumption. Also, the connections break over a small but measurable distance at the time of collision, so, practically speaking... momentum is not conserved. Hahaha, isn't this grand?

This is what matters: while a connection survives, it transmits force to the next solid member, and so on. Once broken, in this discrete model, transmitted/resistive force due to connections goes to zero. Slab inertia remains. An upper block may have X times the KE necessary to fail the first connection below, but it also has to accelerate the topmost lower slab to the upper block speed very quickly or the differential displacement will exceed the connection limit for the bottom of Zone C. We know there's enough energy to do so. The inelastic collision between the interface slabs only brings the fused pair of slabs to half the Zone C speed.
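A quick worked check of that half-speed figure, assuming two identical interface slabs of mass m, one moving at speed v and one at rest, merging perfectly inelastically:

$$m v = (m + m)\,v' \quad\Rightarrow\quad v' = \frac{v}{2}$$

$$\frac{KE_{\text{after}}}{KE_{\text{before}}} = \frac{\tfrac{1}{2}(2m)(v/2)^2}{\tfrac{1}{2} m v^2} = \frac{1}{2}$$

Momentum is conserved across the merge while half the kinetic energy disappears into deformation; that lost half is what's available to break connections.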
Zone C can only be slowed by upward force applied through the surviving lowest connection, so it is a mistake to take the collision as happening between all the slabs of Zone C and the one slab below. This same lowest connection applies downward force from Zone C motion to the pair of slabs in contact, and their inertia provides a reaction force. If the newly formed Zone B, now consisting of one slab, and Zone C do not reach a common speed right away - by virtue of force transmitted through the single lowest connection of Zone C - that connection fails and Zone B then has two members. Zone C is still moving faster than Zone B. Crushing up.

How can the first slab of Zone B (formerly the top of Zone A) receive any more impulse from Zone C than the connection strength can provide before failure? Well, it can't. So it really doesn't matter if the upper block has a million times the KE needed to fail one connection; the limit to the force it can apply is the connection strength. Hence, simulations with varying connection strength (stronger at the bottom) will fail the lowest connection of Zone C under most circumstances, and crush up whether or not they continue to crush down.

This model, conceptually simple with its 'incompressible' inelastic slabs separated by empty space, has characteristics far from everyday experience with materials and structure. The simplicity allows exploration of collapse dynamics without too many factors muddying the waters, but is correspondingly limited in its output and application. It is arguably closer to reality than a continuous, uniform mass distribution, though, so its lessons are worth considering. This is as close to the blocks being rigid bodies as one can get without DEFINING them to be rigid, and they're just not rigid. Only the slabs are.

Major_Tom wrote: The thread is great. Thanks.

Excellent question.
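For anyone who wants to play with the momentum bookkeeping, here is a minimal Java sketch of the slab picture. It deliberately omits connection strength (the whole point above being that connections cap the transmitted force) and models only gravity plus perfectly inelastic slab-on-slab merges; the slab count, gap, and starting block size are illustrative numbers, not taken from any real structure:

```java
public class SlabCascade {
    public static void main(String[] args) {
        double g = 9.8;    // gravitational acceleration, m/s^2
        double gap = 3.7;  // empty space between slabs, m (illustrative)
        int total = 20;    // total number of identical slabs (illustrative)
        int moving = 3;    // slabs in the falling upper block

        // Speed after the upper block free-falls through the first gap.
        double v = Math.sqrt(2 * g * gap);

        for (int hits = moving; hits < total; hits++) {
            // Perfectly inelastic merge with the next stationary slab:
            // momentum conserved, kinetic energy not (connections ignored here).
            v = moving * v / (moving + 1);
            moving++;
            // The enlarged block re-accelerates through the next gap.
            v = Math.sqrt(v * v + 2 * g * gap);
            System.out.printf("slabs moving: %2d  speed: %5.2f m/s%n", moving, v);
        }
    }
}
```

Adding a connection-strength rule at each merge is the obvious next step, and is where the crush-up behavior discussed above would appear.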
<urn:uuid:d98c1743-1e4b-4fc9-9cb8-a2dee680c598>
2.859375
1,141
Comment Section
Science & Tech.
52.989199
There have been some big promises made by governments when it comes to the concept of the wind farm. The wind turbine has been touted as a way to promote clean energy, but in fact these devices have a history that goes back to the days when they were used as mechanical devices to turn machinery. Of course, today's market for this green technology has seen some interesting turns that weren't heard of years ago.

There have even been recent software programs released that allow interested parties to develop their own wind farms in the cyberworld and test the hypothetical possibilities before they launch in reality. The software clearly shows how far the industry has come in just a short while, and it boasts the ability to test a number of different wind turbine factors such as Load Flow and Harmonic Analysis. The need for this software parallels the growing industry where the wind farm is concerned, and as a result of the detailed analysis that can be gained here, different locations can be picked for the right set-ups.

The industry is always moving forward with more innovations, and one of the latest concerns the blades that are used and how they can better adjust themselves in varying wind conditions. Syracuse University has researchers looking at many new innovations that will help these blades work more efficiently. First, the general data from the wind flowing over the blades is recorded using what is called an intelligent controller. By reducing both noise and vibration on the blades, this research is helping industry move toward more cost-efficient green technology. There are also tests underway where scientists are experimenting with different angles of the blades to increase efficiency.

A full wind turbine comprises fifteen different parts, and capacity is growing alongside the technology. For example, outputs will reach 447 GW in the next five years, and within the next two years Asia will lead the world in wind energy.

Finally, all this might make you think that wind energy is on the cutting edge and has only been around for a few years, but nothing could be further from the truth. There are records of the predecessors of wind energy being mentioned as far back as 1838. In fact, one academic from Cambridge notes that the history of the wind turbine goes back even farther, with records of 10,000 of these units in use during the 1800s. There are other early records of wind energy being used even further back in China, and it's thought that an early predecessor of those traveled to Europe by the end of the 12th century. There's a saying, "Everything old is new again", and that cannot be more applicable than with today's wind turbine.
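Blade-efficiency work like this ultimately moves one number: the power coefficient Cp in the standard wind-power relation P = ½ρAv³Cp, which theory caps at the Betz limit of about 0.593. A small illustrative Java sketch; the rotor size, wind speed, and Cp below are made-up round numbers, not figures from this article:

```java
public class WindPower {
    public static void main(String[] args) {
        double rho = 1.225;    // air density at sea level, kg/m^3
        double radius = 40.0;  // rotor radius, m (illustrative)
        double v = 10.0;       // wind speed, m/s (illustrative)
        double cp = 0.45;      // power coefficient; Betz limit is ~0.593

        double area = Math.PI * radius * radius;          // swept area, m^2
        double watts = 0.5 * rho * area * Math.pow(v, 3) * cp;
        System.out.printf("Power: %.2f MW%n", watts / 1e6); // ~1.39 MW here
    }
}
```

The cubic dependence on wind speed is why siting analysis (the "different locations" the software helps pick) matters so much: a 25% faster wind nearly doubles the available power.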
<urn:uuid:ed8759a0-da39-4469-a5e1-b7ee12693c86>
3.15625
546
Personal Blog
Science & Tech.
40.574022
Electron Correlation in Iron-Based Superconductors

In 2008, the discovery of iron-based superconductors stimulated a worldwide burst of activity, leading to about two preprints per day ever since. With a maximum superconducting transition temperature (so far) of 55 K, it is natural to wonder whether studying the new materials will help uncover one of the deepest mysteries in modern physics—the mechanism of superconductivity in the copper-based "high-temperature superconductors." One clue lies in whether the electrons in the new superconductors are as highly correlated as they are in the high-temperature superconductors. A truly international North American/European/Asian collaboration working at the ALS has now reported results from a combination of x-ray absorption spectroscopy, resonant inelastic x-ray scattering, and systematic theoretical simulations of iron-based superconductors. The team was able to settle the correlations debate by showing that electrons in the iron-based families that were studied favor itinerant (delocalized) states with only moderate correlations.

The iron-based compounds are called pnictides because they contain a pnictogen, an element from the nitrogen group of the periodic table. Both iron- and copper-based (cuprate) superconductors are layered compounds, and it is believed that the 3d electrons in the respective transition-metal layers play key roles in superconductivity. However, iron, together with cobalt and nickel among the 3d metals, is an archetypal ferromagnetic metal. The magnetic moments are mostly in the 3d bands, and naively one would expect the well-aligned spins of iron's 3d electrons to prevent superconductivity, which requires pairs of electrons with opposite spin directions. Another crucial difference is that all the iron 3d orbitals contribute charge carriers, while for cuprates a single half-filled band dominates the important physics. While a half-filled band signifies a metal in conventional band theory, the electrons in cuprates are also strongly correlated, owing to the strong Coulomb interaction, which prevents two electrons from occupying the same site, resulting in a so-called Mott insulator. The lack of information on the strength of electron correlation in the iron pnictides has blocked the way toward a consensus on the minimal model needed to describe the electron pairing mechanism in these materials. Theoretical results have been controversial.

To explore this issue, the collaboration performed x-ray absorption spectroscopy (XAS) and resonant inelastic x-ray scattering (RIXS) measurements at ALS Beamline 8.0.1, where they investigated five iron-containing materials, including an iron pnictide superconductor (SmO0.85FeAs) with a record-high 55-K transition temperature, two non-superconducting iron pnictides (BaFe2As2 and LaFe2P2), and, for comparison, iron metal and an Fe2O3 insulator. The spectra of the iron pnictides exhibited qualitative, and in some cases quantitative, similarities to those of iron metal but showed no features resembling the multiple-peak structures seen in iron-based insulators. Furthermore, a RIXS study across the resonant XAS edges demonstrated that the resonance spectra are dominated by the nonresonant "normal fluorescence," with no excitation peaks observed. The team interpreted these results as showing the importance of iron metallicity and strong covalency in these new iron-based superconductors.
The team then turned to a systematic theoretical study to simulate the experimental results and, more importantly, to pin down the upper limit of electron correlation in the new superconductors. They first performed calculations based on a Hubbard model (the simplest model in solid-state physics of interacting particles on a lattice). The calculations suggested a relatively minor role for correlations in the iron pnictides. Subsequently, the comparison between different theoretical models and experimental data indicated that, instead of localized states due to strong electron interactions, electrons in iron pnictides prefer itinerant states with moderate correlation strength. These results will help lead physicists to the mechanism of superconductivity in iron pnictides and perhaps also to their optimization for technological applications.

Research conducted by W.L. Yang, J. Denlinger, and Z. Hussain (ALS); A.P. Sorini, B. Moritz, and W.-S. Lee (SLAC National Accelerator Laboratory); C.-C. Chen, J.-H. Chu, J.G. Analytis, I.R. Fisher, Z.-X. Shen, and T.P. Devereaux (SLAC National Accelerator Laboratory and Stanford University); F. Vernay and B. Delley (Paul Scherrer Institut, Switzerland); P. Olalde-Velasco (ALS and Instituto de Ciencias Nucleares, Mexico); Z.A. Ren, J. Yang, W. Lu, and Z.X. Zhao (National Laboratory for Superconductivity, China); and J. van den Brink (SLAC National Accelerator Laboratory and Leiden University, The Netherlands).

Research funding: U.S. Department of Energy (DOE), Office of Basic Energy Sciences (BES); Stichting voor Fundamenteel Onderzoek der Materie (FOM), the Netherlands; and Consejo Nacional de Ciencia y Tecnología (CONACyT), Mexico. Operation of the ALS is supported by BES.

Publication about this research: W.L. Yang, A.P. Sorini, C.-C. Chen, B. Moritz, W.-S. Lee, F. Vernay, P. Olalde-Velasco, J.D. Denlinger, B. Delley, J.-H. Chu, J.G. Analytis, I.R. Fisher, Z.A. Ren, J. Yang, W. Lu, Z.X. Zhao, J. van den Brink, Z. Hussain, Z.-X. Shen, and T.P. Devereaux, "Evidence for weak electronic correlations in iron pnictides," Phys. Rev. B 80, 014508 (2009).
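For readers who have not met it, the single-band Hubbard Hamiltonian referenced above has the standard form (the team's pnictide calculations use multi-orbital generalizations, but this captures the competition at issue):

$$H = -t \sum_{\langle i,j\rangle,\sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right) + U \sum_{i} n_{i\uparrow} n_{i\downarrow}$$

The hopping amplitude t favors itinerant electrons, while the on-site repulsion U favors localization; "weak correlations," as reported here, means U/t is small enough that the itinerant tendency wins, in contrast to the Mott-insulating cuprate parent compounds.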
<urn:uuid:dcb48b53-8f11-473f-9410-d93b57630d62>
3.203125
1,313
Knowledge Article
Science & Tech.
37.166312
One of the brightest supernovas in recent years has just been recorded in the nearby Whirlpool galaxy (M51). Surprisingly, a seemingly similar supernova was recorded in M51 during 2005, following yet another one that occurred in 1994. Three supernovas in 17 years is a lot for a single galaxy, and the reasons for the supernova surge in M51 are being debated. Pictured above are two images of M51 taken with a small telescope: one taken on May 30 that does not show the supernova, and one taken on June 2 which does. The June 2 image is one of the first images reported to contain the supernova. The images are blinked to show the location of the supernova.

Although most supernovas follow classic brightness patterns, the precise brightening and dimming pattern of this or any supernova is hard to predict in advance and can tell astronomers much about what is happening. The M51 supernova, designated SN 2011dh, is still bright enough to follow with a small telescope. Therefore, sky enthusiasts are encouraged to image the supernova as often as possible to fill in time gaps left by intermittent observations made by the world's most powerful telescopes. Views of the developing supernova are being

Credit & Copyright: Stephane Lamotte Bailey, Marc Deldem, &
<urn:uuid:be08e219-d988-4ab2-8ac1-5836fcf7a6d7>
3.03125
279
Content Listing
Science & Tech.
34.496808
The Color of Ancient Ink

Generally, animal tissue, made up mostly of protein, degrades quickly. Over the course of millions of years, the only traces an animal leaves behind are likely skeletal remains or an impression of the shape of the animal in surrounding rock. Scientists can learn much about an animal from its bones and impressions, but without organic matter they are left with many unanswered questions.

But melanin is an exception. Though organic, it resists degradation over vast amounts of time. "Out of all of the organic pigments in living systems, melanin has the highest odds of being found in the fossil record," says John Simon of the University of Virginia. "That attribute also makes it a challenge to study."

Simon and his colleagues used cutting-edge techniques to study the melanin from 160-million-year-old cephalopod ink sacs. "We had to use innovative methods from chemistry, biology and physics to isolate the melanin from the inorganic material." The researchers weren't even sure the melanin was still inside the small, inch-long sacs. They used a combination of direct, high-resolution chemical techniques to determine whether or not the melanin had been preserved. When they found melanin present, they then compared its chemical composition to the melanin in the ink of modern cuttlefish and found a match.

The researchers were somewhat amazed that the ink has changed so little over millions of years. "It's close enough that I would argue that the pigmentation in this class of animals has not evolved in 160 million years," Simon explains. "The whole machinery apparently has been locked in time and passed down through succeeding generations of cuttlefish."

As Simon tells National Geographic News: "As far as we can tell by everything we've thrown at it, the [ancient] ink is indistinguishable from modern ink ... it's a pretty good defense mechanism."

The scientists hope to use similar techniques to color in more of ancient Earth's organisms. Their current research is published in this week's Proceedings of the National Academy of Sciences.

Image of ancient ink sac: University of Virginia
<urn:uuid:40642c59-82b0-44b2-bda9-40bb1e4bcd6e>
4.0625
465
Knowledge Article
Science & Tech.
40.611679
Belowground Carbon Storage in a Grassland Community

Adair, E.C., Reich, P.B., Hobbie, S.E. and Knops, J.M.H. 2009. Interactive effects of time, CO2, N, and diversity on total belowground carbon allocation and ecosystem carbon storage in a grassland community. Ecosystems 12: 1037-1052.

Results indicated that annual total belowground carbon allocation (TBCA) increased in response to all three treatment variables - "elevated CO2, enriched N, and increasing diversity" - and that it was also "positively related to standing root biomass." Upon removing the influence of root biomass, however, the authors found that the effects of N and diversity became neutral or even negative (depending on the year), but that "the effect of elevated CO2 remained positive." In years with fire, on the other hand, they found that "greater litter production in high diversity, elevated CO2, and enhanced N treatments increased annual ecosystem C loss." Given these findings, under normal non-fire conditions, elevated CO2, N and biodiversity generally tend to increase ecosystem carbon gain; but if grasslands are frequently burned, they could actually remain neutral in this regard.
<urn:uuid:1cc27fb9-465a-46b7-8117-c5dc75470bff>
2.765625
252
Academic Writing
Science & Tech.
44.403799
I'm working on a program that will take an input (i.e: "This is a test") and output it scrambled, but with the first and last letter of each word kept in the same place (i.e: "Tihs is a tset"). I know that I should first separate the input string into individual string values. I'm aware of using input.split(), but I'm not sure how to assign each word to its own name (i.e: word1, word2, word3, etc...). Any help would be appreciated!

Re: Separating Strings

Strings can be concatenated. In other words, you can add Strings with a + sign. e.g., String foo = "foo" + "bar" will result in "foobar". With this knowledge and a for loop, I'll bet you can figure this out. Note: in the future there will be times when you'll want to use a StringBuilder to do this sort of concatenation, since it is more efficient at it, but for your simple needs the benefit is less than minimal, and so I suggest you keep things as simple as possible and just use plain String concatenation.
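To make the hint concrete: there's no need to give each word its own variable name; split() returns an array you can loop over directly. A minimal sketch (class and method names are mine, and this version shuffles the interior letters, leaving words of three or fewer characters alone):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class Scrambler {
    public static void main(String[] args) {
        System.out.println(scramble("This is a test"));
    }

    static String scramble(String input) {
        String result = "";
        for (String word : input.split(" ")) {   // no word1, word2... needed
            if (word.length() <= 3) {
                result = result + word + " ";    // too short to scramble
                continue;
            }
            // Keep the first and last characters; shuffle the interior.
            List<Character> middle = new ArrayList<>();
            for (char c : word.substring(1, word.length() - 1).toCharArray()) {
                middle.add(c);
            }
            Collections.shuffle(middle);
            String scrambled = "" + word.charAt(0);
            for (char c : middle) {
                scrambled += c;
            }
            scrambled += word.charAt(word.length() - 1);
            result = result + scrambled + " ";
        }
        return result.trim();
    }
}
```

Running it might print "Tihs is a tset"; the shuffle is random, so the interior ordering varies from run to run.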
<urn:uuid:bd7b149e-9444-4a10-bc35-6be7c054893b>
3.21875
253
Comment Section
Software Dev.
82.210294
A new study has found that there are vastly more geothermal power resources in the US than previously considered. http://www.tgdaily.com/sustainability-f ... tudy-shows

Quotations from this article:

"The US has the geothermal resources to produce ten times as much power as the current installed capacity of coal plants, Google-sponsored research shows. In the past, geothermal production in the US has been restricted largely to the western third of the country, in tectonically active locations such as the Geysers Field north of San Francisco. But Southern Methodist University says it's confirmed that the country has vast geothermal reserves that are realistically accessible using current technology, particularly in the eastern two-thirds of the country."

This could potentially address both the CO2 released into the atmosphere by coal and petroleum use here in the US, which contributes to increasing global temperatures, and the environmental risks of 'fracking'. Of course, they can always 'screw it up', but one can always hope. We have Google to thank for sponsoring this research.
<urn:uuid:33c89db7-2e14-418e-b4a3-1d53aca58fa7>
2.96875
229
Comment Section
Science & Tech.
41.115122
For millennia, climatic fluctuations have sent stressed-out species to extinction and promoted rapid evolutionary change in those left behind. In Extreme Environmental Change and Evolution, Ary Hoffmann and Peter Parsons explore in detail how environmental extremes can expose natural populations to the hard glare of natural selection. For an academic text, a remarkably stress-free read. Published by Cambridge UP, £19.95, ISBN 0521446597.
<urn:uuid:200a2c2c-fbdd-4703-b2b9-35b488899ecb>
3.09375
110
Truncated
Science & Tech.
30.286471
Horizontally Launched Projectile Problems

One of the powers of physics is its ability to use physics principles to make predictions about the final outcome of a moving object. Such predictions are made through the application of physical principles and mathematical formulas to a given set of initial conditions. In the case of projectiles, a student of physics can use information about the initial velocity and position of a projectile to predict such things as how much time the projectile is in the air and how far the projectile will go. The physical principles that must be applied are those discussed previously in Lesson 2. The mathematical formulas that are used are commonly referred to as kinematic equations. Combining the two allows one to make predictions concerning the motion of a projectile. In a typical physics class, the predictive ability of the principles and formulas is most often demonstrated in word story problems known as projectile problems.

There are two basic types of projectile problems that we will discuss in this course. While the general principles are the same for each type of problem, the approach will vary because the problems differ in terms of their initial conditions. The two types of problems are:

1. A projectile is launched with an initial horizontal velocity from an elevated position and follows a parabolic path to the ground. Predictable unknowns include the initial speed of the projectile, the initial height of the projectile, the time of flight, and the horizontal distance of the projectile.

- A pool ball leaves a 0.60-meter high table with an initial horizontal velocity of 2.4 m/s. Predict the time required for the pool ball to fall to the ground and the horizontal distance between the table's edge and the ball's landing location.
- A soccer ball is kicked horizontally off a 22.0-meter high hill and lands a distance of 35.0 meters from the edge of the hill. Determine the initial horizontal velocity of the soccer ball.

2. A projectile is launched at an angle to the horizontal and rises upwards to a peak while moving horizontally. Upon reaching the peak, the projectile falls with a motion that is symmetrical to its path upwards to the peak. Predictable unknowns include the time of flight, the horizontal range, and the height of the projectile when it is at its peak.

- A football is kicked with an initial velocity of 25 m/s at an angle of 45 degrees with the horizontal. Determine the time of flight, the horizontal distance, and the peak height of the football.
- A long jumper leaves the ground with an initial velocity of 12 m/s at an angle of 28 degrees above the horizontal. Determine the time of flight, the horizontal distance, and the peak height of the long jumper.

The second problem type will be the subject of the next part of Lesson 2. In this part of Lesson 2, we will focus on the first type of problem - sometimes referred to as horizontally launched projectile problems. Three common kinematic equations that will be used for both types of problems are:

d = vi*t + 0.5*a*t^2
vf = vi + a*t
vf^2 = vi^2 + 2*a*d

Equations for the Horizontal Motion of a Projectile

The above equations work well for motion in one dimension, but a projectile is usually moving in two dimensions - both horizontally and vertically. Since these two components of motion are independent of each other, two distinctly separate sets of equations are needed - one for the projectile's horizontal motion and one for its vertical motion. Thus, the three equations above are transformed into two sets of three equations. For the horizontal components of motion, the equations are:

x = vix*t + 0.5*ax*t^2
vfx = vix + ax*t
vfx^2 = vix^2 + 2*ax*x

Of these three equations, the top equation is the most commonly used. An application of projectile concepts to each of these equations would also lead one to conclude that any term with ax in it cancels out of the equation, since ax = 0 m/s/s. For the vertical components of motion, the three equations are:

y = viy*t + 0.5*ay*t^2
vfy = viy + ay*t
vfy^2 = viy^2 + 2*ay*y

In each of the above equations, the vertical acceleration of a projectile is known to be -9.8 m/s/s (the acceleration of gravity). Furthermore, for the special case of the first type of problem (horizontally launched projectile problems), viy = 0 m/s. Thus, any term with viy in it will cancel out of the equation.

The two sets of three equations above are the kinematic equations that will be used to solve projectile motion problems. To illustrate their usefulness in making predictions about the motion of a projectile, consider the solution to the following problem:

A pool ball leaves a 0.60-meter high table with an initial horizontal velocity of 2.4 m/s. Predict the time required for the pool ball to fall to the ground and the horizontal distance between the table's edge and the ball's landing location.

The solution of this problem begins by equating the known or given values with the symbols of the kinematic equations - x, y, vix, viy, ax, ay, and t. Because horizontal and vertical information is used separately, it is a wise idea to organize the given information in two columns - one column for horizontal information and one column for vertical information. In this case, the following information is either given or implied in the problem statement:

Horizontal Information    Vertical Information
x = ???                   y = -0.60 m
vix = 2.4 m/s             viy = 0 m/s
ax = 0 m/s/s              ay = -9.8 m/s/s

As indicated in the table, the unknown quantity is the horizontal displacement (and the time of flight) of the pool ball. The solution of the problem now requires the selection of an appropriate strategy for using the kinematic equations and the known information to solve for the unknown quantities. It will almost always be the case that such a strategy demands that one of the vertical equations be used to determine the time of flight of the projectile and then one of the horizontal equations be used to find the other unknown quantities (or vice versa - first use the horizontal and then the vertical equation). An organized listing of known quantities (as in the table above) provides cues for the selection of the strategy. For example, the table above reveals that there are three quantities known about the vertical motion of the pool ball. Since each equation has four variables in it, knowledge of three of the variables allows one to calculate a fourth variable.
Thus, it would be reasonable that a vertical equation is used with the vertical values to determine the time, and then the horizontal equations be used to determine the horizontal displacement (x). The first vertical equation (y = viy*t + 0.5*ay*t^2) will allow for the determination of the time. Once the appropriate equation has been selected, the physics problem becomes transformed into an algebra problem. By substitution of known values, the equation takes the form of:

-0.60 m = (0 m/s)*t + 0.5*(-9.8 m/s/s)*t^2

Since the first term on the right side of the equation reduces to 0, the equation can be simplified to:

-0.60 m = (-4.9 m/s/s)*t^2

If both sides of the equation are divided by -4.9 m/s/s, the equation becomes:

0.122 s^2 = t^2

By taking the square root of both sides of the equation, the time of flight can then be determined:

t = 0.350 s (rounded from 0.3499 s)

Once the time has been determined, a horizontal equation can be used to determine the horizontal displacement of the pool ball. Recall from the given information that vix = 2.4 m/s and ax = 0 m/s/s. The first horizontal equation (x = vix*t + 0.5*ax*t^2) can then be used to solve for x. With the equation selected, the physics problem once more becomes transformed into an algebra problem. By substitution of known values, the equation takes the form of:

x = (2.4 m/s)*(0.3499 s) + 0.5*(0 m/s/s)*(0.3499 s)^2

Since the second term on the right side of the equation reduces to 0, the equation can then be simplified to:

x = (2.4 m/s)*(0.3499 s)

x = 0.84 m (rounded from 0.8398 m)

The answer to the stated problem is that the pool ball is in the air for 0.35 seconds and lands a horizontal distance of 0.84 m from the edge of the pool table.

- Carefully read the problem and list known and unknown information in terms of the symbols of the kinematic equations. For convenience sake, make a table with horizontal information on one side and vertical information on the other side.
- Identify the unknown quantity that the problem requests you to solve for.
- Select either a horizontal or vertical equation to solve for the time of flight of the projectile.
- With the time determined, use one of the other equations to solve for the unknown. (Usually, if a horizontal equation is used to solve for time, then a vertical equation can be used to solve for the final unknown quantity.)

One caution is in order. The sole reliance upon 4- and 5-step procedures to solve physics problems is always a dangerous approach. Physics problems are usually just that - problems! While problems can often be simplified by the use of short procedures such as the one above, not all problems can be solved with the above procedure. While steps 1 and 2 above are critical to your success in solving horizontally launched projectile problems, there will always be a problem that doesn't fit the mold. Problem solving is not like cooking; it is not a mere matter of following a recipe. Rather, problem solving requires careful reading, a firm grasp of conceptual physics, critical thought and analysis, and lots of disciplined practice. Never divorce conceptual understanding and critical thinking from your approach to solving problems.

A soccer ball is kicked horizontally off a 22.0-meter high hill and lands a distance of 35.0 meters from the edge of the hill. Determine the initial horizontal velocity of the soccer ball.
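The same two-column bookkeeping translates directly into code. A small Java sketch of the pool-ball calculation, using the same given values (the class and variable names are mine):

```java
public class HorizontalLaunch {
    public static void main(String[] args) {
        double g = 9.8;       // magnitude of gravitational acceleration, m/s^2
        double height = 0.60; // table height, m
        double vix = 2.4;     // initial horizontal velocity, m/s

        // Vertical equation with viy = 0:  height = 0.5*g*t^2  =>  t = sqrt(2h/g)
        double t = Math.sqrt(2 * height / g);

        // Horizontal equation with ax = 0:  x = vix*t
        double x = vix * t;

        System.out.printf("time of flight = %.3f s, horizontal distance = %.2f m%n", t, x);
        // prints: time of flight = 0.350 s, horizontal distance = 0.84 m
    }
}
```

Swapping in the soccer-ball numbers (height 22.0 m, range 35.0 m) and solving x = vix*t for vix instead reproduces the closing practice problem.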
<urn:uuid:d0ad397c-1679-4074-a2d2-0b5b0e5d0c5d>
4.1875
2,218
Tutorial
Science & Tech.
51.628427
According to a report published in Nature Climate Change, coralline algae, which act as a sort of glue holding the coral organisms themselves together, are more resistant to ocean acidification than previously thought. The report's lead author, Merinda Nash, a doctoral student at the Australian National University in Canberra, tells Bruce Hill it's some rare positive data about reef systems and their ability to survive.

Presenter: Bruce Hill
Speaker: Merinda Nash, a doctoral student at the ANU in Canberra

NASH: We're looking at coralline algae, and algae are part of the plant kingdom, whereas corals are effectively an animal, even though they do have symbiotic algae. So our research specifically relates to coralline algae, which helps build the structural part of the reef. Some corals are looking OK, and that's really species-specific, so that's not actually our research.

HILL: So what did your research find about the way that coralline algae can survive high ocean acidity?

NASH: What we found is that we discovered this extra mineral, dolomite, in the coralline algae. Now, what's important about that is that prior to our discovery, everybody had thought that this coralline algae was made up of magnesium calcite, which was thought to be, and is, very susceptible to dissolution. So there was a lot of concern that the coralline algae, which plays a key role in building the reef and binding corals together, would be the first thing to dissolve as CO2 went up, and that that would impact the reef structure. So we found that this presence of dolomite actually reduced the dissolution rate significantly, to about one tenth the rate of the algae without the dolomite, so that's quite good news.

HILL: So does this mean that the coral could actually prove a bit more resistant to ocean acidification than we thought before?

NASH: Well, as I mentioned, our research was looking at the algae, not the coral, and this is quite difficult, it seems, for most people trying to understand the difference. So the best analogy that I can give is that the algae that we look at is like the cement in a house made of bricks, and the coral are the bricks. So if you could imagine a house that had bricks and no cement, it wouldn't be a very strong structure. So the coralline algae with dolomite look to be quite resistant, but what happens to the corals will be a different story.

HILL: So we're talking here about essentially the concrete or the glue that holds the bricks together. The bricks are still in trouble from acidification though, is that right?

NASH: It looks that way. There's a lot of research being done now that's showing the different species have different susceptibility, and there's some very interesting research showing resistance increasing, I guess, across generations, and that there can be protection for some species that are able to control their pH, but that's still fairly new, the research showing resistance to pH [changes] across species. But it does look like the corals are still going to be quite impacted by the rising temperature and acidity.

HILL: So this is more or less good news, or better news than we've been having for coral reef systems, but it's not necessarily saying, don't worry, panic's over, it's all going to be fine?

NASH: Yeah, don't go back to driving your V8s just yet.
What is really good about this is that, for a lot of the Pacific islands that are coral atolls and face high-energy waves all the time, a lot of those islands are actually protected by ridges that are built just about entirely out of coralline algae in the top couple of metres. So, for example, one of the islands we looked at, Rodrigues, which is in the Indian Ocean: the top four metres of that reef is predominantly coralline algae, and there's a ridge that comes up about nearly a metre that is just coralline algae, and that forms the highest-energy point of the reef. So where the waves are strongest is where the algae with the dolomite loves to grow. So it has a really critical role protecting shorelines and human communities on the shore from the worst effects of the high wave energy.

HILL: There's still an awful lot about coral reef systems that we don't understand?

NASH: Yes, you've absolutely got that right, Bruce. Some of the other work I've been involved in is the latest CSIRO Marine Report Card on Ocean Acidification, and as one of the authors on that, I've seen the latest research that's been done, and it's quite clear that while we are starting to understand a lot of the impacts, the more we dig into it, I think the more we realise that we don't understand it, and how much further there is to go.
<urn:uuid:d0561f56-e4b7-4326-845d-ac9adeff78a9>
3.65625
1,007
Audio Transcript
Science & Tech.
47.046964
Will we see it coming? A lack of cash could end the only survey dedicated to searching the southern skies for Earth-grazing comets and asteroids. That would create a blind spot in our global view of objects that could cause significant devastation should they hit Earth.

The Siding Spring Survey uses images from the Siding Spring observatory in Australia as part of the global Catalina Sky Survey, an effort to discover and track potentially dangerous near-Earth objects. Astronomers sift through virtually identical images of the sky, looking for moving objects. Catalina uses a range of northern hemisphere telescopes - and the Siding Spring Survey. But in October, Catalina cut off cash to the survey due to growing costs, caused partly by changes in the exchange rate between the Australian and US dollars. That decision was "very difficult", says Steve Larson, who heads Catalina. Since then, the southern survey has been limping along with temporary funding from the Australian National University in Canberra, but the extension is set to expire at the end of July, says survey operator Rob McNaught.

The leftover building blocks of planets, near-Earth objects orbit the sun in highly elliptical orbits, and sometimes graze or hit Earth. Seeing an asteroid before it hits could save lives by providing time to evacuate a region. "Given the very best circumstances, you can predict an impact to 1 second and 1 kilometre," says McNaught. "There's no other natural disaster that you can do that for."

But without a southern lookout, any object approaching Earth from below 30 degrees latitude would be invisible, says Tim Spahr of the International Astronomical Union's Minor Planet Center in Cambridge, Massachusetts. That won't be much of a problem for massive objects like the asteroid that wiped out the dinosaurs. These are rare, and astronomers estimate they have already found and are tracking 94 per cent of them via software models. The worry is asteroids about 30 metres wide, which could flatten a city. Such a hit is blamed for the Tunguska event in 1908, which levelled a 2000-square-kilometre swathe of forest in Siberia. There are around a million of these smaller objects, making them the most likely to hit Earth, yet locations for less than 1 per cent of them are known. Without a southern telescope, "you could easily get blindsided by one of these", says Don Yeomans of NASA's Near-Earth Object Program at the Jet Propulsion Laboratory in Pasadena, California. "Whether that's a 1 per cent, 10 per cent or 20 per cent increased risk, I don't know. But it is an increased risk."

What's more, as most asteroids and comets are tracked across both hemispheres, those discovered in the north could get lost without follow-up from the south. There will also be objects seen in the north that could have been spotted sooner in the south, giving more time to prepare. McNaught estimates that the survey needs about US$180,000 per year, plus a one-off $30,000 to fix the observatory dome. "I really wish I could tell you that the chances are very good that we'll be able to find some money, but I can't," says Harvey Butcher, who heads the team at Australian National University that is providing temporary funding.

If the survey shuts down, there won't be another ground telescope capable of fulfilling its duties until the 2020s, when the Large Synoptic Survey Telescope is due to go online in Chile.
The non-profit B612 Foundation plans to build a space telescope to scan for small asteroids, but it won't launch until at least 2017. "In the interim, having one eye closed when the cost of having it open is so little seems to be penny wise and pound foolish," says B612 co-founder Russell Schweickart, a former NASA astronaut.

The people's asteroid defence

Citizens, defend thyselves. As governments prove slow at funding telescopes to monitor asteroids, a non-profit organisation plans to pick up the slack - though its telescope won't launch till 2017 at the earliest. The B612 Foundation - named for the asteroid that was home to the prince in The Little Prince - has announced a plan to build, fly and operate the first private space telescope. Called Sentinel, it will cost several hundred million dollars, which the foundation hopes to raise through donations. "We think this is eminently doable," says B612's Ed Lu, a former NASA astronaut, who compares the project to funding museums or concert halls. "This telescope will be owned by the people of the world."

Unlike ground-based surveys, Sentinel will orbit the sun, so its view will not be confined to one hemisphere. It will look in infrared wavelengths, so small asteroids that don't reflect much visible light can be seen via their heat. Planned for launch in 2017 or 2018, Sentinel, Lu predicts, will find more asteroids in its first month than all previous telescopes combined.
<urn:uuid:1d16731c-5658-470e-b53d-d6552f4f090e>
3.546875
1,047
Truncated
Science & Tech.
50.045301
Pulickel M. Ajayan (1) and Otto Z. Zhou (2)

(1) Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180-3590, USA. Ajayan@rpi.edu
(2) Curriculum in Applied and Materials Sciences, Department of Physics and Astronomy, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3255, USA. Zhou@physics.unc.edu

Abstract. Carbon nanotubes have attracted the fancy of many scientists worldwide. The small dimensions, strength and remarkable physical properties of these structures make them a unique material with a whole range of promising applications. In this review we describe some of the important materials science applications of carbon nanotubes. Specifically, we discuss the electronic and electrochemical applications of nanotubes, nanotubes as mechanical reinforcements in high-performance composites, nanotube-based field emitters, and their use as nanoprobes in metrology and biological and chemical investigations, and as templates for the creation of other nanostructures. Electronic properties and device applications of nanotubes are treated elsewhere in the book. The challenges that ensue in realizing some of these applications are also discussed from the point of view of manufacturing, processing, and cost considerations.

The discovery of fullerenes provided exciting insights into carbon nanostructures and how architectures built from sp2 carbon units based on simple geometrical principles can result in new symmetries and structures that have fascinating and useful properties. Carbon nanotubes represent the most striking example. About a decade after their discovery, the new knowledge available in this field indicates that nanotubes may be used in a number of practical applications. There have been great improvements in synthesis techniques, which can now produce reasonably pure nanotubes in gram quantities. Studies of structure–topology–property relations in nanotubes have... [continues]
<urn:uuid:15cbce36-2c4f-4bc8-822a-e8bb667baf31>
3.015625
565
Academic Writing
Science & Tech.
27.037069
NetCDF APIs are available for most programming languages used in the geosciences.

The C API: The C library is the core implementation on which non-Java interfaces are built.

The Java API: The netCDF Java library provides advanced capabilities not available in C-based APIs.

The Fortran-90 API: The Fortran-90 library provides Fortran 90/95 support for modelers and scientists.

The Fortran-77 API: The Fortran-77 library provided early Fortran support for modelers and scientists.

The C++ API: The two C++ APIs (legacy and netCDF-4) provide object-oriented access to netCDF data.

Python: There are multiple Python libraries for netCDF from which to choose.

The Ruby API: The Ruby API for netCDF was contributed as part of the Dennou Ruby Project, providing software for data analyses, visualization, and numerical simulations for geophysical studies.

Perl: Two Perl APIs for netCDF are PDL::NetCDF and NetCDFPerl.

Other APIs for netCDF: Other APIs for netCDF have been contributed and made available for MATLAB, IDL, and R.
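As a taste of the Java API, here is a minimal sketch that opens a file and reads one variable with the netCDF-Java (CDM) library; the file name and variable name are placeholders, and exact entry points vary somewhat between library versions:

```java
import ucar.ma2.Array;
import ucar.nc2.NetcdfFile;
import ucar.nc2.Variable;

public class ReadNetcdf {
    public static void main(String[] args) throws Exception {
        // Placeholder file and variable names -- substitute your own.
        try (NetcdfFile ncfile = NetcdfFile.open("example.nc")) {
            Variable temp = ncfile.findVariable("temperature");
            if (temp == null) {
                System.out.println("variable not found");
                return;
            }
            Array data = temp.read();  // reads the whole variable into memory
            System.out.println("rank = " + temp.getRank()
                    + ", size = " + data.getSize());
        }
    }
}
```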
<urn:uuid:ccf2d277-4c7d-451f-8fc9-aa74107a68e2>
2.84375
252
Documentation
Software Dev.
43.4475
Horsefly, or Gadfly, a large fly that irritates domestic animals, primarily horses, cattle, and hogs. It is a carrier of anthrax and tularemia (rabbit fever). The black horsefly is about one inch (2.5 cm) long. It has a broad head, brownish-black wings, and a beaklike mouth. The mouth is adapted to feeding on liquid—the female feeds on blood, the male on nectar. To draw blood from a victim, the female pierces the skin with her mouth, often inflicting a painful wound. Horseflies belong to the family Tabanidae of the order Diptera. The black horsefly is Tabanus atratus.
<urn:uuid:787d5211-284f-4e9c-a3c5-b733c6aece87>
2.96875
164
Knowledge Article
Science & Tech.
60.31
Cosmic timeline 05
The Large Hadron Collider (LHC) is the world’s largest and highest-energy particle accelerator, intended to collide opposing particle beams of either protons at an energy of 7 TeV per particle, or lead nuclei at an energy of 574 TeV per nucleus. It is expected to address the most fundamental questions of physics, hopefully allowing progress in understanding the deepest laws of nature. The LHC lies in a tunnel 27 kilometres (17 miles) in circumference, as much as 175 metres (570 ft) beneath the Franco-Swiss border near Geneva, Switzerland. Physicists hope that the LHC will help answer the most fundamental questions in physics, questions concerning the basic laws governing the interactions and forces among the elementary objects, the deep structure of space and time, especially regarding the intersection of quantum mechanics and general relativity, where current theories and knowledge are unclear or break down altogether. These issues include:
Is the Higgs mechanism for generating elementary particle masses via electroweak symmetry breaking indeed realised in nature? It is anticipated that the collider will either demonstrate or rule out the existence of the elusive Higgs boson(s), completing (or refuting) the Standard Model.
Are there extra dimensions, as predicted by various models inspired by string theory, and can we detect them?
Are electromagnetism, the strong nuclear force and the weak nuclear force just different manifestations of a single unified force, as predicted by various Grand Unification Theories?
Why is gravity so many orders of magnitude weaker than the other three fundamental forces?
Why are there apparent violations of the symmetry between matter and antimatter?
What was the nature of the quark-gluon plasma in the early universe?
On 10 September 2008, the proton beams were successfully circulated in the main ring of the LHC for the first time. On 19 September 2008, operations were halted due to a serious fault between two superconducting bending magnets. Repairing the resulting damage and installing additional safety features took over a year. On 20 November 2009 the proton beams were successfully circulated again, and on 23 November 2009 the first proton–proton collisions were recorded, at the injection energy of 450 GeV per particle. On 18 December 2009 the LHC was shut down after its initial commissioning run, which achieved proton collision energies of 2.36 TeV, with multiple bunches of protons circulating for several hours and data from over one million proton–proton collisions. The LHC resumed operations in February 2010, but it will operate at only half power. In 2012 it will be shut down for the repairs necessary to bring it to its design energy, and then it will start up again in 2013.
A simulated event in the CMS detector, featuring the appearance of the Higgs boson.
The Higgs boson particle
The particle known as the Higgs boson is named after Peter Higgs of the University of Edinburgh, who predicted its existence in 1964. At the beginning of the Universe it was as if there was a sea full of Higgs boson particles causing the Universe to inflate. In the rapid cosmic expansion we call inflation, quantum fluctuations put the sea of Higgs particles in a state of constant flux. Virtual Higgs particles were popping in and out of existence, but were created in pairs. These pairs would be made up of one particle and an anti-particle, which re-combine when it’s time for the pair to disappear.
But what happened when the Universe grew by a factor of 10 to the power of 70 was that these pairs were dragged apart before they had time to re-combine. These quantum fluctuations formed the initial irregularities from which the galaxies grew. The imprint of these fluctuations can still be detected today in the cosmic microwave background radiation, a lasting electromagnetic echo left by the Big Bang. Particle physicists, the scientists who observe and wonder at the sub-atomic scale of the Universe, believe that everything in the Universe is given its mass by the action of a single type of subatomic particle that was created from the energy that drove the rapid expansion of the early universe we call ‘inflation’. When the idea of inflation was proposed as part of the explanation of how the Universe began, some scientists thought that the rapid cosmic expansion would make it impossible for galaxies to form in the Universe. Galaxies are formed by the gravitational attraction of matter that develops from irregularities in the density of matter in space. These irregularities are the seeds from which galaxies grow. The problem, as they saw it, was that this inflation and expansion of the early universe would smooth out these irregularities.
<urn:uuid:536856e3-15e7-4aaa-a6e1-ceec78cd6063>
3.328125
959
Content Listing
Science & Tech.
34.671061
One of the most fascinating exchanges I had with renowned astronomer Dr. Frank Drake this week did not make it into today’s story.
Dr. Frank Drake (seti-inst.edu)
Drake, 78, became the first person ever to search for signals from unknown neighbors when he tuned a radio receiver to the stars in 1960. Now a leading voice at the SETI Institute, he’s imagined the scene a hundred times: If we ever discover other intelligent life out there, how will it happen? How will we react? Our culture offers some dramatic scenarios. In “Star Trek,” intelligent races bide their time until humans discover warp drive. In “Independence Day,” they blow up the White House. But that’s Hollywood talking. Drake is a scientist. His take? Before we make contact with other intelligent lifeforms, he said, we’ll learn about them on TV. “I’m pretty sure I know what we’d do,” he began:
What I envision is that it will happen through a radio signal, not a UFO landing or something like that. The chances are very high that we will just barely detect a signal. Strong enough so we’ll know it exists, but not a strong enough signal that we’ll extract any information from it. Like looking at a snowy TV channel — you know there’s a channel there but you can’t see any pictures. The discovery of even one more civilization is a super bombshell because it says there are other creatures; if we find even one, it will certainly be near us. It’s very improbable that there will be two in the universe and there won’t be others. The discovery of one actually says there are many, many civilizations. What you want to know is, what are the extraterrestrials like? You’ve got to build that giant telescope till you can get enough signal-to-noise, so there’s enough sensitivity so that you can capture their signals and enough clarity that you can read the signals. And my dream is — what you want to capture is their television. That’s how you can find out all about them without having to ask questions and wait thousands of years for answers. It will take hundreds of years to make contact, but we can learn about them.
Drake hopes one day someone can build a radio telescope powerful enough to detect weak signals from light-years away (a receiver five miles in diameter should do the trick, he thinks). Today the largest radio telescope in the world is the 305-meter device at the Arecibo Observatory in Puerto Rico, but that could soon change. China began building a 500-meter radio telescope in December. Drake will speak at 7 p.m. on Saturday at 120 Kane Hall at the University of Washington as part of the department of astronomy’s open house. Tickets must be requested in advance here.
<urn:uuid:61bdad6a-3a18-4830-a1aa-c24b7735fa73>
3.078125
622
Personal Blog
Science & Tech.
60.764647
A simple example (this is not recommended as a real way of generating HTML!):

    from contextlib import contextmanager

    @contextmanager
    def tag(name):
        print("<%s>" % name)
        yield
        print("</%s>" % name)

    >>> with tag("h1"):
    ...     print("foo")
    ...
    <h1>
    foo
    </h1>

At the point where the generator yields, the block nested in the with statement is executed. The generator is then resumed after the block is exited. If an unhandled exception occurs in the block, it is reraised inside the generator at the point where the yield occurred. Thus, you can use a try...except...finally statement to trap the error (if any), or ensure that some cleanup takes place. If an exception is trapped merely in order to log it or to perform some action (rather than to suppress it entirely), the generator must reraise that exception. Otherwise the generator context manager will indicate to the with statement that the exception has been handled, and execution will resume with the statement immediately following the with statement.

Combine multiple context managers into a single nested context manager. This function has been deprecated in favour of the multiple-manager form of the with statement. The one advantage of this function over the multiple-manager form of the with statement is that argument unpacking allows it to be used with a variable number of context managers, as follows:

    from contextlib import nested

    with nested(*managers):
        do_something()

Note that if the __exit__() method of one of the nested context managers indicates an exception should be suppressed, no exception information will be passed to any remaining outer context managers. Similarly, if the __exit__() method of one of the nested managers raises an exception, any previous exception state will be lost; the new exception will be passed to the __exit__() methods of any remaining outer context managers. In general, __exit__() methods should avoid raising exceptions, and in particular they should not re-raise a passed-in exception.

This function has two major quirks that have led to it being deprecated. Firstly, as the context managers are all constructed before the function is invoked, the __new__() and __init__() methods of the inner context managers are not actually covered by the scope of the outer context managers. That means, for example, that using nested() to open two files is a programming error, as the first file will not be closed promptly if an exception is thrown when opening the second file. Secondly, if the __enter__() method of one of the inner context managers raises an exception that is caught and suppressed by the __exit__() method of one of the outer context managers, this construct will raise RuntimeError rather than skipping the body of the with statement. Developers that need to support nesting of a variable number of context managers can either use the warnings module to suppress the DeprecationWarning raised by this function or else use this function as a model for an application-specific implementation.

Deprecated since version 3.1: The with statement now supports this functionality directly (without the confusing, error-prone quirks).

Return a context manager that closes thing upon completion of the block. This is basically equivalent to:

    from contextlib import contextmanager

    @contextmanager
    def closing(thing):
        try:
            yield thing
        finally:
            thing.close()

And lets you write code like this:

    from contextlib import closing
    from urllib.request import urlopen

    with closing(urlopen('http://www.python.org')) as page:
        for line in page:
            print(line)

without needing to explicitly close page. Even if an error occurs, page.close() will be called when the with block is exited.
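As the deprecation note above suggests, the multiple-manager form of the with statement replaces nested(); here is a minimal sketch of that form (the file names are hypothetical):

    # Both files are closed on exit, and if opening the second file fails,
    # the first is still closed promptly -- the problem nested() had.
    with open("a.txt") as fa, open("b.txt") as fb:
        for line_a, line_b in zip(fa, fb):
            print(line_a.strip(), line_b.strip())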
<urn:uuid:2eac2388-554b-4b77-86ef-b622b267e997>
2.796875
765
Documentation
Software Dev.
42.618658
How the Leopard Got His Spots
In one of his celebrated just-so stories, Rudyard Kipling recounted how the leopard got his spots. But taking this approach to its logical conclusion, we would need distinct stories for every animal's pattern: the leopard's spots, the cow's splotches, the panther's solid colors. And we would have to add even more stories for the complex patterning of everything from molluscs to tropical fish. But far from these different animals requiring separate and distinct explanations, there is a single underlying explanation that shows how we can get all of these different patterns from a single unified theory. Beginning in 1952, with Alan Turing's publication of a paper entitled "The Chemical Basis of Morphogenesis", scientists recognized that a simple set of mathematical formulas could dictate the variety of how patterns and colorings form in animals. This model is known as a reaction-diffusion model and works in a simple way: imagine you have multiple chemicals, which diffuse over a surface at different rates and can interact. While in most cases diffusion simply creates a uniformity of a given chemical—think how pouring cream into coffee will eventually spread and dissolve and create a lighter brown—when multiple chemicals diffuse and interact, this can give rise to non-uniformity. Even though this sounds somewhat counterintuitive, not only can it occur, but it can be generated using only a simple set of equations, and in turn explain the exquisite variety of patterns seen in the animal world. Mathematical biologists have been exploring the properties of reaction-diffusion equations ever since Turing's paper. They've found that varying the parameters can generate the animal patterns we see. Some mathematicians have even examined the ways in which the size and shape of the surface can dictate the patterns that we see. As the size parameter is modified, we can easily go from giraffe-like patterns to those seen on Holstein cows. This elegant model can even yield simple predictions. For example, while a spotted animal can have a striped tail (and very often does) according to the model, a striped animal will never have a spotted tail. And this is exactly what we see! These equations can generate the endless variation seen in Nature, but can also show the limitations inherent in biology. The just-so of Kipling may be safely exchanged for the elegance and generality of reaction-diffusion equations.
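To make the idea concrete, here is a minimal Python sketch of one classic reaction-diffusion system, the Gray-Scott model (a descendant of Turing's formulation, not the specific equations from his paper); the parameter values are illustrative choices that tend to produce spot-like patterns:

    import numpy as np

    def laplacian(Z):
        # Discrete Laplacian: 5-point stencil with periodic boundaries.
        return (np.roll(Z, 1, axis=0) + np.roll(Z, -1, axis=0)
                + np.roll(Z, 1, axis=1) + np.roll(Z, -1, axis=1) - 4.0 * Z)

    # Two "chemicals" U and V diffuse at different rates (Du > Dv) and react.
    Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065   # illustrative parameters
    n = 128
    U = np.ones((n, n))
    V = np.zeros((n, n))
    U[54:74, 54:74] = 0.50                    # seed a perturbation in the middle
    V[54:74, 54:74] = 0.25

    for _ in range(5000):                     # simple forward-Euler time stepping
        UVV = U * V * V
        U += Du * laplacian(U) - UVV + F * (1.0 - U)
        V += Dv * laplacian(V) + UVV - (F + k) * V

    # U now holds a Turing-style pattern; view it with e.g. matplotlib's imshow.

Changing F and k, or the grid size, shifts the output among spots, stripes, and splotches, which is exactly the parameter-dependence the text describes.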
<urn:uuid:3e545b8b-0f86-465f-891e-4807e9848fd9>
3.5625
493
Knowledge Article
Science & Tech.
32.650627
U.S. Climate Extremes Index
Entry ID: gov.noaa.ncdc.C00571
Abstract: The Climate Extremes Index (CEI) is a study prepared by and available from the National Climatic Data Center (NCDC). The CEI was first presented in 1995 as a framework for quantifying observed changes in climate within the contiguous United States. The index is based on an aggregate set of conventional climate extreme indicators which now includes extremes in land-falling tropical storm and hurricane wind intensity. Originally, the CEI was calculated on an annual basis, and now the revised CEI is evaluated for eight standard seasons: spring, summer, autumn, winter, annual, cold season, warm season, and hurricane season. Additional temperature and precipitation stations have been added to the analysis to improve spatial coverage without compromising completeness of data. Near real-time data have also been incorporated into the index, which allow the CEI to be calculated operationally on a seasonal basis.
Purpose: To make a wide range of climatic data available to researchers and the public.
SUPPLEMENTAL INFORMATION: technical report
Pricing is dependent on customer order specifications. Please contact NCDC for information on fees and terms for retrieving the Data Set or Product.
CURRENTNESS REFERENCE: Ground Condition
ISO Topic Category
Role: SERF AUTHOR
Phone: (301) 614-6898
Email: Tyler.B.Stevens at nasa.gov
NASA Goddard Space Flight Center
Global Change Master Directory
Province or State: MD
Postal Code: 20771
Creation and Review Dates
<urn:uuid:28b2ee83-e95b-42c7-a53f-b4eabd5c64ad>
2.765625
337
Structured Data
Science & Tech.
32.54875
In Arrays.qsort the methods Float.compare and Double.compare are used depending on the values in the array. The compare operations perform the following (copied from GNU Classpath; the flattened snippet is reconstructed here, including the "return 0" line the extraction dropped):

    if (isNaN(x))
        return isNaN(y) ? 0 : 1;
    if (isNaN(y))
        return -1;
    // recall that 0.0 == -0.0, so we convert to infinities and try again
    if (x == 0 && y == 0)
        return (int) (1 / x - 1 / y);
    if (x == y)
        return 0;
    return x > y ? 1 : -1;

In the normal case we're going to hit 6 floating-point compares. The case of 0, 0 is common due to using qsort on branch profiles, and this results in 2 divides and 1 subtract. Given we're just comparing two values, we should be able to do this substantially cheaper in a VM-specific/magic version.
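The closing point, that a cheaper comparison should be possible, can be illustrated with the classic bit-pattern trick; this is a Python sketch of the idea rather than the VM-level Java change the author has in mind, and its NaN ordering differs slightly from Double.compare:

    import struct

    def float_key(x):
        # Map a double to an unsigned 64-bit key whose integer order matches
        # numeric order: negative values get all bits flipped, non-negative
        # values get the sign bit set. (NaNs land at the extremes rather than
        # all sorting above +inf as Double.compare specifies.)
        (bits,) = struct.unpack("<Q", struct.pack("<d", x))
        return bits ^ 0xFFFFFFFFFFFFFFFF if bits >> 63 else bits | (1 << 63)

    print(sorted([0.0, -0.0, 1.5, -2.5], key=float_key))  # [-2.5, -0.0, 0.0, 1.5]

One bit test plus one bitwise op per element replaces the chain of floating-point compares and divides.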
<urn:uuid:b7ad3e08-fe9d-4f8a-ac16-6806500b2138>
2.8125
183
Q&A Forum
Software Dev.
84.867375
Thawing Permafrost in the Arctic Will Speed Up Global Warming On the hill above Toolik Field Station, Gaius Shaver has been running experiments for more than 30 years to determine how fertilizer changes the tussock tundra’s floral medley. His test plots, north of Toolik Lake, are accessible only by a mazelike wooden walkway that, at points, is little more than a web of two-by-sixes. As a rule, nobody ventures off the walkway, lest they crush someone else’s experiment. On an overcast morning Shaver, clad in mud boots, a Carhartt jacket, and blue jeans, confidently makes his way along the slippery walkway and up to his plots. These are very different from the nearby untouched tundra. Because nitrogen and phosphorous are scarce in the frozen tundra in forms useful to plants, yet crucial to their growth, Shaver wanted to see what would happen if he added the nutrients to small plots. It’s been more than two decades since he started the study, which has transformed the plots from calf-high grass-dominated tussocks to knee-high shrub-covered patches. In one section covered with clear plastic—a makeshift greenhouse—birches are thriving and waist high, but few, if any, other species abound. The clear lesson: When more food is available, shrub growth takes off, shading out plants like cranberries and cotton grass. As the North Slope has warmed during the past half century, it has become shrubbier. While this rise in woody vegetation has been a boon for songbirds and moose—mammals once scarce as far north as Toolik, now regularly spotted there—it may help spur warming. “Shrubs tend to feed back into the local and regional climate,” says Michelle Mack, a University of Florida plant ecologist. During the summer, birch, willow, and alder reflect solar radiation, driving up atmospheric temperature. In winter, tall, branched woody plants trap snow that would otherwise blow across the tundra. This actually keeps the ground warmer and may allow soil microbes to remain active for longer, cycling nutrients (providing food for shrub growth) and releasing greenhouse gases. Disturbances like thermokarsts could also expand shrub cover. As with the older thermokarst at lake NE-14, decades after these features form “you see shrubs, not tussocks,” says Mack. Once shrubs move in, she adds, it could be difficult for lost permafrost to become reestablished. More woody material also means more carbon is being stored above ground, instead of in the soils. While plants may trap carbon for hundreds of years before it’s cycled back into the atmosphere, permafrost can store it for tens of thousands of years. Since trees act as carbon sinks, soaking up CO2, a shrubbier tundra might seem like a plus—but it actually exhales more carbon than tussock tundra. Although the plants in Shaver’s fertilized plots have more biomass and take up more carbon dioxide than those in the control plots, that gain is offset by the loss of carbon and nitrogen from deep soils; the nutrient-rich plots saw a net loss of nearly 2,000 grams of carbon per square meter over 20 years, Mack and colleagues reported in Nature in 2004. “The tundra is moving toward a shrubbier community, which means it will hold less carbon overall,” says Mack. The release of that carbon into the atmosphere, in turn, will feed back into more warming. All of that wood creates more potential kindling. “We know that 12,000 years ago, when the tundra was more shrubby, there were more fires,” says Mack. 
Now, for the first time since the woolly mammoth and sabertooth tiger went extinct, big wildfires are again raging on the North Slope. “As an undergrad I took an Arctic seminar and was taught that there are no cumulus clouds in the Arctic, no thunderstorms,” John Hobbie, a founder of Toolik Field Station, recalls of his education in the 1950s. Today in the warmer modern Arctic, thunderstorms are regular summer events. “One day this year we had 272 lightning strikes on the North Slope,” says Syndonia Bret-Harte, an ecologist at the University of Alaska-Fairbanks and Toolik’s associate science director. Most of the flashes that hit the wet tundra don’t catch fire. But the summer of 2007 was an exceptionally dry year, and in July a strike hit the grass near Toolik, smoldered for a few weeks, then exploded. Bret-Harte recalls watching the fire from camp. “When the wind was not blowing, you could see this big wall of smoke,” she says, “and that was awesome and beautiful but disturbing at the same time. When the wind shifted and brought the smoke into camp, then it was like being in this thick, acrid fog. It was gross.” The Anaktuvuk River fire, the largest ever recorded on the North Slope, burned until October, ultimately consuming nearly 350 square miles and releasing 2.2 million metric tons of carbon into the atmosphere—the amount the entire country of Barbados emits annually. Walking around the severely burned area two years later, the previously scorched earth dusts my boots and pant legs. Already some cotton grass has come back, and the green stems and white tufts growing out of singed tussocks contrast sharply with the blackened terrain. Bret-Harte estimates that plants now cover about half of the ground. But, she adds, “I haven’t seen any of the normal mosses and lichens coming back.”
<urn:uuid:abbdfda2-8898-496f-b0b6-4996cbf60e8c>
3.75
1,245
Truncated
Science & Tech.
52.034664
Single (left) and multibeam (right) echo sounding of the seafloor. Single beam systems typically have beam widths of 10-30 degrees and estimate depth by measuring the shortest slant range to the seafloor within the main beam. Multibeam (swath sonar) systems provide a series of slant range and elevation angle estimates along a fixed azimuth. This method is preferred because it measures an entire area rather than a single line on the seafloor. (University of New Brunswick) Image courtesy of Submarine Ring of Fire 2002, NOAA/OER.
<urn:uuid:349137bb-97d9-46f3-9535-97c41447e783>
3.5
122
Knowledge Article
Science & Tech.
41.367103
My Wikipedia suggestion for this problem is Faraday's law of induction. They sum it up in pretty much a single quote. The induced electromotive force (EMF) in any closed circuit is equal to the time rate of change of the magnetic flux through the circuit. There are lots of technicalities of motors and generators, but they're not necessary for this problem. The fundamental principle is that there is a wire spinning while in a magnetic field. The EMF, which I'll denote $V$ for voltage, is quantified as follows, with $r$ the rotating radius of the coil (assuming it is rectangular and rotating in the right direction), $l$ the other dimension of the rectangular loop, $B$ the magnetic field, and $\omega$ the speed of rotation. $$V = B r l \omega$$ If any single one of these factors had unlimited potential to increase, then a motor could deliver infinite voltage. Of course they are all limited. The most obvious way to scale up power is to make a bigger machine. There is one missing piece, which is that EMF refers to the voltage that can be either produced or converted into a mechanical action. That does not say anything of current, so taken at face value, such a simple coil rotating in a constant magnetic field would allow infinite power conversion if there were infinite current. Current in any wire or bundle of wires is, of course, limited by resistive heating. You can go find plenty of information about these limits but I will not cover them here. Yes, it is possible to use superconducting wires for both the primary coils as well as the magnetic field generating coils, but they also do not allow infinite energy conversion, and yes, there are companies that sell these. I'm not familiar enough with the technology to say for sure, but I believe that the problem is still resistive heating. Superconductors generate much less heat, but each unit of heat they produce is much more expensive to remove if it's a low-temperature superconductor. The 2nd law of thermodynamics gives a direct penalty on a heat flux out of a refrigerated system.
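As a rough numeric sketch of the formula (the values are illustrative, not from the question):

    import math

    # V = B * r * l * omega for the idealized rectangular loop described above
    B, r, l, omega = 0.5, 0.05, 0.10, 2 * math.pi * 50   # tesla, m, m, rad/s
    V = B * r * l * omega
    print(round(V, 3), "volts")                          # about 0.785 V

Doubling any single factor doubles the EMF, which is the sense in which each factor is a lever for scaling up, until the heating limits above intervene.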
<urn:uuid:bac2f75f-ba86-472c-84b6-b49002ff07f6>
3.359375
444
Q&A Forum
Science & Tech.
42.693513
Projects in Scientific Computing: New Understanding of Life and Its Processes
Research that offers promise for reducing side-effects
If there were a mantra for molecular biologists, it might be this: Structure and function are related, two sides of a coin. To know how a protein does its job inside a living cell, look at the structural details, how it's put together. The twists and turns, helices and sheets of a protein's 3D shape define what biomolecules it can interact with and how. In the 1990s, through a variety of genome projects, scientists are mapping and sequencing the genes of many organisms, and the resulting data corresponds to the root of protein structure: the linear sequence of amino acids, like beads on a chain, that precedes and determines 3D shape. As this data streams into the marketplace of knowledge, a few scientists, like Hugh Nicholas and John Hempel, are using it to open up new territory in understanding how amino-acid sequence interrelates with 3D structure. Hempel, a University of Pittsburgh biologist, has for 20 years focused on a family of enzymes called aldehyde dehydrogenase (ALDH). In the early 80s, he worked out the first two ALDH sequences to be solved and, in collaboration with Ron Lindahl at the University of South Dakota, has continued this work. In the past few years, scientists worldwide have expanded work in this area, and currently sequences are determined for over 200 related ALDH enzymes in a wide range of plants and animals. In the late 80s, Hempel began collaborating with Nicholas, a Pittsburgh Supercomputing Center scientist who specializes in sequence analysis, the process of analyzing relationships among nucleic-acids (DNA and RNA) or proteins through comparison of their sequence data. In 1993, they worked on a group of 16 ALDH sequences representing the diversity of the ALDH family sequenced at that time. In 1997, Hempel collaborated with a University of Georgia research group led by B.C. Wang in work that solved the first ALDH 3D structure. This made it possible for Nicholas and Hempel to embark on an ambitious project investigating the interplay between 3D structure and sequence data. Beginning in September 1997, Hempel's student, John Perozich, gathered 145 full-length ALDH sequences, the complete pool of sequenced ALDHs at the time. Using the sequence-analysis facility at PSC to align them, the researchers produced one of the largest multiple-sequence alignments achieved to date. Nicholas and Hempel then applied techniques they developed to identify recurring sequence elements and analyze them in relation to function. They identified 10 sequence motifs, amino-acid patterns, that recur with a high degree of regularity in the 145 ALDH sequences. Their analysis of these motifs offers fresh insight into how sequence influences 3D structure. Their research, furthermore, offers an approach with potential wide application in other protein families. ALDHs have been found in nearly every form of living thing. Their primary role in humans and other mammals is protecting the body from toxic compounds called aldehydes. Early interest (in the 70s) focused on an ALDH in the liver that helps metabolize an aldehyde (acetaldehyde) that comes from alcohol, changing it to acetic acid, which the body burns for energy.
Further research has found a number of closely related but different ALDH species, over 10 now identified in humans, with various functions. A number of these are the subject of public-health research. One of them, also a liver ALDH, is genetically inactive in about half of all Asians, causing severe alcohol intolerance. In 1996 researchers showed that defects in another ALDH cause a genetic disease called Sjögren-Larsson syndrome, which involves mental retardation, scaly skin and shortened life. ALDHs also affect cancer treatment. A number of chemotherapy drugs work through conversion in the body to an aldehyde that attacks cancer cells. These therapies lose potency over time because the relevant ALDH increases in concentration, deactivating the aldehyde more quickly. Better structure-function knowledge will make it possible to develop specific ALDH-inhibitor drugs to regulate this kind of chemotherapy.
This phylogenetic tree plots evolutionary relations among ALDH sub-families.
As one part of their project, Hempel and Nicholas classified the 145 sequences, each more than 700 amino-acids in length, into sub-families. These groupings, explains Nicholas, are based on evolutionary adaptation that's consistent with having a common-ancestor ALDH. In one form of adaptation, the gene that codes for a protein reappears with little change when one organism evolves into another (from one bacterium to another, for instance). In another kind of adaptation, however, a gene duplicates within an organism. One copy of the gene can then diverge slightly in structure and take on a modified function, such as to react with a different form of aldehyde. Nicholas and Hempel tracked this adaptation by sequence analysis. "We think this happened at least 13 times in the history of ALDHs," says Nicholas. Through statistical measures of sequence similarity, Hempel and Nicholas grouped the 145 ALDH sequences into 13 distinct sub-families. "Sequences within a sub-family are more similar to each other than to the sequences in other sub-families." The researchers also generated a "phylogenetic tree," which charts evolutionary relationships among the sub-families. Each branch represents a point of divergence, where a gene duplicates and evolves to a new function. Distance between branches corresponds to evolutionary distance as measured by how much the sequences differ.
This representation shows the structure of an ALDH from rats. Colors show the conserved sequence motifs identified by Hempel and Nicholas. The spheres represent very highly conserved residues within each motif.
Another product of Nicholas and Hempel's analysis is identification of conserved residues (amino-acids in a protein chain are called residues, and a conserved residue stays the same across sequences). They found four 100 percent conserved residues: the same amino-acid at the same position in all 145 sequences. This is reduced from 23 in their 1993 alignment of 16 ALDHs. Twelve other residues are 95 percent conserved. The logic of evolution holds that conserved residues should be important in structure and function. Nicholas and Hempel's analysis of ALDH supports this view. The four invariant residues participate in binding with other molecules involved in ALDH's catalytic function.
Most of the other highly conserved residues, they found, are part of motifs: highly conserved short sequences that cluster around the enzyme's active site, the part of the 3D structure where it binds with other molecules to carry out its function. The researchers also extended this kind of analysis to the 13 sub-families, using computational tools to search out what residues are conserved within a particular sub-family and discriminate that group from other ALDHs. This is a particular interest of Nicholas, who sees it as offering the potential to develop drugs that interact with only a particular ALDH, rather than the entire family: "If with chemotherapy you could give the patient an inhibitor for the ALDH that inactivates the chemotherapy, you could get by with a lower dose. You wouldn't want to inhibit basic metabolism, which inhibiting a broad spectrum of ALDHs would do. This work is a first step in that direction." A major finding from the analysis is identification of 10 sequence motifs that are themselves highly conserved. "These are stretches of sequence from five amino-acids up to 14 or 15," says Nicholas. "They're spread uniformly along the entire ALDH sequence, but when you look at the 3D structure they fold back together and come into contact with each other." This interplay between sequence and 3D structure, uncovered by Nicholas and Hempel's analysis, provides a new way of looking at proteins, say the researchers. "You can gain a great deal of insight," says Hempel, "looking at conserved residues and how they relate to 3D structure." With respect to Sjögren-Larsson syndrome, detailed analysis of several mutant ALDH sequences associated with SLS suggests hypotheses for precisely what happens to cause the syndrome, which Nicholas and Hempel are investigating with computational simulations. "It gives a synergy to the whole process of understanding how this works." A particularly interesting finding, observes Nicholas, arises from seeing that the conserved motifs contain a small, highly conserved, water-avoiding (hydrophobic) amino-acid not directly involved in enzyme function. "Each of the motifs," he says, "has not only a functional definition but also a structural definition." These small, hydrophobic amino-acids, he notes, are generally involved with turns or tight-packing with other proteins. "This represents information about the interplay between sequence and structure that's generalizable to other proteins," says Nicholas. "It indicates that perhaps the details of how a protein folds into its 3D shape are determined by these smaller amino-acids that form at critical turns or packing junctions, where several strands of the protein come together to carry out function." These insights from sequence shed new light on traditional ways of thinking about protein structure, which tends to focus on 3D patterns (known as secondary structure) such as helices and sheets. "Our results tell us," says Nicholas, "that it's the residues at the ends of these structures that are conserved. Perhaps you can think of a protein not as a bunch of helices, but as defined by the ends of the helices, where it turns. These may be two equally informative, inversely related ways of thinking, and the genomics data suggests we should look at the turns more than we have."
As a goal for this kind of work, the researchers hope to be able to build a sophisticated description based on conserved residues that identifies sequence elements that can predict 3D structure, not only in ALDH but in other protein families. "This provides a model for other systems," says Hempel. "We're developing tools needed to rigorously analyze and determine highly conserved regions and to correlate them with structure. To look at long segments with high degrees of similarity, we need capability like the Pittsburgh Supercomputing Center gives us." "To find these 10 different regions of conserved sequence," notes Nicholas, "took 60 hours of processing on our sequence-analysis facility [a VAX 8400]. You're not going to find this with desktop PCs."
John Hempel, University of Pittsburgh
Hugh Nicholas, Pittsburgh Supercomputing Center
Hardware: SEQ, the PSC Sequence Analysis Resource
On the Web:
Structure Function Relationships Among Aldehyde Dehydrogenases.
Aldehyde dehydrogenase complexed with NAD (Animation).
John Perozich, Hugh Nicholas, Bi-Cheng Wang, Ronald Lindahl & John Hempel, "Relationships within the aldehyde dehydrogenase extended family," Protein Science 8, 137-46 (1999).
John Hempel, Hugh Nicholas & Ronald Lindahl, "Aldehyde dehydrogenases: Widespread structural and functional diversity within a shared framework," Protein Science 2, 1890-1900 (1993).
Writing: Michael Schneider
HTML Layout/Coding: R. Sean Fulton
© Pittsburgh Supercomputing Center (PSC), Revised: Sept. 14, 1999
<urn:uuid:be3c5f80-65ea-4e15-8487-c0c6dcbc06d4>
2.828125
2,423
Knowledge Article
Science & Tech.
32.225768
The fire triangle is used to show the rule that a fire needs three things to burn. These things are heat, fuel, and oxygen. If one of these three is removed, the fire will be put out. In the middle of the fire triangle there is also a chemical reaction. Without heat, a fire cannot begin. If a fire becomes cool enough, it will not keep burning. Heat can be removed by using water. This only works on some types of fire. Separating burning fuels from each other can also reduce the heat. In forest fires, burning logs are separated and placed into safe areas where there is no other fuel. Turning off the electricity in an electrical fire removes the heat source, but other fuels may have caught fire. They will continue burning until the firefighters deal with them and their fire triangles. Without oxygen, a fire cannot start. Oxygen may be removed from a fire by covering it in some way. Some foams and heavy gases (for example, carbon dioxide) are often used for this. The fire can also be closed off away from a source of oxygen. Once all the oxygen in the closed-off area is used by the fire, it will go out.
<urn:uuid:35fcb161-a49d-45f8-89fc-c11422d80dda>
4.4375
253
Knowledge Article
Science & Tech.
57.903295
Tree rings were reliable for thousands of years, but (like everyone else) they quit doing their job properly in the 1960s. Fortunately, we have conscientious climate scientists like Michael Mann to show the trees what they should have been doing. Not to be outdone at hiding the decline, Hansen and USHCN showed that they could alter the data in situ when they turned a US cooling trend into a warming trend. This appears to be a superior approach to Mann’s grafting on a different data set. Honorable mention goes to the IPCC, which grafted altimetry sea level data on to tide gauge data to show an increase in sea level rise rates. Apparently tide gauges also went on strike in 1994.
<urn:uuid:7cedf6f9-89d2-49df-bd00-d021d6d501b1>
2.90625
152
Personal Blog
Science & Tech.
54.222032
Global Warming Science - www.appinsys.com/GlobalWarming [last update: 2011/01/08] The Union of Concerned Scientists’ (UCS) “Climate Choices” web site (published in 2006) says: “here in the Northeast, the climate is changing. Records show that spring is arriving earlier, summers are growing hotter, and winters are becoming warmer and less snowy. These changes are consistent with global warming, an urgent phenomenon driven by heat-trapping emissions from human activities” Northeast US Region Northeast data are available from the NOAA / NCDC website – the following figures are from there [http://www.ncdc.noaa.gov/oa/climate/research/cag3/nt.html] UCS: “spring is arriving earlier”. Spring arrives in March in the Northeast. The warmest and coldest Marches were more than 50 years ago – perhaps the climate is stabilizing. UCS: “summers are growing hotter”. The hottest month is July – shown in the following figure. No significant long-term trend. Warmest July: 1955. Coldest July: 2000. But they said “summer”. Again, no significant long-term trend. Warmest summer: 1949. UCS: “winters are becoming warmer and less snowy”. January is the coldest month – no significant long-term trend. Warmest January: 1932. Winters have warmed slightly due to some very cold winters in the early 1900s. Warmest winter: 2002, second warmest: 1932. But there is no significant winter warming over the last 80 years. As for snow cover: “snow cover duration is variable in both space and time. The duration of a snow cover of 2·5 cm or greater varies from greater than 100 days in northern New England to less than 20 days across areas of Delaware, Maryland and West Virginia. Temporally, snow cover duration for the region as a whole was very short from the late 1940s through to the mid-1950s. From the late 1950s to the end of the period snow cover duration has varied around a consistent mean value. No long-term trends in snow cover duration are apparent in the record for the northeast USA.” [http://onlinelibrary.wiley.com/doi/10.1002/(SICI)1097-0088(19971130)17:14%3C1535::AID-JOC215%3E3.0.CO;2-7/abstract] Another study found that areas with low total snowfall have been receiving less, while areas that receive significant snow have no long-term trend. [http://journals.ametsoc.org/doi/pdf/10.1175/JTECH2017.1] The following figures are from that study showing mean annual snowfall anomaly for locations with 5 inches (left) and 40 inches (right). Beware of short-term data sets. Climate change follows an approximately 60-year cycle. Various studies show decreasing snow cover over the last 40 years. For example, the figure below-left shows change in snow cover days for the 1965 – 2005 period [http://www.cleanair-coolplanet.org/information/pdf/winterindicators_09.pdf]. The figure below right indicates the 1965-2005 trend on the figure from above-right. Temperature Extremes in the Northeast The extreme climate events in each state can be found at this NOAA / NCDC web site: The following table summarizes the hot and cold records for most of the states in the US Northeast region (these are the hottest / coldest days recorded – not state averages for the given years). Although on the south end of the Northeast, the following link describes the greatest storms in the greater Washington-Baltimore area: [http://www.erh.noaa.gov/er/lwx/Historic_Events/StormsOfCentury.html] So what are these “concerned scientists” so concerned about? 
According to their mission statement: “UCS seeks a great change in humanity's stewardship of the earth.” [http://www.ucsusa.org/about/] The UCS was started in 1969 as an anti-nuclear weapon organization, but switched its focus to global warming when the Soviet Union collapsed and it became clear that large amounts of funds were available from the left-wing foundations (Pew Trusts, Joyce Foundation, MacArthur Foundation…) For more information on the UCS see: [http://www.capitalresearch.org/pubs/pdf/v1186063502.pdf] And details about their funding: [http://activistcash.com/organization_financials.cfm/o/145-union-of-concerned-scientists] The UCS didn’t mention the Northeast region average annual temperature – shown below. There has been no warming trend over the last 80 years – in other words, recent warming is not unprecedented. The globally “hot” year of 2010 was below normal in the Northeast US. The last 15 years: no warming.
<urn:uuid:09b8c091-f79f-4743-9503-158b13b8d556>
3.234375
1,108
Knowledge Article
Science & Tech.
65.779754
How strange could alien life be? An indication that the fundamental elements that compose most terrestrial life forms might differ out in the universe was found in unusual Mono Lake. Bacteria in Mono's lakebed give indications that they not only can tolerate a large abundance of normally toxic arsenic, but can possibly use arsenic as a replacement for phosphorus, an element needed by every other known Earth-based life form. The result is surprising -- and perhaps controversial -- partly because arsenic-incorporating organic molecules were thought to be much more fragile than phosphorus-incorporating organic molecules. Pictured above is 7.5-km wide Mono Lake as seen from nearby Mount Dana. The inset picture shows GFAJ-1, the unusual bacteria that might be able to
Inset: Jodi Switzer Blum
<urn:uuid:ac38ddb0-9993-46a2-aa94-2ecbf8b6aafb>
2.78125
168
Truncated
Science & Tech.
23.7656
discovery by CoRoT
...have been called “super-Earths.” Its density is similar to that of Earth, and thus it is a rocky planet like Earth, the first such planet to be confirmed. Another CoRoT discovery, CoRoT-2b, has a mass 22 times that of Jupiter and orbits its star every 4.26 days. CoRoT-2b is either a very large planet or a small brown dwarf with an unusually small orbital period.
<urn:uuid:4391cd11-ee0f-4d13-83b9-5295d5c59cd0>
3.0625
161
Knowledge Article
Science & Tech.
68.512203
At the interface between two materials, e.g. air and water, light may be reflected at the interface or refracted (bent) into the new medium. For Reflection, the angle of incidence = angle of reflection. For Refraction, the light is bent when passing from one material to another at an angle other than perpendicular. A measure of how effective a material is in bending light is called the Index of Refraction (n), where: Index of Refraction in Vacuum = 1 and for all other materials n > 1.0. Most minerals have n values in the range 1.4 to 2.0. A high Refractive Index indicates a low velocity for light travelling through that particular medium. Snell's law (n1 sin θ1 = n2 sin θ2) can be used to calculate how much the light will bend on travelling into the new medium. If the interface between the two materials represents the boundary between air (n ~ 1) and water (n = 1.33) and if the angle of incidence = 45°, using Snell's Law the angle of refraction = 32°. The equation holds whether light travels from air to water, or water to air. In general, the light is refracted towards the normal to the boundary on entering the material with the higher refractive index and is refracted away from the normal on entering the material with the lower refractive index. In labs, you will be examining refraction and actually determining the refractive index of various materials.
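A small sketch of that calculation (standard Snell's law, with values matching the air-water example above):

    import math

    def refraction_angle(n1, n2, theta1_deg):
        # Snell's law: n1 * sin(theta1) = n2 * sin(theta2)
        s = n1 * math.sin(math.radians(theta1_deg)) / n2
        if abs(s) > 1:
            return None  # total internal reflection: no refracted ray
        return math.degrees(math.asin(s))

    print(refraction_angle(1.0, 1.33, 45))    # air -> water: about 32.1 degrees
    print(refraction_angle(1.33, 1.0, 32.1))  # water -> air: back to about 45 degrees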
<urn:uuid:726315e9-8d3c-412c-9660-e1bd32e396f3>
4.21875
304
Knowledge Article
Science & Tech.
57.444118
In my article on parallel merge, I developed and optimized a generic parallel merge algorithm. It utilized multiple CPU cores and scaled well in performance. The algorithm was stable, leading directly to Parallel Merge Sort. However, it was not an in-place implementation. In this article, I develop an in-place parallel merge, which enables in-place parallel merge sort. The STL implements sequential merge and in-place merge algorithms, merge() and inplace_merge(), which run on a single CPU core. The merge algorithm is O(n), leading to O(n lg n) merge sort, while the in-place merge is O(n lg n), leading to O(n (lg n)²) in-place merge sort. Because merge is faster than in-place merge, but requires extra memory, STL favors using merge whenever memory is available, copying the result back to the input array (under the hood). The divide-and-conquer merge algorithm described in Parallel Merge, which is not in-place, is illustrated in Figure 1. At each step, this algorithm moves a single element X from the source array T to the destination array A, as follows.
Figure 1. A merge algorithm that is not in place.
The two input sub-arrays of T are from [p1 to r1] and from [p2 to r2]. The output is a single sub-array of A from [p3 to r3]. The divide step is done by choosing the middle element within the larger of the two input sub-arrays, at index q1. The value at this index is then used as the partition element to split the other input sub-array into two sections: less than X, and greater than or equal to X. The partition value X (at index q1 of T) is copied to array A at index q3. The conquer step recursively merges the two portions that are smaller than or equal to X (indicated by light gray boxes and light arrows). It also recursively merges the two portions that are larger than or equal to X (indicated by darker gray boxes and dark arrows). The algorithm proceeds recursively until the termination condition, when the shorter of the two input sub-arrays has no elements. At each step, the algorithm reduces the size of the array by one element. One merge is split into two smaller merges, with the output element placed in between. Each of the two smaller merges will contain at least N/4 elements, since the left input array is split in half. This algorithm is O(n) and is stable. It was shown not to be performance-competitive in its pure form. However, performance became equivalent to other merges in a hybrid form, which utilized a simple merge to terminate recursion early. The hybrid version enabled parallelism by use of the divide-and-conquer method and scaled well across multiple cores, outperforming STL merge by over 5X on a quad-core CPU while being limited by memory bandwidth. It was also used as the core of Parallel Merge Sort, which scaled well across multiple cores, utilized a further hybrid approach to gain more speed, and provided a high degree of parallelism, Θ(n/lg²n).
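To make the divide step concrete, here is a minimal sequential Python sketch of the divide-and-conquer merge (an assumed translation for illustration; the article's implementation is C++ with parallel tasking, and this sketch omits the simple-merge cutoff and the extra bookkeeping needed for fully stable handling of equal keys):

    import bisect

    def dac_merge(T, p1, r1, p2, r2, A, p3):
        # Merge sorted T[p1..r1] and T[p2..r2] (inclusive) into A starting at p3.
        if r1 - p1 < r2 - p2:
            p1, r1, p2, r2 = p2, r2, p1, r1        # make the first range the larger
        if r1 - p1 + 1 <= 0:
            return                                 # larger range empty => both empty
        q1 = (p1 + r1) // 2                        # middle of the larger range
        x = T[q1]                                  # partition element X
        q2 = bisect.bisect_left(T, x, p2, r2 + 1)  # split point in the other range
        q3 = p3 + (q1 - p1) + (q2 - p2)            # X's final position in the output
        A[q3] = x
        dac_merge(T, p1, q1 - 1, p2, q2 - 1, A, p3)      # merge the "before X" parts
        dac_merge(T, q1 + 1, r1, q2, r2, A, q3 + 1)      # merge the "after X" parts

    T = [1, 3, 5, 2, 4, 6]
    A = [None] * 6
    dac_merge(T, 0, 2, 3, 5, A, 0)
    print(A)   # [1, 2, 3, 4, 5, 6]

Because the two recursive calls touch disjoint regions of A, they are exactly the independent sub-problems a parallel runtime can run on separate cores.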
<urn:uuid:ef30415b-999c-4e19-8443-3967628122c0>
3.125
669
Knowledge Article
Software Dev.
52.563447
Name: Mike S.
I think I understand the basics of a homopolar generator. A disc is spun in front of a magnet and an electrical current flows in the disc. My question is: can the disc be attached to the magnet (insulated and glued) and have both spun together, and have an electrical current formed on the disc? The reason I ask is that there seems to be conflicting information about it and I cannot seem to find any "reliable" information on it. Thank you.
No, that does not work. You have a question on one of the most subtle parts of physics. A charged particle moving through a stationary magnetic field feels a magnetic force. However, if that field is moving with the particle, though the field may be of the same strength as in the previous case, the particle feels no force! It can, perhaps, best be explained by realizing that the force is really between the (moving) charged particle producing the field and the (moving) charged particle acted on by the field. The field is a construct which (sometimes) simplifies the thinking about magnetic forces (and sometimes complicates the thinking). The physics is even more subtle. Two positive charged particles at rest repel each other. If they move at the same speed parallel to each other, there is an attractive magnetic force. Newtonian relativity says that if you run with the moving charges, the physics must be the same. However, that eliminates the magnetic force for the running observer! In fact, this problem was a main impetus for Lorentz and Einstein to develop the theory of special relativity. If you watch the particles flying by, Einstein says their clocks slow down and they are repelled more slowly, which is another way of saying the magnetic attraction slows the motion apart induced by the electrostatic repulsion. I hope this is clear. It is NOT simple. Let me know if you'd like me to try again. Best, Dick... Richard J. Plano
Update: June 2012
<urn:uuid:3bdbd406-2560-4a73-9e21-f6f0cf72f919>
3.078125
444
Q&A Forum
Science & Tech.
58.390072
(Picture: fotosearch)
The Brightest Planet
It is visible with the naked eye in the southern skies of Africa and, I'm sure, from many other continents; in South Africa you can spot it in the eastern sky from as early as 7:15 pm, and it looks like a bright shining star. Venus and Earth are similar in size, mass, density, composition, and distance from the sun, and that is about it in comparison. Venus is covered by a thick, rapidly spinning atmosphere, creating a world with temperatures hot enough to melt lead and a surface pressure 90 times that of Earth. Because of its proximity to Earth and the way its clouds reflect sunlight, Venus appears to be the brightest planet in the sky. Like Mercury, Venus can be seen periodically passing across the face of the sun using a telescope with the correct attachments in place to protect a person's eyesight. These transits occur in pairs, with more than a century separating each pair, and have been observed on 6 occasions to my knowledge; approximate calculations reckon it will be visible again in June 2012.
Toxic Atmosphere
Venus's atmosphere consists mainly of carbon dioxide, with clouds of sulfuric acid droplets, and only trace amounts of water have been detected in its atmosphere. The thick atmosphere allows the sun's heat to enter but does not release it, resulting in surface temperatures over 880 degrees Fahrenheit (470 degrees Celsius). Probes that have landed on Venus have not survived more than a few hours before being destroyed by the incredibly high temperatures. The Venusian year (orbital period) is about 225 Earth days long, while the planet's rotation period is 243 Earth days, making a Venus day about 117 Earth days long. Venus rotates retrograde (east to west) compared with Earth's prograde (west to east) rotation. As Venus moves forward in its solar orbit while slowly rotating "backwards" on its axis, the cloud-level atmosphere zips around the planet in the opposite direction from the rotation every four Earth days, driven by constant hurricane-force winds. How this atmospheric "super rotation" forms and is maintained continues to be a topic of scientific investigation. About 90 percent of the surface of Venus appears to be recently solidified basalt lava; it is thought that the planet was completely resurfaced by volcanic activity 300 million to 500 million years ago. Sulfur compounds, possibly attributable to volcanic activity, are abundant in Venus's clouds. The corrosive chemistry and dense, moving atmosphere cause significant surface weathering and erosion. Radar images of the surface show wind streaks and sand dunes. Craters smaller than 0.9 to 1.2 miles (1.5 to 2 kilometers) across do not exist on Venus, because small meteors burn up in the dense atmosphere before they can reach the surface.
Geological Features
More than a thousand volcanoes or volcanic centers larger than 12 miles (20 kilometers) in diameter dot the surface of Venus. Volcanic flows have produced long, sinuous channels extending for hundreds of kilometers. Venus has two large highland areas: Ishtar Terra, about the size of Australia, in the north polar region, and Aphrodite Terra, about the size of South America, straddling the equator and extending for almost 6,000 miles (10,000 kilometers). Maxwell Montes, the highest mountain on Venus and comparable to Mount Everest on Earth, is at the eastern edge of Ishtar Terra. Venus has an iron core about 1,200 miles (3,000 kilometers) in radius.
Venus has no global magnetic field; though its core iron content is similar to that of Earth, Venus rotates too slowly to generate the type of magnetic field that Earth has.
<urn:uuid:d9efd731-ba5a-49a3-8883-b7cc20a23bcc>
3.875
767
Comment Section
Science & Tech.
41.17781
The aim of this study was to develop an understanding of the interactions between glaciers and their permafrost beds that will permit models of glaciers to realistically parameterise ice motion. Improvement of such models is central to their success in predicting glacier response to climate change. A multifaceted investigation of the hydrology, sedimentology, geochemistry and deformation of proglacial and subglacial permafrost at the margins of and beneath cold-based glaciers was carried out at the margins of the Joyce, Garwood and Hobbs Glaciers. Ground Penetrating Radar (GPR) surveys of the Joyce, Garwood and Hobbs Glaciers' basal zones and moraine were completed. Samples of ice from the basal zones of the Joyce, Garwood and Hobbs glaciers, together with ground ice from the upper Garwood Valley, were collected and returned to New Zealand for isotopic and solute composition analysis of the ice. Meteorological stations were installed on both the Joyce and Garwood Glaciers to allow detailed examination of the radiation inputs into the glacial system. A stream gauging station was installed on an unnamed stream draining the Joyce Glacier. The gauging station was fitted with an ultrasonic stage sensor, an electrical conductivity sensor and a turbidimeter to monitor suspended sediment transfer. Sampling frequency was every 5 minutes, averaged every 15 minutes. Hydrochemistry samples for ionic concentration were collected daily from three locations on Holland Stream. Alkalinity and pH were also measured.
<urn:uuid:3ba9450a-5152-4222-9488-ceb97717d99c>
2.984375
298
Academic Writing
Science & Tech.
24.444474
In addition to this Mono Lake discovery changing the basic understanding of life — oh, you know, no big deal — some scientists say that this outstanding finding could mean great things for the advancement of green technology. First off, because this microbe literally builds itself out of arsenic, it could be a key element in cleaning up arsenic-laden toxic waste areas. Just throw a bunch of the tiny organisms into the mess and they’ll eat up all of the arsenic. The other key discovery has to do with revolutionizing green energy. Phosphorus has been integral to the formation of fertilizers and is part of the reason that ethanol — which is heavily dependent on phosphorus — is currently being phased out as a source of alternative energy. It takes a ton of phosphorus to grow crops that will yield ethanol, and phosphorus is becoming scarcer by the minute. We’ve got a ton of arsenic lying around — that most of us don’t want anywhere near us — so researchers are expected to get to work trying to figure out if an arsenic-based ethanol could be created. In addition to possibly being a good alternative fuel, an arsenic-based ethanol crop would be attractive because its unique chemical building blocks would make it unattractive to outside pests — read: no pesticides or fungicides needed.
<urn:uuid:b57a61bf-6517-4a56-bd21-25d917acce09>
3.15625
264
Truncated
Science & Tech.
36.407857
The rad is a unit of absorbed radiation dose. The rad was first proposed in 1918 as "that quantity of X rays which when absorbed will cause the destruction of the malignant mammalian cells in question..." It was defined in CGS units in 1953 as the dose causing 100 ergs of energy to be absorbed by one gram of matter. It was restated in SI units in 1970 as the dose causing 0.01 joule to be absorbed per kilogram of matter. The United States Nuclear Regulatory Commission requires the use of the units curie, rad and rem as part of the Code of Federal Regulations 10CFR20. The older quantity and unit of radiation exposure (ionization in dry air) is the "roentgen" (R), where 1 R is equal to 2.58 × 10⁻⁴ C/kg. The older quantity and unit of absorbed dose is the "rad," where 1 rad = 0.01 J/kg. The material absorbing the radiation can be tissue or any other medium (for example, air, water, lead shielding, etc.). To convert absorbed dose to dose equivalent, or "rem," the biological effects on humans are considered, which is done by modifying the dose with a quality factor. For practical scenarios with low "linear energy transfer" (LET) radiation, such as gamma or x rays, 1 R = 1 rad = 1 rem. The Système International has introduced a rival unit, the gray (Gy); the rad is equal to the centigray, and 100 rads are equal to 1 Gy. The continued use of the rad is...
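The unit relationships quoted above reduce to simple scale factors. A minimal sketch of the conversions; note that the quality factor of 1 applies only to the low-LET case described in the text:

```python
# Absorbed-dose and exposure conversions from the definitions above.
RAD_TO_GRAY = 0.01               # 1 rad = 0.01 J/kg = 0.01 Gy (one centigray)
ROENTGEN_TO_C_PER_KG = 2.58e-4   # exposure: 1 R = 2.58e-4 C/kg

def rem_from_rad(dose_rad: float, quality_factor: float = 1.0) -> float:
    """Dose equivalent in rem; Q = 1 for low-LET radiation (gamma, x rays)."""
    return dose_rad * quality_factor

dose_rad = 150.0
print(dose_rad * RAD_TO_GRAY, "Gy")    # 1.5 Gy
print(rem_from_rad(dose_rad), "rem")   # 150.0 rem for low-LET radiation
```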
<urn:uuid:f026ec7c-a6b2-488d-8012-26df38f607cc>
3.5625
341
Knowledge Article
Science & Tech.
68.697549
public abstract class Permission
A Permission represents the status of the caller's permission to perform a certain action. You can query whether the action is currently allowed and whether it is possible to acquire the permission so that the action will be allowed in the future. There is also an API to actually acquire the permission and one to release it. As an example, a Permission might represent the ability for the user to write to a Settings object. This Permission object could then be used to decide whether it is appropriate to show a "Click here to unlock" button in a dialog, and to provide the mechanism to invoke when that button is clicked. All known members inherited from class GLib.Object.
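The pattern the documentation describes can be illustrated with a minimal sketch. The class below is a hypothetical Python stand-in mirroring the query/acquire/release shape described above, not the real GLib binding:

```python
class Permission:
    """Hypothetical stand-in for the abstract Permission pattern above."""

    def __init__(self, allowed: bool = False, can_acquire: bool = True):
        self.allowed = allowed          # is the action currently allowed?
        self.can_acquire = can_acquire  # could acquiring succeed later?

    def acquire(self) -> bool:
        """Try to obtain the permission (e.g. prompt for authentication)."""
        if self.can_acquire:
            self.allowed = True
        return self.allowed

    def release(self) -> None:
        self.allowed = False


# Deciding whether to show a "Click here to unlock" button in a dialog:
settings_write = Permission(allowed=False, can_acquire=True)
show_unlock_button = not settings_write.allowed and settings_write.can_acquire
if show_unlock_button and settings_write.acquire():
    pass  # proceed to write to the Settings object
```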
<urn:uuid:f9cdc050-32db-4bf4-a8e3-bceeef4db4c0>
2.71875
140
Documentation
Software Dev.
36.296539
Massive glitch moves magnetar modelling forward
12 Apr 2012
The chance discovery with ESA's INTEGRAL observatory, in 2004, of highly energetic X-rays emanating from a young neutron star with an extremely strong magnetic field provided scientists with a challenge: to explain how these objects, also known as magnetars, produce such energetic non-thermal radiation. A new, comprehensive study of one particular magnetar, observed by several observatories over a period of 27 months, may provide the key to homing in on the processes at play in the extreme environments of these exotic stars. The term 'magnetar' was coined in the early 1990s to describe highly magnetised, rotating neutron stars that were proposed to explain the outbursts of low-energy gamma rays from a handful of neutron stars, known as Soft Gamma-ray Repeaters (SGRs). Since then, the magnetar family has expanded to include Anomalous X-ray Pulsars (AXPs), as astronomers now recognise that these too are powered by extraordinarily strong magnetic fields. The immense magnetic fields of these stars – billions of times stronger than any magnetic field on Earth – create a natural laboratory in space to experiment with high-energy particles. "We are dealing with extreme physics in extreme environments," notes Lucien Kuiper from SRON Netherlands Institute for Space Research in Utrecht, the Netherlands. Kuiper is the lead author on a recent paper, published in the Astrophysical Journal, describing a detailed study of one particular magnetar, 1E 1547.0-5408.
Illustration of a magnetar. Credit: ESA
Astronomers already know that the strong magnetic field of a magnetar acts like a brake, slowing down the rate at which the neutron star spins. Occasionally, this smooth deceleration is disrupted by a sudden increase in spin rate, known as a timing glitch. A 'star quake' near the magnetar's surface is thought to be responsible for the timing glitch, and it is often followed by a large outburst of radiation – the emissions that are associated with SGRs and AXPs. Until a chance encounter in 2004, the persistent emission from AXPs was assumed to be present only at low X-ray energies. "Nobody saw any reason to look for emission from AXPs above 10 keV," says Wim Hermsen, also from SRON Netherlands Institute for Space Research and a co-author of the study. "It was only because of the large field of view of INTEGRAL that we serendipitously detected an AXP emitting non-thermal emission that extended up to about 200 keV." Following that discovery, theoreticians have been trying to explain this unexpected high-energy emission. "How this magnetic energy is transformed into non-thermal energetic X-rays is still strongly debated," says Hermsen. To help solve this mystery, the astronomers wanted to gather data in the low- and high-energy X-ray bands from a magnetar undergoing an outburst. Given the rarity of these objects – only about 20 have been discovered to date – and the unpredictable nature of these outbursts, this was quite a challenge. Luck was on their side when in January 2009 an armada of space missions, including ESA's INTEGRAL observatory, all spotted extreme 'bursting activity' – hundreds of short-duration bursts that typically lasted 0.1 s – from magnetar 1E 1547.0-5408. Some of the 200 short-duration bursts seen by INTEGRAL in January 2009.
Credit: ESA Using data collected by INTEGRAL and by NASA's Swift and Rossi X-ray Timing Explorer satellites, Kuiper and his colleagues carried out a detailed study of how the emissions from the magnetar varied over a 27-month period. Shortly after starting their analyses, the astronomers hit the jackpot when they discovered a very peculiar timing glitch – the most dramatic sudden change, by 70%, in the spin down of a magnetar that has ever been detected. This timing glitch was accompanied by an outburst of low- and high-energy X-rays. High-energy unpulsed emission was detected immediately after the timing glitch, which suggests that the magnetar has a 'corona' of charged particles surrounding the neutron star. "Unlike ordinary pulsars, which only emit radiation along field lines originating from their poles, magnetars apparently have such strong fields that particles can be accelerated to produce X-rays all around the neutron star," says Hermsen. The team's data also revealed an unexpected discovery: a new transient high-energy pulse – a distinct new feature in the pulse profile – that decayed to undetectable levels in about 300 days. This detection may prove key to unravelling the processes at play in the extreme environment of a magnetar. "This case is unique; it is the first time that we have seen the creation of both pulsed and unpulsed luminous X-ray emission after a star quake," says Kuiper. Unlike the unpulsed high-energy X-rays, the pulsed emission wasn't released immediately; it was only evident in the observations 11 days after the timing glitch. The astronomers propose that this delay is consistent with one particular model of the many that have been created to describe these exotic stars. A delay in the appearance of pulsed emission is consistent with one particular model of magnetars. Image courtesy of Kuiper et al. In a model put forward in 2009 by Andrei Beloborodov from Columbia University in New York, USA, a star quake could twist the magnetic field lines that are anchored to a star's surface; these then gradually untwist, releasing magnetic energy and producing radiation. However, Beloborodov's model shows that the untwisting would occur in a peculiar way. Following a delay after the star quake, the untwisting would create a quasi-stable bundle of twisted field lines above the magnetic pole - this could explain the origin of the observed transient emission, say the astronomers. "Basically, in this compact bundle of field lines, a mildly relativistic particle stream could boost the thermal X-rays from the surface to high energies, which would then be beamed along the magnetic dipole axis. After the maximum luminosity of this new high-energy pulse is reached, the energy output would decay to lower levels until a quasi-steady state is reached," says Hermsen. "This quasi-steady state also offers a promising explanation for the persistent X-ray emission from magnetars." Although the fine details of what happens in the environs of a magnetar remain elusive, this new study has allowed astronomers to home in on possible mechanisms. "INTEGRAL continues to play an important role in the study of magnetars, in part because of its unique capabilities to image targets which are emitting radiation with energies exceeding 200 keV," comments Christoph Winkler, ESA's INTEGRAL Project Scientist. "These findings provide a new piece in the very complex puzzle of understanding how high-energy emissions are generated by magnetars," concludes Winkler. 
Notes for editors
Magnetars are pulsars (spinning neutron stars) characterised by rotation periods between 2 and 10 seconds, occasional episodes of extremely enhanced emission (about 10–100 times the usual value) and intense, short bursts of X-rays and gamma rays; these highly energetic events are presumed to be powered by an intense magnetic field (about 10¹⁴–10¹⁵ G). The magnetar 1E 1547.0-5408 is an Anomalous X-ray Pulsar with the fastest rotation period (2.069 seconds) yet observed for this type of object. Bursting activity was detected from 1E 1547.0-5408 in October 2008 by NASA's Swift, Fermi, Rossi X-ray Timing Explorer (RXTE) and Chandra space telescopes. In January 2009 it was again observed to be active by Swift, Fermi, Konus-Wind (US/Russia), RHESSI (NASA), Suzaku (JAXA) and INTEGRAL (ESA). The study described in this article is based on data from the INTEGRAL observatory and from the RXTE and Swift telescopes, and covers a period of 27 months, from the onset of bursting activities in October 2008 until January 2011. INTEGRAL is an ESA project with instruments and science data centre funded by ESA Member States (especially the Principal Investigator countries: Denmark, France, Germany, Italy, Spain, Switzerland) and Poland, and with the participation of Russia and the USA.
Reference publications:
Kuiper, L., Hermsen, W., den Hartog, P.R., and Urama, J.O., 2012, "Temporal and spectral evolution in X- and gamma-rays of magnetar 1E 1547.0-5408 since its October 2008 outburst: the discovery of a transient hard pulsed component after its January 2009 outburst," ApJ, 748, 133. doi:10.1088/0004-637X/748/2/133
Beloborodov, A.M., 2009, "Untwisting magnetospheres of neutron stars," ApJ, 703, 1044. doi:10.1088/0004-637X/703/1/1044
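For a sense of scale, fields in the range quoted above can be estimated from pulsar timing alone, via the standard magnetic-dipole braking relation B ≈ 3.2×10¹⁹ √(P Ṗ) gauss. A minimal sketch: the period is the 2.069 s quoted in the notes, but the period derivative below is an illustrative value of roughly the right magnitude for a magnetar, not a figure from the article:

```python
import math

# Characteristic surface dipole field from pulsar timing:
# B ~ 3.2e19 * sqrt(P * Pdot) gauss (standard dipole spin-down estimate).
P = 2.069        # rotation period of 1E 1547.0-5408 in seconds (from the text)
P_DOT = 2.3e-11  # period derivative, s/s -- illustrative magnitude, assumed

B = 3.2e19 * math.sqrt(P * P_DOT)
print(f"B ~ {B:.1e} G")  # ~2e14 G, within the magnetar range quoted above
```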
<urn:uuid:2d5985c1-835b-46d1-aa74-fd18b6a68e4f>
3.53125
2,055
Knowledge Article
Science & Tech.
38.258873
Heavy Rain in the U.S. Midwest
Image produced by Hal Pierce (SSAI/NASA GSFC) and caption by Steve Lang (SSAI/NASA GSFC). Severe storms brought heavy rain to the mid-section of the United States when a developing area of low pressure over the Central Plains combined with high pressure anchored off the Southeast coast of the United States to channel moisture up from the Gulf of Mexico. In addition to high winds, hail, and tornadoes, the strong storms brought heavy rain to the region. The hardest hit area was central Indiana where nearly 11 inches of rain was reported from Friday evening (June 6, 2008) into Saturday morning. The rain resulted in widespread flooding. As of June 9, ten people had died in the floods and stormy weather, reported the Associated Press. This image shows rainfall totals for the 7-day period between June 2 and June 9, 2008. Rainfall amounts exceeding 100 millimeters (approximately 4 inches, shown in green) extend from north-central Oklahoma up into South Dakota and eastward into Michigan and Ohio. Higher amounts on the order of 200 to 300 mm (about 8 to 12 inches, shown in yellow) cover significant portions of this same area. The highest totals for the period (shown in red) exceed 400 mm (16 inches) and are located over central Indiana. The image was made from data generated by the near-real-time Multi-satellite Precipitation Analysis (MPA) at NASA’s Goddard Space Flight Center, which monitors rainfall over the global tropics. In the analysis, rainfall data from a number of satellites are calibrated using data collected by the Tropical Rainfall Measuring Mission satellite (TRMM). TRMM was placed into service in November of 1997 to measure rainfall over the global tropics using a combination of passive microwave and active radar sensors. This image originally appeared on the Earth Observatory.
<urn:uuid:d9a3cc0c-e9a4-4d14-b9e6-e2cc39b04bc4>
3.25
400
Knowledge Article
Science & Tech.
47.380605
ENSO forecast based on tidal forcing with an Artificial Neural Network
Investigation submitted by Per Strandberg
Here on this page, you are going to find evidence that tidal forcing is one of the most important, if not the most important, drivers of ENSO variations. That tidal forcing could be the main explanation for ENSO variations was something I stumbled upon when I examined possible ENSO drivers. After the previous results I obtained using an Artificial Neural Network (ANN) to analyze the correlations between the global mean temperature and possible forcing drivers, which can be viewed here, I turned my attention to the ENSO index by looking into the Multivariate ENSO Index (MEI). One thing I found when I analyzed the correlations to ENSO was that there is a strong correlation between variations in Earth’s rotation and both the global mean temperature and the ENSO index. What we are talking about here are small variations in Earth’s rotation, on the order of milliseconds. One other factor with a correlation to ENSO is of course SOI, but I also found correlations to SST, PDO, and the Kp and Ap indexes. From this, I concluded that either it is ENSO which is driving changes in Earth’s rotation, or it is changes in Earth’s rotation which are causing variations in ENSO, or, more likely, it is some combination of both. Evidence that ENSO and variations of Earth’s rotation are proportionally correlated with each other has been known for some time. This can be seen here. The mechanisms which tie ENSO and variations in Earth’s rotation together involve sea current changes, changes in trade winds, or displacements of water between the equator and slightly higher latitudes. This all makes sense. The water currents in the northern hemisphere follow a clockwise pattern, and in the southern hemisphere they follow a counterclockwise pattern because of the Coriolis effect. The trade winds and the currents near the equator move to the west. However, the current closest to the equator, called the equatorial counter current, moves to the east. Deeper still, at depths down to 200 meters at the equator, an even stronger current moves to the east. The behavior of this current of the equatorial Pacific is shown on this page by Bob Tisdale. The only mechanism by which ENSO can be driven by changes in Earth’s rotation is by variations in the tidal force. My next step was to try to include tidal forcing in my ANN. I then had three problems to overcome. Firstly: I had to find data on the position of and distance to the Moon and the Sun. Eventually, I found software from which I could get this data, although it gave limited information: I was only able to print out the time and position when the Sun and the Moon were closest to and farthest from the Earth, and for the Moon I could also calculate the time and position of the new moon, the full moon and the lunar nodes. The lunar nodes are the locations where the Moon crosses the ecliptic plane. Secondly: I had to find the formula for the tidal force vector and implement it in my software. Thirdly: I had to figure out which features of the tidal forcing could affect ENSO. I had to experiment with different configurations based on my limited and rather crude data. To do this, I had to make complicated trigonometric calculations in order to get the right value of the tidal force. Eventually, I got good correlations between ENSO and the tidal forcing.
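The tidal force magnitude referred to above is the standard differential-gravity expression: the peak tidal acceleration across the Earth scales as 2GMr/d³ for a body of mass M at distance d, with r the Earth's radius. A minimal sketch comparing the lunar and solar contributions; the constants are rounded textbook values, and this reproduces only the magnitude calculation, not the full vector geometry the text describes:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
R_EARTH = 6.371e6    # Earth's radius, m

def tidal_accel(mass_kg: float, dist_m: float) -> float:
    """Peak differential (tidal) acceleration across Earth: 2*G*M*r/d^3."""
    return 2.0 * G * mass_kg * R_EARTH / dist_m**3

moon = tidal_accel(7.35e22, 3.844e8)   # Moon at mean distance
sun = tidal_accel(1.989e30, 1.496e11)  # Sun at mean distance
print(f"Moon: {moon:.2e} m/s^2, Sun: {sun:.2e} m/s^2, ratio: {moon/sun:.2f}")
# The lunar tide is roughly 2.2x the solar tide at mean distances.
```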
By this time, I had figured out which features in the tidal forcing were causing this correlation. However, it was not a direct correlation with ENSO; rather, it was a correlation with the derivative of ENSO, i.e. it was affecting the rate of change of ENSO. The correlation to the rate of change in Earth’s rotation, on the other hand, is direct. This means that tidal forcing causes the rate of Earth’s rotation to either speed up or slow down. The rate of rotation is then responsible for changes in the ENSO index. One difficulty was identifying which features cause the correlations. This is because each tidal forcing point I use is the sum of monthly calculations. The size of the tidal forcing changes every day, and it was difficult to summarize this data the right way into useful functions that could be used to construct values creating good correlations. Of course, the tidal force is not the only factor which drives ENSO, but it is the most influential factor. To test whether that would be the case, I ran my network with the right tidal forcing data. I also included feedback loops in the network, from the output ENSO values back to some of the input nodes. After some testing and individual adjustments of the internal components of the artificial network, I got good results. Following on from my earlier experiment with the ANN on the mean global temperature, I trained the ANN from late 1978 up to the end of 2004. I used the time from 2005 up to late 2011 to test the calculations, in order to find the minimal error function. This is the result I got. The exciting thing about this result is that it is possible to make forecasts for much longer times into the future. Today’s predictions use computer models and are only able to make credible predictions 4 to 5 months into the future, while in my case, using ANN calculations based on tidal forcing, forecasts can be made for an almost unlimited time, because the Moon’s and Sun’s positions in the future are known in advance. Although, I have to stress that with the predictions so far it is not possible to get the final value exactly right. Currently, it is only possible to estimate, with a relatively high likelihood at any date, whether ENSO is going to be positive, negative or neutral. However, as can be seen here, the predictions are not always correct. The main large El Niño events of 1982 and 1998 can clearly be seen, but the large magnitude of these events cannot be predicted. I later made an ENSO forecast from late 2011 up to 2020. I cannot show the result here for proprietary reasons. This picture shows the test period and some of the forecast, which ends in early 2013. Note, however, that in the calculations for this graph the ENSO index uses feedback values which are all estimated ones; those are not the real ENSO values. Now look at the previous graph, with the whole time span from 1979 up to 2011! On this graph, look at the first 3 years from the start of 1979. These first 3 years all show exceptionally good correlation to the real ENSO values. The difference with the start values in this case is that I use real ENSO values for the feedback values going into the network calculations before the graph begins. This is because in my ANN, every calculation point uses values going 3 years back in time, and I must use real input values for the period before my first calculated value.
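The network-with-feedback arrangement described above (forcing inputs plus lagged copies of the ENSO output fed back in) can be sketched as an autoregressive neural network. The code below is a simplified illustration with synthetic placeholder data; scikit-learn's MLPRegressor merely stands in for whatever ANN implementation the author actually used:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 400                      # monthly time steps (placeholder data)
tidal = rng.normal(size=n)   # stand-in for monthly tidal-forcing features
enso = rng.normal(size=n)    # stand-in for the observed ENSO (MEI) series

LAGS = 36  # feed back 3 years of monthly ENSO values, as in the text

# Each training row: current tidal forcing + the previous 36 ENSO values.
X = np.array([np.concatenate(([tidal[t]], enso[t - LAGS:t]))
              for t in range(LAGS, n)])
y = enso[LAGS:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(X, y)

# Free-running forecast: after the first step, *estimated* ENSO values are
# fed back instead of real ones, which is why errors accumulate over time.
history = list(enso[-LAGS:])
for step in range(12):
    x = np.concatenate(([0.0], history[-LAGS:]))  # 0.0: placeholder forcing
    history.append(model.predict(x.reshape(1, -1))[0])
```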
If I made a forecast for the next years using real ENSO values from 3 years back up to the current date, my forecast would be greatly improved and would be much better than forecasts made with current computer models. There were 2 important events that happened in the years just after 1979. The first was the eruption of El Chichón in Mexico in 1982. The second was the unusually strong El Niño event of 1982-1983. My calculated ENSO values tend to come out of phase around and after 1982, and the ANN seems unable to handle strong El Niños very well. After some years, the estimated ENSO values deteriorate somewhat, mainly because of errors in the feedback caused by the inertia in Earth’s rotation. In contrast to computer model forecasts, I don’t use data from the Tropical Atmosphere Ocean (TAO) network, a NOAA network of measurement buoys in the tropical Pacific that feeds these ENSO models with real-time data. Here is a result from the same program, but as input it uses variations in the Earth’s rotation instead of tidal forcing. As you can see, the correlation to the Earth’s rotation makes the result much better. However, in contrast to tidal forcing, future changes in Earth’s rotation are unknown. Here is another graph from the same program with feedback, but this time the input signal is only from the SOI (Southern Oscillation Index). As expected, SOI is closely related to ENSO. Here is a repeat with the same program, but this time with a combination of tidal forcing, changes in Earth’s rotation, SOI, and the Kp and Ap indexes. As you can see, this result is similar to the previous one with only SOI. The next step I plan to take is to use the ANN with real-time data and make more accurate ENSO predictions for the next 3 to 4 years into the future. I also want to test with real ENSO input data for several time periods, in order to evaluate statistically how good the predictions can be when based on real-time ENSO feedback input data at the start. After that, I want to improve on my results by using more precise and accurate tidal calculations. I have found a program with which I can make precise calculations of the Moon and the Sun on a daily basis. Other factors I plan to look into are the mechanisms of the Kelvin wave, the Walker circulation and the MJO, which all should influence ENSO to some degree. So far I have only looked at ENSO. I can easily switch to SOI, NOI or changes in Earth’s rotation and use that as an output for predicting ENSO with the ANN. The conclusion from my results is that tidal forcing is a major factor in ENSO forcing. I now have new questions. Compared with other causes, how important is the effect from tidal forcing? Is it possible to find an increased effect from tidal forcing by improving the tidal data I use? Is it, for example, possible to identify tidal forcing as the cause of the strong El Niños of 1982 and 1998? It may be possible to get better ENSO results by using predictions based on SOI, NOI or Earth’s rotation, or by starting from tidal forcing only. I’ll test and see. Also, ENSO and SOI are parameters for which long historical data records exist. By using a longer time span for training and testing, the accuracy of predictions based on the ANN should improve. I acknowledge that it is not easy to find correlations with tidal forcing without testing out the right features and using an ANN.
However, I do find it very strange that, to my knowledge, no scientist has looked into a possible connection between tidal forcing and ENSO in any depth. As can be seen from what the IPCC writes about ENSO predictions, they do not have a clue. The current models in use can only predict with any accuracy 4 to 5 months into the future. When it comes to the ENSO drivers, these researchers think chaos theory and random noise are the mechanisms which explain the causes of ENSO changes. However, Cerveny, R. S. and J. A. Shaffer (2001), in the paper "The Moon and El Niño" (Geophysical Research Letters), write about lunar cycles and ENSO, and find correlations between them. To me at least, it seems that the solution to long-range ENSO prediction has for a long time been right in front of the eyes of these researchers, but nobody has taken up the challenge to figure it out. I see the same reason why the climate community at large has not studied tidal forcing as an explanation for ENSO variations, and why non-TSI solar forcing has been ignored as an additional cause of climate forcing. The primary reason is that they have had their education in meteorology, atmospheric physics, thermodynamics or computer science. Most of them are specialists in a few narrow disciplines, and as such they prefer only to apply knowledge from the fields they know. They are not generalists, and they display strong resistance to applying knowledge from areas in which they lack expertise. Add to that group thinking, peer pressure and a lack of funding for research into alternative causes of climate change, and this explains the current one-sided situation. This is one of the reasons that predictions made with computer simulations are failing. ANNs are seldom used in climate science. There are some exceptions. One is research done by Dr William Hsieh from the University of British Columbia, who uses this technique for ENSO predictions, but to my knowledge without using any tidal forcing. To learn more about how ANNs work and how I have implemented this technique in climate investigation, click here
<urn:uuid:3da28564-99a1-4405-9c7d-3af44f415967>
3.125
2,718
Personal Blog
Science & Tech.
51.199779
by Robert Wilkinson
It seems that the Sun is unusually calm these days. That's not necessarily a good thing, since the last time it was this calm, World War I broke out soon thereafter. If you want to read some fascinating speculation about the Sun, sunspots, and the like, let's take a short trip down Science Lane just off of Fate Street. Courtesy of the WaPo, "Absence of sunspots make scientists wonder if they're seeing a calm before a storm of energy." The WaPo states the article is adapted from one originally published by astrophysicist Stuart Clark in New Scientist. This may be extraordinarily important, since when the Sun decides to flame on and off, our world is affected in major ways. By all means check out the story, but for those with a short attention span who want to cut to the chase, here are a few nuggets from the article: Sunspots come and go, but recently they have mostly gone. For centuries, astronomers have recorded when these dark blemishes on the solar surface emerge, only to fade away after a few days, weeks or months. Thanks to their efforts, we know that sunspot numbers ebb and flow in cycles lasting about 11 years. But for the past two years, the sunspots have mostly been missing. Their absence, the most prolonged in nearly 100 years, has taken even seasoned sun watchers by surprise.... other clues indicate that the sun's magnetic activity is diminishing and that the sun may even be shrinking. Together, the results hint that something profound is happening inside the sun. Groups of sunspots forewarn of gigantic solar storms that can unleash a billion times more energy than an atomic bomb. Fears that these giant eruptions could create havoc on Earth and disputes over the sun's role in climate change are adding urgency to these studies... Sunspots are windows into the sun's magnetic soul. They form where giant loops of magnetism, generated deep inside the sun, well up and burst through the surface, leading to a localized drop in temperature that we see as a dark patch. Any changes in sunspot numbers reflect changes inside the sun... When sunspot numbers drop at the end of each 11-year cycle, solar storms die down and all becomes much calmer. This "solar minimum" doesn't last long. Within a year, the spots and storms begin to build toward a new crescendo, the next solar maximum. So it seems that sunspot cycles have some correlation with Jupiter, which takes about 11 years to transit the zodiac. It seems that the Sun most recently "calmed" in 2007, and so few sunspots were expected in 2008. Scientists thought when they returned there would be more of them, as well as more solar storms and energy sent out into space. However, since then there has been an "extreme dip" in sunspots, and we are told "Only the minimum of 1913 was more pronounced, with 85 percent of that year clear." There wasn't much solar action in 2009 either. Then in December 2009, "the largest group of sunspots to emerge in several years appeared. Even with the solar cycle finally underway again, the number of sunspots has so far been well below expectations. Something appears to have changed inside the sun, something the models did not predict." Without going into obscure details of how the process works, we are told there are two "vast conveyor belts of gas that endlessly cycle material and magnetism through the sun's interior and out across its surface. On average it takes 40 years for the conveyor belts to complete a circuit."
It seems the surface flows have been speeding up since 2004, but the internal ones have significantly slowed. This is confounding computer models and scientists, who don't really know what's happening. The Earth responds to these fluctuations in many ways. One professor of space environment physics believes the unusually cold European winter of 2009-10 is the result of the strange solar activity. From the article: ... severe European winters are much more likely during periods of low solar activity. This fits an idea of solar activity's giving rise to small changes in the global climate overall but large regional effects.... Another example is the so-called Maunder minimum, the period from 1645 to 1715 during which sunspots virtually disappeared and solar activity plummeted. If a similar spell of solar inactivity were to begin now and continue until 2100, it would mitigate any temperature rise caused by global warming by no more than 0.3 degrees Celsius... However, something amplified the impact of the Maunder minimum on northern Europe, ushering in a period known as the Little Ice Age, when colder-than-average winters became more prevalent and the average temperature in Europe appeared to drop by between 1 and 2 degrees Celsius. A corresponding increase in temperatures on Earth appears to be associated with peaks in solar output. In 2008, Judith Lean of the Naval Research Laboratory's space science division published a study showing that high solar activity has a disproportionate warming influence on northern Europe. What the sun will do next is beyond our ability to predict. Most astronomers think that the solar cycle will proceed but at significantly depressed levels of activity, similar to those last seen in the 19th century. However, there is also evidence that the sun is inexorably losing its ability to produce sunspots. By 2015, they could be gone altogether, plunging us into a new Maunder minimum -- and perhaps a new Little Ice Age. Anyway, "we know something's happening here but we don't know what it is," to paraphrase the 20th-century American Bard. While I suspect we stand on the threshold of something far better than anything we've known before, there are times when our larger environment compels us humans to take note and correct whatever we can, while adapting to what we can't. The months to come would seem to be one of those times. For those who are interested in more commentary on our Earth, Sun, and Solar System, you can find more on Stuart Clark's blog, Stuart Clark's Universe. And the beat goes on and on and on....
© Copyright 2010 Robert Wilkinson
<urn:uuid:7d100700-318b-4b5f-89a6-b6b6fcd0b3b5>
3.125
1,265
Personal Blog
Science & Tech.
54.076238
PAUL N. HIRTZ is president of Thermochem, Inc., and has worked as a chemical engineer for the last 25 years in the geothermal energy industry. He performs research for the U.S. Department of Energy and the California Energy Commission, and has published over 25 technical articles. He is an associate editor for the international journal Geothermics. Hirtz is chairman of ASTM Subcommittee E44.15.
Geothermal energy has the potential to be the world’s primary source of baseload renewable power. Unlike most other renewables, geothermal energy is available 24 hours a day and has a capacity factor on the order of 95 percent (1,000 MWe installed = 950 MWe average generation), compared to a non-baseload source such as wind energy with a capacity factor of 20 percent or less (1,000 MWe installed ≤ 200 MWe average generation). A recent report by the Massachusetts Institute of Technology (MIT), sponsored by the U.S. Department of Energy, has concluded that with a reasonable R&D investment in EGS technology, geothermal energy from EGS alone could provide 100,000 MWe of cost-competitive power for the United States within the next 50 years [4]. This is equivalent to our total capacity of nuclear power generation now. Another 11,000 MWe could be generated from low-temperature (≤100 ºC) water that is co-produced from oil and gas wells in the United States using “off-the-shelf” binary-cycle modular power plants.
The Hot Renewable: Geothermal Energy
Renewable geothermal energy is currently used to generate electric power in 24 countries, for a total of 9,000 MWe [1]. Geothermal energy is produced and utilized in many forms. The definition of geothermal energy, according to ASTM E 957, Standard Terminology Relating to Geothermal Energy, is quite broad: “the thermal energy contained in the rocks and fluids of the earth.” This energy is produced in the form of naturally occurring hot water and steam found in hydrothermal reservoirs that drive electric power plants. The fluids are also used directly in industrial processes and to heat buildings. Geothermal energy can be extracted from deep man-made reservoirs using technology known as enhanced geothermal systems. Ubiquitous low-grade geothermal energy is used by geothermal heat pumps to heat and cool buildings (7,300 MWt in the United States alone [2]). Hydrothermal systems are responsible for all geothermal electric power generated today, as they are the easiest to develop, yet they still only account for a small fraction of the total potential of this clean energy source. A hydrothermal system is a subterranean reservoir that transfers heat energy upward by the vertical circulation of fluids through convection (Figure 1). The surface manifestations of these natural systems are the familiar hot springs and fumaroles (think Yellowstone National Park for an example of perhaps the world’s largest, though closed to development). The power plants driven by hydrothermal systems typically resemble conventional thermal power plants, where steam turbines are used to generate electricity (Figure 2). Remember that even a nuclear plant just boils water to make steam, which then drives the turbine. Most geothermal plants can use steam directly derived from the resource. Binary-cycle plants use a secondary fluid to extract heat and generate power from lower-temperature hydrothermal resources (<150 °C), and combined-cycle steam turbine/binary plants are now being used to efficiently generate power from higher-temperature resources (~200–350 °C) in a cascading extraction process.
Enhanced geothermal systems, or EGS, are defined as engineered reservoirs created to extract heat from hot, dry rock. The energy is extracted using water pumped through a man-made subsurface fracture system, where it is heated by contact with rock and returned to the surface through production wells. Hydrofracturing, used widely in the oil and gas industry to enhance production, creates the EGS fracture network. Figure 3 shows the extent of domestic geothermal resources at a depth of just 6 km, the nominal drilling depth for oil and gas wells. At depths of 10 km, the practical upper limit of current drilling technology, the temperatures are much higher across the entire country. On the less extreme side of geothermal engineering, geothermal heat pump systems, or GHP, use the year-round stable temperature of the earth at depths of a few metres to the depth of a typical water well. Here the temperature will be a moderate 10 to 20 °C, even with heavy snow on the ground above. This heat is extracted and condensed using a closed-loop heat pump to deliver balmy temperatures to a building. During the summer, the ground serves as the heat sink for the GHP to provide air conditioning. Dual-action GHP systems supply both heating and cooling simultaneously, as needed by supermarkets and ice rinks. Geothermal heat pumps are used in all 50 states today, with about 80,000 systems being added each year. Given this rapid growth, the GHP industry is in need of standardizing practices for the installation, design and performance specifications for these systems, as noted recently in an energy journal [3].
Contributions by E44.15 to Geothermal Energy
Subcommittee E44.15 on Geothermal Field Development, Utilization and Materials, part of Committee E44 on Solar, Geothermal and Other Alternative Energy Sources, has developed standards to provide consistent terminology, evaluate the quality of geothermal resources, determine material compatibility for geothermal hardware and define the performance of power conversion technologies. The subcommittee’s current standards are:
• E 947, Specification for Sampling Single-Phase Geothermal Liquid or Steam for Purposes of Chemical Analysis;
• E 957, Terminology Relating to Geothermal Energy;
• E 974, Guide for Specifying Thermal Performance of Geothermal Power Systems;
• E 1008, Practice for Installation, Inspection, and Maintenance of Valve-Body Pressure-Relief Methods for Geothermal and Other High-Temperature Liquid Applications;
• E 1068, Test Method for Testing Nonmetallic Seal Materials by Immersion in a Simulated Geothermal Test Fluid;
• E 1069, Test Method for Testing Polymeric Seal Materials for Geothermal and/or High Temperature Service Under Sealing Stress; and
• E 1675, Practice for Sampling Two-Phase Geothermal Fluid for Purposes of Chemical Analysis.
Standard Practice E 1675 for Two-Phase Sampling
The most widely used Subcommittee E44.15 standard today is E 1675. This standard is used in 17 countries to obtain representative samples of two-phase fluids (water and steam) produced from hydrothermal wells. The proper collection and preservation of samples are specified for subsequent chemical analysis of the water, brine, condensate and noncondensable gases that may be produced by these wells. The chemical composition data is used in many applications important to geothermal energy exploration, development and resource management.
These applications include determining reservoir temperatures and the origin of reservoir fluids, the source of recharge fluids, and the compatibility of fluids with piping and steam turbines (corrosivity and scale deposition). The heart of E 1675 is the two-phase sampling separator (Figure 4). This cyclone device separates steam from liquid with minimal heat loss and pressure drop (an isenthalpic and isobaric process) from the main pipeline through which the two-phase fluids are produced, such as the production pipeline from a geothermal well. Although it is virtually impossible to obtain a representative mixture of the two-phase fluids in the same proportions (same steam-to-liquid ratio) as they exist in the bulk flow through the pipeline, representative samples of each phase can be collected without changing the chemical composition (see Figures 5 and 6). To aid in the separation process, two separators are often used on the top and bottom of the pipeline (Figure 7). The two-phase flow regime usually approaches slug or stratified flow in the large horizontal pipelines on the surface, known as the gathering system, through which fluids are collected from each well and piped to the power plant. Using fluid samples collected according to E 1675 and the total fluid enthalpy, the composition of the original reservoir fluid can be reconstructed. In most cases, this means the “pre-flash” fluid composition: the original deep hydrothermal reservoir liquid before it boiled on the way up the production well. The composition of the deep geothermal fluids can be used to evaluate the ability of a reservoir to sustain a large power plant. The geochemical interpretation includes geothermometry, where the concentrations of species such as silica, or the ratios of ions such as Na⁺, K⁺ and Ca²⁺, determine the temperature of the deep fluid. Silica dissolves rapidly from rocks into hot water and equilibrates with the mineral quartz in the temperature range of about 200 to 330 °C, yielding a very precise and “recent” reservoir temperature. This temperature can be used to determine the total heat content (enthalpy) of the produced fluid if the reservoir is single-phase liquid. Other minerals that leave a signature in the water, in the form of dissolved Na⁺, K⁺ and Ca²⁺, have a long-term “memory” and indicate the maximum temperature the fluid has been exposed to. This is important in geothermal exploration, where the primary resource may not have been reached yet and peripheral well chemistry is used to determine whether a high-temperature resource exists nearby.
Use of E 1675 in Two-Phase Flow and Enthalpy Measurement
To obtain the full benefit of chemical composition data, physical data related to the two-phase discharge is required, such as the total fluid enthalpy and the pressure or temperature at the sample point. The most widely used method to determine the total enthalpy and production flow rate of two-phase geothermal wells, the tracer flow test (TFT) method, specifies the use of E 1675. The TFT method is based on the precise and constant injection of chemical tracers into two-phase geothermal fluid streams to determine the flow rate of each phase and thus the total enthalpy. The process involves injection of liquid and vapor tracers into a two-phase pipeline, with concurrent sampling of each phase according to E 1675 downstream of the injection point, where the tracers have fully dispersed into their respective phases (Figure 8).
The mass flow rate of each phase is calculated based on the measured concentration and injection rate of each tracer. The mass rates of liquid (Q_L) and steam (Q_V) are given by:

Q_L,V = Q_T / C_T

where:
Q_L,V = mass rate of fluid (liquid or steam phase);
Q_T = tracer injection mass rate (liquid or vapor tracer); and
C_T = tracer concentration by weight (liquid or vapor tracer).

The total fluid enthalpy is then calculated from a heat and mass balance equation using the known enthalpies of liquid water and steam at the sampling pressure. The production flow rate and enthalpy of geothermal wells is critical data in itself for geothermal power generation. All of the energy derived from geothermal fluids is sensible and latent heat (enthalpy), as opposed to the chemical energy that is released by burning hydrocarbon fuels. Therefore, the energy per unit mass of geothermal fluid is low compared to oil, and the production rates of geothermal wells must be much higher than oil wells, on the order of 10 to 100 kg/s or more (depending on the steam-to-water ratio), to justify the drilling cost. The wells must be monitored regularly to ensure their output is sustained and that sufficient capacity is available from all wells to maintain baseload power. The TFT method, in conjunction with E 1675, allows geothermal well output to be measured directly on-line while producing to the power plant, rather than diverting flow to mechanical test apparatus such as full-flow separators and flow meters. This process is currently used in virtually all major geothermal fields in the world, including those in Guatemala, Iceland, Indonesia, Japan, Kenya, New Zealand, Nicaragua, the Philippines and the Western United States, including Hawaii.
Use of E 1675 to Register Geothermal Energy Projects Under the Kyoto Protocol
Another important application of E 1675 is the collection of condensed geothermal steam samples for noncondensable gas analysis. All naturally produced geothermal steam contains some amount of NCG, comprised mostly of CO2, H2S (the familiar rotten egg smell of hot springs), and other trace gases such as N2, H2, CH4 and NH3. Although most geothermal power plants emit greenhouse gases (primarily CO2), the amounts are very low. The average geothermal power plant in the United States releases 27 kg CO2 per MWh, while the average natural gas and coal plants emit 550 and 1,000 kg CO2 per MWh, respectively [2]. Geothermal power generation in the United States alone offsets 22 million metric tons of CO2 annually. As a result of the broad international acceptance of ASTM standards, E 1675 has been adopted by the United Nations Clean Development Mechanism to register and verify compliance with the CDM program that allows carbon offsets to be traded on the international market. The CDM program provides economic incentives for developing countries to build clean renewable power plants such as geothermal, instead of polluting coal-fired plants. The offsets purchased by participating industrialized countries have become an important strategy in meeting their near-term Kyoto Protocol commitments.
Continuing Activities of E44.15 to Aid Renewable Energy Development
Future activities of Subcommittee E44.15 will focus on standard specifications for two-phase flow measurement by TFT, geothermal steam purity and quality measurement using traversing and fixed multi-nozzle isokinetic sampling probes, and standard methodology for CO2 analysis in geothermal steam.
The geothermal heat pump community will be invited to participate and introduce standards relevant to this important sector of the multidisciplinary geothermal energy industry. Together with all the activities of ASTM Committee E44, renewable energy will continue to advance and displace the energy sources that are not sustainable and that contribute to global warming. //
1. Lund, J.W.; Koenig, J.; Mertoglu, O.; Stafansson, V. Findings and Recommendations. Antalya, Turkey: 2005 World Geothermal Congress, April 2005.
2. Green, B.D.; Nix, R.G. Geothermal: The Energy Under Our Feet, Geothermal Resource Estimates for the United States. NREL/TP-840-40665. Golden, CO: National Renewable Energy Laboratory, November 2006.
3. Engle, D. Global Warmth: Earth's Ultimate DE. Santa Barbara, CA: Distributed Energy, Vol. 5, No. 2, March/April 2007.
4. Tester, J.W.; et al. The Future of Geothermal Energy: Impact of Enhanced Geothermal Systems (EGS) on the United States in the 21st Century. Final Report to the U.S. Department of Energy Geothermal Technologies Program. Cambridge, MA: Massachusetts Institute of Technology, 2006.
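As a worked appendix to the tracer flow test discussion above, the TFT relations (Q = Q_T / C_T for each phase, plus a heat-and-mass balance) amount to two divisions and a weighted average. A minimal sketch with made-up tracer numbers; the enthalpies are illustrative saturated-steam-table values near 10 bar, which a real implementation would look up at the measured sampling pressure:

```python
# Tracer flow test: mass rate of each phase from tracer dilution,
# then total enthalpy from a heat-and-mass balance (see article text).
q_tracer_liquid = 0.005   # kg/s liquid-phase tracer injected (made-up)
q_tracer_vapor = 0.003    # kg/s vapor-phase tracer injected (made-up)
c_tracer_liquid = 1.0e-4  # measured tracer weight fraction in the liquid
c_tracer_vapor = 1.5e-4   # measured tracer weight fraction in the steam

Q_L = q_tracer_liquid / c_tracer_liquid   # 50 kg/s liquid
Q_V = q_tracer_vapor / c_tracer_vapor     # 20 kg/s steam

# Illustrative enthalpies near 10 bar(a); real values come from steam tables.
h_L, h_V = 763.0, 2778.0  # kJ/kg, saturated liquid and saturated steam

h_total = (Q_L * h_L + Q_V * h_V) / (Q_L + Q_V)
print(f"Total flow {Q_L + Q_V:.0f} kg/s, enthalpy {h_total:.0f} kJ/kg")
```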
<urn:uuid:f947437b-66fa-425e-9704-6e03495a076a>
3.375
3,159
Knowledge Article
Science & Tech.
35.868582
Happy Friday everyone! I don’t know about you, but when it comes down to learning and understanding physics, I get headaches. That’s why we have decided to make things simpler by bringing you an exciting episode of Learning Physics with Kittens! What you are about to see in this video is a Newton’s cradle. Various kittens are placed around the apparatus, and each kitten will have a chance to pull back a metal ball and allow it to strike the rest of the metal balls. As you can see from the video, both energy and momentum are conserved during the impact, with the exception of minor losses due to friction and heat. The Meowtonian law of motion has been demonstrated in this case: F = m · a.
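For anyone who wants to check the physics behind the kittens, the cradle's behavior for equal masses follows directly from conserving both momentum and kinetic energy in an elastic collision. A minimal sketch of the idealized case (no friction or heat loss):

```python
# Elastic collision of two equal-mass balls (one swinging, one at rest):
# solving momentum and kinetic-energy conservation gives a full velocity
# swap, which is why exactly one ball pops out the far side of the cradle.
def elastic_collision(m1, v1, m2, v2):
    """Final velocities for a one-dimensional elastic collision."""
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

print(elastic_collision(1.0, 2.0, 1.0, 0.0))  # (0.0, 2.0): velocities swap
```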
<urn:uuid:8eeec578-ac70-40ad-8916-c3752914602f>
3.171875
150
Truncated
Science & Tech.
61.375
With Sunday's surprising tornado in Enumclaw, it's a good excuse to write a tornado-themed blog entry today.
What's the difference between a funnel cloud and a tornado? A funnel cloud is basically a tornado that doesn't touch the ground. They don't do any damage by themselves, but a funnel cloud can certainly become a tornado, so they need to be reported and monitored. Once the funnel touches the ground -- even for an instant -- it gets classified as a tornado. You might have also heard of a "waterspout," which is a tornado that is over water. There have been documented cases where waterspouts have been known to make it rain frogs or fish, having sucked them up out of the water. Then there is a "gustnado," which is not a tornado at all. This is a swirling vortex along the ground that is caused by straight-line winds. These don't connect to the clouds and are more visually along the lines of a dust devil (although not the same!). Gustnadoes do minor, if any, damage.
How rare are tornadoes here? It used to be that Washington averaged just one tornado per year in the state, but factoring in more recent data, our official average is now up to two -- most likely helped by our 1997 season that had a record 14 tornadoes. Still, tornadoes are typically very weak here, typically rating an EF0 or EF1 on the Enhanced Fujita Scale. Strong tornadoes need severe thunderstorms fed by large changes in temperature in the upper atmosphere. Severe thunderstorms typically need much colder air moving in aloft to make the air very unstable. The so-called "Tornado Alley" in the Midwest is ripe for severe weather due to frequent battles between cold arctic air marching south out of Canada and very warm, moist air moving north from the Gulf of Mexico. But in the Pacific Northwest, the cool waters of the Pacific Ocean are a great moderating force that keeps temperature changes from being too drastic, and thus tornadoes are quite rare. However, in Sunday's case, we did have a very cold pool of air from the Gulf of Alaska move in with the low, helping to make our atmosphere very unstable and trigger heavy rain and thunderstorms. What are more common here are what we call "cold-core funnels." They are different from typical devastating Midwest tornadoes in that they are spawned from non-severe storms and can occur when you get a tightly wrapped rush of rising air that can appear as a funnel. They get their name from the usual pattern when you have a storm bringing much colder air into the higher altitudes -- a common occurrence around here in spring and fall. Cold-core funnels rarely reach the ground, and if they do, they are very weak. They are not all *that* rare with well-formed Convergence Zones -- especially in spring and autumn.
Has there ever been a bad tornado here? There has only been one deadly tornado in recorded history in Washington -- an F3 tornado that touched down in Vancouver on April 5, 1972. Six people were killed and 300 were injured in that tornado. There have been two other storms in Washington that rated an F3 -- one was on the same date as the Vancouver tornado, but in Lincoln County. One person was injured there. An F3 tornado also struck the Kent Valley on Dec. 12, 1969. One person was injured there as well. Finally, one tornado that touched down near LaCenter in Clark County on June 29, 1989, injured one person when their car was lifted six feet. Those are the only tornadoes that have injured anyone since records have been kept in 1880.
In Oregon, there were reports of a tornado that killed three and injured five on June 14, 1888, near Lexington in Morrow County, and another on June 3, 1894, in Grant County that killed three and injured ten. There have been no tornado-related deaths in Oregon since that 1894 tornado. More tornado information:
<urn:uuid:73ca41b2-ec55-4a00-b2d2-9acc3730d127>
3.40625
830
Personal Blog
Science & Tech.
51.288929
Jackson Laboratory - An Introduction
For a detailed description of the lab's research, please see the faculty page. The Jackson Lab is interested in understanding the biology of the simplest human viruses, the Picornaviridae. While they are simple and tiny - even for viruses - they cause an astounding amount and variety of human disease. Picornaviruses (the “pico” is for tiny and the “rna” refers to the genetic material) can cause hepatitis, foot-and-mouth disease, poliomyelitis, and the common cold, to name a few diseases. When a picornavirus moves into a cell, it does so with the goal of turning the cell into a virus factory. Most of the normal functions of a cell are shut down and the cell's resources are diverted to the service of the virus. Most dramatically, the cell's innards are rearranged to the point that they become irreversibly damaged and unrecognizable. The virus uses the rearranged cell membranes to set up "copying centers" where the genetic material of the virus is replicated. Our lab studies the large variety of tricks employed by these viruses to rearrange cell membranes. For example, poliovirus triggers a "starvation response" so that the cell generates double-membraned structures called autophagosomes. Autophagosomes typically act as recycling centers, gathering up cellular contents and chewing them up so the starving cell has the raw materials to build new proteins. Poliovirus, however, sets up shop right on the surface of these membranes to produce more virus genomes. We have found that some common cold viruses use this same strategy. However, we also found that one particular common cold virus, Rhinovirus 1A, uses a different trick. It causes the Golgi apparatus, which is used by cells to sort newly made proteins, to break up into round “vesicles” which provide a base for virus genome factories.
Microscope images of a cell with DNA tagged to glow blue and a protein in the Golgi tagged to glow red. This series shows the result of Rhinovirus infection over time. At 24 hours, the red-tagged round "vesicles" are apparent.
What's surprising is that, to a virologist, poliovirus and Rhinovirus 1A are close cousins, and both need to rearrange cell membranes to produce progeny viruses. However, each does it in a unique way. How does each of their strategies work? Is there some common element among these viruses we could use to target therapeutics and eventually develop a cure for the common cold? These questions currently drive our research.
<urn:uuid:a3763296-1a60-48b3-9b59-7cf64fcd672f>
3.46875
554
Knowledge Article
Science & Tech.
36.411067
Joined: 16 Mar 2004 | Posted: Thu Dec 14, 2006 12:51 pm | Post subject: From Nanowires to Nanotubes
From Nanowires to Nanotubes
Hollow nanocrystals that can function as highly efficient catalysts or transport containers for chemical agents are in great demand nowadays. Scientists from the Max Planck Institute of Microstructure Physics have created a procedure for combining chemicals to produce high-quality nanotubes in large quantities. The researchers took advantage of the Kirkendall effect, which occurs during interdiffusion between two solids. They used the effect to take nanowires of a certain chemical composition in the core and the shell, and produce nanotubes of a more complex composition. The scientists showed that this method can also be used to efficiently produce nanowires themselves (Nature Materials, August 2006). Compound nanotubes can be produced in various ways: rolling up layered materials, coating pores in templates, or eliminating the core of a core-shell nanowire. In the case of ternary compound nanotubes, most of these methods have limitations. They may need layered materials or templates (e.g., porous alumina), or the nanotubes produced may have an insufficiently small aspect ratio. Frequently the crystallinity of the nanotubes produced is poor - that is, they consist of many grains that have different crystallographic orientations. Scientists at the Max Planck Institute of Microstructure Physics in Halle, Germany, have devised a generic technique for generating nanotubes of a ternary compound (i.e., a compound that consists of three different chemical elements) with very good crystallinity. The scientists fabricated ultra-long single-crystalline ZnAl2O4 spinel nanotubes (total diameter: ~40 nm, wall thickness: ~10 nm). Spinels are ternary compounds of type AB2O4 that crystallise in a specific cubic structure. Nanotube formation is based on an interfacial solid-state reaction of core-shell ZnO-Al2O3 nanowires involving the Kirkendall effect. This effect arises when the interdiffusion between two solids takes place via vacancy transport and the related diffusion rates are asymmetric, which finally results in the formation of pores. The researchers have used this effect to produce hollow nanowires, i.e. nanotubes. The latter grow because the generated pores cannot leave the nanowires, a consequence of the spatial confinement imposed by the specific reaction geometry with its cylindrical symmetry. The method has a number of advantages. For example, the required voids need not be produced beforehand. This, in the end, allows scientists to fabricate three-dimensional nanostructures of complex shape. In addition, nanotubes with large aspect ratios can be obtained, and the method can be scaled up to produce large volume quantities useful for applications. Moreover, ZnO (an ingredient of medical salves) is very biocompatible. Finally, the method can be applied to many other types of ZnO- or MgO-based spinel nanostructures with adapted chemical composition and interesting properties. This story was first posted on 29th September 2006.
<urn:uuid:10e0da59-e9f2-4b65-9ee5-c9c04d8f4d03>
3.53125
678
Comment Section
Science & Tech.
26.103292
What is Nanotechnology? Authored by Ottilia Saxl. Nanotechnology is an exciting area of scientific development which promises ‘more for less’. It offers ways to create smaller, cheaper, lighter and faster devices that can do more and cleverer things, use less raw material and consume less energy. There are many examples of the application of nanotechnology, from the simple to the complex. For example, there are nano coatings which can repel dirt and reduce the need for harmful cleaning agents, or prevent the spread of hospital-borne infections. New-generation hip implants can be made more ‘body friendly’ because they have a nanoscale topography that encourages acceptance by the cells in their vicinity. Moving on to more complex products, a good example of the application of nanotechnology is the mobile phone, which has changed dramatically in a few years – becoming smaller and smaller while, paradoxically, growing cleverer and faster – and cheaper! What is Nanotechnology? The ‘nano’ in nanotechnology comes from the Greek word for “dwarf”. A nanometre is one billionth (10⁻⁹) of a metre – tiny, only the length of ten hydrogen atoms, or about one hundred thousandth of the width of a human hair! Although scientists have manipulated matter at the nanoscale for centuries, calling it physics or chemistry, it was not until a new generation of microscopes was invented at IBM in Switzerland in the nineteen eighties that the world of atoms and molecules could be visualized and manipulated. In simple terms, nanotechnology can be defined as ‘engineering at a very small scale’, and the term can be applied to many areas of research and development – from medicine to manufacturing to computing, and even to textiles and cosmetics. It can be difficult to imagine exactly how this greater understanding of the world of atoms and molecules has affected and will affect the everyday objects we see around us, but some of the areas where nanotechnologies are set to make a difference are described below. From Micro to Nano Nanotechnology, in one sense, is the natural continuation of the miniaturization revolution that we have witnessed over the last decades, in which tolerances of a millionth of a metre (10⁻⁶ m) – microengineering – became commonplace, for example in the automotive and aerospace industries, enabling the construction of higher-quality and safer vehicles and planes. It was the computer industry that kept pushing the limits of miniaturization, and many electronic devices we see today have nano features that owe their origins to the computer industry – such as cameras, CD and DVD players, car airbag pressure sensors and inkjet printers. Nanotechnology offers opportunities to create new features and functions, and it is already providing solutions to many long-standing medical, social and environmental problems. Because of its potential, nanotechnology is of global interest. It is attracting more public funding than any other area of technology, estimated at €3.8 billion worldwide in 2005. It is also the one area of research that is truly multidisciplinary: the contribution of nanotechnology to new products and processes cannot be made in isolation and requires a team effort. This may include life scientists – biologists and biochemists – working with physicists, chemists and information technology experts. Consider the development of a new cochlear implant and what that might require: at least a physiologist, an electronic engineer, a mechanical engineer and a biomaterials expert.
This kind of teamwork is essential, not only for a cochlear implant, but for any new nano-based product, whether it is a scratch-resistant lens or a new soap powder. Nano scientists are now enthusiastically examining how the living world ‘works’ in order to find solutions to problems in the 'non-living' world. The way marine organisms build strength into their shells holds lessons for engineering new lightweight, tough materials for cars; the way a leaf photosynthesizes can lead to techniques for efficiently generating renewable energy; even the way a nettle delivers its sting can suggest better vaccination techniques. These ideas all lead to what are termed ‘disruptive’ solutions, in which the old ways of making things are completely overtaken and discarded, in much the same way as the DVD took over from videotape, or the flat screen display from the cathode ray tube.
<urn:uuid:e47ffca2-d6da-48e2-8067-b3c8d3b98c70>
3.375
902
Knowledge Article
Science & Tech.
23.706671
Distribution Maps of Coma and Regional Spectra in Coma Results from the infrared spectrometer, in work led by Lori Feaga of the University of Maryland, show asymmetric distributions of both water and carbon dioxide gases in the coma of Tempel 1. The water is enhanced in the sunward direction, where sunlight sublimates water ice; the CO2 is enhanced off the southern hemisphere of the comet. This suggests that the composition of the comet's nucleus is heterogeneous rather than uniform. One of the major objectives of the mission was to determine whether comet nuclei are uniform in composition; the answer is no. Detailed study of these coma asymmetries gives insight into the relative abundances of the dominant molecular components of the inner coma, the source regions of the native volatiles, the anisotropic outgassing of the nucleus, and the formation and evolution of the nucleus. Photo Credit: NASA/UM/Lori Feaga
<urn:uuid:a6816077-4f4a-4460-a212-7fb75c041368>
3.234375
201
Knowledge Article
Science & Tech.
23.387105
SCIENCE / History Jan 13, 2013 — Dorothy Wrinch was the first woman to ever receive a doctorate in science from Oxford University, and she was the first person to design a protein structure. But her name is largely unknown. I Died for Beauty, a biography of Wrinch by Marjorie Senechal, tells her story. Jan 1, 2013 — The discovery of the Higgs boson will likely be hailed as the most important scientific discovery of 2012. But many ideas that change the world don't tend to spring from flashy moments of discovery. Our view of nature — and our technology — often evolve from a sequence of more subtle advances. Jan 3, 2012 — The scientist is known as much for his contributions to theoretical cosmology and quantum gravity as for his willingness to make science accessible for the general public. His work is the topic of a new biography by science writer Kitty Ferguson.
<urn:uuid:47ebc269-db99-4e00-b47f-bf16c00c6a57>
2.90625
184
Content Listing
Science & Tech.
50.18775
Killer whales have breeding habits, well documented by researchers, that suggest they may be more like humans than previously expected. Newborn calves have been observed in orca populations at various times of the year, which suggests that orcas have no particular breeding season restricting when they reproduce. In the wild, orcas become sexually mature between roughly 10 and 18 years of age; females can reproduce once they reach around 16 feet in length, whereas males must reach about 20 feet. Long-term studies have shown that the gestation period of the orca can last anywhere from 13 to 17 months, and that a typical calf can be up to 8 feet long at birth. Other interesting facts about killer whales include the ability of females to continue producing offspring into their 40s, another aspect of their lives that leads scientists to compare them to humans. Orcas around the world share this breeding pattern, and a female can have a calf every two years, though most space their calves further apart: typically one calf every 3 to 5 years. If you are interested in killer whales and their mating patterns and rituals, it may be a good idea to see them for yourself at an aquarium close to where you live. Aquariums can be great places to learn about all manner of marine life, including the killer whale.
<urn:uuid:fc9bf2ee-f5d9-4142-bc19-6848bc38b7fc>
3.421875
329
Knowledge Article
Science & Tech.
49.15789
Climate Change Affects Deep-Sea Life The cold dark expanses of the ocean floor may provide clues to the extent and effect of climate change on undersea ecosystems, said Ron Kaufmann, associate professor of marine science and environmental studies at the University of San Diego. Kaufmann was one of the authors of a paper published this fall in the Proceedings of the National Academy of Sciences (PNAS). The paper was based on analyzing data from nearly 20 years of research on the composition and functions of deep-sea communities in two widely separated locations, one in the northeast Pacific and one in the northeast Atlantic. The findings show how animal communities on the abyssal seafloor are affected by climate change. (Full Story on Inside USD)
<urn:uuid:b8346c01-849e-41c8-9cbe-f45fe43834ef>
3.046875
176
Truncated
Science & Tech.
30.004642
by Staff Writers Manchester, UK (SPX) Apr 05, 2012 Since the NASA/ESA Cassini-Huygens spacecraft arrived at Saturn in 2004, astronomers and space scientists have been able to study the ringed planet and its moons in great detail. Now, for the first time, a team of planetary scientists have made simultaneous measurements of Saturn's nightside aurora, magnetic field, and associated charged particles. Together the field and particle data provide information on the electric currents that produce the emissions. Team leader Dr Emma Bunce of the University of Leicester presented the new work at the National Astronomy Meeting in Manchester on 27 March 2012. Generally, images of the aurora (equivalent to the terrestrial 'northern lights') provide valuable information about the electromagnetic connection between the solar wind, the planet's magnetic field (magnetosphere) and its upper atmosphere. Variations in the aurora then provide information on changes in the associated magnetosphere. But viewing the aurora (best done at a large distance) at the same time as measuring the magnetic field and charged particles at high latitudes (where the aurora is found, best done close to the planet) is hard. In 2009, Cassini made a crossing of the magnetic field tubes that connect to the aurora on the night side of Saturn. Because of the position of the spacecraft, Dr Bunce and her team were able to obtain ultraviolet images of the aurora (which manifests itself as a complete oval around each pole of the planet) at the same time. This is the first time it has been possible to make a direct comparison between Cassini images of the nightside aurora and the magnetic field and particle measurements made by the spacecraft. Because of the geometry of Cassini's orbit, it took about 11 hours to pass through the high-latitude region, about the same time it takes Saturn to make one rotation. This meant that the team were able to watch the auroral oval move as the planet turned. As Saturn and its magnetosphere rotated, the auroral oval was tilted back and forth across the spacecraft with a speed consistent with a planetary rotation effect. Dr Bunce comments: "With these observations we can see the simultaneous motion of the electric current systems connecting the magnetosphere to the atmosphere, producing the aurora. Ultimately these observations bring us a step closer to understanding the complexities of Saturn's magnetosphere and its ever elusive rotation period".
<urn:uuid:cd2ed1e6-ef9e-45f7-8b17-a8f271e2ca71>
3.375
803
Truncated
Science & Tech.
31.509562
Changing Planet: Rising Sea Level As the Earth's atmosphere warms due to rising levels of greenhouse gases, our oceans are warming also. The rising temperature of the atmosphere also leads to melting of ice in the cryosphere. Meltwater from land-based glaciers and ice sheets eventually makes its way into the ocean, gradually raising sea level globally. Depending on how quickly the ice melts because of global warming, this sea level rise may be relatively gradual (about a meter over a century) or may proceed more dramatically, if substantial land-based glaciers near the ocean are suddenly destabilized and release their water to the ocean more quickly. Click on the video at the left to watch the NBC Learn video - Changing Planet: Rising Sea Level Lesson plan: Changing Planet: Sea Levels Rising
<urn:uuid:117db62f-87f1-4602-a65f-e39e43e41923>
3.828125
557
Tutorial
Science & Tech.
58.6125
Identifying Sources of Excess Nitrogen Meet EPA Ecologist Jana Compton, Ph.D. As an ecologist with the Western Ecology Division of EPA’s Office of Research and Development, Dr. Jana Compton investigates the sources and effects of nitrogen pollution. She obtained her undergraduate degree in biology and chemistry at Earlham College, earned her graduate degrees in forest ecosystems and biogeochemistry at the University of Washington, and then completed her post-doc work at Harvard Forest. Before joining EPA, Dr. Compton worked as an Assistant Professor of Soil Biogeochemistry at the University of Rhode Island. How does your science matter? I work on identifying the sources of nitrogen pollution. Small quantities of nitrogen are essential for living things, but too much nitrogen is harmful to ecosystems. My research focuses on non-point sources of nitrogen pollution, for example, land use and agriculture. I connect these sources to their impacts, and quantify them in ways that people can understand. This helps to address major problems like air quality, and problems we see in waterways such as eutrophication and coastal hypoxia. Also, by looking at the social and economic impacts of nitrogen pollution, we can help people to recognize how it might be affecting the benefits we all get from healthy ecosystems, what we call “ecosystem services,” such as clean air and safe drinking water. If you could have dinner with any female scientist, past or present, who would it be and what would you want to ask them? I’d like to talk to Sandra Steingraber, whom I consider a modern-day Rachel Carson. She is an author who connects ecology, the environment, and human health. She has a really personal story, and I think a lot of women go into their field because of personal things that may have happened to them. She tried to understand her role in the environment and the impact that the environment has had on her and on children. She’s someone I’d like to sit down and talk with about our world and the chemicals that are out there, how they are affecting our children, and what we can do to make things better. When did you first know you wanted to be a scientist? I knew I wanted to be a scientist in college, when I had the chance to work in a research environment. I worked at Oak Ridge National Lab for a summer and saw what the scientists there did and the way they asked interesting questions and then worked to answer them. The experience really pointed me in that direction. Tell us about your background. My background starts with growing up in Oak Ridge, Tennessee, one of the places where they worked on the Manhattan Project developing an atomic bomb in the 1940s. Growing up in that environment, it’s hard not to think about mostly invisible chemicals and how they affect you. From radiation to the coal-fired power plant that I could see from my house, those were things that I thought about as a kid while I caught crawdads in our backyard creek. I think those kinds of things set me on the path to be an environmental scientist. I went to Earlham College for my undergraduate study. There was a great combination of ecology there, and one of my inspirations was a chemistry professor, Wil Stratton. I learned about acid rain from him. That was the first thing that really pushed me toward environmental science, and I still work on related issues here at EPA. I got my graduate degrees, both my master's and my PhD, at the University of Washington in forest ecosystems.
Then I worked at Harvard University as a post-doc. All of those were great experiences and kept me moving toward the kinds of things that I am doing now. What do you like most about your research? I like getting to work with people who are also really excited about what they are doing. I have a lot of great colleagues who are interested in nutrients and chemistry and all kinds of exciting topics so it's a great research environment. If you were not a scientist, what would you be doing? This is a hard question! I just went in to my daughter’s classroom and taught a bunch of 3rd graders about geology and geologic time. That was really fun to do. It would be hard work to be a teacher, but maybe that would be something I could do instead of science. Any advice for students considering a career in science? I think the 95% perspiration and 5% inspiration rule applies here! You need to love what you do because it will require a lot of hard work.
<urn:uuid:1e030db3-7821-434e-8d50-8518dcb13522>
2.703125
946
Audio Transcript
Science & Tech.
56.061236
Aircraft used for collecting air samples over the Gulf of Mexico off the coast of Corpus Christi, Texas (site code TGC). The plane is a Cessna 402. Since its inception in 1992, the NOAA/ESRL Carbon Cycle Greenhouse Gases (CCGG) group’s aircraft program has been dedicated to collecting air samples in vertical profiles over North America. The program's mission is to capture seasonal and inter-annual changes in trace gas mixing ratios throughout the boundary layer and free troposphere (up to 8,000 m). At present, most aircraft program flights collect 12 flask samples at different altitudes. These samples are stored in glass flasks for later analysis of carbon dioxide (CO2), carbon monoxide (CO), nitrous oxide (N2O), methane (CH4), molecular hydrogen (H2), and sulfur hexafluoride (SF6), as well as isotopes of CO2 and CH4, and multiple halocarbons and hydrocarbons.
<urn:uuid:a5b2784b-1e3c-4274-b18c-381f37674d2f>
3.25
208
Knowledge Article
Science & Tech.
41.509323
C++ Structure: Why does 'sizeof()' return a bigger size than the members actually need? Q: Why does 'sizeof()' return a bigger size than the members actually need? A: This is due to structure alignment: the compiler pads the data structure with extra bytes so as to optimize data transfer. Modern CPUs perform best in data transfer when fundamental types, such as 'int' and 'float', are stored at memory addresses that are multiples of their length. Some CPUs, like x86, also allow unaligned access, but at a performance penalty; in other words, extra data transfers are required when the data is unaligned. When a C/C++ compiler processes a structure declaration, it adds extra bytes between fields to ensure that they are properly aligned. It also adds extra bytes at the end of the structure so that every element of an array of that structure type is properly aligned. As a rule of thumb, to minimize the extra padding needed for alignment, all fields of the same type should be grouped together. See the following example (struct layouts chosen to match the size arithmetic in the comments):

struct MyStructA { char a; char b; int c; };
struct MyStructB { char a; int b; char c; };

// Results from VC6 on x86; they may vary for other CPU/OS/compiler combinations.
int sizeA = sizeof(MyStructA); // sizeA = 1 + 1 + (2-padding) + 4 = 8
int sizeB = sizeof(MyStructB); // sizeB = 1 + (3-padding) + 4 + 1 + (3-padding) = 12

As the C/C++ standard states, alignment is completely implementation-defined; thus, each CPU/OS/compiler combination is free to choose whatever alignment and padding rules it deems best. Although the standard does not provide any control for customizing the alignment and padding rules, many compilers do, through non-standard extensions. For example, VC6 provides the pragma pack(n) macro. Last edited by Andreas Masur; July 24th, 2005 at 05:06 AM.
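To inspect the padding directly, one can print each member's offset with offsetof and the type's alignment requirement with alignof (C++11). A minimal sketch, assuming the two struct layouts shown above:

#include <cstddef>   // offsetof
#include <cstdio>

struct MyStructA { char a; char b; int c; };
struct MyStructB { char a; int b; char c; };

int main() {
    // Gaps between consecutive member offsets (and between the last
    // member and sizeof) are the padding bytes inserted by the compiler.
    std::printf("A: size=%zu align=%zu offsets a=%zu b=%zu c=%zu\n",
                sizeof(MyStructA), alignof(MyStructA),
                offsetof(MyStructA, a), offsetof(MyStructA, b),
                offsetof(MyStructA, c));
    std::printf("B: size=%zu align=%zu offsets a=%zu b=%zu c=%zu\n",
                sizeof(MyStructB), alignof(MyStructB),
                offsetof(MyStructB, a), offsetof(MyStructB, b),
                offsetof(MyStructB, c));
    return 0;
}

On a typical platform with a 4-byte int this prints size 8 for MyStructA and size 12 for MyStructB, matching the arithmetic above.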
<urn:uuid:70fd3dc6-899d-41b6-81cb-785ae46c59b5>
3.078125
409
Q&A Forum
Software Dev.
49.36125
Here on planet Earth we're used to flames -- whether from a candle or campfire -- reaching upward to the sky with slender limbs hungry for oxygen and driven by rising hot air. But in space, sans our planet's strong gravitational pull, flames are more likely to take the shape of eerie fireballs. Within the flame of a regular candle wick, there's quite a bit going on. As the video below released this week by NASA explains, molecules from the wick are being cracked apart and vaporized by the flame, then combined with oxygen to produce light, heat, carbon dioxide, and water, as well as soot. In recent years we've become quite familiar with how flames can extend and expand quickly in their greedy quest for more fuel and oxygen; witness countless western wildfires of the past decade. But researchers aboard the International Space Station have observed that flames in microgravity behave much differently, staying in a small spherical shape and letting oxygen molecules come to them.
<urn:uuid:1dfe3ccc-238c-4320-9975-e41930ad8ebc>
3.6875
200
Truncated
Science & Tech.
35.423289
Fermat's Principle: Light follows the path of least time. The law of reflection can be derived from this principle as follows: the path length L from A to B is the sum of the two straight segments from A down to the reflecting surface and from the surface up to B. Since the speed is constant, the minimum-time path is simply the minimum-distance path, which may be found by setting the derivative of L with respect to x equal to zero. The derivation makes use of the calculus of maximum-minimum determination, the derivative of a square root, and the definitions of the triangle trig functions.
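Written out explicitly, with notation assumed here since the original figure is not reproduced: let A sit at height a above the mirror and B at height b, let d be their horizontal separation, and let x be the horizontal distance from the foot of A to the point of reflection. Then

    L(x) = \sqrt{x^2 + a^2} + \sqrt{(d-x)^2 + b^2}

    \frac{dL}{dx} = \frac{x}{\sqrt{x^2 + a^2}} - \frac{d-x}{\sqrt{(d-x)^2 + b^2}} = \sin\theta_i - \sin\theta_r = 0

so \theta_i = \theta_r: the angle of incidence equals the angle of reflection.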
<urn:uuid:321803e9-9399-4e7a-95ae-2b27a7558685>
3.25
105
Knowledge Article
Science & Tech.
50.565647
Warm-Hot Intergalactic Medium in the Sculptor Wall Scientists have used ESA's XMM-Newton and NASA's Chandra X-ray Observatory to detect a vast reservoir of gas lying along a wall-shaped structure of galaxies about 400 million light years from Earth. In this artist's impression, a close-up view of the so-called Sculptor Wall is depicted. Spiral and elliptical galaxies are shown in the wall along with the newly detected intergalactic gas, part of the Warm-Hot Intergalactic Medium (WHIM), shown in blue. This discovery is the strongest evidence yet that the 'missing matter' in the nearby Universe is located in an enormous web of hot, diffuse gas. The X-ray emission from WHIM in this wall is too faint to be detected, so instead a search was made for absorption of light from a bright background source by the WHIM, using deep observations with Chandra and XMM-Newton. This background source is a rapidly growing supermassive black hole located far beyond the wall at a distance of about two thousand million light years. This is shown in the illustration as a star-like source, with light traveling through the Sculptor Wall towards the Earth. The relative location of the background source, the Sculptor Wall, and the Milky Way galaxy are shown in a separate plot, where the view instead looks down on the source and the Wall from above. An X-ray spectrum of the background source (known as H 2356-309) is given in the inset, where the yellow points show the Chandra data and the red line shows the best model for the spectrum after including all of the Chandra and XMM-Newton data. The dip in X-rays towards the right side of the spectrum corresponds to absorption by oxygen atoms in the WHIM contained in the Sculptor Wall. The characteristics of the absorption are consistent with the distance of the Sculptor Wall as well as the predicted temperature and density of the WHIM. This result gives scientists confidence that the WHIM will also be found in other large-scale structures. This result supports predictions that about half of the normal matter in the local Universe is found in a web of hot, diffuse gas composed of the WHIM. Normal matter - which is different from dark matter - is composed of the particles, such as protons and electrons, that are found on the Earth, in stars, gas, and so on. A variety of measurements have provided a good estimate of the amount of this 'normal matter' present when the Universe was only a few thousand million years old. However, an inventory of the nearby Universe has turned up only about half as much normal matter, an embarrassingly large shortfall.
<urn:uuid:3dfdff7a-e4e8-4d10-b2c5-e79e44b07c68>
2.71875
565
Knowledge Article
Science & Tech.
46.117188
Basic information about comets with images and links to comet-related sites. Links to a general discussion of comets and discussions of specific comets; images of comets; a link to an animation of a comet's flight. Images of comets (including Comet Halley) and of other small space objects, such as asteroids and meteorites. Images and statistics of Comet Halley; image of Comet Hyakutake; and information on Shoemaker-Levy 9 with links to other sites about this comet. Wonderful links to information about comets in general and about specific comets. The graphics are well-done and well-integrated into the text. Very basic information on comets (no links; no images). Clear exposition of basic information about comets with a few paragraphs about Comet Halley. Table of comets with general information as well as a brief outline of planned missions to selected comets; links to the NSSDC comet home page and other fact sheets. Discussion of basic theory of comet formation and comet locations, with notes on current research. Images of Comet Shoemaker from 1987. Specific Comets: Links to general information on comets and links to specific information and images of Comet Hale-Bopp. A companion electronic magazine is also available. Image of the comet with general information on it and its discovery. Links to other images and more information. Night sky image of the comet with a description of Comet Hale-Bopp, featuring many explanatory links to other sites. Night sky image of the fading comet with commentary, featuring many explanatory links to other sites. Table and map of the positions of the comet from 1 December, 1995 to 15 December, 1996. Personal observation, table of facts and images of the comet. A brief history of the discovery of the comet and links to news items and images, as well as to related resources. Basic information and a small image with links to more details about this comet, the first to have been discovered by a Swede. Headed by images simulating the impact, this page contains many links to news items and resources about this event. Very basic observational history and current thinking about this comet. (no links; no images). Color-enhanced image of the comet with a description of Comet Swift-Tuttle with many explanatory links to other sites. Black and white, night sky image of Comet Swift-Tuttle with a link to ``fits'' information. Color-enhanced image of the comet.
<urn:uuid:240d6e9a-45de-4666-9ad6-f5cd8e759ab2>
3.03125
523
Content Listing
Science & Tech.
39.569046
The maps displayed in the drought section of the Weekly Rainfall Update are masked rainfall percentage maps. These maps show the percentage of mean rainfall that has been received for specified periods. The time periods of the maps and the grey masking are determined by the most recent Drought Statement. At the beginning of each month, drought periods are defined in the Drought Statement. These are periods of time during which areas of Australia are considered to have suffered from serious or severe rainfall deficiencies. The terms serious and severe refer to rainfall so low that it falls within the bottom 10% of records (read more about rainfall deficiencies). In the Weekly Rainfall Update, the drought periods defined in the Drought Statement are evaluated on a weekly basis, to see whether rainfall during the previous week has had an impact on the rainfall deficits. This can be done by looking at the masked rainfall percentage map. Only the regions that have experienced serious or severe rainfall deficits in the most recent Drought Statement are displayed on these maps; all other regions are shaded in grey. The maps have the same start date as the drought periods discussed in the Drought Statement, but the end date is always the Tuesday on the date of issue of the latest Weekly Rainfall Update. The maps show how much rainfall, as a percentage of the mean rainfall for that period, was received in the rainfall-deficient regions. Percentages below 100% are below the mean rainfall for that particular period. Read more about percentages of mean rainfall. |Figure 1. Rainfall deficiencies: 6 months||Figure 2. Masked rainfall percentages for the 6-month period in Figure 1 to date| Figure 1 shows the rainfall deficiencies map for the 6 months starting 1 January 2010 and ending 30 June 2010, while Figure 2 shows the masked rainfall percentage map also starting on 1 January 2010, but ending on Tuesday 19 June. Figure 1 shows that in the 6 months ending 30 June 2010, much of western WA recorded rainfall so low that serious to severe rainfall deficits are now evident (rainfall in the lowest 10% of records for that period). Figure 2 shows that by Tuesday 19 June, large areas of the central WA coast, and much of the Pilbara and Gascoyne districts, had recorded less than 60% of their mean rainfall for the period, with some sites recording less than 30%. Further south, much of the southwest of WA recorded between 70 and 90% of its mean rainfall for the period, with several sites recording a little less than 70%. © Australian Government, Bureau of Meteorology
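As a worked example of how a map value is computed (the numbers are invented for illustration): a site whose mean rainfall for the mapped period is 200 mm, and which actually received 110 mm over that period, plots as

    100 \times \frac{110\ \text{mm}}{200\ \text{mm}} = 55\%

and so falls within the 'less than 60%' class described above.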
<urn:uuid:117e052c-633f-46b8-9e77-2aed214f0b99>
3.125
510
Knowledge Article
Science & Tech.
49.793868
by Ruth Tenzer Feldman Joanna Aizenberg, a scientist and one of Earth's most complex multicellular animals, entered a San Francisco store and encountered the elegant remains of Euplectella speciosa—a deep-sea sponge and one of Earth's simplest multicellular animals. Scientist and sponge might one day revolutionize fiber-optic cables, the thread weaving together our wired world. Fiber-optic cables are basically bundled strands of optical fibers—filaments of glass and reflective cladding that transmit coded light. These fibers are crafted under high heat using expensive equipment. Because the fibers are not very flexible, the cable is hard to install and repair, and is prone to minute cracks. The sponge Aizenberg encountered—called Venus's flower basket and other names—transmits light through resilient, flexible glass fibers made at sea temperature. Aizenberg and her colleagues aim to find out how. Venus's flower basket is a type of hexactinellid or glass sponge whose skeleton is composed of needlelike spicules of silica. The sponge uses proteins to collect and arrange silica particles into hairlike glass fibers two to three inches long. Traces of sodium are added, making the glass fiber better able to conduct light. Organic material and concentric shells of glass encase the fibers for protection. According to Aizenberg, “You could tie [the fibers] in tight knots and, unlike commercial fiber, they would still not crack.” This sponge lives in tropical waters and anchors itself to the ocean floor. It likely gathers luminescent (light-emitting) organisms and turns itself into a “fiber-optic lamp” to attract the plankton that it eats. Seeking protection from predators, other creatures live inside this cuplike sponge with a lattice top. Often, a mating pair of shrimp will swim in and remain for the rest of their lives. As scientists like Aizenberg realize, connectivity means more than communication among humans. There's a sea of information to be learned when we connect with the unwired world as well. - A metal coating bonded onto another metal under high pressure and temperature. - Having a common center. - A fine wire or thread. - An open framework made of strips that overlap in a crisscross pattern. - A small, needlelike structure. - Why does a fiber-optic cable contain glass filaments? [anno: A fiber-optic cable contains glass filaments so that it can transmit coded light.] - If a fiber-optic cable were made up of filaments of black rubber, would the cable transmit light? Why or why not? [anno: If a fiber-optic cable were made up of black rubber it would not transfer the light because the black rubber would absorb the light instead of transmitting it along the cable.] - Why are scientists interested in studying the Euplectella speciosa? [anno: Scientists are interested in studying the Euplectella speciosa because they are trying to replicate the structure of the sponge's filament since the filament can conduct light but also is flexible.]
<urn:uuid:590f2bd2-5333-42dd-a543-70716b230360>
3.625
659
Knowledge Article
Science & Tech.
41.336462
The Bison parser stack can run out of memory if too many tokens are shifted and not reduced. When this happens, the parser function yyparse calls yyerror and then returns 2. Because Bison parsers have growing stacks, hitting the upper limit usually results from using a right recursion instead of a left recursion; see Recursive Rules. By defining the macro YYMAXDEPTH, you can control how deep the parser stack can become before memory is exhausted. Define the macro with a value that is an integer. This value is the maximum number of tokens that can be shifted (and not reduced) before overflow. The stack space allowed is not necessarily allocated. If you specify a large value for YYMAXDEPTH, the parser normally allocates a small stack at first, and then makes it bigger by stages as needed. This increasing allocation happens automatically and silently. Therefore, you do not need to make YYMAXDEPTH painfully small merely to save space for ordinary inputs that do not need much stack. However, do not allow YYMAXDEPTH to be a value so large that arithmetic overflow could occur when calculating the size of the stack space. Also, do not allow YYMAXDEPTH to be less than YYINITDEPTH. The default value of YYMAXDEPTH, if you do not define it, is 10000. You can control how much stack is allocated initially by defining the macro YYINITDEPTH to a positive integer. For the deterministic parser in C, this value must be a compile-time constant unless you are assuming C99 or some other target language or compiler that allows variable-length arrays. The default is 200. Do not allow YYINITDEPTH to be greater than YYMAXDEPTH. Because of semantic differences between C and C++, the deterministic parsers in C produced by Bison cannot grow when compiled by C++ compilers. In this precise case (compiling a C parser as C++) you are suggested to grow YYINITDEPTH. The Bison maintainers hope to fix this deficiency in a future release.
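A sketch of both points, assuming a grammar fragment in which `item` is defined elsewhere; the numeric values are illustrative overrides, not the defaults:

%{
/* Tuning the stack limits in the grammar prologue. */
#define YYINITDEPTH 1000    /* initial stack size */
#define YYMAXDEPTH  100000  /* ceiling before the memory-exhausted error */
%}

%%

/* Left recursion: each item is reduced as soon as it is parsed, so the
   stack stays shallow no matter how long the list is. */
list:
      /* empty */
    | list item
    ;

/* The right-recursive form, `list: item list;`, would instead keep every
   shifted item on the stack until the end of the list, so a long enough
   input exhausts YYMAXDEPTH. */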
<urn:uuid:b62d2f0a-2748-43f7-8686-8a821e2ffb8c>
2.8125
447
Documentation
Software Dev.
41.794118
Make is a tool which controls the generation of executables and other non-source files of a program from the program's source files. Make gets its knowledge of how to build your program from a file called the makefile, which lists each of the non-source files and how to compute it from other files. When you write a program, you should write a makefile for it, so that it is possible to use Make to build and install the program. Capabilities of Make - Make enables the end user to build and install your package without knowing the details of how that is done -- because these details are recorded in the makefile that you supply. - Make figures out automatically which files it needs to update, based on which source files have changed. It also automatically determines the proper order for updating files, in case one non-source file depends on another non-source file. As a result, if you change a few source files and then run Make, it does not need to recompile all of your program. It updates only those non-source files that depend directly or indirectly on the source files that you changed. - Make is not limited to any particular language. For each non-source file in the program, the makefile specifies the shell commands to compute it. These shell commands can run a compiler to produce an object file, the linker to produce an executable, ar to update a library, or TeX or Makeinfo to format documentation. - Make is not limited to building a package. You can also use Make to control installing or deinstalling a package, generate tags tables for it, or anything else you want to do often enough to make it worth while writing down how to do it. Make Rules and Targets A rule in the makefile tells Make how to execute a series of commands in order to build a target file from source files. It also specifies a list of dependencies of the target file. This list should include all files (whether source files or other targets) which are used as inputs to the commands in the rule. Here is what a simple rule looks like: target: dependencies ... commands ... (A concrete makefile is sketched below.) When you run Make, you can specify particular targets to update; otherwise, Make updates the first target listed in the makefile. Of course, any other target files needed as input for generating these targets must be updated first. Make uses the makefile to figure out which target files ought to be brought up to date, and then determines which of them actually need to be updated. If a target file is newer than all of its dependencies, then it is already up to date, and it does not need to be regenerated. The other target files do need to be updated, but in the right order: each target file must be regenerated before it is used in regenerating other targets. Advantages of GNU Make GNU Make has many powerful features for use in makefiles, beyond what other Make versions have. It can also regenerate, use, and then delete intermediate files which need not be saved. GNU Make also has a few simple features that are very convenient. For example, the -o file option says ``pretend that source file file has not changed, even though it has changed.'' This is extremely useful when you add a new macro to a header file. Most versions of Make will assume they must therefore recompile all the source files that use the header file; but GNU Make gives you a way to avoid the recompilation, in the case where you know your change to the header file does not require it. However, the most important difference between GNU Make and most versions of Make is that GNU Make is free software.
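To make the rule format above concrete, here is a minimal makefile for a small C program; the file names are invented for illustration, and note that each command line must begin with a tab character:

# Link the program from its two object files.
edit: main.o utils.o
	cc -o edit main.o utils.o

# Each object file depends on its source file and a shared header.
main.o: main.c defs.h
	cc -c main.c
utils.o: utils.c defs.h
	cc -c utils.c

# Housekeeping target: `make clean` removes the generated files.
clean:
	rm -f edit main.o utils.o

Running plain `make` builds `edit`, the first target. Touching defs.h and running `make` again recompiles both object files and relinks, while touching only utils.c recompiles just utils.o before relinking.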
Makefiles And Conventions We have developed conventions for how to write Makefiles, which all GNU packages ought to follow. It is a good idea to follow these conventions in your program even if you don't intend it to be GNU software, so that users will be able to build your package just like many other packages, and will not need to learn anything special before doing so. Downloading GNU Make GNU Make can be found on the main GNU ftp server: http://ftp.gnu.org/gnu/make/ (via HTTP) and ftp://ftp.gnu.org/gnu/make/ (via FTP). It can also be found on the GNU mirrors; please use a mirror if possible. Documentation for Make is available online, as is documentation for most GNU software. You may also find more information about Make by running info make or man make, or by looking at /usr/doc/make/, /usr/local/doc/make/, or similar directories on your system. A brief summary is available by running make --help. The main discussion list is <email@example.com>, and is used to discuss most aspects of Make, including development and enhancement requests, as well as bug reports. There is a separate list for general user help and discussion, <firstname.lastname@example.org>. GNU Make has been ported to a great many systems. One that poses unique challenges is Microsoft DOS and Windows platforms; because of that there is a GNU Make mailing list dedicated specifically to users of those platforms: <email@example.com>. Announcements about Make and most other GNU software are made on <firstname.lastname@example.org>. To subscribe to these or any GNU mailing lists, please send an empty mail with a Subject: header of just subscribe to the relevant -request list. For example, to subscribe yourself to the GNU announcement list, you would send mail to <email@example.com>. Or you can use the mailing list web interface. Development of Make, and GNU in general, is a volunteer effort, and you can contribute. For information, please read How to help GNU. If you'd like to get involved, it's a good idea to join the discussion mailing list (see above). - Test releases - Trying the latest test release (when available) is always appreciated. Test releases of Make can be found at http://alpha.gnu.org/gnu/make/ (via HTTP) and ftp://alpha.gnu.org/gnu/make/ (via FTP). - For development sources, bug and patch trackers, and other information, please see the Make project page at savannah.gnu.org. - Translating Make - To translate Make's messages into other languages, please see the Translation Project page for Make. If you have a new translation of the message strings, or updates to the existing strings, please have the changes made in this repository. Only translations from this site will be incorporated into Make. For more information, see the Translation Project. - GNU Make was written by Richard Stallman and Roland McGrath. It has been maintained and updated by Paul Smith since version 3.76 (1997). Please use the mailing lists for contact. Make is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3 of the License, or (at your option) any later version.
<urn:uuid:7d111e3d-b1d9-4b7b-94bb-64325d568682>
4.3125
1,508
Documentation
Software Dev.
51.092282
In fusion research, where deuterium-tritium is a common fuel mixture, the neutron released when (D + T) combine to form (4He + n) can activate the reactor structure. In this case the 4He is inert; the neutron sticks to another nucleus, and the neutron + nucleus reaction creates an activation product. Sometimes called radioactivation. (09 Oct 1997)
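For reference, the reaction with its well-established energy partition (the fast neutron, which carries most of the released energy, is what does the activating):

    \mathrm{D} + \mathrm{T} \;\rightarrow\; {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) + n\,(14.1\ \mathrm{MeV})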
<urn:uuid:26bde061-5797-42a2-8536-70e35a5d5958>
3.25
98
Knowledge Article
Science & Tech.
45.062917
Some people's talents are so well hidden that you overlook them completely. Sometimes it is the same with cells. According to tradition in neuroscience, brain cells fall into two broad groups - neurons and glial cells. Neurons are the smart, fast-talking communicators that process information in the brain; glial cells are the humble drudges that do little more than fill in the space between neurons. Or so generations of biology students were taught. No longer. These days glial cells are fast shaking off their old image, as neuroscientists uncover an exciting and hitherto hidden side to their personality. Far from being dumb space-fillers, glial cells are highly communicative. Not only can they pass messages to each other, they can also receive signals from - and perhaps even broadcast signals to - their supposed superiors, the neurons. And important medical implications are emerging as well. Glial cells turn out to play ...
<urn:uuid:0dbdc652-2b44-4950-ab3d-4457cb87657d>
3.265625
216
Truncated
Science & Tech.
50.516373
NASA is still a long way from returning to the moon, but it is already thinking about sending astronauts to an asteroid. Such a mission could be accomplished using the same spacecraft and launch vehicle being designed to take Americans back to the moon. "This would be the first time that humans go outside of the Earth-moon system," says Paul Abell of NASA's Johnson Space Center in Houston, Texas. About two years ago, NASA started work on a new crew capsule and rocket to send astronauts to the moon by 2020 (New Scientist, 11 August 2005, p 6). Now, Abell and his team say that the Orion capsule and Ares rocket could also take humans to one of the many asteroids with orbits that bring them near Earth. The team's analysis shows that the mission would require less fuel than going to the moon, because the vastly weaker gravity of ...
<urn:uuid:a255fd7c-1080-4a6b-a655-169b91302308>
3.84375
202
Truncated
Science & Tech.
62.130991
Two end member geodynamic settings produce the observed examples of rapid voluminous felsic (rhyolitic) magmatism through time. The first is driven by mantle plume head arrival underneath a continent and has operated in an identifiable and regular manner since at least 2.45 Ga. This style produces high temperature (≤ 1100 °C), low aspect ratio rheoignimbrites and lavas that exhibit high SiO₂/Al₂O₃ ratios, high K₂O/Na₂O ratios, and where available data exists, high Ga/Al₂O₃ ratios (> 1.5) with high F (in thousands of parts per million) and low water content. F concentration is significant as it depolymerizes the silicate melt, influencing the magmas' physical behavior during development and emplacement. These rhyolites are erupted as part of rapidly emplaced (10–15 Myr) mafic LIPs and are formed primarily by efficient assimilation-fractional crystallization processes from a mafic mantle parent. The second is driven by lithospheric extension during continental rifting or back arc evolution and is exclusive to the Phanerozoic. SLIPs (silicic large igneous provinces) develop over periods < 40 Myr and manifest in elongate zones of magmatism that extend up to 2500 km, contrasting with the mafic LIP style. Some of the voluminous felsic magmas within SLIPs appear to have a very similar geochemistry and petrogenesis to that of the rhyolites within mafic LIPs. Other voluminous felsic magmas within SLIPs are sourced from hydrous lower crust, and contrast with those sourced from the mantle. They exhibit lower temperatures (< 900 °C), explosive ignimbrites with lower SiO₂/Al₂O₃ ratios, and lower K₂O/Na₂O ratios. Rapid voluminous felsic magmatism represents both extreme examples of continental growth since the Archean, and also dramatic periods of crustal recycling and maturation during the Phanerozoic.
<urn:uuid:beecedd4-8069-4352-b97c-dd4cf5c03508>
2.90625
460
Academic Writing
Science & Tech.
23.948487
This lesson has students consider how various parts of the world and the United States are affected by climate controls such as world air currents. They will read about climate controls and will create maps showing how these controls affect the climate in various places around the country. 6, 7, 8 Biological And Life Sciences, Earth Science, Ecology, General Science, Meteorology National Geographic Society
<urn:uuid:f92e4a12-e3ac-411e-aeea-67122559f983>
3.203125
77
Content Listing
Science & Tech.
24.983125
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. January 11, 1998 Explanation: Gravity can bend light. Almost all of the bright objects in this Hubble Space Telescope image are galaxies in the cluster known as Abell 2218. The cluster is so massive and so compact that its gravity bends and focuses the light from galaxies that lie behind it. As a result, multiple images of these background galaxies are distorted into faint stretched out arcs - a simple lensing effect analogous to viewing distant street lamps through a glass of wine. The Abell 2218 cluster itself is about 3 billion light-years away in the northern constellation Draco. Authors & editors: NASA Technical Rep.: Jay Norris. Specific rights apply. A service of: LHEA at NASA/ GSFC &: Michigan Tech. U.
<urn:uuid:31c0df99-a241-4176-b496-e679d325e7f4>
3.375
186
Knowledge Article
Science & Tech.
49.875244
Hosted by The Math Forum Problem of the Week 1059 Snakes on a Plane Each of the eight rectangles in the diagram below represents a house, the door being marked by an attached semicircle. Each disk represents a gate on the boundary of the region. Each dot represents a tree: Show how to draw eight non-crossing paths leading from each door of a house to the gate of the corresponding color, with the following restrictions. The paths are broken lines consisting of only horizontal and vertical segments; i.e., a walker will make only right-angle turns. The walker can start from the door in any direction (north, east, west, or south) that doesn't take him back inside the house. So for example, one may leave the red door in a westerly, northerly, or southerly direction. The arrows are there only to point you to the gate. Each path stays in the lanes between the trees (or between the trees and the border fence). However, the narrow lanes near the houses (which may not have visible trees on both sides because of the houses) may be used. Each part of each lane contains only one path. To be precise, between any two trees that are one unit apart at most one path passes (and the same for the lanes between the trees and the border; a unit is the minimum distance between trees). This is an old Sam Loyd puzzle, popularized by Ed Pegg in his regular puzzle column (highly recommended) for the MAA. © Copyright 2006 Stan Wagon. Reproduced with permission.
<urn:uuid:3853afe1-c3d8-406b-ad43-15f04d391876>
3
356
Tutorial
Science & Tech.
59.465578
what exactly does it mean for integers to be unique? if i am supposed to prove that there exist UNIQUE positive integers m and n under a condition, can m and n be equal under some conditions? for them to be unique, does it just mean there is only one m and only one n each time the condition is met? as in, if x² = m, there would not be a unique solution m? better to post the problem - it would make the context clear, and a lot of the time the answer to what you have asked follows from there "Unique m and n" means there is only one pair of numbers, (m, n), that satisfies the condition. It is quite possible that m and n are the same. Originally Posted by mremwo If you mean a pair, (m, n), such that m² = n, then, no, that would not be unique - but not just because, for example, both (-2, 4) and (2, 4) are such pairs. It is also not unique because (2, 4) and (3, 9) are such pairs.
<urn:uuid:6fd03dfa-a974-4d61-8755-5d7b3eb8b479>
2.921875
231
Q&A Forum
Science & Tech.
72.112084
And now for something completely different… or is it? This problem comes from the ancient Greeks (Euclid, to be exact). Suppose you have a rectangle which is one unit tall and has this special property: if you cut off a square piece from the end of the rectangle, you’re left with a smaller rectangle that has the same proportions as the original rectangle. How long is the original rectangle? Maybe this picture will help you see what is going on. Starting with the big blue rectangle at the top, the white square is cut off from the left side, leaving the smaller blue rectangle — which is just a smaller copy of the big blue rectangle. E-mail me with questions, comments, or solutions, or post them here. Also, if anyone has solved either or both parts of Challenge #4, feel free to post your solution now!
<urn:uuid:63073242-f0cf-4bda-872d-6618e40b1b3f>
2.734375
177
Personal Blog
Science & Tech.
62.14764
Sean Sheldrake - May 8, 2013 Daniel Botkin - May 2, 2013 Emily Frost - Apr 23, 2013 Throughout the site, check out ‘For Educators’ on the left side of the page. There you will find lesson plans, activities, and resources related to the page topic. Browse all Educator materials by clicking here: West Indian Manatees, Trichechus manatus, are found in warm, shallow coastal ecosystems along southeastern North America and northeastern South America. They graze on plants in mangrove ecosystems and seagrass beds, occasionally eating small fish or invertebrates. However, they are sensitive to changes in their environment, such as cool water temperatures and harmful algal blooms, as well as to human threats such as speedboats, hunting, and accidental harm from fishing. TAGS: Endangered species
<urn:uuid:693c6104-5555-49d5-9774-aeb187c2b34c>
3.796875
181
Content Listing
Science & Tech.
30.9
Number 63, January 17, 1992 by Phillip F. Schewe and Ben Stein GAMMA RAY ASTROPHYSICS IS VIOLENT. In one single-week interval, for example, quasar 3C279 emitted about 10^54 ergs of energy in gamma rays, roughly the same energy you would get if all the particles in our Sun were to be annihilated into radiation. Although scientists don't yet know the nature of the engine which produces such stupendous supplies of gammas, the sources of gamma radiation can at least be inventoried and studied by the Compton Gamma Ray Observatory (GRO), launched in April 1991. GRO has now discovered, in addition to 3C279, three new quasars that emit gamma rays (although not on a continuous basis) at a rate of 10^48 ergs/sec. Other GRO results: the observation of gammas with energies as high as 1 GeV from solar flares. More than 200 gamma bursters have now been found, and their distribution across the sky remains isotropic, almost surely ruling out the notion that they originate in the plane of our galaxy. Two leading explanations for the isotropy entail interesting problems of their own: if the bursters sit in the galactic halo, then the halo would have to be much larger than thought before, more than 150,000 light years in radius; if the bursters are extra-galactic, how could it be that so many gammas had traveled so far? A third gamma pulsar, in Circinus, has been discovered; the other two are the Crab and the Vela pulsars, and, unlike these two, the Circinus pulsar beams gammas only once for each pulsar rotation instead of twice. Finally, GRO has monitored the electron-positron annihilation radiation from the center of our galaxy and found that, unlike in previous, less-sensitive measurements by other detectors, the radiation does not seem to vary with time. BEST EVIDENCE YET FOR A NEARBY BLACK HOLE. New Hubble Space Telescope (HST) images of galaxy M87 show a bright concentration of light at the core of the galaxy. The density of the light is some 300 times higher than what you would expect for a galaxy of this type. This, and the presence of an energetic, collimated jet of material extending out more than 4000 light years from the core, can best be explained by supposing that a black hole with a mass of 2 to 3 billion solar masses sits at the galactic core, according to Tod R. Lauer of the National Optical Astronomy Observatories. Upcoming spectroscopic studies will test this hypothesis by measuring the velocities of stars near the galactic nucleus. YOUNG STAR CLUSTERS IN GALAXY NGC1275. Jon Holtzman of the Lowell Observatory released high-resolution HST images of galaxy NGC1275 which show 50 bright objects believed to be massive, blue (and therefore very young, perhaps only hundreds of millions of years old) globular clusters. This is surprising because globular clusters in our own galaxy are among the oldest stars (typically 10 billion years old) in the Milky Way. Holtzman said that not only are these clusters blue, but almost precisely the same shade of blue, suggesting that the stars there formed at about the same time; this, Holtzman believes, may have been the result of two galaxies colliding or merging to form the present NGC1275.
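As a quick order-of-magnitude check on that comparison: the Sun's total rest-mass energy is

    E = M_\odot c^2 \approx (2 \times 10^{33}\ \mathrm{g}) \times (3 \times 10^{10}\ \mathrm{cm/s})^2 \approx 2 \times 10^{54}\ \mathrm{erg}

which is indeed comparable to the ~10^54 ergs quoted for 3C279.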
<urn:uuid:11664dce-097d-472a-b371-3666793b3f30>
3.71875
711
Content Listing
Science & Tech.
42.817842
polarization of liquids ...of the atomic nuclei within the molecules. This generally small effect is observed at radio frequencies but not at optical frequencies, and so it is missing from the refractive index. The third effect, orientation polarization, occurs with molecules that have permanent dipole moments. These molecules are partially aligned by the field and contribute heavily to the polarization. Thus, the dielectric...
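For reference, the temperature dependence that distinguishes orientation polarization is captured by the standard Langevin-Debye result (a textbook formula, not part of the excerpt above): a molecule with permanent dipole moment mu contributes on average

```latex
\alpha_{\text{or}} = \frac{\mu^2}{3 k_B T},
\qquad
P_{\text{or}} = \frac{N \mu^2}{3 k_B T}\, E_{\text{local}}
```

where N is the number density of dipoles, k_B is Boltzmann's constant, and T the temperature; the 1/T dependence reflects thermal agitation working against the field's aligning torque.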
<urn:uuid:ad092957-e47e-4836-aa5e-766fd4ad03fd>
3.078125
136
Knowledge Article
Science & Tech.
37.050417
It's interesting to note, also, that the way you refer to windows changes depending on where you are when you do the referring. To close the current window, for example, you can always use window.close(). However, you can also affect other windows, simply by replacing the generic name "window" with the actual name of the window. For example, let's suppose you want to close a child window (previously assigned the name "baby") from a parent window. Let's look at a simple example to see how this works (a reconstructed sketch appears after this paragraph). Here, the primary window consists of a menu containing links, each of which opens in a child window named "display". The child window can be closed either from the parent window, by clicking on the "Close Display Window" link, or from the child window itself, by clicking the "Close Me" link. Notice that in the menu page the call to close() is prefixed with the child window's name. Within the pages loaded there exists a "Close Me" link as well, which can be used to close the child window directly. In that case, since I'm closing the current window, I can use window.close() directly without worrying about the window name. Thus far, I've shown you how to control the child window from the parent. It's also possible to work the other way around, controlling the parent window from the child. Every Window object exposes an "opener" property, which contains a reference to the window that created it. Therefore, even if you don't know the name of the parent window, it's still possible to access and manipulate it via this "opener" property; the sketch below ends with a simple example that resizes the parent window from the child.
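The tutorial's original code listings did not survive extraction. The following is a minimal reconstruction consistent with the prose; the file names, link text, and the child window's features string are assumptions. It uses only standard DOM members (window.open, close, opener, resizeTo).

```html
<!-- menu.html: the parent window (reconstructed sketch) -->
<script>
  var display;  // holds a reference to the child window

  function openDisplay(url) {
    // the second argument names the child window "display"
    display = window.open(url, "display", "width=400,height=300");
  }
</script>
<a href="javascript:openDisplay('page1.html')">Page 1</a>
<a href="javascript:openDisplay('page2.html')">Page 2</a>
<!-- closing the child from the parent: note the window-name prefix -->
<a href="javascript:display.close()">Close Display Window</a>

<!-- page1.html: a page loaded into the child window -->
<p>Some content here.</p>
<!-- closing the current window: no name needed -->
<a href="javascript:window.close()">Close Me</a>
<!-- controlling the parent from the child via the "opener" property -->
<a href="javascript:window.opener.resizeTo(640, 480)">Resize Parent</a>
```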
<urn:uuid:193bd32d-1433-4701-aa24-a9519c064be2>
3.015625
388
Documentation
Software Dev.
53.687547
When a quantity can only take on integer multiples of some base value, those units are called quanta (sg. quantum), from the Latin for quantity. An important discovery of modern physics is that most if not all fields are quantized, hence the field of quantum physics. In physics, quantization refers to the reformulation of a classical theory in the formalism of quantum physics. Even though classical physics is ultimately a limiting case of quantum theory, a quantum theory is usually built up the other way around, starting from existing classical physics and deriving the more fundamental quantum counterpart. For instance, one can speak of the quantization of the electromagnetic field. Second quantization refers to a special formalism of quantum theory suited to dealing with a variable number of particles. It pertains to quantum field theory, and draws its name from a loose understanding of the formalism as quantizing, once more, an already quantized theory.
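The textbook example of such integer multiples is the energy of a single mode of the quantized electromagnetic field with angular frequency omega (a standard result, added here purely for illustration):

```latex
E_n = \left(n + \tfrac{1}{2}\right)\hbar\omega, \qquad n = 0, 1, 2, \ldots
```

Each quantum hbar*omega added to or removed from the mode corresponds to the creation or annihilation of one photon, which is exactly the variable-particle-number bookkeeping that second quantization formalizes.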
<urn:uuid:5c9d58cb-005f-4771-bb91-84bf38c1db5b>
3.46875
182
Knowledge Article
Science & Tech.
37.273684
To demonstrate the independence of vertical and horizontal motion.

Monkey gun (barrel with electromagnet)
"bullets" or "darts" (cylindrical Al plugs)
"Monkey" (bull's eye lid)
DC power supply
laser for sighting
2 table clamps

Mount Monkey Gun to table (Mara) or chalk tray (M208) using 2 clamps. Using two clamps greatly improves the stability of the aim of the gun when it is fired. Load the gun with bullet. Hang Monkey from ceiling using electromagnet. Attach laser sight to barrel of gun and aim a couple cm above bull's eye. Forcefully blow on tapered end of gun barrel. Adjust aim as necessary.

You can use 2 volunteers with the following story. Imagine blow-dart hunters trying to kill a monkey. Since the dart will fall as it travels, one would have to aim above the monkey to hit it. Figuring out how high above the monkey to aim is difficult, so our hunters employ a different strategy. The idea is to scare the monkey off the branch right when the dart is shot so that the dart and the monkey fall the same distance. One volunteer is the shooter, and one is the yeller or clapper. The shooter aims straight at the monkey and shoots just as the clapper claps.

y_b = v_y t - 0.5 g t^2 (bullet)
y_m = H - 0.5 g t^2 (monkey), where H is the original height of the monkey.

Imagine there were no gravity. Then, since the gun is aimed at the target, from geometry H = v_y t. The bullet and the monkey fall at the same rate, so the bullet is always on a collision course with the monkey. Note that the bullet is "falling" on the way up. That is, it's accelerating in the downward direction.
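A quick numerical check of the collision claim, with illustrative numbers that are assumptions (not from the demo writeup):

```python
import math

# Monkey-gun kinematics: bullet aimed straight at the monkey,
# monkey drops at t = 0. Do their heights match when the bullet arrives?
g = 9.8      # m/s^2
H = 2.0      # monkey's initial height above the muzzle line, m (assumed)
x = 4.0      # horizontal distance to the monkey, m (assumed)
v = 12.0     # muzzle speed, m/s (assumed)

theta = math.atan2(H, x)                 # aim directly at the monkey
vx, vy = v * math.cos(theta), v * math.sin(theta)

t = x / vx                               # time for bullet to cover distance x
y_bullet = vy * t - 0.5 * g * t**2
y_monkey = H - 0.5 * g * t**2
print(y_bullet, y_monkey)                # identical: vy*t == H by the aiming geometry
```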
<urn:uuid:39052a7f-5f10-46f2-9ff8-eb0e5708a7dc>
3.3125
390
Tutorial
Science & Tech.
72.263982
The generation of indexes depends on the markup inserted in the text. This markup will be processed afterwards by an external tool, which will generate the index. An example of such a tool is the collateindex.pl script (see Section B.6.2). Details about the process used to generate these indexes are shown in Section B.6.2. Index entries can have nesting levels. An index entry is marked up with the code shown in Example D-3.

Example D-3. Code for the generation of an index
<indexterm> <primary>Main level</primary> <secondary>Second level</secondary> <tertiary>Third level</tertiary> </indexterm>

It is possible to refer to chapters, sections, and other parts of the document using the attribute zone.

Example D-4. Use of the attribute zone
<section id="encoding-index"> <title>Encoding Indexes</title> <indexterm zone="encoding-index"> <primary>edition</primary> <secondary>index</secondary> </indexterm> <para> The generation of indexes depends on the markup inserted in the text. </para>

Example D-4 shows the code used to generate this edition's entry in the index. In fact, since the attribute zone is used, the indexterm could be located anywhere in the document, or even in a separate file. However, to facilitate maintenance, the entries for the index were all placed after the text to which they refer.

Example D-5. Usage of the values startofrange and endofrange in the attribute class
<para>When typing the text normally, sometimes there is the need to <indexterm class="startofrange" id="example-band-index"> <primary>examples</primary> <secondary>index</secondary> </indexterm> mark large amounts of text.</para> <para>Keep inserting the paragraphs normally.</para> <para>Until the end of the section intended to be indexed. <indexterm startref="example-band-index" class="endofrange"/> </para>
<urn:uuid:0277dfed-b3d7-4765-a396-588280cb97a4>
3.203125
451
Documentation
Software Dev.
38.548208
The scientific study of organisms living in or near water. This term is to be used for the science of 'aquatic biology' and for biological studies in fresh and brackish water. For marine biological studies, use 'marine biology'. Website of the Gulf of Mexico Integrated Science program to understand the framework and processes of the Gulf of Mexico using Tampa Bay as a pilot study. Links to publications, digital library, water chemistry maps, epiphytes, and field trips. Article from Status and Trends of the Nation's Biological Resources on the serious impacts to river systems due to damming and flow regulation, and rehabilitation, monitoring, and research on such rivers. Handbook on monitoring methods for lake management, including program design, sampling methods and protocol, biota and chemical sampling methods, laboratory methods, preservation of data and samples, glossary, and bibliography. (PDF file, 92 pp.) Homepage for the Leetown Science Center in West Virginia conducting research on aquatic and terrestrial organisms and their supporting ecosystems with links to directions, general description, library, projects, fact sheets, and facilities. Macroinvertebrate data collected by USGS or USFS from 73 sites from 2000 to 2007 and algal data collected from up to 26 sites between 2000 and 2001 in the Eagle River watershed, with emphasis on methods of sample collection and data processing. Portal for Missouri River Infolinks, a clearinghouse to multiple links giving Missouri information, photo gallery, river weather forecast, projects and features, maps, meetings, history, and science research. A national information resource for locating biogeographic accounts of non-indigenous aquatic species in the U.S. Provided are scientific reports, online/real-time queries, spatial data sets, regional contact lists, and general information. Brief descriptions of programs of research on aquatic nonindigenous plants and animals at the Florida Integrated Science Center with links to descriptions, videos, posters, and reports on various exotic plant and animals species. Description of scientific focus and research at the Northern Appalachian Field Lab on mining land use impacts and mediation, aquatic ecology, effects of dam removal, and invasive plant and animal species.
<urn:uuid:6a1703b6-0c02-4878-9f2e-5075096f84f7>
2.890625
445
Content Listing
Science & Tech.
23.693722
(Image: a fruit bat; courtesy of Corel Photography)

Tropical Rain Forest Mammals

Birds aren't the only creatures that fly through the rain forests! Several species of flying mammals live in the jungle. From the harmless fruit bat to the flying squirrel, the tropical rain forests are full of surprises! The Indian flying fox is one of the largest bats in the world. Its wings can spread out to 5 feet in width! Unlike bats in other parts of the world, these bats do not live in caves. They prefer to hang in trees during the day. Hundreds or even thousands of bats can be spotted in a single tree! Vampire bats live in the Amazon jungle in South America. These bats do in fact drink the blood of their victims. They usually attack farm animals, but have also enjoyed the blood of humans. But don't worry, vampire bats only drink a very small amount of blood. They won't kill you, or turn you into a vampire!
<urn:uuid:f9a55392-611c-42d5-85dd-ea26e2b91222>
3.25
571
Content Listing
Science & Tech.
65.897962
The Solar Wind

Space isn't empty, especially not in the Solar System: it is filled with a stream of particles given off by the Sun, called the solar wind. Well, when I say "not empty", I mean "almost not empty". If you take an imaginary cube of space, one centimeter on each side, it will contain about 10 particles of solar wind. On the Earth, if you were to fill the same cube with the atmosphere you are breathing now, it would contain about 20,000,000,000,000,000,000 particles of air. The solar wind is made up mostly of protons and electrons (the tiny bits that make up atoms), and travels very fast. A bullet from a high-powered hunting rifle travels at about 1 kilometer per second. The Space Shuttle orbits the Earth at about 8 kilometers per second. The solar wind travels at anywhere between 200 and 700 kilometers per second! So how do you actually measure this near vacuum? That's what my Ph.D. project is all about. I am designing instruments called "Top Hat Hemispherical Analysers". They are scientific experiments that measure not only how many solar wind electrons there are, but also how fast they are moving. From this, scientists can find out all sorts of important information and learn about space plasma. My group has worked on plasma analysers for five unmanned robotic missions that are in space right now.
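The density comparison in that paragraph works out to an enormous ratio; a tiny sketch using the article's own counts:

```python
# Ratio of the two number densities quoted in the article
solar_wind = 10     # particles per cm^3 of interplanetary space
air = 2e19          # particles per cm^3 of air at ground level
print(f"air / solar wind ~ {air / solar_wind:.0e}")  # ~2e18: 18 orders of magnitude
```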
<urn:uuid:9f33ec93-6159-4d46-a293-67f9fa0dfc81>
3.3125
294
Personal Blog
Science & Tech.
61.925333
The formation of planets in the process of star formation appears now to be commonplace. Ward and Brownlee, in their book Rare Earth, make the point that the formation of a planet like Earth that can support advanced life may be exceedingly rare. The search for and characterization of extrasolar planets is currently a very active area of research. At present, it appears that the closest star with a planet is Epsilon Eridani, a relatively young orange-red dwarf star only 10.5 light years from the Earth. The planet is about 1.5 times as massive as Jupiter and is in a highly elliptical orbit swinging from 2.4 to 5.8 Astronomical Units from the star, compared to Jupiter's nearly circular orbit at 5.2 AU. Another close neighbor, Barnard's Star, has also been reported to have planets, though those claims remain disputed. A circumstellar (debris) disc around Vega has been studied. According to Ward and Brownlee, the count of extrasolar planets was 198 in 2000, with the count now having exceeded 200. Ward & Brownlee
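From the article's own orbital figures for the Epsilon Eridani planet, the implied orbital elements follow directly. A sketch; only the 2.4 and 5.8 AU values come from the text, and the period estimate assumes a roughly solar-mass star:

```python
# Semi-major axis and eccentricity from perihelion/aphelion distances
r_peri, r_apo = 2.4, 5.8                 # AU, as quoted in the article
a = (r_peri + r_apo) / 2                 # semi-major axis
e = (r_apo - r_peri) / (r_apo + r_peri)  # eccentricity
print(f"a = {a:.1f} AU, e = {e:.2f}")    # a = 4.1 AU, e ~ 0.41

# Kepler's third law in yr/AU units, assuming ~1 solar mass (an approximation)
print(f"P ~ {a**1.5:.1f} yr")            # ~8 yr
```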
<urn:uuid:eec774f3-8838-468e-8895-a69eee7f0864>
3.140625
211
Knowledge Article
Science & Tech.
62.271045
There are few animals in the world that grab the imagination and sense of awe more than the jellyfish. One of the most mysterious of sea animals, the jellyfish, by its name and its body, has an almost primordial effect on the human psyche. These almost alien-looking animals are still largely a mystery to science, and they still capture the attention and sense of wonder of people both young and old. Here are just a few facts:

Jellyfish do not have specialized digestive, osmoregulatory, central nervous, respiratory, or circulatory systems. They digest using the gastrodermal lining of the gastrovascular cavity, where nutrients are absorbed. They do not need a respiratory system, since their skin is thin enough that the body is oxygenated by diffusion. They have limited control over movement, but can use their hydrostatic skeleton to accomplish movement through contraction-pulsations of the bell-like body; some species actively swim most of the time, while others are passive much of the time.

Jellyfish are composed of more than 90% water; most of their umbrella mass is a gelatinous material — the jelly — called mesoglea, which is surrounded by two layers of epithelial cells which form the umbrella (top surface) and subumbrella (bottom surface) of the bell, or body.

Jellyfish do not have a brain or central nervous system, but rather have a loose network of nerves, located in the epidermis, which is called a "nerve net." A jellyfish detects various stimuli, including the touch of other animals, via this nerve net, which then transmits impulses both throughout the nerve net and around a circular nerve ring, through the rhopalial lappet located at the rim of the jellyfish body, to other nerve cells. Some jellyfish also have ocelli: light-sensitive organs that do not form images but which can detect light, and are used to determine up from down, responding to sunlight shining on the water's surface. These are generally pigment-spot ocelli, which have some cells (not all) pigmented.
<urn:uuid:0cea93ab-f88f-44c2-a208-ac1b7b2e78ff>
3.453125
431
Personal Blog
Science & Tech.
30.719121
Small island developing states are viewed as the early-warning canary for global environmental change. But though they share global problems, many have an eye for economic growth and have become industrialised nations facing their own suite of home-grown problems. Do these islands still have the pristine environments that make them suitable outposts for global monitoring? Or have they become hotspots of environmental change in their own right? The reality is probably somewhere in between, meaning that small states must ensure that monitoring programmes inform policies on local concerns while also warning of global changes.

So who should take responsibility for monitoring change? Too often, local governments have left this up to the global community, and have failed to make the necessary commitments to ensure that local monitoring capacity complements their development aspirations. In my role in the Cook Islands government I have witnessed the changing monitoring priorities over the years and the lessons we have learned.

In the early 1990s, the priority was to monitor the health of marine ecosystems on and around the capital island Rarotonga, where the effects of coral bleaching, sedimentation and pollution were clear. But a lack of political will and resources meant that the coastal developments causing these problems were rarely addressed.

By the late 1990s, the concern had shifted to monitoring pearl aquaculture on Manihiki Atoll. Farming there went unchecked and reached unsustainable levels — and despite monitoring warnings, eventually led to the collapse of production because of an oyster disease. NZ$100 million was lost in gross revenue, and fifty per cent of the population abandoned the atoll. Local government staff did not have the scientific training needed to manage the crisis, and a lack of capacity to maintain the equipment meant that the databanks of the water-monitoring probes were full, leaving the disease outbreak unrecorded.

In Rarotonga, monitoring efforts now focus not only on marine ecosystem health, but also on public health concerns such as ciguatera fish poisoning, noxious algal blooms and high concentrations of faecal bacteria in the water. Muri lagoon, a popular beachfront tourism destination, has become a prime monitoring site.

Fortunately, lessons have been learned. The government understands the need to heed the warnings from monitoring programmes, and monitoring results are being adopted proactively rather than reactively. For example, the Ministry of Marine Resources and the Ministry of Health are assessing how illnesses such as diarrhoea, skin irritation and respiratory problems might be linked with the levels of enterococci bacteria in the water. These assessments will help the government to adopt beach monitoring and public notification standards.

Modern technology such as video transects (surveys performed with a video camera) of coral cover and automated water quality-monitoring buoys have significantly increased the efficiency of our monitoring programmes, as well as providing a continuous stream of in-situ information. But technological tools should be used to shape decisions, rather than being treated as the solution. These tools operate only as well as their operators — a truth that in the past was overlooked, with the onus put on the tools rather than their handlers. This brings us back to the issue of developing local scientific capacity.
The support provided by the global community in terms of human resources and equipment is commendable, and has undoubtedly expanded the pool of locally based researchers. However, the growth of priority areas for research is outpacing local resources. International and nongovernmental organisations working in the Pacific need to ensure that local researchers are not spread too thinly. And these organisations need to know when to switch their focus to emerging priorities. One obvious priority is to invest more in training local researchers to reduce the reliance on scientists coming from other parts of the world. Small developing island states are slowly building the confidence to merge their monitoring programmes and regulatory powers in order to protect their environments, populations and economies. They are beginning to share lessons via Internet platforms and international projects, and they are ready to participate in global monitoring programmes as both a service provider and user. Instead of pooling aid-funded projects together, Pacific governments must make a long-term commitment to maintaining a pool of local scientists and programmes, well-equipped and backed by the proper resources. Small island states must begin to see the problems in their own front yards as inherently linked to local developments, not simply the responsibility of the global community. The more complex the issues they face, the greater likelihood that they are part of the problem. The benefit of investing in monitoring is unquestionable — and it gives small states a tool with which to assert authority over their own agendas for development.
<urn:uuid:b03e4e9e-1593-48df-82bb-095f1b106ca6>
3.03125
939
Nonfiction Writing
Science & Tech.
22.249864
4 days till exams and counting down - nerves shattered already!

A closed cuboid has a rectangular base of width x cm. The length of the base is twice the width, and the volume is 1944 cm^3. The surface area of the cuboid is S cm^2.
a) Show that S = 4x^2 + 5832x^-1.
b) Given that x can vary, find the value of x that makes the surface area a minimum.
c) Find the minimum value of the surface area.

Going round in circles with this one. Or, going round in cuboids. ha ha I kill me!
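For anyone checking their working: with height h, the volume gives 2x^2 * h = 1944, so h = 972/x^2, and S = 2(2x^2) + 6xh = 4x^2 + 5832/x, which is part (a). A quick sketch of parts (b) and (c):

```python
# Minimise S(x) = 4x^2 + 5832/x:
# dS/dx = 8x - 5832/x^2 = 0  =>  x^3 = 5832/8 = 729  =>  x = 9
x = (5832 / 8) ** (1 / 3)
S = 4 * x**2 + 5832 / x
print(x, S)   # 9.0, 972.0  (d2S/dx2 = 8 + 11664/x^3 > 0, so this is a minimum)
```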
<urn:uuid:9b88d6a9-84c6-4114-bf86-74e0bacfd04d>
2.78125
132
Q&A Forum
Science & Tech.
92.894
While writing a unit on magnets, I was asked: well, you need to define magnetism and establish general understanding. The facts are easy to list, but what is magnetism? I wrote that it was a form of energy that has specific properties and characteristics. Well, how accurate am I? Is it a form of energy or just a "force"?

You have opened a great can of worms! Magnetism is a force that is interrelated with electricity. I am assuming that you are looking for explanations geared towards primary grades. May I recommend this website? It is a good website with basic information. You can look at your benchmarks for primary grade science to determine how much you need them to understand. It also lists a few books for further reading at the end. One of the books has experiments that your kids might enjoy doing. Let me know if this is enough to get you started. If not, e-mail back with the primary benchmarks and I'll see if I can help you out a little more.

You can find your answer by going to http://www.google.com and searching for "magnetism". There are a number of tutorial web pages written at all kinds of levels. I get my best information from Wikipedia. It is an open-source on-line encyclopedia that is reviewed and edited by super experts in the field. That site describes magnetism this way: "Magnetism is a category of behaviour of materials that respond at an atomic or subatomic level to an applied magnetic field."

Yours is a complicated question. I do not think it is accurate to say that magnetism is an energy. Energy is the "capacity to do work," and http://en.wikipedia.org/wiki/Work_(physics) defines work as a change (delta) in energy. In the Wikipedia magnetism article, in the "Magnetic Fields and Forces" section, we see this: "When a charged particle moves through a magnetic field B, it feels a force F given by the cross product: F = q(v × B), where q is the electric charge of the particle, v is the velocity vector of the particle, and B is the magnetic field. Because this is a cross product, the force is perpendicular to both the motion of the particle and the magnetic field. It follows that the magnetic force does no work on the particle; it may change the direction of the particle's movement, but it cannot cause it to speed up or slow down. The magnitude of the force is F = qvB sin(θ), where θ is the angle between v and B."

Since magnetism cannot speed up or slow down a particle (0 delta energy), the best we can do to define magnetism is as a "category of behavior of materials at an atomic or sub-atomic level..." That site calls magnetism a force, but I cannot agree with that, because a bar magnet that displays the characteristics of magnetism does not exert a force on anything that is not in its magnetic field.

You asked one of those questions that appears "simple" to ask but is very tough to answer in "simple" terms. As you point out, you can list the facts of how it behaves, but absent from that list is a simple explanation of "what it is." I tried a search using the search term "What is magnetism?" I too found a lot about how it behaves but very little about why it behaves the way it does. This problem is not unique to "magnetism". There are a lot of such terms, for example gravity and electrical charge. Science is good at explaining "how things work" but not so good at "what it is." I'll keep looking, but I didn't want you to think you were being ignored. You just asked a very hard question.
Not a simple answer, but a good resource is the NASA website. I think we must talk a bit in order to answer your question. First, scientists discovered that there are 4 known fundamental forces that sustain the universe:

1. Gravity - This force acts between all mass in the universe and has infinite range.
2. Electromagnetic - This one occurs between electrically charged particles. Electricity and magnetism are effects produced by this force, which also has infinite range.
3. The Strong Force - This force binds neutrons and protons together in the cores of atoms and is a short-range force.
4. Weak Force - Like the strong force, the weak force is also of short range. It is present inside the atom and is related to radioactivity and the conversion and/or formation of different particles inside the atomic nucleus.

So when you speak of magnetism or electricity, you would more properly say "electromagnetism", which results from the presence of an electromagnetic force. Electromagnetism, also called the electromagnetic interaction or electromagnetic force, is a long-range force involving the electric and magnetic properties of elementary particles. Particles with the same charge repel each other and attract when they have opposite charges. It explains atomic structure (positive protons and negative electrons) and the properties of light and other forms of electromagnetic radiation. Its effects are easily observed, such as light and heat. Magnetism occurs when materials exert an attractive or repulsive force on other materials, as with a magnet. Electricity is related to charges, and both electrons and protons carry a charge. The amount of the charge is the same for each particle, but opposite in sign. Electrons carry a negative charge while protons carry a positive charge. The protons are basically trapped inside the nucleus and can't escape. As a result, it is moving electrons that are primarily responsible for electric current and for magnetism in everyday materials.

Answering your question: electromagnetism is a force, not energy. But it is also actually a secondary energy source, also referred to as an energy carrier. That means that we can get electricity from the conversion of primary sources of energy, such as coal, nuclear, or solar. Thanks for asking NEWTON! Dr. Mabel Rodrigues

You are hitting at some interesting material. There is much confusion between energy and force. Simply put, force is a push or pull, while energy is a conserved quantity that flows when there is a change. Contrary to what is commonly written about energy, there is only one kind of energy: energy. It is all measured in joules. We describe energy by its location. For example, if we lift a book from a table into the air, the energy is transferred from us to the gravitational field. We can describe this by U = mgh. If the object is moving, we call it kinetic energy (energy stored in the motion of the body), described by the equation K = (1/2)mv^2. Force is not a conserved quantity. When the force is no longer applied, it is not transferred anywhere else; it just ceases to exist. This never happens with energy.

Magnetism is a field that can exert forces on objects. This, in turn, can result in a transfer of energy into or out of the magnetic field.

There are many web sites that address or identify science misconceptions. Here are a few: There are many other sites in the Physics Education Research and Science Education Research communities. Frankly, you may harbor some of these misconceptions. I know I still harbor some. Teach the best you can while you learn about the misconceptions and try to devise strategies to confront them.
I know that I have taught misconceptions, and I work diligently to uncover my own misconceptions to better serve my students. I encourage you to keep on learning!
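The quoted "no work" property of the magnetic force is easy to demonstrate numerically. A minimal sketch (the particle and field values are arbitrary assumptions):

```python
import numpy as np

# Lorentz magnetic force F = q (v x B) on a proton (illustrative values)
q = 1.602e-19                     # charge, C
v = np.array([2.0e5, 0.0, 0.0])   # velocity, m/s
B = np.array([0.0, 0.0, 0.5])     # magnetic field, T

F = q * np.cross(v, B)
print(F)               # force in N: perpendicular to both v and B
print(np.dot(F, v))    # power delivered = F . v = 0  ->  no work done
```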
<urn:uuid:f44a42a0-544f-42b5-8727-d83d66072d17>
3.4375
1,691
Q&A Forum
Science & Tech.
54.314048
The lettuce sea slug (Elysia crispata) has enlarged fleshy appendages that are folded over one another, with colors ranging from blue to green, with purple and red lining. The green coloring is what gives this mollusk its common name, resembling a head of leafy green lettuce. The sea slug eats green algae, but not all of the algae it eats is digested. Some of the green algae gets shuttled off to make a home in those fleshy appendages (called parapodia). The algae's chloroplasts, which convert sunlight into energy, can then live in the parapodia for up to...
<urn:uuid:0cc64561-30dd-433b-8096-2ea3325f5db4>
3.640625
210
Content Listing
Science & Tech.
56.033796
ESO Astronomical Glossary - M

Macrolens
A macrolens is a type of gravitational lens where the lensing object is very massive, such as a galaxy or a cluster of galaxies. Macrolenses result in image warps, or even multiple images, of faint distant sources that are strong enough to detect on images. Studies of such lensing effects can give information on the distribution of matter, and dark matter in particular, in the Universe.

Main sequence
A term denoting stars that are observed to be in the longest stable section of their lifetime, where hydrogen is fused in the core in a stable chain reaction. Sometimes abbreviated 'MS', the expression 'main sequence' comes from the characteristic pattern or sequence these stars form in a plot of stars' colour and magnitude (this plot is called the Hertzsprung-Russell diagram). The typical colour-magnitude relationship observed in main sequence stars is not valid for newly formed stars, and breaks down again once the star enters the red giant stage, which signals the beginning of its death.

Mars
Mars is the fourth planet from the Sun in our solar system. It is named after Mars, the god of war in Roman mythology. Its red colour as viewed in the night sky earned it the name "The Red Planet." Mars has two moons, Phobos and Deimos, which are both small and oddly shaped, possibly being captured asteroids.

Massive Compact Halo Objects (MACHOs)
MACHO is a collective term for objects that reside in the halo of a galaxy and which do not emit enough radiation to be detected from Earth. Examples include brown dwarfs. MACHOs are thought to make up part of the dark matter in the galaxy. In the absence of radiation to detect, MACHOs can be spotted using the technique of microlensing.

Melipal
Melipal is the name of the third 8.2-m Unit Telescope (UT3) of ESO's Very Large Telescope (VLT) at the Paranal Observatory in Chile. In the Mapuche language Melipal means 'The Southern Cross'.

Mercury
Mercury is the closest planet to the Sun, and the smallest planet in the solar system. The planet remains comparatively little-known: the only spacecraft to approach Mercury was Mariner 10 from 1974 to 1975, and only 40-45% of the planet has been mapped. Mercury has no natural satellites and no atmosphere. A NASA mission, MESSENGER (MErcury Surface, Space ENvironment, GEochemistry and Ranging), was launched in 2004 and will perform several flybys in 2008-2009.

Metallicity
The metallicity of an object is the proportion of its material that is made up of metals. In astronomy, the term 'metals' is used for any element heavier than hydrogen or helium. The early Universe contained predominantly hydrogen, helium and lithium, and heavier elements were only formed when stars started to fuse these elements into heavier byproducts, such as carbon and oxygen. Therefore studying the metallicity of stars in galaxies and clusters can give astronomers information on their stellar history.

Meteoroid
Meteoroids are tiny stones or pieces of metal that travel through space.

Microlensing
Microlensing is a type of gravitational lens where the foreground lensing object is of low mass. When light from a star is bent around the object, it will cause a temporary brightening, or magnification, of the star. Microlensing is predominantly used in the search for dark matter in the Milky Way galaxy and its nearest neighbours, as it allows us to detect objects that do not emit enough light to be imaged directly when they act as lenses - for example brown dwarfs. Such objects are collectively known as MACHOs. Microlensing has also become an important method of detecting exoplanets in recent years, as light lensed by a star with an exoplanet appears different to that lensed by a single star. In January 2006, the Danish 1.54-m telescope at ESO's La Silla observatory contributed to the detection of a low-mass exoplanet using the microlensing method.

Microwaves
Microwaves are an energetic form of radio waves. Their wavelength ranges from around 1 mm to 30 cm. Microwave astronomy is very important in the context of the cosmic microwave background radiation.

Milky Way galaxy
The Milky Way galaxy is a spiral galaxy, of which our Sun and solar system are a small part. All of the stars that we can see with the unaided eye are in the Milky Way galaxy. The plane of the Milky Way looks like a faint band of white in the night sky - hence the name 'milky'. The Milky Way measures about 100,000 light-years in diameter and contains around 200 billion stars. The galaxy is the second largest in the local galactic neighbourhood, called the Local Group.

Milliarcsecond
One milliarcsecond (mas) is one thousandth of an arcsecond (q.v.).
<urn:uuid:65723403-38e6-47e9-9dbc-aca4dac2d61c>
3.4375
1,001
Structured Data
Science & Tech.
46.784395
Phobos, the larger moon of Mars, has a surface covered in craters, dust, boulders – and a great many semi-parallel and intersecting grooves. One theory for the grooves' origin, proposed in 2011, holds that they are impact scars from chains of debris thrown into space by big meteorite impacts on Mars itself. Writing in Planetary and Space Science, Kenneth Ramsley and James Head (Brown University) say "nix" on this idea. The core of their finding is that the Phobos grooves are too perfectly shaped — too neat and clean — to be the result of ejecta from Martian impacts. "We strongly suggest that no impact event on Mars produces enough focused material to form grooves as impact chains on Phobos," the team says. "At the altitude of Phobos, Martian impact debris disperses to a huge volume in the space above Mars. By the time it reaches the altitude of Phobos, the debris is far too thinly distributed to produce more than a few stray impacts on Phobos, if any at all." To reach this conclusion, they undertook extensive computer simulations to see how much debris would be ejected from Mars and how big the pieces would likely be, how far ejected pieces would travel, how much they would disperse as they flew, and where they would go within the Mars-Phobos system. "On the basis of our analysis," they write, "we find that six major predictions of the hypothesis are not consistent with a wide range of Mars ejecta emplacement models and observations." These failed predictions include: • The largest family of grooves can't be emplaced by any valid trajectory from Mars in its present-day or ancient orbit. • To make families of parallel grooves over most of Phobos (as is seen), fragments must have nearly identical diameters and be ejected in grid patterns with virtually no dispersion. • Due to Phobos' rough and uneven surface, grid patterns of incoming debris would strike the ground more unevenly than is seen, disrupting the grooves' linearity. • Grooves are found on the trailing end of Phobos in places where no trajectory from Mars to Phobos is possible. The researchers also compared the Phobos grooves with chains of known secondary craters on Mercury and the Moon. They found that most Mercurian and lunar secondary craters are as large as the bigger (non-groove) craters on Phobos and far larger than the pits seen in the groove networks. They also observe that blasting chains of craters across Phobos would also throw secondary plumes into orbit around Mars. Such debris would resettle back onto Phobos over roughly 10,000 years. As Ramsley and Head explain, "This would substantially add to the effects of space weathering and potentially bury most evidence of the initial groove-forming impacts."
<urn:uuid:ca210657-9894-48aa-a700-e7d5011b2456>
4.59375
608
Academic Writing
Science & Tech.
40.492719
After a fun-filled summer at camp in Maine, Amy is back in school. She's ready to take over the science lab once again and perform some quite amazing experiments. She specializes in discrepant events. Watch and try to predict what will happen. Expect to be quite amazed. This month, Amy tests the strength of ordinary newspaper.

A thin board (such as a meter stick)
A few (3-5) sheets of newspaper

NOTE: Please exercise caution. An adult should be present during this experiment. If the board is not hit quickly and firmly, the board may jump and injure the experimenter. If performed properly, there should be no pain to the hand.

1. Lay the thin board on a table so that one end of the board hangs over the edge of the table by about six to ten inches.
2. Open and spread the sheets of newspaper over the rest of the board on the table. Line the edge of the newspaper up with the edge of the table.
3. Predict what will happen when you strike the board.
4. With your hand, strike a quick blow against the overhanging board near the edge of the table.

Never underestimate the strength of air pressure. If the newspapers have been carefully spread over the board, the air pressure above will hold the newspaper in place when the board is hit. When the board is hit, the part on the table under the newspaper starts to move up, creating a space with no air (a vacuum). If the air cannot get under the paper fast enough to fill the vacuum, it exerts its pressure on the paper instead. This adds strength to the paper. The real trick to this experiment is to make the strike against the board quick and firm. Otherwise, you'll find that the board jumps and merely moves the newspaper.

If you'd like, you can send a message to Amy.
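The explanation is quantitative at heart: atmospheric pressure acting over one opened sheet of newspaper is a surprisingly large force. A rough check (the sheet dimensions are assumptions, not from the article):

```python
# Force of atmospheric pressure on one opened sheet of newspaper
P_ATM = 101_325             # Pa (standard atmosphere)
width, height = 0.75, 0.60  # m, assumed full-spread broadsheet size

area = width * height
force = P_ATM * area
print(f"{force/1000:.0f} kN, i.e. roughly {force/9.8:.0f} kgf")  # ~46 kN, ~4700 kgf
```

Of course the paper never feels anywhere near that net force at rest, since air pushes equally on both sides; the demo works because a fast strike creates a momentary pressure imbalance before air can rush in underneath.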
<urn:uuid:1fd8d2ff-8731-47c5-aab8-7601af0b8e72>
3.46875
398
Tutorial
Science & Tech.
71.91103
Play now 18 mins

Finding the Higgs boson on July 4th 2012 put in place the last piece of physicists' Standard Model of matter. But Tracey Logan discovers there's much more for them to find out at the Large Hadron Collider. To start with, there is a lot of work to establish what kind of Higgs boson it is. Tracey visits CERN and an experiment called LHCb, which is trying to find out why there's a lot more matter than anti-matter in the universe today. Dr Tara Shears of Liverpool University is her guide. Tracey also talks to physicists who are hoping to find dark matter in the debris of the collisions at the LHC. Scientists know there's plenty of dark matter in the universe, from its effects on galaxies, but they don't know what it is. Tracey discovers that this fact isn't stopping the particle physicists carrying out experiments.

(Image: Scientists in front of a screen at CERN during the restart of the Large Hadron Collider in 2009, Credit: AFP/Getty)
<urn:uuid:5aaa44d3-845f-4e71-8290-794202a7ae77>
3.515625
288
Truncated
Science & Tech.
62.595099
The Lunar Late Heavy Bombardment
- The small craters can be saturated on the oldest surfaces, but probably not the basins.
- There are more than 40 basins (D > 300 km) on the Moon.
- They are all older than ~3.8 Gyr (Wilhelms 1987).
- The only good ages we have are for the two youngest: Imbrium (3.85 Ga) and Orientale (3.82 Ga).
- Nectaris is either ~3.9 Ga or 4.1 Ga.
- This has led to controversy over whether the lunar basin-forming impacts at ~3.9 Ga were:
  1. The tail end of terrestrial planet accretion, or
  2. A spike in the impact rate (Tera et al. 1974).
- Two recent advances strongly support the idea of a spike:
  - Bottke et al. (2006) show terrestrial-region impactors don't last 700 Myr. They find that we can rule out (1) at >3σ based on Imbrium and Orientale alone.
  - D. Trail & S. Mojzsis looked at very old terrestrial samples and see no impacts at ~4.0 Ga.
- If a spike, the most likely cause is a change in the dynamical state of the planetary system.
<urn:uuid:d8446f20-6c3f-4237-aa76-82b3b6f11438>
3.546875
278
Academic Writing
Science & Tech.
78.701404