I came across a new article on the evolution of photosynthesis. There are a number of articles on this topic; I will post them as I rediscover them, and others may have come across interesting stuff as well.

Reaction centres: the structure and evolution of biological solar power
Peter Heathcote, Paul K. Fyfe and Michael R. Jones
Trends in Biochemical Sciences 2002, 27:79-87

Reaction centres are complexes of pigment and protein that convert the electromagnetic energy of sunlight into chemical potential energy. They are found in plants, algae and a variety of bacterial species, and vary greatly in their composition and complexity. New structural information has highlighted features that are common to the different types of reaction centre and has provided insights into some of the key differences between reaction centres from different sources. New ideas have also emerged on how contemporary reaction centres might have evolved and on the possible origin of the first chlorophyll–protein complexes to harness the power of sunlight.

[...I'll quote the last part of the review to give a sense of where things are at...]

Common structural blueprint

The crystallographic information summarized in Fig. 4 highlights structural features that are common to all types of reaction centre [3,10,25]. At the heart of each complex is a core domain consisting of an arrangement of two sets of five transmembrane helices. This protein scaffold encases six (bacterio)chlorin and two quinone cofactors that are arranged in two pseudosymmetric membrane-spanning branches. These cofactors catalyse the photochemical transmembrane electron transfer reaction that is the key to the photosynthetic process. Added to this basic structural blueprint are a variety of protein–cofactor structures, such as antenna complexes, the oxygen-evolving complex or Fe–S centres, which represent further adaptations.
In particular, in the PSII reaction centre and all known Type I reaction centres, the core electron transfer domain is flanked by two homologous antenna domains, each consisting of a bundle of six membrane-spanning helices binding antenna pigments, and antenna chlorophylls are also bound to the ten-helix core (Fig. 4). These antenna domains are not present in purple bacteria such as Rhodobacter sphaeroides or green filamentous bacteria such as Chloroflexus.

Which is the oldest reaction centre?

The realization that all reaction centres are based on a common design has provoked much discussion over the evolutionary links between the different complexes and the nature of the ancestral reaction centre. This is a challenging topic because it is clear that chlorophyll-based photosynthesis is a very old process that appeared during the first few hundred million years of evolution. One approach to this problem has been to examine which of the five distinct groups of photosynthetic bacteria represents the oldest photosynthetic lineage, through phylogenetic studies of both photosynthetic and non-photosynthetic proteins. However, such studies have produced conflicting results, with green filamentous bacteria, heliobacteria and purple bacteria all being identified as the oldest lineage in different studies [39–42]. The problem of tracing the evolutionary development of modern-day photosystems is not helped by the variety and complexity exhibited by photosynthetic organisms, which indicate some interchange of photosynthetic components by lateral gene transfer between groups during the course of evolution [41,43]. At present, it is probably prudent to conclude that this approach requires additional data and a more extensive analysis.

Primordial reaction centre: Type I, Type II or both?
Setting aside the question of which is the oldest photosynthetic organism, several models have been proposed to account for the development of modern-day reaction centres from simpler ancestors. Most recently, a new evolutionary scheme for contemporary reaction centres has been proposed that envisages the ancestral reaction centre as homodimeric, with the three-domain antenna–core–antenna organization seen in extant Type I complexes. It is proposed that this ancestral reaction centre had two membrane-spanning electron transfer chains, each terminating in a loosely bound quinone that could dissociate when reduced and move into the membrane pool, and that it occupied a membrane that had already developed a fully functional anaerobic respiratory chain, in accordance with the 'respiration early' hypothesis. The proposed ancestral reaction centre therefore had a mixed character, with the three-domain organization and (possibly) symmetric electron transfer characteristic of contemporary Type I reaction centres, but a capacity to reduce the intramembrane quinone pool, as seen in contemporary Type II reaction centres.

The future ... and the dim, distant past

The increasingly detailed crystallographic information now available for the cyanobacterial Type I and Type II reaction centres is provoking renewed interest in the detailed mechanism of these elegant transducers of energy. In particular, the first crystallographic glimpses of the machinery for oxygen evolution are both intriguing and exciting, and will trigger much re-evaluation of our current understanding of a reaction that is of obvious importance to aerobes such as ourselves. It is also becoming apparent that a detailed understanding of the quinone chemistry of the homodimeric reaction centres from heliobacteria and green sulfur bacteria might help to focus ideas about the nature of the ancestral reaction centre and the evolutionary route that has led to contemporary complexes.
Finally, peering even further back in evolutionary time, an intriguing question that remains relatively unexplored concerns the origins of the ancestral reaction centre. What was the function of this (bacterio)chlorophyll-containing membrane protein before it evolved into a system capable of harnessing light energy? One suggestion is that early organisms used pigment–protein complexes to protect themselves against the ultraviolet (UV) radiation that bathed the surface of the planet before the development of the atmospheric ozone layer. Such proteins might originally have operated by absorbing high-energy UV photons and dissipating the energy through internal conversion between the (bacterio)chlorophyll Soret absorbance transition and the visible-region absorbance bands, before emitting the energy as a much more benign visible or near-infrared photon. Light-activated electron transfer might originally have developed as an extension to this photoprotective function, excited-state energy being converted first into the energy of a charge-separated state (similar to the P870+HA- state formed in the purple bacterial reaction centre) and subsequently lost as heat as the charge-separated state recombines (as occurs in purple bacterial reaction centres when forward electron transfer from HA- is blocked). Another suggestion is that photosynthetic function evolved from bacteriochlorophyll-containing proteins involved in infrared thermotaxis. Whatever the truth, addressing these questions requires a journey back to an early stage in the evolution of life, and presents a fascinating challenge.

References
Baymann F. et al. (2001) Daddy, where did PS(I) come from? Biochim. Biophys. Acta, 1507:291-310.
Nisbet E.G. and Sleep N.H. (2001) The habitat and nature of early life. Nature, 409:1083-1091.
Olsen G.J. et al. (1994) The winds of (evolutionary) change: breathing new life into microbiology. J. Bacteriol., 176:1-6.
Gupta R.S. et al. (1999) Evolutionary relationships among photosynthetic prokaryotes (Heliobacterium chlorum, Chloroflexus aurantiacus, cyanobacteria, Chlorobium tepidum and proteobacteria): implications regarding the origin of photosynthesis. Mol. Microbiol., 32:893-906.
Xiong J. et al. (1998) Tracking molecular evolution of photosynthesis by characterization of a major photosynthesis gene cluster from Heliobacillus mobilis. Proc. Natl. Acad. Sci. U. S. A., 95:14851-14856.
Xiong J. et al. (2000) Molecular evidence for the early evolution of photosynthesis. Science, 289:1724-1730.
Blankenship R.E. (2001) Molecular evidence for the evolution of photosynthesis. Trends Plant Sci., 6:4-6.
Castresana J. et al. (1994) Evolution of cytochrome oxidase, an enzyme older than atmospheric oxygen. EMBO J., 13:2516-2525.
Mulkidjanian A.Y. and Junge W. (1997) On the origin of photosynthesis as inferred from sequence analysis. Photosynth. Res., 51:27-42.
Nisbet E.G. et al. (1995) Origins of photosynthesis.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.

2011 July 3

Explanation: The closest star system to the Sun is the Alpha Centauri system. Of the three stars in the system, the dimmest -- called Proxima Centauri -- is actually the nearest star. The bright stars Alpha Centauri A and B form a close binary, separated by only 23 times the Earth-Sun distance, slightly greater than the distance between Uranus and the Sun. In the above picture, the brightness of the stars overwhelms the photograph, causing an illusion of great size, even though the stars are really just small points of light. The Alpha Centauri system is not visible from much of the northern hemisphere. Alpha Centauri A, also known as Rigil Kentaurus, is the brightest star in the constellation of Centaurus and is the fourth brightest star in the night sky. Sirius is the brightest even though it is more than twice as far away. By an exciting coincidence, Alpha Centauri A is the same type of star as our Sun, causing many to speculate that it might contain planets that harbor life.

Authors & editors: Jerry Bonnell (UMCP)
NASA Official: Phillip Newman. Specific rights apply.
A service of: ASD at NASA / GSFC & Michigan Tech. U.
photoreception and image formation

...Some water bugs (e.g., Notonecta, or back swimmers) use curved surfaces behind and within the lens to achieve the required ray bending, whereas others use a structure known as a lens cylinder. Similar to fish lenses, lens cylinders bend light, using an internal gradient of refractive index, highest on the axis and falling parabolically to the cylinder wall. In the 1890s...
Is it getting hot in here, or is it just us? Check out a few DVDs on global warming, and decide for yourself.

- QC981.8.G56 D5625 2006x (DVD): Examines studies showing that pollution is not only the cause of global warming but is also responsible for preventing sunlight from reaching the earth's surface, thereby masking the degree of global warming's actual impact.
- QC981.8.G56 A144 2008x (DVD): Presents the argument that the current trend of global warming is the result of human activity. Examines ways that human populations can work to reverse this trend.
- QC981.8.G56 E943 2007x (DVD): A documentary on the gap between the scientific community's knowledge about global warming and the public's lack of understanding, efforts that have been undertaken to bridge the gap, and the Bush administration's attempts to obfuscate the realities of the matter.
- QC981.8.G56 G581325 2005x (DVD): Examines the dangers posed to the planet by global warming.
- QC981.8.G56 G5839 2008x (DVD): Scientists explore the skies to examine the warming effects of the sun and dig deep into the Earth to study continental movement and the volatile activity at the planet's core. Experts speculate on how natural events, including volcanic eruptions and massive meteor impacts, have affected temperatures and weather systems over the planet's 600-million-year history.
- QC981.8.G56 I5335 (DVD): Former Vice President Al Gore explains the facts of global warming, presents arguments that the dangers of global warming have reached the level of crisis, and addresses the efforts of certain interests to discredit the anti-global-warming cause.
- QC981.8.G56 S557 2008x (DVD): Discusses why many scientists believe that the Earth's average temperature could rise by as much as six degrees Celsius by 2100. Explores what each rising degree could mean for the future of humanity and our planet. Illustrates how global warming has already affected the reefs of Australia, the ice fields of Greenland, and the Amazonian rain forest. Explains what's real, what's still controversial, and how existing technologies and remedies could help dial back the global thermometer.
- QC981.8.G56 T6625 2006x (DVD): Scientists in various fields explain the causes and dangers of global warming and discuss ways that it can be reversed.
- QC981.8.G56 W376 2008x (DVD): In 1995, an iceberg the size of Rhode Island broke off from the Larsen ice shelf along the Antarctic coast, hinting at a potential meltdown. If the entire West Antarctic Ice Sheet were to follow suit, sea level would rise by almost 20 feet, wiping out entire cities, redrawing coastlines, and leaving millions homeless.
- QC981.8.G56 W497 2007x (DVD): Examines the opposing sides of the global warming debate.

Browse the Catalog

For more information about global warming and climate change, look under the following subjects:
- Climatic changes
- Endangered ecosystems
- Endangered species
- Fossil fuels -- Environmental aspects
- Global temperature changes
- Global warming
- Greenhouse effect, Atmospheric
The LHC (Large Hadron Collider) is scheduled to come online in August 2008 in Europe. For seven thousand scientists, it is finally show time after 10 years and 8 billion dollars. It might end the quest to find the Higgs boson, aka the "God particle", which is thought to give mass to itself and to all elementary subatomic particles. Peter Higgs proposed the Higgs field/mechanism in 1964 and predicted the existence of a new particle, the Higgs boson. Other expectations of the LHC include elucidating the nature of dark matter/energy and gravity.

Here is a wiki on the LHC:

The LHC is being funded and built in collaboration with over two thousand physicists from thirty-four countries as well as hundreds of universities and laboratories. When activated, it is theorized that the collider will produce the elusive Higgs boson, the observation of which could confirm the predictions and "missing links" in the Standard Model of physics and could explain how other elementary particles acquire properties such as mass.

Another article here, from Scientific American:

Physicists expect the LHC to bring about a new era of particle physics in which major conundrums about the composition of matter and energy in the universe will be resolved.

Who is Peter Higgs? Peter Ware Higgs, FRS, FRSE (born May 29, 1929), is an emeritus professor at the University of Edinburgh. It was at Edinburgh that he first became interested in mass, developing the idea that particles were weightless when the universe began, acquiring mass a fraction of a second later as a result of interacting with a theoretical field now known as the Higgs field. Higgs postulated that this field permeates space, giving all elementary subatomic particles that interact with it their mass.

Why is the Higgs particle nicknamed the "God particle"?
Higgs is reported to be displeased that the particle is nicknamed the "God particle", as he is an atheist. The nickname is usually attributed to Leon Lederman, but it is actually the result of censorship by Lederman's publisher: Lederman originally intended to call it the "goddamn particle" because of its elusiveness.

Here is the wiki on the Higgs boson:

The Higgs boson is a hypothetical massive scalar elementary particle predicted to exist by the Standard Model of particle physics. It is the only Standard Model particle not yet observed, but it would help explain how otherwise massless elementary particles still manage to give matter its mass.

There are some concerns that the LHC will unleash mini black holes, magnetic monopoles or strangelets which could destroy the earth:

"The main danger could be just behind our door, with the possible death of 6,500,000,000 people and complete destruction of our beautiful planet. Such a danger shows the need for a far larger study before any experiment! The CERN study presents risk as a choice between a 100% risk or a 0% risk. This is not a good evaluation of a risk percentage! Our desire for knowledge is important, but our desire for wisdom is more important and must take precedence. The precautionary principle indicates not to experiment. The politicians must understand this evidence and stop these experiments before it is too late!"

Is the LHC a doomsday machine, or is it the experiment that will determine the nature of physical reality? Exciting stuff indeed! What do you think?
Since I am a mathematician, I give a precise answer to this question. Thanks to Kurt Gödel, we know that there are true mathematical statements that cannot be proved. But I want a little more than this. I want a statement that is true, unprovable, and simple enough to be understood by people who are not mathematicians. Here it is. Numbers that are exact powers of two are 2, 4, 8, 16, 32, 64, 128 and so on. Numbers that are exact powers of five are 5, 25, 125, 625 and so on. Given any number such as 131072 (which happens to be a power of two), the reverse of it is 270131, with the same digits taken in the opposite order. Now my statement is: it never happens that the reverse of a power of two is a power of five. The digits in a big power of two seem to occur in a random way without any regular pattern. If it ever happened that the reverse of a power of two was a power of five, this would be an unlikely accident, and the chance of it happening grows rapidly smaller as the numbers grow bigger. If we assume that the digits occur at random, then the chance of the accident happening for any power of two greater than a billion is less than one in a billion. It is easy to check that it does not happen for powers of two smaller than a billion. So the chance that it ever happens at all is less than one in a billion. That is why I believe the statement is true. But the assumption that digits in a big power of two occur at random also implies that the statement is unprovable. Any proof of the statement would have to be based on some non-random property of the digits. The assumption of randomness means that the statement is true just because the odds are in its favor. It cannot be proved because there is no deep mathematical reason why it has to be true. (Note for experts: this argument does not work if we use powers of three instead of powers of five. In that case the statement is easy to prove because the reverse of a number divisible by three is also divisible by three. 
Divisibility by three happens to be a non-random property of the digits). It is easy to find other examples of statements that are likely to be true but unprovable. The essential trick is to find an infinite sequence of events, each of which might happen by accident, but with a small total probability for even one of them happening. Then the statement that none of the events ever happens is probably true but cannot be proved.
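The "easy to check" claim for powers of two smaller than a billion takes only a few lines of code to verify (a sketch; the helper name is ours):

```python
def is_power_of_five(n: int) -> bool:
    # strip all factors of 5; a pure power of five reduces to 1
    while n % 5 == 0:
        n //= 5
    return n == 1

# test every power of two below a billion
hits = []
p = 2
while p < 10**9:
    if is_power_of_five(int(str(p)[::-1])):  # reverse the decimal digits
        hits.append(p)
    p *= 2
print(hits)  # [] -- no reversed power of two in this range is a power of five
```

The same loop, run far beyond a billion, continues to find nothing, which is exactly what the probabilistic argument predicts.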
subduction zone, a large-scale, narrow region in the earth's crust where, according to plate tectonics, masses of spreading oceanic lithosphere bend downward into the earth along the leading edges of converging lithospheric plates, slowly melting at a depth of about 400 mi (640 km) and becoming reabsorbed. Subduction zones are usually marked by deep ocean trenches that often exceed 6 mi (10 km) in depth, compared to the ocean's overall depth of 2 to 4 mi (3 to 5 km). A pattern of earthquakes of shallow, intermediate, and deep focus occurs along the same angle as the descending plate, which is steeply inclined (30°–60°) toward the continent behind the trench, in a zone called the Benioff Zone, discovered by the U.S. seismologist Hugo Benioff. This earthquake pattern enables geophysicists to trace the descending plate to depths of 600 to 700 km (370–440 mi), where temperatures are thought to be between 1,000°C and 2,000°C (1,800°–3,600°F). As the oceanic plate descends, friction between the two plates probably causes partial melting of the descending plate, forming a magma of andesitic composition that rises along fractures. If the overlying crustal plate is oceanic, the magma may erupt to form volcanic island arcs, such as Japan or the Aleutians. If the overlying plate is continental, a line of batholiths and volcanoes may be created, as in the Coast Ranges of Canada and the western United States. See continent; continental drift; seafloor spreading.

The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
A glass rod is charged to +5.0 nC by rubbing. How many electrons have been added?

If the glass rod has gained a positive charge by rubbing, then it has actually LOST electrons. It has not gained any electrons, but has instead transferred electrons to the item it was rubbed with.

I understand; however, the question asks how many were lost. The answer is 3.1 x 10^10, but I do not know how to get this number.

B.S. Chemistry and Biology minor
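The figure 3.1 x 10^10 follows directly from dividing the net charge by the elementary charge: each electron removed leaves behind +e of net charge. A quick sketch of the arithmetic (variable names are ours):

```python
Q = 5.0e-9            # net positive charge on the rod, in coulombs
e = 1.602176634e-19   # elementary charge, in coulombs

n_removed = Q / e     # each missing electron contributes +e of net charge
print(f"{n_removed:.2e}")  # ~3.12e+10 electrons removed
```

Rounded to two significant figures this gives the stated 3.1 x 10^10 electrons lost.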
Joined: 16 Mar 2004 | Posted: Wed Sep 20, 2006 10:50 am | Post subject: Researchers Use DNA to Direct Nanowire Assembly & Growth

Brown University Researchers Use DNA to Direct Nanowire Assembly and Growth

The coding qualities of DNA have been harnessed by a research team from Brown University to create zinc oxide nanowires on top of carbon nanotube tips. The journal 'Nanotechnology' explains that DNA has never before been used to direct the assembly and growth of complex nanowires. The tiny new structures can create and detect light and can also generate electricity when mechanical pressure is applied. The optical and electrical properties of the wires allow for many applications, from fibre-optic networks and computer circuits to medical diagnostics and security sensors.

Adam Lazareck, a graduate student in Brown's Division of Engineering, stated that "the use of DNA to assemble nano-materials is one of the first steps toward using biological molecules as a manufacturing tool. If you want to make something, turn to Mother Nature. From skin to sea shells, remarkable structures are engineered using DNA."

The work is an example of "bottom up" nano-engineering, in which engineers experiment with ways to get biological molecules to do their own assembly work rather than moulding or etching materials into smaller components. Under the right chemical conditions, molecular design and machinery can be used to create minuscule devices and materials.

The team of engineers and scientists successfully harnessed DNA to provide instructions for this self-assembly. The new structures, created in the Xu laboratory, are the first example of DNA-directed self-assembly and synthesis in nano-materials. The Xu laboratory is the first in the world to make uniform arrays of carbon nanotubes. Lazareck, with collaborators at Brown and Boston College, built on this platform to make their structures.
They started with arrays of billions of carbon nanotubes of the same height and diameter, evenly spaced on a base of aluminium oxide film. On the tips of the tubes, they introduced a tiny synthetic snippet of DNA carrying a sequence of 15 "letters" of genetic code. It was chosen because it attracts only one complement: another sequence made up of a different string of 15 "letters" of genetic code. This second sequence was coupled with a gold nanoparticle, which acted as a chemical delivery system, bringing the complementary sequences of DNA together. To make the wires, the team put the arrays in a furnace set at 600°C and added zinc arsenide. What grew: zinc oxide wires measuring about 100-200 nanometers in length.

The team conducted control experiments -- introducing gold nanoparticles into the array with no DNA attached, or using nanotubes with no DNA at the tips -- and found that very few DNA sequences stuck and no wires could be made. According to Lazareck, the key is DNA hybridization: the process of bringing single, complementary strands of DNA together to re-form the double helixes for which DNA is famous. "DNA provides an unparalleled instruction manual because it is so specific. Strands of DNA only join together with their complements. So with this biological specificity, you get manufacturing precision. The functional materials that result have attractive properties that can be applied in many ways," said Lazareck.

"We're seeing the beginning of the next generation of nanomaterials," said Xu, senior author of the article. "Many labs are experimenting with self-assembly. And they are making beautiful, but simple, structures. What's been missing is a way to convey information -- the instruction code -- to make complex materials."

Source and more information: Brown University News Bureau. This story was first posted on 13th July 2006.
The 10 frames, taken 32 seconds apart, show the formation and evolution of what are likely mid-level, convective water clouds. Such clouds are common near Mars' equator at this time of the Martian year. They have been observed by both of NASA's Mars Exploration Rovers, by satellites orbiting Mars, and by the Hubble Space Telescope. In this case, the clouds appear to develop at a fixed location, in the center of the frame about 25 degrees above the horizon. This style of origin suggests that a thermal plume is rising over a surface feature. In spite of apparent winds aloft, the thermal plume appears to remain stationary for the 5-minute duration of the movie. Though scientists have determined from the images that the wind bearing is east-northeast, approximately 80 degrees, it is not possible on the basis of the movie to unambiguously determine the height and speed of the clouds. Scientists estimate, based on models of atmospheric wind profiles and the apparent displacement of the clouds, that all of the clouds in the movie are at about the same height somewhere between 5 kilometers and 25 kilometers (3 to 20 miles) above the surface. The clouds are estimated to be moving at 2.5 meters per second, if they are low, to 12.5 meters per second, if they are high (8 feet per second to 41 feet per second). Like clouds on Earth, these Martian clouds are probably composed of ice crystals and possibly supercooled water droplets. They are similar in appearance to terrestrial cirrocumulus or high altocumulus clouds. On Earth, such clouds are relatively transient and consist of small, individual cloudlets arranged in rippled patterns. They usually form 6 kilometers to 12 kilometers (4 to 7 miles) above Earth's surface by a process known as convection, during which warm air rises and cools, with clouds condensing from the moist air once it has cooled sufficiently. 
These Martian clouds appear to be associated with a broader layer of ice-crystal clouds fanning out toward the upper right of the frames at the end of the movie. This is similar to the occurrence of terrestrial cirrocumulus and altocumulus clouds within layers of cirrus or cirrostratus clouds on Earth. Also apparent in this movie are prominent waves in the clouds, a result of the effect of gravity waves on cloud thickness, as on Earth. Though both rovers now have the ability to autonomously detect clouds, these images were taken prior to the first use of the new abilities. The images shown here were stored on Opportunity and were transmitted to Earth on sol 1056 (Jan. 12, 2007) during a routine communications pass.
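The paired height and speed estimates above are not independent: the images measure only the clouds' angular drift across the frame, so the inferred ground speed scales linearly with the assumed height. A quick consistency check on the numbers quoted in the text (the angular-rate interpretation is our own framing):

```python
# (assumed cloud height in m, inferred wind speed in m/s) pairs from the text
estimates = [(5_000, 2.5), (25_000, 12.5)]

# speed divided by height gives the apparent angular drift rate (rad/s)
rates = [v / h for h, v in estimates]
print(rates)  # both pairs imply the same apparent angular rate
```

Both pairs reduce to the same 5 x 10^-4 rad/s apparent rate, which is why the movie alone cannot separate height from speed.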
Scientists Collect Information on Plant and Animal Species

Hundreds of species of rare and at-risk animals and plants — from wild hyacinth to peregrine falcons — inhabit Pennsylvania. The Western Pennsylvania Conservancy’s scientists, through the Pennsylvania Natural Heritage Program (PNHP), have documented what and where these species are for the last three decades. This information is stored in the Pennsylvania Natural Diversity Inventory (PNDI), a database of the state’s rich ecology, with a particular emphasis on vulnerable species. It is a project of PNHP, managed by the Conservancy in partnership with state agencies that manage natural resources. In this endeavor, PNHP has joined a network coordinated by the conservation group NatureServe, in which all 50 states, as well as Canada and Latin America, use common methodologies to collect, analyze and post data. Some of the rare animal and plant information is available online at the PNHP website. “You can look up any species of animal or plant and see its geographic range and status anywhere in the Western Hemisphere,” said Jeffrey Wagner, WPC’s PNHP director. “The data provides both a state and a global perspective on elements of biodiversity, both specifically and in their array.” In addition to helping state agencies manage wildlife and make decisions about land use and development, the data ensures that groups such as WPC invest their conservation dollars in places with the highest natural resource value, according to Charles Bier, WPC’s senior conservation scientist. “With limited funding and so many needs, it’s important we go where we can do the most good,” Bier said. The information on PNHP’s website is also an effective tool for helping the public better appreciate where they live and play. “Is it important to know that mussels are in the Clarion River?” asked Wagner.
“If you’re planning to fish or swim there, knowing that there are freshwater mussels in the stream tells you something positive about water quality in the place where you’ll be recreating.”

[Photo: PNHP assisted PGC staff with peregrine falcon banding at a nest site. Here, WPC herpetologist Charlie Eichelberger climbs a rock face to investigate one of the state’s only cliff nests. Photo by: Cal Butchkoski, PA Game Commission]

Currently, there are over 20,000 records in the PNHP database, reflecting 30 years of research and field work. Partners such as the Pennsylvania Department of Conservation and Natural Resources, Pennsylvania Fish and Boat Commission (PFBC), and Pennsylvania Game Commission (PGC) contribute data from their field work. PGC, for instance, directed the recently completed second Pennsylvania Breeding Bird Atlas, for which experts roamed the state documenting avian species, while PFBC is wrapping up surveys of timber rattlesnakes that include tracking and population density studies. Because the natural world is constantly in flux, the PNDI is evolving, too. “Some populations grow; others fail,” said Wagner. “The loss of bats to white nose syndrome is an example of how rapidly things can change. When people ask, ‘When will you be done with your work?’ the answer, of course, is ‘never.’” In fact, efforts have become more robust. Just over the last two years, 40 permanent monitoring sites have been established around the state, ranging from limestone habitats to high-elevation wetlands, which enable scientists to assess changes over time. “The County Natural Heritage Inventory Program has been collecting information on the biology of the state, county by county, since 1989,” said Rocky Gleason, WPC’s county inventory manager, located in the Middletown, Pa. office. “The primary focus of these surveys has been on the location and habitat status of plants, animals and natural communities considered rare, threatened or endangered at the global or state level.
This information has been provided to county, municipal and regional planning entities and conservation organizations as a means of proactive conservation planning.” Conversely, an abundance of mussels once documented in the Allegheny and Ohio rivers near Pittsburgh is missing today as a consequence of industrial pollution, dredging and other impacts. Surveying historical sites has spurred recent efforts to encourage the mussels’ return, Bier said. “We’ve gone from 50 species to zero to a dozen in the Ohio River, an indication their habitat is improving,” Bier concluded. Scientists use such field data to make recommendations about species’ conservation status. Rankings can change as new data is discovered. “The northeastern bulrush, for instance, is listed as both federally and state-endangered, but over the last 10 or 15 years, we’ve found this vernal pool species in many more locations, so it will probably be reevaluated for listing,” Bier said. Although the PNHP focuses on rare species and natural communities, there are plans to add exotic and invasive species to the database, because of their potential to compete with natives for habitat and food. Visit the Pennsylvania Natural Heritage Program’s website at naturalheritage.state.pa.us
Virtual Field Study Graphing Data / Identifying Trends When the data for internal volume and shell thickness versus age were plotted, a trend in both features was observed over time. A linear trend is defined as a tendency for a pattern to change steadily in one direction, and its validity is tested by fitting a line to the data. A linear regression produces a "best fit line" for the data values, and the quality of the fit is reported as a value between 0.0 and 1.0, denoted R². An R² value of 0.0 indicates no relationship between the variables, while an R² value of 1.0 indicates a one-to-one relationship. The threshold varies with sample size, but R² values above 0.7 are generally considered statistically robust. Data collected from Astarte and Anadara shells had R² values for both shell thickness and internal volume ranging between 0.85 and 0.95. This indicates that a strong positive relationship exists between these variables and age (as measured by stratigraphic position). Last modified October 06 2008 03:17 PM
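The fit and its R² can be computed in a few lines. This is a sketch with invented thickness-versus-age numbers standing in for the actual field data; the least-squares fit comes from NumPy's `polyfit`:

```python
import numpy as np

# Hypothetical shell-thickness measurements (mm) against stratigraphic age (kyr);
# these numbers are made up for illustration, not taken from the field study.
age = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
thickness = np.array([1.1, 1.4, 1.9, 2.2, 2.8, 3.1, 3.5])

# Least-squares best-fit line: thickness ~ slope * age + intercept
slope, intercept = np.polyfit(age, thickness, 1)
predicted = slope * age + intercept

# R^2 = 1 - SS_residual / SS_total; 1.0 is a perfect fit, 0.0 no relationship
ss_res = np.sum((thickness - predicted) ** 2)
ss_tot = np.sum((thickness - thickness.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"slope={slope:.3f}, R^2={r_squared:.3f}")
```

A positive slope with R² near 1.0, as here, is what the Astarte and Anadara data showed at the 0.85–0.95 level.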
For the past week I've been researching geoexchange, the process by which geothermal heat pumps work. A geothermal heat pump uses the earth as either a heat source or a heat sink, depending on the time of year. In the winter, the ground below the freeze line is warm compared with the ambient air; in the summer, it is cooler. A working fluid, such as a refrigerant or a mixture of water and anti-freeze, makes thermal contact with the ground and is either heated or cooled there.
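The seasonal lag this relies on can be illustrated with the standard one-dimensional conduction model for soil temperature, in which the annual surface temperature wave decays exponentially with depth and picks up a phase lag. Every numerical value here (mean temperature, amplitude, diffusivity, timing of the summer peak) is an illustrative assumption:

```python
import math

def soil_temp(z_m, day, t_mean=10.0, amp=12.0, alpha=0.043, period=365.0, day_peak=200):
    """Annual soil temperature from 1-D heat conduction:

    T(z, t) = Tm + A * exp(-z/d) * cos(omega*(t - t_peak) - z/d),
    where d = sqrt(2*alpha/omega) is the damping depth.
    alpha is soil thermal diffusivity in m^2/day; all values are illustrative.
    """
    omega = 2 * math.pi / period
    d = math.sqrt(2 * alpha / omega)   # damping depth (~2.2 m for these values)
    return t_mean + amp * math.exp(-z_m / d) * math.cos(omega * (day - day_peak) - z_m / d)

midsummer = 200
midwinter = 200 + 365 // 2
# In winter, ground 3 m down stays far warmer than the surface...
print(soil_temp(3.0, midwinter), soil_temp(0.0, midwinter))
# ...and in summer it stays cooler, which is the contrast the heat pump exploits.
print(soil_temp(3.0, midsummer), soil_temp(0.0, midsummer))
```

With these numbers the damping depth comes out near 2.2 m, so at 3 m the annual swing is down to roughly a quarter of its surface amplitude: the ground there sits near the annual mean all year, warmer than winter air and cooler than summer air.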
Editor's note: The Science Seat is a feature in which CNN Light Years sits down with movers and shakers from different areas of scientific exploration. This is the fifth installment. Sarah Dodson-Robinson is an assistant professor in the astronomy department at the University of Texas at Austin. She is a member of the American Astronomical Society and recently won the organization's Annie Jump Cannon Award for her work exploring how planets form. Dodson-Robinson says she enjoys discovering new things and coming up with new pieces of knowledge, no matter how small. She describes it as a "wonderful feeling." CNN Light Years recently chatted with Dodson-Robinson about her research. Here is an edited transcript: CNN: What is the main goal of your research program? Sarah Dodson-Robinson: My main goal is to answer the question, "How do planets form?" I'm interested in almost any type of planet, but my work so far is best known for its focus on supergiant planets, which are much more massive than Jupiter. I also study stars that host planets in order to figure out what chemical elements are most important for forming planets. Lately I have been studying small stars, which may form in a similar way to large planets, in order to get insight into supergiant planets. CNN: What is the importance of your research? Dodson-Robinson: By studying planet formation, I help answer some of the most fundamental questions that confront humanity. Humans wouldn't be here without planet Earth, but there was a time when Earth didn't exist. It formed relatively recently in the history of the universe, only about 4.5 billion years ago. Earth isn't alone, either -- astronomers have discovered over 2,000 planets outside the solar system. Planets are everywhere in the galaxy, so it's important that we understand how they grow. CNN: Tell us about some of your results. 
Dodson-Robinson: My best-known result is that supergiant planets on wide orbits must form by gravitational instability, which is the sudden collapse of a gas cloud in a disk orbiting a young star. That's a very different process from how giant planets usually form, which is by sticking asteroid-like pieces of rock together till a massive solid core forms, then gravitationally attracting an atmosphere. I also made a discovery that silicon is a particularly important element for forming planets, which we know because stars that have planets are rich in silicon. My hypothesis for why silicon is important, which my student, Erik Brugamyer, is testing, is that silicon is a limiting reagent in the formation of the tiny dust grains that are the first planetary building blocks. More silicon means more grains and more raw material for planet building. CNN: How do you study planet and star formation? Dodson-Robinson: I have several ways of doing it. For stars, my student and I are working on discovering new stars that formed by top-down collapse. We are looking for small dwarf stars orbiting large, giant stars. It's a challenging observation to do since the giant star is so much brighter than the dwarf star. It's like looking for a firefly in the glare of a big stadium lamp. My students and I also make computer models of the disks of gas and dust where planets form. One of our new models shows a piece of a disk near the star going unstable, possibly forming a planet by top-down collapse. Another of my students runs computer models of the chemistry of planet-forming material. CNN: What tools do you use to conduct your study? Dodson-Robinson: My tools are supercomputers, grids of connected desktop computers and telescopes. In my group, we do lots of computer programming, even the students who use telescopes. The general process is that we acquire data -- either from a computer simulation or a telescope -- then we sit down and write computer programs to analyze it. 
CNN: So how does a planet form? Dodson-Robinson: There are two ways of forming planets. In the first way, bottom-up growth, tiny grains of dust stick together to form pebbles, then the pebbles collapse into rubble piles under the force of gravity. The rubble piles are called planetesimals, and today's asteroids and comets are leftovers from these early planetary building blocks. The planetesimals then begin to collide with each other and stick due to gravity, eventually becoming an Earth-size planet or even larger. If the forming planet reaches 10 Earth masses or more, it can attract an atmosphere of hydrogen and helium and become a giant planet. Jupiter and Saturn started in the same way as Earth, from sticking together tiny pieces of dust. There's a second way of forming planets, top-down collapse, where a cloud of gas orbiting the star spontaneously collapses. This type of planet formation is not very common, but it happens sometimes. CNN: What interesting facts can you share with us about planet formation? Dodson-Robinson: The first step in planet growth is forming tiny dust grains of about one millionth of a meter in size. Those tiny dust grains stick together electrostatically, which is a fancy word meaning they form dust bunnies. The same physical process that makes annoying dust bunnies under your bed is actually responsible for the growth of worlds. Tiny dust grains don't just float around quietly in the gas of protostellar disks. These little grains actually control how heat flows through the disk. They're very good at absorbing light that's trying to leave the disk. The grains might actually stall planet growth by top-down collapse, preventing supergiant planets from forming. I wanted to test (this hypothesis) because we don't know how common these planetary behemoths are. It seems like they might be rare, and if they are, we want to know what's inhibiting them from growing. CNN: How did you come up with that hypothesis and why? 
Dodson-Robinson: I was just thinking about how planets grow, and I wanted an explanation for why supergiant planets on wide orbits are so rare. The dust grains were a natural suspect since they stop heat from leaving the protostellar disk. In order for top-down collapse to occur, the disk has to get very cold.
New research provides a detailed explanation for a baffling effect in which much larger-than-expected amounts of light passed through a silver-coated quartz barrier with tiny openings: namely, a periodic array of 150-nm holes up to 10 times smaller than the wavelength of the light sent through. This unexpected experimental effect bodes well for scaling down optical devices to nanometer dimensions. Light can pass through such tiny holes due to the actions of surface plasmons (SPs), collective oscillations of electrons at the boundary between conductors and insulators. According to one of the research collaborations investigating this effect, the light gets through the holes in the form of an SP "molecule," consisting of two polaritons, one on each side of the metal film, that interact with one another via exponentially decaying electromagnetic fields, forming "molecular" levels in very much the same way that atomic electron wavefunctions interact to form molecular levels in a diatomic molecule. The plot illustrates the effect of the SP molecules. The x axis depicts the propagation of light. The y-axis runs along a cut of the periodic array of holes (the cut considered is represented schematically in the upper left panel). To show more clearly the formation of the SP molecule, this plot neglects the effects of light absorption by the metal. The upper right panel shows the wavelengths (780 and 788 nm) at which light is transmitted through the metal in this case. As the researchers discovered, the mathematics of the SP molecule's electromagnetic field is essentially the same as that describing the formation of molecular electronic levels from the levels of (otherwise) isolated atoms. Suppose there are two atoms that, when very far apart, have their own sets of energy levels. When the atoms come closer, these separate sets of energy levels combine to form a set of molecular levels. 
In the plasmon molecule something analogous occurs: if the two metal surfaces were very far apart, there would be two isolated surface plasmons. If the metal is not too thick, these two surface plasmons "talk" to each other and form a set of combined "molecular" levels. Two separate cases are shown in (a) and (b), corresponding to the two different plasmon molecule levels: the symmetric (b) and antisymmetric (a) linear combination of surface plasmons at both interfaces. Note that, as expected, in the antisymmetric case the electric field intensity at the middle of the hole is much smaller than in the symmetric one. It is also worth noticing the huge enhancement of the fields at the surfaces, by a factor of order 500 in intensity, due to the plasmons. On this scale the field of the incoming and outgoing wave cannot be resolved, so large are the fields close to the metal! (Thanks to Luis Martin Moreno and Francisco Jose Garcia Vidal for providing the figure and the explanation.) L. Martín-Moreno, F. J. García-Vidal, H. J. Lezec, K. M. Pellerin, T. Thio, J. B. Pendry, and T. W. Ebbesen, Physical Review Letters 86, 1114 (5 February 2001).
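The diatomic-molecule analogy can be made concrete with a toy coupled-mode calculation: two degenerate plasmon modes, one per interface, coupled through a field that decays exponentially across the film, split into symmetric and antisymmetric combinations when the 2x2 coupling matrix is diagonalized. All numerical values below are illustrative assumptions, not parameters from the experiment:

```python
import numpy as np

# Toy coupled-mode model: two identical surface-plasmon modes, one on each
# metal interface, coupled via a field that decays exponentially in the film.
E0 = 1.59          # isolated-plasmon energy (eV), assumed
thickness = 0.15   # film thickness (arbitrary units), assumed
decay = 0.10       # field decay length inside the metal (same units), assumed
g = 0.05 * np.exp(-thickness / decay)   # coupling strength

H = np.array([[E0, g],
              [g, E0]])
energies, modes = np.linalg.eigh(H)   # eigenvalues in ascending order

# The coupling splits the degenerate pair into two "molecular" levels at
# E0 - g and E0 + g, mirroring bonding/antibonding levels in a diatomic molecule.
print(energies)
```

With this sign convention the eigenvector (1, 1)/sqrt(2) sits at E0 + g and (1, -1)/sqrt(2) at E0 - g; in the sketch a thinner film gives a larger g and hence a wider splitting of the two levels.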
1. Glass Identification: From USA Forensic Science Service; 6 types of glass; defined in terms of their oxide content (i.e., Na, Fe, K, etc.)
2. Ionosphere: Classification of radar returns from the ionosphere
3. Wine: Using chemical analysis to determine the origin of wines
4. Robot Execution Failures: This dataset contains force and torque measurements on a robot after failure detection. Each failure is characterized by 15 force/torque samples collected at regular time intervals
5. Connectionist Bench (Sonar, Mines vs. Rocks): The task is to train a network to discriminate between sonar signals bounced off a metal cylinder and those bounced off a roughly cylindrical rock.
First rainforests arose when plants solved plumbing problem A team of scientists, including several from the Smithsonian Institution, discovered that leaves of flowering plants in the world's first rainforests had more veins per unit area than leaves ever had before. They suggest that this increased the amount of water available to the leaves, making it possible for plants to capture more carbon and grow larger. A better plumbing system may also have radically altered water and carbon movement through forests, driving environmental change. "It's fascinating that a simple leaf feature such as vein density allows one to study plant performance in the past," said Klaus Winter, staff scientist at the Smithsonian Tropical Research Institute in Panama, who was not an author. "Of course, you can't directly measure water flow through fossil leaves. When plants fix carbon, they lose water to the atmosphere. So to become highly productive, as many modern flowering plants are, requires that plants have a highly elaborate plumbing system." A walk through a tropical forest more than 100 million years ago would have been different from a walk through a modern rainforest. Dinosaurs were shaded by flowerless plants like cycads and ferns. Fast-forward 40 million years. The dinosaurs have disappeared and the first modern rainforests have appeared: a realm of giant trees—with flowers. By examining images of more than 300 kinds of fossil leaves, the team, led by Taylor Feild from the University of Tennessee, Knoxville, counted how many veins there were in a given area of leaf. Flowerless plants then and now have relatively few veins. But their work shows that even after flowering plants evolved, it took some time before they developed the efficient plumbing systems that would allow them to develop into giant life-forms like tropical trees. 
The density of veins in the leaves of flowering plants increased at least two different times as the transition from ancient to modern rainforests took place, according to this research reported in the journal Proceedings of the National Academy of Sciences. The first jump—when the vein density in fossil leaves of flowering plants first exceeded vein density in the leaves of flowerless plants—took place approximately one hundred million years ago. The second and more significant increase in vein density took place 35 million years later. Petrified tree trunks more than a meter in diameter were first found from this period, indicating another landmark—the evolution of flowering trees. Soon the leaves of flowering plants had twice as many veins per unit leaf area as the non-flowering plants. By the end of the Cretaceous period about 65 million years ago, the number of leaf veins per unit area was very similar to that of modern rainforest leaves. As often happens, this study has seeded new questions. Did improvements in the plumbing system of flowering plants make giant rainforest trees possible? How important was this change in the plants to climate changes that were taking place at the time? Were these plants better able to take advantage of naturally occurring wet environments that were created by tectonic activity or changes in ocean circulation patterns, or did the trees themselves contribute to climate change by pumping more water into the atmosphere? 
Science Express has just published a new study of (a virtual endocast from) the skull of (specimen LB1 of) Homo floresiensis. The abstract: The brain of Homo floresiensis is assessed by comparing a virtual endocast from the type specimen (LB1) with endocasts from great apes, Homo erectus, Homo sapiens, a human pygmy, a human microcephalic, Sts 5 (Australopithecus africanus) and WT 17000 (Paranthropus aethiopicus). Morphometric, allometric and shape data indicate that LB1 is not a microcephalic or pygmy. LB1's brain size versus body size scales like an australopithecine, but its endocast shape resembles that of Homo erectus. LB1 has derived frontal and temporal lobes and a lunate sulcus in a derived position, which are consistent with capabilities for higher cognitive processing. However, an AP wire service story on the paper quotes "some other researchers" as "sticking to their opinion that the Hobbit probably suffered from a form of microcephaly, a condition in which the brain fails to grow at a normal rate, resulting in a small head with a large face, and even dwarfism." What it is not, says primatologist Robert Martin, provost of the Field Museum in Chicago, is a scaled-down version of a Homo erectus or a new transitional species that held on for millennia in tropical isolation. As I noted in an earlier post, "this is all a bit of a black eye for physical anthropology. If paleontologists (and Science magazine!) can't agree about such a basic question, something is wrong." In particular, the amount of quantitative data on statistical norms for various modern populations seems to be surprisingly small. At least, not much of it is used in this Science paper, whose plots and tables generally represent each species or group as a single point (e.g. in the plot of endocast Height/Breadth against Breadth/Length reproduced below, where there are individual points for "Homo sapiens", "Pan", "Pygmy" and "Microceph", as well as for various individual fossil skulls). 
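To see why single points per group are thin evidence, consider what a real comparison sample would buy you: a z-score locating LB1's shape ratio within the modern-human distribution, rather than a one-point-versus-one-point eyeball comparison. The numbers below are invented purely for illustration:

```python
import statistics

# Hypothetical endocast height/breadth ratios for a modern-human comparison
# sample -- made-up values standing in for the population norms whose
# absence from the paper is the complaint above.
human_sample = [0.72, 0.75, 0.78, 0.74, 0.76, 0.73, 0.77, 0.75, 0.74, 0.76]
lb1_ratio = 0.86   # also hypothetical

mean = statistics.mean(human_sample)
sd = statistics.stdev(human_sample)      # sample standard deviation
z = (lb1_ratio - mean) / sd              # SDs between LB1 and the sample mean
print(f"z = {z:.2f}")
```

A |z| well above 2 or 3 would argue that the specimen falls outside normal variation for the comparison group; a single group-mean point supports no such statement.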
[Update: There's a NYT article today (2/4/2005) by John Noble Wilford, with a somewhat clearer quote from Martin: Dr. Robert Martin, a primatologist at the Field Museum in Chicago, said that he and colleagues in Britain were preparing a paper contending that the examined braincase was too small to be explained easily by ordinary dwarfism or the tendency of isolated people on islands to become smaller over generations. In such cases, Dr. Martin said, brain size would usually not diminish by the same amount as body size. Dr. Martin said that he was not ready to rule out microcephaly on the basis of a test of a single microcephalic braincase. One other thing that bothers me a little about this discussion is that the original find was not actually a fossil -- in the sense of being mineralized -- but was preserved in some other way that left it with a consistency described as similar to "mashed potatoes" or "wet blotting paper" (see the quote from the original issue of Science in my post here). Given this, how secure can anyone's estimate of the skull's aspect ratio really be? ] Posted by Mark Liberman at March 3, 2005 05:01 PM
Can catastrophic events be tamed? Technology has done plenty to protect us, but much more remains to be done. by Mohamed Gad-el-Hak Predictions of the weather can warn us that we may need an umbrella, or that we should batten down and head for the cellar. A forecast can often help us stay drier and more comfortable. In extreme instances, it can save our lives. Once a storm system has formed, the course and intensity of a hurricane, for instance, which typically lasts a couple of weeks from inception to dissipation, can be predicted about a week in advance. The path of a tornado can be predicted only about 15 minutes in advance, although weather conditions favoring its formation can be predicted hours ahead. An earthquake of magnitude 7.6 hit Kashmir in 2005. Success in efforts to predict earthquakes has remained elusive. Hurricanes and tornadoes, even when they do not kill, leave a wake of damage and disruption. They qualify as disasters, and the sooner we are warned of their approach, the better preparation we can make to protect ourselves and our property. There are many other extreme events besides storms that cause widespread harm. Throughout human history, populations have been struck by disasters natural and otherwise—disease, earthquake, and fire; war, terrorism, and crime. The degree of success in predicting their occurrence and behavior varies more widely than it does even for the weather. MITIGATION AND CONTROL A scientist or an engineer can look at a disaster as a dynamical system that can in principle, though not always in practice, be modeled using the Newtonian framework of nature. If potential disasters can be predicted, their effects can be mitigated. Sometimes the events can even be controlled. This is essentially the subject of a book that I edited, Large-Scale Disasters: Prediction, Control and Mitigation, published earlier this year by Cambridge University Press. 
In it, dozens of contributors describe issues that science and engineering can address in the management and control of extreme events. Science and technology can help greatly in predicting the course of certain types of disasters. When, where, and how intense will a severe weather phenomenon strike? Are weather conditions favorable to extinguishing a particular wildfire? What is the probability of a particular volcano erupting? How much air and water pollution is going to be caused by the addition of a factory cluster to a community? How would a toxic chemical or biological substance disperse in the atmosphere or in a body of water? Below a specific concentration, certain dangerous substances are harmless, and a safe zone could be established based on the dispersion forecast. Earthquake prediction is far from satisfactory, but is seriously attempted nevertheless. The accuracy of predicting volcanic eruptions is somewhere in between those of earthquakes and severe weather. Scientists are able to monitor Italy’s Mount Etna, for example, and forecast its eruption using seismic tomography, a technique similar to that used in computed tomography scans in the medical field. The method yields time photographs of the three-dimensional movement of rocks to detect their internal changes. The success of the technique is in no small part due to the fact that Mount Etna, Europe’s biggest volcano, is equipped with a high-quality monitoring system and seismic network, tools not readily available for most other volcanoes. REALITY VS. SCIENCE FICTION Science and technology can also help to control the severity of a disaster, but here the achievements to date are much less spectacular than those in the prediction arena. Cloud seeding to avert drought is still far from being a practical tool. Slinging a nuclear device toward an asteroid or a meteor to avert its imminent collision with Earth remains solidly in the realm of science fiction. 
(In the 1998 film Armageddon, a Texas-size asteroid was courageously nuked from its interior.) On the other hand, employing scientific principles to combat a wildfire is doable. So is the development of scientifically based strategies to reduce air and water pollution, moderate urban sprawl, evacuate a large city, and minimize the probability of accident for air, land, and water vehicles. Satellite photographs show the same island before and after the 2004 Indian Ocean earthquake and its tsunami. Advance warning might have reduced the death toll. Structures are designed to withstand an earthquake of a given magnitude, wind of a given speed, and so on. Dams are constructed to moderate the flood/drought cycles of rivers, and levees and dikes are erected to protect lands below sea level from the vagaries of the weather. Storm drains, fire hydrants, fire-retardant materials, sprinkler systems, pollution control, simple hygiene, strict building codes, traffic rules, and regulations governing air, land, and sea travel are the types of measures a society takes to mitigate or even eliminate the adverse effects of certain natural and manmade disasters. Of course, there are limits to what a government can do. While much better fire safety will be achieved if a fire station is built on every city block, and fewer earthquake casualties will occur if every house is built to withstand the strongest possible tremor, clearly the cost of such efforts cannot be justified or even afforded by society. In contrast to natural disasters, manmade ones are generally somewhat easier to control, but more difficult to predict. The war on terrorism is a case in point. Who could predict the behavior of a crazed suicide bomber? A civilized society spends its valuable resources on intelligence gathering, internal security, border control, and screening to prevent such devious behavior, whose dynamics obviously cannot be distilled into a differential equation to be solved. 
A satellite image of Katrina, a category 5 hurricane, was taken over the Gulf of Mexico in August 2005. Many flood- and storm-control systems failed to protect populations living on the Gulf of Mexico. However, even in certain disastrous situations that depend on human behavior, predictions can sometimes be made. Crowd dynamics are a prime example. The behavior of a crowd in an emergency can to some degree be modeled and anticipated, so that adequate escape or evacuation routes can be properly designed. Panic situations and other crowd disasters have been modeled as nonlinear dynamical systems. For disasters that involve fluid transport phenomena, such as severe weather, fire, or release of a toxic substance, the governing equations can be formulated subject to some assumptions—the fewer, the better. Modeling is usually in the form of nonlinear partial differential equations with the appropriate number of initial and boundary conditions. But those field equations are typically impossible to solve analytically, particularly if the fluid flow is turbulent, which unfortunately is the norm for the high Reynolds number flows encountered in the atmosphere and oceans. MODELING TO THE RESCUE? Furthermore, initial and boundary conditions are required for both analytical and numerical solutions. Computers have their practical limits, so numerical integration of the instantaneous equations (direct numerical simulations) for high Reynolds number natural flows is prohibitively expensive, if not outright impossible, at least for now. Modeling comes to the rescue, but at a price. Large-eddy simulations, spectral methods, probability density function models, and the more classical Reynolds-stress models are examples of such closure schemes that are not as computationally intensive as direct numerical simulations, but are not as reliable either. This type of second-tier modeling is phenomenological in nature and does not stem from first principles. 
The more heuristic the modeling is, the less accurate the expected results. Together with massive ground, sea, and sky data to provide at least in part the initial and boundary conditions, the models are entered into supercomputers, which produce a forecast. It may be a prediction of a severe thunderstorm that is yet to form, the future path and strength of an existing hurricane, or the impending concentration of a toxic gas that was released in a faraway location some time in the past. For other types of disasters such as earthquakes, the precise laws are not even known, mostly because proper constitutive relations are lacking. Additionally, deep underground data are difficult to gather, to say the least. Predictions in those cases become more or less a black art. The important issue is to precisely state the assumptions needed to write the evolution equations, which are basically statements of the conservation of mass, momentum, and energy, in a certain form. The resulting equations and their eventual analytical or numerical solutions are valid only under those assumptions. This seemingly straightforward fact is often overlooked and wrong answers readily result when the situation we are trying to model is different from the one assumed. The prediction of weather-related disasters has had spectacular successes within the last few decades. The painstaking advances made in fluid mechanics in general and turbulence research in particular, together with the exponential growth of computer memory and speed, no doubt contributed immeasurably to those successes. A generation ago, the next day’s weather was hard to predict. Today, the 10-day forecast is available 24/7 on weather.com for almost any city in the world. Imagine what we might do for the world if we could engineer systems as accurate as that to predict earthquake, famine, or war. 
To read more: Extensive literature exists on the application of engineering and physical science to the prediction and control of disasters. Here are a few that relate to points made in this article. The seismic monitoring of Mount Etna is described in a paper, “Magma Ascent and the Pressurization of Mount Etna’s Volcanic System,” written by Domenico Patanè, Pasquale De Gori, Claudio Chiarabba, and Alessandro Bonaccorso, published in the journal Science in March 2003. Andrew Adamatzky discusses crowd dynamics in his 2005 book, Dynamics of Crowd-Minds: Patterns of Irrationality in Emotions, Beliefs and Actions, published by World Scientific in London. The Science of Disasters: Climate Disruptions, Heart Attacks, and Market Crashes, published by Springer of Berlin in 2002, contains a contribution by D. Helbing, I.J. Farkas, and T. Vicsek, “Crowd Disasters and Simulation of Panic Situations,” which also reports on the modeling of panic behavior in crowds. Mohamed Gad-el-Hak discusses more ideas from Large-Scale Disasters: Prediction, Control and Mitigation, in a companion article, “Large-Scale Disasters as Dynamical Systems,” which is published exclusively on Mechanical Engineering Online. Mohamed Gad-el-Hak, an ASME Fellow, is the Inez Caudill Eminent Professor of Biomedical Engineering and chair of mechanical engineering at Virginia Commonwealth University in Richmond.
1/3 of corals face extinction July 10, 2008 Assessing the conservation status of corals from around the world using IUCN Red List Criteria, an international team of researchers found that 32.8 percent of reef-building corals are in categories with elevated risk of extinction. At greatest risk are corals in the Caribbean and the Coral Triangle of the Western Pacific. "Our results emphasize the widespread plight of coral reefs and the urgent need to enact conservation measures," wrote the authors. Comparison of current Red List Categories for all reef-building coral species to hypothetical Red List Categories back-cast to pre-1998. (CR=Critically Endangered, EN=Endangered, VU=Vulnerable, NT=Near Threatened, LC=Least Concern, DD=Data Deficient). The authors say the results show that the extinction risk of corals has increased dramatically over the past decade, from 13 in threatened categories before 1998 to 231 today. The number of "near-threatened" species rose from 20 to 176 over the same period. The proportion of threatened coral species is second only to amphibians — which are also susceptible to climate change — among animal groups. The researchers warn that without immediate conservation efforts, a large proportion of coral species may go the way of the dinosaurs, resulting in significant biodiversity loss and economic impacts. "If corals cannot adapt, the cascading effects of the functional loss of reef ecosystems will threaten the geologic structure of reefs and their coastal protection function, and have huge economic effects on food security for hundreds of millions of people dependent on reef fish. Our consensus view is that the loss of reef ecosystems would lead to large-scale loss of global biodiversity," the authors conclude. K.E. Carpenter et al. (2008). One Third of Reef-Building Corals Face Elevated Extinction Risk From Climate Change and Local Impacts. Science 10 July 2008. U.S. coral reefs in trouble (7/7/2008) Nearly half of U.S. 
coral reefs are in "poor" or "fair" condition according to a new study by the National Oceanic and Atmospheric Administration (NOAA). Coral reefs declining faster than rainforests (8/8/2007) Coral reefs in the Pacific Ocean are dying faster than previously thought due to coastal development, climate change, and disease, reports a study published Wednesday in the online journal PLoS One. Nearly 600 square miles of reef have disappeared per year since the late 1960s, a rate twice that of tropical rainforest loss. (5/7/2007) A new study provides further evidence that climate change is adversely affecting coral reefs. While previous studies have linked higher ocean temperatures to coral bleaching events, the new research, published in PLoS Biology, found that climate change may be increasing the incidence of disease in Great Barrier Reef corals. Ominously, the research also shows that healthy reefs, with the highest density of corals, are hit the hardest by disease. (3/29/2007) Several studies have shown that increased atmospheric carbon dioxide levels are acidifying the world's oceans. This is significant for coral reefs because acidification strips carbonate ions from seawater, making it more difficult for corals to build the calcium carbonate skeletons that serve as their structural basis. Research has shown that many species of coral, as well as other marine microorganisms, fare quite poorly under the increasingly acidic conditions forecast by some models. However, the news may not be bad for all types of corals. A study published in the March 30 issue of the journal Science, suggests that some corals may weather acidification better than others. (3/8/2007) Rising levels of carbon dioxide will have wide-ranging impacts on the world's oceans regardless of climate change, reports a study published in the March 9, 2007, issue of the journal Geophysical Research Letters.
Coral reefs decimated by 2050, Great Barrier Reef's coral 95% dead (11/17/2005) Australia's Great Barrier Reef could lose 95 percent of its living coral by 2050 should ocean temperatures increase by the 1.5 degrees Celsius projected by climate scientists. The startling and controversial prediction, made last year in a report commissioned by the Worldwide Fund for Nature (WWF) and the Queensland government, is just one of the dire scenarios forecast for reefs in the near future. The degradation and possible disappearance of these ecosystems would have profound socioeconomic ramifications as well as ecological impacts, says Ove Hoegh-Guldberg, head of the University of Queensland's Centre for Marine Studies.
Gravity and Mountain Measurements I just read in "A Short History of Nearly Everything", by Bill Bryson, about the historic measurement, in eighteenth-century Scotland, of the gravitational attraction of a mountain. Suppose there is a large mountain next to a broad plain. I wonder how difficult it would be to measure the deflection of a plumb bob by the mountain. I am unable to find any information about how to perform such an experiment, or whether it is beyond the abilities of a high school student who wished to do it as a science project. Sensitive instruments exist for measuring the acceleration of gravity at a given location. However, these instruments would be too costly and the analysis too complex for a high school science project. It is possible to make "gravity maps" of the earth -- that is, acceleration of gravity vs. longitude/latitude/altitude. These measurements are sensitive to thousandths of the nominal acceleration of gravity. The measurements are called "gravity anomalies". The most sensitive of these devices are found in satellites in a NOAA/NASA project called GRACE (Gravity Recovery And Climate Experiment). This is a powerful experimental setup because it not only provides data as a function of position but is able to survey the entire Earth on a relatively short time scale -- something that has not been available previously. You can find gravity anomaly maps on the web site http://www.ngdc.noaa.gov/mgg/announcements/announce_predict.html It is easy to pick out "high gravity" and "low gravity" regions on the earth. The image can be enlarged with a 'click'. There are other links on that site. In addition, a "Google" search of "gravity maps" will bring up numerous web sites with maps and other information about GRACE etc. Perhaps an interesting project would be to use some more detailed data from GRACE (I am pretty sure you can find that) and research WHY a certain location has "abnormally" high or low gravity.
The gravity distribution on the site above is not at all "obvious" and a proposed explanation could be challenging.
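For a sense of scale, the plumb-bob deflection can be estimated with a back-of-the-envelope calculation: the bob settles where the mountain's sideways pull balances against gravity's vertical pull, so the angle is roughly θ ≈ (G·M/d²)/g. A short Python sketch with made-up (hypothetical) numbers for a Schiehallion-sized mountain:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
g = 9.81        # surface gravity, m/s^2

# Hypothetical mountain, roughly Schiehallion-sized (all values assumed):
rho = 2500.0    # rock density, kg/m^3
radius = 2000.0 # model the mountain as a cone of this base radius, m
height = 600.0  # cone height, m
M = rho * math.pi * radius**2 * height / 3.0  # cone volume * density, kg

d = 2000.0      # distance from the bob to the mountain's center of mass, m

a_sideways = G * M / d**2     # horizontal pull of the mountain, m/s^2
theta = a_sideways / g        # small-angle deflection, radians
arcsec = math.degrees(theta) * 3600

print(f"deflection = {arcsec:.1f} arc-seconds")
```

The answer comes out at a couple of arc-seconds, which matches the historical record: Maskelyne's 1774 Schiehallion measurement found a total deflection of about 11.6 arc-seconds, and the tiny size of the effect is exactly why off-the-shelf instruments struggle.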
Wind speed usually means the movement of air in an outdoor environment, but the speed of air movement indoors is also important in many cases, including weather forecasting, aircraft and maritime operations, and construction and civil engineering. High wind speeds can cause unpleasant side effects, and strong winds often have special names, including gales, hurricanes, and typhoons. The simplest method of measuring wind speed is to estimate it by matching the observed effects against the Beaufort Scale. This method is not especially accurate, so each value on the scale represents a range of speeds; force three, for example, covers wind speeds between seven and ten knots.
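Since each Beaufort force corresponds to a fixed band of speeds, the estimate works in reverse as a table lookup. A small Python sketch using the standard knot bands of the published scale (the numbers come from the standard scale, not from this article):

```python
# Lower bound, in knots, of Beaufort forces 1 through 12
# (force 0 is anything below 1 knot), from the standard scale.
LOWER_KNOTS = [1, 4, 7, 11, 17, 22, 28, 34, 41, 48, 56, 64]

def beaufort(knots):
    """Return the Beaufort force number for a wind speed in knots."""
    force = 0
    for lower in LOWER_KNOTS:
        if knots >= lower:
            force += 1
        else:
            break
    return force

print(beaufort(8))   # force 3: the 7-10 knot band mentioned above
```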
Bill Gates gave a TED Talk last week about how if he could have just one wish it would be to get to zero carbon. Even with all of his understanding, passion and work on the issues of health, poverty, sanitation, disease and so many of the world's urgent and interrelated problems - his wish, if he only had one, would be to get the big innovation breakthroughs that will make clean, carbon neutral energy affordable and safe. This is because climate disruption is the problem we face today that will make all of the other problems so much worse. It is incredibly exciting to see the country's colleges and universities leading the way on this push to zero. With 667 institutions committed to publicly reporting on their progress through the ACUPCC, and many more taking very similar steps, they are driving the innovation needed, educating the leaders who can make the breakthroughs, and serving as role-models to show that we can do this in ways that make good business sense. In the talk, Gates lays out a simple equation: CO2 = P * S * E * C, where P = population; S = services per person; E = energy per service; and C = carbon dioxide per energy. He points out P is going up to 9 billion, maybe a bit smaller with big efforts in vaccines, education and reproductive health. S is going up, and for most of the world's population, where meeting the basic need of subsistence is a challenge, that's a great thing (in the developed world we have plenty of opportunity to bring that down while improving quality of life). Energy per service is going down, another good thing (although I personally believe it can go down a lot further than he suggests, if we improve the design of just about everything we do). Regardless, the only one that can really go to zero is carbon per energy (of course the closer E gets to zero, the less zero-carbon energy we'll need). He also identifies four necessary steps in getting to zero:
- Basic Research Funding
- Market Incentives to Reduce CO2 (i.e. a price on carbon)
- Entrepreneurial Opportunity
- Rational Regulatory Framework
We all have a critical role to play in ensuring that all four of these steps are taken in a timely fashion.
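The four factors simply multiply out (the equation is the Kaya identity), which makes Gates's point easy to demonstrate: as long as C is nonzero, growth in P and S keeps pushing emissions up, but driving C to zero zeroes the product. A tiny Python sketch with illustrative, made-up numbers:

```python
def co2(P, S, E, C):
    """Kaya identity: emissions = population * services/person
    * energy/service * CO2/energy."""
    return P * S * E * C

# Illustrative, order-of-magnitude inputs (not real-world data):
P = 9e9     # people
S = 100.0   # services per person, arbitrary units
E = 0.5     # energy per service
C = 0.05    # CO2 per unit of energy

print(co2(P, S, E, C))    # nonzero emissions
print(co2(P, S, E, 0.0))  # carbon-free energy: emissions fall to zero
```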
From Cartan Matrix to Root System Yesterday, we showed that a Cartan matrix determines its root system up to isomorphism. That is, in principle if we have a collection of simple roots and the data about how each projects onto the others, that is enough to determine the root system itself. Today, we will show how to carry this procedure out. But first, we should point out what we don’t know: which Cartan matrices actually come from root systems! We know some information, though. First off, the diagonal entries must all be 2. Why? Well, it’s a simple calculation to see that ⟨α, α⟩ = 2 for any vector α. The off-diagonal entries, on the other hand, must all be negative or zero. Indeed, our simple roots must be part of a base Δ, and any two distinct vectors α, β ∈ Δ must satisfy ⟨α, β⟩ ≤ 0. Even better, we have a lot of information about pairs of roots. If one off-diagonal element is zero, so must the corresponding one on the other side of the diagonal be zero. And if they’re nonzero, we have a very tightly controlled number of options. One must be -1, and the other must be -1, -2, or -3. But beyond that, we don’t know which Cartan matrices actually arise, and that’s the whole point of our project. For now, though, we will assume that our matrix does in fact arise from a real root system, and see how to use it to construct a root system whose Cartan matrix is the given one. And our method will hinge on considering root strings. What we really need is to build up all the positive roots Φ⁺, and then the negative roots will just be a reflected copy -Φ⁺. We also know that since there are only finitely many roots, there can be only finitely many heights, and so there is some largest height. And we know that we can get to any positive root of any height by adding more and more simple roots. So we will proceed by building up all the roots of height 1, then height 2, and so on until we cannot find any higher roots, at which point we will be done. So let’s start with roots of height 1. These are exactly the simple roots, and we are just given those to begin with. We know all of them, and we know that there is nothing at all below them (among positive roots, at least). Next we come to the roots of height 2. Every one of these will be a root of height 1 plus another simple root. The problem is that we can’t add just any simple root to a root of height 1 to get another root of height 2. If we step in the wrong direction we’ll fall right off the root system! We need to know which directions are safe, and that’s where root strings come to the rescue. We start with a root β with ht(β) = 1, and a simple root α. We know that the α-string through β runs from β - rα to β + qα, with r - q = ⟨β, α⟩. But we also know that we can’t step backwards, because β - α would be (in this case) a linear combination of simple roots with both positive and negative coefficients! So r = 0 and q = -⟨β, α⟩. If ⟨β, α⟩ = 0 then we can’t step forwards either, because we’ve already got the whole root string. But if ⟨β, α⟩ < 0 then we have room to take a step in the direction of α from β, giving a root β + α with height 2. As we repeat this over all roots β of height 1 and all simple roots α, we must eventually cover all of the roots of height 2. Next are the roots of height 3. Every one of these will be a root of height 2 plus another simple root. The problem is that we can’t add just any simple root to a root of height 2 to get another root of height 3. If we step in the wrong direction we’ll fall right off the root system! We need to know which directions are safe, and that’s where root strings come to the rescue… again. We start with a root β with ht(β) = 2, and a simple root α. We know that the α-string through β must again satisfy r - q = ⟨β, α⟩. But now we may be able to take a step backwards! That is, it may turn out that β - α is a root, and that complicates matters. But this is okay, because if β - α is a root, then it must be of height 1, and we know that we already know all of these! So, look β - α up in our list of roots of height 1 and see if it shows up. If it doesn’t, then the string through β starts at β, just like before. If it does show up, then the root string must start at β - α. Indeed, if we took another step backwards, we’d have a root of height 0, which doesn’t exist. Thus we know where the root string starts. We can also tell how long it is, because we can calculate ⟨β, α⟩ by adding up the Cartan integers ⟨αᵢ, α⟩ for each of the simple roots αᵢ we’ve used to build β. And so we can tell whether or not it’s safe to take another step in the direction of α from β, and in this way we can build up each and every root of height 3. And so on: at each level we start with the roots of height n and look from each one in the direction of each simple root α. In each case, we can carefully step backwards to determine where the α-string through β begins, and we can calculate the length of the string, and so we can tell whether or not it’s safe to take another step in the direction of α from β, and we can build up each and every root of height n + 1. Of course, it may just happen (and eventually it must happen) that we find no roots of height n + 1. At this point, there can be no roots of any larger height either, and we’re done. We’ve built up all the positive roots, and the negative roots are just their negatives.
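The procedure just described fits in a few lines of code. Here is a minimal Python sketch of my own (not from the original post): each root is represented by its vector of coefficients over the simple roots, heights are processed in order, and the root-string rule decides when it is safe to step forward. The convention assumed is cartan[i][j] = ⟨αᵢ, αⱼ⟩.

```python
def positive_roots(cartan):
    """Build the positive roots of a root system from its Cartan matrix.

    Roots are coefficient tuples over the simple roots, so the simple
    roots themselves are the standard basis vectors (height 1).
    """
    n = len(cartan)
    simple = [tuple(int(i == j) for j in range(n)) for i in range(n)]
    roots = set(simple)
    level = list(simple)          # all roots of the current height
    while level:
        nxt = []
        for beta in level:
            for j in range(n):
                # Cartan pairing <beta, alpha_j>, additive in beta:
                pairing = sum(c * cartan[i][j] for i, c in enumerate(beta))
                # Step backwards to see where the alpha_j-string starts;
                # every lower root is already in `roots`, so this is safe.
                r, back = 0, list(beta)
                while True:
                    back[j] -= 1
                    if tuple(back) not in roots:
                        break
                    r += 1
                # The string runs r steps back and q steps forward:
                q = r - pairing
                if q > 0:         # safe to step forward to beta + alpha_j
                    forward = list(beta)
                    forward[j] += 1
                    new = tuple(forward)
                    if new not in roots:
                        roots.add(new)
                        nxt.append(new)
        level = nxt               # move up one height
    return roots

print(len(positive_roots([[2, -1], [-1, 2]])))  # A2: 3 positive roots
print(len(positive_roots([[2, -1], [-3, 2]])))  # G2: 6 positive roots
```

Running it on the rank-2 Cartan matrices reproduces the familiar counts: 3 positive roots for A2, 4 for B2, and 6 for G2.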
The key idea expressed is that if a property p is provable for every object x of type T and S is a subtype of T, then p should be provable for every object y of type S as well. Since the principle must hold for every conceivable property, we can also say that “functions that use pointers or references to base classes must be able to use pointers or references to derived classes without knowing it” (this is Martin’s formulation of the principle relative to C++). In Java we would say that methods that use objects of base classes must be able to use objects of derived classes as well. The circle-ellipse problem is an archetypal violation of the principle. Without any sort of originality I shortly present here the problem, starting with the Rectangle class. I use Python, though I chose to use explicit Java-style setters and getters (which is bad practice in Python) and to violate PEP 8 to make it even more evident this is bad Python. The point of starting with Python is just that it is easier to use the REPL to try the different problems.

class Rectangle(object):
    def __init__(self, width, height):
        self._width = width
        self._height = height

    def setWidth(self, width):
        self._width = width

    def getWidth(self):
        return self._width

    def setHeight(self, height):
        self._height = height

    def getHeight(self):
        return self._height

    def getArea(self):
        return self.getHeight() * self.getWidth()

Essentially when we try to derive a Square from a Rectangle, we find out there is no way to do it without violating the principle of least surprise (and in turn the Liskov substitution principle). Martin very clearly explains the many ways we can do that and how inevitably we would screw something up. If we don’t override setHeight and setWidth, we have a Square that is not a Square anymore. If we do override them (and have them set both width and height), we find out that code like:

oldArea = rect.getArea()
rect.setWidth(rect.getWidth() / 2)
assert oldArea == rect.getArea() * 2

is doomed to blow up.
Thus the LSP is violated. Many conclusions have been drawn. Luckily enough the problem today is somewhat less serious, as long inheritance chains are disappearing from software projects and inheritance freaks are rotting in hell (or at least they should). In fact delegation is a much cheaper (from a conceptual point of view) alternative. Modern IDEs and nice languages (with meta-programming) make the tedious boilerplate code associated with delegation automatically generated (or better, unnecessary). It is also worth noting that the problem is there only with mutable objects. Immutable objects don’t have the problem. And I like immutable objects. Although I would hate to memorize both width and height for each square. I would also stress the fact that drawing programs (and other similar stuff) do not need the square abstraction. As people modify shapes, it is very easy that they want to stretch a rectangle into a square. This is an indication that squares are not the right abstraction for the task. The point now is that we need to use interfaces. After all it is quite reasonable to have generic code to print the area of every shape. Ah… generic, perhaps generics could help? Yes, C++ templates could make the problem somewhat less relevant. But in Java we still need interfaces. And while using interfaces is the right thing to do (every time that is possible), there is always the risk of interface clutter. Once again, it seems that dynamic typing solves the problem. If you got an area and that is something I can print, then I will print it. And what about

oldArea = rect.getArea()
rect.setWidth(rect.getWidth() / 2)
assert oldArea == rect.getArea() * 2

? Things seem to get nasty. After all a Square has an area. And perhaps has a getWidth method. This is reasonable… for example we could have any object know its “box”, that is to say the width and height of the smallest enclosing rectangle. Simply it would not have a setWidth method.
Because we do not have any reason to make a Square inherit from a Rectangle: code that should be able to work with both Rectangles and Squares will simply be able to do it (remember, dynamic typing). And if Square also has setWidth? Perhaps we want to change the “box” and have the contents resize. That is reasonable. Then you got me… my dynamic typing won’t save me, uh? Now the thing boils down to the assertion. The point is that “if it walks like a duck and it quacks like a duck, then treat it like a duck”. But you know, Squares do not behave like Rectangles. The point is that while we can imagine lots of pieces of code working with rectangles and squares, since a Square is not a fucking Rectangle it is simply unreasonable to expect that piece of code to work. That’s it. It’s calling that code with a Square that is wrong, not the fact that lots of other pieces of code could just use Squares and Rectangles (and Triangles and everything that makes sense to include in a box – consider the “Enclosable” interface if you just can’t get rid of Java). And come on, it’s 2010, everyone and his mother has unit tests.
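To make the failure mode concrete, here is one hedged sketch of my own (the Square implementation is my illustration of the override Martin describes, and the Rectangle class is repeated so the snippet runs standalone): a Square that overrides both setters to preserve its invariant breaks client code written against Rectangle's contract.

```python
class Rectangle(object):
    def __init__(self, width, height):
        self._width = width
        self._height = height

    def setWidth(self, width):
        self._width = width

    def getWidth(self):
        return self._width

    def setHeight(self, height):
        self._height = height

    def getHeight(self):
        return self._height

    def getArea(self):
        return self.getHeight() * self.getWidth()


class Square(Rectangle):
    def __init__(self, side):
        super(Square, self).__init__(side, side)

    # Keep the square invariant: both setters change both sides.
    def setWidth(self, width):
        self._width = self._height = width

    def setHeight(self, height):
        self._width = self._height = height


# Client code written against Rectangle's contract:
rect = Square(4)
oldArea = rect.getArea()              # 16
rect.setWidth(rect.getWidth() / 2)    # for a Rectangle, this halves the area
print(oldArea == rect.getArea() * 2)  # False: Square's area is now 4, not 8
```

Halving the width of a genuine Rectangle halves its area, so the client's assertion would hold; the Square silently shrank both sides, so it does not — exactly the LSP violation under discussion.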
WHAT DO GLUONS DO, AND WHAT ARE THEY MADE OF? Gluons transmit "color" force between quarks. To define these terms: protons, neutrons, and other particles typically appearing in an atomic nucleus are assumed to be made up of several particles called quarks. The proton and neutron, for example, are each made up of three quarks. Therefore the force between protons and neutrons, called the "strong nuclear force", which holds the nucleus together despite the fierce mutual repulsion of the positively charged protons, must be the result of some other, more complicated force between the quarks. This force is called the "color" force, although it has nothing to do with color as we know it. Now, forces are transmitted by fields (i.e. electric forces are transmitted by electric fields), and with quantum mechanics these fields can take on only certain strengths separated by tiny gaps (thus the strength of electric fields can be increased only in eensy-weensy discrete jumps). One discrete jump in field strength (at a given frequency) is described as the addition of one "particle" of the field, for a variety of reasons. This particle of the field is usually given its own name, although in the case of gravity it seems like it will be called just the "graviton". In the case of electric fields, it is called a "photon," while in the case of color fields, it is called a "gluon," because of course it helps the quarks stay glued together, ho ho ho. High-energy physics and cosmology are full of amusing names, MACHOs, WIMPs, charmed particles --- all of which mean something --- as well as droll expressions for real physical principles, such as "black holes have no hair."
The remarkable developments which have resulted in making photoelectric detection available for the whole spectrum extending from the extreme ultraviolet to the far infrared are discussed. In spite of these developments thermal detectors are still widely used and will continue to be used for some time. The special uses of various kinds of detectors are described and in particular the use of detectors to obtain very high speeds of response. A review of current practice is given. R. A. Smith, "Detectors for Ultraviolet, Visible, and Infrared Radiation," Appl. Opt. 4, 631-638 (1965)
This section shows you gradient paint on a rounded-corner rectangle. A gradient is like a colored strip. It is created by specifying one color at one point and another color at another point. The color then changes gradually from one to the other along a straight line between the two points. To use a gradient paint, you must have a shape object and two end points. The class RoundRectangle2D defines a rectangle with rounded corners defined by a location (x, y), a dimension (w x h), and the width and height of the arc with which to round the corners. The class GradientPaint allows you to set up a gradient-filled shape with a linear color gradient pattern. Here is the code of GradientPaintRectangle.java Output will be displayed as:
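Although the tutorial's Java listing is omitted here, the "colored strip" behaviour is just linear interpolation of the two endpoint colors along the line joining the two anchor points. A hedged Python sketch of that underlying math (my own illustration, not part of the tutorial; a linear GradientPaint effectively does this per pixel):

```python
def lerp_color(c1, c2, t):
    """Linearly interpolate between two RGB colors. t runs from 0.0
    at the first anchor point to 1.0 at the second, measured along
    the straight line joining the gradient's two end points."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))

red, blue = (255, 0, 0), (0, 0, 255)
print(lerp_color(red, blue, 0.0))  # (255, 0, 0): pure first color
print(lerp_color(red, blue, 1.0))  # (0, 0, 255): pure second color
print(lerp_color(red, blue, 0.5))  # halfway: an even purple mix
```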
Climate Numerology; January 2010; Scientific American Magazine; by David Biello; 2 Page(s) Last December world leaders met in Copenhagen to add more hot air to the climate debate. That is because although the impacts humanity would like to avoid—fire, flood and drought, for starters—are pretty clear, the right strategy to halt global warming is not. Despite decades of effort, scientists do not know what “number”—in terms of temperature or concentrations of greenhouse gases in the atmosphere—constitutes a danger. When it comes to defining the climate’s sensitivity to forcings such as rising atmospheric carbon dioxide levels, “we don’t know much more than we did in 1975,” says climatologist Stephen Schneider of Stanford University, who first defined the term “climate sensitivity” in the 1970s. “What we know is if you add watts per square meter to the system, it’s going to warm up.”
The BioSystematic Database of World Diptera (BDWD) is a source of names and information about those names and the taxa to which they apply. The BDWD is a set of tools to aid users in finding information about flies. The two main components of the BDWD are the Nomenclator and the Species database. The Nomenclator allows users to check names, find the status (valid or invalid) and the correct (valid) name for obsolete ones, as well as basic information such as type, family classification and source for all names. The Species database is being designed to answer queries about the attributes of species, such as distribution, biological associates and economic importance. This database will also serve as a portal by providing links to other World-Wide-Web resources, such as species pages where further information may be found. A reference database is provided to allow users to find printed works about flies. And lastly a set of tools will be provided for taxonomists working on flies. These include or will include a database on collections, databases with historical information on authors, serials where papers on flies have been published, et cetera. Information about quality of the data, the current status of the project, future work plans, the team, as well as details about the format, abbreviations, et cetera, can be found by following the project links in the frame. The Standards, Description and Data Sources are only available in PDF format. You may also find information about North American fly names at the Integrated Taxonomic Information System (ITIS) World Wide Web site. Some 25,000 names for North American flies from our BioSystematic Database of World Diptera were incorporated into the National Oceanographic Data Center (NODC) Code back in 1989. So, you can retrieve information about the status of these names by querying the ITIS or Species2000 databases.
The BDWD works with ITIS and was an initial member of the Species2000 program. The BDWD is endorsed by the Council for the International Congresses of Dipterology, a scientific member of the International Union of Biological Sciences.
How do we model global plant physiology? A case study of leaf phenology If you have a question about this talk, please contact Microsoft Research Cambridge Talks Admins. Global climate change has raised new questions about the global carbon cycle and, specifically, about the role terrestrial vegetation plays in this cycle. Over the last decades we have acquired a large amount of data on plant traits and processes from a variety of sources that range from field and laboratory studies to space-based global measurements. We have also deepened our understanding of plant physiology at the leaf and plant level. What we are faced with now is combining this understanding with the existing data to develop global models that have the capacity to answer questions about the future of the terrestrial carbon cycle. I will describe how such a model can be built and the challenges we face, focusing on the problem of leaf seasonal cycles, known as leaf phenology. This talk is part of the Microsoft Research Cambridge public talks series.
1. Cosmic Evolution This animation illustrates a billion or more years of cosmic evolution, from the hot Big Bang to the formation of galaxies and clusters of galaxies. The time sequence is not to scale. The beginning of the animation covers a few minutes of cosmic evolution, whereas the later portions cover billions of years.
In nanotechnology, a particle is defined as a small object that behaves as a whole unit in terms of its transport and properties. It is further classified according to size: in terms of diameter, fine particles cover a range between 100 and 2500 nanometers, while ultrafine particles, on the other hand, are sized between 1 and 100 nanometers. Similar to ultrafine particles, nanoparticles are sized between 1 and 100 nanometers. Learn more about quantum dots from the many resources on this site, listed below. More information on Nanoparticles can be found here. nanoHUB.org, a resource for nanoscience and nanotechnology, is supported by the National Science Foundation and other funding agencies.
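The size classes quoted above are simple to encode. A minimal Python sketch using exactly the diameter cutoffs given in this passage (the function name is my own):

```python
def classify_particle(diameter_nm):
    """Classify a particle by diameter (in nanometers) using the
    fine / ultrafine ranges quoted above."""
    if 1 <= diameter_nm <= 100:
        return "ultrafine (nanoparticle)"
    if 100 < diameter_nm <= 2500:
        return "fine"
    return "outside the fine/ultrafine ranges"

print(classify_particle(50))   # ultrafine (nanoparticle)
print(classify_particle(500))  # fine
```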
Dec. 9, 2008 The world's most precise clock - on which all time-keeping and navigation systems are based - might be made as small as a wristwatch with a new design proposed by an international team of physicists. Cesium atomic clocks are presently used to define the basic unit of time - the second - to co-ordinate and synchronise global timekeeping, GPS navigation systems, computers on the Internet and scientific equipment. But these devices - known as fountain clocks - are very large and technically very complex. They employ magnets and lasers to hold in place a beam of cesium atoms passing through an intense field of microwave energy. A new class of atomic clocks of at least equivalent accuracy could be made much smaller and simpler by trapping aluminium, gallium, cesium or rubidium atoms in a lattice of laser light operated at a specific "magic" wavelength, according to a new theory put forward by physicists at the University of Nevada, in the US, and the University of New South Wales. "We have determined these magic wavelengths and theoretically the accuracy is at least competitive to that of the most precise clocks existing today," says theoretical physicist Scientia Professor Victor Flambaum who, along with colleague Dr Vladimir Dzuba, belongs to UNSW's School of Physics.
<urn:uuid:41841e23-3a60-4b8a-8e11-90516c4dc1db>
3.671875
310
Truncated
Science & Tech.
31.060047
XRD - X-Ray Diffraction
Powder x-ray diffraction (XRD) uses x-rays to investigate and quantify the crystalline nature of materials by measuring the diffraction of x-rays from the planes of atoms within the material. It is sensitive both to the type and relative position of atoms in the material and to the length scale over which the crystalline order persists. It can, therefore, be used to measure the crystalline content of materials; identify the crystalline phases present (including the quantification of mixtures in favourable cases); determine the spacing between lattice planes and the length scales over which they persist; and study preferential ordering and epitaxial growth of crystallites. In essence it probes length scales from the sub-angstrom range to a few nanometres and is sensitive to ordering over tens of nanometres. The samples for analysis are typically in the form of finely divided powders, but diffraction can also be obtained from surfaces, provided they are relatively flat and not too rough. Moreover, the materials can be of a vast array of types, including inorganic, organic, polymers, metals or composites, and the potential applications cover almost all research fields, e.g. metallurgy, pharmaceuticals, earth sciences, polymers and composites, microelectronics and nanotechnology. Powder XRD can also be applied to study the pseudo-crystalline structure of mesoporous materials and colloidal crystals, provided that the length scales are in the correct size regime. CMM has two XRD instruments currently available. The newly commissioned (2011) Di Vinci instrument is highly versatile and can be configured in both reflectance and transmission geometries, and with either divergent (Bragg-Brentano) or parallel-beam optics. It has a 90-position magazine sample changer for high-throughput analysis and can be configured for cobalt radiation to allow for the measurement of highly fluorescing samples, e.g. Fe-rich powders.
The D8 Advance is configured in parallel-beam geometry and allows for the measurement of low angles and moderately rough surfaces. As such we are able to investigate a wide range of material types, including powders, thin films and moderately rough surfaces (e.g. composite fibre mats), as well as highly oriented samples (e.g. clays) and Fe- and Mn-bearing minerals. In addition to XRD it is also possible to carry out x-ray reflectometry experiments on thin (< 200 nm) films on atomically smooth surfaces (e.g. silicon wafers).
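The lattice-plane spacings mentioned above come from Bragg's law, nλ = 2d sin θ: a diffraction peak at scattering angle 2θ corresponds to a plane spacing d. A minimal sketch (the wavelength and peak position below are illustrative values, not measurements from the CMM instruments):

```python
import math

def lattice_spacing(two_theta_deg: float, wavelength_nm: float, order: int = 1) -> float:
    """Return the lattice-plane spacing d (in nm) for a diffraction peak
    at angle 2-theta (degrees), via Bragg's law n*lambda = 2*d*sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength_nm / (2.0 * math.sin(theta))

# Illustrative: Cu K-alpha radiation (~0.15406 nm), first-order peak at 2-theta = 44.4 degrees
d = lattice_spacing(44.4, 0.15406)  # roughly 0.2 nm plane spacing
```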
<urn:uuid:dacf44bd-cb0e-451f-8a19-cae187810c4b>
3.046875
525
Knowledge Article
Science & Tech.
32.701416
Using static variables for Strings
When you define static fields (also called class fields) of type String, you can increase application speed by using static variables (not final) instead of constants (final). The opposite is true for primitive data types, such as int. For example, you might create a String object as follows: private static final String x = "example"; For this static constant (denoted by the final keyword), a temporary String instance is created each time that you use the constant: the compiler eliminates "x" and replaces it with the string "example" in the bytecode, so the BlackBerry® Java® Virtual Machine performs a hash table lookup each time that you reference "x". In contrast, for a static variable (no final keyword), the String is created once: private static String x = "example"; The BlackBerry JVM performs the hash table lookup only when it initializes "x", so access is faster. You can use public constants (that is, final fields), but you must mark variables as private.
<urn:uuid:38496616-dc5c-4265-a169-99235d13a9ef>
3.3125
218
Documentation
Software Dev.
41.91
All Problems of the Week
211: Types of Triangles
Open the sketch triangletypes.gsp or view the JavaSketchpad applet. Drag the vertices of each of the four triangles (red, blue, green, purple). Record what you notice about the angles and the lengths of the sides of each of the triangles. Based on these observations, what kind of triangle do you think each is? Go to page 2. Drag the vertices of the triangles and explain whether or not the additional information provided agrees with the conclusions you made above for each triangle. Go to page 3. Again, drag the vertices of the triangles and explain whether or not the additional information provided agrees with the conclusions you made above for each triangle. You can move the blue and purple triangles so that they "match up," or are exactly the same in every way (try it!). Explain which other pairs of triangles match up and which ones don't. Be sure not to miss any pairs (there should be six, including the blue and purple). Which triangle is your favorite? Why? Use the submit link below to get hints and also chances to revise. If you are under 13, you must have permission from your parent or teacher to participate in this web project. You will be asked to provide the email address of your parent or teacher when you register. At any time, parents or teachers may request that we remove personal information by writing to firstname.lastname@example.org or by contacting us via postal mail or telephone (800-756-7823). © 1994-2009 Drexel University. All rights reserved. Contact the Problem of the Week administrators. The Math Forum is a research and educational enterprise of the Drexel School of Education.
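Recording side lengths and deciding what kind of triangle each one is, as the activity asks, amounts to a small classification rule. A sketch (exact equality of side lengths is an idealization of measurements read off a dragged sketch):

```python
def classify_by_sides(a: float, b: float, c: float) -> str:
    """Classify a triangle by its side lengths: equilateral, isosceles, or scalene."""
    # First check the triangle inequality: each side must be shorter
    # than the sum of the other two.
    if not (a + b > c and a + c > b and b + c > a):
        raise ValueError("side lengths do not form a triangle")
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"
```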
<urn:uuid:e1e9cabe-6a60-4645-8d1d-b486ef9244b6>
3.296875
390
Tutorial
Science & Tech.
61.627189
WHAT IS THE TYPICAL MAGNETIC FIELD PREDICTED FOR THE SURFACE OF A NEUTRON STAR? - JENNIFER IGLESIAS (age 18) KILLEEN HIGH SCHOOL, KILLEEN, TX, BELL
Neutron stars have the strongest magnetic fields of any stars known. Relatively weak ones have magnetic field strengths of about 10^8 G and strong ones can be as high as 10^12 G. In fact, one group of scientists has recently discovered a young neutron star with a magnetic field strength of 8*10^14 G! To give you a little bit of perspective, the Earth's magnetic field (by which we orient our compasses) has a strength of only 1 G, and very few scientists have been able to make fields stronger than 10^6 G. (Answer published 10/22/2007.)
<urn:uuid:5b5ad96a-56b8-4803-9b12-703992b7d6ed>
3.375
211
Q&A Forum
Science & Tech.
73.047817
New images of Planet Mercury A perspective view of the immense volcanic plains that span Mercury’s northern latitudes, from Messenger spacecraft, colorized by the topographic height of the surface. The purple colors are the lowest and white is the highest. A mosaic photograph of Planet Mercury, from images taken from the spacecraft Messenger. Messenger has nearly completed two of its main global imaging campaigns. This graphic shows a comparison of the internal structures of Earth and Mercury as currently understood based on the latest data from the MESSENGER mission. Mercury’s interior has a larger ratio of metallic core material to silicate rock material than the Earth. Mercury also appears to have a solid layer of iron sulfide that lies at the top of the core.
<urn:uuid:d8ae75c0-d10f-40c8-b509-55eece60475d>
2.984375
178
Truncated
Science & Tech.
37.90847
Global Warming Science - www.appinsys.com/GlobalWarming [last update: 2011/05/21] The following figures compare the annual average temperature anomaly data for the Arctic for NOAA GHCN (unadjusted) and Hadley CRUTEM3 (adjusted and averaged over 5x5 degree grids) through 2010. All stations or grids north of 65N with data extending from before 1930 to after 2000 have been included. (Plotted at www.appinsys.com/GlobalWarming/Climate.aspx) GHCN – 29 stations CRUTEM3 – 32 5x5 grids The following figure compares the above two figures with GHCN in blue and CRUTEM3 in red. Hadley/CRU adjustments result in a reduction in past warm years in the 1920s-1940s and slightly warmer temperatures in the mid-2000s. The following figure shows the GHCN temperature anomaly data (blue) along with the same data shifted back 69 years and down 0.3 degrees (red). The following figure shows the CRUTEM3 temperature anomaly data (blue) along with the same data shifted back 69 years and down 0.6 degrees (red). This shows the similarity of the cycles and may portend 25 years of cooling before the warming resumes.
Hansen Says It’s Natural
NASA’s James Hansen (Hansen et al 2007 “Climate simulations for 1880–2003 with GISS modelE” Clim Dyn (2007) 29:661–696 [http://pubs.giss.nasa.gov/docs/2007/2007_Hansen_etal_3.pdf]) observed that the climate model was not correctly simulating the 1930s-1940s warm period in the global average temperature: “It may be fruitless to search for an external forcing to produce peak warmth around 1940. It is shown below that the observed maximum is due almost entirely to temporary warmth in the Arctic. Such Arctic warmth could be a natural oscillation (Johannessen et al. 2004), possibly unforced. Indeed, there are few forcings that would yield warmth largely confined to the Arctic. Candidates might be soot blown to the Arctic from industrial activity at the outset of World War II, or solar forcing of the Arctic Oscillation (Shindell et al. 1999; Tourpali et al. 
2005) that is not captured by our present model. Perhaps a more likely scenario is an unforced ocean dynamical fluctuation with heat transport to the Arctic and positive feedbacks from reduced sea ice.” So Hansen asserts that the previous warming cycle was natural (perhaps “solar forcing of the Arctic Oscillation”), but the current warming cycle is due to CO2. And yet the current “global” warming has also been “largely confined to the Arctic”. The following figure shows the global temperature change from 1978 to 2006 for the lower troposphere from satellite data [http://climate.uah.edu/25yearbig.jpg]. Most of the warming has been in the Arctic. The following figure is from the IPCC Fourth Assessment Report (AR4) Figure 9.6 (2007). It shows the change in temperature (C per decade) by latitude. The black line shows the observed temperature, the blue band shows the output of the computer models including only natural factors, whereas the pink band shows the output of computer models including anthropogenic CO2. Notice that the models without CO2 (blue shaded area) can explain all of the warming for most of the world up to 30 degrees north latitude. This figure also shows that the warming is mainly in the Arctic. So it is Hansen’s and other alarmists’ position that these two nearly identical Arctic warming cycles have two completely different causes – 1930s = natural; 1990s = CO2. (See Global Warming is Not Global for more details about the non-global trends) (See Arctic Regional Summary for more details on the Arctic) It is only in recent years that scientists are starting to recognize the influence of oceanic cycles in influencing climate. 
A 2008 study – “Oceanic Influences on Recent Continental Warming” (Compo et al., Climate Dynamics, 2008) [http://www.cdc.noaa.gov/people/gilbert.p.compo/CompoSardeshmukh2007a.pdf] – states: “Evidence is presented that the recent worldwide land warming has occurred largely in response to a worldwide warming of the oceans rather than as a direct response to increasing greenhouse gases (GHGs) over land. Atmospheric model simulations of the last half-century with prescribed observed ocean temperature changes, but without prescribed GHG changes, account for most of the land warming. … Several recent studies suggest that the observed SST variability may be misrepresented in the coupled models used in preparing the IPCC's Fourth Assessment Report, with substantial errors on interannual and decadal scales. There is a hint of an underestimation of simulated decadal SST variability even in the published IPCC Report.” The Atlantic Multi-Decadal Oscillation (AMO) is a fluctuation in de-trended sea surface temperatures in the North Atlantic Ocean. It was identified in 2000 and the AMO index was defined in 2001 as the 10-year running mean of the de-trended Atlantic SST anomalies north of the equator. The following figure shows the AMO [http://intellicast.com/Community/Content.aspx?a=127]. The following figure superimposes the Arctic average annual temperature anomaly (CRUTEM3 - blue) on the AMO. The following figure superimposes the Arctic average annual temperature anomaly (GHCN - green) on the AMO (AMO graph from [http://en.wikipedia.org/wiki/File:Amo_timeseries_1856-present.svg]). The following figure shows the sum of the AMO plus the Pacific Decadal Oscillation (PDO) [http://intellicast.com/Community/Content.aspx?a=127] (black line). The following figure shows the AMO+PDO (black line above changed to red below) superimposed on the Arctic average annual temperature shown at the beginning of this document. 
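The AMO index as defined above — a 10-year running mean of de-trended SST anomalies — is straightforward to compute. A sketch (the linear de-trending choice and the monthly 120-point window are illustrative assumptions; real AMO calculations use gridded North Atlantic SST data):

```python
def amo_index(sst_anom, window=120):
    """Sketch of the AMO index described above: remove the linear trend from a
    monthly SST-anomaly series, then take a 10-year (120-month) running mean."""
    n = len(sst_anom)
    t_mean = (n - 1) / 2.0
    y_mean = sum(sst_anom) / n
    # Least-squares linear trend (slope through the series)
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(sst_anom))
    den = sum((t - t_mean) ** 2 for t in range(n))
    slope = num / den
    detrended = [y - (y_mean + slope * (t - t_mean)) for t, y in enumerate(sst_anom)]
    # Running mean over `window` consecutive points
    return [sum(detrended[i:i + window]) / window for i in range(n - window + 1)]
```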
The above figures show the clear correlation of the Arctic temperature cycles to the oceanic oscillations. For more info: Pittsburg Press, Dec. 3, 1922: The Evening Independent, Sep. 10, 1920: New York Times, Dec. 12, 1938: Thanks to Steven Goddard at http://stevengoddard.wordpress.com/ for digging these up. National Geographic Alarmism The NatGeo creates alarm about recent Arctic warming, but ignores the long term data showing that this is a repeated cycle. They also make the false claim “the bays of Greenland’s southwest coast, where warming temperatures have reduced permanent sea ice.” – southwest Greenland never had permanent sea ice as shown below. The magenta line shows the median area of sea ice. Tiny Tim – A New Scientist Tiny Tim sings “The ice caps are melting … all the world is drowning to wash away the sin” (forerunner of the green religion).
<urn:uuid:e9164102-07e5-4f41-a872-bb3491dc87db>
3.078125
1,536
Knowledge Article
Science & Tech.
51.054124
Identifying a Monophyletic Group
Phylogeneticists use phylogenies to isolate monophyletic groups, or groups composed of an ancestor and all of its descendants. These groups are also known as clades. A monophyletic group will include a fundamental ancestor and all of that ancestor's descendants. A split in the tree called a node will represent this ancestor. All of the organisms on that branch of the tree would be its descendants. Note that organisms located on another branch of the tree would not be within that same clade. Only organisms directly connected to the same hypothetical common ancestor are within the same clade or monophyletic group. Species next to each other from the same node are known as sister groups. These species share similar heritable traits with one another. For instance, angiosperms (flowering plants) and gymnosperms (non-flowering seed plants) are sister clades. They are both land plants that share many characteristics in common. They differ in their ability to flower. These species belong to the same monophyletic group: land plants. For instance, the organism at the stem of the tree and all of the other branches connecting to that stem form one monophyletic group. If you trace the stem to the first node in the tree, the hypothetical common ancestor and the branches directly connected to that node are another monophyletic group within that first larger clade. Organisms that have different connecting nodes have different hypothetical common ancestors; they are not in the same clade. They diverged from each other along their evolutionary histories. Often, you may find small clades nested within a larger clade. For instance, all land plants form one monophyletic group, but within that group, angiosperms (flowering plants) form another separate monophyletic group.
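The rule stated above — a monophyletic group is a node plus everything descended from it — is a simple tree traversal. A sketch with a tiny illustrative mini-phylogeny (the taxon names and groupings are for demonstration only, not an authoritative phylogeny):

```python
# A phylogeny represented as a parent -> children mapping (illustrative only)
TREE = {
    "land plants": ["angiosperms", "gymnosperms"],
    "angiosperms": ["monocots", "eudicots"],
}

def clade(tree, ancestor):
    """Return the monophyletic group (clade) rooted at `ancestor`:
    the ancestor itself plus all of its descendants."""
    members = [ancestor]
    for child in tree.get(ancestor, []):
        members.extend(clade(tree, child))
    return members
```

Note how the nested clade (angiosperms) sits entirely inside the larger one (land plants), matching the text's description of clades nested within clades.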
Scientists continue to discover and isolate groups of species to this day. A truly wonderful resource for phylogenetic trees is the Tree of Life Project. This project features scientists from around the world who have collaborated in order to build up the entire biological tree of life. This project is still in progress. You can find them on the web. Content copyright © 2013 by Catherine Ebey. All rights reserved. This content was written by Catherine Ebey. If you wish to use this content in any manner, you need written permission. Contact Catherine Ebey for details. Website copyright © 2013 Minerva WebWorks LLC. All rights reserved.
<urn:uuid:afde31c9-e5e3-4761-ba31-6d5dfc5633be>
3.65625
566
Truncated
Science & Tech.
39.463146
In the early 1960s, using a simple system of equations to model convection in the atmosphere, Edward Lorenz, an MIT meteorologist, ran headlong into "sensitivity to initial conditions". In the process he sketched the outlines of one of the first recognized chaotic attractors. In Lorenz's meteorological computer modeling, he discovered the underlying mechanism of deterministic chaos: simply-formulated systems with only a few variables can display highly complicated behavior that is unpredictable. Using his digital computer, culling through reams of printed numbers and simple strip-chart plots of the variables, he saw that slight differences in one variable had profound effects on the outcome of the whole system. This was one of the first clear demonstrations of sensitive dependence on initial conditions. Equally important, Lorenz showed that this occurred in a simple but physically relevant model. He also appreciated that in real weather situations, this sensitivity could mean the development of a front or pressure system where there never would have been one in previous models. Lorenz later picturesquely explained that a butterfly flapping its wings in Beijing could affect the weather thousands of miles away some days later. This sensitivity is now called the "butterfly effect". © The Exploratorium, 1996
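Lorenz's observation can be reproduced in a few lines: integrate his three equations (dx/dt = σ(y−x), dy/dt = x(ρ−z)−y, dz/dt = xy−βz) twice, from initial conditions differing by one part in a million, and watch the trajectories diverge. A sketch using a simple forward-Euler integrator (the step size and run length are arbitrary illustrative choices):

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 1963 system (classic parameter values)."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

# Two trajectories whose initial conditions differ by one part in a million in x
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-6, 1.0, 1.0)
max_separation = 0.0
for _ in range(3000):  # ~30 model time units
    a, b = lorenz_step(a), lorenz_step(b)
    max_separation = max(max_separation, abs(a[0] - b[0]))
```

The initially microscopic difference grows by many orders of magnitude until the two trajectories bear no resemblance to one another, which is exactly the sensitive dependence Lorenz saw in his printouts.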
<urn:uuid:15f57fdd-63ed-4729-ae25-a15ed450d2a2>
4.03125
276
Knowledge Article
Science & Tech.
24.629027
After explaining what a normal climate is, Heidi Cullen asks whether there actually is a normal climate at all, given that each of the past three decades has been hotter than the previous one, with 2000-2009 being the hottest on record. But nevertheless, she goes on to refer to the “new normals”, defined as a region’s weather averaged over 30 years, which are updated at the end of each decade. These show that “on average, conditions were about 3.6 F warmer from 2001-2010 than from 1971-1980.” As it will likely take a long time before we actually reach a new normal, it seems it is time for a new term to characterize this interim period. Again, Welcome to Post-Normal Times! And note that I am no longer the only one using the term. Last week I was at the 2012 AGU Fall Meeting. I plan to blog about many of the talks, but let me start with the Tyndall lecture given by Ray Pierrehumbert, on “Successful Predictions”. You can see the whole talk on YouTube, so here I’ll try and give a shorter summary. On the role of software engineering in the study of climate change.
<urn:uuid:dcae8f75-ee6c-48f9-b2ad-75b4b007cc72>
2.96875
270
Personal Blog
Science & Tech.
62.068233
Oil is no longer cheap, however, and it’s certainly not limitless. We have entered what Hampshire College professor Michael Klare calls the “age of tough oil,” in which the easily extractable deposits have been depleted, sending us drilling for oil miles beneath the surface of the ocean. Meanwhile, the growing Indian and Chinese middle classes appear poised to double the number of cars on the planet by 2050, to as many as two billion automobiles. If they are going to replace gas-powered cars, electric vehicles need the best possible batteries, and today those batteries are based on lithium. Lithium is the third-lightest element on the periodic table, well suited to lightweight energy storage. Because of its extreme reactivity, it can form the basis for more-energy-dense batteries than just about any other element. The rechargeable lithium battery has already helped transform portable electronics, enabling the shift from the 30-ounce Motorola DynaTAC (commonly known as the Michael-Douglas-in-Wall Street phone) to the 4.8-ounce iPhone 4. Now automakers are betting that lithium could be equally transformative for transportation. But lithium and the batteries based on it are only part of a larger system. Exploiting every milliwatt-hour of electricity stored in that battery requires the most efficient electric motors possible--and the magnets within those motors call for rare-earth elements such as neodymium and dysprosium. Generating electricity from renewable sources such as wind and sun requires ultra-efficient machines as well. Naturally, the most efficient wind turbines use rare-earth-based magnets; advanced thin-film solar panels use either tellurium or indium. Yet the cost and availability of the elements that deliver such efficiency could be a problem. To be deployed on a massive scale, the machines of the clean-energy age must be cost-competitive with today’s fossil-fuel-based systems. 
But clean technology can’t be cost-competitive unless it’s manufactured on a large scale, and nothing is going to get built in volume if the raw ingredients aren’t available and affordable. Given infinite money, as the APS/MRS report notes, “there is no absolute limit on the availability of any chemical element, at least in the foreseeable future.” Theoretically, scientists can wring tiny quantities of many elements from a random bucket of dirt--it just might cost a fortune to do so. So there are two key questions about neodymium, tellurium, lithium and the 26 other energy-critical elements: How much is there? And more crucially, what will it cost to get them out of the ground? The world’s largest lithium producer, Sociedad Química y Minera de Chile S.A. (SQM), operates in Chile’s Atacama Desert, the driest place on Earth, where the soil is so barren that NASA has used it to calibrate microbe-detecting Mars robots. Last May, I traveled to northern Chile to see the company’s operations. Andrés Yaksic, a marketing manager from SQM, met me in San Pedro de Atacama, a tourist oasis about 50 miles north of SQM’s plant. On a bright, chilly morning, we set out for the facility. The sky was a spotless cobalt blue as we drove south toward the Salar de Atacama, the salt flat that is one of the world’s most abundant sources of lithium. SQM says the Salar de Atacama contains some 40 million tons of measured, economically extractable lithium carbonate. After about an hour on the highway, we turned right onto a gravel road through the salar. Bulldozed salt dams and white mounds the size of suburban office buildings speckled the landscape. We stopped at a small office building and put on boots, blaze-orange safety vests and hard hats. Then we walked outside to meet Álvaro Cisternas, a stout, deeply tanned operations manager who would be taking us out to the evaporation pools. 
Satellite images of SQM’s facility show huge white and cerulean squares carved into cocoa-colored earth, like the world’s largest swimming facility. In these pools, brine pumped from a subsurface aquifer bakes in the quasi-Martian sun for months. Water evaporates, the brine concentrates, and in time, minerals begin to precipitate. Later, the brine designated for lithium production is piped into a dedicated series of evaporation pools, each one a deepening shade of yellow. A tanker truck then carts the final product, a solution of 6 percent lithium, to a plant three hours away on the Pacific coast. There it is processed into lithium carbonate, a white powder that looks so much like cocaine that I didn’t dare try to fly back to the U.S. with samples. After we walked among the pools, Cisternas drove us to the top of a small mountain of salt that had been set aside as an overlook. Evaporation pools, tractors, trucks, outbuildings and hills of valuable salt stretched for what appeared to be miles, though the air there was so dry and clear and the view was so completely uninterrupted that getting a firm perspective on the operation’s size was difficult. SQM extracts 31 percent of the world’s lithium supply from this salt flat each year, which is just 40,000 of the salar’s known 40 million metric tons of reserves. Earlier, Yaksic had told me that within a matter of months, operations could scale up to supply three or four times the total global demand. Now, to emphasize the company’s world-beating capacity, Cisternas and Yaksic pointed to a group of pools in the distance and explained that every year SQM actually pumps some hundreds of thousands of metric tons of lithium back into the salar—lithium that has been unavoidably harvested in the pursuit of the real moneymaker. Despite being the world’s largest lithium supplier, SQM generates more revenue from “specialty plant nutrition,” potassium fertilizer for our hydrangeas and geraniums. 
Among the energy-critical elements, lithium is abnormally easy to mine, at least from brine-based sources like the Salar de Atacama. Nevertheless, the situation with many other critical elements might also be less dire than is often reported. “Most of the issues, in my opinion, are a bit overblown,” says MIT’s Gerbrand Ceder. “There are enormous buffers in the system.” The first is simply that if the price of an element goes up, people have incentive to spend more money refining that element from raw ore. “There’s a lot of mining waste that still contains a lot of metal,” Ceder explains. That waste can, in many instances, yield more metal than we’re currently getting from it. In the case of energy-critical elements, whose production typically piggybacks on the extraction of more widely used minerals, the scrap pile could be a valuable source of reserves. Another, often overlooked buffer is a simple hierarchy of demand: If the supply of an element is limited, then the industries that need it most will take it away from those that need it less. Platinum, for example, is an indispensable catalyst in the exhaust filters that car companies are required to install on their automobiles. If platinum demand goes up, that doesn’t mean car companies will use fewer catalytic converters. It means couples will exchange fewer platinum wedding rings. Tellurium provides another example. In addition to cadmium-telluride thin-film solar panels, tellurium is used to make thermoelectric devices (which convert wasted heat into electricity) and steel alloys. If demand for tellurium goes up, it will quickly become clear who needs it most. “What you find for tellurium is that the solar industry sits way on top of the chain,” Ceder says. “The value that they get from it is so high that the steel guys are going to get screwed, and then after that the thermoelectric guys.”
<urn:uuid:e1aef0d6-a2ca-4b4d-9bb3-e8332b31fa6a>
2.78125
1,749
Truncated
Science & Tech.
42.815259
Results: MIT researchers have developed designs for a new kind of coal-burning power plant, called a pressurized oxy-fuel combustion system, whose carbon-dioxide emissions are concentrated and pressurized so that they can be injected into deep geological formations. This system is a way to reduce the energy penalty that all carbon-capture systems for power plants have compared to regular fossil-fuel plants, and could thus be an enabling technology to help make carbon capture and sequestration systems (CCS) practical and affordable. While all carbon capture systems incur about a one-third reduction in plant efficiency, this system reduces that penalty. Why it matters: Since more than 90 percent of world energy production uses fossil fuels, finding ways to burn them without adding greenhouse gases to the atmosphere is seen as a crucial step toward curbing global climate change. The new system not only would eliminate the carbon dioxide emissions from the plant, but could produce savings by reducing the size of some components in the plant. How they did it: Professor of mechanical engineering Ahmed Ghoniem and his team designed a coal-plant combustion chamber that burns the fuel under pressure, and uses a stream of pure oxygen instead of ordinary air, which is 79 percent nitrogen. They did both simulations and lab-scale tests of the new system to demonstrate a 3 percent improvement in efficiency compared to an unpressurized oxy-fuel system. Next steps: The Italian energy company ENEL, which sponsored the research, plans to build a pilot plant using the system in the next few years. In the meantime, Ghoniem and his team are continuing to fine-tune the technology, hoping to raise the efficiency improvement to 10 to 15 percent.
<urn:uuid:94d16a86-9189-40c3-ae03-b8791bce3d02>
4.0625
342
Knowledge Article
Science & Tech.
24.72283
The Large Magellanic Cloud, an irregular galaxy. © Loke Kun Tan (StarryScapes)
Any galaxy which does not look like an elliptical or spiral is called an irregular galaxy. Every irregular galaxy is unique in its appearance; it doesn't have to look like the others. It just isn't a spiral or an elliptical. There are two types of irregulars. Irr I galaxies are similar to spirals because they have lots of gas and young stars, but they don't have spiral arms. Irr II galaxies are distorted and strange looking. Their appearance leads some astronomers to think that Irr II galaxies may have collided with another galaxy at some time during their lives. If you live south of the Equator, you may be able to see two irregular type galaxies in your night sky. The Large and Small Magellanic Clouds are two very nearby irregular galaxies which are orbiting the Milky Way. Because they are nearby and fairly bright, they can be seen with the naked eye.
Evolution is change over time. Under this broad definition, evolution can refer to a variety of changes that occur over time—the uplifting of mountains, the wandering of riverbeds, or the creation of new species. To understand the history of life on Earth, though, we need to be more specific about what kinds of changes over time we're talking about. That's where the term "biological evolution" comes in. Biological evolution refers to the changes over time that occur in living organisms. An understanding of biological evolution—how and why living organisms change over time—enables us to understand the history of life on Earth. The key to understanding biological evolution lies in a concept known as descent with modification. Living things pass on their traits from one generation to the next. Offspring inherit a set of genetic blueprints from their parents. But those blueprints are never copied exactly from one generation to the next. Little changes occur with each passing generation, and as those changes accumulate, organisms change more and more over time. Descent with modification reshapes living things over time, and biological evolution takes place. All life on Earth shares a common ancestor. Another important concept relating to biological evolution is that all life on Earth shares a common ancestor. This means that all living things on our planet are descended from a single organism. Scientists estimate that this common ancestor lived some 3.5 to 3.8 billion years ago and has since given rise to all living things that have inhabited our planet. The implications of sharing a common ancestor are quite remarkable and mean that we're all cousins—humans, green turtles, chimpanzees, monarch butterflies, sugar maples, parasol mushrooms and blue whales. Biological evolution occurs on different scales. These scales can be roughly grouped into two categories: small-scale biological evolution and broad-scale biological evolution.
Small-scale biological evolution, better known as microevolution, is the change in gene frequencies within a population of organisms from one generation to the next. Broad-scale biological evolution, commonly referred to as macroevolution, refers to the progression of species from a common ancestor to descendant species over the course of numerous generations.
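The textbook definition of microevolution—a change in gene frequencies from one generation to the next—can be illustrated with a toy simulation of genetic drift. Everything below (the population size, starting frequency, and resampling scheme) is invented for illustration; this is a sketch, not a scientific model:

```python
import random

random.seed(1)  # fixed seed so the toy run is repeatable

def drift(freq, pop_size, generations):
    """Toy illustration of microevolution as genetic drift: the frequency
    of one allele in a finite population, resampled each generation."""
    for _ in range(generations):
        # Each individual in the next generation carries the allele with
        # probability equal to the current frequency.
        carriers = sum(random.random() < freq for _ in range(pop_size))
        freq = carriers / pop_size
    return freq

# Starting from a 50% allele frequency, the frequency wanders over
# generations even with no selection at all.
print(round(drift(0.5, 100, 50), 2))
```

Note that drift alone changes gene frequencies; selection is a separate (directional) pressure layered on top of this random walk.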
Dolphins that hunt using sponges as tools prefer to spend time with other… (Eric M. Patterson ) Scientists have discovered that bottlenose dolphins have something in common with junior high school girls: They like to hang out in cliques. The researchers found that dolphins that wear sea sponges on their beaks as hunting tools prefer to hang out with other dolphins that do the same. The finding is the first strong example of cultural behavior in non-human animals. While there is much disagreement among scholars over what constitutes a cultural behavior, there is broad agreement that it must include two central components: The behavior must be socially learned, meaning animals learn largely by observing and interacting with others, and it must also lead to identifiable groups, some of which exhibit the behavior and some of which don’t. In other words, it has to produce social cliques. Dolphin experts already knew that the mothers taught their children how to hunt with sea sponges. But they had yet to show that sponge-hunters preferred the company of fellow sponge-hunters. Dolphins hunt alone, so researchers wanted to know whether the spongers spent more non-hunting time with other spongers than with non-spongers. To answer this question, a team of researchers from Georgetown University used a social network analysis to examine a trait called “homophily,” or how likely dolphins were to associate with other dolphins who hunted the way they did. The analysis was complicated by the fact that dolphins, like humans, associate with a wide range of other dolphins, but for varying amounts of time. As a result, the analysis required a significant amount of data. In the report, published Tuesday in the journal Nature Communications, the researchers analyzed 22 years of records recounting the observed interactions between 36 sponge-hunters and 69 non-spongers. They found that female spongers—but not males—spent more time with fellow sponge-hunters than with non-sponge-hunters. 
While the researchers were unsure why they found this sex difference, they suspect it is related to the different approaches male and female dolphins take to social behavior in general. The researchers do not claim that hunting techniques are the primary determinant of dolphin social behavior. More important in determining associations, they write, may be “enduring traits such as sex, kinship, age, and geography,” traits that are often definitive in human culture as well. Nevertheless, the scientists say that their results demonstrate that socially learned behaviors do play a role, just as they do in pre-teens. As a result, they expect to discover more examples of animal cultural behaviors in the future—sponging may be just the tip of the iceberg.
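The homophily idea behind the study's network analysis can be sketched with a toy calculation: out of all observed pairings, what fraction involve two animals that share the same hunting technique? The data and the index below are invented for illustration and are not the authors' actual method:

```python
# Toy homophily check: do "spongers" (names starting with S) associate with
# other spongers more than with non-spongers (N)? Data is invented.
observations = [            # (dolphin A, dolphin B) association records
    ("S1", "S2"), ("S1", "S3"), ("S2", "S3"),
    ("S1", "N1"), ("N1", "N2"),
]

def is_sponger(dolphin):
    return dolphin.startswith("S")

# Fraction of observed associations between like-typed dolphins.
same = sum(is_sponger(a) == is_sponger(b) for a, b in observations)
homophily = same / len(observations)
print(homophily)  # 0.8
```

A real analysis (as the article notes) must also correct for how much time each pair spends together and for confounds like kinship and geography, which is why 22 years of records were needed.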
As I mentioned in my last post, one of the ways we are helping our teams get a better understanding of the wild and wacky world of the Web and Web developers is via a glossary we’ve created. In compiling this I pulled information from various and sundry sources across the Web including wikipedia, community and company web sites and the brain of Cote. Over the next several entries I will be posting the glossary. Feel free to bookmark it, delete it, offer corrections, comments or additions. Today I present to you, the Application tier. - Application framework: Provides re-usable templates, methods, and ways of programming applications. Often, these frameworks will provide “widgets” and “libraries” that developers use to create various parts of their application – they may also include the actual tools to create, deploy, and run the final application. Some application frameworks create whole sub-cultures of developers, such as Rails, which is built on the Ruby programming language. Most application frameworks are open source and free, though there are also many closed source, not-free ones. - Continuous code development lifecycle: releasing software at more frequent intervals (30 days or less) by (a.) doing smaller batches of code, and, (b.) using tools and processes that enable a more lean approach to development. Software released in such a cycle tends to release many small features instead of, in contrast, “traditional” development where 100s of features are bundled up in one version of the software and released every 1-2 years. - Java/.NET: The incumbent enterprise development languages. Very powerful but relatively difficult to learn and time-consuming to program in. - PHP: a server-side scripting language originally designed for web development to produce dynamic web pages. WordPress is written in PHP, as is Facebook and countless web sites.
PHP is infamous for being very quick and easy to get started with (which it is) but turning into a mess of “spaghetti code” after years of work and different programmers. PHP is open source, though Zend, the patron company behind PHP, and others sell “commercial” versions. - Perl: One of the original programming languages of the web, Perl emphasizes a very “Unix way” of programming. Perl can be quick and elegant, but like PHP can result in a pile of hard-to-maintain code in the long term. While Perl was extremely popular in the first Internet bubble, it has since taken a back seat to more popular development worlds such as PHP, Java, and Rails. Perl is open source and there are few, if any, commercial companies behind it. - Python: Like all dynamic languages, Python emphasizes speed of development and code readability. It’s an object-oriented language. Python is something of an evolution of Perl, but it is not that closely tied to it. Python emphasizes breadth of functionality while at the same time being a proper, object-oriented programming language (not just a way to write “scripts”). Python enjoys steady popularity; Google uses Python as one of its primary programming languages. - Ruby: Ruby and Python are very similar in ethos: emphasizing fast coding with a more human-readable syntax. Ruby became famous with the rise of Rails in the mid-2000s, which was a rebellion against the “heavyweight” practices that Java imposed on web development. Ruby is still very popular. Ruby can also be run on top of the Java virtual machine (via JRuby), providing a good bridge to the Java world. Salesforce’s acquired PaaS, Heroku, uses Ruby, and many modern development platforms use Ruby. - Ruby on Rails: a popular web application framework written in Ruby. Rails is frequently credited with making Ruby “famous”. - Scala: A somewhat exotic language, but it has quite a buzz around it.
It’s good for massive-scale systems that need to be concurrent (lots of people changing lots of things, often the same things, at the same time). Erlang is another language in this area. Scala runs on the Java Virtual Machine and Common Language Runtime. In April 2009 Twitter announced they had switched large portions of their backend from Ruby to Scala and intended to convert the rest. In addition, Foursquare uses Scala and Lift (Lift is a framework for Scala much in the same way Rails is a framework for Ruby). - R: a programming language and software environment for statistical computing and graphics. - Clojure: A recent dialect of the Lisp programming language that is good for data-intensive applications. It runs on the Java Virtual Machine and Common Language Runtime.

Runtimes and Platforms

- Common Language Runtime (CLR): the virtual machine component of Microsoft’s .NET framework; it is responsible for managing the execution of .NET programs. - Java Virtual Machine (JVM): the underlying execution engine that the Java language runs on top of. It controls access to the hardware, networks, and other “infrastructure” and services outside of the main application written in Java. Of special note is that many languages other than Java can run on the JVM (as with the CLR), e.g., Scala, Ruby, etc. There are many JVMs, and ISVs (IBM, Oracle, etc.) will use their custom JVMs as key differentiators for middleware, mostly around performance, scale-out, and security. - OpenShift: Red Hat’s Platform as a Service (PaaS) offering. More specifically, OpenShift is a PaaS software layer that Red Hat runs and manages on top of third-party providers – Amazon first, with more to follow. - Heroku: A Platform as a Service (PaaS) offering that was acquired by Salesforce.com. It supports development in Ruby on Rails, Java, PHP and Python. - CloudFoundry: A Platform as a Service (PaaS) offering and VMware-led project.
Cloud Foundry provides a platform for building, deploying, and running cloud apps using the Spring Framework for Java developers, Rails and Sinatra for Ruby developers, Node.js, and other JVM languages/frameworks including Groovy, Grails and Scala. - Joyent: Offers PaaS and IaaS capabilities through the public cloud. Dell resells this capability as a turnkey solution under the name The Dell Cloud Solution for Web applications. Joyent also sponsors the development of node.js and employs its creator. - GitHub: a web-based hosting service for software development projects that use the Git revision control system. GitHub offers both commercial plans and free accounts for open source projects. But wait, there’s more… Stay tuned for the next couple of entries when I will cover first the Database tier and then the Infrastructure tier. Pau for now…
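As a footnote to the application-framework entry above: the core value of a framework—reusable machinery that the developer plugs application code into—can be sketched in a few lines. The toy router below is purely illustrative (in Python for brevity; it is not any real framework's API, just the routing pattern that frameworks like Rails and Sinatra provide):

```python
# Toy sketch of the routing pattern web application frameworks provide:
# the framework supplies reusable machinery (a route table and dispatcher),
# and the developer supplies only the application-specific handlers.
routes = {}

def route(path):
    """Decorator that registers a handler function for a URL path."""
    def register(handler):
        routes[path] = handler
        return handler
    return register

@route("/hello")
def hello():
    return "Hello, web!"

def dispatch(path):
    """What the framework does on each request: look up and call a handler."""
    handler = routes.get(path)
    return handler() if handler else "404 Not Found"

print(dispatch("/hello"))    # Hello, web!
print(dispatch("/missing"))  # 404 Not Found
```

Real frameworks add templating, persistence, and deployment tooling on top of this same inversion: your code is called by the framework, not the other way around.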
[Graphitemaster] is helping to demystify the process of tailoring functions for dynamic loading. His tutorial shows how to make a dynamic function that prints “Hello World” to the standard output. This is of course rudimentary, but if you have no prior experience with the topic you might be surprised at what actually goes into it. Normally your compiled code has addresses in it that tell the processor where to go next. The point of dynamic loading is that the code can be put anywhere, and so static addresses simply will not work. The code above shows how a simple printf statement normally compiles. The callq line is a call instruction that needs to be replaced with something that will play nicely in the registers. [Graphitemaster] takes it slow in showing how to do this. Of course a dynamic function alone isn’t going to be much good, so the tutorial finishes by illustrating how to program a dynamic code loader.
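The tutorial itself works in C and assembly, but the core idea—resolving a function's address at run time instead of baking it in at compile time—can be demonstrated with Python's ctypes. This is my own illustration, not the tutorial's code, and it assumes a Unix-like system where `CDLL(None)` exposes the already-loaded C library symbols:

```python
import ctypes

# Resolve strlen from the C library at run time rather than at link time.
# Assumption: a Unix-like system where CDLL(None) returns a handle to the
# symbols already loaded into the process (including libc).
libc = ctypes.CDLL(None)

strlen = libc.strlen                    # dynamic symbol lookup
strlen.argtypes = [ctypes.c_char_p]     # describe the C signature...
strlen.restype = ctypes.c_size_t        # ...so ctypes marshals correctly

print(strlen(b"Hello World"))  # 11
```

This is the same late-binding principle the tutorial implements by hand: nothing about `strlen`'s address appears in our code; it is found only when the program runs.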
* Japanese name: Tobihaze * Scientific name: Periophthalmus sp. * Description: Mudskippers are fish with eyes on the top of the head (not at the sides, as in most other fish) and with front (pectoral) fins that are more like legs than fins. They are olive-brown in color, have sharp teeth and large mouths, and grow up to 15 cm long. The eyes can be raised on stalks, independently of one another. * Where to find them: Mudskippers are found in the mud of mangroves and river estuaries in Honshu, Kyushu and Okinawa. They thrive in brackish water with a salinity halfway between marine saltwater and riverine freshwater. Mudskippers are unusual fish in that they can often be seen clinging to the branches of estuarine or mangrove trees, above the water line. * Food: Worms and crustaceans (crabs and shrimps) that live in the river mud. Some mudskipper species eat algae that grow on mangrove roots; others eat insects. * Special features: Mudskippers are amphibious fish. They have gills that work like those of other fish and extract oxygen from water, but unlike other fish, they can also breathe air. In this respect they are similar to lungfish, the ancestors of the first vertebrates to walk on land. Mudskippers absorb oxygen through their wet skin, and have sacs under the skin near the gills that act like lungs, transmitting oxygen from the air to the blood. Everything about mudskippers is an adaptation from the typical fish way of life to one where much of the animal's time is spent out of the water. Their pectoral fins are so well adapted to use on land that mudskippers can run faster than they can swim. During the mating season, males dig mud burrows and perform acrobatics and push-ups to attract females. The dorsal fin becomes brightly colored and is flashed in warning to rival males. Females lay their eggs in the burrows, but as these are so deep, the water within them contains almost no oxygen.
To ensure the eggs have enough oxygen to develop, males gulp air from the surface and release it at the bottom of the burrow. Mudskippers care for their offspring.
Rotifer occurrence in relation to water temperature in Loch Leven, Scotland

May, L. 1983. Rotifer occurrence in relation to water temperature in Loch Leven, Scotland. Hydrobiologia, 104 (1), 311-315. doi:10.1007/BF00045983. Full text not available from this repository.

Many rotifer species in Loch Leven show a distinct seasonality in occurrence. This appears to be primarily an effect of temperature. While some species seem to be eurythermal, other species show a well-defined range of temperature preference, outside which they are unable to maintain populations. Within this range, there is a close correlation between food availability and rotifer abundance.

Programmes: CEH Programmes pre-2009 publications > Other
CEH Sections: _ Pre-2000 sections
Additional Keywords: rotifers, temperature, Loch Leven, Scotland, occurrence, grazing, population dynamics
NORA Subject Terms: Zoology; Ecology and Environment
Date made live: 03 Aug 2010 07:58
Mission Type: Orbiter Launch Vehicle: Delta 1913 (no. 95 / Thor no. 581) Launch Site: Eastern Test Range / launch complex 17B, Cape Canaveral, USA NASA Center: Goddard Space Flight Center Spacecraft Mass: about 330 kg at launch, 200 kg in lunar orbit (after the solid braking motor was ejected) Spacecraft Instruments: 1) galactic studies experiment; 2) sporadic low-frequency solar radio bursts experiment; 3) sporadic Jovian bursts experiment; 4) radio emission from terrestrial magnetosphere experiment; and 5) cosmic source observation experiment Spacecraft Dimensions: Radio-antenna array: 183 meters from tip to tip. Main body: truncated cylinder 92 cm in diameter, about 79 cm high Spacecraft Power: solar panels which charged six nickel-cadmium batteries Maximum Power: 38.3 W Antenna Diameter: 183 meters (would have been 457 m if fully extended) Program Manager: John R. Holtz Project Manager: John T. Shea Principal Scientist: Dr. Nancy G. Roman (Program Scientist) Deep Space Chronicle: A Chronology of Deep Space and Planetary Probes 1958-2000, Monographs in Aerospace History No. 24, by Asif A. Siddiqi National Space Science Data Center, http://nssdc.gsfc.nasa.gov/ Solar System Log by Andrew Wilson, published 1987 by Jane's Publishing Co. Ltd. After launch on a direct-ascent trajectory to the Moon and one midcourse correction on 11 June, Explorer 49 fired its insertion motor at 07:21 UT on 15 June to enter orbit around the Moon. Initial orbital parameters were 1,334 x 1,123 kilometers at 61.3° inclination. On 18 June, the spacecraft jettisoned its main engine and, using its Velocity Control Propulsion System, circularized its orbit. The spacecraft, with a partially deployed radio-antenna array measuring 183 meters from tip to tip, was the largest spacecraft in physical dimensions to enter lunar orbit. Although the antennas did not deploy to full length, the mission goals were not affected.
During its mission, Explorer 49 studied low-frequency radio emissions from the solar system (including the Sun and Jupiter) and other Galactic and extra-galactic sources. It was placed in lunar orbit to avoid terrestrial radio interference. NASA announced completion of the mission in June 1975. Last contact was in August 1977. This was the last spacecraft the U.S. sent to the Moon for 21 years (until Clementine in 1994).
2.5 Body lengths per second. Rough Terrain. Here, the servos at the hips of a tripod of legs are made to swing as the pistons are extended. An impressive 2.5 body lengths per second is achieved, and fairly robust performance is obtained over obstacles that are the same height as the "stomach" clearance. This is mainly attributed to the swinging of the legs as well as increased foot traction gained by padding the track and obstacles with high-friction materials. The issue of foot traction is raised: Cockroaches can achieve high traction due to their small size and intricate leg design. Will our scaled-up robots be able to do the same? Will they be able to operate with the same magnitude of horizontal forces without slipping? Will they get stuck on upward slopes? Though the speed achieved is impressive, the behavior is still too "hoppy" and not very "Groucho-like" [McMahon, et al. 1987]. Minisprawl is observed to bottom out, or crash, during the gait. Playing the videos in slow motion (or one frame at a time, as one can do by using the arrow keys in Movie Player) one can see that there is a galloping or bounding quality to the motion.
WPF's XAML language defines the controls that make up a user interface. Typically, some kind of container control such as a Grid or StackPanel fills the window. In turn, that container holds other controls, such as Labels, TextBoxes, Buttons, Sliders, and so forth. Your XAML code defines the user interface's structure and—to an extent—its behavior. For example, you can define animations in XAML that change control properties such as size and position when certain events occur. WPF also lets you use templates to define the structure of the controls that make up the interface. By controlling the structure of a Slider, for example, you can change the way it looks and acts. For example, you could use a template to make a Slider display round buttons on its ends, a thin track in the middle, and a fat diamond-shaped thumb for dragging. This article describes templates, explains how to build and use them, and provides a few useful examples. What Is a Template? Figure 1. Slider Dissected: If you look closely at a WPF Slider control, you'll see that it's a composite, made up of lots of parts that are actually other controls. If you look closely at most controls, you'll find that they're made up of a number of parts. Figure 1 shows a Slider control with its constituent parts marked. You might imagine that a single chunk of code makes up the Slider and creates all of these pieces, but that's not the case. The Slider is made up of an assortment of other controls, including Border, Canvas, Grid, Path, Rectangle, RepeatButton, Thumb, TickBar, and Track controls. For example, the TickBar control draws the tick marks and the Thumb allows the user to click and drag the thumb. Together, those individual controls determine the Slider control's appearance and behavior. The control's template determines the Slider's pieces and how they behave. It defines the sub-controls that make up the Slider and determines how they interact with the user through the sub-controls' properties.
This would all be only mildly interesting if it weren't for the fact that you can change a control's template. If you want to build an elliptical Slider that lets the user drag a circle around the ellipse's rim, you can! You'll have to define how the new template provides a Slider's typical behavior—but you can do it if you really want to.
Book Summary - Eaarth: Making a Life on a Tough New Planet This is a substantive summary of Bill McKibben's book Eaarth: Making a Life on a Tough New Planet. McKibben argues that we have underestimated the pace and impact of climate change, and as a result we now inhabit a planet that has been irrevocably changed. The speed and magnitude of the impact of global warming will require that we abandon our assumption that the economy must grow. In order to endure on the new planet we have created, we will need to develop local resources for growing our own food and creating our own energy. This summary may be useful as background reading for teachers developing secondary level Social Science lessons on climate change. climate change, environment, renewable energy, local food
C and C++ books - page 13 Posted on: October 7, 2010 at 12:00 AM Data Structures and Algorithms with Object-Oriented Design Patterns in C++ This book was motivated by my experience in teaching the course E&CE 250: Algorithms and Data Structures in the Computer Engineering program at the University of Waterloo. I have observed that the advent of object-oriented methods and the emergence of object-oriented design patterns has led to a profound change in the pedagogy of data structures and algorithms. The successful application of these techniques gives rise to a kind of cognitive unification: Ideas that are disparate and apparently unrelated seem to come together when the appropriate design patterns and abstractions are used. Data structures are the basic elements from which large and complex software artifacts are built. To develop a solid understanding of a data structure requires three things: First, you must learn how the information is arranged in the memory of the computer. Second, you must become familiar with the algorithms for manipulating the information contained in the data structure. And third, you must understand the performance characteristics of the data structure so that when called upon to select a suitable data structure for a particular application, you are able to make an appropriate decision. Teach Yourself C++ in 21 Days (Second Edition) This book is designed to help you teach yourself how to program with C++. In just 21 days, you'll learn about such fundamentals as managing I/O, loops and arrays, object-oriented programming, templates, and creating C++ applications—all in well-structured and easy-to-follow lessons. Lessons provide sample listings—complete with sample output and an analysis of the code—to illustrate the topics of the day.
Syntax examples are clearly marked for handy reference. This book starts you from the beginning and teaches you both the language and the concepts involved with programming C++. You'll find the numerous examples of syntax and detailed analysis of code an excellent guide as you begin your journey into this rewarding environment. Whether you are just beginning or already have some experience programming, you will find that this book's clear organization makes learning C++ fast and easy. The ZooLib Cookbook ZooLib is a cross-platform application framework. What it allows you to do is to write a single set of C++ sources and compile for different operating systems and microprocessors to produce native executable applications with very little need for platform-specific client code. This is of great benefit to a developer, as it allows you to support your application on a variety of platforms without a lot of extra work developing parallel codebases. It also allows you to spend the bulk of your time developing on whatever platform you enjoy the most while delivering for the platforms your users need, even if they're not the same. ZooLib applications are multithreaded. This means there can be multiple sequences of program execution within one application. For the most part there is a thread for each window in a GUI application and a thread for the main application itself, but you can create as many threads as you like, within the limits imposed by the host operating system's resources. C++ in Action There aren't that many books that teach C++. You can find good C++ reference books and technical books for advanced C++ programmers, but precious few books that actually teach programming in C++. Among them, C++ in Action presents a unique approach—teaching the language from the perspective of a professional programmer. In this book you won't find examples of boring payroll applications or programs for grading students.
Instead you'll witness the development of a simple parser and a symbolic calculator from a simple command-line program to a GUI Windows application. In the process you'll learn how to use C++ like a real pro. C++ and free electronic books These are electronic books in HTML on C++ and Java, along with the source code. The HTML books are fully indexed, use Frames for easy navigation through the chapters, and have color syntax highlighting on all the source-code listings. Each HTML download contains an entire book and source code in a single zipped file.
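The first book's three-point test for understanding a data structure—memory arrangement, manipulating algorithms, and performance characteristics—can be made concrete with a small example. The sketch below uses Python rather than C++ purely for brevity; it is my illustration, not code from any of the books:

```python
# A minimal stack, annotated with the three things the first book says you
# must understand about any data structure.
class Stack:
    def __init__(self):
        # (1) Arrangement in memory: items live in one contiguous,
        #     growable array.
        self._items = []

    def push(self, x):
        # (2) Manipulating algorithm: append at the end of the array.
        # (3) Performance: amortized O(1).
        self._items.append(x)

    def pop(self):
        # (2) Manipulating algorithm: remove from the end (LIFO order).
        # (3) Performance: O(1), since no elements need to shift.
        return self._items.pop()

s = Stack()
for x in (1, 2, 3):
    s.push(x)
print(s.pop(), s.pop())  # 3 2
```

Knowing all three facets is what lets you choose, say, a stack over a queue (which would pop 1 first) for a given application.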
The clash of water sloshing in from the Atlantic and the Mediterranean that will reach up into the Alps at Switzerland is due to many factors, as we have explained. We have described the direction of slosh during the pole shift, where France and Spain will be rushing toward the northwest during the crustal shift, thus causing water to be pushed down along the UK and the coast of Spain. But note that this water will be trapped in the Bay of Biscay! It will roil there, with no escape except inland, as the pressure will come from the Atlantic, relentlessly. Water takes the path of least resistance and does not move in the direction of water under pressure. It prefers to move overland, and will do so. When this initial pole shift tide encounters a slosh from the Mediterranean, which will have a different rate of slosh being a smaller body than the Atlantic, the flow overland is also blocked, and thus tidal bore in central France and up into the Alps at Switzerland will occur. Note that the Mediterranean will backwash up the Rhone River and the Atlantic will backwash along the rivers emptying into the Atlantic at Bordeaux. When these waters clash, tidal bore will climb up. Those who have observed tidal bore, water under pressure, are astonished at the height water can climb in these circumstances. Watch waves as they rise to crash on the beach. They rise higher than the water in the sea, but only rise when there is nowhere else for the water to go. In the case of the sloshing over France, this point is at the foothills of the Alps at Switzerland, and thus the rapid and astonishing rise is likely to occur there. Northern France and countries along the English Channel will not have this problem, as the English Channel is sufficient to allow the tide to sweep along. Places where the pole shift tide can become trapped, such as the Bristol Channel, should brace for a pole shift tide higher than 500 feet, closer to the 600 foot limit, as we have explained. 
But the Bay of Biscay will trap more water than the Bristol Channel, and thus the force of water overland and the tidal bore in central France can be expected to be voracious at some points. Will the Ardennes be a safe location during the European tsunami expected to assault the European coastline during the 7 of 10 scenarios? It will be sufficiently inland and sufficiently high, and will be so during the hour of the pole shift also. ZetaTalk July 2, 2011
The thing in the photo above, I’m sad to say, is a penis. It belongs to the male seed beetle. And just in case you’re holding out hope that appearances are deceiving, I can assure you they are not. Those spikes are hard and sharp, and they inflict heavy injuries upon the female beetles during sex. Why would such a hellish organ evolve? This isn’t just about beetles. The animal kingdom is full of bafflingly-shaped penises adorned with spines, spikes, and convoluted twists and turns. In some animal groups, like certain flies, penis shape is the only clue that allows scientists to distinguish between closely related species. For a male, sex isn’t just about penetration. After he ejaculates inside a female, his sperm still have to make their way to her eggs to fertilise them and pass on his genes. If she mates with many suitors, her body becomes a battleground where the sperm of different males duke it out. Females can influence this competition by being choosy over mates, storing sperm in special pouches, or evolving their own convoluted genital passages. Males, meanwhile, have evolved their own tricks, including: guarding behaviour; self-castration; barbed sperm; chemical weapons in their sperm; mating plugs; ‘traumatic insemination’; and having lots of sperm. And spiky penises. That too. The dung beetle, Scarabaeus nigroaeneus, as its name suggests, eats the faeces of large grazing mammals. When it finds a fresh pat, it fashions the dung into a ball and rolls it home, head down and walking backwards. That’s hard work. The balls can be 50 times heavier than the beetle, whose body heats up as it pushes around its weighty cargo. Heating up is something that an insect can’t afford to do in the South African desert, where the ground can reach a scorching 60 degrees Celsius in the middle of the day. But the beetle’s dung-rolling antics provide it with a constantly accessible way of beating the heat. 
By filming dung beetles with a heat-sensitive camera, Jochen Smolka from Lund University has found that their dung balls aren’t just take-away meals—they’re also portable coolers. Sticking to surfaces and walking up walls are so commonplace among insects that they risk becoming boring. But the green dock beetle has a fresh twist on this tired trick: it can stick to surfaces underwater. The secret to its aquatic stride is a set of small bubbles trapped beneath its feet. This insect can plod along underwater by literally walking on air. The green dock beetle (Gastrophysa viridula) is a gorgeous European resident with a metallic green shell, occasionally streaked with rainbow hues. It can walk on flat surfaces thanks to thousands of hairs on the claws of its feet, which fit into the microscopic nooks and crannies of whatever’s underfoot. Most beetles have the same ability, and some boost the adhesive power of their hairs by secreting a sticky oil onto them. These adaptations work well enough in dry conditions, but they ought to fail on wet surfaces. Water molecules should interfere with the hairs’ close contact, and disrupt the adhesive power of the oil. “People believed that beetles have no ability to walk under water,” says Naoe Hosoda from the National Institute for Material Science in Tsukuba, Japan. They were clearly wrong. Together with Stanislav Gorb from the Zoological Institute at the University of Kiel, Germany, she showed that the green dock beetle has no problem walking underwater. The duo captured 29 wild beetles, and allowed them to walk off a stick onto the bottom of a water bath. Once there, they kept on walking. In the 1940s, visitors watching football games at Berkeley’s California Memorial Stadium would often be plagued by beetles. The insects swarmed their clothes and bit them on the necks and hands. The cause: cigarettes. The crowds smoked so heavily that a cloud of smoke hung over the stadium.
And where there’s smoke, there’s fire. And where there’s fire, there are fire-chaser beetles. While most animals flee from fires, fire-chaser beetles (Melanophila) head towards a blaze. They can only lay their eggs in freshly burnt trees, whose defences have been scorched away. Fire is such an essential part of the beetles’ life cycle that they’ll travel over 60 kilometres to find it. They’re not fussy about the source, either. Forest fires will obviously do, but so will industrial plants, kilns, burning oil barrels, vats of hot sugar syrup, and even cigarette-puffing sports fans. The beetles find fire with a pair of pits below their middle pair of legs. Each is only as wide as a few human hairs, and consists of 70 dome-shaped sensors. They look a bit like insect eyes. In the 1960s, scientists showed that the sensors detect the infrared radiation given off by hot objects. Each one is filled with liquid, which expands when it absorbs infrared radiation. This motion stimulates sensory cells and tells the beetle that there’s heat afoot. For fans of a velvety latte or a jolting espresso, meet your greatest enemy: the coffee berry borer beetle. This tiny pest, just a few millimetres long, can ruin entire coffee harvests. It affects more than 20 million farming families, and causes losses to the tune of half a billion US dollars every year, losses that are set to increase as the world warms. But the beetle isn’t acting alone. It has a secret weapon, stolen from an unwitting accomplice. Ricardo Acuña has found that the beetle’s ancestors pilfered a gene from bacteria, most likely the ones that live in its gut. This gene, now on permanent loan, allows the insect to digest the complex carbohydrates found in coffee berries. It may well have been the key to the beetle’s global success. Heavy locks, imposing gates and motion-sensing lights can help to fortify your home and safeguard your belongings against thieves.
On the other hand, they can also advertise the fact that you have stuff worth stealing. Extra security can be a double-edged sword. This is as true for plants defending their tissues as it is for humans defending their homes. Maize plants, like many others, protect themselves with poisons. They pump their roots with highly toxic insecticides called BXDs, which deter hungry mandibles. But these toxins don’t come free. The plant needs energy to act as its own pharmacist, so it distributes the poison to the areas that deserve the greatest fortification – its crown roots. During its lifetime, a frog will snap up thousands of insects with its sticky, extendable tongue. But if it tries to eat an Epomis beetle, it’s more likely to become a meal than to get one. These Middle Eastern beetles include two species – Epomis circumscriptus and Epomis dejeani – that specialise in killing frogs, salamanders, and other amphibians. Their larvae eat nothing else, and they have an almost 100 percent success rate. They lure their prey, encouraging them to approach and strike. When the sticky tongue lashes out, the larva dodges and latches onto its attacker with wicked double-hooked jaws. Hanging on, it eats its prey alive. The adult beetle has a more varied diet but it’s no less adept at hunting amphibians. It hops onto its victim’s back and delivers a surgical bite that paralyses the amphibian, giving the beetle time to eat at its leisure. Some parents give their children a head start in life by lavishing them with money or opportunities. The mother seed beetle (Mimosestes amicus) does so by providing her children with shields to defend them from body-snatchers. A female seed beetle abandons her eggs after laying them. Until they hatch, they are vulnerable to body-snatching parasites, like the wasp Uscana semifumipennis. It specialises on seed beetle eggs and lays its own eggs inside. Once the wasp grub hatches, it devours its host.
The wasp problem is so severe that around 70 percent of the beetles’ eggs can be infested. But the mother seed beetles have a defence, and it is a unique one. Joseph Deas and Molly Hunter from the University of Arizona have found that they can protect an egg from this grisly fate by laying another one on top. Sometimes, the mothers lay entire stacks of two or three eggs. The top ones are always flat and unviable. They never hatch into grubs and they completely cover the ones underneath. The southern beaches of Cumberland Island, off the coast of Georgia, USA, are part of a national park. To protect the area, only residents and staff are allowed to drive their vehicles on the sands. But there are plenty of wheels nonetheless – small, living ones. The beaches are home to the beautiful coastal tiger beetle (Cicindela dorsalis media). Tiger beetles are among the fastest of insect runners, but their larvae are slow and worm-like. If they’re exposed and threatened, running isn’t an option. Instead, they turn themselves into living wheels. They leap into the air, coil their bodies into a loop, and hit the ground spinning. The wind carries them to safety. The fact that a long, worm-like animal can jump and roll is amazing in its own right. The ability is even more remarkable because the tiger beetle is “one of the best-studied insect species in North America” and until a few years ago, no one had ever seen it doing this. Alan Harvey and Sarah Zukoff were the first. They write, “[Sarah] was walking through some unusually loose sandy drifts on Cumberland Island and happened to kick up some C. d. media larvae, which promptly started wheeling.”
DCB DERIVATION OF GENERAL RELATIVITY, Steps 1, 2.

A FEW TERMS.

-SPACE (capitalized): abstract topological concept such as Riemann SPACE, synonymous with "Domain" or "Class", sharply distinct from its unfortunate homonym, the "space" of everyday usage.

-AS (Abstract SPACE): synonym of "Symbolism" (see "STRUCTURES").

-PS (Phenomenon SPACE): synonym of "Imagery" (see "STRUCTURES").

-FIELD: a mathematical term associating a potential force vector with each point of AS. It is observable in PS not directly, but via its manifestations - forces acting on physical bodies, which may be mapped into Field in AS. For simplicity's sake we may say in metalanguage that Field is "observed", always keeping in mind the above implication. At the present level of GR derivation we shall disregard the electromagnetic Field and consider only acceleration and gravity Fields. In order to respect the term "inertial mass" of the Equivalence Principle we shall call the acceleration Field the "inertial Field". The Equivalence Principle makes "inertial" and "gravity" Fields equivalent and indistinguishable by any interior experiment in the involved Referential. Thus, for convenience of derivation, we may talk about inertial and gravity Fields, always keeping in mind that these are two P-Equivalent aspects of the unique Phenomenon "Field".

NOTE: By "P-Equivalence" (Phenomenal Equivalence) we mean the relation among observable aspects of a not directly observable phenomenon.
For instance, continuous wave and discrete photons are P-Equivalent aspects of the phenomenon "light".

-MASS: a mathematical coefficient of AS, not observable in PS, but helping to order in AS such observations as force and acceleration into consistent patterns. In mathematical Field formulas mass is a singularity, i.e. a point at which the formulas don't hold.

-MATTER: a metalanguage concept not existing as such in PS, nor in AS. In physical models singularities are not limited to a point, but extend over neighboring areas where Field equations cannot be solved numerically. Such areas, where for instance Field density exceeds some threshold, are for convenience's sake called in metalanguage "matter".

-INERTIAL REFERENTIALS. Traditional Definition: the set IR of Referentials whose members move at constant speed with respect to each other. We shall see that this definition is not adequate in GR and has to be replaced by the GR Definition: the set IR of Referentials in which no Field is observed.

-NON-INERTIAL REFERENTIALS: the set NIR of Referentials not belonging to IR, thus accelerating with respect to any IR; equivalently, the set NIR of Referentials in which Field is observed.

-SCOPE OF SR: SR holds in IR. For the moment we don't know anything about NIR, and in particular about relations between an IR and a NIR.

-LOCAL INERTIAL REFERENTIAL (LIR): a local elementary Referential within a NIR in which Field disappears, e.g. a free-falling box in the Field of the NIR.

GR DERIVATION VIA MENTAL EXPERIMENT OF "ROTATING DISK".

I: an IR.
OI: Observer at the center of I, solidary with I.
F: the "Rotating Disk", a NIR with centriFugal Field, observed from I as rotating around the center of I, which coincides with the center of F.
OF: Observer at the center of F, solidary with F.
P: a NIR with centriPetal Field.
Observer OF draws two circles of radii and circumferences RF1,SF1 and RF2,SF2 respectively, such that between 0 and RF1 the Field practically does not exist, while at RF2 it is strong enough to have all observable effects, yet below the threshold of the "black" matter area. OF accepts OI's view, considers SF1 and SF2 as rotating with respect to the center of F, and checks the tangential speeds at RF1 and at RF2 as respectively negligible and effectual. Observer OI draws in I two circles RI1,SI1 and RI2,SI2 exactly covering those of F. Then he makes a straight physical unit rod UI1 short enough to cover a segment of SI1 with acceptable approximation. He measures both his radii and circumferences with UI1 and finds SI1/RI1 = SI2/RI2 = 2pi. Then he drops UI1 onto F, where it becomes UF1, solidary with F, and asks OF to make analogous measurements with its help, in a way observable from I. As long as UF1, covering a segment of SF1 or SF2, rotates with it, it remains a small NIR and we cannot say anything about it. So, OF cuts the "string" attaching it to F's center and lets it fly free at the tangential speed. Now UF1 becomes a LIR and all SR laws can be applied to it. At RF1 its speed is too small to cause Lorentz contraction and measurements are identical in both Referentials: SF1=SI1. At RF2, however, its speed is sufficient to cause Lorentz contraction, so that UF2<UF1 and SF2, needing more of the shorter units to be covered, is measured as longer than SI2. RF2, being perpendicular to the tangential speed, stays unaffected (in first approximation, as we shall see), so that SF2>SI2 and SF2/RF2>2pi. Conclusion: fast enough rotation changes F's SPACE to non-Euclidean, namely to hyperbolic or Lobatchevskian.
S = 2piR: Euclidean, "flat" SPACE.
S < 2piR: Riemannian, elliptic SPACE; positive curvature, which may be visualized on a sphere.
S > 2piR: Lobatchevskian or hyperbolic SPACE; negative curvature, impossible to visualize.
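The contraction argument above can be made quantitative in a short sketch (not part of the original text; the disk parameters are hypothetical). A rod riding the rim at tangential speed v = omega*r is contracted by the Lorentz factor 1/gamma, so the co-rotating observer needs gamma times more rod-lengths to cover the circumference and measures S/(2*pi*R) = gamma > 1.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_gamma(v):
    """Lorentz factor for speed v (must satisfy |v| < C)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def measured_circumference_ratio(omega, r):
    """S/(2*pi*R) as measured with co-rotating unit rods at radius r.

    Rods laid along the rim contract by 1/gamma, so the measured
    circumference exceeds the inertial value by the factor gamma."""
    v = omega * r  # tangential speed of the rim at radius r
    return lorentz_gamma(v)

omega = 1000.0  # rad/s, hypothetical rotation rate
for r in (1.0, 100_000.0, 250_000.0):  # metres
    ratio = measured_circumference_ratio(omega, r)
    print(f"r = {r:>9.0f} m: S/(2*pi*R) = {ratio:.4f}")
```

At small radii the ratio is indistinguishable from 1 (the SF1 = SI1 case above); as omega*r approaches c it grows without bound, which is the sense in which the disk's measured geometry deviates from Euclidean.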
Having complied with OI's wishes and with his view, OF proposes to carry out a symmetrical experiment: while F appears rotating to OI, solidary with I, the inverse is just as true: to OF, I appears rotating and F stationary. Thus OF expects symmetrical results, namely UI2 becoming shorter than UI1, SI2>SI1 and SI2/RI2>2pi. However, these expectations prove false. Why? How do we know? How can they "prove"? - one may object - you talk as if you had some empiric facts falsifying these expectations, but you have no facts, you are just carrying out a "mental experiment". It's a tough objection, but a welcome one: it will allow us to throw some light on perhaps the deepest and most complex problem of scientific inquiry: factual plausibility within mental experiments (see "ERN LOGIC"). But, as we said, it's tough as well, so we shall ask the reader for a bit of patience and concentration when considering the following lines. It's true that within SR the observations of one IR from another are symmetrical: to a fellow flying fast close to us the Earth would look like a flat lens, but then to us this fellow would look just as flat. That holds for IR's, but there is no earthly reason to extend it beyond SR and pretend that it holds between NIR's, or between an IR and a NIR. Nor is there any reason to pretend that it does not hold. We could rigorously choose one or the other option only in the light of empiric, experimental justification, but it seems that we cannot have any within our mental experiment. Can we not? If it were true that we could not be empirical in mental experiments, no creativity, no progress would exist in science. Indeed, in mental experiments we can check our assumptions against imaginary observations; imaginary in fact, but encompassing our knowledge of the investigated PS, the recollection and the synthesis of all real experiments carried out there.
And in our particular case, real experience tells us that PS or the Universe is overwhelmingly, if not entirely, non-inertial; that we are not even sure whether any rigorous IR exists at all; that Referentials approximated as inertial are tiny islands within the non-inertial ocean; and that this non-inertial vicinity does not upset their inertial character at all. So, in our case the vicinity of the NIR F does not influence in any way the IR I, which stays inertial, independently of all its surroundings. Consequently, the expectations of OF prove false. He may be solidary with his F and observe I as turning, but it has no bearing on I's inertial character and Euclidean SPACE. This statement has a TREMENDOUS consequence: the traditional distinction between IR and NIR, the criterion of movement, of constant speed against acceleration, is no longer pertinent. From this point of view F seen from I and I seen from F are identical. The only difference between them is Field, which from now on replaces the traditional criterion of movement. Conclusion: the criterion of movement, the traditional distinction between IR and NIR, is no longer pertinent. The PERTINENT CRITERION IS FIELD. A Referential in which Field is observed is a NIR; otherwise it is an IR.

FIELD AND SPACE

Newton's Gravity Field: the gravity force exerted on a unit mass detector at distance r from mass M, F = -GM/r^2, may be expressed as the gradient of the potential Field P(r) = GM/r. In 0B GALILEAN RELATIVITY AND NEWTON'S MODEL we discuss Newton's Paradoxes, in particular the Second: the Gravity Field P(r) = GM/r is clearly determined by distance r, thus by SPACE, but SPACE is presumed absolute and in no way affected by Field, which clearly violates the Reciprocity Principle (Action / Reaction).
Newton's Gravity, generalized for a continuum in the presence of ponderable "matter" in the form of Poisson's equation, reiterates the Paradox:

lap(phi) = 4 pi K ro (Poisson's equation)
phi: scalar gravity field
lap: Laplace operator (divergence of the gradient of phi)
ro: density of ponderable "matter".

Indeed, density is a spatial concept associating some agent (here an element of ponderable "matter") with a SPACE element. This "material" SPACE element "generates" the Field by determining its divergence, without being reciprocally affected in any way. Resolution of Newton's Paradoxes had to wait till Einstein's Relativity. The first, concerning the apparent action at a distance of gravity, contradicting the mechanistic dogma, is discussed in AXIOMS OF SPECIAL RELATIVITY and in 0B GALILEAN RELATIVITY AND NEWTON'S MODEL. Here we shall concentrate on the solution of the Second Paradox, involved in General Relativity. A qualitative solution is immediately given by the Rotating Disk: on the one hand Field increases with r, i.e. SPACE (distance) apparently determines Field; on the other hand the SPACE curvature (indicated by the circumference getting greater than 2 pi r) increases with Field density, i.e. Field apparently determines SPACE. Such a reciprocal apparent "determination" of two events, each appearing in turn as cause and effect of the other, should of course not be confused with causal, one-way determination, but is the very essence of P-Equivalence. SPACE and Field are two P-Equivalent Aspects through which the most general domain of physical reality, the Cosmos, manifests itself. For readers familiar with Tensor Calculus we recall Einstein's Field Equation, which may be considered as GR's equivalent of Poisson's equation:

R(/m,/n) - 0.5 g(/m,/n) R = -mu T(/m,/n) (EFE, Einstein's Field Equation)
R(/m,/n): contracted Riemann tensor
g(/m,/n): metric tensor
R: scalar, R = g(m/,n/) R(/m,/n)
T(/m,/n): energy tensor of the "matter"
mu: constant related to Newton's gravity constant.
The left side of the EFE represents the Field OR SPACE in terms of metric and curvature. This common representation shows the homomorphism of both constructs, which defines them as P-Equivalent aspects of the Cosmos. In spite of being abstract constructs of Mind, Field and SPACE have full phenomenal sense, secured by their observable manifestations: force, speed, etc. for Field; geometry for SPACE. The right side of the EFE represents the "singularity term", replacing Poisson's "matter" with its relativistic equivalent, viz. energy, encompassing that of gravity and
Math 416 - Abstract Algebra
Chapter 5 - Permutations

The following questions are based on sample questions from the Instructors' Solutions Manual.
1. Find the order of the permutation (124)(2345).
2. Give two reasons why the set of odd permutations in Sn is not a subgroup.
3. a. Write (12345) as a product of 2-cycles. b. Write it as a product of 3-cycles.
4. In Sn, let b = (12)(123)(1234)(12345)…(123…n). If n = 99, determine whether b is odd or even.
5. Let n be an integer that is 3 or larger. How many elements of Sn send 1 to n – 2?
6. Let b be a cycle of length at least 3 in Sn. Prove that b^2 is a cycle if and only if the length of b is odd.
7. Prove that every non-identity permutation in Sn can be written as a product of at most n – 1 2-cycles.
8. Let b be some fixed odd permutation in Sn. Prove that every odd permutation in Sn can be written as the product of b and some even permutation.
9. Write the permutation (13)(245) as a 5x5 matrix.
10. Show that A8 contains an element of order 15.
11. What is the order of each of the following permutations?
12. Determine whether the following permutations are even or odd.
13. Prove that if a is an odd permutation, then a^-1 is an odd permutation.
14. What is the inverse of b = (142857)?

The test will be strongly based on these questions.
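For checking answers to questions like #1 above, cycle order and parity can be computed mechanically. A sketch (not from the course materials; here composition applies the rightmost cycle first):

```python
from math import gcd
from functools import reduce

def compose(cycles, n):
    """Compose a product of cycles (rightmost applied first) into a dict on 1..n."""
    perm = {i: i for i in range(1, n + 1)}
    for cycle in reversed(cycles):  # rightmost cycle acts first
        new = {}
        for i in range(1, n + 1):
            img = perm[i]
            if img in cycle:
                img = cycle[(cycle.index(img) + 1) % len(cycle)]
            new[i] = img
        perm = new
    return perm

def cycle_type(perm):
    """Sorted lengths of the disjoint cycles (including fixed points)."""
    seen, lengths = set(), []
    for start in perm:
        if start not in seen:
            length, j = 0, start
            while j not in seen:
                seen.add(j)
                j = perm[j]
                length += 1
            lengths.append(length)
    return sorted(lengths)

def order(perm):
    """Order = lcm of the disjoint cycle lengths."""
    return reduce(lambda a, b: a * b // gcd(a, b), cycle_type(perm))

def is_even(perm):
    """A k-cycle is k-1 transpositions; sum their parities."""
    return sum(l - 1 for l in cycle_type(perm)) % 2 == 0

sigma = compose([(1, 2, 4), (2, 3, 4, 5)], 5)
print(order(sigma))    # 6: (124)(2345) has cycle type {2, 3}, lcm = 6
print(is_even(sigma))  # False: a 3-cycle times a 2-cycle is odd
```

This confirms that the permutation in question 1 has order 6 (the answer is the same under either composition convention, since both give cycle type {2, 3}).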
Jim Thomson is principal oceanographer at the Applied Physics Lab at the University of Washington. He studies ocean surface waves and coastal processes. Wednesday, Oct. 1 44.00 degrees north latitude, 134.84 degrees west longitude The experience of an ocean transit has some similarities to driving across the continental United States. You realize just how big the place is, and you have a lot of time to think. We have covered 1,000 nautical miles and have another 600 nautical miles to go. At 10 knots, that is a lot of time indeed. I’ve been thinking about the science that has come before us. Knowledge is incremental, and everything we do out here is the result of generations of earlier research. I trace my own academic lineage back to a proud moment, though my connection is tenuous. On June 6, 1944, Allied Forces landed at Normandy. The seas were rough for D-Day, but not as rough as they would have been the day before. The landing was timed according to the wave predictions of Harald Sverdrup and Walter Munk. Dr. Sverdrup and Dr. Munk derived the first formulas for waves as a function of wind speed, time (duration) and distance (fetch). The formulas are still in use today. On Saturday we got our first taste of good wave breaking conditions, along with a practical lesson in the fetch formula. We also got pretty wet working the deck. The wave height was 3.1 meters and the winds were 20-25 knots. The breaking was frequent, in part because the fetch is essentially unlimited out here. By comparison, the same winds near the coast or in a bay would not produce as much breaking or as much wave energy — there, the wind simply does not have as much space to work. Dr. Sverdrup and Dr. Munk knew this, but their inclusion of breaking was empirical (in contrast with the direct approach we are taking in the current project). Dr. Sverdrup and Dr. 
Munk were central figures at the Scripps Institution of Oceanography, where my doctoral adviser, Steve Elgar, was trained and took classes from Dr. Munk. (Dr. Munk is still alive; I have met him on a few occasions. Dr. Sverdrup died in 1957.) This project we are engaged in, and indeed my whole career, owe much to their efforts. Aboard this ship right now is one of my own doctoral students, Michael Schwendeman. He may be learning a few things from me, but he is learning far more from the work of our forebears. They sought knowledge, and they put it to use. I can think of no finer path to follow.
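The fetch dependence that Dr. Sverdrup and Dr. Munk first formalized can be sketched numerically. The code below uses the commonly quoted SMB (Sverdrup-Munk-Bretschneider) fetch-limited form, not the formulas from this project; published coefficients vary between sources, so treat the numbers as illustrative only.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def smb_wave_height(wind_speed, fetch):
    """Fetch-limited significant wave height, in metres.

    wind_speed: wind speed in m/s; fetch: distance over which the wind
    has worked on the water, in metres. Empirical SMB-style form with
    the commonly quoted coefficients (illustrative, not authoritative).
    """
    dimensionless_fetch = G * fetch / wind_speed**2
    fully_developed = 0.283 * wind_speed**2 / G  # tanh -> 1 limit
    return fully_developed * math.tanh(0.0125 * dimensionless_fetch**0.42)

wind = 12.0  # m/s, roughly the 20-25 knots reported above
for fetch_km in (10, 100, 1000, 100_000):
    h = smb_wave_height(wind, fetch_km * 1000.0)
    print(f"fetch {fetch_km:>7} km -> significant wave height ~ {h:.1f} m")
```

The tanh saturates: near a coast or in a bay the same wind raises far smaller waves, while in the open Pacific the height approaches the fully developed limit 0.283 U²/g, which is the lesson of that deck-soaking Saturday.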
Last year's record loss of Arctic sea ice is already causing big changes for plants and animals that scientists are just starting to understand, according to newly published research. (3 months and 6 days ago) Massive snow storms in the northern hemisphere are linked to shrinking Arctic sea ice levels, scientists say. (2 months ago) Research suggests that last year's record loss of Arctic sea ice is already causing big changes for plants and animals that scientists are just starting to understand. German scientists on board a research vessel in the High Arctic last summer found that large clumps of algae that would normally cling to the underside of sea ice were falling and sinking to the ocean... (3 months and 8 days ago) The debate surrounding climate change often creates misunderstandings about the facts. More than once recently, I've been asked whether Arctic ice is actually melting. The answer is that Arctic sea ice is most certainly decreasing. (2 days ago) From Andrea Thompson, OurAmazingPlanet Managing Editor: Over the past 30 years, the Arctic has warmed more than any other place on the planet, and that warming and the resulting melt of the region's sea ice presents a number of potential adverse effects, from impacts on weather systems to the decline in the habitats of native species. Now, a team of scientists have found... (29 days ago) Melting sea ice will allow ice-strengthened vessels to sail directly over the pole, and normal ships to take the 'northern sea route'. Ships should be able to sail directly over the north pole by the middle of this century, considerably reducing the costs of trade between Europe and China but posing new economic, strategic and environmental challenges for governments,... (2 months and 21 days ago) A sea ice simulator in Canada is growing frost flowers to see how ice forms in the Arctic Ocean.
(2 months and 25 days ago) The sea ice in the Arctic Ocean is now nearing its winter maximum, but a February storm tore through relatively weak, thin, year-old pack ice. (2 months and 9 days ago) Figuring out the future of the rapidly warming Arctic is crucial for climate scientists, largely because changes in the region’s ice — both on land and at sea — can have major consequences for the rest of the planet. (1 month and 13 days ago) Melting sea ice, exposing huge parts of the ocean to the atmosphere, explains extreme weather both hot and cold. Climate scientists have linked the massive snowstorms and bitter spring weather now being experienced across Britain and large parts of Europe and North America to the dramatic loss of Arctic sea ice. Both the extent and the volume of the sea ice that forms and melts... (2 months ago)
There are four mathematical properties which involve addition: the commutative, associative, additive identity and distributive properties.

Commutative property: When two numbers are added, the sum is the same regardless of the order of the addends. For example, 4 + 2 = 2 + 4.

Associative property: When three or more numbers are added, the sum is the same regardless of the grouping of the addends. For example, (2 + 3) + 4 = 2 + (3 + 4).

Additive identity property: The sum of any number and zero is the original number. For example, 5 + 0 = 5.

Distributive property: The sum of two numbers times a third number is equal to the sum of each addend times the third number. For example, 4 * (6 + 3) = 4*6 + 4*3.
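All four properties can be checked directly on sample numbers; a quick sketch (the particular numbers are arbitrary):

```python
# Check the four addition properties on sample numbers.
a, b, c = 4, 6, 3

assert a + b == b + a                  # commutative: order doesn't matter
assert (a + b) + c == a + (b + c)      # associative: grouping doesn't matter
assert a + 0 == a                      # additive identity: adding zero
assert a * (b + c) == a * b + a * c    # distributive: 4*(6+3) = 4*6 + 4*3

print("all four properties hold for", (a, b, c))
```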
Science Fair Project Encyclopedia Planck's constant, denoted h, is a physical constant that is used to describe the sizes of quanta. It plays a central role in the theory of quantum mechanics, and is named after Max Planck, one of the founders of quantum theory. It has a value of approximately h ≈ 6.626 × 10^-34 J·s. A closely-related quantity is the reduced Planck constant (sometimes called Dirac's constant), ℏ = h / (2π), where π is the constant pi. This constant is pronounced "h-bar". The figures cited here are the 2002 CODATA-recommended values for the constants and their uncertainties. The 2002 CODATA results were made available in December 2003 and represent the best-known, internationally-accepted values for these constants, based on all data available through 31 December 2002. New CODATA figures are scheduled to be published approximately every four years. Planck's constant is used to describe quantization, a phenomenon occurring in microscopic particles such as electrons and photons in which certain physical properties occur in fixed amounts rather than assuming a continuous range of possible values. For instance, the energy E carried by a beam of light with constant frequency ν can only take on the values E = nhν, where n is a non-negative integer. It is sometimes more convenient to use the angular frequency ω = 2πν, which gives E = nℏω. Many such "quantization conditions" exist. A particularly interesting condition governs the quantization of angular momentum. Let J be the total angular momentum of a system with rotational invariance, and Jz the angular momentum measured along any given direction. These quantities can only take on the values J² = j(j + 1)ℏ² and Jz = mℏ, with j = 0, 1/2, 1, 3/2, … and m = -j, -j + 1, …, j. Thus, ℏ may be said to be the "quantum of angular momentum". Planck's constant also occurs in statements of Heisenberg's uncertainty principle.
The uncertainty (more precisely: the standard deviation) in any position measurement, Δx, and the uncertainty in a momentum measurement along the same direction, Δp, obey Δx · Δp ≥ ℏ/2. There are a number of other such pairs of physically measurable values which obey a similar rule. On some browsers, the Unicode symbol ℎ is rendered as Planck's constant, and the symbol ℏ is rendered as Dirac's constant.
- Electromagnetic radiation
- Natural units
- Schrödinger equation
- Wave-particle duality
- Quantum Hall effect
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
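As a concrete illustration of E = hν, here is a sketch using the 2019 exact SI value of h (which supersedes the 2002 CODATA figure cited above):

```python
import math

h = 6.62607015e-34        # Planck's constant, J*s (exact in the 2019 SI)
hbar = h / (2 * math.pi)  # reduced Planck constant, "h-bar"

nu = 5.4e14               # frequency of green light, Hz
omega = 2 * math.pi * nu  # angular frequency, rad/s

E = h * nu                # energy of a single photon
assert abs(E - hbar * omega) < 1e-40  # hbar*omega gives the same energy

eV = 1.602176634e-19      # joules per electron-volt (exact)
print(f"E = {E:.3e} J = {E / eV:.2f} eV")
```

This gives about 3.6 × 10^-19 J, roughly 2.2 eV — the scale of atomic transitions, which is why visible light can drive chemistry one photon at a time.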
Richard Newcombe looks at how to interact with objects in the 3D world, using a card game as a base. This article only covers the basics, but by the end of the article you will be able to interact with objects in a 3D world using the mouse. Articles Written by Richard Newcombe Richard Newcombe takes us into the basics of building and moving inside a 3D world of our own making, using Windows Presentation Foundation (WPF) and VB.NET. Richard Newcombe takes a look at the basic building blocks of a 3D world. While 2D animation was done by hard coding all of the graphics, 3D animation is much easier, using the WPF (Windows Presentation Foundation) graphical subsystem to handle the actual image rendering. Richard Newcombe takes a look at Bitmap Animations in VB.NET. VB6 made extensive use of APIs to load and initialize large quantities of smaller images. .NET has the GDI+ dynamic library, with a large selection of graphic classes and functions, which means that we no longer require APIs to manipulate images. Quite often we need a VB control that has requirements that don't quite exist within the current control set, such as requiring all data-entry controls to be on a single page with scroll bars. Read on to learn more... Discover how to interact with the objects on a playing field. The article uses a game as a base for the code.
Database Interaction with PL/SQL, Named Notations, Storing Procedures and Functions: PACKAGE and PACKAGE BODY - Oracle. This is part 16 of a series of articles focusing on database interactions with Oracle PL/SQL. In my previous article, we worked with PL/SQL TABLE types passed between sub-programs. In this article, we will look into Named Notation, default values of parameters, stored procedures, stored functions and finally introduce the concepts of package and package body. A package is a single unit containing several stored sub-programs. The package itself is stored inside the database (along with all of its sub-programs). Any package in Oracle has two parts, namely the package specification and the package body. The package specification contains the definition or specification of all the elements in the package that may be referenced outside of the package. These are called the public elements of the package. The package specification contains all the code that is needed for a developer to understand how to call the objects in the package. A developer should never have to examine the code behind the specification (which is the body) in order to understand how to use and benefit from the package. The package specification does not contain any executable statements or exception handlers. A specification only specifies, or declares, those objects in the package that are public -- that is, visible outside of the package and callable by other programs. The body of the package contains all the code behind the package specification: the implementation of the modules, cursors, and other objects. The body may also contain elements that do not appear in the specification. These are called private elements of the package. A private element cannot be referenced outside of the package, since it does not appear in the specification. The body of the package resembles a standalone module's declaration section.
It contains both declarations of variables and the definitions of all package modules. The package body may also contain an execution section, which is called the initialization section because it is only run once, to initialize the package.
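A minimal sketch of these ideas (all names here are hypothetical, not from the article): a specification declaring one public procedure, and a body adding a private helper function plus an initialization section.

```sql
-- Specification: only the public interface is declared here.
CREATE OR REPLACE PACKAGE pay_pkg AS
  PROCEDURE give_raise(p_emp_id IN NUMBER, p_pct IN NUMBER);
END pay_pkg;
/

-- Body: implementations, plus a private element not in the spec.
CREATE OR REPLACE PACKAGE BODY pay_pkg AS
  g_calls PLS_INTEGER;  -- package-level variable, private

  -- Private function: not declared in the spec, so not callable outside.
  FUNCTION capped_pct(p_pct IN NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN LEAST(p_pct, 10);
  END capped_pct;

  PROCEDURE give_raise(p_emp_id IN NUMBER, p_pct IN NUMBER) IS
  BEGIN
    g_calls := g_calls + 1;
    UPDATE employees
       SET salary = salary * (1 + capped_pct(p_pct) / 100)
     WHERE employee_id = p_emp_id;
  END give_raise;

BEGIN
  -- Initialization section: runs once per session, on first reference.
  g_calls := 0;
END pay_pkg;
/
```

Outside code can call pay_pkg.give_raise, but any reference to pay_pkg.capped_pct fails to compile, since it appears only in the body.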
Construct an Equilateral Triangle in the Hyperbolic Disk

Using the tools in the toolbar below, construct an equilateral triangle with AB as one of its sides. You should have already completed the midpoint problem, so you are familiar with how to use the tools. Try moving the points A and B around to get a better feeling for how the triangle can vary. After you have successfully made the construction, enter the password you get at the bottom of the page so I can give you credit.
Pre-Thunder Electrical "Zap"

This question refers to David R. Cook's explanation of thunder. (Please refer to question #170 of the weather category archives, titled "Making Thunder Clouds": http://www.newton.dep.anl.gov/askasci/wea00/wea00170.htm ).

Several times, I have been so close to a lightning strike that I heard an electrical "zap" sound just prior to hearing the thunder. I assumed this was the sound of the electrical discharge, and the delay in hearing the thunder was that it took a brief time for the air column surrounding the lightning path to be heated, expand, then collapse. If thunder is created by the splitting of air molecules and not by the air getting heated, then that process, and the sound it makes, should happen almost instantaneously. Does this mean that the zap sound is caused by something that occurs just prior to the main discharge?

First, the phenomenon of "lightning", which causes "thunder", IS NOT completely understood. It is a tough "apparatus" on which to do controlled experiments. What is known is that lightning is due to an electrical discharge between a cloud and the ground, or between two clouds. It is several billion times the strength of the "zap" you get when you touch a door knob after sliding across a rug on a dry day, or the crackle you get sometimes when you stroke a cat.

Second, there appear to be several atmospheric mechanisms that result in this charging of the clouds.

Third, there is a "leader" pre-discharge. These partial breakdowns have been observed in high-speed photos of lightning. Just why and how these occur is not known, but they appear to initiate a discharge path for the primary lightning discharge.

Fourth, in some cases there is a "return" stroke that runs in the opposite direction to the initial discharge. That may be what you heard, but I cannot be sure.

The physics of how "thunder" is produced is also not fully understood. The lightning produces a shock wave due to the rapid heating, ionization and expansion of air in the discharge.
This is called a plasma. The implosion (collapse) of the plasma column no doubt produces another shock wave as the ionized air is re-compressed.

Fifth, what we "hear" is only a part of the acoustics of thunder, just as the light we "see" is only a portion of the electromagnetic spectrum. There are no doubt sub-sonic and ultra-sonic acoustic waves produced by lightning. I cannot find any studies on these inaudible sound frequencies. If you do a "Google" search on "physics of lightning" or a similar search term you can find many sites on the topic. In addition, Chapter 9, Volume II of Richard Feynman's Lectures on Physics is devoted to lightning, and is a concise place to start researching the topic.

The sound produced by the explosion of the molecules travels at the speed of sound, which, if you are at least 50 feet away, is faster than the expansion rate of the air column. So, thunder reaches you faster than the expanding air column. The zap that you heard before the lightning flash is most likely the result of the charging and subsequent ionization of the air column as the upward leader forms from the ground. The upward leader connects with the stepped leader coming down from the cloud. The leaders have little flow of current in them, just enough to ionize the air molecules (stripping off electrons) and produce a path of reduced resistance for the lightning current to flow in. Only at the time of the connection of the leaders is there significant current flow, which explodes the molecules, producing the lightning flash and thunder.

The leaders are only very slightly luminescent and normally can only be detected with sensitive high-speed film; you normally cannot see them with the naked eye. However, that stripping of electrons may produce enough sound that you hear a zap before the lightning strike. I could sometimes detect this sound when we were performing our outdoor simulated lightning tests with a huge spark generator in Florida. Even though I was within 50 feet of the spark in my Faraday cage, it was often hard to distinguish the zap from the spark discharge itself, perhaps because the spark was only 8 feet long. An upward leader is usually about 400 feet long and would produce more sound.

David R. Cook
Atmospheric Physics and Chemistry Section
Environmental Research Division
Argonne National Laboratory

Update: June 2012
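The speed-of-sound point above implies the familiar rule for ranging a strike: multiply the flash-to-thunder delay by the speed of sound. A minimal sketch (the 343 m/s value assumes air near 20 °C; the function name is mine):

```python
def strike_distance_m(delay_s: float, speed_of_sound_m_s: float = 343.0) -> float:
    """Approximate distance to a lightning strike from the flash-to-thunder delay.

    Light from the flash arrives essentially instantly, so the delay is
    dominated by the travel time of the sound.
    """
    return delay_s * speed_of_sound_m_s

# A 3-second delay puts the strike roughly 1 km away.
distance = strike_distance_m(3.0)
```

This is why counting roughly three seconds per kilometre (five per mile) between flash and thunder works as a field estimate.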
Displaying 1-3 of 3 key documents

Source: Global Canopy Programme | December 2008

This policy brief, published by the Global Canopy Programme, proposes a system called Proactive Investment in Natural Capital (PINC) to reward countries for conserving large areas of tropical forest that act as 'global utilities' providing ecosystem services essential for preserving global food and energy security. The authors suggest that the system could complement current proposals for reducing emissions from deforestation and forest degradation (REDD). They argue that REDD could encourage countries with historically low deforestation rates to destroy their forests. They point out that if REDD successfully brings deforestation rates down — to zero eventually — then in the long term, countries will not be able to receive payments for reducing deforestation. The alternative, PINC, would build on existing systems that pay for ecosystem services, such as eco-certification, although scaling up funding for standing forests is still a challenge, say the authors. To be effective, PINC requires capacity building and improved governance across the world. Land tenure reform will be needed in many countries, as will local participation in decision making and training in forest management. But, if appropriately designed, PINC could provide local communities with co-benefits such as poverty alleviation and biodiversity conservation.

Source: Intergovernmental Panel on Climate Change (IPCC)

The Third Assessment Report of the IPCC's Working Group 1 builds on past assessments and incorporates new results from the past five years of climate change research. It describes the current state of understanding of the climate system, and provides estimates of its projected future evolution and their uncertainties.
Many hundreds of scientists from around the world participated in the preparation and review of the report, which states that "there is new and stronger evidence that most of the warming observed over the last 50 years is attributable to human activities". Source: Royal Institute of International Affairs | February 2002 The Third Assessment Report of the Intergovernmental Panel on Climate Change, published in 2001, is the most comprehensive and authoritative source of information on climate change. Its conclusions confirm and strengthen those of the previous reports: human-induced climate change is a reality and most of the effects will be negative, but a range of mitigation opportunities is available to address the problem. The Report finds that most of the earth’s warming over the past 50 years can be attributed to human activities, and that its effects are already being felt. Global temperature is expected to increase by 1.4 to 5.8ºC over the next century, a significant increase on the projections of the 1995 Second Assessment Report. This briefing paper summarises the findings of the Third Assessment Report and the debates underpinning them, and discusses the likely outcomes of the Report.
What mom doesn't get stressed out every now and then? Stress is usually considered a bad thing, but new research suggests that mom's stress can actually give her future offspring certain physical advantages, at least in the case of one bird species. A few years ago, experiments on barn swallows demonstrated that when ovulating females were exposed to models of predators, their eggs had higher levels of the stress hormone corticosterone, which resulted in reduced hatchability and smaller fledglings. A study published this month in linkurl:Functional Ecology;http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2435.2011.01834.x/abstract shows that nesting great tits exposed to stuffed models and audio calls of their predators also have smaller offspring, but the chicks' wings actually grow faster and longer, ending up about 1.8 millimetres longer than those of the offspring of less stressed mothers -- a difference that might make the birds better at avoiding predators in flight, linkurl:according to Nature.;http://www.nature.com/news/2011/110325/full/news.2011.187.html Monkeys bathe in urine to attract mates Scientists may have finally found an explanation for the strange tendency of tufted capuchin monkeys to rub their own urine all over themselves -- it's all about the ladies. New research, published last month in the linkurl:American Journal of Primatology,;http://onlinelibrary.wiley.com/doi/10.1002/ajp.20931/abstract shows that female brains are excited by the smell of sexually mature males' urine, suggesting males perform "urine washes" to signal their availability to females. Tufted capuchins aren't the only monkeys to partake in such bizarre behavior -- mantled howler monkeys, squirrel monkeys and other capuchin species have all been observed urinating into their hands and rubbing it into their fur. Scientists previously guessed that the behavior was meant to help regulate body temperature or to mark the monkeys for individual identification by their comrades.
But the new study found that "female capuchin monkey brains react differently to the urine of adult males than to urine of juvenile males," study author Kimberley Phillips, a primatologist at Trinity University in San Antonio, Texas, linkurl:told the BBC.;http://news.bbc.co.uk/earth/hi/earth_news/newsid_9404000/9404757.stm linkurl:(Hat tip to Huffington Post);http://www.huffingtonpost.com/2011/03/01/why-monkeys-wash-in-urine_n_829388.html Elephants, smart and cooperative Elephant herds would do right to follow their elders. According to a recent study published in linkurl:Proceedings of the Royal Society of London B,;http://rspb.royalsocietypublishing.org/content/early/2011/03/10/rspb.2011.0168 older elephant matriarchs are better at assessing threats than younger animals. Specifically, female elephants over the age of 60 appeared to better determine the level of danger posed by recordings of lions, reacting more defensively to the roars of male cats, which can be more deadly than lionesses, linkurl:according to Wired Science.;http://www.wired.com/wiredscience/2011/03/elephant-memory-leadership/ Although they rarely hunt, just one male lion can bring down an elephant calf. Elephants were also recently shown to be impressively cooperative, helping each other obtain food by pulling on two ends of the same rope. The study, published earlier this month in the linkurl:Proceedings of the National Academy of Sciences,;http://www.pnas.org/content/early/2011/03/02/1101765108.abstract demonstrates that elephants also timed their pulls, waiting for their partners to get a hold of the rope before tugging together. Finding the shark spa Thresher sharks in waters off the Philippines keep fresh by routinely visiting a seamount home to cleaner fish that rid them of parasites and dead skin.
"They pose, lowering their tails to make themselves more attractive to the cleaners," Simon Oliver from Bangor University in the UK, who filmed the sharks and published his findings in linkurl:PLoS ONE,;http://www.plosone.org/article/info:doi/10.1371/journal.pone.0014755 told the linkurl:BBC.;http://news.bbc.co.uk/earth/hi/earth_news/newsid_9427000/9427886.stm "And they systematically circle for about 45 minutes at speeds lower than one metre per second." To find their way to the cleaning site, the sharks may use a mental map. A new study published in the linkurl:Journal of Animal Ecology;http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2656.2011.01815.x/full that analyzed tracking data from three species of sharks suggests that tiger and thresher sharks, but not blacktip reef sharks, are capable of navigating directly to specific locations. Here's one for the record books -- a sexual act, performed by ancient mites, preserved in a drop of tree resin for 40 million years. The discovery, reported in the March issue of the linkurl:Biological Journal of the Linnean Society,;http://onlinelibrary.wiley.com/doi/10.1111/j.1095-8312.2010.01595.x/abstract is more than just a peek into the bedroom of an extinct insect species. 
It's also a glimpse at an ancient example of sex role reversal, where the female, who is clinging to her mate with her back legs, appears to be in control of the action, linkurl:according to Discovery News.;http://news.discovery.com/animals/40-million-year-old-sex-act-captured-in-amber.html

Related stories:
- linkurl:Manipulative mosquito semen;http://www.the-scientist.com/news/display/58052/ [15th March 2011]
- linkurl:Behavior brief;http://www.the-scientist.com/news/display/57966/ [28th January 2011]
- linkurl:Behavior brief;http://www.the-scientist.com/news/display/57909/ [6th January 2011]

Related F1000 Evaluations:
- linkurl:Scales of orientation, directed walks and movement path structure in sharks;http://f1000.com/9002957?key=t7qwfd0qg53tb6c Y.P. Papastamatiou et al., J Anim Ecol, ePub, 2011. Evaluated by Kent Berridge, University of Michigan.
- linkurl:Leadership in elephants: the adaptive value of age;http://f1000.com/9180957?key=wc30wyp1w6j6t5d K. McComb et al., Proc Biol Sci, ePub, 2011. Evaluated by Daniel Promislow, University of Georgia.
Scientists have discovered another earth. Well, sort of. Earlier this month, NASA’s Kepler space telescope team announced the discovery of Kepler-22b, located in what is called a “habitable zone,” meaning an environment that’s not too hot or too cold for the possibility of life. And just last week, the team unveiled two other earth-sized planets, Kepler-20e and Kepler-20f, although they are not in the habitable zone.
The Weather Modification Roller Coaster In 1960, Simpson was a full Professor at the University of California Los Angeles, where she designed and taught graduate classes, and wrote two books. She also “computerized” her cloud model and began looking for ways to test it. Simpson realized that cloud seeding experiments would be a good way to test how well the model she had developed described how clouds really behaved. Since the late 1940s, scientists had known that silver iodide introduced into a cloud with super-cooled water droplets led to the formation of ice crystals. But by the 1960s, no one had figured out exactly how clouds behaved after being seeded. Simpson’s model predicted that the large amount of latent heat released as the cloud’s water droplets changed to ice would, under certain conditions, cause a cloud to grow much taller and to more than double in size compared to an unseeded cloud. Simpson got her chance to test this hypothesis in 1963 by “bootlegging” aircraft time during Project Stormfury, a weather modification experiment started in 1961 by Simpson’s future husband, Bob Simpson. She flew up above clouds and ejected flares from a chute under the airplane. The flares ignited and created silver iodide smoke, which dispersed silver iodide fairly evenly over clouds. The clouds behaved just as her cloud model predicted. “We wrote an article and a furor broke loose,” said Simpson. “I was totally unaware of the level of emotion and hostility that was directed against anything that had to do with cloud seeding.” Many in and outside of the scientific community were skeptical of weather modification. The main fear was that meteorology would get a bad name by the promises of charlatans. In 1964, Simpson left UCLA to take a position with the National Weather Bureau (which later became the National Oceanic and Atmospheric Administration). In 1965 she married Bob Simpson, who headed the Weather Bureau’s Severe Storms Program. 
The Simpsons moved to Miami, where Bob became the Director of the National Hurricane Center and Simpson headed the Experimental Meteorology Laboratory. She somewhat reluctantly agreed to take charge of Project Stormfury. Stormfury’s primary objective was to test a hypothesis that a hurricane’s maximum winds could be weakened by about 10 percent if the most active area of the eyewall could be massively seeded. The project was based on a reasonable hypothesis that required careful testing, but it became bitterly controversial, with insults in speech and print that were often assaults on the personal integrity of the scientists involved in Stormfury. Permission for seeding was restricted to a very small area in the western Atlantic. Between 1964 and 1968, Stormfury deployed its aircraft, only to find that the sole suitable hurricane moved out of the permitted seeding area. Simpson decided she had ridden long enough on the weather modification roller coaster, and she passed the Stormfury Directorship on to another good hurricane expert. Stormfury eventually fizzled in the 1970s after the low-flying aircraft in use at that time failed to find the kind of super-cooled clouds in which seeding was known to be effective. Simpson decided to concentrate on cloud seeding experiments with an emphasis on the scientific processes involved, rather than on how they might be applied to weather modification. Simpson developed a double-blind experiment that convinced the science community that her cloud seeding hypothesis was correct. Simpson had tried to leave weather modification behind, but a young co-worker, Bill Woodley, found that the seeded clouds in Simpson’s experiment rained about twice as much as the controls. NOAA management got on the weather modification wagon again. They wanted the Experimental Meteorology Laboratory to show that the rainfall over a large area in south Florida could be increased by seeding.
This was the beginning of a project called FACE (the Florida Area Cumulus Experiment). Early results looked favorable for area-wide rainfall increase. But scientists involved estimated that to detect just a 10-15 per cent rainfall increase they would need at least 600 cloud seeding test cases. NOAA management agreed to fund no more than 100. With so few cases, sharp scientists would mistrust the results, even if FACE seemed successful. Confronted with the agency’s militaristic, “no-argument” management style of the time, Simpson realized that she had to leave. Ultimately, FACE failed to demonstrate that cloud seeding could increase rainfall, effectively killing weather modification research in the United States. In 1974, Simpson left NOAA and became an Endowed Chair Professor in the Environmental Science Department at the University of Virginia, but even after her already substantial accomplishments, the mostly-male faculty did not regard her as a real professor simply because she was a woman. In fact, it became clear over the years that women faculty were held in such low regard there that it was nearly impossible to get a secretary! Luckily Simpson had been in touch with Dave Atlas, who at the time was putting together a new Laboratory for Atmospheres at NASA’s Goddard Space Flight Center. “He’d been going at it for about a year or so, and I asked him if he had any jobs left. He said, ’The Severe Storms Branch needs leadership, please come tomorrow morning.’” A one-year leave from the University extended into two, and in the end Simpson decided that Goddard was a far more favorable environment in which to work. Simpson counts the choice as the best career decision she ever made.
This image represents an expanding Universe. The arrows show that the movement is outward from the center. The spirals represent galaxies.

Hubble Flow: The Expanding Universe

In the 1920's the famous American astronomer Edwin Hubble made an amazing discovery. He found that, no matter which direction he looked into space, distant galaxies appeared to be moving away from us. The farther away a galaxy is from our galaxy, the faster it moves away from us. Hubble was observing the expansion of the Universe.

What does it mean that the Universe is expanding? Imagine baking a loaf of raisin bread. As the bread rises it also expands. All of the raisins move farther apart from one another. Every single raisin would see all of the others moving away from it. All of the galaxies in the universe are like the raisins in the bread.
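The raisin-bread picture corresponds to Hubble's law, v = H0 × d: a galaxy's recession velocity is proportional to its distance. A minimal sketch, assuming the commonly quoted value of H0 of about 70 km/s per megaparsec (the function name is mine):

```python
H0 = 70.0  # Hubble constant, in km/s per megaparsec (a commonly quoted value)

def recession_velocity_km_s(distance_mpc: float) -> float:
    """Hubble's law: recession velocity grows linearly with distance."""
    return H0 * distance_mpc

# A galaxy 100 Mpc away recedes at about 7,000 km/s;
# one twice as far recedes twice as fast, just like the raisins.
v = recession_velocity_km_s(100.0)
```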
Sea level rise is a topic that we frequently focus on because, of all the gross environmental alterations which may result from anthropogenic greenhouse gas emissions, it is perhaps the only one which could lead to conditions never experienced by modern societies. A swift (or accelerating) sea level rise sustained for multiple decades and/or centuries would pose challenges for many coastal locations, including major cities around the world—challenges that would have to be met in some manner to avoid inundation of valuable assets. However, as we often point out, observational evidence on the rate of sea level rise is reassuring, because the current rate of sea level rise from global warming lies far beneath the rates associated with catastrophe. While some alarmists project sea level rise of between 1 and 6 meters (3 to 20 feet) by the end of this century, currently sea level is only inching up at a rate of about 20 to 30 centimeters per hundred years (or about 7 to 11 inches of additional rise by the year 2100)—a rate some 3-4 times below the low end of the alarmist spectrum, and a whopping 20 to 30 times beneath the high end. September 10, 2012 June 29, 2012 Last week, the National Academies of Sciences’ National Research Council (NRC) released a report Sea-Level Rise for the Coasts of California, Oregon, and Washington: Past, Present, and Future. The apparent intent of the report was to raise global warming alarm by projecting rapidly rising seas—some 2-3 times higher than recent IPCC estimates—along the California coast and elsewhere. Based on the news coverage, the NRC was successful. Successfully handling the media does not equate to successfully handling the science, if scientific success is judged by scientific accuracy. The NRC was quite adept at sidestepping the inconvenient scientific literature which would have tempered their conclusions and which would have replaced alarm with prudent vigilance.
Sure, global sea level will continue to rise, but the rate of future rise will likely be closer to the rise observed during the 20th century, about 8-12 inches—a rate to which coastal residents have easily adapted—than to the NRC’s upper bound which approaches some 4-5 feet by the year 2100. June 24, 2011 A recent study has attempted to use a long-term (~2,100 years), local (coastal North Carolina), determination of sea level derived from the build-up of salt marsh sediments to better characterize the behavior of global sea level (and by proxy, global temperatures) over the same multi-millennial time period. Based upon the results of this investigation, the research team led by Andrew Kemp from the University of Pennsylvania, concludes that there were four rather distinct periods of sea level rise over the past 2,100 years. Here is how they describe the first three: Sea level was stable from at least BC 100 until AD 950. Sea level then increased for 400 y at a rate of 0.6 mm/y, followed by a further period of stable, or slightly falling, sea level that persisted until the late 19th century. And then, and here’s the kicker (and why this paper received all the press coverage that it did, with headlines such as “Fastest Sea-Level Rise in 2,000 Years Linked to Increasing Global Temperatures“): Since then, sea level has risen at an average rate of 2.1 mm/y, representing the steepest century-scale increase of the past two millennia. This rate was initiated between AD 1865 and 1892. Using an extended semiempirical modeling approach, we show that these sea-level changes are consistent with global temperature for at least the past millennium. But can a paleo-record of sea level rise from basically one locality (e.g., coastal North Carolina) provide a good indication of the long-term history of global sea level rise? Obviously, the authors think so, but others are not so sure. 
May 2, 2011 We dedicated our last World Climate Report post to the findings from our just-published (and quite popular) paper in which we attempted a reconstruction of the warm season ice melt extent that has taken place across Greenland each year since 1784. Our goal was to develop a larger context in which to place the direct observations of ice melt across Greenland (available only since 1979) and to better be able to judge the reports of record high ice melt in recent years. Our general conclusions were:

• several recent years (in particular 2007 and, from preliminary observations, 2010) likely had a historically high degree of surface ice melt across the Greenland ice sheet,

• on a decadal scale, there were several 10-yr periods during the 1930s through the early 1960s during which the average annual ice melt extent across Greenland was likely greater than the most recent 10 years of available data in our study (2000-2009),

• the ice melt across Greenland was particularly low at the start of the era of satellite observations (which began in 1979), such that a sizeable portion of the increasing ice melt observed by satellite-borne instruments since then could potentially be part of the natural variability about the mean state,

• for the next several decades at least, Greenland’s contribution to global sea level rise was likely to be modest.

But not everyone was enamored with our findings. Last week, the most popular article from among those recently published in the American Geophysical Union’s (AGU) Journal of Geophysical Research-Atmospheres was one which presents a 225-yr reconstruction of the extent of ice melt across Greenland. We are happy to say that your obedient servants here at World Climate Report were part of the research team of this oft-downloaded paper. The full citation (for those who may want to check it out) is: Frauenfeld, O.W., P.C. Knappenberger, and P.J. Michaels, 2011. A reconstruction of annual Greenland ice melt extent, 1785-2009.
Journal of Geophysical Research, 116, D08104, doi: 10.1029/2010JD014918. April 7, 2011 Back in the summer of 2009, we ran a piece titled “Sea Level Rise: An Update Shows a Slowdown” in which we showed that the much ballyhooed “faster rate of sea level rise during the satellite era” was actually slowing down. We suggested that this observation would help the IPCC to adjudicate an issue that it raised in its 2007 Fourth Assessment Report: “Whether the faster rate [of sea level rise] for 1993 to 2003 reflects decadal variability or an increase in the longer term trend is unclear.” February 16, 2011 One word that comes up over and over in the global warming issue is “uncertainty”. The alarmists tend to minimize the discussion of uncertainties while the so-called skeptics seem to harp on how uncertain we are on so many fronts. Two articles have appeared in the literature during the past year highlighting amazing uncertainties dealing with ice loss from glaciers and water mass in the world’s oceans. February 3, 2011 If you haven’t heard the news, global warming is causing sea level to rise and causing storms to become more severe, and the net result is shoreline erosion throughout the world. This pillar of the apocalypse is particularly easy to sell—gather up some pictures of shoreline erosion, throw in some images of turtle nest destruction, and you are on your way to winning a Nobel Prize for putting all the pieces together. A recent issue of Global and Planetary Change contains an article on this subject written by two scientists with the School of Earth and Environmental Sciences at James Cook University in Townsville, Queensland; funding was provided by the Environmental Protection Agency-Queensland. 
Dawson and Smithers focused on Raine Island located on the northern portion of the Great Barrier Reef, and if you don’t know, Raine Island is “a globally significant turtle rookery.” So it’s all here—an island on the Great Barrier Reef, turtles, sea level rise, relatively frequent tropical cyclones, sand beaches easily eroded—we are sure the global warming alarmists cannot wait to see how bad things have become at this sacred location. But, alas, the results from Raine Island are about to rain on their parade of pity. August 9, 2010 We are sure you’ve heard that sea level is rising? We conducted a web search on “Global Warming and Sea Level” and nearly 3.5 million websites are immediately located. And before you conduct the search yourself, you already know what you will find. The earth is getting warmer due to the buildup of greenhouse gases, the warmer sea water expands causing sea level to rise, and most of all, you will read all about the ice melting throughout the world pouring fresh water into ocean basins causing sea level to rise far more. Alarmists insist that the worst is just around the corner, and the sea level rise will accelerate or even quickly jump to a new level given some catastrophic collapse of large sheets of ice near the fringes of the polar areas. Coastlines will be inundated, the human misery will be on a Biblical scale, ecosystems will be destroyed … this goes on for millions of websites! But things aren’t really so simple. February 9, 2010 The Technical Summary of the most recent IPCC reports states that “Over the 1961 to 2003 period, the average rate of global mean sea level rise is estimated from tide gauge data to be 1.8 ± 0.5 mm yr–1.” “The average thermal expansion contribution to sea level rise for this period was 0.42 ± 0.12 mm yr–1, with significant decadal variations, while the contribution from glaciers, ice caps and ice sheets is estimated to have been 0.7 ± 0.5 mm yr–1. 
The sum of these estimated climate-related contributions for about the past four decades thus amounts to 1.1 ± 0.5 mm yr–1, which is less than the best estimate from the tide gauge observations. Therefore, the sea level budget for 1961 to 2003 has not been closed satisfactorily.” That is indeed very interesting – the average rate of sea level rise is around 1.8 mm per year, and the IPCC can account for only about 60% of it. This uncertainty is compounded by the IPCC’s statements that “The global average rate of sea level rise measured by TOPEX/Poseidon satellite altimetry during 1993 to 2003 is 3.1 ± 0.7 mm yr–1. This observed rate for the recent period is close to the estimated total of 2.8 ± 0.7 mm yr–1 for the climate-related contributions due to thermal expansion (1.6 ± 0.5 mm yr–1) and changes in land ice (1.2 ± 0.4 mm yr–1). Hence, the understanding of the budget has improved significantly for this recent period, with the climate contributions constituting the main factors in the sea level budget (which is closed to within known errors). Whether the faster rate for 1993 to 2003 compared to 1961 to 2003 reflects decadal variability or an increase in the longer-term trend is unclear”. As you might guess, there is much to be done to improve our understanding of sea level rise.
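The arithmetic behind the "about 60%" figure can be checked directly from the numbers quoted above. This is a simple sanity check of the quoted IPCC values, nothing more:

```python
# Figures quoted above, all in mm/yr (IPCC AR4 Technical Summary values).
observed = 1.8            # 1961-2003 rate from tide gauges
thermal = 0.42            # thermal expansion contribution
land_ice = 0.7            # glaciers, ice caps and ice sheets

climate_sum = thermal + land_ice
print(round(climate_sum, 1))                     # 1.1 mm/yr, as quoted
print(round(climate_sum / observed * 100))       # ~62% of the observed rate

# 1993-2003 (satellite altimetry): the budget closes within stated errors.
observed_recent = 3.1
climate_sum_recent = 1.6 + 1.2                   # thermal + land ice = 2.8
print(round(observed_recent - climate_sum_recent, 1))  # 0.3 mm/yr residual
```

So the known contributions explain roughly 62% of the observed 1961-2003 rate, which the report rounds to "about 60%".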
06 Oct 2003 This article provides some basic code to set up forms authentication within your web application. Tested: ASP.NET 1.0. Have you ever wondered how sites authenticate users using forms? This article will explain how, using ASP.NET and SQL Server 2000. First, you need to prepare your computer to run the authentication. Be sure you already have IIS installed and running on a Windows 2000 (or better) computer or server. You also need to install the .NET Framework, which is a free download available from Microsoft. After the .NET Framework is installed, you must configure IIS to run the framework properly. You can do so by going to IIS and setting the default user to ASPNET. More experienced users might want to edit the config files on the host to add additional users or permissions. Secondly, you need to establish a database in SQL Server. If you do not have SQL Server, the SQL desktop engine (MSDE) is available as a free download from Microsoft. You can download this engine by going to www.asp.net/webmatrix/ and downloading the ASP.NET community-built Web Matrix, which includes steps for downloading MSDE. Once you have downloaded and installed MSDE, you must set up a database.

Section 1: Setting up SQL

At this point, all of the necessary work has been done to establish the base of your project. Next, create a table and a stored procedure to hold and query usernames and passwords. For this example, I used the following:

CREATE TABLE [dbo].[Users] (
    username char(25),
    password char(25)
)
GO

CREATE PROCEDURE MyAuthentication
    @username Char(25),
    @password Char(25)
AS
BEGIN
    DECLARE @actualPassword Char(25)

    SELECT @actualPassword = password
    FROM [Users]
    WHERE username = @username

    IF @actualPassword IS NOT NULL
        IF @password = @actualPassword
            RETURN 1
        ELSE
            RETURN -2
    ELSE
        RETURN -1
END

The table called Users holds the usernames and passwords of visitors.
The procedure called MyAuthentication is the gateway to the Users table. It accepts two parameters, username and password. The procedure queries the table for entries where the passed username (@username) is equal to the username in the table. The If..Else logic works like any other programming language's if..else. If no row matches the username, the procedure returns -1. If a row is found, the next check is executed: if the passed password (@password) equals the actual password in the table (@actualPassword), 1 is returned; otherwise, -2 is returned.

Section 2: Setting Up The Web.Config File

Now that the SQL side is set up, we can begin programming the ASP.NET pages. We will begin by configuring our web application. The configuration is stored in a file called web.config. A sample of the configuration is shown below:

<configuration>

  <!-- This section stores your SQL connection string as an appSetting
       called "conn". You can name this setting anything you would like. -->
  <appSettings>
    <add key="conn"
         value="server=localhost;database=MySite;uid=sa;pwd=password;" />
  </appSettings>

  <system.web>
    <!-- This section sets the authentication mode to forms authentication
         and routes all unauthenticated traffic to the specified page. It
         also specifies a timeout. The authorization section below denies
         all users who are not authenticated. For testing purposes, custom
         errors are turned off, and tracing is enabled for debugging. -->
    <authentication mode="Forms">
      <forms name="MyFormsAuthentication" loginUrl="login.aspx"
             protection="All" timeout="720" />
    </authentication>
    <authorization>
      <deny users="?" />
    </authorization>
    <customErrors mode="Off" />
    <trace enabled="true" requestLimit="0" pageOutput="true" />
  </system.web>

</configuration>

This sets up the basic web.config file that we will need.

Section 3: Creating The ASP.NET Page

The login ASP.NET page will include two sections, a code block and an HTML block.
For my example, I used the following login page:

<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Data.SqlClient" %>

<Script Runat="Server">

    Sub Login_Click( s As Object, e As EventArgs )
        If IsValid Then
            If MyAuthentication(txtUsername.Text, txtPassword.Text) > 0 Then
                FormsAuthentication.RedirectFromLoginPage(txtUsername.Text, False)
            End If
        End If
    End Sub

    Function MyAuthentication(strUsername As String, _
                              strPassword As String) As Integer

        ' Variable declaration
        Dim myConn As SqlConnection
        Dim myCmd As SqlCommand
        Dim myReturn As SqlParameter
        Dim intResult As Integer
        Dim conn As String

        ' Set conn equal to the connection string we set up in the web.config
        conn = ConfigurationSettings.AppSettings("conn")
        myConn = New SqlConnection(conn)

        ' We are going to use the stored procedure set up earlier
        myCmd = New SqlCommand("MyAuthentication", myConn)
        myCmd.CommandType = CommandType.StoredProcedure

        ' Set the default return parameter
        myReturn = myCmd.Parameters.Add("RETURN_VALUE", SqlDbType.Int)
        myReturn.Direction = ParameterDirection.ReturnValue

        ' Add SQL parameters
        myCmd.Parameters.Add("@username", SqlDbType.Char, 25).Value = strUsername
        myCmd.Parameters.Add("@password", SqlDbType.Char, 25).Value = strPassword

        ' Open the connection and execute the query,
        ' then set intResult equal to the default return parameter
        ' and close the SQL connection
        myConn.Open()
        myCmd.ExecuteNonQuery()
        intResult = myCmd.Parameters( "RETURN_VALUE" ).Value
        myConn.Close()

        ' If..Then..Else to check the result.
        ' If intResult is less than 0 then there is an error
        If intResult < 0 Then
            If intResult = -1 Then
                lblMessage.Text = "Username Not Registered!<br><br>"
            Else
                lblMessage.Text = "Invalid Password!<br><br>"
            End If
        End If

        ' Return the result
        Return intResult

    End Function

</Script>

<html>
<head>
    <title>Authentication Sample</title>
</head>
<body>

<form Runat="Server">
    <asp:table runat="Server" HorizontalAlign="Center">
        <asp:tablerow>
            <asp:tablecell ColumnSpan="2">
                <h2>Please Login:</h2>
                <asp:label ID="lblMessage" ForeColor="Crimson" Font-Bold="True"
                    Runat="Server" />
            </asp:tablecell>
        </asp:tablerow>
        <asp:tablerow>
            <asp:tablecell CssClass="FormText">
                Username:
            </asp:tablecell>
            <asp:tablecell CssClass="FormText">
                <asp:TextBox ID="txtUsername" MaxLength="25" Runat="Server"
                    CssClass="FormElement" />
                <asp:RequiredFieldValidator ControlToValidate="txtUsername"
                    Text="Required!" Runat="Server" />
            </asp:tablecell>
        </asp:tablerow>
        <asp:tablerow>
            <asp:tablecell CssClass="FormText">
                Password:
            </asp:tablecell>
            <asp:tablecell CssClass="FormText">
                <asp:TextBox ID="txtPassword" TextMode="Password" MaxLength="25"
                    Runat="Server" CssClass="FormElement" />
                <asp:RequiredFieldValidator ControlToValidate="txtPassword"
                    Text="Required!" Runat="Server" />
            </asp:tablecell>
        </asp:tablerow>
        <asp:tablerow>
            <asp:tablecell ColumnSpan="2" HorizontalAlign="Center">
                <asp:Button Text="Login!" OnClick="Login_Click" Runat="Server" />
            </asp:tablecell>
        </asp:tablerow>
    </asp:table>
</form>

</body>
</html>

The server button contains an OnClick attribute that calls a subroutine named Login_Click. The Login_Click subroutine checks the validity of the form and then checks that the authentication succeeded. If it did, the form redirects the user to the page that forwarded the request to the login.aspx page. If not, the user remains on the page until the credentials are correct. The authentication is checked using a function called MyAuthentication. This function reads the SQL settings from the web.config file. It then declares that a stored procedure will be used. Finally, input and return parameters are added to the command. When the stored procedure is executed, the return value is checked for errors.
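The stored procedure's return-code convention can also be sketched outside ASP.NET. Here is a minimal Python stand-in; the USERS dictionary and its contents are hypothetical, and this is an illustration of the logic only, not part of the sample application:

```python
# In-memory stand-in for the [Users] table, mirroring the stored
# procedure's return codes: 1 = authenticated, -1 = username not
# registered, -2 = invalid password. The data below is made up.
USERS = {"alice": "s3cret"}

def my_authentication(username: str, password: str) -> int:
    actual = USERS.get(username)
    if actual is None:
        return -1   # no matching row: username not registered
    if password == actual:
        return 1    # credentials match
    return -2       # row found, but the password is wrong

assert my_authentication("alice", "s3cret") == 1
assert my_authentication("bob", "anything") == -1
assert my_authentication("alice", "wrong") == -2
```

The same three-way outcome is what the VB.NET page inspects in its If..Then..Else block.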
Remember that our stored procedure returns a negative number for an incorrect username or password. The If..Else statement is set up to check this. If the credentials are incorrect, a message is displayed on the screen. If they are correct, the function returns a positive value and the subroutine continues.

Reader comments:

5/25/2011: Hi, I want the code in C#. Can you please send it?
7/13/2009: login authentication in asp.net
4/16/2007: I got an error on the line intResult = myCmd.Parameters( "RETURN_VALUE" ).Value when I converted the code to C#. How do I solve this?
9/12/2006: Nicely done, but I have a little question: what if I want to know the username of the user in other webpages?
3/27/2006: Easy to understand. Lots of thanks.
2/2/2006: Very, very good.
1/14/2006: Thanks, this tutorial is very good.
11/15/2005: Convert sqlCommand2.Parameters["RETURN_VALUE"].Value to an integer.
11/12/2005: Hello, thanks for this great code, but I have a problem. I converted this code to C# and everything is fine except this statement: intResult = sqlCommand2.Parameters["RETURN_VALUE"].Value; it says it cannot implicitly convert from object to int. Can you help me do this in C#?
10/3/2005: Very nice work, thanks! It needs an import at the top of the page, though.
7/29/2005: Thank you. Just what I needed.
6/16/2005: This was very well done and covers a whole lot of stuff that I was actually after!
5/28/2005: Just what I was after.
As explained in an earlier post (which you may be interested in reading as a bit of background to this one), the earlier part of the Caenozoic (the current era of the earth's history) was home to a number of mammalian lineages of very mysterious relationships. Very few of the familiar orders around us today had yet put in an appearance, and instead the world was home to such oddities as pantodonts, tillodonts and dinocerates. Among the prominent carnivorous mammals of the time were a group known as the creodonts. Creodonts ranged in size from that of a small cat to lion- or bear-size species, and often converged in appearance with those animals. But what were creodonts? Current authors regard the Creodonta as including two families, the vaguely cat-like Oxyaenidae and the largely dog- or hyaena-like Hyaenodontidae. Oxyaenids were found in North America and Europe during the late Palaeocene and Eocene, while hyaenodontids were found in Africa, Eurasia and North America from the Late Palaeocene to near the end of the Miocene, though they disappeared from North America not long after the end of the Eocene (Gheerbrant et al., 2006). Many authors have suggested a relationship with modern carnivorans (cats, dogs, weasels, bears, etc.), and they have been included with the latter in a superorder Ferae. Popular as this arrangement has been, however, there's just one small problem - there's not a shred of evidence to support it. Part of the problem is that creodonts are a good example of what might be called "taxonomic drift". Imagine that an author establishes a taxon, and presents a list of organisms that he thinks belong to that taxon. A few years pass by, and the taxon is revised by another author, who excludes some of the originally-included species that he thinks belong elsewhere, and substitutes a few more species that he believes to be related to the remainder. 
Carry this on through a few subsequent revisions, with species being taken out and put in, and you may end up with a situation where nearly all of the original members of the taxon have been taken out, and the taxon name has become associated with a very different concept from its original intent. This can be horrendously confusing for later readers, because if they don't realise that this taxonomic drift has taken place, they may read things into older publications that their authors never intended. Creodonta was originally established by Edward Drinker Cope in 1875 as a suborder of the Insectivora*. In his new suborder, Cope included three families - Oxyaenidae, Ambloctonidae (now included in Oxyaenidae - Gunnell, 1998) and Arctocyonidae (another contemporary family of carnivorous placentals, within which Cope also included what are now regarded as the Miacidae). The Hyaenodontidae were not part of the original Creodonta - at the time, Hyaenodon was regarded as a genuine carnivoran. Cope distinguished creodonts from carnivorans by the former's lack of a fused scapholunar bone in the wrist, their ungrooved astragalus, and their less-developed and smoother cerebral hemispheres (Cope, 1884). These features, it should be noted, are all primitive for placentals, but to Cope indicated the creodonts' position in the insectivoran grade. He nevertheless regarded creodonts as ancestral to carnivorans, with cats descended from Oxyaenidae and dogs from Miacidae (Cope, 1880). Later, Cope (1883) included Insectivora and Creodonta as separate suborders of his order Bunotheria, which also included the tillodonts, taeniodonts and prosimians**. Cope (1883) also redefined creodonts to include mammals without continuously-growing incisors and with trituberculate molars, which meant that in addition to the Oxyaenidae and Miacidae, Creodonta now included Mesonychidae, Leptictidae, moles and tenrecs (Arctocyonidae were transferred to the Insectivora). 
The Hyaenodontidae wormed their way in a year later (Cope, 1884). *This does not necessarily mean that he thought they were specifically related to modern insectivorans such as shrews and hedgehogs. Cope and most of his contemporaries would have regarded the "Insectivora" as representing the generalised basal form from which all other placental mammals were derived, and recent insectivorans would have been the remnants of that original grade. **It is also notable that Cope regarded the aye-aye as forming a separate suborder from other prosimians, due to its rodent-like incisors. Cope (1884) held that the tillodonts were "intimately allied to the living Chiromys [aye-aye] of Madagascar, which is itself almost a lemur, by general consent" (emphasis mine). So right from the beginning, the question of what was a creodont was convoluted. Over the years, various families of "creodonts" were reassigned as their relationships became clearer. The Miacidae became regarded as true Carnivora. Arctocyonidae and Mesonychidae became included among the primitive ungulates (another confused mess, but that's a story for another year) and may be related to artiodactyls. Moles and tenrecs, of course, were reunited with their fellow modern insectivorans (though the tenrecs have recently had another falling-out). Eventually, the creodonts were whittled down to their modern content of oxyaenids and hyaenodontids, but, as pointed out by Polly (1996), "Hyaenodontidae and Oxyaenidae are currently grouped together in Creodonta because they are the only taxa that have not been removed from the group, not because there has been specific positive evidence proposed for their grouping". Those few characters the two families do share are also found in other, non-creodont mammals. As for their association with Carnivora, the two orders have been associated because they both possess shearing carnassial teeth. 
However, while the carnassials in Carnivora are formed by the last upper premolars and the first lower molars, those of Oxyaenidae are derived from the first upper and second lower molars, while hyaenodontids have two sets of carnassials formed by the first upper/second lower and second upper/third lower molars. Carnassials have also developed in other groups of mammals - notably the borhyaenoids, which are metatherians if not marsupials and so definitely not related to carnivorans. The only real reason creodonts have been associated with Carnivora for so long seems to be their prior inclusion of the genuinely carnivoran (or stem-carnivoran) miacids. It's a bit like when one of your friends brings an acquaintance of theirs to a party who just hangs around for hours with everybody being too polite to ask them to leave. So, if they weren't related to Carnivora, can we say what creodonts were related to? Particularly in the case of Oxyaenidae, the answer is brief, simple and to the point: we really have not got a sodding clue. Whatever their ancestry might have been, oxyaenids were horribly derived little (or not so little) beggars - for instance, they had completely lost the third molars. Van Valen (1969) derived both oxyaenids and hyaenodontids from the Palaeoryctidae, particularly from the Cretaceous-Palaeocene Cimolestes, and other authors seem to have regarded the idea favourably, at least for the hyaenodontids (Polly, 1996; Gheerbrant et al., 2006). The main problem with this scenario, however, is that the Palaeoryctidae of Van Valen and other authors is itself polyphyletic. For instance, the phylogenetic analysis of Wible et al. (2007) included two "palaeoryctids", Cimolestes and Eoryctes (Eoryctes is more likely to represent the Palaeoryctidae proper), and while Cimolestes appeared outside the placental crown group, Eoryctes was placed among the insectivorans as the sister to Potamogale (Tenrecidae).
If creodonts (either or both families) are closer to Cimolestes, they may be stem-eutherians. If they are closer to Palaeoryctidae proper, they may even be afrotheres (Wible et al. did not support placement of tenrecs among afrotheres, but it is notable that the earliest hyaenodontids are African). Placement of either the Oxyaenidae or the Hyaenodontidae still awaits proper analysis. Cope, E. D. 1880. On the genera of the Creodonta. Proceedings of the American Philosophical Society 19 (107): 76-82. Cope, E. D. 1883. On the mutual relations of the bunotherian Mammalia. Proceedings of the Academy of Natural Sciences of Philadelphia 35: 77-83. Cope, E. D. 1884. The Creodonta. American Naturalist 18 (3): 255-267. Gheerbrant, E., M. Iarochene, M. Amaghzaz & B. Bouya. 2006. Early African hyaenodontid mammals and their bearing on the origin of the Creodonta. Geological Magazine 143 (4): 475-489. Gunnell, G. F. 1998. Creodonta. In Evolution of Tertiary Mammals of North America vol. 1. Terrestrial Carnivores, Ungulates, and Ungulatelike Mammals (C. M. Janis, K. M. Scott & L. L. Jacobs, eds) pp. 91-109. Cambridge University Press. Polly, P. D. 1996. The skeleton of Gazinocyon vulpeculus gen. et comb. nov. and the cladistic relationships of Hyaenodontidae (Eutheria, Mammalia). Journal of Vertebrate Paleontology 16 (2): 303-319. Van Valen, L. 1969. The multiple origins of the placental carnivores. Evolution 23 (1): 118-130. Wible, J. R., G. W. Rougier, M. J. Novacek & R. J. Asher. 2007. Cretaceous eutherians and Laurasian origin for placental mammals near the K/T boundary. Nature 447: 1003-1006.
Most extreme January - June period on record NOAA’s U.S. Climate Extremes Index (CEI), which tracks the percentage area of the contiguous U.S. experiencing top-10% and bottom-10% extremes in temperature, precipitation, and drought, was 44% during the year-to-date January - June period. This is the highest value since CEI record-keeping began in 1910, and more than twice the average value. Remarkably, 83% of the contiguous U.S. had maximum temperatures that were in the warmest 10% historically during the first six months of 2012, and 70% of the U.S. had warm minimum temperatures in the top 10%. The percentage area of the U.S. experiencing top-10% drought conditions was 20%, which was the 14th greatest since 1910. Extremes in 1-day spring heavy precipitation events were near average.
As temperatures in southwestern North America increase 3 to 5 °C over the next 60 to 90 years, rapidly colonizing species should increase, while slow colonizing species will at first decrease, eventually becoming re-established in their new range. Using long-term plots, this successional process has been estimated to require from 100 to 300+ years in small areas, under a stable climate, with a nearby seed source. How much longer will it require on a continental scale, under a changing climate, without a nearby seed source? This question is tested using the response of fossil plant assemblages from the Grand Canyon, Arizona to the most recent rapid warming of similar magnitude. At the start of the Holocene, 11,700 years ago, temperatures increased about 4 °C over less than a century. Grand Canyon plant species responded at different rates to this warming climate. Early-successional species rapidly increased, while late-successional species decreased. This shift persisted throughout the following 2700 years. Two similar but less pronounced shifts followed rapid warming events around 14,700 and 16,800 years ago. Late-successional species only predominated following 4000 years or more of relatively stable temperatures during the full glacial Wisconsinan and late Holocene. These results suggest the potential magnitude, duration, and nature of future ecological changes. When these concepts are extended to include the most rapid early-successional colonizers, the herbaceous species, they imply that the recent increases in invasive exotics may be only the most noticeable part of a new resurgence of herbaceous early-successional vegetation. These results also caution against models of natural vegetation and carbon balance projecting future conditions based upon the assumption that vegetation approaches equilibrium within only a century.
Type.GetType Method (String, Boolean, Boolean)

Gets the Type with the specified name, specifying whether to perform a case-sensitive search and whether to throw an exception if an error occurs while loading the Type.

[Visual Basic]
Overloads Public Shared Function GetType( _
   ByVal typeName As String, _
   ByVal throwOnError As Boolean, _
   ByVal ignoreCase As Boolean _
) As Type

[C#]
public static Type GetType(
   string typeName,
   bool throwOnError,
   bool ignoreCase
);

[C++]
public: static Type* GetType(
   String* typeName,
   bool throwOnError,
   bool ignoreCase
);

[JScript]
public static function GetType(
   typeName : String,
   throwOnError : Boolean,
   ignoreCase : Boolean
) : Type;

Parameters:
typeName - The name of the Type to get.
throwOnError - true to throw any exception that occurs; false to ignore any exception that occurs.
ignoreCase - true to perform a case-insensitive search for typeName; false to perform a case-sensitive search for typeName.

Return Value: The Type with the specified name, if found; otherwise, a null reference (Nothing in Visual Basic).

Exceptions:
ArgumentNullException - typeName is a null reference (Nothing in Visual Basic).
TargetInvocationException - A class initializer is invoked and throws an exception.
TypeLoadException - throwOnError is true and an error is encountered while loading the Type.

Remarks:
GetType only works on assemblies loaded from disk. If you call GetType to look up a type defined in a dynamic assembly defined using the System.Reflection.Emit services, you might get inconsistent behavior. The behavior depends on whether the dynamic assembly is persistent, that is, created using the RunAndSave or Save access modes of the System.Reflection.Emit.AssemblyBuilderAccess enumeration. If the dynamic assembly is persistent and has been written to disk before GetType is called, the loader finds the saved assembly on disk, loads that assembly, and retrieves the type from that assembly. If the assembly has not been saved to disk when GetType is called, the method returns a null reference (Nothing in Visual Basic).
GetType does not understand transient dynamic assemblies; therefore, calling GetType to retrieve a type in a transient dynamic assembly returns a null reference (Nothing). To use GetType on a dynamic module, subscribe to the AppDomain.AssemblyResolve event and call GetType before saving. Otherwise, you will get two copies of the assembly in memory. If the requested type is non-public and the caller does not have ReflectionPermission to reflect non-public objects outside the current assembly, this method returns a null reference (Nothing). The following table shows what members of a base class are returned by the Get methods when reflecting on a type:

Field: No / Yes. A field is always hide-by-name-and-signature.
Event: Not applicable / The common type system rule is that the inheritance is the same as that of the methods that implement the property. Reflection treats properties as hide-by-name-and-signature. See note 2 below.
Method: No / Yes. A method (both virtual and non-virtual) can be hide-by-name or hide-by-name-and-signature.
Property: Not applicable / The common type system rule is that the inheritance is the same as that of the methods that implement the property. Reflection treats properties as hide-by-name-and-signature. See note 2 below.

Notes:
1. Hide-by-name-and-signature considers all of the parts of the signature, including custom modifiers, return types, parameter types, sentinels, and unmanaged calling conventions. This is a binary comparison.
2. For reflection, properties and events are hide-by-name-and-signature. If you have a property with both a get and a set accessor in the base class, but the derived class has only a get accessor, the derived class property hides the base class property, and you will not be able to access the setter on the base class.
3. Custom attributes are not part of the common type system.

Arrays or COM types are not searched for unless they have already been loaded into the table of available classes.
typeName can be a simple type name, a type name that includes a namespace, or a complex name that includes an assembly name specification. If typeName includes only the name of the Type, this method searches in the calling object's assembly, then in the mscorlib.dll assembly. If typeName is fully qualified with the partial or complete assembly name, this method searches in the specified assembly. AssemblyQualifiedName can return a fully qualified type name including nested types and the assembly name. All compilers that support the common language runtime will emit the simple name of a nested class, and reflection constructs a mangled name when queried, in accordance with the following conventions:

Backslash (\) - Escape character.
Comma (,) - Precedes the assembly name.
Plus sign (+) - Precedes a nested class.
Period (.) - Denotes namespace identifiers.

If the namespace were TopNamespace.Sub+Namespace, then the string would have to precede the plus sign (+) with an escape character (\) to prevent it from being interpreted as a nesting separator. Reflection emits this string as follows: TopNamespace.Sub\+Namespace+ContainingClass. A "++" becomes "\+\+", and a "\" becomes "\\". This qualified name can be persisted and later used to load the Type. To search for and load a Type, use GetType either with the type name only or with the assembly qualified type name. GetType with the type name only will look for the Type in the caller's assembly and then in the System assembly. GetType with the assembly qualified type name will look for the Type in any assembly. Type names may include trailing characters that denote additional information about the type, such as whether the type is a reference type, a pointer type or an array type. To retrieve the type name without these trailing characters, use t.GetElementType().ToString(), where t is the type. Spaces are relevant in all type name components except the assembly name.
In the assembly name, spaces before the ',' separator are relevant, but spaces after the ',' separator are ignored. The following table shows the syntax you use with GetType for various types:

An unmanaged pointer to MyType: Type.GetType("MyType*")
An unmanaged pointer to a pointer to MyType: Type.GetType("MyType**")
A managed pointer or reference to MyType: Type.GetType("MyType&"). Note that unlike pointers, references are limited to one level.
A parent class and a nested class: Type.GetType("MyParentClass+MyNestedClass")
A one-dimensional array with a lower bound of 0: Type.GetType("MyArray[]")
A one-dimensional array with an unknown lower bound: Type.GetType("MyArray[*]")
An n-dimensional array: a comma (,) inside the brackets a total of n-1 times. For example, System.Object[,,] represents a three-dimensional Object array.
A two-dimensional array's array: Type.GetType("MyArray[][]")
A rectangular two-dimensional array with unknown lower bounds: Type.GetType("MyArray[*,*]") or Type.GetType("MyArray[,]")

.NET Compact Framework Platform Note: The ignoreCase parameter is not supported and should be set to false.

Platforms: Windows 98, Windows NT 4.0, Windows Millennium Edition, Windows 2000, Windows XP Home Edition, Windows XP Professional, Windows Server 2003 family, .NET Compact Framework, Common Language Infrastructure (CLI) Standard

.NET Framework Security: ReflectionPermission for reflecting non-public objects. Associated enumeration: ReflectionPermissionFlag.TypeInformation

See also: Type Class | Type Members | System Namespace | Type.GetType Overload List | String | TypeLoadException | ReflectionPermission | AssemblyQualifiedName | GetAssembly | GetType | AssemblyName | Specifying Fully Qualified Type Names
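The escaping convention described above (backslash as the escape character, an unescaped plus sign separating nesting levels) can be sketched in a few lines. This is an illustrative helper written in Python, not part of any .NET API:

```python
def escape_nested_name(namespace: str, nested: str) -> str:
    # Escape backslashes first, then literal '+' characters, so that the
    # single unescaped '+' we append is the only nesting separator.
    escaped = namespace.replace("\\", "\\\\").replace("+", "\\+")
    return escaped + "+" + nested

# A namespace containing a literal '+' must be escaped before the
# nesting separator is appended:
print(escape_nested_name("TopNamespace.Sub+Namespace", "ContainingClass"))
# TopNamespace.Sub\+Namespace+ContainingClass
```

A namespace without special characters passes through unchanged, gaining only the '+' separator.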
Prove that if n is a triangular number then 8n+1 is a square number. Prove, conversely, that if 8n+1 is a square number then n is a triangular number.

Robert noticed some interesting patterns when he highlighted square numbers in a spreadsheet. Can you prove that the patterns will continue?

Which numbers can we write as a sum of square numbers?

Can you arrange the numbers 1 to 17 in a row so that each adjacent pair adds up to a square number?

Sets of integers like 3, 4, 5 are called Pythagorean Triples, because they could be the lengths of the sides of a right-angled triangle. Can you find any more?

Discover a way to sum square numbers by building cuboids from small cubes. Can you picture how the sequence will grow?

A square patio was tiled with square tiles all the same size. Some of the tiles were removed from the middle of the patio in order to make a square flower bed, but the number of the remaining tiles. . . .

A challenge that requires you to apply your knowledge of the properties of numbers. Can you fill all the squares on the board?

How many four digit square numbers are composed of even numerals?

What four digit square numbers can be reversed and become the square of another number?

What is the value of the digit A in the sum below: [3(230 + A)]^2 = A

A woman was born in a year that was a square number, lived a square number of years and died in a year that was also a square number. When was she born?
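The first problem above (n is triangular exactly when 8n+1 is square) can be checked numerically with a short script before attempting the proof; the bound of 10,000 is arbitrary:

```python
import math

def triangular(k: int) -> int:
    """k-th triangular number, T_k = k(k+1)/2."""
    return k * (k + 1) // 2

def is_perfect_square(m: int) -> bool:
    r = math.isqrt(m)
    return r * r == m

# Forward direction: 8*T_k + 1 = 4k^2 + 4k + 1 = (2k + 1)^2, a square.
assert all(8 * triangular(k) + 1 == (2 * k + 1) ** 2 for k in range(1000))

# Converse, checked numerically: whenever 8n + 1 is a perfect square,
# n appears among the triangular numbers. (If 8n + 1 = m^2 then m is odd,
# say m = 2k + 1, and solving gives n = k(k+1)/2.)
triangulars = {triangular(k) for k in range(200)}
for n in range(10000):
    if is_perfect_square(8 * n + 1):
        assert n in triangulars
```

The parenthetical comments sketch the algebra that turns this numerical check into the actual proof.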
Web-based Resources for Teaching the Ocean System This is a general collection of web sites that relate to teaching oceanography at the undergraduate level. Please let us know about your favorite oceanographic web sites. Turbidity Current Movies part of SERC Web Resource Collection These turbidity, or density, current movies from Wesleyan University's Learning Objects website show turbidity current experiments conducted in a 1 meter long tank. In the lab, the tank angle and ... Indian Ocean Tsunami Quicktime Animation part of SERC Web Resource Collection This Quicktime animation, by Dr. Steven Ward at the Institute of Geophysics and Planetary Physics at the University of California - Santa Cruz, shows the tsunami's progress across the Indian Ocean. ... 2004 Sumatra Earthquake part of SERC Web Resource Collection This visualization from Kenji Satake at the Active Fault Research Center in Tsukuba, Japan, highlights the crests and troughs of the tsunami waves as they travel across the Indian Ocean and refract ... NOAA Indian Ocean Tsunami Animation part of SERC Web Resource Collection This Quicktime visualization from NOAA concentrates on the wave propagation as it occurred in the Indian Ocean as a result of the Sumatra earthquake. This animation can be paused, rewound and ... NOAA East African Coast Tsunami Animation part of SERC Web Resource Collection This NOAA visualization tracks the tsunami waves until they reach the East African coast of Somalia. This Quicktime animation can be paused, rewound and advanced. NOAA World Wide Tsunami Animation part of SERC Web Resource Collection This NOAA visualization was created by combining two other tsunami models - the Indian Ocean and East African Coast tsunami animations. This movie shows the worldwide propagation of the tsunami ...
The Guardian Unlimited Special Tsunami Report part of SERC Web Resource Collection This special report from The Guardian uses imagery from NOAA's tsunami animation collection and uses a stepwise progression to show a timeline of the tsunami's arrival at particular points throughout ... Ballard Team Has High Hopes for Deep-Water Robot part of SERC Web Resource Collection This National Geographic article describes Hercules, an innovative underwater remotely operated vehicle (ROV) equipped with mechanical arms, fingers, and a variety of tools. The robot will be used to ... Technology Opens Deep Seas to Exploration part of SERC Web Resource Collection This National Geographic article details deep-sea exploration and the potential that many new technologies have that will help explore this uncharted terrain.
(a) The uncertainty region for the vacuum state is a circle centered at the origin. (b) The coherent state exhibits a circular uncertainty region centered about the rotating phasor. Points (x, p) in the uncertainty circle trace out an electric field with an uncertainty that is independent of time. (c) The squeezed-vacuum state has an elliptical uncertainty region. (d) The quadrature-squeezed coherent state has an elliptical uncertainty region centered about the phasor. As with the squeezed-vacuum state (c), the electric field shows a periodic reduction and enhancement of its uncertainty. If the minor axis of the ellipse were oriented along the phasor, the state would also be number squeezed. Modified/Expanded from Figure 2 on page 28 of M.C. Teich and B.E.A. Saleh, "Squeezed and Antibunched Light", Physics Today, June, 1990.
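These uncertainty regions can be summarized quantitatively. In one common convention (an assumption here; quadrature normalizations differ between texts by factors of 2), the two quadratures obey

```latex
\Delta X_1 \,\Delta X_2 \;\ge\; \tfrac{1}{4},
\qquad
\Delta X_1 = \Delta X_2 = \tfrac{1}{2}
\quad\text{(vacuum and coherent states)},
```

so panels (a) and (b) are minimum-uncertainty circles, while the squeezed states of panels (c) and (d) trade $\Delta X_1 < \tfrac{1}{2}$ against $\Delta X_2 > \tfrac{1}{2}$, deforming the circle into an ellipse while keeping the product at its minimum.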
Tip-tilt mirror
A tip-tilt mirror is a rapidly moving mirror that can make small rotations around its two axes. Tip-tilt mirrors are used in latest-generation telescopes to correct the aberration that the atmosphere introduces along the light path, improving image quality beyond what the night's seeing would otherwise allow. A significant fraction of that aberration amounts to simply shifting the image by a small amount, in a random direction that changes several times a second. A tip-tilt mirror introduced in the optical path of the telescope is moved in real time to reposition the image at its theoretical center. A fast detector is needed to achieve good results.
A model is a representation of a set of information constructs. A familiar model is the relational model, which defines tables composed of columns and containing records of data. Another familiar model is the XML model, which defines hierarchical data sets. In Teiid, models are used to define the entities, and relationships between those entities, required to fully define the integration of information sets so that they may be accessed in a uniform manner using a single API and access protocol. Source models define the structural and data characteristics of the information contained in data sources. Teiid uses the information in source models to access the information in multiple sources, so that from a user's viewpoint these all appear to be in a single source. In addition to source models, Teiid provides the ability to define a variety of view models. These can be used to define a layer of abstraction above the physical layer, so that information can be presented to end users and consuming applications in business terms rather than as it is physically stored. These business views can be in a variety of forms: relational, XML, or Web services. Views are defined using transformations between models.
Types of Models
Teiid Designer can be used to model a variety of classes of models. Each of these represents a conceptually different classification of models.
- Relational, which model data that can be represented in table – columns and records – form. Relational models can represent structures found in relational databases, spreadsheets, text files, or simple Web services.
- XML, which model the basic structures of XML documents. These can be “backed” by XML Schemas. XML models represent nested structures, including recursive hierarchies.
- XML Schema, the W3C standard for formally defining the structure and constraints of XML documents, as well as the datatypes defining permissible values in XML documents.
- Web Services, which define Web service interfaces, operations, and operation input and output parameters (in the form of XML Schemas).
- Model Extensions, for defining property name/value extensions to other model classes.
VDBs contain two primary varieties of model types - source and view. Source models represent the structure and characteristics of physical data sources, whereas view models represent the structure and characteristics of abstract structures you want to expose to your applications.
Models and VDBs
Models used for data integration are packaged into a virtual database (VDB). The models must be in a complete and consistent state when used for data integration. That is, the VDB must contain all the models and all resources they depend upon. Models contained within a VDB can be imported into the Teiid Designer. In this way, VDBs can be used as a way to exchange a set of related models.
Models and Translators, Resource Adaptors
Source models must be configured with a Translator and a Resource Adaptor before a VDB is tested in Designer or deployed for data access. Multiple models may use the same settings, but each model must define these configurations. Models must be in a valid state in order to be used for data access. Validation of a single model checks that it is self-consistent and complete: there are no "missing pieces" and no references to non-existent entities. Validation of multiple models checks that all inter-model dependencies are present and resolvable. Models must always be validated when they are deployed in a VDB for data access purposes.
Model Execution in Teiid Designer
Models can be tested in the Teiid Designer by issuing SQL queries in the SQL Explorer perspective. In this way, you can iterate between defining your integration models and testing them to see if they are yielding the expected results. Models are stored in XML format, using the XMI syntax defined by the OMG.
Model files should never be modified by hand. While it is possible to do so, you may corrupt the file such that it cannot be used within the JBoss Enterprise Data Services Platform.
Dynamic VDBs and Models
The information in this article applies to VDBs that are built using the Teiid Designer. If you are building Dynamic VDBs, much of it does not apply. However, Dynamic VDBs also contain models; these only define the configuration for importing metadata, along with Translators and Resource Adaptors.
Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ ° Topic review (newest first) Thank you, I read the article (there cannot be a parabola with my equation, right?). Yes, I worked that out. No problem and no harm done. Sorry, bob bundy, I had put my email address in place of my name by mistake, and I didn't know how to delete a post, so I gave another post by mint. Generally it is difficult to say what the general graph of something of that form would look like. Curve sketching is done by observing properties of the graph: Sorry, there will be '=d' on the right of the equation [d=constant] Well, what would this function's graph look like- Is there a method to recognize what a function's graph will be by just looking at the equation? I know how to recognize a circle, ellipse, line, parabola; what about others?
AFTER carefully studying the faint microwave echo of the big bang, three teams of astrophysicists say that our Universe really did start out much smaller than an atom and then expanded faster than light, exploding in size during its first fraction of a second in existence. It's the best evidence yet for what's called "inflation" theory. Last year experimenters working with Boomerang and Maxima, microwave telescopes attached to high-altitude balloons, reported the most detailed images yet of the cosmic microwave background. This is radiation from the big bang that pervades all of space. The researchers found faint hot spots in the radiation that were roughly an angular degree in size. These hot spots showed up as a distinct peak in the "power spectrum" of ripple amplitude versus ripple size. The position of the peak showed that the total mass and energy of the Universe exactly balances gravity, making the Universe ...
ORCAS: Researchers designing new ultra-sensitive hydrophones -- underwater microphones -- looked to a creature known for its keen sense of hearing in the underwater world, the killer whale. Mother Nature Network reports that Stanford University's Onur Kilic and colleagues analyzed orca ears, which are perfectly constructed to hear well in the ocean's noisy environment. When sound waves hit their thin membranes, data is transmitted as electric impulses to the brain. Kilic built hydrophone sensors that work similarly, then beefed up the frequency range beyond what orcas can sense. They could affect all sorts of research, from tracking migrating whales to directing robots toward underwater oil leaks. The research was published recently in the Journal of the Acoustical Society of America.
SHRIMP: Another source of inspiration is the peacock mantis shrimp, blessed with wicked-sharp vision. The crustacean native to the Indian Ocean and western and central Pacific Ocean is one of the few species that can see circularly polarized light, like the light used to make 3-D movies. Some scientists believe the shrimp's eyes perform better over the entire visual spectrum than any man-made waveplates, transparent slabs that can alter the polarization of light. Penn State University reports that an international research team developed a method to produce a waveplate crafted from two layers of nanorods structurally similar to the shrimp's. The research is published online in the journal Nature Communications. On the flip side is a story about one way technology can help inform us about animals: Physorg.com shares a bit about how biologists following wild zebras in Africa joked about how nice it would be to have a bar-code reader to help identify and catalog individual beasts. Now, they do, ever since scientists developed the StripeSpotter, a device being used to identify and study zebras in Kenya.
Designed for use with animals that have prominent stripes or patches, it was developed as a joint project between scientists at the University of Illinois at Chicago and Princeton University. They hope StripeSpotter will assist in building biometric databases using photographs taken in the field. In the future, it might be used to identify and track such species as giraffes, given their distinctive spot patterns, and elephants, by the wrinkles on their trunks.
WILDLIFE: Interested in observing spawning salmon, Roosevelt elk or Sauvie Island's abundant wildlife? Answer yes and Firsthand Oregon, a series of conservation-focused field trips and lectures, might be for you. The series, in which participants head out on small-group learning excursions with Oregon Department of Fish and Wildlife biologists, is sponsored by the Oregon Wildlife Heritage Foundation. Registration is open first to foundation members; when space allows, others can join in for $6. Most trips are family-friendly. Next up in the series:
July 27: Fish salvage, site tour and discussion of fish-passage challenges.
Oct. 1: See spring chinook spawning in the South Santiam River.
Nov. 12: See new fish passage projects along Rickreall Creek.
Nov. 19: View wildlife on Sauvie Island.
Learn more and register online or call 503-255-6059.
FILM: "Keiko the Untold Story," a documentary by Portlander Theresa Demarest, which made its debut last October at a Southern California film fest, finally will screen in Oregon during the Da Vinci Days Film Festival in Corvallis. It follows the story of the whale star of the 1993 film, "Free Willy." The affable orca lived at the Oregon Coast Aquarium from January 1996 to September 1998, when he was moved to a sea pen in Iceland, where his home waters were believed to be. He died in Norwegian waters in 2003.
- Jan 31, 2003
Finding a Minimum with MIN()
Use the aggregate function MIN() to find the minimum of a set of values. To find the minimum of a set of values:
MIN(expr)
expr is a column name, literal, or expression. The result has the same data type as expr. Listing 6.1 and Figure 6.1 show some queries that involve MIN(). The first query returns the price of the lowest-priced book. The second query returns the earliest publication date. The third query returns the number of pages in the shortest history book.
Listing 6.1 Some MIN() queries. See Figure 6.1 for the results.
SELECT MIN(price) AS "Min price" FROM titles;
SELECT MIN(pubdate) AS "Earliest pubdate" FROM titles;
SELECT MIN(pages) AS "Min history pages" FROM titles WHERE type = 'history';
Figure 6.1 Results of Listing 6.1.
MIN() works with character, numeric, and datetime data types. With character data columns, MIN() finds the value that is lowest in the sort sequence; see Sorting Rows with ORDER BY in Chapter 4. DISTINCT isn't meaningful with MIN(); see Aggregating Distinct Values with DISTINCT later in this chapter. String comparisons may be case-insensitive or case-sensitive, depending on your DBMS; see the DBMS Tip in Filtering Rows with WHERE in Chapter 4. When comparing two VARCHAR strings for equality, your DBMS may right-pad the shorter string with spaces and compare the strings position by position. In this case, the strings 'Jack' and 'Jack ' (the latter with a trailing space) are equal. Refer to your DBMS documentation (or experiment) to determine which string MIN() returns.
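As a rough analogy to Listing 6.1 (using Python's built-in `min` rather than a DBMS, and invented sample rows, so collation and padding subtleties do not apply):

```python
from datetime import date

# Hypothetical rows standing in for the `titles` table.
titles = [
    {"price": 29.99, "pubdate": date(2000, 8, 1), "pages": 304, "type": "history"},
    {"price": 19.95, "pubdate": date(1999, 6, 9), "pages": 128, "type": "biography"},
    {"price": 39.95, "pubdate": date(2001, 1, 1), "pages": 462, "type": "history"},
]

min_price = min(t["price"] for t in titles)            # like MIN(price)
earliest = min(t["pubdate"] for t in titles)           # like MIN(pubdate)
min_hist = min(t["pages"] for t in titles
               if t["type"] == "history")              # MIN with a WHERE filter

print(min_price, earliest, min_hist)  # 19.95 1999-06-09 304
```

Like MIN(), `min` works on numbers, strings, and dates alike; unlike a DBMS, Python string comparison is always case-sensitive and never pads with spaces.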
August 1, 2007 Scientists give us a sneak peek into the world of wave pools, and explain how these huge pools make constant waves. Waves are made by a huge compressor that feeds four gigantic air blowers. Then a computer controls chambers that generate the waves. When the chamber lids are closed, air from the blowers pushes the water out and makes a wave. When the valve is open, the balance tank fills with water, getting ready to make the next wave. It works just like a toddler pushing a cup upside-down onto water in a bathtub. Summer's almost over...but the fun isn't over just yet! At Noah's Ark Waterpark, there's the "Flash Flood," the "Black Anaconda," and the "Stingray!" But it's the 'Big Kahuna Wave Pool' that is a kid favorite. But -- how do wave pools work? Actually, a huge compressor is fired up that feeds four gigantic air blowers. Then a computer controls chambers that generate the waves. When the chamber lids are closed, air from the blowers pushes the water out and makes a wave. When the valve is open, the balance tank fills with water, getting ready to make the next wave. It works just like a toddler pushing a cup upside-down onto water in a bathtub. "There's a lot of mechanics down there, but it really is simple physics," said Tim Gantz, president and co-owner of Noah's Ark water park in the Wisconsin Dells. More than one million gallons of water constantly run through the Big Kahuna wave pool, and 30,000 gallons of water travel in and out of the chambers every few seconds. But luckily -- all most of us really have to worry about is playing by the wave pool rules. BACKGROUND: Outdoor and indoor wave pools and water theme parks are hugely popular in the US. These are sanitized, man-made versions of nature's wild surfs. In wave pools, the water is chlorinated, the beach is concrete, and the waves arrive like clockwork once every few minutes. The densest collection of wave pools can be found in the Wisconsin Dells, home to 18 water parks.
Such parks are good clean family fun, but they also illustrate some fascinating basic science about waves. MAKING WAVES: Waves are the result of wind traveling over water. Imagine a breeze blowing gently across the surface of a lake, creating small waves. The waves arise from the surface tension of water. The molecules on the water's surface hold together and form a sort of 'skin', which makes the surface stretchy, and therefore 'sticky.' As more air passes over that sticky surface, it grabs some molecules and pushes them into molecules ahead, which push on other molecules, and so on, so that the wave travels to the opposite shore. The water mostly stays in place; it's the disturbance caused by the wind that is moving across the water. In strong wind, the waves become choppy. The stronger the wind, the larger the waves, because as the waves move, they run into each other and merge, adding their energy together to become bigger and move faster. HOW WAVE POOLS WORK: There are a number of ways to recreate wave action with just a basin of water and a means of creating a periodic disturbance: a strong blast of air, perhaps, or a rotating paddle wheel. In one such approach, there is a pump room below the pool, which causes a high-speed fan to blow air into a wide metal pipe, leading to an exhaust port. In the middle of the pipe is a butterfly valve, a wide disc with a swiveling metal axis rod. When the rod swivels so the disc rests horizontally in the pipe, it blocks the air flow, while swiveling the rod the other way moves the disc to a vertical position, allowing air to flow. A hydraulic piston swivels the rod back and forth at regular intervals, causing short bursts of pressurized air to flow up the exhaust port and blow on the surface of the water, creating an artificial wind. This creates small waves across the pool's surface.
BIGGER IS BETTER: A large wave pool doesn't push on the water with air or a paddle; instead, the wave machine dumps a large volume of water into the deep end of the pool. The surge of water travels all the way to the artificial 'beach', so that the water level in the pool once again balances out. Dumping more water into the pool increases the size and strength of the wave. There are five basic components to such a system: a water pumping system, a water collection reservoir, a series of release valves at the bottom of the reservoir, a giant slanted swimming pool, and a return canal, leading from the beach area back to the pumping system. In this scenario, the water is constantly circulating, moving from the deep end of the pool, out to the canal, around to the pumping system, and back into the deep end of the pool. The American Association of Physics Teachers contributed to the information contained in the TV portion of this report.
Snow Leopard Ecology
Understanding snow leopard ecology is a key building block for successful conservation programs. In order to protect the snow leopards, we must first identify the resources they use within the landscape and how they interact with each other and other wildlife. The Snow Leopard Trust conducts groundbreaking ecological research in five countries across Central Asia. In Mongolia, we created a Long-Term Ecological Study (LTES) that is focused on growing our knowledge of snow leopard behavior and patterns of land use. Through this study, we have been able to continuously monitor wild snow leopards as they hunt, interact with each other, and move around their home ranges. The LTES was launched in 2008 when we fitted a GPS tracking collar on a wild snow leopard in the South Gobi region of Mongolia. Named Aztai, this snow leopard was the first of over 15 snow leopards we have met over the last 4 years. Each collar we use sends out a daily satellite uplink with the location of the snow leopard wearing it. With this data, our field team can follow the cat's movement and activities, learning more than we ever thought possible. Because of this study, we now know that each individual snow leopard uses an average area of 250-300 km². The cats in our study are mostly active at night, and typically hunt a large animal such as an ibex or argali every 8-10 days. We also place research cameras in areas where we hypothesize snow leopards will be. These cameras photograph wild snow leopards as they engage with their environment, giving us a rare insight into the private lives of these mysterious cats. These cameras have photographed mothers and cubs, helping us establish educated estimates of snow leopard birth rates. Through this ecological research we hope one day to answer questions like:
- Who is the dominant male in a specific area?
- How do neighboring individuals, both male and female, interact?
The information from these ecological studies is used by researchers along with data from China, India, Pakistan and Kyrgyzstan on snow leopard diets. With this data, we now know that ensuring a healthy population of snow leopard prey species is vital to snow leopard conservation.
"God created the integers. All else is the work of man."

The problem with knowledge is that you have to start somewhere. Once you know something, you can start deducing other things. But how do you know the first thing? Descartes’s famous solution was “I think, therefore I am”. I have direct knowledge of my own thoughts, and this in turn tells me that I exist. Now we can go on. Mathematics, like all other knowledge, needs a starting point. Most of our mathematical knowledge is deduced from prior mathematical knowledge. I know that every positive integer is the sum of four squares because I know how to deduce this fact from other things I know. But where do I start? There seems to be a widespread misconception (widespread, that is, among non-mathematicians — mathematicians know better) that all we know is what we can derive from axioms. This is wrong for several reasons. Two of these I’ve blogged about repeatedly in the past (e.g. here, here, and here and here), but the third is even more fundamental:
- To say all we know about mathematics is what we can derive from the axioms is like saying that everything we know about Nebraska is what we can derive from maps of Nebraska. It confuses a partial description of reality with reality itself.
- Gödel’s Incompleteness Theorem (among other theorems) assures us that no system of axioms can serve as a complete description of the natural numbers. Any set of axioms that can describe the natural numbers also describes many other mathematical structures — just as a sufficiently undetailed map of Nebraska might serve equally well as a map of part of Montana. If the map fits both territories equally well, then it can’t be said to specify either territory.
- Any axiomatic system presupposes that you know something about numbers and therefore cannot be the basis for your knowledge about numbers.
In an axiomatic system, a proof is a list of statements, each of which is either an axiom or a logical consequence of earlier statements on the list. You can’t make sense of that notion until you know what a list is, and you can’t know what a list is unless you know what it means to come first, second, third and fourth in a series. Since I’ve blogged before about points 1 and 2, here I’ll emphasize point 3: To employ an axiomatic description of the natural numbers, you must already know something about natural numbers. That knowledge must come first. In fact, any sort of mathematical reasoning at all seems to rely on at least some prior familiarity with both numbers and sets. We write down axioms to formalize some of that knowledge, but numbers and sets are the starting points. The basic properties of numbers and sets are the things we have to “just know”, in the same sense that Descartes “just knew” that he was conscious. Some people find it unsettling that all knowledge must have a starting point. But nobody’s ever found a way around that problem, and in math, the starting points are numbers, sets, and their basic properties. But there is a key difference between our knowledge of numbers and our knowledge of sets. The difference is important and profound. When it comes to numbers, we write down (say) the Peano axioms to formalize some of what we already know. Then we discover (thanks to Gödel and others) that the Peano Axioms have multiple models — that is, there are alternative mathematical territories that they describe equally well. Of the many structures that obey the Peano Axioms, we are particularly interested in the standard model — that is, the ordinary natural numbers that you’ve known about since age three. When it comes to sets, we write down (say) the Zermelo-Fraenkel axioms to formalize some of what we already know.
Then we discover that the Zermelo-Fraenkel axioms have multiple models — that is, alternative mathematical territories that obey the axioms for a “universe of sets”. Of the many structures that obey the Zermelo-Fraenkel axioms, we’d like to pick one out and call it the standard model — the ordinary universe of sets that we’ve all been studying since fourth grade. But when you meditate on that notion, you discover that “the universe of sets” is a much fuzzier notion than “the natural numbers” — sufficiently fuzzy that it’s not clear there is a standard model of the Zermelo-Fraenkel axioms. Roughly this means the following: When we talk about the natural numbers, we are quite confident that we’re all talking about the same thing. When we talk about the universe of sets, it’s not so clear. This in turn makes the real numbers into a fuzzier notion than you might expect — because you can’t describe the real numbers without talking about sets, and sets are fuzzy. The rational numbers are fine — a rational number consists of a numerator and a denominator (together with some simple rules for deciding when two rational numbers are equal, and for adding them etc.), and numerators and denominators are natural numbers, which we understand. But to describe an arbitrary real number, you’ve got to talk about sets. The square root of two, for example, is the real number that separates the set of rational numbers whose squares are less than two from the set of rational numbers whose squares exceed two. Other real numbers have more complicated descriptions in terms of sets. So if there is no preferred “universe of sets” — and it’s not at all obvious that there is — then there is no preferred system of real numbers. Of course we all know how to specify a real number — it’s a (possibly infinite) string of digits, punctuated by a decimal point. But which infinite strings are permitted? Well, all of them, of course. But what are “all of them”? What infinite strings are there?
Ultimately, infinite strings are defined in terms of sets, which brings us full circle back to the fundamental ambiguity. The real numbers, then, are a little fuzzy. The natural numbers are sharp. And their sharpness derives, ultimately, from our direct intuition of them, from which all mathematical knowledge flows.
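The description of the square root of two as a separator of two sets of rationals is a Dedekind cut. In one standard formulation (not the author's notation, but a common one):

```latex
\sqrt{2} \;\longleftrightarrow\; (A, B),
\qquad
A = \{\, q \in \mathbb{Q} \;:\; q \le 0 \ \text{or}\ q^2 < 2 \,\},
\qquad
B = \mathbb{Q} \setminus A .
```

Which subsets of $\mathbb{Q}$ count as legitimate cuts depends on the ambient universe of sets, which is exactly where the fuzziness described above enters.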
1721 Balsam Carpet Xanthorhoe biriviata (Borkhausen, 1794)
Wingspan c. 28 mm. A species of damp woodland, occurring in fens, along canals and similar habitats. It is distributed locally in the south and south-east of England, and was first noted in the 1950s. The main larval foodplant is orange balsam (Impatiens capensis), an introduced plant, and the larvae can be found in June and in August and September. The adults fly in May and June, and as a second generation in July and August. The summer brood is generally darker in appearance, without the obvious white band.
Peregrine Falcons and DDT
Understanding "Bio-accumulation" of Toxins in the Environment
Even the youngest students that we talk to in the third and fourth grade seem to understand what it means to be listed as "endangered" and that the peregrine falcon and bald eagle were added to the endangered species list largely because of the impact of dichloro-diphenyl-trichloroethane (DDT). DDT was considered to be a safe and effective insecticide in the 1940s and 1950s. It was used worldwide as an agricultural insecticide, and to combat the lice and mosquitos responsible for the spread of human diseases such as typhus and malaria. At the time, there were no known significant adverse effects on either animal or human health. Scientists later discovered the harmful effects of this pesticide and its derivatives in the environment. When DDT was in wide and common use, daily doses of the chemical accumulated in the fatty tissue of the peregrine. The use of DDT was curtailed, and for some uses banned, in 1972; however, residual DDT in the environment continues to contaminate peregrine falcons today. Through a biological coincidence, the stored chemicals acted to "block" the movement of calcium during eggshell formation, causing the shells to be thin. Peregrine falcon eggs broke and embryos died at an alarming rate around the world, prompting biologists to search for a cause. DDT is an organochlorine, and is highly persistent in the environment. In soil, DDT is reported to have a half-life of between 2 and 15 years. Its metabolic products (DDE, DDD) accumulate through the food chain, with top-level predators such as raptors having a higher systemic concentration of the chemicals than other animals sharing the same environment. DDE (dichloro-diphenyl-ethylene) is the most stable and toxic of the DDT metabolites.
Because raptors and especially peregrine falcons exist at the top of food chains, regular surveys of their population status can reveal threats to environmental health before they become hazards to human beings. The Predatory Bird Research Group continues to study the peregrine falcon and monitor its breeding status because we believe it is an important indicator of ecosystem health. Our long-term studies of the peregrine falcon population in California could be very important to future assessments of the natural environment and have value for predicting future threats to human health.

Further Reading (JSTOR):
Bitman, J., Cecil, H.C., Fries, G.F. DDT-Induced Inhibition of Avian Shell Gland Carbonic Anhydrase: A Mechanism for Thin Eggshells. Science, New Series, Vol. 168, No. 3931 (May 1, 1970), pp. 594-596.
Cade, T.J., Lincer, J.L., White, C.M., Roseneau, D.G., Swartz, L.G. DDE Residues and Eggshell Changes in Alaskan Falcons and Hawks. Science, New Series, Vol. 172, No. 3986 (May 28, 1971), pp. 955-957.
Fox, G.A. A Simple Method of Predicting DDE Contamination and Reproductive Success of Populations of DDE-Sensitive Species. The Journal of Applied Ecology, Vol. 16, No. 3 (Dec., 1979), pp. 737-741.
Grier, J.W. Ban of DDT and Subsequent Recovery of Reproduction in Bald Eagles. Science, New Series, Vol. 218, No. 4578 (Dec. 17, 1982), pp. 1232-1235.
Hickey, J.J., Anderson, D.W. Chlorinated Hydrocarbons and Eggshell Changes in Raptorial and Fish-Eating Birds. Science, New Series, Vol. 162, No. 3850 (Oct. 11, 1968), pp. 271-273.
Lincer, J.L. DDE-Induced Eggshell-Thinning in the American Kestrel: A Comparison of the Field Situation and Laboratory Results. The Journal of Applied Ecology, Vol. 12, No. 3 (Dec., 1975), pp. 781-793.
Newton, I., Bogan, J.A., Rothery, P. Trends and Effects of Organochlorine Compounds in Sparrowhawk Eggs. The Journal of Applied Ecology, Vol. 23, No. 2 (Aug., 1986), pp. 461-478.
Peakall, D.B. DDE: Its Presence in Peregrine Eggs in 1948. Science, New Series, Vol. 183, No. 4125 (Feb. 15, 1974), pp. 673-674.
White, C.M., Emison, W.B., Williamson, F.S.L. DDE in a Resident Aleutian Island Peregrine Population. The Condor, Vol. 75, No. 3 (Autumn, 1973), pp. 306-311.
Wiemeyer, S.N., Porter, R.D. DDE Thins Eggshells of Captive American Kestrels. Nature, Vol. 227, No. 5259 (Aug. 15, 1970), pp. 737-738.
<urn:uuid:4dced0e8-f651-4e45-a88f-beb392a39ee2>
3.75
1,144
Knowledge Article
Science & Tech.
69.515469
Sometimes in science, the answer you end up with is not exactly the question you started with. The path to discovery is not always predictable. Researchers have to constantly evaluate what they are finding, and be ready to adjust their course when the data leads down a different path. This is especially true in tropical ecology, where there is so much basic information yet to be learned. Such is the case with our new paper published this week in the Proceedings of the National Academy of Sciences (PNAS). We started tracking the fate of tropical seeds with small radio-transmitters because we thought that the predation of agoutis (the main mover of palm seeds) by ocelots (the main predator of agoutis) would leave a bunch of “orphan seeds” buried in the forest where no other agoutis would discover them. These orphaned seeds would thus be free to germinate and grow into new palm trees. It was a cool idea, and would show how predators affect prey, ultimately trickling down through the trophic levels to affect seed survival, forest regeneration, etc. We had all the hypotheses, sub-hypotheses, and sub-sub-hypotheses worked out. Now we just had to go into the jungle and prove ourselves right. We set out to map all the palm trees, radio-collar a bunch of agoutis, have them disperse our special radio-tagged seeds, and then wait for the ocelots to pick them off one by one. Earlier research suggested that only about 1/3 of these rodents survive one year, with most falling to the island’s ocelots. If we did our part, we knew we could count on the ocelots to do theirs. This was actually a huge amount of work: we needed “our agouti” to move “our seed” and bury it in a little hole for safe-keeping. Camera traps told us whether one of “our agoutis” moved a particular seed, and more often than not it was an un-marked agouti, or a rat or squirrel.
Initially animals just ate most of the seeds, but once they recovered from the recently-ended hungry season, they started storing seeds in scattered underground caches for later, when little fresh fruit would be available. Finally our radio-tagged seeds were moving. Only, and here’s where the change in the path-to-discovery comes in, the seeds didn’t stop moving. Once a seed was buried we figured we’d just sit and wait till it was dug up and eaten, sometime in the next few months or year. Instead, the seeds were quickly dug up, moved, and buried again, and again, and again. During our first season of field-work this high rate of movement caught us off guard, and the additional work of tracking down these crazy seed movements completely wore down everyone on the project. Given the super-high rates of seed movement, we realized we needed to search for moving seeds (actually, listen for their radio-signals) every single day. Even daily checks didn’t catch all the movements, because we observed some seeds actually move twice in one day. What the heck was going on? Why were agoutis moving seeds so often? Some seeds were going hundreds of meters. Were agoutis shifting home-ranges and taking their seeds with them? Or were there thieves amongst us? For our second field season we decided to switch tactics a bit, and investigate this new research path illuminated by the crazy seed movements. We mounted a major trapping effort to try and capture and mark as many agoutis as we could. By being able to recognize lots of animals in one area, we hoped to determine who was taking the seeds. We hid motion-sensitive cameras next to the buried seeds to see which animals dug the seeds up. Our videos (example above) showed that most (84%) of the seeds were being stolen by robber-agoutis. These unscrupulous rodents weren’t just eating the buried treasure, but often moved it over to the center of their territory, where they could more easily find it during the upcoming hungry-season.
This repeated thievery resulted in seeds moving much further than you would expect from a single agouti. Slightly more than 1/3 of the seeds moved more than 100 m, which is typically considered far enough to escape the competition of sibling seeds that just drop underneath the mother tree. One seed was cached 36 different times, traveling over 749 m back and forth between territories until it ended up 280 m from its starting point. We made a movie illustrating this amazing amount of movement (shown below with a fun soundtrack). Although our test of the predator-mediated seed dispersal hypothesis didn’t go off exactly as planned, our results incidentally disproved it. Even if seeds do become “orphaned” by predated agoutis, we now know that the rates of seed theft are so high that these orphaned seeds still have a good probability of being discovered. While this particular route of influence between predators, prey, and trees is probably not important to forest dynamics, our other work shows how other behaviors of these agoutis are heavily influenced by the threat of predation (recent Biotropica paper, and another one in the works). This discovery of robber-rodents helping trees by moving their seeds long distances was made even more interesting by the fact that the dispersal of this particular type of tree has been a tropical enigma since Janzen and Martin published “Neotropical anachronisms: The fruits the gomphotheres ate” in 1982. This paper, and dozens since it, suggested that the very largest fruits and seeds found in the Neotropics must have co-evolved to be dispersed by the now-extinct Pleistocene megafauna. How these trees have survived the >10,000 years since the megafaunal extinction has puzzled tropical ecologists for decades. These results are also important when applied to current mammalian extinctions.
If tree species are able to survive due to “disperser substitution,” maybe this holds a glimmer of hope for trees that are dispersed by mammals currently being hunted to extinction or local extirpation. Alternately, our results also show how important a role these little agoutis can play in their ecosystems. When poaching gets so bad that these smaller-sized mammals are also depleted, the trees’ seeds may have no chance to survive. Our accidental discovery of robbing rodents offers a new potential answer to this mystery, and highlights the potential rewards of following thieves down the dark and mysterious scientific path to discovery.

By Roland Kays
<urn:uuid:b9840a30-a682-4d93-9a3b-b015ad6621dc>
3.6875
1,406
Personal Blog
Science & Tech.
45.954944
The effect shown in the gif is called gravitational lensing. What is gravitational lensing? Gravitational lensing is the effect seen when a massive object lies between Earth and a more distant object, along our line of sight. For example: Earth ————> Massive Object ————> Far-away object. When we try looking at the far-away object, the massive object bends space-time around it, causing the light rays from the far-away object to travel in a curved path around it and into our line of sight. As a result, we can often see the far-away object magnified, which helps astronomers understand the early universe. The gif shows a far-away galaxy being gravitationally lensed by a closer black hole.
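The bending described above can be quantified with the standard weak-field deflection formula, alpha = 4GM/(c²b), where b is the light ray's closest approach to the mass. A quick sketch (the formula, the constants, and the solar test case are standard physics values, not taken from the post):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
ARCSEC_PER_RAD = 206265.0

def deflection_arcsec(mass_kg, impact_parameter_m):
    """Weak-field light deflection angle alpha = 4GM / (c^2 b), in arcseconds."""
    return 4 * G * mass_kg / (C**2 * impact_parameter_m) * ARCSEC_PER_RAD

# Light grazing the Sun (M ~ 1.989e30 kg, R ~ 6.963e8 m) is deflected
# by about 1.75 arcseconds -- the value famously measured in 1919.
print(deflection_arcsec(1.989e30, 6.963e8))
```

For a galaxy lensed by a black hole, as in the gif, the same formula applies with a far larger mass, which is why the distortion is visible at cosmological distances.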
<urn:uuid:75907129-7650-4059-9a6a-4847778e9436>
3.578125
151
Knowledge Article
Science & Tech.
49.219096
A string conversion is an expression list enclosed in reverse (a.k.a. backward) quotes:

string_conversion ::= "`" expression_list "`"

A string conversion evaluates the contained expression list and converts the resulting object into a string according to rules specific to its type. If the object is a string, a number, None, or a tuple, list or dictionary containing only objects whose type is one of these, the resulting string is a valid Python expression which can be passed to the built-in function eval() to yield an expression with the same value (or an approximation, if floating-point numbers are involved). (In particular, converting a string adds quotes around it and converts "funny" characters to escape sequences that are safe to print.) It is illegal to attempt to convert recursive objects (e.g., lists or dictionaries that contain a reference to themselves, directly or indirectly). The built-in function repr() performs exactly the same conversion on its argument as enclosing it in reverse quotes does. The built-in function str() performs a similar but more user-friendly conversion.
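A short illustration of the round-trip property described above. (Python 3 shown, where the backquote syntax no longer exists; repr() performs the equivalent conversion.)

```python
value = [1, "it's", 3.5, None]

# repr() yields a string that is a valid Python expression...
s = repr(value)
print(s)

# ...which eval() can turn back into an equal object.
assert eval(s) == value

# Converting a string adds quotes and escapes "funny" characters.
print(repr("tab:\there"))
```

str() on the same list gives a similar result, but for a bare string str() returns the characters unquoted, which is the "more user-friendly" behavior the text mentions.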
<urn:uuid:27ea6199-72a5-4cad-b1bc-1854ad6d0f3a>
3.46875
233
Documentation
Software Dev.
41.647533
[erlang-questions] Comma, semicolon. Aaaaa
Emmanuel Okyere
Mon Sep 17 04:00:33 CEST 2007

On 16 Sep 2007, at 15:19, Ahmed Ali wrote:

> I'll try to give you my view on the syntax. The comma (,) is like (;) in
> languages like Java/C. Each method is ended with a dot (.). If you have
> more than one clause of a method with the same number of parameters
> (called arity in Erlang), you need to separate them with (;). There is
> no similar construct in Java/C for this.
> Similarly, you can think of if/case statements, which use (;) to separate
> different if/case conditions, (,) to separate the clauses for each
> condition, and (.) to indicate the end of the if/case statement.

If the end of the if/case clause is not the end of the entire function, it will not end with a period; a period followed by whitespace denotes the end of an entire function.

Anyone can do any amount of work, provided it isn't the work he is supposed to be doing at that moment. – Robert Benchley

More information about the erlang-questions mailing list
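A small Erlang function makes the convention concrete (illustrative snippet, not from the thread): commas separate expressions within a clause, semicolons separate the clauses of one function, and a single period ends the whole definition.

```erlang
%% Commas separate expressions within a clause, semicolons separate
%% clauses of the same function, and the period ends the definition.
describe(N) when N > 0 -> io:format("positive~n"), positive;
describe(N) when N < 0 -> io:format("negative~n"), negative;
describe(_)            -> io:format("zero~n"),     zero.
```

All three clauses have arity 1, so they belong to the same function `describe/1`; only the last clause is followed by the period.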
<urn:uuid:9efcf2db-82ce-4eae-80e4-0537a5e94163>
2.78125
285
Comment Section
Software Dev.
69.479007
For a short time today -- too short to get a decent picture -- the sky over Minnesota was wearing a pinstripe suit, thanks to the weather conditions that made obvious what is true all the time -- there's a highway over us. Jet contrails only form to this degree in certain weather conditions. Let's turn it over to Dr. Steve Ackerman at the University of Wisconsin for a proper explanation: If you are attentive to contrail formation and duration, you will notice that they can rapidly dissipate or spread horizontally into an extensive thin cirrus layer. How long a contrail remains intact depends on the humidity structure and winds of the upper troposphere. If the atmosphere is near saturation, the contrail may exist for some time. On the other hand, if the atmosphere is dry, then as the contrail mixes with the environment it dissipates. Contrails are a concern in climate studies, as increased jet aircraft traffic may result in an increase in cloud cover. It has been estimated that in certain heavy air-traffic corridors, cloud cover has increased by as much as 20%. An increase in cloud amount changes the region's radiation balance. For example, solar energy reaching the surface may be reduced, resulting in surface cooling. They also reduce the terrestrial energy losses of the planet, resulting in a warming. Jet exhaust also plays a role in modifying the chemistry of the upper troposphere and lower stratosphere. NASA and the DOE are sponsoring a research program to study the impact contrails have on atmospheric chemistry, weather and climate. Coincidentally, 11 years ago tomorrow provided some important data on the question of whether contrails can influence the weather, the Christian Science Monitor reported... Then Sept. 11, 2001 presented a unique opportunity to study what the sky looked like without airplanes and contrails. In the wake of the 9-11 terrorist attacks, the FAA prohibited commercial aviation over the United States for three days.
That's when David Travis, an atmospheric scientist at the University of Wisconsin, Whitewater, thought to look at how temperatures might differ at temperature stations around the country. He found [PDF] that, for those three days, the average range between highs and lows at more than 4,000 weather stations across the US was 1 degree C wider than normal. In other words, contrails seemed to raise nighttime temperatures and lower daytime ones. But the real effect was in daytime highs, which were much higher. That would seem to indicate that, contrary to prevailing thinking, contrails might have a net cooling effect. Certain areas seemed particularly sensitive to the absence of contrails. Because of unique climatic conditions in the atmosphere in these regions -- chiefly, moisture-laden air -- the Pacific Northwest and the Midwest are often covered by contrails. But when planes stopped flying right after 9-11, Travis also found that these areas saw the most dramatic increase in daytime highs. It's unlikely much is being influenced, though, by today's contrails. They're evaporating fairly quickly, although several of them are faintly visible -- especially in southeast Minnesota -- in this satellite photo, taken around 10:15 this morning. I remember wondering, and asking, about this when I was little. Something to the effect of "If jets make clouds, can they cause rain?" I think I got some dismissive answer from whomever I asked (I think most children just accept that sometimes no one knows, and no one seems to care, no matter how important the topic seems to them at the moment). Later I remember being on a trip with my family and my aunt, somewhere that had a nuclear cooling tower (billowing out steam puffs at the time); my aunt referred to it as a "Cloud Factory."
It probably sounded less frightening in her mind than "Nuclear Reactor," though I was concerned that if a factory could produce light puffy cumulonimbus clouds, what would stop someone from producing storm clouds... and if these factories did exist near the gulf, devastating hurricanes... That seemed like a far more frightening concept to me at the time than the reality of a simple power plant. Now I'm wondering if cooling towers have similar effects to contrails, or if they are releasing the moisture too close to the ground to have a similar impact. They were everywhere over the Capitol this morning. I got some nice video I'll post to my Tumblr when I get home. It was about as many as I can ever remember seeing in the city. On the farm where I grew up the southern sky was filled with them in the winter as planes crossed the midwest. From Chicago to Denver, I assumed. // Now I'm wondering if cooling towers have similar effects to contrails, or if they are releasing the moisture too close to the ground to have a similar impact. Sometimes in the winter we get some snow flurries around MPR; it comes from the District Energy cooling tower vapor. Prince says these are chemtrails. He said this during an interview with Tavis Smiley. What does Prince know that we don't know... ;) ;) As I was driving to work today, I was noticing the contrails, and this was at 7:00 am. They were beautiful in the light of the rising sun, reminding me of the translation given at the end of Koyaanisqatsi: "Near the day of Purification, there will be cobwebs spun back and forth in the sky." One winter I drove through a tiny snowstorm outside of the Rock Tenn recycling plant at I-94 and 280 in St. Paul that was made from the steam coming out of their stacks. Made the highway unexpectedly slippery. The great Boston band O Positive released an EP in the 1980s called Cloud Factory. (Always nice when I can plug a great Boston band!) Not as cool as I thought. Didn't get them heading all the way south.
I have been contacted in the past by "chemical contrail" conspiracy theorists who believe there is a secret effort underway to change global climate via chemistry to either a) hasten or b) constrain the warming of our little blue marble. Pretty crazy stuff, either way.
<urn:uuid:be919ee4-4597-4ad7-abe3-187330d2c88c>
3.734375
1,263
Comment Section
Science & Tech.
53.201346
The equatorial Atlantic is a complex region dominated by the presence of large-scale westward currents and eastward counter-currents. Some of the more extensive studies of this region include the GARP (Global Atmospheric Research Program) Atlantic Tropical Experiment (GATE), the First GARP Global Experiment (FGGE), the Seasonal Equatorial Atlantic (SEQUAL) program and the World Ocean Circulation Experiment (WOCE). The classical view of the surface currents in this region shows a westward-flowing North Equatorial Current (NEC), an eastward-flowing North Equatorial Counter Current (NECC), a westward-flowing South Equatorial Current (SEC), and an eastward-flowing South Equatorial Counter Current (SECC). The North Equatorial Current (NEC) as represented by the Mariano Global Surface Velocity Analysis (MGSVA). The NEC is the broad westward flow that is the southern component of the N. Atlantic subtropical gyre. It is fed by the Canary Current, and its waters eventually end up in the Gulf Stream system, either via the Antilles Current or through the Caribbean via the Guiana Current. The NEC is found in the North Atlantic from about 7°N to about 20°N (see Figure 1; Schott et al., 2002). Driven by the Atlantic trade-wind belt, the NEC is a broad westward-flowing current that forms the southern limb of the North Atlantic subtropical gyre (Bourles et al., 1999b). The current originates from the northwestern coast of Africa, where it is fed mainly by the cooler waters flowing from the northeast Atlantic. As the NEC travels across the open ocean, it is joined by waters originating south of the equator, thus entraining waters from the South Atlantic into the North Atlantic. The details of these various pathways of cross-equatorial water exchange in the open ocean remain unclear, in part because of the weak, meandering North Equatorial Counter Current (NECC), which lies south of the NEC.
It is understood that the complex seasonal retroflection and recirculation current systems that straddle the equatorial region, and the strong western boundary currents along the American shelf, also play an important role in water-mass exchange with the NEC; however, exactly how is not well understood (Arnault et al., 1999; Schott et al., 2002). Estimates by Onken (1994) suggest that wind stress causes about 12 Sv of cross-equatorial transport to the north. When the NEC approaches the shelf region of the Americas, its interaction with bottom topography and accompanying western boundary currents produces a complicated, seasonally variable flow regime, both laterally and vertically, that is still the source of continued investigations. The coastal and near-coastal region along the east coast of South America is, as described by Arnault and others (1999), a spawning ground for mesoscale eddy activity, which compounds the difficulties in discerning the general flow regimes. However, most authors concede that the overall flow is to the northwest, supplying the Guiana and Caribbean Currents (Bourles et al., 1999a; Arnault et al., 1999; Bourles et al., 1999b). In the open ocean the NEC, a broad current, is generally identifiable north of 10°N and has a westward mean velocity between 10 and 15 cm s-1 (Richardson and Walsh, 1986). Its peak velocities of 15 cm s-1 are prevalent in boreal summer (July/August), with a weakening to 10-12 cm s-1 during spring and fall (Arnault, 1987). Most of the NEC waters flow northwest to feed the Guiana Current and the Caribbean Current (Bourles et al., 1999a); some of the water is retroflected cyclonically to join the eastward-flowing NECC. According to Wilson and others (1994), some of the NEC water is also deflected downward, supplying approximately 12 Sv to the North Equatorial Undercurrent (NEUC), which lies at intermediate depths between 100 m and 300 m.
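Transports in this literature are quoted in sverdrups (1 Sv = 10^6 m³ s⁻¹). As a rough back-of-the-envelope check on how the quoted velocities and transports relate, a mean velocity and cross-section can be converted to Sv (the width and depth below are illustrative round numbers, not values from the cited papers):

```python
SV = 1.0e6  # one sverdrup = 10^6 cubic metres per second

def transport_sv(mean_velocity_m_s, width_m, depth_m):
    """Volume transport of a current with the given cross-section, in sverdrups."""
    return mean_velocity_m_s * width_m * depth_m / SV

# Illustrative only: a 0.12 m/s (12 cm/s) current, 500 km wide and 200 m
# deep, carries about 12 Sv -- the same order of magnitude as the NEC
# transports quoted in the text.
print(transport_sv(0.12, 500e3, 200))
```

The point of the sketch is simply that a broad, shallow, slow current can still carry a transport comparable to major boundary currents, which is why mean-velocity estimates of 10-15 cm s-1 are consistent with transports near 10 Sv.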
The boundary between the NEC and the Guiana Current exhibits a large amount of seasonal variation. Latitudinal shifts in this boundary cause the currents at 2°N to shift direction four times a year between 0° and 10°W. From approximately January to February, and then again between August and October, flows are eastward. Westward flows prevail from November to December and again from March through April (Richardson and Walsh, 1986). The annual mean transport is 8.5 Sv, with the lowest transports occurring in late winter and in spring (Bayev and Polonskiy, 1991). The annual transport variation in the NEC is 2.3 Sv and the semiannual variation is 1.1 Sv, in addition to a large residual variance of 1.3 Sv (Bayev and Polonskiy, 1991). The maximum variation in transport has a magnitude of 1.5 Sv and occurs between June and January, with a mean transport of about 10 Sv during this period (Bayev and Polonskiy, 1991). Seasonal variation of the North Equatorial Current coincides with that of the Canary Current, attaining its maximum during summer (Mittelstaedt, 1991; Stramma and Siedler, 1988). In the late summer and early fall, temperatures range between 30° and 32°C, and in the winter between 24° and 28°C. Salinities range widely between the summer and winter months. According to data gathered since 1998 by the PIRATA (Pilot Research Moored Array in the Tropical Atlantic) program, salinities at 38°W, 12°N are typically about 35.2 in the winter and increase to about 36.4 in the summer, when evaporation increases. Subsurface values, from about 20 to 40 meters, have been recorded as low as 34.4 during the winter.

Arnault, S., 1987: Tropical Atlantic geostrophic currents and ship drifts. Journal of Physical Oceanography, 18, 1050-1060.
Bayev, S.A., and A.B. Polonskiy, 1991: Seasonal variability of the Equatorial Countercurrent and the North Equatorial Current in the central tropical Atlantic. Oceanology, 31, 155-159.
Bourles, B., Y. Gouriou, and R. Chuchla, 1999a: On the circulation in the upper layer of the western equatorial Atlantic. Journal of Geophysical Research, 104(C9), 21151-21170.
Bourles, B., R.L. Molinari, E. Johns, W.D. Wilson, and K.D. Leaman, 1999b: Upper layer currents in the western tropical North Atlantic. Journal of Geophysical Research, 104(C1), 1661-1375.
Mittelstaedt, E., 1991: The ocean boundary along the northwest African coast: Circulation and oceanographic properties at the sea surface. Progress in Oceanography, 26, 307-355.
Onken, R., 1994: The asymmetry of western boundary currents in the upper Atlantic Ocean. Journal of Physical Oceanography, 24(5), 928-948.
Richardson, P.L., and D. Walsh, 1986: Mapping climatological seasonal variations of surface currents in the tropical Atlantic using ship drifts. Journal of Geophysical Research, 91, 10537-10550.
Schott, F.A., P. Brandt, M. Hamann, J. Fischer, and L. Stramma, 2002: On the boundary flow off Brazil at 5-10°S and its connection to the interior tropical Atlantic. Geophysical Research Letters, 29(17), 1840.
Stramma, L., and G. Siedler, 1988: Seasonal changes in the North Atlantic Subtropical Gyre. Journal of Geophysical Research, 93, 8111-8118.
Wilson, W.D., E. Johns, and R.L. Molinari, 1994: Upper layer circulation in the western tropical North Atlantic Ocean during August 1989. Journal of Geophysical Research, 99, 22513-22523.
<urn:uuid:a1b3d219-5d2d-4267-b851-9e4ff269b537>
3.84375
1,725
Academic Writing
Science & Tech.
60.617668