Experimental explanation of supercooling: why water does not freeze in the clouds
21 April 2010, European Synchrotron Radiation Facility (ESRF)

Supercooling, a state where liquids don’t solidify even below their normal freezing point, still puzzles scientists today. An example of this phenomenon is found every day in meteorology: clouds at high altitude are an accumulation of supercooled droplets of water below their freezing point. Scientists from the Commissariat à l’Energie Atomique et aux Energies Alternatives (CEA), the Centre National de la Recherche Scientifique (CNRS) and the ESRF have found an experimental explanation of the phenomenon of supercooling.

Supercooled liquids are trapped in a metastable state even well below their freezing point, which can only be achieved in liquids that do not contain seeds that might trigger crystallization. Clouds at high altitude are a good example of this: they contain tiny droplets of water that, in the absence of seed crystals, do not form ice despite the low temperatures. In everyday life, though, there is usually some crystalline impurity in contact with the liquid that will trigger the crystallization process, and therefore the freezing. Controlling solidification behaviour is important for applications ranging from hail prevention to technological processes such as welding and casting, or even the growth of semiconductor nanostructures.

Supercooling was discovered by Fahrenheit as early as 1724, but even today the phenomenon remains a subject of intense discussion. Over the last 60 years, the very existence of deep supercooling has led to speculation that the internal structure of liquids could be incompatible with crystallization. Models propose that a significant fraction of the atoms in liquids arrange into five-fold coordinated clusters. To form a crystal, however, one needs a structure that can be repeated periodically, filling the entire space.
This is not possible with five-fold coordinated clusters. In the two-dimensional analogue, a plane cannot be filled by pentagons only, whereas triangles, rectangles or hexagons can fill a plane perfectly. In this example, pentagons are an obstacle to crystallization. Until now, there was no experimental proof that these five-fold coordinated structures are at the origin of supercooling.

The researchers from the CEA, CNRS and ESRF studied the structure of a particular liquid, a gold-silicon alloy, in contact with a specially decorated silicon (111) surface, where the outermost layer of the solid featured pentagonal atomic arrangements. Their findings confirmed that a strong supercooling effect took place. “We studied what happened to the liquid in contact with a five-fold coordinated surface”, explains Tobias Schülli, first author of the paper. The team performed the control experiment with the same liquid exposed to three-fold and four-fold coordinated surfaces, which reduced the supercooling effect dramatically. “This constitutes the first experimental proof that pentagonal order is at the origin of supercooling”, explains Schülli.

It was during their studies, originally focused on the growth of semiconducting nanowires, that the scientists discovered the unusual properties of these liquids. As they were observing the first stage of growth of the nanowires, they saw that the metal-semiconductor alloy they used remained liquid at a much lower temperature than its crystallization point, and so they decided to investigate this phenomenon. These liquid alloys are popular in applied research as they enable the growth of sophisticated semiconductor nanostructures at low growth temperatures. Most of these nanowire structures are grown on silicon (111), the same surface used by the team. Semiconducting nanowires are promising candidates for future electronic devices.
Prominent examples are solar cells, where scientists are working on the integration of silicon nanowires in order to increase their performance.

Image caption: Droplet of a gold-silicon liquid alloy on a silicon (111) surface. Pentagonal clusters formed at the interface exhibit a denser structure compared to solid gold and prevent the liquid from crystallizing at temperatures as low as 300 Kelvin below the solidification temperature. Credits: M. Collignon.

VIDEO: When pure water is cooled below its freezing point, it may remain in a supercooled state. It can then rapidly crystallize into ice when stimulated by an appropriate trigger, such as shaking the bottle. Credits: J. Cusack/ESRF.
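The two-dimensional pentagon argument is easy to verify numerically. The following sketch (my illustration, not part of the ESRF study) checks which regular polygons can meet flush around a vertex: copies of a regular n-gon can tile around a point only when the polygon's interior angle divides 360 degrees evenly.

```python
# Why regular pentagons cannot tile a plane: the interior angle of a regular
# n-gon is (n - 2) * 180 / n degrees, and tiles can meet flush around a
# point only if that angle divides 360 evenly.

def interior_angle(n: int) -> float:
    """Interior angle of a regular n-gon, in degrees."""
    return (n - 2) * 180.0 / n

def can_tile_around_point(n: int) -> bool:
    """True if copies of a regular n-gon can meet flush around a vertex."""
    return (360.0 / interior_angle(n)).is_integer()

for n, name in [(3, "triangle"), (4, "square"), (5, "pentagon"), (6, "hexagon")]:
    print(f"{name}: interior angle {interior_angle(n):.0f} deg, "
          f"tiles around a point: {can_tile_around_point(n)}")
```

Triangles (60°), squares (90°) and hexagons (120°) pass; the pentagon's 108° does not divide 360°, which is the obstruction the article describes.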
Black holes are among the most fascinating phenomena thought to exist in the Universe. A black hole is a region of space in which the gravitational field is so powerful that nothing, not even light, can normally escape its pull after having fallen past its event horizon. The equations describing a black hole were developed by Einstein in his groundbreaking theory of gravity - the General Theory of Relativity (GR). We use super-computer simulations to study the processes believed to lead to the formation of the first super-massive black holes (SMBHs). Shown in the graphic is a region of high gas density in a simulated part of the Universe. It is within the very densest regions where we expect the first SMBHs to form at redshifts greater than 10. Progressively zooming in on one of the highest density regions we can study the contraction of the gas at the centre of a dark matter halo with a "virial temperature" >10⁴ K under the influence of gravity. SMBHs and their formation are currently a very active area of research because it is these objects which eventually grow to form the quasars we see at redshifts 3–7. Figure Credits: John Regan; the simulations were carried out on the DARWIN supercomputer at the University of Cambridge and on the Cosmos supercomputer at the Department of Applied Mathematics and Theoretical Physics (DAMTP) at the University of Cambridge. The simulations were performed using the Adaptive Mesh Code ENZO.
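The event horizon mentioned above has a simple closed form for a non-rotating mass: the Schwarzschild radius r_s = 2GM/c². The sketch below is my own illustration, not a figure from the simulations; the 10⁹-solar-mass example is an arbitrary choice typical of quasar-scale SMBHs.

```python
# Schwarzschild radius r_s = 2GM/c^2: the event-horizon radius of a
# non-rotating black hole. Constants are standard approximate values.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """Event-horizon radius in metres for a non-rotating mass."""
    return 2.0 * G * mass_kg / C**2

# An illustrative 10^9 solar-mass SMBH:
r = schwarzschild_radius(1e9 * M_SUN)
print(f"r_s = {r:.2e} m (about {r / 1.496e11:.1f} AU)")
```

For a solar mass this gives roughly 3 km; for a billion-solar-mass black hole, a horizon of order 20 astronomical units.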
Return to Vignettes of Ancient Mathematics I Prop. 15 II Prop. 2 (diagram 1 = general diagram) note on the theorem 1. If two areas are enclosed by a straight-line and the section of a right-angled cone, which we are able to apply to the given straight-line, and they do not have the same center of weight, the center of weight of the magnitude composed from both of them will be on the straight-line that joins the centers of their weight, which divides the mentioned straight-line in such a way that its segments have the same ratio inversely with the areas. (diagram 2) Let there be two areas, AB, GD, as stated,(*1e*) points E, Z the centers of their weight, and the ratio which AB has to GD, let ZQ have this to QE. One must show that point Q is the center of the weight of the magnitude composed from both areas AB, GD. (diagram 3) Let each of ZH, ZK be, in fact, equal to EQ, (diagram 4) and EL equal to ZQ, that is HE. Therefore, LQ will also be equal to KQ, and furthermore as LH is to HK so is AB to GD. For each is double each. (diagram 5) Let the area of AB be applied along LH on each side of LH, so that MN is equal to AB. In fact, point E is the center of the weight of MN (I prop. 9).(*2*) (diagram 6) Let NX be, in fact, filled out, but MN will have a ratio to NX that LH has to HK. But AB also has to GD the ratio of LH to HK. Therefore, as AB is to GD, so too is MN to NX. And alternando. But AB is equal to MN. (diagram 7) Therefore GD is also equal to NX, and point Z is the center of its weight (and the center of NX because of I prop. 9). (diagram 8) And since LQ is equal to QK, and a whole, LK, bisects the opposite sides, point Q is the center of the weight of a whole, PM (I prop. 9). But MP is equal to that from both MN, NX. (diagram 9) Thus too point Q is the center of the weight of that from AB, GD. Assumption 6 Commentary of Eutocius, pp.
278.2-16 (the wording of the text follows more closely the wording of the statement of the theorem, but note the other differences): On the 2nd book. Having pursued precisely the first [book] and having made clear what was difficult to understand in it, we consider it also necessary to set out appropriately what was obscurely stated in the second book. He says then in the first sentence of the first theorem, “Let there be supposed areas AB, GD enclosed by a straight-line and the section of a right-angled cone which we are able to apply to the given straight-line.” It is not possible to find this at once from the things proved here. Since it has been proved by him, as also he said in On the Sphere and Cylinder (in the introduction), that such a figure is a third again the triangle having the same base as it and an equal height, but we are able to apply the rectilinear plane that is a third again of a triangle along the given straight line, it is obvious that [we can also do it] to figures of this sort. The things said in the construction are clear through the tenth theorem of the first of these books. (*1*) This theorem is surrounded with controversy. Why does Archimedes prove it instead of just using Propositions 6-7 of Book I? Moreover, a careful perusal of the theorem shows that it is crucial to the proof that one can apply the area of parabola AB to line LH, as Eutocius notes. The commensurable case, theorem I 6, does not apply the area to the line but supposes that equal commensurable weights can be divided up into units of the common measure. Of course, theorem 7 does neither, by reducing the incommensurable case to the commensurable one. In a sense, II 1 and I 6 are different special cases of the general principle of the balance where the use of some sort of distribution of weights is possible. Is there any guarantee that one can always divide up the weight into its measures?
It is plausible that if one can prove two weights commensurable, one should be able to do this. For example, given two commensurable circles, one can readily construct a circle that is the common measure by using the diagonal of the square common measure of the squares of the diagonals. Of course, a similar problem exists with the proofs of Euclid, Elements X. Does Archimedes need the general principle of the balance, I 6-7, in Book II, or only the case of the parabola, II 1? In fact, in the logical structure of both books, I 6-7 is only used for I 8. However, I 8 is a fundamental theorem for both books. This issue depends on how one interprets I 6-7, however (more below). In fact, Archimedes does not seem to need or use II 1 at all in Book II. If so, II 1 stands alone in the logical structure of Book II. The issue here is that sometimes Archimedes infers that two parabolas equal in area have their centers of weight equidistant from some point. This follows equally from II Prop. 1 and the much more trivial I Prop. 4. However, at the same time he will make the exact same claim about polygons. Here, II Prop. 1 will be useless, but I Prop. 4 is required. Hence, in my comments, I ignore applications that may be this sort of trivial application of II Prop. 1 and cite I Prop. 4 instead. The reader should take note of this, however. We can speculate why Archimedes proves II 1. Perhaps he wants to show that this use of application-of-areas arguments involves a simpler proof. Perhaps he is concerned with more philosophical issues. Perhaps he regards Book II as a complete treatise on parabolas and so proves every relevant, important theorem on centers of weight. He doesn't tell us. The theorem assumes I Prop. 9, that the center of weight of a parallelogram is where the bisectors of the sides meet. It is interesting that Archimedes assumes that there is a center of weight and uses this point in his proof (cf.
the postulates of Book I, which presuppose that there is a unique center of weight). However, he does not actually find it until Proposition 8. Why does Eutocius refer to the introduction to On the Sphere and Cylinder and not, for example, either the Quadrature of the Parabola, the work referred to in the introduction, or the Method? He has already written a commentary on the On the Sphere and Cylinder, but perhaps he does not know either of these other works. Cf. Ernst Nizze, Archimedes von Syrakus vorhandene Werke (Stralsund, 1824), p. viii, and I.L. Heiberg, Quaestiones Archimedeae (Copenhagen, 1879), 29. Each is available through Google Books. In fact, Eutocius (292.27-294.1-4, on prop. 8) does refer once to the theorem in question as from On the Section of the Right-angled Cone. This need not mean that he knows the work other than through the reference in On the Sphere and Cylinder. (*2*) Actually, the step depends on two applications of I Prop. 9, as it is used in the first version of I Prop. 10. I Prop. 9 itself only uses Assumption 4 and Corollaries 1 to Propositions 4 and 5.
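In modern terms, the inverse-ratio statement of II Prop. 1 says that the combined centre of weight is the weighted mean of the two centres. The numeric check below is my own illustration, not part of Archimedes' or Eutocius' text: with areas in the ratio AB : GD = 3 : 1, the balance point Q divides the segment so that ZQ : QE = 3 : 1.

```python
# Modern check of II Prop. 1 on a 1-D balance line: the combined centre of
# weight of two areas at positions E and Z divides EZ inversely as the areas.

def combined_centre(area_ab: float, e: float, area_gd: float, z: float) -> float:
    """Weighted mean position of the two areas (the balance point Q)."""
    return (area_ab * e + area_gd * z) / (area_ab + area_gd)

area_ab, area_gd = 3.0, 1.0   # arbitrary areas with ratio 3 : 1
e, z = 0.0, 4.0               # positions of their centres of weight
q = combined_centre(area_ab, e, area_gd, z)

zq, qe = abs(z - q), abs(q - e)
print(f"Q = {q}, ZQ : QE = {zq} : {qe}")  # 3 : 1, matching AB : GD
assert abs(zq / qe - area_ab / area_gd) < 1e-12
```

The larger area sits closer to the balance point, exactly as the theorem's inverse proportion requires.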
As I walked through the canopy of hemlocks to reach the front door of the Cary Institute a few days ago, I was greeted by the singing of many species of birds. Although the evergreen foliage hid them from view, the variety of their songs was impressive. There were clearly many different species flitting about and singing high above my head. It was an almost boisterous spring chorus. We owe much of this delightful medley to Rachel Carson, whose book, "Silent Spring," was published 50 years ago. She forcefully sounded the alarm that indiscriminate spraying of DDT, an insecticide that has broad killing power, was having negative effects on the environment. DDT did control target insects that damaged crops or carried human diseases. It was used to kill mosquitoes and fleas, for example, and helped bring malaria and typhoid fever under control. It was first applied extensively during World War II, to help protect the health of both soldiers and civilians. But DDT did not discriminate between beneficial and nuisance insects. And the chemical took a heavy toll on the bees and butterflies that pollinate crops and ornamental plants. It also killed insects that are an essential part of the food chain in our field, stream, and forest ecosystems. Beyond even that environmental damage, DDT was found to accumulate in the fat of longer-lived organisms. In the food chain, the chemical biomagnifies, with larger animals concentrating higher amounts of the toxin. Birds of prey and scavengers, as well as song birds that eat many insects during breeding and nesting seasons, can be killed by DDT. It can also interfere with bird reproduction by causing egg shells to become so thin that the weight of incubating parents can crush them. Birds were not the only victims of DDT's ill effects. There are many studies documenting the toxicity of DDT to mammals, including humans. Usually direct toxicity isn't the problem.
Rather, the risks of DDT in humans lie in its association with diseases such as breast cancer, and in genetic damage. Carson's book was a wake-up call. It brought together a large body of information to indict DDT as an environmental hazard. She went on to caution about the indiscriminate use of chemicals in the environment. Such use exploded after World War II, and rarely were environmental or even health effects of the many new chemicals tested before release. Carson, who lived from 1907 to 1964, was trained as a biologist, and held a master's degree from Johns Hopkins University. Her book was greeted with derision by representatives of chemical industries involved in the manufacture of DDT and other pesticides. But her elegant writing and careful research won out. Her book is a milestone in the improvement of America's environment and protection of human health from chemical contamination. The subsequent ban on the broadcast spraying of DDT and controls on other chemicals that affect wild organisms and ecological food chains have been successful. Many bird and other species that were directly threatened by DDT, such as bald eagles and pelicans, have recovered from very low numbers at the height of the era of profligate spraying of DDT and related compounds. So the delightful spring chorus I experienced on my wooded walk here at the Cary Institute recently prompts me to thank Carson, and remember her bravery in publishing "Silent Spring" 50 years ago. Steward Pickett is a plant ecologist at the Cary Institute of Ecosystem Studies in Millbrook. He also directs the Baltimore Ecosystem Study, a multi-partner effort investigating the ecology of urban areas. Learn more at www.beslter.org.
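Biomagnification, as described above, is roughly geometric: each step up the food chain concentrates the toxin by some factor. The sketch below is purely illustrative; the starting concentration, per-level factor, and food-chain names are invented for the example, not measured DDT values.

```python
# Illustrative model of biomagnification: concentration multiplies by a
# fixed factor at each trophic level. All numbers here are made up.

def biomagnify(base_conc: float, factor_per_level: float, levels: int) -> list:
    """Concentration at each trophic level, starting from base_conc."""
    return [base_conc * factor_per_level**i for i in range(levels)]

chain = ["water", "plankton", "small fish", "large fish", "osprey"]
concs = biomagnify(base_conc=0.001, factor_per_level=10.0, levels=len(chain))
for name, c in zip(chain, concs):
    print(f"{name:>10}: {c:g} ppm")
```

Even a modest per-level factor turns a trace concentration in water into a large burden in a top predator, which is why raptors and scavengers were hit hardest.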
Severe wildfires can create clouds of smoke so thick that they are hard to see through even with satellite sensors. That’s the kind of visibility problem that residents of Salmon, Idaho, and other towns faced in the summer of 2012 while living downwind of the large Mustang Complex fire in Salmon-Challis and Bitterroot National Forests. Sparked by lightning on July 30, 2012, the Mustang Complex was just one of numerous wildfires burning through dense stands of ponderosa pine forests in central Idaho in September. At times, the smoke was so heavy that authorities were forced to close roads because of poor visibility. This natural-color image (top) of the Mustang Complex fire, which is based on data from the visible portion of the electromagnetic spectrum, is a good example of how thick smoke can obscure a satellite’s view of the surface. The image was acquired on September 18, 2012, by the Advanced Land Imager (ALI) on NASA’s Earth Observing 1 (EO-1) satellite. It shows thick smoke near the fire’s flaming front that completely conceals the rugged terrain below. However, ALI detects radiation in more than just visible wavelengths. The instrument also can sense wavelengths in the infrared portion of the spectrum. The bottom image—made from a combination of visible, near-infrared, and shortwave-infrared light—provides a clearer view of the burn scar. Severely burned areas appear brick red; partly-burned areas are pink, and unburned areas are green. Thick smoke is light blue. While many areas have burned severely, the fire left some vegetation unscathed, particularly at the bottom of valleys and along some ridges. NASA Earth Observatory image by Jesse Allen and Robert Simmon, using EO-1 ALI data from the NASA EO-1 team. Caption by Adam Voiland.
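For readers curious how burn severity is quantified from near-infrared (NIR) and shortwave-infrared (SWIR) reflectance, a standard index is the Normalized Burn Ratio. This is a widely used, related technique rather than necessarily the exact processing behind the EO-1 composite, and the reflectance values below are invented for illustration.

```python
# Normalized Burn Ratio: NBR = (NIR - SWIR) / (NIR + SWIR).
# Healthy vegetation reflects strongly in NIR (high NBR); burned areas
# reflect strongly in SWIR (low or negative NBR).

def nbr(nir: float, swir: float) -> float:
    """Normalized Burn Ratio from NIR and SWIR reflectance (0-1 scale)."""
    return (nir - swir) / (nir + swir)

# Illustrative reflectance values (made up for the example):
print(f"unburned forest: NBR = {nbr(0.45, 0.15):.2f}")   # high, positive
print(f"severe burn:     NBR = {nbr(0.15, 0.40):.2f}")   # strongly negative
```

Differencing NBR before and after a fire (dNBR) is the usual way analysts map severity classes like the brick-red and pink areas in the image.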
龙脑香科 long nao xiang ke Authors: Xi-wen Li, Jie Li & Peter S. Ashton Trees, evergreen or semievergreen, rarely deciduous in dry season. Xylem with aromatic resin in intercellular resin canals. Branchlets with stipular scars, sometimes annular. Leaves simple, alternate; stipules persistent or caducous, large or small; leaf blade with lateral veins pinnate, margin entire or sinuate-crenate. Inflorescences few- or many-flowered, terminal or axillary racemes or panicles; flowers usually sweetly scented; bracts usually fugacious and minute, rarely persistent and large. Inflorescences, calyces, petals, ovary, and other parts usually with stellate, squamate, fascicled or free-standing hairs. Flowers bisexual, actinomorphic, contorted. Calyx lobes 5, free or united at base, imbricate in bud if not united. Petals 5, adnate or connate at base. Stamens (10-)15 to many, free from or connate to petals; filaments usually dilated at base; anthers 2-celled, with 2 pollen sacs per cell (Chinese species); connective appendages aristate, filiform or stout. Ovary superior, rarely semi-inferior, slightly immersed in torus, usually 3-loculed, each locule 2-, rarely many ovuled; ovules pendulous, lateral or anatropous. Fruit usually nutlike, sometimes capsular and 3-valved, 1(to many)-seeded, with persistent, variously accrescent calyx of which 2 or more lobes are usually developed into lorate wings. Seed exalbuminous; cotyledons fleshy, equal or unequal, applanate or ± folded or cerebriform, entire or laciniate; radicle directed toward hilum, usually included between cotyledons. About 17 genera and 550 species: tropical Africa, Asia, and South America (in Asia, most species and genera in NW Borneo); five genera and 12 species (one endemic, one introduced) in China. Tong Shaoquan & Tao Gouda. 1990. Dipterocarpaceae. In: Li Hsiwen, ed., Fl. Reipubl. Popularis Sin. 50(2): 113-131.
"Genetics and the origin of bird species." (NAS Colloquium) Genetics and the Origin of Species: From Darwin to Molecular Biology 60 Years After Dobzhansky. Washington, DC: The National Academies Press, 1997. Proceedings of the National Academy of Sciences of the United States of America.

under single-gene or polygenic control (plumage and morphology), others are culturally inherited (song). The slow evolution of postmating isolation implies that the scope for reinforcement of premating isolating mechanisms is minimal. Involvement of culturally inherited traits may be partly responsible for the relatively rapid rate of speciation in birds.

Speciation in Birds

Speciation has been most thoroughly investigated, and for many years, in Darwin’s finches (Geospizinae; refs. 4–7). We therefore begin by describing a model that was devised specifically for these birds on the Galápagos Islands (8). We examine the evidence for various aspects of the model, focusing on genetic factors where possible, and consider alternatives. Then we ask what needs to be added to the model to make it a comprehensive statement of speciation in birds in general.

A Model of Allopatric Speciation

Fig. 1 portrays three stages in the cycle of events leading to the division of one species into two. The choice of islands to illustrate these stages is arbitrary. In step 1, the archipelago is colonized from continental South or Central America. A breeding population becomes established, and its size increases. In step 2, some individuals disperse to another island and establish a new breeding population. Some evolutionary change takes place in the new environment through selection and drift. Step 2 may be repeated several times, giving rise to several differentiated populations of the same species.
Step 3 is the contact, through dispersal of members of two populations possessing different mate signaling and recognition systems. This is the secondary sympatric phase of the cycle, and there are two types of outcomes. In one, members of the two populations do not interbreed, or if they do their offspring are inviable or infertile; the process of speciation in this case has been completed in allopatry. Alternatively the populations are only partly reproductively isolated, interbreeding occurs, and some of the hybrids survive to breed. Reinforcement of the differences between the species then may occur if the hybrids have relatively low fitness. FIG. 1. Allopatric speciation of Darwin’s finches in the Galápagos archipelago (8). After an initial colonization of the archipelago (step 1), dispersal and the colonization of new islands (step 2) gives rise to allopatric populations, which diverge through selection and drift. The process is completed with the establishment of sympatry (step 3). The choice of islands to illustrate the process is arbitrary. [Reproduced with permission from ref. 7 (Copyright 1996, The Royal Society)]. Step 1 probably occurred once, or at most a few times, given the large distance separating the islands from the continent, which was greater at the time of initial colonization than at present (9). An argument from major histocompatibility complex variation suggests that there must have been a minimum of 30 individuals in the colonizing population (10). Stages 2 and 3 were repeated several times, giving rise to several species over a period of time estimated to be less than 3 million years (11). The ecological conditions would have varied from one cycle to another, but the essential features were repeated. 
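Steps 1 and 2 of the cycle can be caricatured in a few lines: a founder trait value drifts independently in two daughter populations after colonization. This is a minimal sketch under my own assumptions (neutral Gaussian drift of the population mean, arbitrary parameters), not the authors' quantitative model.

```python
# Minimal caricature of allopatric divergence: after colonization (step 1),
# the mean of a trait (say beak size) drifts independently in two island
# populations (step 2). Parameters and seed are arbitrary.

import random

def drift(trait: float, generations: int, step: float, rng: random.Random) -> float:
    """Neutral drift: the mean trait takes a small random step each generation."""
    for _ in range(generations):
        trait += rng.gauss(0.0, step)
    return trait

rng = random.Random(42)
ancestor = 10.0  # arbitrary starting beak size, mm
island_a = drift(ancestor, generations=500, step=0.05, rng=rng)
island_b = drift(ancestor, generations=500, step=0.05, rng=rng)
print(f"island A: {island_a:.2f} mm, island B: {island_b:.2f} mm, "
      f"divergence: {abs(island_a - island_b):.2f} mm")
```

Selection in differing environments would add a directional term on top of this random walk; either way the two isolated populations accumulate differences that matter at secondary contact (step 3).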
The varying conditions include the length of the period of the allopatric phase (stage 2) before secondary contact, population sizes and hence the scope for drift, and the difference in the island environments and hence the scope for directional selection. Another important factor was the creation of new islands by volcanic activity (7, 9) and the recent periodic lowering of sea level (12). Over the last 3 million years there has been a net increase in the number of islands despite some disappearing through submergence, paralleling the increase in number of species (7). Thirteen species are recognized on the basis of morphological and biological criteria (4, 6), with as many as 10 occurring on a single island. A 14th species inhabits Cocos Island. Evidence from Field Studies of Darwin’s Finches We observe closely related species in sympatry and infer how they evolved from a common ancestor. Therefore we first consider how species are reproductively isolated, and then work back to their allopatric origin. Species can be recognized by their morphological characteristics and songs (13, 14). With rare exceptions sympatric species pair and breed conspecifically, and as a result are reproductively isolated from each other. They choose mates on the basis of song, sung by males only, and morphological appearance, in which beak size and shape and body size play a part but plumage does not. Imprinting on adult features early in life appears to guide the choice of mates (7, 15, 16). The role of morphology in mate choice has been demonstrated experimentally with tests that show that several pairs of sympatric species of ground finches (Geospiza) discriminate between conspecific and heterospecific visual cues (17). Separately, experiments have shown that males can discriminate between conspecific and heterospecific auditory cues (18). Females were not tested in these acoustic experiments, but it would be surprising if they were not capable of making the same discriminations. 
The evolution of reproductive isolation in Darwin’s finches is therefore the evolution of differences in song and in morphology. Reproductive isolation is not complete; species hybridize, rarely, and are capable of producing fertile hybrids that backcross to the parental species (12, 19, 20). The rare interbreeding of species and the mating pattern of the hybrids provide further evidence of the importance of song in mate choice. Hybridization occurs sometimes as a result of miscopying of song by a male; a female pairs with a heterospecific male that sings the same song as that sung by her misimprinted father (16). On Daphne Major island hybrid females bred with males that sang the same species song as their fathers (20). All G. fortis × G. scandens F1 hybrid females whose fathers sang a G. fortis song paired with G. fortis males, whereas all those whose fathers sang a G. scandens song paired with G. scandens males. Offspring of the two hybrid groups (the backcrosses) paired within their own song groups as well. The same consistency was shown by the G. fortis × G. fuliginosa F1 hybrid females and all their daughters, which backcrossed to G. fortis. Thus mating of females was strictly along the lines of paternal song. The independent role of morphology in mate
Neil Fisher (5 March, p 32) suggests that most of the global warming projections of the Intergovernmental Panel on Climate Change are flawed because, he thinks, there must be negative feedbacks which will counter global warming. He believes that, without such negative feedbacks, the climate would not have been stable over thousands of years. Climate history, unfortunately, gives us many examples of abrupt and catastrophic climate change. Some 13,000 years ago, global temperatures first dropped and, many centuries later, rose by well over 3 °C. In both cases the temperature change occurred within less than 50 years. Buried mangrove forests have been found under the Great Barrier Reef which suggest a 3-metre rise in sea levels in less than 30 years, following the end of the last ice age, at least in that region. Longer ago, massive and abrupt global warming saw a rise of 8 ...
Physicists study matter by causing particles, accelerated to high energy, to collide with each other. In the high energy collisions, new particles can be created and new processes occur. In these collisions the most fundamental constituents of matter reveal their properties.

Credits: Produced in collaboration with Erik Johansson. Source: "Accelerators", Nobelprize.org, http://www.nobelprize.org/educational/physics/accelerators/index.html
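A short calculation shows why head-on colliders are favoured for reaching high collision energies: for two opposing beams of energy E, the centre-of-mass energy is √s = 2E, while a beam striking a fixed target of rest energy mc² yields only √s ≈ √(2·E·mc²). The sketch below is my own illustration; the 1 TeV proton beam is an arbitrary example.

```python
# Centre-of-mass energy available for creating new particles, in the
# ultra-relativistic limit (beam energy >> rest energy).

import math

PROTON_MC2_GEV = 0.938  # proton rest energy, GeV

def cm_energy_collider(e_beam_gev: float) -> float:
    """sqrt(s) = 2E for two equal, opposite beams."""
    return 2.0 * e_beam_gev

def cm_energy_fixed_target(e_beam_gev: float, m_gev: float = PROTON_MC2_GEV) -> float:
    """sqrt(s) ~ sqrt(2 * E * m c^2) for a beam on a stationary target."""
    return math.sqrt(2.0 * e_beam_gev * m_gev)

e = 1000.0  # an illustrative 1 TeV proton beam
print(f"collider:     sqrt(s) = {cm_energy_collider(e):.0f} GeV")
print(f"fixed target: sqrt(s) ~ {cm_energy_fixed_target(e):.0f} GeV")
```

The fixed-target energy grows only as the square root of the beam energy, so colliders win by a large and widening margin.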
Fertile areas of the ocean swarm with hundreds and hundreds of millions of animal plankton. Some are visible as small dustlike particles; others can be seen only with a microscope. The living zooplankton shown here contain two kinds of copepods - animals shaped like grains of rice - and one kind of arrow worm - the long slender animals at upper right and lower left.
But at the close of the ice age, about 13,000 years ago, most of the megafauna vanished — an extinction attributed to both climate change and the appearance of efficient Stone Age hunters. With them went the largest predators, allowing the smaller grey wolves to fill the vacant niche, which put them in competition with the largest coyotes. That conflict, as well as the loss of large herbivores, caused coyotes to shrink in stature. Within 1,000 years of the Pleistocene extinctions, coyotes had reached the same size as in most present-day populations. Now, they're going through a whole new set of changes as they adapt to the modern landscape of North America. Genetic studies show that some coyotes are even interbreeding with dogs, which could lead to a different sort of hybrid animal. Researchers are struggling to keep up with the animals and their impacts as they lope into more new regions. “Invading a landscape emptied of wolves may trigger a whole new pathway in terms of the coyote's evolution,” says Bill Ripple, an ecologist at Oregon State University in Corvallis. “And the coyote's arrival will have unpredictable effects on other species in the ecosystem.”
Amazing Metazoan Archive - The Amazing Metazoan for April 5, 2013 is the greenling (genus Hexagrammos), a demersal fish commonly found throughout Sitka Sound. The Sound is home to three species of greenlings within the genus Hexagrammos, and one species of the genus Oxylebius. The rock greenling Hexagrammos lagocephalus is by far the most beautiful with its fiery orange, red, and deep brown markings and blue highlights. The Ahlgren Aquarium has two species of greenling in the tanks: the kelp greenling Hexagrammos decagrammus, and the white spotted greenling Hexagrammos stellatus. The kelp greenling in the photograph above is a male; females have gold mottling on a pale body and golden colored fins. Kelp greenling can grow up to two feet in length, and white spotted greenling attain roughly half that size. In both species, females lay the egg mass and males guard against predators. (Photo by Samantha Weaver) BTW, the picture on Facebook is a photo taken through a microscope of a scale from a white spotted greenling! Is it a worm, a flower, or an invader from outer space? Click on the image to find out the identity of an animal that lives in the mud but gets its nutrients from the water. This metazoan is a favorite among divers, and gets its name from its affinity for a certain type of habitat. Click on the photo to solve the clue and discover its identity. - This entry’s Amazing Metazoan is our star attraction at the Molly Ahlgren Aquarium, the wolf eel (Anarrhichthys ocellatus). The wolf eel is not a true eel but a member of the wolf fish family Anarhichadidae, and unlike true eels such as morays, they possess pectoral fins and have flexible spines in their long dorsal fin. They also possess “canine” teeth as well as several rows of molars. Although they have a fearsome reputation for their ability to deliver a painful bite, they are in fact quite docile and have been the playful center of attention for hundreds of SCUBA divers.
The aquarium's wolf eel has been in residence since 2008, and was approximately 12 inches long when she was brought in by a fisherman. She now measures over 36 inches in length, and eats a variety of foods such as squid, salmon, and the occasional helmet crab. Click on the photo to see a short video of our wolf eel in her cave. The Amazing Metazoan for January 14, 2013 is a semi-rigid animal that is fairly often seen in shallow subtidal areas. Notice the "bumps" in the photograph: the animal uses these structures for defense, cleaning, and respiratory purposes. Want to know what it is? Go to the Aquarium Gallery page and complete the jigsaw puzzle to find out! For this week's Amazing Metazoan, we are going to ask you to follow the clues to determine the identity of our featured aquarium animal. Let's get started: What has scales, can live in diverse places such as the muck on the bottom of the ocean or as an epibiont on certain echinoderms, and is a close relative of an important terrestrial decomposer? Click on the photo and complete the jigsaw puzzle at the bottom of the page to find out! Winter hasn't even begun, but we bet that you are longing for those days of sunshine from summers past. Let us accommodate you with this photo of the gold dirona, Dirona pellucida, that was taken last summer during one of our collecting dives at the Eliason breakwater. The gold dirona, along with the similar looking white lined dirona and the not often seen Janolus fuscus, is an arminid nudibranch. The leafy gold and white tipped appendages on this mollusc are the gills (called cerata). Gold dirona feed specifically on the bryozoan Bugula, which is an animal that resembles algae. This beautiful specimen was approximately 12 centimeters long. The beautiful Aleutian moonsnail, Cryptonatica aleutica: It's always a pleasure to see who's roaming in our touch tanks at night; here we find the Aleutian moonsnail, which is easy to identify by its extensive mottled body.
Contrary to what its name implies, this marine gastropod is found from Alaska all the way to southern California, from the shallow intertidal down to depths of 400 meters (that's over 1300 feet deep!). Cryptonatica aleutica was first described in 1919, but other species of Cryptonatica have been described as far back as the late 1700s. One of Alaska's most beautiful molluscs (in our opinion), the Aleutian moonsnail measures up to 6 centimeters, or 2.3 inches, across. Allow me to introduce you to the grainyhand hermit crab, Pagurus granosimanus. Although this anomuran crab may appear drab to the casual observer, closer inspection will reveal that it has a striking pattern of contrasting granules covering its walking legs and its "chelipeds" (the two front legs that hold the "chelae", or pincers), as well as beautiful orange antennae (but not all grainyhands have orange antennae). Commonly found in tide pools and in the shallow intertidal zone, the grainyhand hermit crab prefers very large shells for its habitation. This particular touch tank resident is living in the shell of a deceased frilled dogwinkle. Do you recognize this creature? This is a close up of the sunflower star, one of the largest (if not the largest) sea star species found in Sitka Sound. Visitors to the aquarium often ask "why does the sunflower star look so soft and mossy?" If you look closely at the surface of the sunflower star with a magnifying lens, you will notice hundreds of fleshy projections called papulae that give the animal its "soft and mossy" appearance. Scattered amongst the papulae are numerous spines, which can be seen in this photo, and tiny jaw-like structures called pedicellariae, which the animal uses for defense. During a late morning excursion to collect live mealstock with the Molly O.
Ahlgren Junior Curators, aquarium camp director Lynn Wilbur was under the Crescent Harbor dock shooting video when she noticed colonies of beautiful Vancouver feather duster worms (Eudistylia vancouveri). This species of polychaete worm is a member of the phylum Annelida, which it shares with earthworms. Feather duster worms build flexible, leathery sheaths through which they extend their feathery feeding appendages (known as radioles). When startled, a feather duster worm will retract its radioles, and the end of the tube folds over. Vancouver feather duster worms can be found under floats and walkways in at least two of Sitka's harbors. Look at what floated up at the aquarium! Chaya, our summer high school technician, discovered a winged nudibranch (Gastropteron pacificum) while cleaning one of the touch tanks. Contrary to what one may infer from its common name, this opisthobranch mollusc is actually a member of the order Cephalaspidea. We believe that this individual recruited through our intake system that connects the touch tanks to the ocean, which provides a constant flow of nutrients for our critters. The winged nudibranch uses its wing-like appendages (called parapodial flaps) to seemingly fly through the water. When resting, the creature folds the flaps around itself, exposing the siphon that brings in oxygenated water. Chaya's cephalaspidean is the size of a pinky fingernail, but winged nudibranchs can reach up to four centimeters in length. Love in the touch tanks: The cloudy pinkish mass that you might see in our touch tanks isn't some weird algal bloom; spring is here and our sea stars are spawning! Sea stars have separate sexes, and during spawning season the males and females release their eggs and sperm through their dermis into the water column. A sample taken from the bottom of our touch tank reveals thousands of eggs and sperm when viewed through a microscope.
After fertilization, the sea star gametes undergo cell division and metamorphose into a planktonic stage, first becoming bipinnaria, then brachiolaria larvae. As members of the zooplankton community, they will beat their cilia and float with the ocean currents until they reach the juvenile stage, when they develop their arms (rays), which they will use for attaching to the substrate. Once on the bottom they will begin scavenging for food such as clams and worms, growing into the adult sea stars that we are all familiar with. Pinto abalone Haliotis kamtschatkana: Also known as the northern abalone, this "coiled snail" is a much sought after member of the family Haliotidae, ranging from Japan and Siberia through Alaska and as far south as Mexico. Attempts at opening a commercial fishery for this mollusc usually result in a severe decline in adults of regulatory harvest size. Abalone have separate sexes and reproduce by releasing eggs and sperm into the water; males and females must be in close proximity in order for the gametes to make contact. After fertilization, abalone undergo a planktonic stage, feeding on phytoplankton in the water column. Once they reach their final developmental stage they will settle on rocks and in crevices to feed on microalgae; as they increase in size they will feed on macroalgae such as giant kelp. The decorator crab, Oregonia gracilis, is named for the crustacean's affinity for decorating its carapace with available pieces of algae, sponge, and other living material. The decorator crab starts out life as a member of the zooplankton community before maturing into its adult form. Like other species of decapods, decorator crabs "molt" (shed their exoskeleton) in order to grow larger. The individual in this photo recruited through our seawater intake system and underwent a series of molts to achieve a carapace size of approximately 4 cm across. She now resides in one of our several display tanks. Sea leech Heptacyclus diminutus: "YUCK!"
That is what a visitor to the aquarium might say when closely viewing the sea leech Heptacyclus diminutus. This important member of the marine ecosystem is one of many marine leeches described in the fourth edition of the Light and Smith Manual, and is notable for its numerous light sensitive organs called ocelli. Heptacyclus diminutus attaches itself to rockfish by the oral sucker shown in the picture; we have also seen it attached to sculpins as well as an octopus. Sea leeches are introduced into the aquarium as stowaways on new residents or as hitchhikers through the intake system. Luckily for the squeamish, this leech spends most of its time attached to substrate by its caudal (tail) sucker. Heptacyclus diminutus is extremely photophobic, as evidenced by its aversion to light when viewed under a microscope, and it disappears from the aquarium once spring progresses. The species name comes from the leech's diminutive stature; Heptacyclus diminutus achieves an average length of less than 1 cm and doesn't seem to cause any harm to our fish. Cookie star (Ceramaster patagonicus): This sea star gets its species name from the Strait of Magellan near Patagonia, where it was originally described aboard the research vessel Challenger. There are several species of cookie stars that range from South America to the Gulf of Alaska, as well as off the coast of South Africa. Ceramaster can be found at depths of 245 meters (800 feet). A similar sea star, the Arctic cookie star (Ceramaster arcticus), is also found in Alaska waters but is much smaller and more colorful. The cookie star feeds primarily on sponges, and is difficult to keep in an aquarium. March 15th, 2012: White capped limpet Acmaea mitra. If you carefully explore the rocks and crevices in our subtidal touch tank you will likely encounter the white capped limpet Acmaea mitra.
Although the common name for this limpet may confuse some, the pink coloration comes from an encrusting red alga called Lithothamnion. When the animal dies, the algae wears off and beachcombers may find the snowy white shell of this mollusc all along the wrack line. March 1st, 2012: The white sea urchin (Strongylocentrotus pallidus) is an echinoderm that is seen infrequently in Sitka Sound, and can easily be confused with the white variety of the green sea urchin (Strongylocentrotus droebachiensis). S. pallidus has a somewhat squatter test than S. droebachiensis, its tube feet and tentacles are white to pale pink, and the spines are white. Sea urchins graze on algae, including giant kelp, with a five-toothed apparatus, called Aristotle's lantern, located on the underside of the animal. Feb. 15th, 2012
Chartella papyracea occurs throughout the British Isles. The species is apparently restricted to the temperate east Atlantic, ranging from the southern North Sea through the English Channel, the south and west coasts of the UK, and the Irish Sea to the northern coast of Spain. Colonies are predominantly found in the shallow subtidal on hard substrates, but they may also colonise overhangs on rocky shores, near the low water mark. Chartella papyracea forms delicate tuft-like colonies with flattened branching fronds. Fresh colonies are brown or light grey and grow to up to 10 cm in height. Colonies grow through asexual budding of new zooids at the periphery. Growth of Chartella papyracea is perennial, typically from early spring, throughout the summer to early October. Erect fronds die by detachment within 2-3 years, but the encrusting portion of the colony may live longer. Annual growth checks, in the form of lines across the frond surface, are frequently visible. Chartella papyracea has a similar colony form to several other species in the family Flustridae. In particular, C. papyracea may be mistaken for Flustra foliacea, but can be distinguished by its larger zooids and more delicate calcification than F. foliacea. Additionally, in F. foliacea the fronds are broader and larger. Chartella papyracea can be distinguished from C. barleei (the only other species of the genus to occur in Britain) by the presence of short thick spines at each distal corner of the zooids, and the absence of avicularia. Colonies establish as an encrusting sheet of feeding zooids and non-feeding zooids (kenozooids). Small delicate tufts of flat radiating fronds arise from a short flattened stem, diverging to both sides. Kenozooids border the edge of the fronds. Chartella papyracea is only lightly calcified and the colony as a whole is flexible, allowing it to move with the current. Zooids are simple, approximately rectangular in shape, and are arranged "back to back" to form bilaminar sheets.
Short, thick spines protrude from the two distal corners of each zooid, furthest from the colony origin. The frontal surface of the zooids is entirely membranous and no gymnocyst is present. Opercula (flap-like folds of the body wall which close the orifice) are lightly chitinized and avicularia are absent. Colonies typically grow up to 10 cm in height. Zooids are 0.5 by 0.2 mm. Chartella papyracea appears restricted to the temperate east Atlantic, ranging from the southern North Sea through the English Channel, the south and west coasts of the UK, and the Irish Sea to the northern coast of Spain. The species has been recorded as abundant from dredge samples off the north and west coasts of France. It is not thought to occur in the Mediterranean. Distinguishing individual colonies of C. papyracea can be problematic. The holdfasts of adjacent colonies frequently come into contact, and multiple colonies may appear as one, although there is no fusion. Other colonies often become fragmented by mechanical disruption or partial overgrowth. Chartella papyracea is a cold temperate species that is most frequently found on hard substrates in the shallow subtidal. Colonies are also (rarely) found beneath overhangs on rocky shores, near the low water mark. The species occurs as a codominant with Bugula flabellata in communities colonizing vertical, shaded, sublittoral rock surfaces in the Bristol Channel. The founding zooid (ancestrula) develops into a young colony, and later into an adult colony, through asexual budding. Zooids formed in early spring are male (androzooids) and may become opaque white in colour as testes develop. Zooids developing later in the growing season are female (gynozooids). Sexually produced embryos are brooded within the colony before larvae are released, around six weeks after gynozooid completion. Larvae settle after liberation and metamorphose into an ancestrula.
Fronds are expected to live for 2-3 years, but the encrusting part of the colony may exceed this. The life expectancy of a colony, as opposed to fronds, is difficult to assess in C. papyracea, mainly because of the difficulty in delimiting the extent of any one colony. Like all bryozoans, C. papyracea is a suspension feeder. It feeds on small phytoplankton using the ciliated tentacles of the lophophore. The colony and individual fronds are hermaphroditic, but individual zooids are either male or female. Brood chambers (ovicells) are immersed within the zooid (endozooidal) and appear as a small domed cap at the distal end (furthest from the colony origin) of the female zooids. The sexually-produced embryos, pale orange in colour, are brooded until larval release in the summer months, continuing into October. The larvae are non-feeding coronate larvae, which lack a shell and have a densely ciliated belt (the corona) for locomotion. Several generations of larvae are produced in a single summer breeding season.
Les Bossinas / NASA An artist's conception shows a starship entering a wormhole to travel to a distant galaxy. Last month's "100-Year Starship" conference, backed by NASA and the Pentagon's Defense Advanced Research Projects Agency, threw a huge spotlight on the idea of sending spacecraft far beyond our solar system — but how realistic is that idea? Check out what one of the world's top experts on the subject has to say on "Virtually Speaking Science." Marc Millis, the researcher behind NASA's Breakthrough Propulsion Physics Project and the nonprofit Tau Zero Foundation, was my guest on tonight's show, which is available as a podcast via BlogTalkRadio and iTunes. Millis estimates that it'll take 200 years to get in position for the first missions to stars beyond our own, but he says there are lots of small steps we can take starting tomorrow to "chip away" at the challenge. Experiments with solar sails have already started, and Millis says the next step there is to figure out the business case for more ambitious light-powered trips. There are all sorts of potential breakthroughs to consider: Could the recent reports of faster-than-light neutrinos point to a way to break the speed limit set by special relativity? Could laser experiments let scientists warp the fabric of space-time on a small scale? "What creates the properties of an inertial frame, and how does that relate to space travel?" Millis asked. Is it worth spending money on precursor missions — for example, sending a "Super-Hubble" space telescope beyond the edge of our solar system to look outward, and inward? "What would it take to do that? How much would it cost?" Millis said. Here's an edited transcript of my pre-show Q&A with Millis: Cosmic Log: More people are aware that interstellar flight is on the agenda, in part because of the 100-Year Starship conference. So is anyone building a starship anytime soon? What's the next step? 
Millis: No one's building a starship anytime soon, although a lot of people would like to attempt that. The workshop had about 1,000 people there. It was open to the public, and I was glad to see some very intelligent questions from the public. It was an introductory look at not only the technology, but also some of the social issues, and how you would do financing. The next step by DARPA is that there's a competition out to award the remaining funds of about $500,000 [out of an original $1 million] as seed money to whoever can suggest the best organizational structure to carry forward with the 100-Year Starship image. That will be an organization that will work for at least a century to develop the technology and financing to ultimately enable starships. Q: Do you see Tau Zero as that organization? A: Tau Zero is making a proposal. To gauge our chances, I would have to know what all the other competitors are proposing, and that's hard to do. Q: Could it be that the social issues are actually more challenging than the technological issues? A: Theoretically, it would be possible to send a probe to the nearest neighboring star in less than a century, so you could actually get your data back. But the required expense is beyond what I think our society could commit to right now. Q: What's the ballpark figure for the cost? A: There isn't one, because it's so beyond what we can do. Based on the progression of society ... if we don't change anything that we're doing, it looks as if it might take another two centuries to have an interstellar probe that's fast enough to complete a mission within a human lifespan. Not that there's people on board, but that the people who launched the mission could get the data back before they retire. We have a long way to go. The important issue to figure out today is to make sure we have a sane comparison of the real challenges and the real state of the art, so we're proceeding wisely here. 
Then, from that, ask, "OK, if that's where we are, what can we start tomorrow to chip away at those issues?" We can't build the starship tomorrow, but we can identify the correct questions to ask, and begin seeking answers to those questions. When it looks more promising, and the advancements are there, fine. On the social issues ... when you think of leaving the planet, and representing Earth, that requires a high degree of political will and collaboration. I don't consider that impossible, and things are certainly looking up in terms of nations collaborating on major space topics. But I don't know how long it will take to really bring this collaboration to bear. Now this doesn't preclude any one sufficiently able and wealthy team from launching their own mission, on their own. Would that be ethical or not? Then, suppose we did identify a habitable planet. Is it really ours to consider colonizing? There are a lot of huge questions: What's the optimal population for an interstellar trip? What are the governance models? What's the meaning of life? When you start thinking about "world ships," where we're sending people instead of just robotic probes, that provides a venue that's far enough out that you can rationally discuss these questions. It's an interesting opportunity that we really haven't tapped into yet. Q: I guess one of those big questions would be, "Why travel to other star systems?" How would you answer that one? A: The ultimate, highest-priority benefit of star flight is the survival of the human species beyond the fate of our own solar system and our home planet. In the meantime, the progress we make to try to turn all this stuff into a reality will result in profound improvements in energy conversion, transportation, self-supporting life support — things that would be very useful for life on Earth. And then there's the social aspect. 
This effort can give us hope for a better future, expand our opportunities — and hopefully give people a frontier to conquer, rather than being left with no option other than to conquer each other. More about interstellar flight: - The best options for flying to other stars - Billionaires wanted for starship plan - Sex poses big challenge for interstellar travel Podcasts from 'Virtually Speaking Science': - Download tonight's hourlong show from BlogTalkRadio or iTunes - Sean Carroll on the puzzling frontiers of physics - Rand Simberg on the private-enterprise vision for spaceflight - Martin Hoffert on the future of energy policy - George Djorgovski on science in virtual worlds - Alan Stern on suborbital research and NASA's mission to Pluto - Col. 'Coyote' Smith on the outlook for space solar power - Tim Pickens on rocket ventures and the Google Lunar X Prize Last update: 10:30 p.m. ET Nov. 2. Many thanks to the Meta Institute for Computational Astrophysics for co-sponsoring tonight's Second Life talk at the Stella Nova auditorium. Connect with the Cosmic Log community by "liking" the log's Facebook page, following @b0yle on Twitter or adding me to your Google+ circle. You can also check out "The Case for Pluto," my book about the controversial dwarf planet and the search for other worlds.
- Physically mismeasured skulls so that their cranial volumes would match his expectations about racial differences in cranial volume - Statistically manipulated population means by taking averages of individual skulls rather than averages of population averages, hence biasing his "Indian" means to be lower Furthermore, they show that Morton's supposed statistical manipulation had very little effect: the difference was only 0.3 cubic inches. Not only this, but Gould fudged his own measurements, which were supposed to prove that different populations did not differ in cranial capacity: Gould's reanalysis of Morton's 1849 shot-based data resulted in a Native American mean capacity of 86 in3 rather than Morton's original 79 in3. Gould obtained his new average by again taking the group mean of Native American populations with four or more crania. But Gould also applied an additional restriction: he only included Native American crania that Morton had also previously measured with seed. This restriction is entirely arbitrary on Gould's part, as Morton's publications and analyses for his seed- and shot-based measurements are completely separate (1839 versus 1849), and Gould did not apply this restriction to the other groups he reanalyzed in Morton's shot-based data. If this restriction is lifted, Gould's Native American average would be reduced to about 83 in3, considerably below his reported 86 in3. In other words, Gould's bias is about an order of magnitude higher than Morton's presumed "bias". It is remarkable that Gould's errors were uncovered only 30 years after The Mismeasure of Man. Why did it take so long? One could understand why the (totally unfounded but, on the surface, plausible) idea of measurement bias went unnoticed until someone actually re-measured the skulls, but the statistical error that Gould committed was there for anyone to see. From the paper: Of the substantive criticisms Gould made of Morton's work, only two are supported here.
First, Morton indeed believed in the concept of race and assigned a plethora of different attributes to various groups, often in highly racist fashion. This, however, is readily apparent to anyone reading the opening pages of Morton's Crania Americana. Second, the summary table of Morton's final 1849 catalog has multiple errors (Dataset S3). However, had Morton not made those errors his results would have more closely matched his presumed a priori bias (and see Box 4). Ironically, Gould's own analysis of Morton is likely the stronger example of a bias influencing results. First, there is a conflation here between "believing in the concept of race" (which is in no way invalid, and certainly its validity or lack thereof is not the subject of this paper) and "assigning a plethora of different attributes...", which may indeed be true but is completely irrelevant to the actual quantitative measurements of skulls. What is most interesting is that Gould's analysis of Morton's work shows clear evidence of bias in favor of his own hypothesis ("Morton was a racist, different races have not much different cranial capacities"), rather than the opposite. Nonetheless, Gould has been viewed by some as a sort of progressive enlightened intellectual, whereas Morton is vilified as a bad scientist who fudged his data because of his racist bias. Morton may have been a racist, but his data were not provably the product of his racism. Gould was a non-racist, but his data were clearly the product of his biological egalitarianism and/or his quantitative incompetence. PLoS Biol 9(6): e1001071. doi:10.1371/journal.pbio.1001071 The Mismeasure of Science: Stephen Jay Gould versus Samuel George Morton on Skulls and Bias, Jason E. Lewis et al.
Stephen Jay Gould, the prominent evolutionary biologist and science historian, argued that "unconscious manipulation of data may be a scientific norm" because "scientists are human beings rooted in cultural contexts, not automatons directed toward external truth", a view now popular in social studies of science. In support of his argument Gould presented the case of Samuel George Morton, a 19th-century physician and physical anthropologist famous for his measurements of human skulls. Morton was considered the objectivist of his era, but Gould reanalyzed Morton's data and in his prize-winning book The Mismeasure of Man argued that Morton skewed his data to fit his preconceptions about human variation. Morton is now viewed as a canonical example of scientific misconduct. But did Morton really fudge his data? Are studies of human variation inevitably biased, as per Gould, or are objective accounts attainable, as Morton attempted? We investigated these questions by remeasuring Morton's skulls and reexamining both Morton's and Gould's analyses. Our results resolve this historical controversy, demonstrating that Morton did not manipulate data to support his preconceptions, contra Gould. In fact, the Morton case provides an example of how the scientific method can shield results from cultural biases.
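The statistical point at the heart of this dispute, a pooled mean of all individual skulls versus a grand mean of population means, is easy to demonstrate. The sketch below uses purely hypothetical numbers (not Morton's or Gould's data) to show how unequal sample sizes pull a pooled mean toward the most heavily sampled group:

```python
# Toy demonstration of the aggregation issue in the Gould/Morton dispute:
# a pooled mean of individuals vs. a grand mean of population means.
# All capacities below are hypothetical, chosen only to make the effect visible.

populations = {
    "A": [90.0] * 50,   # heavily sampled population, capacity 90 in^3
    "B": [80.0] * 5,    # lightly sampled population, capacity 80 in^3
    "C": [78.0] * 5,    # lightly sampled population, capacity 78 in^3
}

# Pooled mean: every individual skull counts once, so population A dominates.
all_skulls = [v for skulls in populations.values() for v in skulls]
pooled_mean = sum(all_skulls) / len(all_skulls)

# Grand mean of group means: every population counts once, regardless of size.
group_means = [sum(s) / len(s) for s in populations.values()]
mean_of_means = sum(group_means) / len(group_means)

print(f"pooled mean of individuals: {pooled_mean:.2f}")
print(f"grand mean of group means:  {mean_of_means:.2f}")
```

Neither aggregation is inherently "correct": the pooled mean weights populations by how many skulls happened to be collected, while the grand mean weights every population equally. That is why a discrepancy like the 0.3 cubic inches discussed above has to be traced to a stated aggregation choice rather than assumed to be dishonest measurement.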
In 2007 China conducted an anti-satellite (ASAT) test, shattering an aging Chinese weather satellite with a missile. This led to the creation of over 3,000 new debris fragments, all potentially catastrophe-causing should they collide with a shuttle or space station. But surprisingly, for debris at low altitudes, fragmentation by an ASAT device can actually be helpful in speeding orbital decay. In February of 2008, the United States used an ASAT interceptor to destroy a failed satellite at an altitude of 150 miles (compared to 537 miles for the China test). Because of greater atmospheric drag at that altitude, 99 percent of the debris re-entered the atmosphere within one week (and burned up on re-entry). While controversial, this technique can still be used to remove defunct satellites from Low Earth Orbit at altitudes below 180 miles. As this remarkably detailed painting illustrates, ASAT weapons are very destructive. Some space agencies are pushing for an outright ban at high altitudes.
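The altitude dependence described above can be made concrete with a crude isothermal-atmosphere model in which density falls off exponentially with a fixed scale height. The scale height used below is an illustrative assumption (the real thermosphere's density profile varies strongly with altitude and solar activity), so treat this as a back-of-the-envelope sketch rather than mission analysis:

```python
import math

# Toy isothermal-atmosphere sketch of why intercept altitude matters so much
# for debris lifetime. Density falls off roughly as exp(-h / H); the scale
# height H here is an assumed, illustrative value.
H_KM = 60.0              # assumed effective scale height in the thermosphere, km
MILES_TO_KM = 1.609344

def density_ratio(low_miles: float, high_miles: float) -> float:
    """Ratio of atmospheric density at the lower altitude to the higher one."""
    dh_km = (high_miles - low_miles) * MILES_TO_KM
    return math.exp(dh_km / H_KM)

# Drag force, and hence orbital decay rate, scales linearly with density, so
# this ratio is (to first order) how much faster identical debris decays.
r = density_ratio(150.0, 537.0)  # 2008 US test altitude vs. 2007 China test
print(f"density (and drag) at 150 mi is roughly {r:.1e} times that at 537 mi")
```

Even this crude model shows orders-of-magnitude more drag at the lower altitude, which is why debris from the 150-mile intercept re-entered within a week while fragments from the 537-mile test will persist for many years.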
05 Aug 2010 A Looming Oxygen Crisis and Its Impact on World's Oceans As warming intensifies, scientists warn, the oxygen content of oceans across the planet could be more and more diminished, with serious consequences for the future of fish and other sea life. The Deepwater Horizon oil spill is overshadowing another catastrophe that's also unfolding in the Gulf of Mexico this summer: The oxygen dissolved in the Gulf waters is disappearing. In some places, the oxygen is getting so scarce that fish and other animals cannot survive. They can either leave the oxygen-free waters or die. The Louisiana Universities Marine Consortium reported this week that this year's so-called "dead zone" covers 7,722 square miles. Unlike the Deepwater Horizon disaster, this summer's dead zone is not a new phenomenon in the Gulf. It first appeared in the 1970s, and each summer it has returned, growing bigger as the years have passed. Its expansion reflects the rising level of fertilizers that farmers in the U.S. Midwest have spread across their fields. Rain carries much of that fertilizer into the Mississippi River, which then delivers it to the sea. Once the fertilizer reaches the Gulf, it spurs algae to grow, providing a feast for bacteria, which grow so fast they use up all the oxygen in their neighborhood. The same phenomenon is repeating itself along many coastlines around the world. This summer, a 377,000-square-kilometer (145,000-square-mile) dead zone appeared in the Baltic Sea. In 2008, scientists reported that new dead zones have been popping up at an alarming rate for the past 50 years. There are now more than 400 coastal dead zones around the world. As serious as these dead zones are, however, they may be just a foreshadowing of a much more severe crisis to come. Agricultural runoff can only strip oxygen from the ocean around the mouths of fertilizer-rich rivers.
But global warming has the potential to reduce the ocean's oxygen content across the entire planet. Combined with acidification — another global impact of our carbon emissions — the loss of oxygen could have a major impact on marine life. Scientists point to two reasons to expect a worldwide drop in ocean oxygen. One is the simple fact that as water gets warmer, it can hold less dissolved oxygen. The other reason is subtler. The entire ocean gets its oxygen from the surface — either from the atmosphere, or from photosynthesizing algae floating at the top of the sea. The oxygen then spreads to the deep ocean as the surface waters slowly sink. Global warming is expected to reduce the mixing of the ocean by making surface seawater lighter. That's because in a warmer world we can expect more rainfall and more melting of glaciers, icebergs, and ice sheets. Since freshwater is less dense than salt water, the water at the ocean's surface will become lighter. The extra heat from the warming atmosphere will also make surface waters expand — and thus make them lighter still. The light surface water will be less likely to sink — and thus the deep ocean will get less oxygen. Instead, more of the oxygen will linger near the surface, where it will be used up by oxygen-breathing organisms. The prospect that global warming could reduce the ocean's oxygen has led some scientists to wonder if the predicted decline has already begun. It's a maddeningly hard thing to determine, however. We can be very confident that humans have driven up the concentration of carbon dioxide in the atmosphere because scientists have recorded a steady increase over the course of decades. The signal of human-produced carbon dioxide is stronger than the noise of nature's ups and downs. Fluctuations in oxygen levels, on the other hand, are a lot noisier.
As ocean currents oscillate naturally, upwellings of deep-ocean water can deliver nutrients to coastal waters, triggering an explosion of growth and driving down oxygen levels. Volcanoes can alter oxygen levels, too, by creating a haze that blocks sunlight, thus temporarily cooling the ocean’s surface and allowing more oxygen to dissolve into the water. In recent years some worrying signals have started to emerge from the noise. In 2006, for example, oxygen levels off the coast of Oregon dropped to record lows. Reefs that had been packed with rockfish and other animals suddenly became ecological ghost towns. Instead of agricultural run-off, studies on the Oregon dead zone suggest that global warming was partly responsible. Higher temperatures have reduced the oxygen in the ocean currents that deliver water to the Oregon coast. It’s much harder for scientists to figure out what’s happening in the open ocean than along the coastlines, because the records are far spottier. But some recent studies have also offered cause for worry. In April, for example, Lothar Stramma of the University of Kiel and his colleagues published a study in Deep Sea Research in which they compared records of oxygen levels in the tropical ocean from two periods: from 1960 to 1974 and from 1990 to 2008. In some regions, the oxygen levels have gone up, the scientists found, but in most places they’ve gone down. In fact, the area of the global ocean without enough oxygen for animals to survive (less than 70 micromoles per kilogram, to be exact) expanded by 4.5 million square kilometers (1.7 million square miles). That’s an area about half the size of the United States. Because the records of oxygen levels in the past are so incomplete, many scientists are calling for a push for more research.
An international collaboration started in 1995, the Climate Variability and Predictability Repeat Hydrography Program — CLIVAR for short — is beginning to gather better data. But in the latest issue of Annual Review of Marine Science, Ralph Keeling of Scripps Institution of Oceanography and his colleagues warn that the CLIVAR program may need 20 to 30 years to establish long-term trends of oxygen levels. To speed up the process, they call for a global network of floating sensors known as Argo to be brought into the effort. If scientists put oxygen sensors on a few hundred of the 3,000 Argo floats, Keeling and his colleagues predict that a clear pattern would emerge in as little as five years. Keeling and his colleagues believe that it’s urgent to speed up this research, because the deoxygenation of the oceans could have a major impact on marine life. In order to project how global warming will alter oxygen in the oceans, climate scientists are developing a new generation of computer models. The models are still too crude to capture some important features of the real world, such as the way winds can change how deep water rises in upwellings. But the models are good enough to replicate some of the changes in oxygen levels that have already been recorded. And they all predict that the oxygen in the world’s oceans will drop; depending on the model, the next century will see a drop of between 1 and 7 percent. That could be enough to have a profound effect on life in the ocean, according to Daniel Pauly, a fisheries biologist at the University of British Columbia. In his new book, Gasping Fish and Panting Squids: Oxygen, Temperature and the Growth of Water-Breathing Animals, Pauly argues that getting oxygen is the most important constraint on the growth of fishes and many other marine animals.
That’s because it takes a lot of energy to extract oxygen from water, and the bigger animals get, the more energy they have to invest. Pauly and his colleagues are working on computer models to project how global warming will affect the world’s fisheries. Many species of fishes will shift their ranges away from water that’s too warm for them. But this flight from heat may force them into regions of the ocean with low levels of oxygen, where their growth will be limited. Pauly and his colleagues predict that the drop in the ocean’s oxygen and pH levels will together reduce the world’s fish catch by 20 to 30 percent by 2050. While fishes and other animals with high oxygen demands suffer, jellyfish may thrive. Jellyfish can tolerate lower oxygen levels than fish, in part because they can store reserves of the gas in their jelly. Free from competition and predators, jellyfish will be able to feast on the microscopic animals and protozoans that feed on algae. They may thus leave more food for bacteria, spurring a further drop in oxygen levels. A drop in oxygen may also cause the ocean's bacteria to change. Bacteria that need oxygen will no longer be able to thrive in oxygen-free zones of the ocean. But these dead zones will foster the growth of many species of bacteria for whom oxygen is toxic. Some of these oxygen-hating microbes produce nitrogen compounds that are among the most potent greenhouse gases ever measured. In other words, a drop in oxygen levels could further intensify global warming.
Unless we find a way to rein in our carbon emissions very soon, a low-oxygen ocean may become an inescapable feature of our planet. A team of Danish researchers published a particularly sobering study last year. They wondered how long oxygen levels would keep falling if we could somehow reduce our carbon dioxide emissions to zero by 2100. They determined that over the next few thousand years oxygen levels would continue to fall, until they had declined by 30 percent. The oxygen would then slowly return to the oceans, but even 100,000 years from now they will not have fully recovered. If they’re right, fish will be gasping and squid will be panting for a long time to come. ABOUT THE AUTHOR Carl Zimmer writes about science for The New York Times and a number of magazines. A 2007 winner of the National Academies of Science Communication Award, Zimmer is the author of six books, including Microcosm: E. coli and the New Science of Life. In previous articles for Yale Environment 360, he has written about the prospects of passing planetary tipping points and the consequences of increased acidification in the world’s oceans.
Pub. date: 2008 | Online Pub. Date: April 25, 2008 | DOI: 10.4135/9781412963893 | Print ISBN: 9781412958783 | Online ISBN: 9781412963893 | Publisher: SAGE Publications, Inc. THE JET STREAMS are fast-flowing eastward currents of air in the mid-latitudes of both hemispheres, with their cores at altitudes above 30,000 ft. (9,144 m.). Although they flow eastward, they are driven by the temperature contrast between the equator and the poles. Near the equator, where surface temperatures are at a maximum, the air rises and ...
MNN: The Arctic has seen better years than 2012. Its sea ice melted to an all-time low this summer, and by fall it was 18 percent smaller than at any point in recorded history. As U.S. scientists noted in their annual Arctic Report Card, the region’s sea ice is now “a younger, thinner version of its old self” — and that’s not as flattering as it sounds. Scientists widely agree the main catalyst is manmade climate change, boosted by a feedback loop called “Arctic amplification.” (Antarctic sea ice, meanwhile, is more buffered against warming and has actually expanded lately.) The problem has become well known even among laypeople, thanks largely to its compelling effect on polar bears. But while many people realize humans are indirectly undermining sea ice via global warming, there’s often less clarity about the reverse of that equation. We know sea ice is important to polar bears, but why is either one important to us? Such a question overlooks many other dangers posed by climate change, of course, from stronger storms and longer droughts to desertification and ocean acidification. But even in a vacuum, the decline of Arctic sea ice could be disastrous — and not just for polar bears. To shed some light on why, here are seven of its lesser-known benefits: 1. It reflects sunlight Earth’s poles are cold mainly because they get less direct sunlight than lower latitudes do. But there’s also another reason: Sea ice is white, so it reflects most sunlight back to space. This reflectivity, known as “albedo,” helps keep the poles cold by limiting heat absorption. As shrinking sea ice exposes more seawater to sunlight, the ocean absorbs more heat, which in turn melts more ice and curbs albedo even further. This creates a “positive feedback loop,” one of several ways in which warming begets more warming. The angle of sunlight, combined with albedo from sea ice, helps keep the poles cold. (Image: NASA) 2.
It influences ocean currents By regulating polar heat, sea ice also affects weather around the world. That’s because Earth’s oceans and air act as heat engines, moving heat to the cold poles in a constant quest for balance. One method is atmospheric circulation, or the large-scale movement of air. Another, slower method occurs underwater, where ocean currents move heat along a “global conveyor belt” in a process called thermohaline circulation. Fueled by regional differences in warmth and salinity, this drives weather patterns at sea and on land. The global conveyor belt of ocean currents, aka “thermohaline circulation.” (Illustration: NASA) Sea-ice loss affects the process in two basic ways. First, warmer poles can disrupt Earth’s overall heat flow by changing its temperature gradient. Second, altered wind patterns push more sea ice to the Atlantic, where it melts into cold freshwater. (Seawater expels salt as it freezes.) Since less salinity means less density, melted sea ice floats rather than sinking like cold saltwater. And since thermohaline circulation needs cold, sinking water at high latitudes, this can halt the flow of warm, rising water from the tropics. 3. It insulates the air As cold as the Arctic Ocean is, it’s still warmer than the air in winter. Sea ice serves as insulation between the two, limiting how much warmth radiates up from the ocean. Along with albedo, this is another way sea ice helps maintain the Arctic’s chilly climate. But as sea ice melts and cracks, it becomes dotted with gaps that let heat escape. “Roughly half of the total exchange of heat between the Arctic Ocean and the atmosphere occurs through openings in the ice,” according to the National Snow & Ice Data Center. Older, multiyear sea ice can grow much thicker and sturdier than first-year ice. (Illustration: NASA) 4. It keeps methane at bay Heat isn’t all that can seep through weak sea ice. 
Scientists have long known that Arctic tundra and marine sediments contain large, frozen deposits of methane, posing a climatic risk if they thaw and release the potent greenhouse gas skyward. But in April 2012, researchers from NASA’s Jet Propulsion Laboratory discovered “a surprising and potentially important” new source of Arctic methane: the Arctic Ocean itself. Flying north of the Chukchi and Beaufort seas, the researchers found mysterious methane fumes that couldn’t be explained by typical sources like wetlands, geologic reservoirs or industrial facilities. Noticing the gas was absent over solid sea ice, they finally traced its source to surface waters exposed by broken ice. They still aren’t sure why there’s methane in Arctic seawater, but microbes and seabed sediments are likely suspects. “While the methane levels we detected weren’t particularly large, the potential source region, the Arctic Ocean, is vast, so our finding could represent a noticeable new global source of methane,” NASA’s Eric Kort said in a statement. “As Arctic sea ice cover continues to decline in a warming climate, this source of methane may well increase.” Gaps in sea ice can release methane into the atmosphere, scientists have discovered. (Photo: NOAA) 5. It limits severe weather It’s well-established that global warming boosts severe weather in general, but according to the NSIDC, sea-ice loss also favors bigger storms in the Arctic itself. Under normal conditions, unbroken swaths of sea ice limit how much moisture moves from the ocean to the atmosphere, making it harder for strong storms to develop. As sea ice dwindles, though, storm formation is easier and ocean waves can grow larger. “[W]ith the recent decline in summer sea ice extent,” the NSIDC reports, “these storms and waves are more common, and coastal erosion is threatening some communities.” Satellites spotted this unusually strong storm in the Arctic Ocean on Aug. 5, 2012. 
(Photo: NASA) In Shishmaref, Alaska, for example, years of fading ice have let waves eat a shoreline already softened by permafrost thaw. The sea is now invading the town’s drinking water, threatening its coastal fuel stores and even forcing its residents to consider relocation. At the same time, a swell in Arctic storms and waves could also create yet another feedback loop, damaging current ice and impeding new growth as it agitates the ocean. 6. It supports native people Shishmaref is an extreme case, but its residents aren’t alone in watching their home crumble. Nearly 180 Alaskan native communities have been identified as vulnerable to erosion, Smithsonian anthropologist Igor Krupnik said at a 2011 summit on Arctic climate change, and at least 12 have already decided to relocate to higher ground. Many Arctic people rely on seals and other native animals for food, yet the deterioration of sea ice can make it increasingly difficult and dangerous to pursue certain prey. Hunters must not only wait longer for ice to form, but must travel farther over mushier terrain. “Everywhere we asked people, they talked about increasing uncertainty,” Krupnik said. “They talked about irregular changes in weather and weather patterns, they talked about flooding and storms, they talked about new risks of going out on thin ice.” Inuit and other indigenous Arctic people often travel across sea ice by snowmobile. (Photo: NOAA) Farther offshore, the retreating ice is often deemed good news for the oil, gas and shipping industries, which are already jockeying for drilling rights and shipping routes in newly ice-free waters. Such activity could pose risks on its own — from whales killed by ship strikes to shores fouled by oil spills — yet may also be hindered by stronger storms and waves, thanks to the same declining sea ice that enabled it in the first place. 7. 
It supports native wildlife Sea-ice loss has made polar bears into poster children for climate change, and the shoe unfortunately fits. Like people, they sit atop the Arctic food web, so their plight reflects an array of ecological woes. Not only are they directly hurt by warming, which melts the ice rafts they use to hunt seals, but they also indirectly suffer the effects on their prey. Polar bears and other animals are struggling to adapt to less sea ice. (Photo: Continentalshelf.gov) Arctic seals, for instance, use sea ice as everything from a maternity ward and pup nursery to a cover for stalking fish and escaping predators. Walruses also use it as a place to rest and congregate, so its absence often forces them to overcrowd shorelines and swim greater distances to reach food. Caribou have reportedly fallen through thin sea ice while migrating, one of many threats the hardy herbivores face from climate change. Not all wildlife likes Arctic sea ice, though. Warm, open seas let migratory whales stay later in the summer; some bowheads from Alaska and Greenland have even recently begun mingling in the Northwest Passage. And less ice means more sunlight for phytoplankton, the base of the marine food web. Arctic algae productivity rose 20 percent from 1998 to 2009, according to NOAA, especially in giant blooms near “skylights” in the ice. Less sea ice also helps the Arctic Ocean absorb more carbon dioxide from the air, removing at least some of the heat-trapping gas from the atmosphere. But like most apparent perks of climate change, this silver lining has a cloud: Excess carbon dioxide is making parts of the Arctic Ocean more acidic, NOAA reports, a problem that’s potentially fatal to marine creatures such as shellfish, coral and some types of plankton.
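The ice-albedo positive feedback described in item 1 above can be sketched as a toy iteration: warming melts some ice, the darker exposed ocean absorbs more sunlight, and the extra absorbed heat melts more ice. This is a minimal illustration only; the ice fraction and the two sensitivity constants are made-up numbers, not values from any climate model.

```python
# Toy sketch of the ice-albedo positive feedback (illustrative numbers only).

def step(ice_fraction, warming):
    """One feedback round: warming melts ice; less ice means lower albedo,
    and the extra absorbed sunlight adds further warming."""
    melted = min(ice_fraction, 0.05 * warming)  # warming melts some ice
    extra_warming = 0.5 * melted                # darker ocean absorbs more heat
    return ice_fraction - melted, warming + extra_warming

ice, warming = 1.0, 1.0
for _ in range(10):
    ice, warming = step(ice, warming)

# After ten rounds, ice has shrunk and warming has grown beyond the
# initial push -- the loop amplifies itself.
print(ice, warming)
```

Even with tiny constants, each round of melting feeds the next, which is what makes the loop "positive" in the feedback sense.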
Science Fair Project Encyclopedia A liquid (a phase of matter) is a fluid whose volume is fixed under conditions of constant temperature and pressure, and whose shape is usually determined by the container it fills. Furthermore, liquids exert pressure on the sides of a container as well as on anything within the liquid itself; this pressure is transmitted undiminished in all directions. If a liquid is at rest in a uniform gravitational field, the pressure p at any point is given by - p = ρgz where ρ is the density of the liquid (assumed constant) and z is the depth of the point below the surface. Note that this formula gives the pressure relative to the free surface, where it is taken to be zero. Liquids change to gases at their respective boiling points and to solids at their freezing points. Via fractional distillation, liquids can be separated from one another, as each vaporises at its own boiling point. Cohesion between molecules of a liquid is insufficient to prevent those at the free surface from evaporating. It should be noted that glass at normal temperatures is not a "supercooled liquid", but a solid. See the article on glass for more details. The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
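To make the hydrostatic relation p = ρgz concrete, here is a minimal sketch that evaluates it. The density and depth used are illustrative values (fresh water, 10 m), not figures from the article.

```python
# Hydrostatic (gauge) pressure in a liquid at rest: p = rho * g * z.
# Values below are illustrative assumptions.

RHO_FRESH_WATER = 1000.0  # kg/m^3, approximate density of fresh water
G = 9.81                  # m/s^2, standard gravity

def gauge_pressure(depth_m, density=RHO_FRESH_WATER):
    """Pressure at depth_m below the free surface, taking the surface as zero."""
    return density * G * depth_m

# At 10 m depth: 1000 * 9.81 * 10 = 98100 Pa, roughly one extra atmosphere.
print(gauge_pressure(10.0))
```

Note this is gauge pressure: it is measured relative to the free surface, exactly as the formula in the article assumes.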
Understanding light-driven molecular switches 20 May 2010 Light-driven molecular switches are already used in technical devices such as LCD displays and storage media. Full comprehension of the processes at a molecular scale is required to increase their efficiency, but this knowledge had not been available to date. Through computer simulation studies, a consortium of theoretical chemists and physicists from the Ruhr-University Bochum and King’s College London has now managed to reconstruct the exact course of the light-driven molecular changes and gain insight into the switching process. This enables targeted chemical design of light-controllable nanotechnological devices. Dr. Marcus Böckmann, Prof. Dominik Marx (RUB) and Dr. Nikos Doltsinis (King’s College) have published their findings in “Angewandte Chemie International Edition.” The colour of light causes the molecule to switch over Chemically modified azobenzene was used for the computer simulation, which is based on the laws of quantum mechanics. The azobenzene molecule can adopt two forms and switch between them, with light of different colours triggering the switching processes. The researchers carried out a detailed computer simulation study of the switchover between the two molecular forms, attaining unprecedented insight at atomic resolution. Dr. Marcus Böckmann explained that it is important that the switching process is fast and highly efficient in all devices in which it is used. Recent experiments have shown that particular chemical modifications of azobenzene can significantly enhance this process. However, to date the reasons for this improvement have not been understood. The computer simulation study could explain the experimental results for the first time. The researchers reported that they have been able to establish a clear relationship between the structure and the switching properties of the molecule.
This is a decisive step for the chemical design of azobenzene-based light-driven nanotechnological devices and thus the development of improved light-controlled materials.
The Ocean: A Whole Lot of Heat Unlike the atmosphere, the ocean can absorb and release an enormous amount of heat with little change in temperature. As such, the ocean has an enormous impact on Earth's climate, absorbing warmth from the atmosphere during summer and releasing it back during winter. Ocean Conveyor Belt Within the ocean, rivers of water of different temperatures and salinities move heat around the globe. - As warm surface water moves from the tropics to the North Atlantic, it becomes saltier and denser as fresh water evaporates. In the far North Atlantic the water cools and sinks. - The dense, deep water in the North Atlantic flows all the way to the Southern Ocean, where it is joined by cold water sinking around Antarctica. This deep, cold water spreads out through the Indian and Pacific basins. - Cold, deep water slowly mixes in the Pacific and returns as a shallow, warm, less salty current to replace sinking waters in the North Atlantic. - The ocean conveyor belt holds about 15 times the water flowing in all the world's rivers. - The deep water mixes through the world's ocean in about 1,000 years. Streams of Water Fast-moving boundary currents move tremendous amounts of water and heat around the globe. For instance, the Gulf Stream, seen in this false-color image showing water temperatures, is more than 60 miles wide in many places, has an average depth of over 900 meters (about 3,000 feet) and moves as fast as 9 kilometers (nearly 6 miles) per hour. A Calming Influence Ocean waters absorb heat during the summer and release it during winter, reducing temperature differences between winter and summer and, in a similar way, between day and night. Away from the oceans, many landlocked areas, such as inland Siberia, experience seasonal temperature swings of over 55°C (about 100°F). By comparison, most of western Europe, which borders the ocean, has much milder winters and summers.
What creates these long and nearly straight grooves on Mars? Known as linear gullies, they appear on the sides of some sandy slopes during Martian spring, have nearly constant width, extend for as long as two kilometers, and have raised banks along their sides. Unlike most water flows, they do not appear to have areas of dried debris at the downhill end. A leading hypothesis -- actually being tested here on Earth -- is that these linear gullies are carved by chunks of carbon dioxide ice breaking off and sliding down hills while sublimating into gas, eventually disappearing entirely into thin air. If true, these natural dry-ice sleds may well provide future adventurers a smooth ride on cushions of escaping carbon dioxide. The above recently released image was taken in 2006 by the HiRISE camera on board the Mars Reconnaissance Orbiter, currently orbiting Mars. Image credit: LPL (U. Arizona)
From the outside, the platypus looks like a grade-school art project assembled by a kid too busy making spitballs to pay attention in class. The creature, which is classified as a mammal, has a duck's bill and webbed feet, lays eggs like a reptile, but has fur and rears its young on milk. Researchers say the platypus genome is equally cobbled together from bird, reptile and mammalian lineages. One more oddity: Males can deliver venom from tiny spurs on each hind limb. More info: Mixed-up platypus genome unscrambled — By John Roach, msnbc.com contributor Colossal squid has plate-sized eye In April 2008, scientists in New Zealand looked a thawing colossal squid in the eye and discovered that the eye is, well, colossal — about the size of a dinner plate. That makes it the largest animal eye on Earth. Fishermen caught the 1,000-pound creature last year in Antarctic waters and froze it intact for scientific study. Colossal squids can reach 46 feet in length and have tentacles equipped with suckers and hooks. Scientists believe the creatures can descend to 6,500 feet and are active, aggressive hunters. More info: Huge squid caught, could be biggest ever Aye-aye gives grubs the finger The aye-aye is a bushy-tailed primate from Madagascar with big eyes and bat ears. But call it funny-looking and it just might extend its extra-long middle finger in your general direction. The member of the lemur family otherwise uses the extended digit to fish out grubs from the crevices of trees. Captive aye-ayes such as the one shown here from Duke University are teaching scientists about the evolution of color vision. Learn more about the brainiac of all lemurs Star-nosed mole sniffs out food, fast The fleshy appendages that ring the snout of the star-nosed mole, shown here, make it one strange-looking creature.
But when it comes to eating, those 22 tentacles help the mole detect and devour food faster than the human eye can follow — in a fraction of a second. Researchers say the speedy feeding allows the mole to prey on small insect larvae that would otherwise be too energetically costly to eat. The creature lives and forages under the cover of marshes and wetlands along the east coast of North America. Burrowing toad is genetically different For an amphibian, the stocky and squat Mexican burrowing toad doesn't look all that strange, but it's actually unique. A global conservation program called EDGE of Existence ranks the toad as the most "evolutionarily distinct" amphibian in the world. A fruit bat, polar bear, killer whale, kangaroo and human are all more closely related to one another than the toad is to any other species, according to the program. The Mexican burrowing toad, as its name suggests, spends most of the year underground, coming out only after particularly heavy rains to breed in pools of water. Learn about other bizarre amphibians under threat Yeti crab lurks on the ocean bottom Named after the legendary shaggy man-beast that tromps through the snows in some of the world's tallest mountains, the Yeti crab blindly scurries about hydrothermal vents along a ridge at the bottom of the Pacific Ocean. First observed in 2005, the crab, officially named Kiwa hirsuta, sports a carpet of pale yellow hairs on its arms. Scientists suspect the crab uses those hairs either to farm bacteria or to feel its way around the seafloor for food and potential mates. More info: Scientists to list all species on Web Narwhals, the 'unicorn' whales Unicorns are purely mythical creatures, but the myths may have been inspired by narwhals. Most males and some females among the 2,200- to 3,500-pound whales sport an 8-foot-long appendage that emerges from the left side of their upper jaw.
Scientists recently discovered that the elongated tooth is packed with nerve endings, making it extraordinarily sensitive. The whales may use it to determine the salinity of water and search for food. Male narwhals are also known to rub their tusks together, presumably because it gives off a unique sensation. More info: Mystery of 'unicorn' whale solved Sucker-footed bats stick to Madagascar In January 2007, scientists announced the discovery of a new species of bat that uses suckers on its thumbs and hind feet to stick to broad-leafed plants such as the traveler's palm. The new species, Myzopoda schliemanni (left image), was found on the dry, western side of the African island nation of Madagascar and is closely related to another sucker-footed bat called Myzopoda aurita (right image) that lives in the humid eastern forests. Conservationists were heartened by the discovery because it suggests the bats can adapt to pioneering broad-leafed plants in deforested areas. Only about 8 percent of the island's original forest cover remains. Watch NBC video: What's killing all the bats? Long-eared jerboa hops onto the screen In December 2007, conservationists released the first known footage of an endangered rodent they've nicknamed the "Mickey Mouse of the Desert." Known more formally as the long-eared jerboa, the little critter has ears about one-third larger than its head, and legs that allow for hopping like a kangaroo. The International Union for Conservation of Nature lists the species as endangered. One threat: the domestic cat. More info: Mongolian 'Mickey Mouse' caught on tape Ligers, wholphins and grolar bears, oh my! Every now and again, trysts between two different species result in oddball offspring that capture the public's fascination. 
Ligers, which are a cross between a male lion and a female tiger, were immortalized in the 2004 cult movie "Napoleon Dynamite," whose main character, played by Jon Heder, describes the liger as "pretty much my favorite animal." (A real one is shown above.) Other popular hybrids include wholphins, which are a cross between false killer whales and Atlantic bottlenose dolphins; and the "grolar bear," a cross between a grizzly bear and polar bear. Watch NBC video of a liger | Whale-dolphin hybrid has baby wholphin | Hairy hybrid: Half grizzly, half polar bear
Almost everyone has experienced the Doppler effect, though perhaps without knowing what causes it. For example, if one is standing on a street corner and an ambulance approaches with its siren blaring, the sound of the siren steadily gains in pitch as it comes closer. Then, as it passes, the pitch suddenly lowers perceptibly. This is an example of the Doppler effect: the change in the observed frequency of a wave when the source of the wave is moving with respect to the observer. The Doppler effect, which occurs both in sound and electromagnetic waves—including light waves—has a number of applications. Astronomers use it, for instance, to gauge the movement of stars relative to Earth. Closer to home, principles relating to the Doppler effect find application in radar technology. Doppler radar provides information concerning weather patterns, but some people experience it in a less pleasant way: when a police officer uses it to measure their driving speed before writing a ticket.
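The siren example above follows the standard Doppler relation for sound, f_obs = f_src (v + v_o) / (v - v_s), where v is the speed of sound, v_o the observer's speed toward the source, and v_s the source's speed toward the observer. The sketch below is illustrative only; the 343 m/s sound speed and the siren numbers are assumptions, not values from the text.

```python
# Doppler shift for sound: f_obs = f_src * (v + v_o) / (v - v_s).
# Positive speeds mean motion toward the other party; numbers are illustrative.

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def observed_frequency(f_src, v_source=0.0, v_observer=0.0, v=SPEED_OF_SOUND):
    """Frequency heard by the observer for a moving source and/or observer."""
    return f_src * (v + v_observer) / (v - v_source)

# A 700 Hz siren approaching at 30 m/s sounds higher-pitched...
approaching = observed_frequency(700.0, v_source=30.0)
# ...and lower-pitched once it has passed (receding: negative speed toward us).
receding = observed_frequency(700.0, v_source=-30.0)
print(round(approaching), round(receding))  # 767 and 644 Hz
```

The sudden drop the listener hears as the ambulance passes is exactly the jump from the first value to the second.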
Atmospheric pressure is defined as the force per unit area exerted against a surface by the weight of the air above that surface. In the diagram below, the pressure at point "X" increases as the weight of the air above it increases. The same can be said about decreasing pressure, where the pressure at point "X" decreases if the weight of the air above it also decreases. Thinking in terms of air molecules, if the number of air molecules above a surface increases, there are more molecules to exert a force on that surface and consequently, the pressure increases. The opposite is also true, where a reduction in the number of air molecules above a surface will result in a decrease in pressure. Atmospheric pressure is measured with an instrument called a "barometer", which is why atmospheric pressure is also referred to as barometric pressure.
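The definition above, pressure as the weight of the overlying air per unit area, can be sketched numerically. The column mass used here (roughly 10,300 kg of air above each square metre at sea level) is a commonly quoted approximation, not a figure from this text.

```python
# Pressure as the weight of the air column per unit area: p = m * g / A.
# The column mass below is an illustrative approximation.

G = 9.81  # m/s^2, standard gravity

def pressure_from_column(mass_kg, area_m2=1.0):
    """Force per unit area exerted by the weight of the air above a surface."""
    return mass_kg * G / area_m2

p = pressure_from_column(10_300.0)
print(round(p))  # about 101,000 Pa, close to standard sea-level pressure
```

Halving the column mass halves the pressure, which is the point made in the text: fewer air molecules above a surface means less force on it.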
HTML is a language for describing Web documents and Web applications, along with a set of APIs (application programming interfaces) for interacting with in-memory representations of resources that use the HTML language.

Highlights since the previous AC meeting

On 23 April 2012, the HTML Working Group Chairs announced a draft stabilization plan with a timeline for advancing the HTML5 specifications to W3C Recommendation status. Given some substantive changes based on feedback, the Chairs plan to start a second Last Call review for the HTML5 specifications. Some further details:
- As part of the plan to move the HTML5 specifications through the second Last Call round and beyond, the HTML Working Group chairs have begun a search for new editors for the Recommendation-track versions of the HTML5 and HTML Canvas 2D Context specifications; for details, see the full announcement.
- At the same time, the W3C plans for standardization of the next version of HTML to take place in the HTML Working Group. W3C will be rechartering the group so that it may begin work on new features for that next version in parallel with work on taking the HTML5 specifications to Recommendation.
- Some members of the community have launched the W3C Web Hypertext Application Technology Community Group as an independent effort, with no formal relationship to the HTML Working Group. That W3C community group plans to continue separate-but-related activity on further development of HTML features using the "living specification" model, and it is expected that its activity will also help the re-chartered HTML Working Group in its work on the next Recommendation-track version of HTML.

In the meantime, the HTML Working Group has been continuing its work on processing issues from the initial Last Call round, and, on 29 March 2012, published ten updated working drafts:
- the HTML5 specification
- HTML5: Edition for Web Authors
- HTML5 differences from HTML4
- HTML+RDFa 1.1
- HTML Microdata
- HTML Canvas 2D Context
- HTML5: Techniques for providing useful text alternatives
- Polyglot Markup: HTML-Compatible XHTML Documents
- HTML to Platform Accessibility APIs Implementation Guide
- HTML: The Markup Language

Note that the HTML5 differences from HTML4 draft includes a section which lists all of the changes made to the HTML5 specification since publication of the initial Last Call Working Draft in May 2011. The HTML Working Group chairs continue to follow a documented Decision Policy for resolving issues in the group. The chairs also maintain a number of resources for tracking issues related to progress of the group's specifications:
- an HTML Working Group overall status page
- a list of all open issues, along with status of change proposals and deadlines
- a list of all current formal objections
- a list of issues that are closed but potentially may be reopened

The HTML Working Group had a face-to-face meeting on 3-4 May 2012 in Mountain View. The meeting minutes are available.

Michael[tm] Smith, W3C HTML Activity Lead
$Revision: 1.3 $ of $Date: 2012/10/02 19:39:00 $
The Nevada team has created environmental stress sensors by using rd29A and DREB1C promoters to express red fluorescent protein and other bio-fluorescent markers. When induced by environmental stress, plants carrying these genes can easily be detected by the farmer walking through his field or by a plane flying over acres of farmland. Promoter elements are short and easy to work with. They also allow for modification and specialization. DREB1C and rd29A both have multiple binding motifs in their promoter regions allowing for variation in expression levels and the particular stresses that induce them. 35S is a constitutive promoter that can be valuable for control groups in stress and other plant response research. Fluorescent Plant Image Taken From: http://www.edinformatics.com/inventions_inventors/genetic_engineering.htm
Atmospheric rivers are meteorological phenomena that we humans only discovered in 1998 and which supply about 30 to 50 percent of California's annual precipitation. In the NOAA satellite image above, the atmospheric river is visible as a thin yellow arm, reaching out from the Pacific to touch California. Or, more evocatively, reaching out to slap California silly with a gushing downpour. An atmospheric river is a narrow conveyor belt of vapor about a mile high that extends thousands of miles from out at sea and can carry as much water as 15 Mississippi Rivers. It strikes as a series of storms that arrive for days or weeks on end. Each storm can dump inches of rain or feet of snow. The real scare, however, is that truly massive atmospheric rivers that cause catastrophic flooding seem to hit the state about once every 200 years, according to evidence recently pieced together (and described in the article noted above). The last megaflood was in 1861; rains arrived for 43 days, obliterating Sacramento and bankrupting the state. As you might guess, climate change is also involved. Evidence suggests that warming global temperatures could increase the frequency of atmospheric rivers. That, combined with the 200-year event expected soon and the fact that we're learning so much more about these storms, means that you should expect to hear the phrase "atmospheric river" more often. Scientific American has two interesting stories on the phenomenon right now. The first, which I quote from above, is a blog post by Mark Fischetti. The second is a much longer feature story that gets into the forces that cause these storms and the climate change connection. Maggie Koerth-Baker is the science editor at BoingBoing.net. She writes a monthly column for The New York Times Magazine and is the author of Before the Lights Go Out, a book about electricity, infrastructure, and the future of energy. You can find Maggie on Twitter and Facebook.
The La Nina Pacific Ocean cooling event continues to weaken, according to Dr. John Christy, director of the Earth System Science Center at The University of Alabama in Huntsville. Temperatures in the tropics were cooler than seasonal norms for the 12th straight month.
- Earth's early ocean cooled more than a billion years earlier than thought: Stanford study (Wed, 11 Nov 2009, 14:07:52 EST)
- Solar cycle linked to global climate, drives events similar to El Nino, La Nina (Thu, 16 Jul 2009, 13:15:30 EDT)
- Tropical Atlantic sees weaker trade winds and more rainfall (Sun, 6 Feb 2011, 13:52:35 EST)
- A cooler Pacific may have severely affected medieval Europe, North America (Wed, 9 Jun 2010, 13:37:09 EDT)
- Polar oceans key to temperature in the tropics (Thu, 17 Jun 2010, 14:43:15 EDT)
Koch's Snowflake 2
From Math Images

Basic Description

The Koch Snowflake was first described in 1904, but it was not until 1967 that some of its most significant properties became apparent. That year, Benoit Mandelbrot published a paper explaining an earlier finding that as the length of a country's border is measured with increasingly fine measurement devices, the measured length does not approach a specific, "true" measurement of the border, as one would expect from increasingly precise measurement. Instead, the measured length keeps growing, suggesting that given a fine enough measuring device, there would be no upper limit on how long the measured length of the border could be. Mandelbrot's paper showed for the first time that this strange finding could be explained if the border of a country resembled a fractal: a shape largely defined by self-similarity. Fractals have since been found throughout nature in such places as coastlines and the branching of blood vessels. Mandelbrot's paper went on to show that this self-similarity could lead to other counter-intuitive properties in fractals as well. For instance, Mandelbrot showed that it doesn't make sense to think of most fractals as being 1- or 2-dimensional, but rather as something in between. To derive and demonstrate how self-similar shapes can behave in these ways, Mandelbrot didn't actually use coastlines, but rather used variations of the Koch Snowflake, because of its relative geometric simplicity. With this example, he was able to show how these surprising properties could exist in any fractal and to provide insights into how they might be measured and interpreted. These insights continue to form the basis of much of our analysis of fractals today.

Koch Curve Construction

The curve begins as a line segment and is divided into three equal parts. An equilateral triangle is then created, using the middle section of the line as its base, and the middle section is removed.

The Koch Snowflake is an iterated process. It is created by repeating the process of the Koch Curve on the three sides of an equilateral triangle an infinite number of times, in a process referred to as iteration (however, as seen with the animation, a convincingly complex snowflake can be drawn with only seven iterations, because the number of sides grows rapidly with each iteration). Thus, each iteration produces additional sides that in turn produce additional sides in subsequent iterations. An interesting observation to note about this fractal is that although the snowflake has an ever-increasing number of sides, its perimeter grows without bound while its area remains finite. The perimeter of the Koch Snowflake is multiplied by 4/3 at each iteration, while its area converges to 8/5 of the area of the original triangle. See the Iterated Functions page for more information about iterated functions.

A More Mathematical Explanation
Note: understanding of this explanation requires calculus in some sections.
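The perimeter and area behaviour described above can be checked numerically. In this sketch (my own, not from the Math Images page), the perimeter is multiplied by 4/3 each iteration, and step n adds (1/3)·(4/9)^(n−1) of the original triangle's area:

```python
from fractions import Fraction

def koch(n):
    """Perimeter and area after n iterations, for a starting triangle with
    perimeter 3 and area 1 (areas in units of the original triangle's area)."""
    perimeter = Fraction(3)
    area = Fraction(1)
    added = Fraction(1, 3)  # area added by the first iteration
    for _ in range(n):
        perimeter *= Fraction(4, 3)
        area += added
        added *= Fraction(4, 9)  # each later iteration adds 4/9 as much area
    return perimeter, area

print(float(koch(7)[0]))   # the perimeter keeps growing without bound...
print(float(koch(50)[1]))  # ...while the area approaches 8/5 = 1.6
```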
What do plants, snakes, molds, marine sponges, and cone snails have in common? They have helped develop medicines that save human lives. Biodiversity -- the variety of life on Earth -- is key to human survival. But plants, animals, and microorganisms are disappearing at unprecedented rates. What impact will this have on human health? Find out in this Earth Focus special report, produced in collaboration with Harvard Medical School's Center for Health and the Global Environment.

Time | May 9
From rainforests shrinking to cities mushrooming to deserts blooming, a new project from Google and Time magazine allows users to view timelapse im...
This will be a simplified guide, as this is high-school level (I'm assuming that means ages ~ 12-16). You'll need some idea of the available resource. Now, Alice Springs has quite a bit of open land around it (it's probably one of the most isolated settlements in the world, isn't it?). So there will be no issue there. Still, you need to do the maths. So get some figures from a database of solar energy per unit area - that's called insolation. That will tell you how much solar power is available. Then you need to find the typical efficiency of a PV system (that will be of the order of 10-15%), and of a solar-thermal system (that may be 40-60%). And then you'll need some idea of total demand. To get that, you'll need the number of people living in Alice Springs, the average electricity use per person (it may be around 0.3 kW - 1 kW), and the average hot-water use (for space heating and direct hot water use) - and that will depend on the local climate. Your local energy company, or local municipality, might have figures for that. So, once you've got electricity demand, you can find out how much area you'll need for PV. Power produced = efficiency x area of panels x average sun power per unit area. So if you know the amount of power you need, then you can use the efficiency and the average sun power per unit area to calculate the area of panels you need. You can then do a similar calculation with solar thermal energy, using the same figure for average sun power per unit area, but the different figures for demand and for efficiency. You might find this book useful: Australian Sustainable Energy - by the numbers - you can download the pdf for free from a link on that page.
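Rearranging the formula in the post gives the panel area directly: area = demand / (efficiency × insolation). All the numbers below are the rough ranges suggested in the post plus an assumed population figure, so the result is only an order-of-magnitude sketch:

```python
def panel_area_m2(demand_w, efficiency, insolation_w_per_m2):
    """Panel area needed so that efficiency * area * insolation = demand."""
    return demand_w / (efficiency * insolation_w_per_m2)

population = 25_000   # rough Alice Springs population (my assumption)
per_person_w = 600    # within the 0.3-1 kW range given in the post
demand_w = population * per_person_w

# ~220 W/m^2 average insolation and 12% PV efficiency are illustrative values
area = panel_area_m2(demand_w, efficiency=0.12, insolation_w_per_m2=220)
print(f"{area:,.0f} m^2 of PV")  # of the order of half a square kilometre
```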
Edited by William J. Winters (1), Thomas D. Lorenson (2), and Charles K. Paull (3)

Download all chapters with cover in a single PDF file for printing.

The northern Gulf of Mexico contains many documented gas hydrate deposits near the sea floor. Although gas hydrate often is present in shallow subbottom sediment, the extent of hydrate occurrence deeper than 10 meters below sea floor in basins away from vents and other surface expressions is unknown. We obtained giant piston cores, box cores, and gravity cores and performed heat-flow analyses to study these shallow gas hydrate deposits aboard the RV Marion Dufresne in July 2002. This report presents measurements and interpretations from that cruise. Our results confirm the presence of gas hydrate in vent-related sediments near the sea bed. The presence of gas hydrate near the vents is governed by the complex interaction of regional and local factors, including heat flow, fluid flow, faults, pore-water salinity, gas concentrations, and sediment properties. However, conditions appropriate for extensive gas hydrate formation were not found away from the vents.

Suggested citation: Winters, W.J., Lorenson, T.D., and Paull, C.K., eds., 2007, Initial Report of the IMAGES VIII/PAGE 127 Gas Hydrate and Paleoclimate Cruise on the RV Marion Dufresne in the Gulf of Mexico, 2-18 July 2002: U.S. Geological Survey Open-File Report 2004-1358, one DVD, online at http://pubs.usgs.gov/of/2004/1358/

Scientific party of RV Marion Dufresne (photograph courtesy of University of Lille 1).

(1) U.S. Geological Survey, Woods Hole Science Center, 384 Woods Hole Rd., Woods Hole, MA 02543 USA

To view files in PDF format, download a free copy of Adobe Reader. Any use of trade names is for descriptive purposes only and does not imply endorsement by the U.S. Government. There are Internet links to USGS collaborators and Web sites included in this report.
These links are only accessible if access to the Internet is available when browsing the DVD, and if those linked sites are operating.
This is the Dynkin Diagram of E8. See what is E8 for a description of the E8 root system. This is a set of roots in R8. The 8 nodes of the Dynkin diagram correspond to 8 roots of the E8 root system, which are a basis of the vector space. This is a picture of the 248-dimensional Lie algebra of E8. This Lie algebra is a complex vector space, of dimension 248. It is equipped with a Lie bracket operator: for X, Y in the Lie algebra, so is [X,Y]. There are 248 nodes in the picture, one for each basis element of the Lie algebra. Label the nodes 1,...,248, and the corresponding elements of the Lie algebra Z1,...,Z248. The Lie algebra E8 is generated by 16 elements Xred, Yred, Xblue, Yblue,..., Xblack, Yblack corresponding to the nodes in the Dynkin diagram. These are elements of the Lie algebra, and every element of the Lie algebra can be obtained by taking brackets [,] of these. In other words, the Lie algebra of E8 is generated by 8 pairs of elements (X,Y), one pair for each of the colored nodes in the Dynkin diagram. Suppose a node in the big diagram does not have a red edge, for example. This means the operators X and Y take Zi to a multiple of itself. If node i is connected to node j by a red edge, then X and Y take Zi to some linear combination of Zi and Zj. Main E8 page
How do astronomers know so much about distant stars, galaxies, and nebulae? By studying light. To determine what stars are made of, astronomers use telescopes that break starlight into its component colors on the electromagnetic spectrum. These colors correspond to specific chemical elements and, as Academy astronomer Bing Quock points out, are as distinctive as fingerprints. To determine the age of stars, scientists again examine their color, not to determine their chemical makeup but, rather, to determine their temperature. Blue-colored stars indicate hotter temperatures, suggesting that they are relatively young. Yellow, orange, and red-colored stars indicate cooler temperatures, suggesting that they are relatively older. Scientists use the speed of light to calculate the mind-boggling distances between planets, solar systems, stars, and galaxies. If light travels at a speed of 186,282 miles per second, then a light year equals approximately six trillion miles. It's estimated that light takes 100,000 years to travel from one end of our own galaxy, the Milky Way, to the other! In the last two decades, increasingly powerful observational tools have led astronomers to discover thousands of large orbiting bodies in the region of space beyond Neptune called the Kuiper Belt. One body, discovered in 2003 and named Eris in 2006, is larger than Pluto. Eris' discovery led scientists to debate the definition of a planet and whether a tenth planet should be added to the nine previously listed in our Solar System. In response, the field's governing body, the International Astronomical Union (IAU), established new, selective criteria to define planets. According to these new criteria, neither Pluto nor Eris qualifies as a planet: although both orbit the Sun, neither is of sufficient size and mass to sweep comets, meteors, and other space debris clear of its orbital path. So what do we call them now?
According to the IAU, both Pluto and Eris should now be referred to as dwarf planets.
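The article's light-year arithmetic is easy to verify: the distance is just the quoted speed of light multiplied by the number of seconds in a year.

```python
# Checking the "approximately six trillion miles" figure from the article.
MILES_PER_SECOND = 186_282            # speed of light, as quoted above
SECONDS_PER_YEAR = 365.25 * 24 * 3600

light_year_miles = MILES_PER_SECOND * SECONDS_PER_YEAR
print(f"one light year is about {light_year_miles:.2e} miles")  # ~5.88e12
print(f"across the Milky Way: {100_000 * light_year_miles:.2e} miles")
```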
July 10, 2012

"President Barack Obama has one. Comedian Stephen Colbert has one. Elvis Presley has one. Even computer software magnate Bill Gates has one. And now Bob Marley, the late popular Jamaican singer and guitarist, also has one. So what is it that each of these luminaries has? The answer: they each have a biological species that has been named after them. Paul Sikkel, an assistant professor of marine ecology and a field marine biologist at Arkansas State University, discovered and has just named after Marley a "gnathiid isopod", a small parasitic crustacean blood feeder that infests certain fish that inhabit the coral reefs of the shallow eastern Caribbean. Sikkel named the species Gnathia marleyi. All of the life stages of Gnathia marleyi are described by Sikkel and his research team in the June 6th issue of Zootaxa. This research was partly funded by the National Science Foundation (NSF). Sikkel said, "I named this species, which is truly a natural wonder, after Marley because of my respect and admiration for Marley's music. Plus, this species is as uniquely Caribbean as was Marley."
I'm a beginner with this whole programming thing. I got myself a new programming book with a CD with so many questions, and I'm stuck with this one. Could someone show me how the solution would be done using stdio.h? And please don't make the answer too complicated for a beginner :\

Produce a program which allows the user to enter up to twenty positive integers. The program should store a maximum of twenty numbers, and the entry of a negative number should terminate input. Functions should be used to:
a. Prompt the user for numbers, store them in the array and report the size of the array when the numbers have been entered.

If we write this for you, you won't learn anything, which defeats the purpose of the exercise. To do this, you'll need to use scanf() to input everything. You'll need to store the data in an array of 20. You'll also need to use a few loops. Which part are you having trouble with?

#include <algorithm> // min_element, max_element
#include <iostream>  // cin, cout
#include <list>      // list
#include <numeric>   // accumulate

int main()
{
    std::list<int> li; // the list can grow as needed, as opposed to an array
    int temp;

    std::cout << "Enter numbers, or \"stop\" to stop." << std::endl;
    while (std::cin >> temp) // when std::cin fails to extract a number, the while() exits,
        li.push_back(temp);  // otherwise `temp', which is a number, is added to the list

    // this part may look messy, but notice how little work we actually do
    // C++ components do most of the work for us
    std::cout << "\n\nSummary:\n" << li.size() << " numbers were given.\n";
    std::cout << "Largest number: " << *std::max_element(li.begin(), li.end()) << '\n';
    std::cout << "Smallest number: " << *std::min_element(li.begin(), li.end()) << '\n';
    std::cout << "Average: "
              << std::accumulate(li.begin(), li.end(), 0) / static_cast<float>(li.size())
              << std::endl;
}

Hi. I think you may be experiencing something which happens when you skip all the "easy" problems in the early chapters. When I arrive at a problem during self-study where I am totally stumped, it means that I have missed crucial concepts and must back up for review. Once I am lost, I am not going to get any less lost going forward, so trying to do it anyway is pointless. You may need to back up and then take it slower. Try to work out all the end-of-chapter exercises. It may be "boring", but it is the best way to move your skills forward solidly.

You won't believe this, but I spent 9 hours and 38 minutes trying to solve the question. I give up, like, seriously. Could someone just write the answer for me and show me the technique :| my brain is hanging upside down 0.o

#include <stdio.h>

int main(void)
{
    int set[20], size = 0;
    int i; /* iterator used in for loops */
    int minimum, maximum, sum;
    double average;

    printf("Enter up to 20 numbers.\n");
    printf(" Terminate with a negative number if you wish to end early: \n");
    while (size < 20)
    {
        scanf("%d", &set[size]);
        if (set[size] < 0)
            break;
        size++;
    }
    printf("%d numbers have been entered\n", size);

    minimum = set[0];
    for (i = 1; i < size; i++)
        if (set[i] < minimum)
            minimum = set[i];
    printf("The minimum of these numbers is %d\n", minimum);

    maximum = set[0];
    for (i = 1; i < size; i++)
        if (set[i] > maximum)
            maximum = set[i];
    printf("The maximum of these numbers is %d\n", maximum);

    sum = 0;
    for (i = 0; i < size; i++)
        sum += set[i];
    average = (double)sum / (double)size;
    printf("The average of these numbers is %f\n", average);

    return 0;
}

Enter up to 20 numbers.
 Terminate with a negative number if you wish to end early: 
1 2 3 4 5 -1
5 numbers have been entered
The minimum of these numbers is 1
The maximum of these numbers is 5
The average of these numbers is 3.000000

The recommended technique is to break down a large problem into a series of smaller steps. Don't just look at the whole thing and see a mountain. Break it into stages and tackle them one at a time. Some steps may be less obvious than others, and may take longer. By all means ask for help with the tricky parts. The main thing is to consider the design first. Don't think in terms of lines of code, or syntax. Think in terms of what actions need to be performed in order to carry out each step. Write those down in words, in ordinary English. Only after you have at least some sort of design should you begin to write any code.

Thank you for your time. The hard part was how I could allow the user to enter up to twenty positive integers - I had no idea how to make that work, I've never done it before, but I'll start analyzing the answer now so I can understand the whole concept :)
Calc includes several commands which interpret vectors as sets of objects. A set is a collection of objects; any given object can appear only once in the set. Calc stores sets as vectors of objects in sorted order.

Objects in a Calc set can be any of the usual things, such as numbers, variables, or formulas. Two set elements are considered equal if they are identical, except that numerically equal numbers like the integer 4 and the float 4.0 are considered equal even though they are not "identical." Variables are treated like plain symbols without attached values by the set operations; subtracting the set ‘[b]’ from ‘[a, b]’ always yields the set ‘[a]’ even though if the variables ‘a’ and ‘b’ both equaled 17, you might expect the answer ‘[]’.

If a set contains interval forms, then it is assumed to be a set of real numbers. In this case, all set operations require the elements of the set to be only things that are allowed in intervals: real numbers, plus and minus infinity, HMS forms, and date forms. If there are variables or other non-real objects present in a real set, all set operations on it will be left in unevaluated form.

If the input to a set operation is a plain number or interval form a, it is treated like the one-element vector ‘[a]’. The result is always a vector, except that if the set consists of a single interval, the interval itself is returned instead.

See Logical Operations, for the in function which tests if a certain value is a member of a given set. To test if the set ‘A’ is a subset of the set ‘B’, use ‘vdiff(A, B) = []’.

The V + (calc-remove-duplicates) [rdup] command converts an arbitrary vector into set notation. It works by sorting the vector as if by V S, then removing duplicates. (For example, [a, 5, 4, a, 4.0] is sorted to ‘[4, 4.0, 5, a, a]’ and then reduced to ‘[4, 5, a]’.) Overlapping intervals are merged as necessary. You rarely need to use V + explicitly, since all the other set-based commands apply V + to their inputs before using them.

The V V (calc-set-union) [vunion] command computes the union of two sets. An object is in the union of two sets if and only if it is in either (or both) of the input sets. (You could accomplish the same thing by concatenating the sets with |, then using V +.)

The V ^ (calc-set-intersect) [vint] command computes the intersection of two sets. An object is in the intersection if and only if it is in both of the input sets. Thus if the input sets are disjoint, i.e., if they share no common elements, the result will be the empty vector ‘[]’. Note that the characters V and ^ were chosen to be close to the conventional mathematical notation for set union and intersection.

The V - (calc-set-difference) [vdiff] command computes the difference between two sets. An object is in the difference ‘A - B’ if and only if it is in ‘A’ but not in ‘B’. Thus subtracting ‘[y,z]’ from a set will remove the elements ‘y’ and ‘z’ if they are present. You can also think of this as a general set complement operator; if ‘A’ is the set of all possible values, then ‘A - B’ is the "complement" of ‘B’. Obviously this is only practical if the set of all possible values in your problem is small enough to list in a Calc vector (or simple enough to express in a few intervals).

The V X (calc-set-xor) [vxor] command computes the "exclusive-or," or "symmetric difference" of two sets. An object is in the symmetric difference of two sets if and only if it is in one, but not both, of the sets. Objects that occur in both sets "cancel out."

The V ~ (calc-set-complement) [vcompl] command computes the complement of a set with respect to the real numbers. Thus ‘vcompl(x)’ is equivalent to ‘vdiff([-inf .. inf], x)’. For example, ‘vcompl([2, (3 .. 4]])’ evaluates to ‘[[-inf .. 2), (2 .. 3], (4 .. inf]]’.

The V F (calc-set-floor) [vfloor] command reinterprets a set as a set of integers. Any non-integer values, and intervals that do not enclose any integers, are removed. Open intervals are converted to equivalent closed intervals. Successive integers are converted into intervals of integers. For example, the complement of the set ‘[2, 6, 7, 8]’ is messy, but if you wanted the complement with respect to the set of integers you could type V ~ V F to get ‘[[-inf .. 1], [3 .. 5], [9 .. inf]]’.

The V E (calc-set-enumerate) [venum] command converts a set of integers into an explicit vector. Intervals in the set are expanded out to lists of all integers encompassed by the intervals. This only works for finite sets (i.e., sets which do not involve ‘-inf’ or ‘inf’).

The V : (calc-set-span) [vspan] command converts any set of reals into an interval form that encompasses all its elements. The lower limit will be the smallest element in the set; the upper limit will be the largest element. For an empty set, ‘vspan([])’ returns the empty interval ‘[0 .. 0)’.

The V # (calc-set-cardinality) [vcard] command counts the number of integers in a set. The result is the length of the vector that would be produced by V E, although the computation is much more efficient than actually producing that vector.

Another representation for sets that may be more appropriate in some cases is binary numbers. If you are dealing with sets of integers in the range 0 to 49, you can use a 50-bit binary number where a particular bit is 1 if the corresponding element is in the set. See Binary Functions, for a list of commands that operate on binary numbers. Note that many of the above set operations have direct equivalents in binary arithmetic: b o (calc-or), b a (calc-and), b d (calc-diff), b x (calc-xor), and b n (calc-not), respectively. You can use whatever representation for sets is most convenient to you.

The b u (calc-unpack-bits) [vunpack] command converts an integer that represents a set in binary into a set in vector/interval notation. For example, ‘vunpack(67)’ returns ‘[[0 .. 1], 6]’. If the input is negative, the set it represents is semi-infinite: ‘vunpack(-4) = [2 .. inf)’. Use V E afterwards to expand intervals to individual values if you wish. Note that this command uses the b (binary) prefix key.

The b p (calc-pack-bits) [vpack] command converts the other way, from a vector or interval representing a set of nonnegative integers into a binary integer describing the same set. The set may include positive infinity, but must not include any negative numbers. The input is interpreted as a set of integers in the sense of V F; note that a simple input like ‘[100]’ can result in a huge integer (‘2^100’, a 31-digit integer, in this case).
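The binary-number representation of sets described above is easy to mirror in Python, where an unbounded int plays the role of the bitmask; union, intersection, difference, and symmetric difference then become the |, &, & ~, and ^ operators, matching the b o, b a, b d, and b x pairings in the text. This is an illustrative sketch, not Calc code:

```python
def pack(elements):
    """Set of nonnegative integers -> bitmask (the idea behind b p / vpack)."""
    mask = 0
    for e in elements:
        mask |= 1 << e
    return mask

def unpack(mask):
    """Bitmask -> sorted element list (the idea behind b u / vunpack),
    for finite, nonnegative masks only."""
    return [i for i in range(mask.bit_length()) if mask >> i & 1]

# pack({0, 1, 6}) == 67, matching the vunpack(67) example in the text
a, b = pack({0, 1, 6}), pack({1, 6, 7})
print(unpack(a | b))   # union
print(unpack(a & b))   # intersection
print(unpack(a & ~b))  # difference
print(unpack(a ^ b))   # symmetric difference
```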
Henry Markram, director of the Blue Brain project, says the mysteries of the mind can be solved soon. Mental illness, memory, perception, working memory and fluid intelligence: they're made of neurons and electric signals, and he plans to find them with a supercomputer that models all of the brain's 100,000,000,000,000 synapses! (From TED)

The Blue Brain Project is a supercomputing artificial intelligence (AI) project that is modelling the mammalian brain (its neocortex) to precise cellular detail using computer chips. They started with a rat's brain. Now they are aiming to fully model the human brain and, with this computer brain, recreate the universe that each of us experiences! Take the 10 minutes it takes to watch this video. It gives one the idea that we are all caught in our own subjective bubble, and that the vast universe around us, as we experience and know it, is simply a construction of our brain's electrical patterns. Do you agree with this 'subjective bubble' idea? What else could our experience of reality be based on other than electrical activity in our own brains, and our own 'mental models' of what is going on 'out there'? The 2,000-microchip Blue Gene machine is capable of processing 22.8 trillion operations per second, just enough to model a 1-cubic-mm column of rat neocortex! The human neocortex is nearly 2 m² in area and 2 mm thick, and its basic processing units, the 'cortical columns', are more complex.
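The post's own numbers allow a back-of-envelope scaling estimate: if 22.8 trillion operations per second models 1 mm³ of rat neocortex, scaling purely by volume to a human neocortex of roughly 2 m² × 2 mm (and ignoring the post's caveat that human cortical columns are more complex, so this is a lower bound at best) gives:

```python
# Back-of-envelope compute estimate from the figures quoted in the post.
ops_per_mm3 = 22.8e12   # Blue Gene throughput per 1 mm^3 of rat cortex
area_m2 = 2.0           # human neocortex area, ~2 m^2 (from the post)
thickness_m = 0.002     # human neocortex thickness, ~2 mm (from the post)

volume_mm3 = area_m2 * thickness_m * 1e9  # cubic metres -> cubic millimetres
total_ops = ops_per_mm3 * volume_mm3
print(f"{volume_mm3:.1e} mm^3 of cortex -> ~{total_ops:.1e} ops/s")
```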
How do snowflakes know how to be symmetrical? Are there any circumstances in which they can be persuaded to be asymmetrical? Answer: The snowflakes that you can see falling slowly from the sky are loose collections of tiny snow crystals. These snow crystals are indeed hexagonally symmetrical and can develop the elegant, feathery star shapes that are so beloved of Christmas card designers. However, snowflakes are not themselves symmetrical. Unfortunately, confusion often arises because snow crystals are also commonly referred to as snowflakes. Snow crystals are hexagonally symmetrical because their constituent water molecules initially join up in a set of interlocking rings of six molecules as the water freezes. A snow crystal will contain more than a million trillion water molecules, but because the pattern from which the crystal is assembled is hexagonally symmetrical, the final structure will be too. Within that hexagonal symmetry, snow ...
Search our database of handpicked sites Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest. You searched for We found 15 results on physics.org and 217 results in our database of sites (216 are Websites, 0 are Videos, and 1 is an Experiment). Search results on physics.org Search results from our links database Revision site aimed at UK GCSE level covering energy transfer and energy sources. Well presented with animated diagrams and some questions to try. A couple of pages about energy transfer and efficiency. Aimed at GCSE revision but useful for anyone. A good explanation of radiation, convection and conduction. An activity to explore energy transfer and storage. Basic information and equations relating to potential and kinetic energy, with details of an experimental procedure for demonstrating energy transfer. Simulation of a falling ball with adjustable ... An index page for lots of information relating to heat transfer topics such as conduction and convection. A clear explanation of heat transfer through conduction, convection and radiation. Uses flash. Breaking down the physics of a bungee jump, including energy transfer, Hooke's law and Newton's laws. Conduction is heat transfer by means of molecular agitation within a material without any motion of the material as a whole. A revision guide for heat transfer covering convection, cooling and radiation suitable for children 14-16. Showing 1 - 10 of 217
What drives monarch butterflies to undertake a mass migration, traveling thousands of miles to pine groves in Mexico? New research published this week in the journal PLOS Biology takes a look at a complex circadian clock mechanism in the butterfly brain, a molecular tool that allows monarch butterflies to use the position of the sun for navigation even as it moves across the sky. A second paper in the journal PLOS One examines the genetics involved in the migration mechanism. In this segment, Ira talks with Steven M. Reppert, professor at the University of Massachusetts Medical School and Chair of Neurobiology there, about the mechanisms of monarch migration. Produced by Karin Vergoth
9th October 2002

This section concentrates on making your programs more error-free. It emphasises the importance of structured design and testing of programs, and of being clear at each stage about what you are doing. The algebra of GAUSS translates almost directly from the page into code, but there are few checks to ensure that your algebra is correct. This section aims to correct that.

1 Programming methods

Because GAUSS is tolerant in the range of errors and mistakes it will let pass, a systematic approach to writing code is important: a program should be designed rather than just developed. In a structured language like GAUSS, paper solutions will tend to resemble the finished code. There are two main approaches to program design: top-down and bottom-up.

1.1 Top-down design

To econometricians used to dealing with packages, this is the most logical approach. The idea is to write down an algorithm; then take each part of that algorithm and write down an algorithm for it; then find algorithms for all the elements of the sub-algorithms; and so on. This progressive approach is called step-wise refinement. For example, consider writing a program to run OLS regressions on a data set. The first algorithm might be

(1) Get options
(2) Read data
(3) Calculate results
(4) Print results

Now refine stage (3):

(3.1) Get x and y matrices from dataset
(3.2) Calculate coefficients
(3.3) Calculate statistics

and then (3.3):

(3.3.1) Find TSS, ESS, RSS
(3.3.2) Calculate s
(3.3.3) Calculate standard errors and t-stats
(3.3.4) Calculate R2

The first stage is similar to the instructions that would be given to, say, TSP. The difference with GAUSS is that all the sub-stages need to be written as well. On the other hand, it is becoming clear that the problem degenerates rapidly into a simple set of tasks.
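The refined algorithm maps almost line-for-line onto a main program whose stages are procedure calls. A minimal sketch, assuming user-written procedures with these illustrative names (they are not GAUSS built-ins):

```
/* Top-down OLS skeleton: each stage of the design
   becomes one (user-written) procedure call.      */
{vnames, infile} = GetOptions;            /* stage (1) */
{x, y} = ReadData(infile, vnames);        /* stage (2) */
{b, se, r2} = CalcResults(x, y);          /* stage (3) */
PrintResults(vnames, b, se, r2);          /* stage (4) */
```

Each of these procedures is then refined in turn, with the sub-stages of the design becoming the bodies (or further procedure calls) inside them.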
Other problems will of course be more difficult, but the principle of breaking down a problem into more detailed (but also simpler) actions is clear. Also clear is that much of this can be translated directly into GAUSS code. The first algorithm might almost be the main section of a program, with the tasks being procedure calls. This is why a structured approach to design improves the quality of programs: as well as forcing the programmer to write down all the steps to be taken (and so, hopefully, all the pitfalls to be avoided), the correlation between the outline of the original algorithm and the final program structure aids verification of the program. 1.2 Bottom-up design The bottom-up approach takes the opposite tack. Problems are solved at the lowest level, and programs are built up by using earlier solutions as building blocks. In the above example, the first task might be to design a procedure to take as input TSS, ESS, n and k and produce R2, s2, and standard errors. When this procedure is fully tested, a procedure taking as input the x'x and x'y matrices will use the first routine in the production of OLS estimates, variances, and significance levels. This procedure is then fully tested and only when it functions correctly does consideration of the next stage begin; but then in this next stage, the written procedures can be taken as proven code. This approach, while as valid as top-down design, is not often the immediate choice, particularly when the programmer is used to working at a much higher level of abstraction (as in econometric packages). It also gives less of a "feel" to a program's structure. On the other hand, testing procedures built from the bottom up is usually simpler. Procedures are tested at the lowest possible level, and only the procedure being built is being tested. This is much more reliable than trying to test a complete program. The choice of a design method is up to the programmer, and most programs have an element of both. 
Generally, the top-down style works best on large projects which need a disciplined approach, but when it comes to actually programming rather than designing, starting from the simplest bits of code and working outwards is usually the most effective (and safest) route. However, most programmers will over time build up their own libraries of useful little functions, and so the bulk of design will tend to concentrate on the "grand scheme" side.

2 Comments

One of the most important aids to writing better programs is the use of comments. Comments generate no executable code and have no effect whatsoever on the performance of the program. They are entirely for the programmer's benefit. How then do they make programs safer? By allowing complicated pieces of code to be explained in the program; by identifying what variables are used where; by proclaiming the purpose of procedures; in short, by encouraging descriptions within the program of what a piece of code does, why it does it, what variables it uses, and what results it gives out. A comment is anything enclosed in a slash-asterisk combination:

/* this is a comment */

/* a = b + c; */
/* so is the above instruction as it is enclosed in comment marks */

The start of a comment is marked by /*, the end by */. Anything enclosed in these marks will be treated as a comment and ignored by the program: the instruction in the above example no longer exists as far as the program is concerned. Comments can be nested; that is, one comment can contain another comment. This is useful when, for example, the user wants to temporarily "block out" a piece of code to test something:

a = b + c;
/******* remove this bit of code temporarily
Mutate (b, c);    /* proc to do something to b and c */
d = b * c;
end of temporary removal *******/

Having multiple asterisks after the start or before the end of the comment block is fine by GAUSS; all it checks for is the /* or */ combination. Everything else within these two is ignored.
This is one of the few places in GAUSS where spacing is important. The comment

/* this is a comment with a space in the final marker * /

will lead to the error message "Open comment at end of file" because GAUSS will not recognise "* /" as the intended token "*/".

2.1 When to use comments

Too many comments in a program are not as bad as too few, but they may distract from the code itself; striking the right balance is difficult. Generally, comments amongst code are only wanted where a complex operation is being carried out, or where the control structure of the program is not immediately obvious, or where a particular variable value is not clear; basically, anywhere where a new reader might be confused by some aspect of the program. The programmer may also want to include comments on variables as they are declared, saying what their purpose is, their type, and so on for his own reference. Comment blocks can be used to keep track of programs. A comment of some sort should always be included at the start of the program, identifying the program's purpose and possibly also authorship details. Where procedures are declared, comments become very important. Because a GAUSS procedure header only says how many variables are returned, a comment saying which of the local variables and parameters are returned would be useful - along with a note of any global variables used or updated. As GAUSS variables can change size and form very easily, comments explaining the types of the variables expected as parameters and of those returned are often useful. Finally, a note of what the procedure actually does makes the whole block much more readable. Consider the following comment block.
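A header block of the kind meant here might read as follows (the parameter names and the method note are illustrative only):

```
/*  TestColl - tests stacked submatrices for multicollinearity
**
**  coll = TestColl(xmats, nSubs);
**
**  Input:   xmats  (nSubs*k x k) matrix: nSubs square submatrices
**                  concatenated vertically
**           nSubs  scalar: the number of submatrices
**  Output:  coll   scalar boolean: true (1) if multicollinearity
**                  is found in any submatrix, else false (0)
**
**  Globals used: none
**  Method: rank/condition-number test applied to each submatrix
*/
```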
The procedure TestColl is used to test each of the nSubs square submatrices, concatenated vertically into one matrix, for multicollinearity. The block consists of a one-line description of the procedure's function; details of the input and output parameters; and a reference to the mathematical basis of the function. It also informs us that the procedure does not access any (user-defined) global variables. The aim of a block such as this is twofold. Firstly, the author of the procedure can check its function against the claims in the comment block (i.e. that given the correct sort of data it will return a boolean variable set to true if multicollinearity is found in any submatrix). Secondly, the programmer wanting to use this procedure can find out what the procedure does and what the types of the input and output parameters are without having to study the procedure in detail.

3 Testing

The laxity of the GAUSS syntax, the weak typing of variables, and the poor handling of input all contribute to making testing a necessity for all but the smallest programs. We consider here some aspects of testing programs. However, it should be remembered that testing is inherently Popperian: a program can only be proved not to work by testing; it cannot be proved to work. Essentially, there are three things that can go wrong with a program: it is given the wrong instructions; the instructions are entered wrongly; or the data it uses is wrong or inappropriate. All three areas should at least be considered before a program is pronounced "finished".

3.1 Semantic errors

Semantic errors are those where the program does not work as intended because it has been told to do the wrong thing. For example, two instruction sequences may both be valid programs, yet one correctly calculates the variance of an IV estimate of beta while the other does - well, something else. GAUSS cannot detect these errors. It is entirely up to the programmer to find them.
This is where a rigorous approach to defining the problem and implementing the solution will make a difference. If a program is well structured and commented, then the actions of each part of a program can be checked against the claimed result; this claimed result should itself be checked against the solution algorithm to see if the result was intended. Procedurisation simplifies this somewhat by turning sections of the code into "black boxes" which can be tested independently and then, once they appear to work, can be taken for granted to some extent. Small sections of code should be tested where possible; waiting until a program is finished before testing commences may well be counterproductive if the program is large and complex. Semantic errors are the most difficult to find because there is nothing for GAUSS to report as an error. The program is only "wrong" in the sense that it does not work as intended. Unfortunately, some errors will still slip by - particularly those to do with matrix size and orientation. In one program I missed a transpose operator; the fact that a number of calculations were therefore being done on a row vector when they should have been using column vectors and scalars left GAUSS unfazed. As the results were sensible (largely due to luck in the way the matrix was indexed), the error did not come to light for some months, until the program was altered and an associated operation failed. The most obvious way to test for this is to create test data; for example, testing an IV estimator might involve creating a number of observation sets with different variances and correlations between the variables. One test data set might have zero error terms, to test the model in the "ideal" case; another might have instruments uncorrelated with explanatory variables; another leads to a singular covariance matrix to see if the program picks that error up; and so on.
GAUSS does have a run-time debugger, but this is signally difficult to use and rarely informative. The easiest way to test particular portions of code is to use PRINT statements to inform the user where the program has got to and what values any variables of interest currently have. For example, suppose an unexpected result seems to arise from the code

a = b * c;
IF b > c;
    a = ThisProc(a, b, c);
ELSE;
    a = ThatProc(a, b, c);
ENDIF;

Then this could be augmented with

a = b * c;
PRINT "xtestx a is currently size " ROWS(a) COLS(a);
PRINT "xtestx Current value of a: " a;
IF b > c;
    PRINT "xtestx IF section; b>c";
    a = ThisProc(a, b, c);
ELSE;
    PRINT "xtestx ELSE section, b<=c";
    a = ThatProc(a, b, c);
ENDIF;
PRINT "xtestx Out of IF statement: new value of a:" a;

This seems like overkill, but it is often the easiest and quickest way to find errors. Note that the PRINT statements write "xtestx" before the error codes. Adding easily identifiable text fragments makes it easier to see which statements are test messages. It also makes it easier to find them later when the program works and they need to be removed.

3.2 Syntactic errors

Syntactic errors - mistakes in the coding of a program - are usually fairly simple to discover. GAUSS will pick up some when it prepares to run a program; others will only come to light when a particular piece of code is executing. For example, if a procedure does not return the number of variables claimed in the procedure declaration, this will only be picked up when the procedure is called. However, it will be discovered at some point, and so testing should make sure that all the instructions in the program are called at some time during the test stage. Again, PRINT statements and test data can be helpful in finding these errors.

3.3 User errors

GAUSS's worst feature is undoubtedly its handling of user input. The CON command is extremely user-unfriendly, and GAUSS's file handling rests on shaky assumptions about files existing.
The CON command assumes that the program instructs the user well and that the user neither makes mistakes nor changes his mind during the entry of streams of numbers. These are unjustified assumptions in most practical cases. If a program expects a stream of numbers, then the authors suggest replacing CON with CONS, the string input function. This allows the user to edit the list of numbers as they are entered. The output from CONS can then be converted using the function STOF, which converts a string full of numbers into a column vector. The two approaches are then equivalent - unless the user types in fewer than r*c numbers. However, the second form is much more usable in almost every case. On files, GAUSS generally assumes that files exist. Therefore, GAUSS will often crash if files are not found. This tends to be more annoying than a serious problem. If, however, a file not being found would have a devastating impact, then file opening should be carried out at the beginning of the program - or at least, before any permanent work is carried out. There is no "exist" command in GAUSS, but the FILES command provides a feasible if irritatingly awkward way to test for existence. In GAUSS 4.0 FILES is deprecated in favour of FILESA and FILEINFO. Once the program has its input, that input may need to be tested. The amount and rigour of this depends on the type of input. For example, one program used by the authors uses information in one file to analyse another file. Because the information in the first is crucial to successful management of the second, the program will not accept an information file which it considers is inconsistent with the data file. A program should be able to deal with all kinds of user input; anything it cannot deal with should be weeded out and thrown away. Testing a program only against sensible inputs is often not good enough, especially if the program is to be used by other people.
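A sketch of the substitution (r and c are the intended dimensions of the input matrix; a real program would also check how many numbers were actually entered):

```
/* Direct entry: the user must type exactly r*c numbers, one by one */
x = CON(r, c);

/* Safer: read an editable line as a string, then convert it */
str = CONS;
v = STOF(str);           /* column vector of the numbers typed */
x = RESHAPE(v, r, c);    /* valid only if the user typed r*c numbers */
```

The CONS form lets the user see and correct the whole line before pressing return, which is where most entry mistakes are caught.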
Making a program robust to errors in data entry can require some thought as to what might actually be entered. Unlike syntactic or semantic errors, some error in the user input may be allowable. A procedure of mine expects positive integers up to a certain number. It does not check the input string for dud entries, because the relevant code ignores them anyway. Foolproof routines for checking data are not always desirable. In the 1.6-million-iteration program described in an earlier section, only essential variables are checked for missing values; missing values in other variables are ignored because they do no harm, and the time wasted checking for them would not be well spent. |Copyright © 2002 Trig Consulting Ltd|
Europa is warm because of tides. An example of what happens is shown in this drawing of the Earth and Moon. In this picture, the Earth is pulling on the Moon, and the Moon is pulling on the Earth with gravity. The Moon pulls more strongly on the side of the Earth facing the Moon than on the side facing away from the Moon. Because planets are not perfectly rigid, they change their shape when they are pulled this way. In the case of Europa, the push-me, pull-me effect is caused by the pull of Jupiter on one side, and the other moons of Jupiter on the other side. (Europa is an inside moon.) The volcanism of Jupiter's moon Io is caused by the very same effect. If a body is very rigid or is not held together well, instead of getting pushed and pulled out of shape, the tidal forces can actually tear the body in half, as with comet Shoemaker-Levy 9.
The permanently frozen ground around our poles holds, it seems, a huge amount of methane, a very powerful greenhouse gas. Climate scientists wonder what will happen if there is a major melt. And there are signs of such a melt taking place, including methane bubbling up from Alaskan lakes (above). The scary thing about climate change is that in the past it has not happened gradually, but by sudden jumps. If the planet is going to warm dramatically it will probably take place over the course of a decade or less, far too fast for us to do anything about it by the time it starts.
The problem of dealing with geological time, and with events that occur in very short intervals thereof, is that for those of us on a faster life schedule, “almost instantly” can still be a period of years. That thought has struck me quite frequently as, over the past year, I have watched the earthquakes that are happening around the Myrdalsjokull glacier in Iceland, with the Katla volcano rumbling beneath it. I first noted how the earthquakes in the region were focusing down into the caldera of the volcano last May. Since then I reported occasionally as the underlying earthquakes increased in intensity to levels above 3.0, and started to align along possible fissures that might lead from the magma chambers up to the surface. In the end the volcano did not seriously erupt, but nor did the patterns change that much, and I have kept an eye on the region over the months since. It is worth noting that the pattern of quakes continues to focus in the region of Katla, and that while they vary in intensity and frequency (some days there may only be one or two quakes) they are still ongoing. Pattern of earthquakes in Iceland in the last 24 hours (Icelandic Met Office). If one clicks on the map at the site over the clump of quakes at the bottom (those on the right at the base of the peninsula are caused by water injection from the geothermal program in Iceland) then one gets this picture: Quakes in the Katla region of Iceland in the last 24 hours (Icelandic Met Office). This is about as dispersed a pattern as there has been over the last few months in the region. Eyjafjallajokull was the volcano that erupted two years ago, and brought some disruption to Europe. Katla will likely go at a somewhat greater scale, and likely quite soon – in geological time. For those with shorter attention spans it remains hard to tell whether that will be in 6 months, or 6 years. We’ll just have to keep watching.
MODERATOR INTERJECTION: Post moved from this thread. I see this thread could use an update about WMAP. From NASA's own site about WMAP. Sound waves in the early universe? SOUND is the KEY, like I have been suggesting all along. "Much of what WMAP reveals about the universe is because of the patterns in its sky maps. The patterns arise from sound waves in the early universe. As with the sound from a plucked guitar string, there is a primary note and a series of harmonics, or overtones. The third overtone, now clearly captured by WMAP, helps to provide the evidence for the neutrinos." So there now exists proof that SOUND waves in space play a role in how the creation/universe evolved. NASA and their toy WMAP help prove it. The results being provided by WMAP suggest mainstream science/astronomy needs to update their hard drives. And WMAP also suggests the universe is finite? WMAP's Top Ten
As the title suggests, a new species of teleost has been found. It was collected, “sort of” unearthed, from the sand bed of a small river in south-western India, and thus named “ammophila”, which means “sand loving”. This species is for now known only from this location, and grows to not more than 3 centimeters. It is the tiniest fish that I have ever seen*, and were it not for the authors of the study, Ralf Britz, Anvar Ali and Rajeev Raghavan, it would probably have stayed in the wilderness and not received this attention. Readers should recall that the lead author of this study is the same one who described the “smallest vertebrate”, Paedocypris progenetica, so this fish is rather “big” for him. This find calls our attention to some important points: 1. It is the fourth valid Pangio species from the Indian region, the congeners of which are all distributed in the South - South Eastern Asian region. 2. This species has a remarkably different colour pattern from the hitherto identified species of the genus, and is most similar to its geographically nearest species, Pangio goaensis. These two points lead our attention to the historical bio-geography of the region: how was the present distribution of animals, in particular fishes of South - South Eastern Asia, formed? The disjunct distribution of this species with its congeners in North Eastern India and South East Asia (a huge geographical barrier) is surprising. These authors also found Dario urops, which was described recently and which has a similar disjunct distribution. So these findings should help advance our understanding of the historical bio-geography of the region as well as the pangean and gondwanan connections of the Asian fauna. 3. Another issue that this species brings to the fore is conservation of fragile habitats. This location is the only place where the species is found and is thus important (it should also harbour other species).
The unprecedented economic growth in India, and especially in this region, means that indiscriminate sand mining occurs in this same stream. Imagine how many of these sand-loving eel-loaches would have been mined out before being noticed by the authors. How do we balance biodiversity conservation and economic growth? *Competing interest: I was part of the collection team which found this species and am a collaborator at the Conservation Research Group. Ralf Britz, Anvar Ali and Rajeev Raghavan (2012). Pangio ammophila, a new species of eel-loach from Karnataka, southern India (Teleostei: Cypriniformes: Cobitidae). Ichthyol. Explor. Freshwaters, 23 (1), 45-50.
Van Schooten's Ruler Constructions Solution to Problem V Problem V: Given an indefinitely long straight line AB and a point C on it, to draw a line CF which is perpendicular to the given straight line. Construction: Draw, as in the previous problem, any perpendicular DE above AB, and then, from C, by the third problem, draw a line CF parallel to that. It will be the one sought.
Hosted by The Math Forum Problem of the Week 1046 A Triumphal Arch Suppose you have a budget to build an arch using 110 bricks that you are to purchase. The arch is made simply from two vertical columns of bricks that go up to the same height. A lintel is then placed over the top, but that does not use any bricks and is not relevant to the problem. Each brick is either one or two units high, and their price is the same. How many ways are there to build such a pi-shaped arch? Again, bricks are used in the columns only, not for the lintel. Arches that are symmetric about the vertical center but are not identical are considered to be different. So, for example, with 4 bricks there would be exactly 3 arches. With 5 bricks there are 4 arches, illustrated below in horizontal fashion. That is, the first example below means that the left column is 2, then 1, then 1, and the right column has two 2s. Source: This problem -- finding a formula with 110 replaced by general n -- is a "live" problem from the American Mathematical Monthly, #11183 by David Beckwith, Nov. 2005 issue. © Copyright 2005 Stan Wagon. Reproduced with permission.
Introduction to Data Binding Microsoft Internet Explorer 4.0 and later enables content providers to develop data-centric Web applications that support retrieval and update through native data binding facilities. The use of HTML extensions and pluggable data source objects (DSOs) makes data-driven pages easy to author, with minimal scripting required. Because data is downloaded to the client asynchronously, pages render quickly and provide immediate interactivity. Once downloaded, the data can be sorted and filtered without requiring additional trips to the server. Compare that to traditional Web pages and those generated by server-side scripts. Once the data reaches the client, it's static, and any manipulation of that data requires another server request. Consider a simple example, such as the list of samples in the Internet Client software development kit (SDK). While the list could be hard-coded into this text, a better solution is to store the data external to the document, in a delimited text file. If the samples change, only the data in the database needs to be modified. Once the modifications are made, this page and any others that display that data will reflect the changes. While this maintenance issue justifies the use of data binding in place of a static page, consider a slightly more complex scenario, such as an online classical music catalog maintained on a server in a relational database management system. Using today's server-side technologies, such as the Common Gateway Interface (CGI), the Internet Server Application Programming Interface (ISAPI), or one of its derivatives, such as Active Server Pages (ASP), a developer can write a script to extract a subset of data based on criteria specified by the client, such as all titles by a particular composer. 
The script might generate and return to the client a Web page that includes a tabular view of this data, including the title, the year composed, the type of composition, and a list of the movements in the composition. While the page can contain the information in date order, to view the compositions in alphabetical order or to limit the view to orchestral works, or fugues, the user is forced to submit additional requests to the server. This is a costly way to view the same data in only a slightly different form. The problem is compounded if the Web author decides to present each composition in single record view, that is, one record per page. A full round-trip to the server is required to obtain each subsequent record. If Web authors want to allow the user to add or update the data, they typically create HTML forms and submit GET and POST requests to an application running on the server. The server application parses the data from an HTTP data stream and then logs the data to a database. For every update request, the page containing the Submit button and the form data is reloaded. This is not a seamless experience for the user. The following sections show how the various components in the data binding architecture offer solutions to these problems and make the most efficient use of the Web.
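As a sketch of what this looks like in practice, the fragment below binds a table to the Tabular Data Control, one of the DSOs supplied with Internet Explorer 4.0 (the file name samples.csv and the field names are illustrative):

```html
<!-- Data source object: loads a delimited text file on the client -->
<object id="samples" classid="clsid:333C7BC4-460F-11D0-BC04-0080C7055A83">
  <param name="DataURL" value="samples.csv">
  <param name="UseHeader" value="true">
</object>

<!-- Bound table: the template row is repeated for each record -->
<table datasrc="#samples">
  <tr>
    <td><span datafld="Title"></span></td>
    <td><span datafld="Year"></span></td>
  </tr>
</table>
```

Once the data has been downloaded, sorting and filtering are handled on the client through the DSO, with no further trips to the server.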
<urn:uuid:037a7ad5-6b5b-4ab6-8060-c5ff224e2068>
2.953125
612
Documentation
Software Dev.
33.157077
Epitaxial graphene on metals
Graphene (an atomic layer of carbon arranged in a honeycomb lattice) may be prepared in epitaxy on metal surfaces. First studies date back to the 1960s, and very high quality graphene may be prepared on metals. Such samples are well suited to the study of graphene's properties with the help of surface science techniques. In cases where graphene is weakly coupled to its metallic substrate, it is possible to investigate some of the intrinsic properties of the material. The opposite point of view consists in manipulating the properties of graphene through more or less strong coupling with a metal. The isolation of graphene on a sacrificial substrate (chemically etched following graphene growth) has been known since the 1960s. This approach has attracted renewed interest in recent years as a route to large-area graphene of reasonably good quality. This is of use for fundamental studies (devices are easily prepared this way) as well as for application purposes (transparent conductive electrodes for photovoltaics or displays, or membranes for DNA translocation, for instance). We prepare graphene layers on various substrates, either single crystals or thin films on wafers, via catalytic thermal decomposition of carbon-containing molecules (usually referred to as CVD). We study the growth, structure (for instance superstructures, cf. simulation of X-ray diffraction data below, or defects), and interaction between graphene and its substrate.
<urn:uuid:63546d73-69b1-4d8d-87eb-2d07c78d27a3>
3.203125
296
Academic Writing
Science & Tech.
24.632056
In physics, a wave is a means of energy transport. Mechanical waves require an environment to travel through, called a medium. The medium's density, elasticity and temperature determine the wave's velocity: matter with the most elasticity, least density and highest temperature carries the fastest wave. Transverse waves travel perpendicular to the disturbance. For example, if the end of a slinky is moved up and down, a transverse wave is created: although the disturbance is vertical, the wave travels horizontally along the slinky. Transverse waves have crests (points where the slinky rises above its resting position) and troughs (points where it dips below its resting position). A full transverse wave consists of one crest and one trough. Longitudinal waves travel parallel to the disturbance - for example, sound waves. Compressions and rarefactions are to longitudinal waves as crests and troughs are to transverse waves, thus a full longitudinal wave consists of one compression and one rarefaction. Compressions are areas of high pressure, and rarefactions are areas of low pressure. Surface waves are transverse waves and longitudinal waves mixed in one medium. Electromagnetic waves are waves that do not require a medium. Matter waves are associated with moving particles such as electrons. When two or more waves meet, the result depends on their displacements. If the displacements are in the same direction (i.e. two crests or two troughs), a larger amplitude results, called constructive interference. If the displacements are in opposite directions, a smaller amplitude results, called destructive interference. The amplitude of a wave is the height of the displacement. The velocity of a wave is how quickly the wave travels. The frequency of a wave is how many waves pass a point in one second, and conversely, the period of a wave is how many seconds one full wave takes to pass.
The period is the inverse of the frequency and vice versa. The wavelength (represented by the Greek symbol lambda - λ) is the length of one wave in meters. These quantities are related by the formulas v = fλ and T = 1/f, and all their variations.
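A quick sketch of these relationships in Python (the 440 Hz tone in room-temperature air is an assumed example, not a value from the text):

```python
# Illustrative check of the wave formulas v = f * lam and T = 1 / f.
# The 440 Hz tone and 343 m/s sound speed are assumed example values.
speed = 343.0        # wave velocity v in m/s (sound in air at ~20 C)
frequency = 440.0    # f in hertz: waves passing a point per second

wavelength = speed / frequency   # lam = v / f, meters per wave
period = 1.0 / frequency         # T = 1 / f, seconds per wave

print(f"wavelength = {wavelength:.3f} m")   # 0.780 m
print(f"period     = {period:.6f} s")       # 0.002273 s
```

Doubling the frequency halves both the wavelength and the period, as the formulas require.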
<urn:uuid:c0384e7e-a7cc-4b9e-b51a-9459218d8d23>
4.28125
474
Knowledge Article
Science & Tech.
34.227153
In the default mode of operation, each character that is typed is inserted into the text, with the existing characters being shifted as appropriate. In overwrite mode, each character that is typed deletes an existing character in the text. When in overwrite mode, a character can be inserted without deleting an existing character by preceding it with

Switches overwrite mode on if it is currently off, and off if it is currently on. With a positive prefix argument, overwrite mode is turned on. With a zero or negative prefix argument it is turned off. Using prefix arguments with disregards the current state of the mode.

Key sequence: key
If the current point is in the middle of a line, the next character (that is, the character that is highlighted by the cursor) is replaced with the last character typed. If the current point is at the end of a line, the new character is inserted without removing any other character. A prefix argument causes the new character to overwrite the relevant number of characters. This is the command that is invoked when each character is typed in overwrite mode. There is no need for users to invoke this command explicitly.

Overwrite Delete Previous Character
Key sequence: None
Replaces the previous character with space, except that tabs and newlines are deleted.
<urn:uuid:d39b34db-bdfd-4703-8ef0-1c816aa3125c>
3.453125
264
Documentation
Software Dev.
37.869394
American biochemist who, with John E. Walker, was awarded the Nobel Prize for Chemistry in 1997 for their explanation of the enzymatic process involved in the production of the energy-storage molecule adenosine triphosphate (ATP), which fuels the metabolic processes of the cells of all living things. (Danish chemist Jens C. Skou also shared the award for separate research on the molecule.) In the early 1950s Boyer began to research how cells form ATP, a process that occurs in animal cells in a structure called a mitochondrion. In 1961 the British chemist Peter D. Mitchell showed that the energy required to make ATP is supplied as hydrogen ions flow across the mitochondrial membrane down their concentration gradient in an energy-producing direction. (For this work Mitchell won the 1978 Nobel Prize for Chemistry.) Boyer's more recent research revealed more specifically what is involved in ATP synthesis. His work focused on the enzyme ATP synthase, and he demonstrated how the enzyme harnesses the energy produced by the hydrogen flow to form ATP out of adenosine diphosphate (ADP) and inorganic phosphate. Boyer postulated an unusual mechanism to explain the way in which ATP synthase functions. Known as his "binding change mechanism," it was partially confirmed by Walker's research.
<urn:uuid:dc7ade13-0f7c-455a-a8d1-627f5025959c>
3.875
349
Knowledge Article
Science & Tech.
42.315882
A Mutualistic Root Ecosystem article - botany, plant physiology (Jan/14/2008) The rhizosphere is a very thin layer of 'mucus' surrounding plant roots, containing bacteria and fungal hyphae mixed in a soup of root excretions that provide nutrients for the beneficial microorganisms. The bacteria and fungi give the plant other nutrients (like nitrogen) and protection from infection (viruses and nematodes) in exchange for plant nutrients. Here's an article that discusses this: Is this publicity for some article you published? Did anyone ask for some info on this matter?
<urn:uuid:6496fd27-1b29-4bab-b078-3853e61065a0>
3.03125
125
Comment Section
Science & Tech.
30.260239
How does a spacecraft change course? In order to know where a ship is, NASA needs to know two things: how far it is from Earth and its location in space. Generally, NASA uses the downlink, or radio signal from a spacecraft to a radio telescope in the DSN, to tell where it is. The distance between Earth and the ship is measured by sending up a radio signal from Earth with a time code on it. The spacecraft "bounces" back the signal, and people on the ground can see how long it took to travel from Earth to the ship and back. Since all radio waves travel at the speed of light, scientists can look at how long it took for the signal to make it to the ship and back and figure out the distance it traveled. The angle that the radio telescope is pointing when it receives the signal tells the direction of the ship. A more precise way of measuring uses two radio telescopes. When a ship is in space, it sends a signal back to Earth. Three times a day, this signal can be received by two different DSN radio telescopes at once. They can compare how far the ship is from each telescope. They then get the distance to a known object in space that doesn't change its location, like a pulsar (pulsing star), and from the three locations (two telescopes and a pulsar) they can use a technique called triangulation to get the ship's location. Some spacecraft, like DS1, can use asteroids and other objects in space to figure out where they are. Using a process called Optical Navigation or OpNav, pictures are taken of particular asteroids. The asteroids' locations relative to the spacecraft are used to determine position, and the position is compared to where the ship should be. At that point the ship can do a course correction. OpNav needs at least three objects to compare and uses triangulation to figure out a ship's location.
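The ranging arithmetic described above amounts to one line of code (a sketch; the 2000-second round trip is an invented example value, not one from the text):

```python
# Distance from round-trip light time: the signal travels out and back
# at the speed of light, so one-way distance = c * round_trip / 2.
C_KM_S = 299_792.458     # speed of light in km/s

def distance_km(round_trip_s):
    """One-way distance to a spacecraft whose time-coded signal
    took round_trip_s seconds to bounce back to the DSN."""
    return C_KM_S * round_trip_s / 2.0

# An assumed 2000-second round trip puts the craft ~3.0e8 km away,
# roughly twice the Earth-Sun distance.
print(f"{distance_km(2000.0):.4e} km")   # 2.9979e+08 km
```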
<urn:uuid:b46c48ff-3399-4445-a292-5ef8ef4166b8>
4.34375
475
Tutorial
Science & Tech.
63.957257
Aluminum 'nanometal' is strong as steel Raleigh, N.C. (UPI) Sep 8, 2010 U.S. researchers say they've learned how to make an aluminum alloy -- a mixture of aluminum and other elements -- that's as strong as steel. North Carolina State University scientists say the search for ever lighter yet stronger materials is important for everything from more fuel-efficient cars to safer airplanes. Yuntian Zhu, professor of materials science at NC State, says nanoscale architecture within the new aluminum alloys gives them unprecedented strength but also reasonable plasticity to stretch and not break under stress, a university release reports. The new aluminum alloys have unique structural elements called "grains," each a tiny crystal less than 100 nanometers in size, that make them super-strong and ductile, Zhu says. Bigger is not better in materials, he says, as smaller grains result in stronger materials. The technique of creating these nanostructures can be used on many different types of metals, Zhu says. He says he is working on strengthening magnesium, a metal even lighter than aluminum, and is working with the Department of Defense to make magnesium alloys strong enough to be used as body armor for soldiers.
<urn:uuid:7ecd9ea4-fccf-4578-b65a-8a683d9c4403>
3.046875
475
Truncated
Science & Tech.
33.165922
The Role of Macroalgal Species as Bio-indicators of Water Quality in Bermudian Karstic Cave Pools Bermuda consists of a series of mid-ocean islands located 1000 km (600 miles) off the eastern coast of the United States. Geologically, the islands are composed of highly cavernous limestone overlying a volcanic seamount. Due to this foundation, the islands host numerous karstic caves with inland pools. This study will determine if the various algal species present in inland pools at the entrance to many Bermuda caves can serve as bio-indicators for groundwater quality. Initially, floral surveys of submerged and intertidal algae will be conducted at eleven different sites to determine the diversity present. More detailed experiments will be carried out on one to two species of algae at six of the sites. These experiments will measure productivity and respiration rates as well as determine the effects of nutrient enrichment on growth. Algae are autotrophic organisms at the bottom of the trophic food web. They rely directly on nutrients in their environment for growth and survival. Due to this relationship, they are very sensitive to pollution and nutrient enrichment and show rapid and quantifiable morphological changes. The working hypothesis is that entrance cave pools with lower water quality and higher nutrient loading exhibit different algal distributions and show higher primary productivity and growth characteristics than cave pools with high water quality and little or no nutrient loading relative to background levels. By determining whether there is a correlation between the presence and distribution of different algal species in the cave pools and the water quality, environmental health in the various inland cave pools and in the groundwater can be monitored. This research will add to the knowledge of karst ecosystems and their interconnections. All acquired data will be provided to the Bermuda Biodiversity Project (BBP) of the Bermuda Aquarium, Museum, and Zoo.
This research is intended for use in conservation efforts to preserve and protect the cave systems and the flora and fauna that inhabit this unique environment.
<urn:uuid:ef276e49-2e7e-42e6-8f2c-0ca263c9116a>
3.3125
398
Academic Writing
Science & Tech.
23.748385
How might one predict the arrangement of charges on a conductor if the symmetry arguments that make Gauss's Law so useful are not available? On the left, you'll find the challenge: Calculate the negative (cyan) charge-density σ induced on a grounded sphere of radius R when a positive (red) point charge q is a distance r from the sphere's center. One possible method is illustrated on the right, where you'll find depicted the image-charge q' = -q(R/r) at radius r' = R²/r which sets the electric potential at each point on that sphere to zero in the presence of the positive charge. Notice in the animation that the two charges always lie on a common radial line to the center of the sphere, and that the image-charge becomes stronger (brighter cyan) as the red charge approaches the sphere. To figure out how charges are distributed on the conductor at left, the electric field at right from the positive charge q and its negative image-charge q' were calculated on the sphere surface using E = kqr/r³ + kq'r'/r'³. From this the electric field and induced charge-density σ = -ε₀E on the sphere's surface at left was inferred, given that its electric potential is also held to zero since it is grounded. This inference works thanks to the uniqueness of electrostatic field and potential solutions for any enclosed volume when, along with the position of charges within, the electrostatic potential is specified everywhere on its surface. Questions: What are the surfaces that bound the "enclosed volume" referred to in the above application example? How would you solve this same problem if the sphere at left were neutral and electrically isolated? (Note: Log color animations of grounded and neutral spheres are provided below for comparison.) For example, would a third charge inside the transparent sphere model (above right) help? If so, where would you put it and how much charge would it have?
Does the image-charge method work only in the electrostatic limit, where rates of movement of the red charge can be ignored? How would you take into account the magnetic effects of that movement? The radiation effects? What would you do if the sphere were a cube or a tetrahedron? Are there robust numerical platforms for doing these calculations which would allow one to deal with arbitrary geometries as well as symmetric solids? Why might calculations like this be useful e.g. in studies of global warming, for designers of electromagnetic shielding and communications, or in video game physics engines like that by Havok? This page is hosted by UM-StL Physics and Astronomy. Thanks to Eric Mandell for the suggestion to display the image-charge model in parallel with the charge density on the sphere, and Ricardo Flores for mention of that radiation applet. What measurements from these simulations can you make as an experimentalist, for comparison to quantitative model predictions? The person responsible for mistakes is P. Fraundorf.
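As a sanity check on the recipe above, here is a minimal numerical sketch (the geometry and values R = 1 m, r = 2R, q = 1 nC are illustrative assumptions, not values from the page): it evaluates σ on the grounded sphere from the charge and its image, then integrates over the surface, where Gauss's law says the total induced charge must equal the image charge q' = -q(R/r).

```python
import math

# Assumed setup: grounded sphere of radius R at the origin,
# point charge q on the z-axis at distance r from the center.
k = 8.9875517923e9           # Coulomb constant, N m^2/C^2
eps0 = 8.8541878128e-12      # vacuum permittivity, C^2/(N m^2)
R, r, q = 1.0, 2.0, 1e-9
q_img = -q * R / r           # image charge q' = -q(R/r)
z_img = R * R / r            # image position r' = R^2/r

def sigma(theta):
    """Induced charge density at polar angle theta from the charge axis."""
    sx, sz = R * math.sin(theta), R * math.cos(theta)   # surface point
    nx, nz = sx / R, sz / R                             # outward normal
    def normal_field(qc, zc):
        # radial component of a point charge's field at the surface point
        dx, dz = sx, sz - zc
        d3 = (dx * dx + dz * dz) ** 1.5
        return k * qc * (dx * nx + dz * nz) / d3
    # sigma = eps0 * (normal component of E just outside the conductor)
    return eps0 * (normal_field(q, r) + normal_field(q_img, z_img))

# Integrate sigma over the sphere (midpoint rule in theta): the total
# induced charge should come out equal to the image charge q'.
n = 20000
dth = math.pi / n
total = sum(sigma((i + 0.5) * dth) * 2 * math.pi * R * R
            * math.sin((i + 0.5) * dth) * dth for i in range(n))
print(total / q_img)   # very close to 1.0
```

The density is most strongly negative at the point nearest the red charge, just as the brightest cyan in the animation suggests.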
<urn:uuid:00f8af3a-4d37-46a2-8e68-486f5e0bba38>
3.875
616
Q&A Forum
Science & Tech.
42.792604
Cosmic gamma ray bursts (GRBs) were discovered by accident in the late 1960's by satellites designed to detect gamma rays produced by atomic bomb tests on Earth. The GRBs appear first as a brilliant flash of gamma rays that rises and falls in a matter of minutes. These bursts are often followed by afterglows at X-ray, optical and radio wavelengths. A major leap forward in understanding the source of cosmic GRBs was made when the Burst and Transient Source Experiment (BATSE) was launched aboard the Compton Gamma Ray Observatory in 1991. BATSE had an all-sky monitor that was capable of detecting a GRB virtually anywhere in the sky. Over a period of 9 years BATSE recorded thousands of GRBs, about 1 per day. Among other things, these results showed that the bursts occurred at random all over the sky. If the bursts were associated with objects in our Milky Way Galaxy, they would not show such a uniform distribution. Rather, they would be concentrated along the plane of our galaxy like most of the matter in the Milky Way. The BATSE data was so good that it allowed astronomers to also rule out the possibility that the GRBs might be originating in the halo of our galaxy. In 1997, astronomers were able to use the BeppoSAX satellite to refine the location of several GRBs by observing their X-ray afterglow. Then the Hubble Space Telescope and other optical telescopes were used to study the optical afterglow of the GRBs and were able to precisely locate them in galaxies billions of light years from Earth. At such great distances, a GRB must produce enormous amounts of energy. At their peak, which lasts only a few seconds, they have a power output that is comparable to that of all the galaxies in the universe! The source of this tremendous energy is unknown. Astronomers have developed a model – the fireball model – that explains the time variation of the bursts, and the shift of the peak radiation to progressively lower energies, reasonably well.
The model involves matter moving at near the speed of light that collides with other material in the vicinity. What is the source of this rapidly moving matter? Theories include the merging of neutron stars, or black holes, or the collapse of an extremely massive star to produce what has been called a hypernova. In one variation of this model, the collapsed core forms a spinning black hole. As surrounding material falls toward this black hole, intense beams of high energy particles and neutrinos eject matter at nearly the speed of light. It is this matter that produces the gamma ray fireball. X-ray observatories such as Chandra should help to solve the mystery of gamma ray bursts. By studying the X-ray afterglow, they can measure the amount of gas in the vicinity of the burst, and tell which elements are present. This should help to pin down which theory is correct. For example, the Chandra observation of GRB991216 provides evidence for a large, iron-rich cloud moving away from the site of the burst. Although much more work needs to be done, this observation would appear to support the hypernova model.
<urn:uuid:b65c88d2-df62-4a18-8fcb-48407da4b20f>
4.625
641
Knowledge Article
Science & Tech.
48.641074
What will happen when the fragments hit Jupiter? The impacts will be centered on July 19, 1994; the first is expected late at night on July 16, with an impact, on average, about every six hours. All the fragments will enter Jupiter's atmosphere at an angle of 42 degrees from the vertical and impact near a latitude of 44 degrees south, but on the back side of the planet as seen from Earth (about 10 degrees in longitude behind the edge of Jupiter, as seen from Earth). However, because Jupiter spins so rapidly (a day on Jupiter lasts only 9 hours 50 minutes), the sites will rotate into view from Earth within about 20 minutes of each impact. Exactly what will happen as the fragments enter the atmosphere of Jupiter is very uncertain, though there are many predictions. Any body moving through an atmosphere is slowed by atmospheric drag, by having to push the molecules of that atmosphere out of the way. The kinetic energy (energy of motion) lost by the body is given to the air molecules. They move a bit faster (become hotter) and in turn heat the moving body. The drag increases roughly as the square of the velocity. In any medium, a velocity is finally reached at which the atmospheric molecules can no longer move out of the way fast enough and they begin to pile up in front of the moving body. This is the speed of sound (Mach 1 -- 331.7 meters/second or 741 mph in air on Earth at sea level). A discontinuity in velocity and pressure is created which is called a shock wave. Comet Shoemaker-Levy 9 will enter Jupiter's atmosphere at about 60 kilometers per second, which would be about 180 times the speed of sound on Earth (Mach 180!) and is about 50 times the speed of sound even in Jupiter's very light, largely hydrogen atmosphere. At high supersonic velocities (much greater than Mach 1) enough energy is transferred to an intruding body that it becomes incandescent and molecular bonds begin to break. The temperature may rise to 50,000 kelvin (90,000 degrees Fahrenheit)
or more for very large bodies such as the fragments of Shoemaker-Levy 9. The effect of increasing temperature, pressure and vibration on an intrinsically weak body is to crush it and cause it to flatten and spread. Meanwhile the atmosphere is also increasing in density as the comet penetrates to lower altitudes. All of these processes occur at an ever increasing rate. The net result is that the fragile Shoemaker-Levy 9 fragments will suffer almost immediate destruction. The only real question is whether each fragment will break into several pieces immediately after entry, and therefore exhibit multiple smaller explosions, or whether it will survive long enough to be crushed, flattened and obliterated in one grand explosion and terminal fireball. Astronomer Zdenek Sekanina, of the Jet Propulsion Laboratory in Pasadena, California, calculates that about 93 percent of the mass of a 10¹³-kilogram fragment, still moving at almost 60 kilometers per second, remains one second before the terminal explosion. During that last second, the energy of perhaps 10,000 100-megaton bombs is released. Much of the cometary material will be heated to many tens of thousands of degrees, vaporized, and ionized along with a substantial amount of Jupiter's surrounding atmosphere. The resulting fireball should balloon upward, even fountaining clear out of the atmosphere, before falling back and spreading out into Jupiter's atmosphere, imitating in a non-nuclear fashion some of the atmospheric hydrogen bomb tests of the 1950s. One of the more difficult questions to answer is just how bright these explosions will be. Sekanina calculates that a 10¹³-kilogram fragment, a reasonable value for the largest piece, will reach an apparent visual magnitude of -10 during the terminal explosion. This is 1,000 times Jupiter's normal brilliance and only 10 times fainter than the full Moon! However, Sekanina calculates that the explosions will occur above the clouds.
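The quoted energy figure is easy to sanity-check with the kinetic-energy formula KE = ½mv² (a rough sketch: the mass and entry speed are the values quoted above, and the 4.184×10¹⁵ J-per-megaton figure is the standard TNT-equivalent conversion):

```python
# Rough check of "the energy of perhaps 10,000 100-megaton bombs"
# released by a 10^13 kg fragment entering at ~60 km/s.
mass = 1e13                    # fragment mass, kg (from the text)
speed = 60_000.0               # entry speed, m/s (~60 km/s)
J_PER_MEGATON = 4.184e15       # standard TNT-equivalent conversion

kinetic_energy = 0.5 * mass * speed ** 2       # 1.8e22 J
bombs = kinetic_energy / (100 * J_PER_MEGATON)
print(f"{bombs:,.0f} hundred-megaton bombs")   # on the order of 10,000
```

The result lands in the tens of thousands, consistent with the article's "perhaps 10,000" order-of-magnitude estimate.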
There is much controversy as to exactly how deep into the atmosphere the fragments will penetrate before exploding, with other astronomers arguing that the fragments will explode beneath the visible clouds. The brightness of explosions occurring below the clouds would be attenuated by a factor of at least 10,000, making them most difficult to observe.

[Figure: Jupiter-centered final orbit of Shoemaker-Levy 9 as viewed from the Sun. (Courtesy P.W. Chodas)]

The fireball created by the terminal explosion will spew vaporized comet material to very high altitudes as it expands and balloons upward. It may carry with it atmospheric gases that are normally to be found only far below Jupiter's visible clouds. Hence the impacts may give astronomers opportunity to detect gases which have been hitherto hidden from view. As the gaseous fireball rises and expands it will cool, with some of the gases it contains condensing into liquid droplets or small solid particles. If a sufficiently large number of particles form, then the clouds they produce may be visible from Earth-based telescopes after the impact regions rotate onto the visible side of the planet. These clouds may provide the clearest indication of the impact locations after each event. Large regular fluctuations of atmospheric temperature and pressure will be created by the shock front of each entering fragment and travel outward from the impact sites, somewhat analogous to the ripples created when a pebble is tossed into a pond. These may be observable near layers of existing clouds in the same way that regular cloud patterns are seen on the leeward side of the mountains. Jupiter's atmosphere will be sequentially raised and lowered, creating a pattern of alternating cloudy areas where ammonia gas freezes into particles (the same way that water condenses into cloud droplets in our own atmosphere) and clear areas where the ice particles warm up and evaporate back into the gas phase.
Whether or not these "wave" clouds appear, the ripples spreading from the impact sites will produce a wave structure in the temperature at a given level that may be observable in infrared (or thermal) maps. In addition there should be compression waves, alternate compression and rarefaction in the atmospheric pressure, which could reflect and refract within the deeper atmosphere, much as seismic waves reflect and refract due to density changes inside Earth. The phenomena directly associated with each impact from entry trail to rising fireball will last perhaps three minutes. The fallback of ejecta over a radius of a few thousand kilometers will last for about three hours. Seismic waves from each impact might be detectable for a day, and atmospheric waves for several days. Vortices and atmospheric hazes could conceivably persist for weeks. New material injected into the Jovian ring system might be detectable for years. Changes in the magnetosphere (Jupiter's magnetic field is much stronger than that of Earth and affects an area of space tens of millions of kilometers from the planet) and/or the Io torus (particles ejected from Io's volcanoes are ionized and trapped by Jupiter's magnetic field into a donut-shaped torus completely circling the planet) caused by the sudden influx of large amounts of cometary dust might also persist for some weeks or months. There is the potential to keep planetary observers busy for a long time!
<urn:uuid:51434d67-064e-4483-afe4-9d0e4833df76>
3.921875
1,495
Knowledge Article
Science & Tech.
39.471514
As I discussed in a past posting on uranium enrichment, the uranium we dig out of the ground is unable to sustain a nuclear chain reaction without a lot of coaxing – it has to be immersed in heavy water or surrounded by graphite as opposed to natural (light) water. This is why we enrich uranium – boost the amount of fissionable U-235 from the natural abundance of 0.72% to a richer 3% or higher and we can make a nuclear reactor; raise it to the point where 90% of the atoms are U-235 and we can make a bomb. This is why most nations stick with enriching uranium to 20% – high enough to produce a useful number of neutrons in the core (more on this in a moment) but not enough to explode. And this is one reason why the United States and Russia are both working to replace highly enriched fuel in research reactors (there are 82 around the world at the moment) with less dangerous stuff. As an aside – under the Non-Proliferation Treaty, non-nuclear weapons powers are not permitted to enrich uranium beyond 20% U-235 for weapons production, but a loophole in the treaty permits high-enrichment uranium for military reactors and for civilian purposes. So (you might wonder), if a reactor can sustain criticality with only 3% U-235 then why would we even want to go to the extra work to make reactor fuel potent enough to explode? And it’s not only research reactors, by the way, that rely on such highly enriched uranium – the reactors on American nuclear-powered naval vessels are fueled with weapons-grade uranium (although other nations have been moving away from HEU fuel). There are a few reasons, actually, depending on the use to which the reactor will be put. On a military vessel, for example, the reactor has to be compact enough to fit inside a submarine or ship hull. Since each fissioning uranium atom releases the same amount of energy, cramming more fissioning atoms into the same volume means that the reactor produces more power. 
If my submarine reactor had used commercial-grade fuel then the reactor would have been far too large (or far too wimpy). So in our case the reason for such high-powered fuel was space considerations. A side benefit was that our core was longer-lived than would otherwise have been the case. As the U-235 atoms fission they are lost to the core; as they are used up the U-235 enrichment necessarily drops. When it drops too far the reactor becomes less suited for combat operations for a number of reasons – packing more U-235 into the core means that the reactor will last longer before it needs to be refueled. So running weapons-grade uranium meant that our core lasted over a dozen years before it was replaced – compare this to the typical 18 months or so between refuelings at the typical commercial reactor. Both of these are good reasons, but neither really applies to a university-based research reactor (or an industrial isotope production reactor). Universities are not as space-conscious as submarines and a reactor that operates only intermittently doesn't really have the longevity demands of a military plant – so why run on weapons-grade uranium? The main reason comes down to neutron flux – if each fission produces 2-3 neutrons then packing more fissionable atoms into the same volume means that the number of neutrons in each volume of the core (the neutron density) will be higher than with lower-enriched fuel. And if the purpose of the reactor is to produce, say, radionuclides for medical or for research purposes (cobalt-60 is produced when a stable cobalt-59 atom captures a neutron) then a higher neutron density means a higher rate of isotope production. A reactor fueled with weapons-grade uranium produces more radionuclides at a faster rate than one with lesser concentrations. This is why we once made civilian reactors fueled with HEU.
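The flux argument can be made concrete with a little arithmetic (a sketch using illustrative enrichment levels of my choosing, with enrichment taken as a simple mass fraction):

```python
# Why enrichment drives neutron flux: count the U-235 atoms packed
# into each kilogram of uranium at several enrichment levels.
AVOGADRO = 6.02214076e23
U235_MOLAR_MASS = 235.04       # g/mol

def u235_atoms_per_kg(enrichment):
    """U-235 atoms per kg of uranium at the given mass-fraction enrichment."""
    grams_u235 = 1000.0 * enrichment
    return grams_u235 / U235_MOLAR_MASS * AVOGADRO

for label, e in [("commercial ~3%", 0.03),
                 ("research-reactor 20%", 0.20),
                 ("weapons-grade 90%", 0.90)]:
    print(f"{label:22s} {u235_atoms_per_kg(e):.2e} atoms/kg")
```

Weapons-grade fuel packs thirty times the fissionable atoms of 3% fuel into the same mass, and so proportionally more fissions, and neutrons, per unit volume.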
In the 1950s and 1960s – before the Non-Proliferation Treaty – the United States and Soviet Union both built research reactors for a number of nations, hoping to win followers during the Cold War, and many of these reactors contained weapons-grade uranium. Today this seems sort of silly – spreading highly enriched uranium around the world – but at the time, in a world that was more or less stabilized by the Cold War and in which fears of terrorism did not include weapons of mass destruction, it sort-of made sense. The problem is that today, with the Cold War over and terrorism (not to mention wanna-be nuclear states) on the rise, we’ve got to address the problem. There are a few potential snags. One is that reactors fueled with less-enriched uranium have a much lower neutron flux and it simply takes a bit longer to get the same amount of neutron activation than with an HEU-fueled reactor. This is not a show-stopper so much as an inconvenience, but it must be acknowledged. Another is that, for scientists running experiments that require a long-duration irradiation at very high neutron flux, there may be few (if any) viable alternatives. But there are few experiments that really have both of these requirements so this affects few (if any) active research programs. The other snag is that refueling reactors is neither simple nor cheap. The irradiated fuel is chock full of radioactive fission products and must be handled carefully to avoid any harm to the staff engaged in the work. It also has to be kept shielded and cooled (not to mention secure) during transport, and then it has to be placed into storage or recycled at the end of its journey. Not to mention refueling the reactor with fresh low-enriched uranium if the reactor is still to be used. And that doesn’t even get into the details of modifying the reactor’s operating license, re-training staff, and so forth! 
There’s not a step in there that’s cheap or easy – but it’s certainly better than the alternative, which is why the US and Russia have been engaged in refueling or defueling this part of their Cold War legacy. As with my sons’ room, cleaning up this particular mess is neither exciting nor easy – but it needs to be done.
Source: Science Daily May 23, 2012 "Most people are fascinated by the colorful and exotic coral reefs, which form habitats with probably the largest biodiversity. But human civilization is the top danger to these fragile ecosystems through climate change, oxygen depletion and ocean acidification. Industrialization, deforestation and intensive farming in coastal areas are changing dramatically the conditions for life in the oceans. Now scientists at the Max Planck Institute for Marine Microbiology from Bremen together with their colleagues from Australia, Sultanate of Oman and Italy have investigated how and why the corals die when exposed to sedimentation. According to their findings, oxygen depletion, together with an acidification of the environment, creates a chain reaction that leads to coral death. Reef forming stone corals inhabit the light-flooded tropical shallow coastal regions 30 degree south and north of the equator. Coral polyps build the carbonate skeletons that form the extensive reefs over hundreds to thousands of years. Photosynthesis of the symbiotic algae inside the polyps produces oxygen and carbohydrates from carbon dioxide and water, thereby feeding the polyps. Since the 1980s the process of coral bleaching is under study: elevated temperatures of 1 to 3 degrees induce the algae to produce toxins. The polyps react by expelling the algae and the coral reef loses its color as if it was bleached. Without its symbionts the coral can survive only several weeks." To read the full text of the article, click here .
Sunday, Jan 15, 2012 - 2:30 AM: "Earth's Changing Climate." Tropical glaciers are the world's thermometers; their melting is a signal that human activities are warming the planet. A California project tries to predict whether natural ecosystems will be able to absorb enough additional carbon dioxide from the atmosphere in the next 50 years to mitigate the full impact of human-induced greenhouse gas emissions.
replace string in a file
josef.pktd at gmail.com
Mon Mar 17 17:03:07 CET 2008

On Mar 16, 10:35 pm, sturlamolden <sturlamol... at yahoo.no> wrote:
> On 15 Mar, 21:54, Unknown <cantabile... at wanadoo.fr> wrote:
> > I was expecting to replace the old value (serial) with the new one
> > (todayVal). Instead, this code *adds* another line below the one found...
> > How can I just replace it?
>
> A file is a stream of bytes, not a list of lines. You can't just
> replace a line with another, unless they have the exact same length.
> You must rewrite the whole file to get it right.

An example: looks for all 'junk*.txt' files in the current directory and
replaces in each line the string 'old' with the string 'new':

    import os, glob, fileinput

    allfiles = glob.glob(os.path.join(os.getcwd(), 'junk*.txt'))  # absolute paths
    findstr = 'old'
    replstr = 'new'
    countlinesfound = 0

    for line in fileinput.input(allfiles, inplace=1):
        if line.find(findstr) != -1:
            line = line.replace(findstr, replstr)
            countlinesfound += 1
        print line,  # with inplace=1, this writes the line back to the file

I found something similar in a tutorial when I started to learn Python,
but I don't remember which one.

More information about the Python-list
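The quoted reply's advice, that you must rewrite the whole file, can also be sketched without fileinput. A minimal Python 3 version (the filename 'junk1.txt' is a throwaway example created just for the demo) reads the file into memory and writes it back with the substitution:

```python
# Python 3 sketch of the "rewrite the whole file" approach.
# 'junk1.txt' is a throwaway example file created just for the demo.

def replace_in_file(path, old, new):
    with open(path) as f:
        text = f.read()                      # slurp the entire file
    with open(path, 'w') as f:
        f.write(text.replace(old, new))      # write it back, substituted

# demo: create a small file, then replace 'old' with 'new' in place
with open('junk1.txt', 'w') as f:
    f.write('serial = old\n')
replace_in_file('junk1.txt', 'old', 'new')
```

This is fine for files that fit in memory; for very large files, the safer pattern is to write to a temporary file and rename it over the original.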
The Amazing Unit Circle
Reference Angles in Quadrant III

Reference angles are used to determine the values of the trigonometric functions in the second, third and fourth quadrants, in particular, for the "nice" angles. The reference angle for an angle θ is the smallest angle φ from the (positive or negative) x-axis to the terminal ray of the angle θ. For an angle θ in the third quadrant the reference angle φ is the angle that must be subtracted from θ to leave a straight angle, that is, π radians or 180°. Thus θ - φ = π or θ - φ = 180°, and so φ = θ - π or φ = θ - 180°.

Next plot the reference angle φ in the first quadrant, that is, in standard position. We see that the point (cos θ, sin θ) is on the opposite side of the unit circle from the point (cos φ, sin φ). The x- and y-coordinates of these two points have opposite signs. Thus, for θ in Quadrant III: cos θ = -cos φ and sin θ = -sin φ.

Conclusion: to compute the value of cosine and sine of an angle θ in the third quadrant, find the value of the function at the reference angle φ and then attach the correct sign (- for both cosine and sine in Quadrant III). The method also works for the other trigonometric functions. For example, tan θ = sin θ/cos θ = (-sin φ)/(-cos φ) = sin φ/cos φ = tan φ in Quadrant III.
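The identities above are easy to check numerically. A small Python sketch using the standard math module takes θ = 4π/3, whose reference angle is φ = θ - π = π/3:

```python
import math

# Quadrant III check: theta = 4*pi/3 has reference angle phi = theta - pi = pi/3
theta = 4 * math.pi / 3
phi = theta - math.pi

print(math.cos(theta), -math.cos(phi))   # both approx -0.5
print(math.sin(theta), -math.sin(phi))   # both approx -sqrt(3)/2
print(math.tan(theta), math.tan(phi))    # both approx sqrt(3)
```

The printed pairs agree to floating-point precision, confirming cos θ = -cos φ, sin θ = -sin φ, and tan θ = tan φ for this Quadrant III angle.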
Did you know that the average human body has about 10 times as many microbial cells as human cells? AMAZING! Read this for more information. Tags: Biology, Microbiology, Science This entry was posted on August 22, 2007 at 10:18 pm and is filed under Biology, Microbiology, Science.
Identifying what kind of bulge a given galaxy has is very relevant if we wish to understand the formation and evolutionary processes such galaxy went through, until it reached the physical state presented to us today. While a classical bulge, i.e. component number 2 in the list above, suggests a more violent history, including mergers, a disk-like bulge possibly indicates a quieter evolution, if it is the only bulge in the galaxy. (Although note, again, that some mergers might contribute only to material in the outer halo, and not result in the formation of a bulge.) A given galaxy can have no bulge, can have a classical bulge or a disk-like bulge, or both. It's easy to picture a bulge-less disk galaxy evolving, accreting a smaller satellite in a merger event, which would originate a classical bulge, and then developing a bar which would produce a disk-like bulge. Later, the bar can itself evolve and have its inner parts puffed up and form a box/peanut. Eventually, this galaxy not only has a classical and a disk-like bulge, but also a box/peanut. Gadotti (2009) discussed composite bulges, i.e. classical bulges with a young stellar component that could be embedded disk-like bulges, while Nowak et al. (2010) argued that NGC 3368 and NGC 3489 show a small classical bulge embedded in a disk-like bulge. Finally, Kormendy & Barentine (2010) found that NGC 4565 has a disk-like bulge inside a box/peanut. Since disk-like bulges contribute to a smaller fraction of the total galaxy light than classical bulges (i.e. they have smaller bulge/total ratios - see e.g. Drory & Fisher 2007, Gadotti 2009), they are naturally found most often in more late-type galaxies. However, disk-like bulges can also be found in lenticular galaxies (Laurikainen et al. 2007), which can be understood in the context proposed by van den Bergh (1976, see also Kormendy & Bender 2012) of a Hubble sequence with spirals and lenticulars forming parallel branches. Durbala et al. 
(2008) found that galaxies hosting disk-like bulges are predominantly in low density environments (see also Zhao 2012). Mathur et al. (2011) and Orban de Xivry et al. (2011) found that the bulges of narrow line Seyfert 1 galaxies (AGN accreting at high rates and powered by less massive black holes) are disk-like bulges, an important clue to understand the fueling of AGN activity by bars (Shlosman et al. 1989) and the connected growth of bulges and their central black holes. Note that a disk-like bulge can be any of the components number 5 through 9 in the list above, or any combination of them. Classical and disk-like bulges can therefore be distinguished by their morphology. Although this can work well (see e.g. Fisher & Drory 2010), it is to a large extent subjective, and there are more objective ways to proceed with such a separation. Another method to distinguish bulge types is to look at their surface brightness radial profiles. In the past, these were fitted using the de Vaucouleurs (1948) function, used to fit such profiles in ellipticals. We now know that a better fit to the profiles of both ellipticals and bulges is provided by the Sérsic (1968) function, which is a generalization of the de Vaucouleurs' function (see Caon et al. 1993):

µ(r) = µe + cn [(r/re)^(1/n) - 1],

where re is the effective radius of the bulge, i.e., the radius that contains half of its light, µe is the bulge effective surface brightness, i.e., the surface brightness at re, n is the Sérsic index, defining the shape of the profile, and cn = 2.5(0.868n - 0.142). When n = 4, the Sérsic function becomes the de Vaucouleurs' function; when n = 1, it is an exponential function, and, when n = 0.5, a Gaussian. Important properties of the Sérsic function and its application to fit galaxy light profiles can be found in Trujillo et al. (2001) and Graham & Driver (2005).
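As a sketch, the Sérsic profile with the cn quoted above can be written as a short Python function. Note the exact cn involves the incomplete gamma function; this uses the linear approximation cn = 2.5(0.868n - 0.142) given in the text, and the numerical values are illustrative only:

```python
def sersic_mu(r, mu_e, r_e, n):
    """Surface brightness mu(r) = mu_e + c_n * ((r/r_e)**(1/n) - 1),
    with the approximation c_n = 2.5 * (0.868*n - 0.142) quoted above."""
    c_n = 2.5 * (0.868 * n - 0.142)
    return mu_e + c_n * ((r / r_e) ** (1.0 / n) - 1.0)

# by construction, mu(r_e) = mu_e for any Sersic index n
print(sersic_mu(1.0, 20.0, 1.0, 4.0))   # 20.0
print(sersic_mu(1.0, 20.0, 1.0, 1.0))   # 20.0
```

Varying n while holding mu_e and r_e fixed reproduces the behavior described above: n = 4 gives the centrally concentrated de Vaucouleurs-like profile, while n = 1 decays exponentially in radius, as in disks.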
There is evidence that the light profiles of most classical bulges, as well as ellipticals, are better described by a Sérsic function with n > 2, whereas most disk-like bulges have n < 2, i.e., closer to an exponential function, as disks (e.g. Fisher & Drory 2008, Gadotti 2009). Figure 11 shows schematically the light profiles of an elliptical galaxy and of disk galaxies with bulges following Sérsic functions with different values of n. For a real (and barred) galaxy, see the right panel in Fig. 4. Note that in order to obtain bulge structural parameters one needs to decompose either the galaxy light profile (1D decomposition) or better the whole galaxy image (2D decomposition) into the main different galactic components. Figure 11. Top left: a Sérsic function with n = 4, that could represent the light profile of an elliptical galaxy. Top right: a Sérsic function with n = 3 - that could represent the bulge in a galaxy of early Hubble type - plus an exponential function, representing the disk of such galaxy. Bottom left: same as the latter but with a Sérsic with n = 2; and finally, bottom right: same as the latter but with n = 1. The sum of both components is shown when this applies. Also indicated are the difference between the bulge effective and central surface brightness, µe - µ0 (note that this does not consider effects from a PSF), and the positions of re and the disk scale length h, for each model. However, the threshold at n = 2 to separate classical and disk-like bulges is set arbitrarily, and still lacks a clear physical justification. Furthermore, the uncertainty on the measure of n - typically 0.5 - is large compared to the range of values n typically assumes in bulges: 0.5 < n < 6 (see Gadotti 2008, Gadotti 2009). This means that using the Sérsic index to discriminate between bulge types is prone to misclassifications. 
A more physically motivated criterion to separate classical and disk-like bulges can be devised using the Kormendy (1977) relation between <µe> (the mean surface brightness within re) and re (Carollo 1999). The fact that classical bulges and elliptical galaxies seem to follow this relation suggests a similarity in the physics behind their formation. If the formation of disk-like bulges considerably involves different physical processes then they do not necessarily follow this relation. Figure 12 shows the Kormendy relation for elliptical galaxies and bulges, the latter separated by Sérsic index at n = 2. It is clear that, in contrast to most bulges with n > 2, many of those with n < 2 occupy a different locus in the <µe> - re plane. This tells us two things: (i): there seem to be bulges with different properties, and (ii): the Sérsic index is a first-order approximation to distinguish these bulges. However, one also sees that many bulges with n < 2 follow the same relation set by ellipticals, and several bulges with n > 2 do not. A follow-up in this analysis is then to define classical bulges as those which follow the Kormendy relation of ellipticals within 3σ boundaries. Conversely, disk-like bulges are then those which do not fall within these boundaries. It is important to note that this criterion is independent of the Sérsic index. This is done in Gadotti (2009) and it is found that disk-like bulges satisfy the following relation: where measurements are made using the SDSS i-band, and re is in units of a parsec. Figure 12. Kormendy (1977) relation for elliptical galaxies and bulges. The latter separated by Sérsic index: those with n > 2 appear only in the top panel, and those with n < 2 appear only at the bottom panel. The solid line is a fit to the elliptical galaxies, while the dashed lines mark the corresponding 3σ boundaries.
A more physically motivated definition for disk-like bulges is devised using the lower 3σ boundary: disk-like bulges fall below this boundary and are thus outliers in the Kormendy relation set by ellipticals. [Taken from Gadotti 2009.] Figure 13 shows a density plot of the <µe> - re plane using the same data as in Fig. 12, but without making any separation between galaxy/bulge types. It shows that the loci occupied by elliptical galaxies, classical bulges and disk-like bulges correspond to three well-defined `islands' of points. A 2D Kolmogorov-Smirnov test shows that these groups of points are indeed different populations, with a statistical confidence level of 5σ. This is important because it shows that the definition of disk-like bulges from Eq. 2 is not an artificial one, but in fact statistically justified. There is a statistically significant gap between classical and disk-like bulges in the <µe> - re plane. Since the sample used is drawn from a volume-limited sample, and has well-known selection effects, one can show that this gap cannot be attributable to spurious effects from the selection of the sample (see Gadotti 2009). Figure 13. Same as Fig. 12, but with no separation on galaxy/bulge type, and plotted as iso-density contours. Elliptical galaxies, classical bulges and disk-like bulges correspond to well defined `islands'. It can be shown that these islands represent populations of distinct physical systems with a confidence level of 5σ. This shows that the separation between classical and disk-like bulges using Eq. 2 is not artificial, and rather has solid physical grounds. [Adapted from Gadotti 2009.] Possibly the best way to recognize disk-like bulges from classical bulges is by directly studying their dynamics. As noted in the previous section, classical bulges are dynamically supported by the velocity dispersion of their stars, whereas disk-like bulges are supported by rotation. This is, however, demanding in terms of telescope usage.
Effects of Global Warming on Precipitation in Guangdong Province, China

Liu, D., Guo, S., Chen, X. and Shao, Q. 2012. Analysis of trends of annual and seasonal precipitation from 1956 to 2000 in Guangdong Province, China. Hydrological Sciences Journal 57: 358-369.

Specifically, Liu et al. analyzed "trends of annual, seasonal and monthly precipitation in southern China (Guangdong Province) for the period 1956-2000 ... based on the data from 186 high-quality gauging stations," and they employed "statistical tests, including the Mann-Kendall rank test and wavelet analysis," in order to determine whether the precipitation series exhibited any regular trends or periodicities.

In describing their findings the four researchers report that "annual precipitation has a slightly decreasing trend in central Guangdong and slight increasing trends in the eastern and western areas of the province," but they say that "all the annual trends are not statistically significant at the 95% confidence level." In addition, they discovered that "average precipitation increases in the dry season in central Guangdong, but decreases in the wet season," such that "precipitation becomes more evenly distributed within the year." Last of all, they state that "the results of wavelet analysis show prominent precipitation with periods ranging from 10 to 12 years in every [italics added] sub-region in Guangdong Province." And comparing precipitation with the 11-year sunspot cycle, they find that "the annual precipitation in every [italics added] sub-region in Guangdong province correlates with Sunspot Number with a 3-year lag."

Rather than becoming more extreme in the face of 1956-2000 global warming, Liu et al.'s analysis of the pertinent data suggests that precipitation in China's Guangdong Province has become both less extreme and less variable. And the temporal precipitation patterns that do emerge upon proper analysis suggest that the primary player in their determination is the sun.
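The Mann-Kendall rank test mentioned above has a very simple core: the statistic S counts, over all ordered pairs of observations, how many later values exceed earlier ones, minus how many fall below. A minimal Python sketch of just the S statistic (omitting the variance term and significance thresholds the authors would also have computed):

```python
from itertools import combinations

def mann_kendall_s(series):
    """Mann-Kendall S statistic: sum of sign(x_j - x_i) over all pairs i < j.
    Positive S suggests an upward trend, negative S a downward trend."""
    def sign(d):
        return (d > 0) - (d < 0)
    return sum(sign(xj - xi) for xi, xj in combinations(series, 2))

print(mann_kendall_s([3.1, 3.4, 2.9, 3.8, 4.0]))   # mostly rising series
```

A strictly increasing series of length n gives the maximum S = n(n-1)/2, and a strictly decreasing one the minimum -n(n-1)/2; values near zero, as Liu et al. found for annual totals, indicate no significant monotonic trend.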
Why is it not possible to store windfarm energy in battery banks?

A lot of energy isn't being used when wind farms are at peak and we have no way of storing it. This is wasteful. So why don't we use battery banks?

We can. But they don't scale well: to get grid-level storage, you need to be able to scale to gigawatts of power, and gigawatt-hours of energy. And to be able to cycle hundreds, or thousands, of times. To date, we have one technology which will do that, which is pumped storage hydro, which typically has a round-trip efficiency of 75%. So even though that's worse than a decent battery's round-trip efficiency, its scalability means that it dominates grid storage. But that's just one small part of the picture. Behind the question of storage is the physics question of how you balance an electricity grid. The grid typically has very low capacitance, so electricity in and electricity out must balance at every second. To manage that balance, you can either adjust the amount going in, or the amount going out, or both. There are lots of ways to integrate high wind penetrations into the grid: this is a solved problem technically. See, for example, the work in Energy Policy by Delucchi & Jacobson; or the book by Gregor Czisch on renewable scenarios. There are more ways to do virtual storage than direct storage. For example, delaying consumption of 1GWh of electrical energy at 1GW power, for one hour, is equivalent to storing it at 100% efficiency for one hour. In the UK, a lot of energy is used for domestic hot water use.
So thermal storage and delayed heating of that thermal storage, can act as a virtual storage for electrical heating of water. To put some numbers on that, UK domestic hot water storage is currently about equivalent to 60GWh @ 30GW for 24 hours. Similarly, with 20 million cars, if they were all electrified, you might have 20 million 50kWh batteries, which is 1TWh of electrical storage. V2G (vehicle to grid) studies often refer to a round-trip efficiency of about 75%, which comes from 10% loss on charge, 10% loss on discharge, and a few percent on transmission. To find out more, there's a wealth of literature on integrating renewables into the grid.
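The back-of-envelope figures in this answer are easy to check. A short Python sketch, where the 3% transmission loss is an assumed value standing in for "a few percent" and all numbers are illustrative rather than grid data:

```python
# Back-of-envelope check of the figures quoted above. The 3% transmission
# loss is an assumption standing in for "a few percent"; all numbers are
# illustrative, not real grid data.

cars = 20_000_000
battery_kwh = 50                         # per-vehicle pack size from the text
fleet_twh = cars * battery_kwh / 1e9     # kWh -> TWh

round_trip = 0.90 * 0.90 * 0.97          # charge * discharge * transmission

print(f"fleet storage: {fleet_twh:.0f} TWh")
print(f"V2G round trip: {round_trip:.0%}")   # in the ballpark of the ~75% quoted
```

The fleet figure reproduces the 1 TWh quoted above, and the multiplied losses land within a few percent of the ~75% round-trip efficiency cited for V2G studies.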
The buff-tail bee. Bumblebee declines, microbes, and amazing birds 13 June 2011 This week in the Planet Earth Podcast: what UK farmers are doing to protect the country's vanishing bumblebees, butterflies and other pollinating insects; how scientists are trying to figure out how many types of microbes there are on our planet and why they all matter; and why birds are more amazing than we ever imagined. Bumblebees, butterflies, honeybees and other pollinating insects are in trouble the world over. Habitat loss and farming intensification have led their populations to decline at a worrying rate. You might not think this affects you, but the fact is we rely on these insects to pollinate the crops that feed everyone. Now it turns out that farmers can help pollinating insects by limiting grass growth in so-called buffer margins and encouraging wildflowers to grow. Richard Hollingham meets a farmer and a scientist just outside Reading in southern England to find out how the scheme works. Later, Tim Hirsch meets ecologists in the Amazon and the UK to find out about a new global initiative called the Earth Microbiome Project. The project aims to build up the most detailed global picture of microbial diversity yet. A grand feat, but crucial if we want to understand how microbes contribute to the health of every ecosystem on Earth. Click the play button above to listen now. A full text transcript is available. Finally: the wonders of birds, and how technology is revealing exactly how high, fast and far these amazing creatures can go. If there's a subject you'd like to hear about in the Planet Earth Podcast, don't forget to let us know. Email your ideas to email@example.com or if you're on Facebook or Twitter, contact us there – see the links below. Interesting? Spread the word using the 'share' menu on the top right.
Description of Difflugia lacustris: The shell is transparent or hyaline, elongate, cylindrical or slightly pyriform. It is composed of small to medium pieces of quartz, diatom frustules and small siliceous flagellate cysts blended together to form a thin structure intermediate between smooth and rough. Only small areas of organic cement occur at the junction of the shell components. The cement is in the form of thick-walled rings, between 0.7-0.8 microns in diameter, perforated with either three or four holes, 0.12-0.16 microns in diameter, which gives these units a shape similar to a button. The cement may occasionally be seen either as rings with a slight indentation or as a network of joined rings. When organised as a network the walls of individual rings may be fused together, but the typical button-like forms are usually seen at the edges. The aperture is usually circular and surrounded by small particles so that the margin is smooth. Length 140-231 microns, breadth 63-94 microns, diameter of aperture 26-42 microns.
NOAA Teacher at Sea
Aboard R/V Savannah
July 7 – 18, 2012

Mission: SEFIS Reef Fish Survey
Location: Atlantic Ocean, off the coast of Daytona Beach, Florida
Date: July 13, 2012
Latitude: 29° 19.10’ N
Longitude: 80° 24.31’ W
Air Temperature: 28.3 °C (82.94 °F)
Wind Speed: 12 knots
Wind Direction: from Southeast
Surface Water Temperature: 27.48 °C (81.46 °F)
Weather conditions: Sunny and Fair

Science and Technology Log

Catching bottom fish at the reef

As the fish trap lies at the bottom of the ocean at the reef site, fish can enter and exit freely through the opening. At the end of approximately 90 minutes, the R/V Savannah returns to the drop site and begins the process of raising the trap with whatever fish remain inside. The six traps are pulled up in the order in which they were dropped. The crew member on watch in the wheelhouse will maneuver the boat toward the paired poly ball buoys at a speed of about 5 knots. The boat draws alongside each pair on the starboard side. One of the scientists throws a grappling hook toward the line that links the poly balls. The line is hauled in and passed to waiting scientists, who pull the poly balls on deck. There is substantial hazard associated with this step. Undersea currents can be very powerful near the bottom where traps are set. As scientists are pulling in the cable by hand, unexpected current force can yank the trap cable, rope and buoys out of their hands and off the deck in an instant. If personnel on deck aren’t mindful and quick to react, the speeding rope can cause serious rope burn injury. The cable connecting the fish trap and the poly balls is pulled in and threaded through the pulley system of a pot hauler. The pot hauler is an automated lifting tool that is operated by the second crew member on watch. At this time the first crew member on watch has left the wheel house and is piloting the boat from a small cab on deck above the pot hauler, so he can monitor the action below.
The pot hauler makes a distinctive clicking sound as it draws the trap toward the surface at an angle. It can take one to five minutes to raise the trap to the deck, depending on the depth of the water. As the fish trap becomes visible, shimmering rapidly changing shapes can be seen as fishes’ bodies catch and reflect sunlight. The trap clears the water and gets pulled aboard. Very quickly, and with two scientists holding each side, the trap is upended onto its nose and suspended above the deck. A third scientist opens the trap door at the bottom and the fish are shaken into a plastic bin. Ice pellets are shoveled onto the fish and a cover is snapped on the bin. If the catch is small, fish may be placed in a bucket or tub and covered with ice. A numbered tag is removed from the trap and tied onto the bin to identify specimens from each catch. The containers holding the day’s catch are set aside for later processing. Every so often, unexpected sea life is brought up in the traps. The catch has included sea stars, sea urchins, several kinds of tropical fish and many moray eels. Video cameras are also removed from the top of the trap. Their data cards will be downloaded. Fish behavior and surrounding habitat videos will be analyzed, along with anatomical specimens and size data taken from the fish themselves in the wet lab. Every day brings more wildlife encounters and sightings. I am dazzled by the many fascinating organisms I’ve been able to see up close. Sometimes I am quick enough to grab my camera and put the animal into my view finder, focusing clearly enough to catch a great image. Here are a few of those images (including some new friends from the cruise): Other times I have to capture a memory. Last night I tried reef fishing. I have no experience fishing. At all. Adam P. handed me his own rod and reel. The hook was baited and the line was already lowered to the bottom, down at around 40 meters (more than 120 feet).
Shortly after I took it, the tip of the rod began to bend downward and pull. I asked Adam if that meant something had been hooked. He said, “Go ahead. Reel it in.” That’s when I discovered that even recreational fishing is tough work – particularly this unfamiliar technique of holding the rod with the right hand and reeling in with the left. Neophyte to fishing is me. When the fish got to the surface, Adam took the big, beautiful black sea bass off the hook for me. On the deck it splayed out the spines of its dorsal, caudal and pectoral fins defensively. I was concerned because the fish’s air bladder was hanging out of its mouth from its rapid ascent to the surface. Adam punctured the air bladder to deflate it. He threw the fish back into the sea at my request, and assured me that the fish will go on with its life. I’m optimistic it will.
APS Division of Fluid Dynamics 2012 Gallery of Fluid Motion
Feature Summary - From Physics Research

- This image is a high-resolution computer simulation of the head-on collision of two tiny drops, one moving up and one moving down. The drops approach each other with a high velocity, so there is considerable energy in the collision. For more information, see this description. The diameter of each drop is only about twice the thickness of human hair. Imagine how hard it would be to do this experiment and see what actually happens. Also, compare the image at left, and also this additional Corrie White image, with the simulation above: Note how the edge of the disk and film break up into drops in a similar way.
- image credit: Xiaodong Chen and Vigor Yang (School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA); image source; larger image.
- Image URL: http://www.compadre.org/Informal/images/features/drop collision simulation.jpg
- February 1, 2013 - February 16, 2013
In this simulation the left boundary is kept at one degree higher temperature than the right boundary. On the top boundary the variations of surface tension due to temperature differences on the surface induce a tangential force on the flow. This effect is called thermocapillary convection or Marangoni convection. The flow was assumed incompressible. The ElmerSolver uses a stabilized finite element formulation to solve the incompressible Navier-Stokes equations and heat equation with the convection term. In this simulation linear triangular finite elements were used. The coupled system was solved by the method of sequential iteration. If you want to reproduce the results, download the file Marangoni.tar.gz and follow the instructions in the file README. You should have the Elmer package installed in your computer, however. The figures show the velocity vectors and the temperature field at steady state.
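The "method of sequential iteration" mentioned above alternates between the flow solve and the heat solve, feeding each solver the other's latest field until the pair stops changing. A schematic Python sketch, with the two PDE solves replaced by toy scalar updates (the coefficients are arbitrary; this only illustrates the coupling loop, not Elmer's finite element machinery):

```python
# Schematic of sequential (Picard-type) iteration between two coupled
# fields, reduced to scalars: u stands in for the flow solution and T for
# the temperature. The update coefficients are arbitrary toy values.

def solve_coupled(tol=1e-10, max_iter=100):
    u, T = 0.0, 0.0
    for it in range(1, max_iter + 1):
        u_new = 0.5 * T + 1.0        # "Navier-Stokes solve" with T frozen
        T_new = 0.25 * u_new + 0.5   # "heat equation solve" with updated u
        if abs(u_new - u) < tol and abs(T_new - T) < tol:
            return u_new, T_new, it
        u, T = u_new, T_new
    return u, T, max_iter

u, T, iterations = solve_coupled()
print(u, T, iterations)
```

Because each update is a contraction here, the loop converges in a handful of iterations; in the real coupled Navier-Stokes/heat problem convergence depends on how strongly the fields feed back on each other.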
Energy from Wood Termites eat wood to get energy to live and reproduce. Termites feed on damp rotting trees, and occasionally on houses. Trees build themselves by using energy from the sun to make a complex sugar called cellulose. The energy remains trapped in the wood until the tree dies. Termites chew through the wood to eat the energy-rich cellulose. At this exhibit visitors can see termites in action.
Wave climate in the Baltic Sea 2010 Due to the ice conditions in 2010, wave measurements could be carried out through the whole year only in the open sea area of the Southern Baltic Proper. In January the wave climate was rougher than usual in the Western Baltic Proper, while further north, at stations where measurements could still be carried out, the wave climate was calmer. In the northern parts of the Baltic Sea and in the Skagerrak the summer was typical for the season, while in the southern parts of the Baltic Sea it was calmer than usual. November was clearly rougher than usual at all the other stations where measurements were made, except in the Skagerrak. December was calmer at latitudes 58° N – 59° N, while it was clearly rougher at stations situated at higher and lower latitudes. Results and assessment In 2010 waves were measured at eight locations in the Baltic Sea and Skagerrak (Figure 1). These buoys provide real-time information on the wave climate for professional and leisure navigation. The wave measurements are also important for wave-related research and wave model development. As waves contribute to the mixing of the surface layer and their influence can extend to the bottom (resuspension), information about the yearly wave activity adds to the understanding of the physical environment of the Baltic Sea. Figure 1. The positions of the wave measuring sites in 2010. Red dots indicate FMI buoys in the Northern Baltic Proper and in the Gulf of Finland (station Helsinki), blue dots SMHI buoys in the Southern Bothnian Sea (station Finngrundet), in the Northern Baltic Proper (station Huvudskär Ost), in the Southern Baltic Proper and in the Skagerrak (station Väderöarna), and green dots the BSH and GKSS buoys off Cape Arkona and on the Darss Sill. See section Metadata for the exact positions of the buoys.
The Bothnian Bay The Southern Bothnian Sea, station Finngrundet The buoy at Finngrundet was not operating from February to May, mainly due to the ice conditions in the Bothnian Sea. The highest significant wave height measured during the measurement period was 4.7 metres, on 9th November. Generally, the monthly mean significant wave heights in autumn and winter (September through December) were 0.10 to 0.30 metres higher than usual, with significant wave heights above three metres on several occasions. Figure 2. Time series of significant wave height at station Finngrundet. The Gulf of Finland The middle parts of the Gulf of Finland, station Helsinki The period with risk of ice damage in the middle parts of the Gulf of Finland typically lasts from January to May. In 2010 the buoy was recovered on 6th January due to the risk of ice and deployed again at the end of May. The summer season from June to August was on average typical for the season. The highest significant wave height, 2.8 m, was measured on 12th June, while in July and August the significant wave height remained below 2.4 m. September and October were calmer than usual and the significant wave height did not exceed 2.4 m. On average, November and December were clearly rougher than usual. The highest significant wave height was 4.5 m, measured during an easterly storm on 24th November. Significant wave heights over four metres are rather rare at this site. The ice period started early in 2010 and the wave buoy was recovered on 22nd December. The Baltic Proper Between 7th and 9th November a storm moving in over the Baltic Sea from the southeast passed over the stations Southern Baltic, Northern Baltic Proper, Huvudskär Ost and Finngrundet. The mean wind velocities were between 21 and 25 m/s at a number of coastal stations along the Bothnian Sea and the Baltic Proper.
Measurements show a quite fast change in the mean wave direction from the northwest to the east sector starting in the evening of the 7th at Southern Baltic (with significant wave heights rising to above 3 metres). Similar changes in wave direction could be observed at Huvudskär Ost and at Northern Baltic Proper some hours later, and at Finngrundet about 20 hours later. The highest significant wave heights during this storm were registered on the 8th around 7 pm at Southern Baltic, on the 9th at noon at Huvudskär Ost (5.7 m) and at Northern Baltic Proper (5.6 m), and around 10 pm at Finngrundet (4.7 m). The significant wave heights at the latter three stations were the highest registered during 2010. At the station Helsinki in the Gulf of Finland the change of wave direction occurred in the evening of the 8th, and the significant wave height was highest (3.8 m) in the evening of the 9th. The Northern Baltic Proper, stations Northern Baltic Proper and Huvudskär Ost Due to the risk of ice the wave buoy at the station Northern Baltic Proper was recovered on 25th January and redeployed at the end of May. Before the buoy was recovered in January the wave climate was much milder than usual, reflecting the growing ice cover in the area. The period from June to August was typical for the season. The significant wave height exceeded three metres three times in June-July; the highest value, 3.4 m, was measured on 12th June. September, October and November were somewhat rougher than usual, while December was calmer. At this station the highest significant wave height of the 2010 measuring period, 5.6 m, was measured on 9th November. During the easterly storm on 24th November, when there were high waves in the middle parts of the Gulf of Finland, the significant wave height at this site reached 5.1 metres. 5.1 metres was also measured on 16th December, and the significant wave height exceeded four metres four times in that month.
The buoy at Huvudskär Ost is located west-southwest of the buoy at station Northern Baltic Proper and closer to the shoreline. Due to ice conditions and technical problems, measurements could only be conducted during two short periods, May to mid-July and mid-October to the end of November. The mean significant wave height was slightly lower than usual during summer. The highest significant wave height, 5.7 metres (maximum individual wave height 10.8 metres), was measured on 9th November. These were the highest waves ever measured at this position. Figure 3. Time series of significant wave height at the station Huvudskär Ost. Southern Baltic Proper, station Southern Baltic At the position Southern Baltic a relatively calm summer was followed by autumn months with a lot of stormy weather resulting in high waves. Although in September waves were only registered during the second half of the month, a look at the weather shows extended periods of strong winds and rain showers over the Baltic even at the beginning of the month. During both September and December a number of low pressure systems moved over the Baltic Sea, causing the significant wave height to exceed four metres on a number of occasions. The highest significant wave height registered by the Southern Baltic buoy, 5.8 metres, occurred on September 28th. But even on September 16th and 18th, shortly after redeployment of the buoy, significant wave heights around 4.5 metres were measured. During December significant wave heights were above four metres on at least five occasions (on the 10th, 12th, 14th, 17th and 23rd). The mean significant wave heights during the second half of September and during December were also high for the season. Due to maintenance the measurements were disrupted for a short period between mid-August and mid-September. Figure 4. Time series of significant wave height at the station Southern Baltic.
Western Baltic Proper, stations Darss Sill and Arkona The wave buoys were taken out of service in February and March as a precaution to prevent them from being damaged by drift ice. The Darss Sill buoy suffered repeated malfunctions during the year, so that only few measurement data were available for evaluation. Mean significant wave heights in the area of the Darss Sill are typically 0.6 m in summer and 0.9 m in winter; the annual mean is 0.77 m. Wave heights at the Arkona station are slightly higher, ranging from about 0.6 m to 1.2 m, with an annual mean of 0.93 m. The most frequent wind direction, and thus also the predominant wave direction, is west-southwest (WSW), especially during storm events. The wind fetch in this offshore direction is much longer at the Arkona Basin than at the Darss Sill, i.e. waves reaching the Arkona station have more time to grow. Even for other wind and wave directions, the larger distance from the coast and the deeper water at the Arkona station are important factors contributing to higher wave heights. Wind conditions in 2010, as in 2009, were relatively calm. The mean wind speed measured in the Arkona Basin, 7.6 m/s, was below the long-term mean (8.0 m/s), and even below that of 2009 (7.8 m/s). The mean wind and wave direction (250-260°) did not deviate from the long-term mean, although this wind direction was more frequent than usual. The annual mean significant wave height did not differ significantly from the long-term mean. Monthly means during the six summer months were slightly lower, and in winter they were higher. In December, the mean wave height in the Arkona Basin exceeded the average by as much as 0.3 m. Unlike the monthly mean values, the maximum values were mostly lower than the extreme values measured so far, except in January (although this statement is of limited validity due to data gaps). The highest waves in 2010 were measured in January during a prolonged NE storm of 8-9 Bft.
Significant wave heights reached 4.8 m at Arkona, which almost matched the historical extreme value for January (4.9 m), and 4.2 m at the Darss Sill, which constituted a new maximum for that area. The high waves were characterised by long wave periods of about 10 s. This storm event also caused the increased monthly mean values in January. Kattegat and Skagerrak Kattegat, station Läsö Ost During 2010 no buoy was operating at this position. In 2012 it is planned to continue measurements at a new position, most likely further south in the Kattegat area. Skagerrak, station Väderöarna The buoy at Väderöarna was recovered at the end of January and redeployed in mid-April to prevent ice damage. On April 17th a low pressure system with strong winds from the west resulted in a significant wave height of 3.7 metres, a new maximum for April at this position. The highest significant wave height at Väderöarna, 5.4 metres, was registered on August 24th, when strong and persistent winds caused individual waves of up to 8.4 metres in height. From September to December it was relatively calm, with mean significant wave heights below normal. During this period the maximum significant wave height exceeded four metres only once, on November 4th. Figure 6. Time series of the significant wave height at the station Väderöarna. Figure 7. The monthly means of significant wave heights in the Southern Bothnian Sea, the Gulf of Finland and the Baltic Proper. In some months the long-term statistics are calculated over fewer years (but at least four years) than indicated in the legend. Figure 8. The monthly means of significant wave heights in the Western Baltic Proper and the Skagerrak. In some months the long-term statistics are calculated over fewer years (but at least four years) than indicated in the legend. Figure 9. The monthly maxima of significant wave heights in the Southern Bothnian Sea, the Gulf of Finland and the Baltic Proper. Figure 10.
The monthly maxima of significant wave heights in the Western Baltic Proper and the Skagerrak. In 2010 the Finnish Meteorological Institute (FMI) made real-time wave measurements at two locations in the Baltic Sea, in the Northern Baltic Proper (station Northern Baltic Proper, 59° 15' N, 21° 00' E) and in the Gulf of Finland (station Helsinki, 59° 58' N, 25° 14' E). The northern parts of the Baltic Sea freeze every year, and the length of the measuring periods varies from year to year depending on the extent of the ice cover. The Swedish Meteorological and Hydrological Institute (SMHI) made wave measurements at four locations: in the Southern Bothnian Sea (station Finngrundet, 60° 54' N, 18° 37' E), in the Northern Baltic Proper (station Huvudskär Ost, 58° 56' N, 19° 10' E), in the Southern Baltic Proper (station Southern Baltic, 55° 55' N, 18° 47' E) and in the Skagerrak (station Väderöarna, 58° 29' N, 10° 56' E). Since 1991, wave measurements in the western Baltic Sea have been carried out at a station located at 54° 41.9' N, 12° 42.0' E in the area of the Darss Sill (with GKSS Research Centre as the operator), and since 2002 at a station northwest of Cape Arkona (54° 52.9' N, 13° 51.5' E), where measurements are made by the Federal Maritime and Hydrographic Agency of Germany (BSH). Long-term climatological wave data are not yet available at the latter position. Up to now, measurement interruptions due to ice formation have occurred in the winter of 1995/1996 at the Darss Sill measuring station and in February and March 2010 at both stations. The waves at each station are measured with surface-following buoys (Seawatch, Directional Waverider and Waverider). Measurements were collected approximately every hour via HF link, Argos satellite or the Orbcomm system. The significant wave height is calculated onboard the buoys over 1600 s time series of surface displacement, and the quality of the measurements was checked according to the routines at each of the responsible institutes.
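The onboard significant wave height computation can be illustrated with the spectral definition Hs = 4·sqrt(m0), where m0 is the variance of the surface displacement record. A minimal sketch; the synthetic sinusoidal record is an assumption for illustration, whereas the buoys use real 1600 s displacement series:

```python
import math

def significant_wave_height(displacement):
    """Hs = 4 * sqrt(m0), where m0 is the variance of the surface displacement."""
    n = len(displacement)
    mean = sum(displacement) / n
    m0 = sum((x - mean) ** 2 for x in displacement) / n
    return 4.0 * math.sqrt(m0)

# Synthetic record: a 1 m amplitude sinusoid sampled over whole periods,
# standing in for a buoy's surface-displacement time series.
eta = [math.sin(2.0 * math.pi * k / 100.0) for k in range(1000)]
hs = significant_wave_height(eta)
print(f"Hs = {hs:.2f} m")  # 4 * sqrt(0.5) ≈ 2.83 m for a unit-amplitude sine
```

For a pure sinusoid of amplitude a the variance is a²/2, so Hs = 2·sqrt(2)·a; for real sea states Hs computed this way closely tracks the classical "mean height of the highest third of the waves".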
The lengths of the deployment periods in 2010 are indicated in the text. The length of the period at each station depends on the extent of the ice cover, maintenance and deployment logistics, and possible instrument damage. As a consequence, measurements are not always available for 12 months per year for the long-term statistics. The years given in Figures 7 and 8 indicate the start of the measurements; in some months the statistics cover fewer years, but only statistics over at least four years are plotted in the figures. Because the lengths of the time series underlying the statistics vary, the statistics should be used with caution. For reference purposes, please cite this Baltic Sea Environment Fact Sheet as follows: [Author's name(s)], [Year]. [Baltic Sea Environment Fact Sheet title]. HELCOM Baltic Sea Environment Fact Sheets 2011. Online. [Date Viewed], http://www.helcom.fi/environment2/ifs/en_GB/cover/. Last updated 6.9.2011
|MadSci Network: Chemistry| What exactly is dipole moment and what is its relationship to hydrogen bonding? Dipole moment is due to the degree of charge separation in a molecule. How it is calculated (or how the degree of charge separation can be determined from a dipole moment, since that's what's observable) is given in most physics textbooks, but normally it's the product of the quantity of charge and the distance between the centers of positive and negative charge. All polar molecules have permanent dipole moments, meaning that they have a positive end (usually carbon or hydrogen) and a negative end (usually oxygen, nitrogen or a halogen). "Temporary" dipole moments are discussed here. But some polar molecules are also good at coordinating highly-charged species, such as ions. For example, ethers are good at coordinating alkali metal cations. This is because negative charge is fairly concentrated on the oxygen atom of the ether. But ethers are not good at coordinating halide anions! This is because the positive charge in ethers is widely spread among several different carbon and hydrogen atoms and doesn't present a "concentrated target" for an anion. So-called protic solvents all contain hydrogen atoms bonded directly to electronegative atoms (oxygen or nitrogen). Since the hydrogen atom is electropositive, it takes on a positive charge; because the hydrogen atom is very small, the positive charge is quite concentrated. This means that protic solvents are good at coordinating both cations and anions, because they have both a concentrated negative charge (the oxygen or nitrogen atom) and a concentrated positive charge (the hydrogen bonded to oxygen or nitrogen). The center of negative charge on one molecule can also coordinate the center of positive charge on a neighboring molecule, if it is concentrated enough. Because a concentrated yet non-ionic center of positive charge is always a hydrogen atom, this is called a hydrogen bond.
Hydrogen bonds give ice its open structure and hold together the double-helix of DNA. Try the links in the MadSci Library for more information on Chemistry.
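The "quantity of charge times distance" definition above can be made concrete with a small calculation. A minimal sketch, assuming one full elementary charge separated by 1 Å; real molecular charges are partial, so actual dipole moments are smaller:

```python
# Dipole moment = charge * separation, reported in debye.
# The charge and separation below are illustrative assumptions,
# not data for any specific molecule.

E_CHARGE = 1.602176634e-19   # elementary charge, C
DEBYE = 3.33564e-30          # 1 debye in C*m

def dipole_moment_debye(charge_c, separation_m):
    """Dipole moment of two opposite point charges, in debye."""
    return charge_c * separation_m / DEBYE

# One full elementary charge separated by 1 angstrom (1e-10 m):
mu = dipole_moment_debye(E_CHARGE, 1.0e-10)
print(f"{mu:.2f} D")  # ≈ 4.80 D
```

Since measured dipole moments of small polar molecules are typically 1-2 D, working the calculation backwards shows that their effective charge separation corresponds to only a fraction of an elementary charge, which is exactly the "degree of charge separation" the answer describes.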
To answer this need for Earth observation capacity, several Earth observation systems have been built. Agenda 21, G8 meetings and other international declarations (the latest from Rio+20) lend political support and reference for this capacity building. Below you will find descriptions of a handful of Earth observation systems that have been or are being constructed. The question, then, is: are we as a global society coordinated enough when answering this need for Earth observation capacity? Global Monitoring for Environment and Security - GMES The European initiative for Global Monitoring for Environment and Security (GMES) provides data to help deal with a range of disparate issues, including climate change and border surveillance. Land, sea and atmosphere - each is observed through GMES, helping to make our lives safer. GMES was initiated in 1998 by the main space actors in Europe: the European Space Agency (ESA), the European Commission (EC) and the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT). GMES consists of a complex set of systems which collect data from multiple sources (Earth observation satellites and in situ sensors such as ground stations, airborne and sea-borne sensors), process these data and provide users with reliable and up-to-date information through a range of services. Some of these systems and data sources already exist today, as do prototype services, but many developments are still required in all domains. GMES is an EU-led initiative. The coordination and management of the GMES programme is ensured by the European Commission. The developments related to the observation infrastructure are performed under the aegis of the European Space Agency for the space component (i.e. the Sentinel missions) and of the European Environment Agency and the Member States for the in situ component.
Group on Earth Observations - GEO The Group on Earth Observations is coordinating efforts to build a Global Earth Observation System of Systems, or GEOSS. GEO was launched in response to calls for action by the 2002 World Summit on Sustainable Development and by the G8 (Group of Eight) leading industrialized countries. These high-level meetings recognized that international collaboration is essential for exploiting the growing potential of Earth observations to support decision making in an increasingly complex and environmentally stressed world. GEO is constructing GEOSS on the basis of a 10-Year Implementation Plan for the period 2005 to 2015. The Plan defines a vision statement for GEOSS, its purpose and scope, expected benefits, and the nine "Societal Benefit Areas" of disasters, health, energy, climate, water, weather, ecosystems, agriculture and biodiversity. Eye on Earth Eye on Earth is a global public information service to share data and information from diverse sources. Eye on Earth allows you to manipulate the data for collective discovery. Eye on Earth is the result of a public-private partnership joining expertise from industry and public organisations. The European Environment Agency (EEA), Esri and Microsoft Corporation collaborated to launch the Eye on Earth Network, the online community for sharing and discovering data about the environment. This new cloud computing-based network promotes the principles of public data access and citizen science. The first Eye on Earth Summit was held in Abu Dhabi in 2011. EarthCube In 2011 EarthCube was initiated by the US National Science Foundation. The goal of EarthCube is to transform the conduct of research by supporting the development of community-guided cyberinfrastructure to integrate data and information for knowledge management across the geosciences. ICSU's World Data System (WDS) In 2008 ICSU decided to establish the World Data System - WDS.
The WDS supports ICSU's mission and objectives, ensuring the long-term stewardship and provision of quality-assessed data and data services to the international science community and other stakeholders. WDS covers more than the Earth system sciences. The WDS concept aims at a transition from existing stand-alone WDCs and individual services to a common, globally interoperable, distributed data system that incorporates emerging technologies and new scientific data activities. The new system will build on the potential offered by advanced interconnections between data management components for disciplinary and multidisciplinary scientific data applications. Applications for the new WDS are already being investigated, including the WDC online portal, which is being considered as a proof of concept for an element of the new system. WDS will enjoy a broader disciplinary and geographic base than previous ICSU bodies and will strive to become a worldwide 'community of excellence' for scientific data. Future Earth Future Earth is a new 10-year international research initiative that will develop the knowledge for responding effectively to the risks and opportunities of global environmental change and for supporting transformation towards global sustainability in the coming decades. Future Earth will mobilize thousands of scientists while strengthening partnerships with policy-makers and other stakeholders to provide sustainability options and solutions in the wake of Rio+20. Future Earth was launched in 2012 and is a common effort of a number of international research programmes, funding agencies, ISSC and ICSU. The blue marble by NASA As a politician, science policy maker or funding agency I would ask myself: Where shall I invest my money? They all look so similar. Redundancy is necessary and good, but are we funding capacity building that will result in too much overlap?
As an end-user, in science, industry or government, or as a citizen, I would ask myself: where can I find the right information for me in the most effective and quality-assured way? It is a jungle of portals out there. Who can I trust? In both cases I would be confused, and it would take me some time to figure out where to spend my money and where to go for information. Maybe this is how it has to be or even ought to be, but it sure looks like governments have lost track of the original idea and motivation behind the establishment of GEO: namely, to create a global interoperable system of systems that increases capacity yet avoids unnecessary overlaps. The question asked in the title deserves a thorough analysis. Science policy, socio-economic and political research is needed as a basis for answering my follow-up questions, and I suspect there are no easy answers or solutions for them. All the same, the unanswered questions reflect part of reality. We, who work both on funding strategy and on trying to fill the gap between the available knowledge and the end-users, have to deal with this rather confusing and messy reality. Even as insiders we get lost sometimes. The author has experience from all elements of the research system, including a national funding agency where she worked on both national and international science policy and programme funding. Today she runs BLB, a European SME and partner in the EU-funded project Egida. One of BLB's tasks is to help develop a funding scheme for GEO.
Bat biologist Nickolay Hristov, of UNC's Center for Design Innovation and Winston-Salem State University, develops new techniques for filming and visualizing bats and the caves they occupy. Some of the tools in his kit include a long-range laser scanner, for modelling bat cave morphology, and portable thermal cameras, to capture bat life when the lights are off. Video, images courtesy of Nickolay Hristov. Darwin images: Reproduced with permission from John van Wyhe ed. The Complete Work of Charles Darwin Online (http://darwin-online.org.uk), music: Broke For Free / Free Music Archive, produced by Flora Lichtman
Web edition: July 1, 2011 Print edition: July 16, 2011; Vol.180 #2 (p. 4) CERAMICS PROVED BEST FOR POWER GENERATORS — Ceramics have proved to be the best material for checking the white-hot stream of gases in a new kind of electric power generator. Westinghouse Electric Corporation scientists, Pittsburgh, Pa., believe ceramics will be superior to iron and steel for magnetohydrodynamic (MHD) electric power generators. They found that ceramics, relatives of those widely used for making bricks, tile and pottery, could be used to line the walls of the MHD generators and to project into the stream of gas that provides the electric power. Magnetohydrodynamics is one of the newest methods for direct generation of electricity without using a steam turbine or a rotating electric generator.
Web edition: June 22, 2012 About 300 million years ago, long before the first dinosaurs appeared, a different type of oversized critter inhabited Earth: giant insects. Scientists suspect bugs grew bigger then because the atmosphere contained more oxygen than it does now. For example: Wings of one ancient dragonfly measured almost as long, tip to tip, as a Little League baseball bat. Alas, the giant insects didn’t last, and a modern dragonfly can fit comfortably inside a Wiffle ball. In a new study, researchers say the reign of mammoth insects ended when hungry, flying predators came along about 150 million years ago. D. Powell. Ancient birds wiped out huge insects. Science News Online, June 4, 2012.
Pineapple (Ananas comosus var. comosus) is an important tropical non-climacteric fruit with high commercial potential. Understanding the mechanisms and processes underlying fruit ripening would enable scientists to enhance the improvement of quality traits such as flavor, texture, appearance and fruit sweetness. Although the pineapple is an important fruit, there is insufficient transcriptomic or genomic information available in public databases. Application of high-throughput transcriptome sequencing to profile pineapple fruit transcripts is therefore needed. To facilitate this, we have performed transcriptome sequencing of ripe yellow pineapple fruit flesh using Illumina technology. About 4.7 million Illumina paired-end reads were generated and assembled using the Velvet de novo assembler. The assembly produced 28,728 unique transcripts with a mean length of approximately 200 bp. Sequence similarity search against the non-redundant NCBI database identified a total of 16,932 unique transcripts (58.93%) with significant hits. Of these, 15,507 unique transcripts were assigned to gene ontology terms. Functional annotation against the Kyoto Encyclopedia of Genes and Genomes pathway database identified 13,598 unique transcripts (47.33%), which were mapped to 126 pathways. The assembly revealed many transcripts that were previously unknown. The unique transcripts derived from this work have rapidly increased the number of pineapple fruit mRNA transcripts available in public databases. This information can be further utilized in gene expression, genomics and other functional genomics studies in pineapple.
NEW DELHI (AP) - Delegates from nearly 200 countries are working to implement an agreement for protecting Earth's ecosystems at a biodiversity conference in southern India. The U.N. conference in Hyderabad is discussing progress toward achieving goals set in the Convention on Biological Diversity and the Nagoya Protocol created in Japan two years ago. The protocol lays down steps for countries to protect ecosystems and share access to genetic resources. Convention officials told delegations that 92 countries have signed the protocol but only six have ratified it. Scientists warn that numerous species could become extinct unless action is taken to protect them. However, countries are divided over resources to fund the Nagoya protocol.
An indispensable tool for both learning and programming Erlang. Submitted by: LRP; July 30, 2011 You don’t need to know Erlang or use the Erlang shell to create simple Zotonic websites. But for Zotonic application development and debugging, Erlang programming skills are essential. Erlang is not that hard to learn. Several excellent books and any number of web tutorials will get you well underway. The Erlang shell is an indispensable tool for both learning and programming Erlang. The easiest way to learn how to use the Erlang shell is to fire it up and play. This Cookbook recipe provides all you need to get started, assuming you have a recent version of Erlang installed on your system. What is the Erlang shell? The Erlang shell lets you test Erlang expression sequences both locally and remotely. It also lets you work with Erlang records and manage Erlang jobs. How can I enter the Erlang shell? From your terminal command line, type erl (and press enter). You’ll see something like: Erlang R13B03 (erts-5.7.4) [source] [64-bit] [smp:3:3] [rq:3] [async-threads:0] [hipe] [kernel-poll:false] Eshell V5.7.4 (abort with ^G) 1> 1> is the Erlang shell command line. You are now able to execute Erlang expressions. How can I get help? help(). – list of shell functions How can I execute an expression? Type now(). and then hit ENTER. NOTE: the period at the end of the expression is necessary. It terminates the Erlang expression sequence and initiates evaluation. How can I edit a line? For shell editing commands, refer to Section 1.3.1 in: http://www.erlang.org/documentation/doc-5.1/doc/getting_started/getting_started.html How can I manage Erlang jobs? How can I leave the shell? q(). – quits Erlang Are there other ways to quit? BEWARE: You don’t want to bring a running Erlang system down just to quit the shell. Use q() until you get the hang of stuff. When I enter Ctrl+C, I get a bunch of choices. What do they mean? Refer to: http://www.erlang.org/doc/man/shell.html
The Trusty Jackknife Method
Identifies outliers and bias in statistical estimates
by I. Elaine Allen and Christopher A. Seaman

Outliers are a continual source of problems when analyzing data. A few questionable data points can skew your distribution, make significant results seem insignificant and generally ruin your day. While you can’t simply throw away inconvenient data when it doesn’t support your hypothesis, there is a simple procedure to identify small subsets of data that influence statistical measures. It is called the jackknife. Initially presented by John W. Tukey in an abstract in the Annals of Mathematical Statistics in 1958,1 the jackknife is a resampling technique that is a special case of the bootstrap.2 A relatively simple and straightforward procedure, it has been widely adopted as an estimator of bias for any statistic and as a way to examine the stability of a variance estimate. The jackknife can be a useful tool in quality control estimation by identifying outliers and bias in statistical estimates. In this column, the jackknife procedure will be applied to meta-analysis as a way of identifying studies with large influence on the summary effect size estimate.

The jackknife procedure is a simple idea. For any summary statistic, the spread of individual values comprising this statistic can be examined by systematically eliminating each individual observation (or a group of observations) from a dataset, creating a set of "perturbed" summary statistics. The magnitude of the difference between the overall summary statistic and each jackknifed statistic is an estimate of that value’s influence on the summary value. For example, if you have scores of 1, 2 and 3, their mean is 2. The means by selectively eliminating an individual value and averaging the other two values are 1.5 (eliminating 3), 2 (eliminating 2) and 2.5 (eliminating 1).
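The toy calculation above can be sketched in a few lines of Python (an illustrative sketch; the function name is ours, not from any statistics package):

```python
def leave_one_out_means(values):
    """Mean of the sample with each observation removed in turn."""
    n = len(values)
    total = sum(values)
    return [(total - v) / (n - 1) for v in values]

scores = [1, 2, 3]
# Dropping 1, 2 and 3 in turn gives means of 2.5, 2.0 and 1.5,
# matching the hand calculation above.
print(leave_one_out_means(scores))
```

The spread of these perturbed means is exactly what the jackknife inspects: a value whose removal shifts the summary disproportionately is a candidate outlier.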
Observations might be considered outliers or points of high influence on the summary statistic when the effect of removing them from the dataset is disproportionately large. This is a useful and important technique because whenever a statistic is estimated, there is some degree of variability (or error) associated with it. In general, the procedure for performing a jackknife is:
1. Given a sample of size n and a sample estimate (for example, µ, the mean), divide the sample into g exhaustive and mutually exclusive subsamples of size h (in many, if not most, cases h will equal 1).
2. Drop one subsample from the original sample and calculate µ(-1), the estimate with that subsample removed. You now have a reduced sample of size (g - 1)*h.
3. Calculate the effect of dropping out one subsample: biasµ = g*µ - (g - 1)*µ(-1).
4. Repeat steps 2 and 3 for all g subsamples, yielding a vector of biasµ values.
5. Take the mean of this vector to yield the overall jackknife estimate of µ.

Since the jackknife estimate of µ can be shown to be unbiased, an estimate of the overall bias of the statistic is just the difference between µ and its jackknife estimate.

Applying the jackknife

In meta-analysis, it is the individual study’s effect on the overall effect size that is of interest. It’s important to examine the influence one study can have on the overall outcome and, when that study is removed, whether a significant effect size in one direction becomes insignificant or possibly significant in the opposite direction. Including jackknife estimates in meta-analysis software is becoming standard, and using them as a validity tool has started to be included in results of meta-analyses. The first example is real data summarizing quality-of-life data from a new treatment for cancer. The second example uses some of the studies from the first example but has perturbed others to be more extreme in their study summary statistic or in the size of the variance.
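With subsamples of size h = 1, the procedure above fits in a short routine. This is a sketch under that assumption (function and variable names are ours); the second statistic shows the bias correction doing real work, since the naive divide-by-n variance is a biased estimator for a sample:

```python
def jackknife(values, stat):
    """Jackknife a statistic with subsamples of size h = 1.

    Returns (estimate, jackknife_estimate, bias_estimate); the bias
    estimate is the difference between the original statistic and its
    jackknife estimate, as described in the text.
    """
    g = len(values)
    full = stat(values)                                              # the full-sample estimate
    dropped = [stat(values[:i] + values[i + 1:]) for i in range(g)]  # drop each subsample in turn
    pseudo = [g * full - (g - 1) * d for d in dropped]               # the vector of bias values
    jack = sum(pseudo) / g                                           # overall jackknife estimate
    return full, jack, full - jack

def variance_n(xs):
    """Naive variance (divide by n): biased when applied to a sample."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

full, jack, bias = jackknife([1, 2, 3], variance_n)
# full is 2/3; jack recovers the unbiased sample variance, 1.0,
# so the estimated bias is -1/3 (up to float rounding).
```

For the mean itself the estimated bias comes out zero, as the text notes it should for an unbiased statistic.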
In both cases, the fixed-effects and random-effects models are shown. The difference between these models is that fixed-effects models control for within-study variability but assume that the variability between studies is constant and is not controlled. The random-effects models control the variability within and between studies and are more conservative. This can be seen in both examples, but especially in the second example. The first example shows how using the jackknife can give assurance that there is no bias introduced by specific studies in the meta-analysis. This is shown in Figures 1 and 2. Figure 1 shows the meta-analysis of eight studies, of which all are relatively consistent in their results, giving an overall effect size that is significant (p-value < 0.001). Figure 2 displays the results of the jackknife estimates. The first line of Figure 2, Study 1, shows the overall estimate with Study 1 omitted. Notice how consistent the jackknife estimates are, indicating the effect size estimate is not biased by the influence of any one study. You can conclude from the jackknife analysis that the results are consistent and valid. The second example is considerably more problematic, as it shows studies that are extremely variable, and the results of the jackknife example give different results depending on the meta-analysis model applied to the studies. Studies 2 and 6 are widely differing in their study statistics, with Study 2 significantly favoring control and Study 6 significantly favoring treatment (see Figure 3). The summary statistics for the fixed and random-effects models are inconsistent, with the fixed-effects model significantly favoring treatment and the random-effects model showing no difference between treatment and control. The conclusions might take several forms, and the jackknife estimates show quite different results. 
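The two weighting schemes can be sketched directly. This is a generic inverse-variance formulation using the DerSimonian-Laird estimator for the between-study variance, a common textbook choice; it is not necessarily what the software referenced later in this column implements:

```python
def fixed_effect(effects, variances):
    """Fixed-effects summary: weight each study by 1/v_i (within-study variance only)."""
    w = [1.0 / v for v in variances]
    return sum(wi * y for wi, y in zip(w, effects)) / sum(w)

def random_effects(effects, variances):
    """Random-effects summary: add a between-study variance tau^2
    (DerSimonian-Laird) to every study variance before weighting."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    sw = sum(w)
    mu_fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    # Cochran's Q measures how much the studies disagree.
    q = sum(wi * (y - mu_fixed) ** 2 for wi, y in zip(w, effects))
    tau2 = max(0.0, (q - (k - 1)) / (sw - sum(wi * wi for wi in w) / sw))
    w_star = [1.0 / (v + tau2) for v in variances]
    return sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
```

When studies are homogeneous, tau^2 is zero and the two summaries coincide; when studies disagree sharply, as in the second example, tau^2 grows, the weights even out, and the random-effects summary is pulled toward the unweighted centre, which is the conservatism described above.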
Given that the results are so extreme, initially returning to the original studies and ensuring the data are correct is important. Next, examining the inclusion criteria for studies to ensure that they all meet the criteria and trying to identify any moderating variables that might cause such extreme results is important. Finally, in this case it might not be appropriate to perform a quantitative meta-analysis of these studies given the huge variability between the estimates.

[Online Figure 2: Random-effects jackknife estimates, eliminating single studies.]

1. John W. Tukey, "Bias and Confidence in Not Quite Large Samples," abstract, Annals of Mathematical Statistics, Vol. 29, 1958, p. 614.
2. Bradley Efron, The Jackknife, the Bootstrap and Other Resampling Plans, Philadelphia: Society for Industrial and Applied Mathematics, 1982.

Adams, Dean C., Jessica Gurevitch and Michael S. Rosenberg, "Resampling Tests for Meta-Analysis of Ecologic Data," Ecology, Vol. 78, No. 5, 1997, pp. 1,277-1,283.
Baghi, Heibatolla, Siamak Noorbaloochi and Jean B. Moore, "Statistical and Nonstatistical Significance: Implications for Health Care Researchers," Quality Management in Health Care, Vol. 16, No. 2, 2007, pp. 104-112.
Gee, Travis, "The Concept of ‘Gravity’ in Meta-Analysis," Counselling, Psychotherapy, and Health, Vol. 1, No. 1, 2005, pp. 52-75.

The meta-analysis software referenced in this column is Comprehensive Meta-Analysis, version 2.0, 2005. More information can be found at http://meta-analysis.com/index.html.

I. Elaine Allen is professor of statistics and entrepreneurship at Babson College in Wellesley, MA. She earned a doctorate in statistics from Cornell University in Ithaca, NY. Allen is a member of ASQ. Christopher A. Seaman is a doctoral student in mathematics at the Graduate Center of City University of New York.
Computes the convex hull of a Geometry. The convex hull is the smallest convex Geometry that contains all the points in the input Geometry. Uses the Graham Scan algorithm.

getConvexHull()
Returns a Geometry that represents the convex hull of the input geometry. The returned geometry contains the minimal number of points needed to represent the convex hull. In particular, no more than two consecutive points will be collinear. The result is:
- if the convex hull contains 3 or more points, a Polygon;
- 2 points, a LineString;
- 1 point, a Point;
- 0 points, an empty GeometryCollection.
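For readers who want the documented behaviour in standalone form, here is an illustrative Python sketch using Andrew's monotone chain, a variant of the Graham scan; it is not this library's actual implementation:

```python
def cross(o, a, b):
    """z-component of (a - o) x (b - o); positive means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return hull vertices in counter-clockwise order.

    Collinear points are dropped (cross <= 0), so, as with
    getConvexHull(), no more than two consecutive returned points
    are collinear.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def half_hull(seq):
        hull = []
        for p in seq:
            # Pop while the last two hull points and p fail to turn left.
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull

    lower = half_hull(pts)
    upper = half_hull(reversed(pts))
    return lower[:-1] + upper[:-1]  # drop the duplicated endpoints
```

Degenerate inputs degrade gracefully, mirroring the documented return types: two distinct points yield a two-point segment (the LineString case), one point a single point, and an empty input an empty hull.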
Oracle9i Java Developer's Guide, Release 1 (9.0.1), Part Number A90209-01

API stands for Application Programming Interface. As applied to Java, an API is a well-defined set of classes and methods that furnish a specific set of functionality to the Java programmer. JDBC and SQLJ are APIs for accessing SQL data.

The set of single-byte, machine-independent instructions to which Java source code is compiled using the Java compiler.

The memory that the memory manager uses to allocate new objects.

The environment variable (or command line argument) that the JDK or JRE uses to specify the set of directory tree roots in which Java source, classes, and resources are located.

In a uniprocessor system, the current thread is interrupted by a higher priority thread or by some external event, and the system switches to a different thread. The choice of which thread to dispatch is usually made on a priority basis or based on how long a thread has been waiting.

The programmer places calls to the Thread.yield() method in locations in the code where it is appropriate to suspend execution so that other threads can run. This is quite error-prone because it is often difficult to assess the concurrent behavior of a program as it is being written.

Common Object Request Broker Architecture. Specified by the Object Management Group (OMG), CORBA provides a language-independent architecture for distributing object-oriented programming logic between logical and physical tiers in a network, connected through ORBs.

Generally, the Java packages delivered with the Sun Microsystems JDK, java.*. We also use this term to denote some

The conflict state where two or more synchronized Java objects depend on locking each other, but cannot, because they themselves are locked by the dependent object. For example, object A tries to lock object B while object B is trying to lock object A. This situation is difficult to debug, because a preemptive Java virtual machine can neither detect nor prevent deadlock.
Without deadlock detection, a deadlocked program simply hangs. The system saves the state of the currently executing thread, restores the state of the thread to be executed, and branches to the stored program counter for the new thread, effectively continuing the new thread as if it had not been interrupted. As used with JDBC, a layer of code that determines the low-level libraries employed to access SQL data and/or communicate across a network. The three JDBC drivers supported in Oracle9i JVM are: Thin, OCI, and KPRB. Enterprise JavaBeans. Oracle9i provides an implementation of the Enterprise JavaBeans 1.1 Specification. Within your session, you may invoke Java many times. Each time you do this, end-of-call occurs at the point at which Java code execution completes. The memory manager migrates static variables to session space at end-of-call. The popular name for the automatic storage reclamation facility provided by the Java virtual machine. Integrated Development Environment. A Java IDE runs on a client workstation, providing a graphical user interface for access to the Java class library and development tools. The platform-independent language that CORBA specifies for defining the interface to a CORBA component. You use a tool like idl2java to convert IDL to Java code. The term that Oracle9i uses to denote either Java source, binary, or resources when stored in the database. These three Java schema objects correspond to files under the JDK--.java, .class, or other files (such as .properties files) used in the JDK CLASSPATH. Java Compatibility Kit. The set of Java classes that test a Java virtual machine and Java compiler's compliance with the Java standard. JCK releases correspond to the Sun Microsystems JDK releases, although in the case of Oracle9i, only the Java classes and not the virtual machine, are identical to the Sun Microsystems JDK. Java Database Connectivity. The standard Java classes that provide vendor-independent access to databases.
The vendor-specific layer of JDBC that provides access to a particular database. Oracle provides three JDBC drivers--Thin, OCI, and KPRB. Java Development Kit. The Java virtual machine, together with the set of Java classes and tools that Sun Microsystems furnishes to support Java application and applet development. The JDK includes a Java compiler; the JRE does not. Java Language Specification. This specification defines the syntax and semantics of the Java language. Java Runtime Environment. The set of Java classes supporting a Java application or applet at runtime. The JRE classes are a subset of the JDK classes. A technique for initializing data, typically used in accessor methods. The technique checks to see if a field has been initialized (is non-null) before returning the initialized object to it. The overhead associated with the check is often small, especially in comparison to initializing a data structure that may never be accessed. You can employ this technique in conjunction with end-of-call processing to minimize session space overhead. An object is said to reference the objects held in its fields. This collection of objects forms an object graph. The memory manager actually migrates the object graphs held in static variables; that is, it migrates not only the objects held in static fields, but the objects that those objects reference, and so on. Oracle's scalable Java server platform, composed of the Java virtual machine running within the Oracle9i database server, the Java runtime environment and Oracle extensions, including the ORB and Enterprise JavaBeans implementation. Object Request Broker. An ORB is a program that executes on the server, receiving encoded messages from clients for execution by server-side objects and returning objects to the client. ORBs typically support different services that clients can use, such as a name service. 
The operating system preempts, or takes control away from a thread, under certain conditions, such as when another thread of higher priority is ready to run, or when an external interrupt occurs, or when the current thread waits on an I/O operation, such as a socket accept or a file read. Some Java virtual machines implement a type of round-robin preemption by preempting the current thread on certain virtual machine instructions, such as backward branches, method calls, or other changes in control flow. For a Java virtual machine that maps Java threads to actual operating system threads, the preemption takes place in the operating system kernel, outside the control of the virtual machine. Although this yields decent parallelism, it complicates garbage collection and other virtual machine activities. An address space and one or more threads. The memory that the memory manager uses to hold objects that survive past the end-of-call--those objects reachable from Java static variables within your session. Embedded SQL in Java. The standard that defines how SQL statements can be embedded in Java programs to access SQL data. A translator transforms the SQLJ programs to standard JDBC programs. In Java, the requirement that the class of each field and variable, and the return type of each method be explicitly declared. The hardware has multiple processors, and the operating system maps threads to different processors, depending on their load and availability. This assumes that the Java virtual machine maps OS threads to Java threads. This mechanism provides true concurrency among the threads, but can lead to subtle programming errors and deadlock conflicts on synchronized objects. Often used in discussion as the combination of the hardware, the operating system, and the Java virtual machine. An execution context consisting of a set of registers, a program counter, and a stack. A program that emulates the functionality of a traditional processor. 
A Java virtual machine must conform to the requirements of the Java Virtual Machine Specification.
Photograph by Raul Touzon, National Geographic Coral reefs are complex structures built by tiny organisms called coral polyps, which are kin to jellyfish and sea anemones. Polyps attach themselves to sunken rocks at the edges of islands or continents. Their limestone skeletons connect with one another in massive numbers to create first coral colonies, then larger reefs—Earth’s largest biological structures. A healthy coral reef can live for many thousands of years.
This is the first GIF of an atom, shot with a new quantum laser camera. You’re looking at the first ever direct observation and recording of an atom and its orbital structure. Hydrogen atoms make up 75% of the mass in the universe, but they’ve always been too small to actually see. A team of scientists held a hydrogen atom in a static field and shot a laser at it, causing it to shoot out electrons at a lens which magnified its wave pattern 20,000 times so a microscopic camera could see it. The images were shot by Aneta Stodolna and the team of geniuses at the Institute for Atomic and Molecular Physics, and published in their paper “Hydrogen Atoms under Magnification: Direct Observation” (http://physics.aps.org/featured...), helping confirm 30 years of theoretical predictions.
Since October 2010, after successfully completing the requisite reviews, the OCO-2 mission has been in implementation to meet the new launch date. OCO-2 will be based on the previously launched Orbiting Carbon Observatory (OCO) satellite and will carry a single instrument, consisting of three high-resolution grating spectrometers (instruments that measure properties of light within the electromagnetic spectrum). This instrument will obtain the most precise measurements of atmospheric CO2 ever made from space. The spacecraft, developed by Orbital Sciences Corp., will be based upon the LeoStar-2 architecture, which was also used on the successful Earth orbiting SORCE and GALEX missions. OCO-2 will fly in a near-polar orbit, thus enabling it to observe most of the Earth's surface at least once every sixteen days. Since the abundance of CO2 in the atmosphere varies with the time of day and season, OCO-2 measurements will record changes in CO2 over yearly and seasonal cycles within each year. To remove the effect of changes in CO2 abundances each day and discriminate between seasonal variations and long term changes, OCO-2 will acquire measurements in a sun-synchronous orbit. This means that OCO-2 will measure carbon dioxide over a given point on Earth’s surface at the same local mean solar time. The Observatory will fly with a series of other Earth orbiting satellites, known as the Earth Observing System Afternoon Constellation or the A-train. These satellites all cross the equator at approximately noontime, a few minutes apart from each other. This coordinated flight formation will enable researchers to correlate OCO-2 data with data acquired by other instruments onboard Earth observing spacecraft, such as the Atmospheric Infrared Sounder (AIRS) instrument, which flies on the Earth Observing System Aqua platform and the Tropospheric Emission Spectrometer (TES), which flies on the Earth Observing System Aura.
To provide the mission with additional flexibility, the Observatory will acquire data in three different measurement modes. In Nadir Mode, the instrument views the ground directly below the spacecraft. In Glint Mode, the instrument tracks near the location where sunlight is directly reflected on the Earth's surface. Glint Mode enhances the instrument's ability to acquire highly accurate measurements, particularly over the ocean. In Target Mode, the instrument views a specified surface target continuously as the satellite passes overhead. Target Mode provides the capability to collect a large number of measurements over sites where alternative ground-based and airborne instruments also measure atmospheric CO2. The OCO-2 Science Team will compare Target Mode measurements with those acquired by ground-based (e.g., from the Total Carbon Column Observing Network (TCCON)) and airborne instruments to calibrate the OCO-2 instrument and validate mission data.
Climate change is not an us-and-them problem. When the very planet is at stake, there can only be us-and-us.

By Ryan McGreal
Published October 07, 2005

The Paradise ice caves at Mount Rainier, shown here in 1982, melted away by fall 1991. The Nisqually glacier has drawn back nine-tenths of a mile since early in the last century. Photo Credit: Gilbert W. Arias/Seattle Post-Intelligencer

Earlier this year, Raise the Hammer first learned about the worldwide Kyoto World Cities 20/20 Challenge, in which cities commit to reducing overall GHG emissions by 20 percent over 20 months. Environment Hamilton is now taking the lead in starting a local, grassroots-based climate change action group for our city. The group is still in its earliest stages, but promises to raise the profile of a tremendous challenge that is too easily dismissed as either a fraud or a fait accompli. Hamilton desperately needs an organization that is willing to take the issue seriously, raise its profile in a media environment dedicated to preserving the status quo, and offer real, tangible steps that individuals and local government can take to meet the challenge head-on instead of waiting passively with our heads in the sand. The earth's climate is changing before our eyes. During 2003, the glaciers that crest the Swiss Alps retreated by record amounts. None of the glaciers remained stationary, and some retreated as much as 150 metres. The Swiss Academy of Natural Sciences explained, "These observations should not be associated directly with the extreme summer heat," and "the length of the glaciers reacts with a delay to the change in climate." The United States Northwest is also experiencing warmer average weather. Shorter ski seasons meet up with more and fiercer forest fires, worse flooding, and reduced water supplies across long, hot summers. Some glaciers have disappeared completely, and many others are shrinking by the year. At the same time, the Gulf Stream is weakening.
The Gulf Stream, a current that brings warm water from the tropical Atlantic up the North American east coast and across to northern Europe, is largely responsible for Britain being more habitable than Siberia. According to Peter Wadhams, an ocean physics professor at Cambridge University, the Arctic Sea ice is thinning and the columns of cold, dense water that sink 2,700 metres below sea level and interact with the Gulf Stream have stopped forming. This is threatening the Gulf Stream and could actually make Britain significantly cooler in coming years, with shorter growing seasons and higher demand for heating fuels. In the summer of 2005, scientists reported that the vast frozen peat bog of sub-Arctic western Siberia is rapidly thawing for the first time in 11,000 years. Frozen peat gradually absorbs and stores organic matter through a process called cryoturbation. As the permafrost thaws, it releases its pent-up organic matter as methane gas. Sergei Kirpotin of Tomsk State University in Siberia and Judith Marquand at Oxford University warn that billions of tonnes of methane gas will be released over the next few decades, doubling atmospheric levels of the potent greenhouse gas and accelerating the rise in mean temperatures over this century. Dr. Kirpotin called the thaw an "ecological landslide that is probably irreversible and is undoubtedly connected to climatic warming." Climate change scientists, long worried about such 'tipping points' where whole systems change abruptly instead of gradually, are already revising their predictions upwards. The thawing Siberian permafrost is joined by thawing North American permafrost. Traditionally, permafrost is coldest at the surface and gets warmer deeper under the ground, but permafrost in northern Alaska is now coldest halfway down, getting warmer as you approach the surface.
According to Vladimir Romanovsky, a geophysicist at Alaska University, permafrost is "like ready-use mix - just a little heat, and it will start cooking. I think it's just a time bomb, just waiting for a little warmer conditions." Climate change is a public relations nightmare. It's just not easy to get people riled up about complex atmospheric, oceanic, and geologic phenomena taking place on a global scale and at what appears to humans accustomed to short-term thinking to be a glacial pace (no pun intended). Put simply, people cannot observe climate. We can only observe weather, which is influenced as much by local and transient events as by large climatic forces. What we cannot observe becomes difficult to think about. The cognitive bias of humans is to concern ourselves with the immediate, the visual, and the visceral. Climate change is none of those things. Its myriad causes - including human activities - run into the billions, so individuals find it difficult either to assign or accept responsibility. Our difficulty in imagining the likely effects of climate change translates into a difficulty believing they could actually occur. Similarly, its effects, while predictable in a statistical sense of identifying changing broad patterns and relationships among huge sets of data points (for example, an average increase in the frequency of severe hurricanes due to the complex interplay of air and water temperatures and wind shear effects), cannot be applied so easily to specific events (for example, Hurricane Katrina, the formation of which may have been influenced by climate change, but which might well have formed anyway). In fact, because the long-term potential for climate change to devastate the carrying capacity of our planet is so horrible, the very act of trying to imagine what a changed global climate might be like throws many people into denial. 
Learning that a million square kilometres of frozen peat bog are thawing is paralyzing, not galvanizing. Climate-change denial has gone through four stages. First the fossil-fuel lobbyists told us that global warming was a myth. Then they agreed that it was happening, but insisted that it was a good thing: we could grow wine in the Pennines and take Mediterranean holidays in Skegness. Then they admitted that the bad effects outweighed the good ones, but claimed that climate change would cost more to tackle than to tolerate. Now they have reached stage four. They concede that climate change would be cheaper to address than to neglect, but maintain that it's now too late. This is their most persuasive argument. -- George Monbiot, September 20, 2005 Efforts to raise the profile of climate change have yielded very little in terms of actual changes to how North Americans live. The Canadian government hired polling firm Ipsos-Reid to gather feedback on Rick Mercer's One-Tonne Challenge campaign, and discovered that most people remembered the campaign but had no idea what it was about. The Day After Tomorrow, a disaster movie featuring a century of climate change crammed into a couple of apocalyptic days, was an embarrassment. It may actually have done more harm than good for persuading the public to take the issue more seriously, by making a mockery of the complex and subtle science involved in climate modeling. This comes after two decades of obfuscation and outright denial on the part of those industries most responsible for producing greenhouse gas (GHG) emissions: the auto industry, oil and gas companies, and other heavy industries. From the way the mainstream news media have generally reported it, climate change sounds like a controversial theory, hotly disputed by experts.
In fact, a review of 900 peer-reviewed papers on climate change published in 2004 turned up not one paper disputing the theory that climate change is a) occurring and b) at least partially human-caused. A recent US Senate dog and pony show on climate change had to rely on popular fiction author Michael Crichton to produce a dissenting voice, apparently because its ringmaster, Senator James Inhofe (R-Oklahoma), couldn't find any real scientists to support his belief that climate change is "the greatest hoax ever perpetrated on the American people". It should go without saying that a novelist with a background in science is not the same as a practicing scientist whose research is peer-reviewed. When those differences are obscured, the narrow, self-interested body of climate change deniers gain much more prominence than they deserve, and the public is left with a false sense that the issue is highly uncertain. That confusion, of course, promotes the status quo by default. As a result, a quarter of a century after scientists began publicly to worry about climate change, our lifestyles have actually gotten even more harmful. Finally, big business is starting to acknowledge that something must be done. In January, Lord Oxburgh, the chairman of Royal Dutch-Shell, insisted, "governments in developed countries need to introduce taxes, regulations or plans ... to increase the cost of emitting carbon dioxide." Jeffrey Immelt, CEO of General Electric, made headlines this spring when he demanded that governments give up on "voluntary standards" and mandate reductions. In Britain, environmental journalist George Monbiot threw down the gauntlet in a recent column, demanding that (wait for it) the British government follow the lead of big business and establish regulations that encourage better corporate behaviour. A week ago, I would have said that if it is too late, then one factor above all others is to blame: the chokehold that big business has on economic policy. ... 
But last Wednesday I discovered that it isn't quite that simple. At a conference organised by the Building Research Establishment, I witnessed an extraordinary thing: companies demanding tougher regulations - and the government refusing to grant them. A cynic might say that big business finally got what it always wanted: a hands-off government willing to let the market do its magic. Amazingly, the British Department of Trade and Industry responded that the rules businesses are demanding would be "an unwarranted intervention in the market". However, when even those corporations most likely to be affected by new rules limiting GHG emissions are demanding them, it's an important sign that our society is ready for a change. The Kyoto Accord doesn't go anywhere near far enough to reverse the human causes of climate change. Most scientists agree that humans would have to reduce our output of greenhouse gases by 70 to 80 percent to accomplish that - essentially, what we were producing at the start of the industrial revolution. However, Kyoto is a line in the sand; it's an official acknowledgement that the way we've been doing things has to end. It's also the first step in what should be an ongoing, progressive effort to continue rolling back GHG emissions over the coming years and decades. The biggest problem with Kyoto, aside from its modest targets, is its national focus. While 157 countries have signed the Kyoto Accord, no cities have done so. Since 70 percent of all people live in cities, and traffic accounts for over half of air pollution, cities have a unique role to play in meeting and eventually exceeding the Kyoto targets. In North America, city governments of both political parties across the United States have pledged to meet their Kyoto obligations and reduce their greenhouse gas emissions - despite the U.S.
federal government's rejection of the Kyoto Accord. A few cities like Portland, Oregon have proven that the so-called "trade-off" or "balance" between economic and ecological considerations is a sham. Portland has reduced its GHG emissions significantly during a period of robust economic growth and dramatic improvement in livability for its residents. Businesses in Portland love the high efficiency of environmental building and smart transit, while businesses in other cities are struggling under rising energy costs and demanding fuel subsidies. Unlike Portland, Hamilton has more than its fair share of automotive traffic. A physical layout that has been allowed to sprawl away from the downtown core for decades and a spending pattern that has constantly favoured roads and highways over transit have made much of Hamilton utterly car-dependent. Unfortunately, Hamilton City Council has taken little interest in making Hamilton a cleaner, more efficient city. Instead, Council has placed its hopes yet again on the 20th century model of "development": more highways, wider lanes, a huge investment in air travel (by far the most polluting mode per kilometre travelled), and more residential growth in sprawl areas far from the centre of town. City Council's decision to hire energy consultant Richard Gilbert to study the effects of rising oil prices is an encouraging sign that local government may be starting to take the large issues facing our long term plans seriously. Climate change and declining energy supplies both originate from the same wasteful practices, so our efforts to respond to both challenges will be similar. Environment Hamilton's new climate change group couldn't come at a better time.
Its success will lie in engaging and involving as many individuals and groups as possible - from all parts of our society and from every political background - to bring pressure on our local government to take the issue seriously, re-think its long-term planning strategy, and begin the task of transforming Hamilton from a part of the problem into a part of the solution. This is no longer an us-and-them problem. When the very planet is at stake, there can only be us-and-us.

Sources:
Lisa Stiffler and Robert McClure, "Our Warming World: Effects of climate change bode ill for Northwest", The Seattle Post-Intelligencer, November 13, 2003, http://seattlepi.nwsource.com/local/148043_warming13.html
Elizabeth Kolbert, "The Climate of Man Part I", The New Yorker, April 25, 2005 (no longer available online)
Jonathan Leake, "Britain faces big chill as ocean current slows", The Sunday Times (Britain), May 8, 2005, http://www.timesonline.co.uk/article/0,,2087-1602579,00.html
Marsha Walton, "Changes in Gulf Stream could chill Europe", CNN, May 10, 2005, http://www.cnn.com/2005/TECH/science/05/10/gulfstream/
Editorial, "Climate Signals", New York Times, May 19, 2005, http://www.nytimes.com/2005/05/19/opinion/19thu1.html?ex=1274155200&en=d7b124255621e19b&ei=5090&partner=rssuserland&emc=rss
Bill Curry, "The challenge no one understands", The Globe and Mail, July 7, 2005, Page A4 (no longer available online), http://www.theglobeandmail.com/servlet/ArticleNews/TPStory/LAC/20050707/TONNE07/TPNational/TopStories
Ian Sample, "Warming hits 'tipping point'", The Guardian, August 11, 2005, http://www.guardian.co.uk/climatechange/story/0,12374,1546824,00.html
George Monbiot, "It would seem that I was wrong about big business", The Guardian, September 20, 2005, http://www.guardian.co.uk/climatechange/story/0,12374,1574002,00.html
Jamie Wilson, "Novel take on global warming", The Guardian, September 29, 2005, http://books.guardian.co.uk/comment/story/0,16488,1580591,00.html
Life Versus the Volcanoes

The underwater volcanoes are part of a larger structure of tar deposits in the area, and although the volcanoes themselves are not active, oil has been bubbling steadily out of nearby seeps in the underground rock for thousands of years. Just sail some 10 miles offshore and the surface of the ocean has an oily sheen and smells, says Chris Reddy, a scientist at WHOI and co-author of a paper on the asphalt volcanoes that appeared in April's Nature Geoscience. Some 20,000 liters a day are released; about half the oil that enters the world's oceans comes from natural seeps like these. The volcanoes are dormant now, but at one time may have been an important regional flux of methane, a potent greenhouse gas. Finding natural sources of methane like these is critical to understanding how methane is released into the atmosphere. Valentine and his colleagues discovered the volcanoes on a diving expedition in the area. They were curious about some unusual sea floor topography they had noticed, and then sent an autonomous underwater vehicle to snap some photographs. What was revealed were seven domed volcanoes in all, each larger in area than a football field, the largest of which was taller than a six-story building. How do you miss these so close to a heavily populated coastline? Well, they were deep enough that diving expeditions never reached those peaks. The volcanoes are thought to be made entirely of asphalt, rooted deep in the subsurface. It's like a massive bumpy parking lot down there. Valentine says they're not a source of global warming today, although the surrounding seeps are adding a relatively small, ever-present source to the atmosphere. But geochemists are often thinking in terms of scale.
“In a longer term view, changes in emission from the subsurface may have had significant impacts on climate, but it would take many such features as we have found to make a global impact.” The volcano in Iceland is a reminder of how ultimately precarious our situation is here on Earth. There’s just no telling what the planet’s systems have in store for us. We build entire civilizations on the assumption of permanence. But in moments the ground — or skies — can start shifting. No known end. And of course the scariest part is that this kind of thing isn’t all that unusual in Earth’s past. Although Eyjafjallajökull is erupting in an inopportune spot, it’s not particularly large either. Now’s the time to bone up on past volcanic occurrences. We’ve all heard of the 1991 eruption of Mount Pinatubo in the Philippines — the largest eruption in living memory — which caused average global temperatures to drop by almost a degree Fahrenheit as a haze of sulfuric acid droplets prevented sunlight from reaching the Earth. Some 800 people died when houses collapsed under accumulated wet, hot ash, and of course there was untold damage to nearby communities, forests, and agriculture. How fast things change: these days, the volcano is a tourist destination. So what’s all this have to do with global warming? Well, volcanic eruptions temporarily (or not so temporarily) lead to global cooling, though, of course, they also cause quite a bit of destruction, so we don’t exactly want them to pop off. But in the long term, they are actually adding quite a bit of CO2 to the atmosphere. Think of them as balancing out the Great Carbon Cycle. But as Virginia Polytechnic Institute geologist Dewey McLean points out, despite all the carbon it spewed into the air, Eyjafjallajökull has, in an odd way, actually helped alleviate climate change.
A star in the Draco constellation was recently murdered by a black hole. Not just murdered, but shredded and swallowed -- a helpless star, by a perverse, serial-killing black hole! Now, the actual findings, as published in the peer-reviewed journal Nature, were just as exciting, but a little bit too thinky for us average readers. The real news was that astronomers at Johns Hopkins University in Baltimore, building upon the work of scientists from many other organizations, were able to finally observe the "rising emission" phase of "a luminous ultraviolet–optical flare from the nuclear region of an inactive galaxy." That is, they were smart (and lucky) enough to have their powerful, high-tech telescopes pointed in exactly the right direction at exactly the right time to witness, from start to finish, a fairly rare cosmic phenomenon: the helium core of a red giant star (previously stripped of its hydrogen envelope) disappearing into a black hole. In the past, we've detected these events only during the "falling emission" phase -- the disappointing Act II of this stellar drama. So, we've had to rely on theoretical mathematical models. Of course, we've got nothing against these mathematical models -- actually, they've proved to be pretty good up till now. But our newfound ability to observe the rising-emission phase of these events provides science with new data, allowing us to confirm previous theories about the composition of distant stars and the nature of black holes, and perhaps even to modify current thought. Which is pretty cool, when you think about it. But I guess not as titillating as "Giant Black Hole Shreds and Swallows Helpless Star." In other science news, the Moon is going to kill us on Saturday! According to USA Today, May 5 will feature a "supermoon" -- that is, a full Moon occurring at perigee, the Moon's closest approach to Earth. Could a supermoon have been the cause of the Titanic disaster? This random website reports, you decide. So, watch out! 
Some people think that supermoons cause earthquakes, volcanic eruptions, and dirt, grease and stains in your family wash. In reality, the evening of May 5 is simply an opportunity to view a big, bright full moon -- that night, the Moon will appear 14 percent larger (and 30 percent brighter, which I guess must be due to some kind of inverse-square luminosity law) than the full moon does on average. According to NASA, there is little reason to fear the supermoon. It happens about once per year, and evidence suggests it's harmless. My advice: Just enjoy it! Go out and dance naked around a bonfire! Or do whatever it is normal people do during a full Moon. (Although, to be safe, you may wish to cancel any transatlantic voyages on that date.) Black hole illustration credit: S. Gezari/Johns Hopkins University and J. Guillochon, UC Santa Cruz/NASA Brandon's Big Gay Blog
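For what it's worth, the two figures quoted above really are consistent with each other, given the inverse-square guess: apparent diameter scales as 1/distance, while brightness (flux) scales as 1/distance squared. A quick sanity check in plain Python, assuming only those two scaling laws:

```python
# Apparent diameter scales as 1/d; flux (brightness) scales as 1/d^2.
# So a full moon 14% larger in apparent diameter should be about
# 1.14**2 - 1 = 0.2996, i.e. roughly 30% brighter -- matching the
# "14 percent larger, 30 percent brighter" figures quoted above.
size_gain = 1.14                      # 14% larger apparent diameter
brightness_gain = size_gain ** 2 - 1  # fractional increase in flux
print(f"{brightness_gain * 100:.0f}% brighter")  # prints "30% brighter"
```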
Marilley, L., Hartwig, U.A. and Aragno, M. 1999. Influence of an elevated atmospheric CO2 content on soil and rhizosphere bacterial communities beneath Lolium perenne and Trifolium repens under field conditions. Microbial Ecology 38: 39-49.

What was done
A FACE experiment, located in Switzerland, began in 1993 to study the effects of atmospheric CO2 enrichment on monocultures of ryegrass (Lolium perenne) and white clover (Trifolium repens). After two years of differential CO2 fumigation, soil samples were taken beneath experimental plots and analyzed to determine the effects of elevated CO2 on bacterial populations and community structure.

What was learned
Elevated CO2 did not impact the total number of bacteria in the bulk soil beneath the swards of ryegrass or white clover. However, it tended to increase bacterial numbers in the rhizosphere, which consists of soil in closer proximity to plant roots than the bulk soil, beneath both species. Thus, it appears that enhanced nutrient exudation from plant roots, resulting from atmospheric CO2 enrichment, allowed greater bacterial populations to live within the rhizosphere. In addition, atmospheric CO2 enrichment altered the profile of bacterial communities in a plant species-dependent manner. In ryegrass, for example, elevated CO2 increased the dominance of Pseudomonas species, which enhance plant growth by many different mechanisms, while in white clover, it increased the dominance of Rhizobium species, which enhance plant growth by making atmospheric nitrogen available for plant use.

What it means
As the atmospheric CO2 concentration rises, it is likely that most plants will exhibit increases in photosynthesis and growth. As a consequence of these phenomena, greater amounts of organic carbon compounds should be input into the soil via root exudation and biomass turnover. Thus, bacterial numbers can be expected to increase as a result of this CO2-induced enhancement of soil carbon content.
Within the rhizosphere, it is likely that shifts in bacterial communities will occur in such a way as to optimize nutrient exchange between plants in a species-dependent manner. In the case of the leguminous white clover, for example, elevated CO2 favored a shift towards Rhizobium bacterial species, which likely increased their nitrogen-fixing activities and made more nitrogen available to support enhanced plant biomass production. In contrast, the non-leguminous ryegrass, which does not form symbiotic relationships with Rhizobium species, exhibited greater dependence upon Pseudomonas bacterial species to increase its acquisition of various soil minerals to support its CO2-induced growth enhancement. Thus, as the CO2 content of the air rises, it is likely that swards of white clover and ryegrass will both exhibit increased biomass production, due to CO2-induced shifts in bacterial populations that optimize nutrient acquisition for each plant species. Reviewed 1 February 2000
Here is the directory structure:

C:\dirA\dirB\dirC\A.java

When I try to compile using this command:

1: C:\> javac -cp dirA\dirB\dirC\A.java

it displays the error message "javac: no source files". But when I run this command:

2: C:\> javac -cp \dirA \dirA\dirB\dirC\A.java

it compiles successfully. What is the difference between the 1st command and the 2nd command? If I am inside dirA, i.e. at C:\dirA>, what should I give as the classpath variable to compile and execute A.java? What is a relative classpath, i.e. what is the meaning of -cp dirB;dirB\dirC? If I execute the command C:\dirA> javac -cp dirB;dirB\dirC\A.java, the error message is "javac: no source files". Why did I get this error message when the A.java file is inside dirC?
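The pattern behind both failing commands can be illustrated with a toy model (plain Python, emphatically not the real javac): the token immediately after -cp is consumed as the classpath value, and only the remaining tokens count as source files. When no space separates the classpath from the source path, the source path is swallowed into the classpath and nothing is left to compile.

```python
# Toy sketch (illustration only, NOT real javac argument parsing):
# "-cp" consumes the very next token as the classpath; every other
# token is treated as a source file to compile.
def parse_javac_args(args):
    classpath, sources = None, []
    it = iter(args)
    for tok in it:
        if tok == "-cp":
            classpath = next(it)   # -cp swallows the NEXT token
        else:
            sources.append(tok)    # everything else is a source file
    return classpath, sources

# Failing command: only two tokens, so the path becomes the classpath
# and the source list is empty -> "javac: no source files".
print(parse_javac_args(["-cp", r"dirA\dirB\dirC\A.java"]))
# Working command: classpath and source file are separate tokens.
print(parse_javac_args(["-cp", r"\dirA", r"\dirA\dirB\dirC\A.java"]))
```

The same logic explains the last command in the question: in "-cp dirB;dirB\dirC\A.java" there is no space before the source path, so the whole string "dirB;dirB\dirC\A.java" is taken as the (semicolon-separated) classpath and no source file remains.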
Appearance
The regal fritillary is one of temperate North America's most striking butterflies. Almost as large as the familiar monarch butterfly, it is instantly recognizable from above by its black-flecked reddish-orange forewings and blue-black hind wings. The wing undersides have a bold pattern of large, triangular silvery-white spots on a dark brown background. Females are slightly larger than males and have two rows of white spots atop the hind wings, whereas one of these rows is orange in males.

Habitat and Range
The species ranges across the northern half of the United States, from the Dakotas and Colorado east to Maine and Virginia. Its Minnesota range coincides with the historical extent of prairie and savanna, as far north as Polk County in the west and the Anoka Sand Plain in the east. It is widespread in the western part, where it can be common in some larger prairies. It breeds only in native prairie habitats—now scattered in remnants that amount to less than 1 percent of the historical extent. Upland prairies appear to be favored, but adults are frequently seen visiting flowers in wetland prairies as well.

Biology
Females lay eggs in late summer. The hatchling larvae hibernate in the duff without feeding until the following spring. Larvae complete their growth and pupate by June. Larvae feed only on violets, particularly prairie bird's-foot violet in Minnesota. Male adults begin to appear later in June; females appear a week or two later. Females delay egg laying until August and September, after most males have died. Adults feed on floral nectar of purple coneflower, milkweeds, thistles, and especially blazing stars.

Status
The regal fritillary has suffered a recent catastrophic decline in the eastern half of its historical range: It has vanished from most states from Michigan, Ohio, and Kentucky east to the Atlantic seaboard.
The reasons for this decline are not clear, though possible causes could include habitat fragmentation, as well as widespread use of insecticides to control mosquitoes and gypsy moths. The butterfly fares better in the western half of its range but is considered relatively secure only in Kansas. The regal fritillary is widespread in western Minnesota but is found in only a few localities in the eastern portion of the state. It is listed as a state species of special concern. In Minnesota's comprehensive wildlife conservation strategy, Tomorrow's Habitat for the Wild and Rare, the DNR identifies it as a wildlife species in greatest conservation need. Protection and proper management of remaining native prairies, especially the careful use of prescribed burning, are critical conservation needs for this beautiful butterfly.

DNR plant ecologist for Minnesota County Biological Survey

The regal fritillary occurs in 11 of the 25 ecological subsections highlighted in Tomorrow's Habitat for the Wild and Rare: An Action Plan for Minnesota Wildlife. It has been most frequently surveyed in prairies of the Minnesota River valley. In the 1890s, prairies covered almost 80 percent of this region. Today, more than 80 percent of the land cover has been converted into row crops. To read more about the region and its conservation priorities, visit www.dnr.state.mn.us/cwcs/subsection_profiles.html.
Key to the Laphria of Arkansas and the East

This key was created by modifying the key to the Laphria that appears in the 1975 paper "A Taxonomic Study of the Asilidae of Michigan" by Norman T. Baker and Roland L. Fischer. Species that do not occur here were removed, and species that possibly occur here or in adjacent states were worked in by characteristics. Several large species appear in the initial added portion. Some of the redescriptions from Bullington's Laphria page were helpful in placing some of the non-Michigan species and for characters in some of the larger robbers. Links from the species names are to Bullington's redescriptions. Note that there are likely several undescribed species among the larger species of Laphria sensu stricto that Bullington noted, which have not been formally published or named.

1 Fly with prominent death's-head pattern of dark on the dorsal thorax; orange waspy insect of the pine forests and coastal plain.
1' Fly without above pattern.
2 Large flies, 24 mm or more in length.
2' Smaller flies, many less than 20 mm.
3 Visible dorsum of all abdominal tergites except 5 entirely bare, some hairs present along extreme lateral margins of all tergites; in the male all of these lateral hairs are black; in the female some of the hairs on tergites 1-2 are yellow.
3' Dorsum of some or all tergites other than 5 with hair.
4 Tufts of hair in front of wings entirely pale.
4' Tufts of hair in front of wings black or mostly black.
4.5 Abdominal tergite 6 covered with red hair in female.
4.5' Abdominal tergite 6 yellow haired as well as 4 and 5.
5 Entire length of dorsal half of each mid-tibia with extremely conspicuous yellow hairs.
5' Mid-tibias black.
6 Light hairs very dirty yellow to brownish yellow; tufts of hair in front of halteres with some black; very large to huge (27-39 mm long).
6' Light hairs bright to dull yellow; tufts of hair in front of halteres entirely dull yellow; large to huge (23-35 mm long).
7 Abdomen always with considerable amounts of erect long black pile; pile of mesonotum always dense and erect; abdomen broadened beyond middle and generally ovate in males.
7' Abdomen devoid of pile or with yellow or golden pile and very little or no black pile, frequently appressed; mesonotum naked or with more or less appressed pile; abdomen almost always nearly parallel sided in males.
8 Hair on sides of first abdominal segment largely black.
8' Hair on sides of first abdominal segment largely yellow.
9 Front and middle legs and joint of metafemora and tibiae densely covered with yellow hair, or fore and mid-tibia yellow haired, or mid-tibia only yellow haired.
9' Legs largely black-haired or profemora alone with dense yellow hair.
10 Beard entirely yellow, or eye margin black and rest yellow.
10' Beard black on top third, yellow on lower 2/3.
11 Tuft of hairs in front of halteres mostly yellow.
11' Tuft of hairs in front of halteres black.
12 Beard entirely black or nearly so.
12' Beard entirely yellow, or yellow except lower margin black continuous with eye line.
13 Mystax entirely yellow.
13' Mystax all or largely black.
14 Lateral of tergite 2 and dorsum of tergites 3 and 4 always yellow.
14' Abdomen usually all black but sometimes with some yellow on 2, 3, 4.
15 Marginal scutellar bristles largely pale; disc of scutellum with black hair or entirely yellow hair.
15' Scutellar bristles and hair largely black.
16 Abdominal tergites 4 and 5 yellow; mesonotum entirely covered with long yellow hairs, two thirds of which are recumbent.
16' Abdominal tergites 4 and 5 black; mesonotum with pile appearing bright yellow from a distance, equal numbers of hairs erect or recumbent.
17 Scutellar vestiture black; black hairs far outnumber any lateral yellow hairs; pile of tergite 6 grayish to pale yellow.
17' Scutellar vestiture yellow, sometimes central smaller hairs black and lateral large hairs yellow.
18 Pile of tergite 6 grayish to pale yellow; tergites 1-4 with short black hairs; 10-16 mm insect.
18' Pile of tergite 6 black (as well as 5 and 7); tergites 1-4 yellow haired; over 22 mm long.
19 Pile of tergite 6 always black; dorsal abdominal pile of first three tergites black; tergites 2-7 entirely black haired.
19' Pile of tergite 6 all yellow; tergites 4 and 5 entirely covered with yellow hair.
19'' Pile of tergite 6 black; tergites 1-3 in both sexes long yellow haired as well as part of 4, the rest black.
20 Dorsum of abdomen with entirely black pile.
20' Dorsum of abdomen with black and yellow pile.
21 Pronotal bristles black; hairs on disc of scutellum conspicuous and yellow.
21' Pronotal bristles yellow; hairs on scutellum black and slanted 45 degrees back.
22 Abdominal tergites largely covered with reddish or yellowish pile.
22' Abdominal tergites nearly bare except for more or less dense hair laterally, or with only scattered fine whitish or yellowish pile on anterior margins (canis complex).
23 Femora and tibiae rather bright red-orange on all legs; abdomen with golden pile contrasting with the darker thorax.
23' Femora and tibiae not red.
24 Mesonotum uniformly covered with golden or reddish orange pile, no posterior triangle present on thorax.
24' Mesonotum with extensive black pile either covering dorsum or anteriorly at the sides, light pile of thorax concolorous with abdominal pile, or mesonotum yellow haired with the pale pile on the posterior thorax forming a narrow triangle.
25 Posterior pronotum black haired; beard and coxal hair usually white; abdominal tergite 7 of female mostly black haired.
25' Posterior pronotum with golden hair; beard and coxal hair yellow; abdominal tergite 7 of female mostly golden haired.
26 Abdominal tergite 6 with two well defined blunt apical processes; apical processes of the hypandrium convergent.
26' Abdominal tergite 6 with a single minute median process; apical processes of hypandrium divergent.
27 Hypopygium wider than tergite 6 when viewed from above; tergite 7 triangular, rugose.
27' Hypopygium not wider than tergite 6; tergite 7 not rugose.
28 Tip of hypandrium tapering and bluntly pointed; tergite 7 distinctly keeled.
28' Tip of hypandrium expanded and leaflike; tergite 7 with small bilobate process.
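The numbered couplets above work like a binary decision tree: each couplet offers two contrasting leads, and each lead either names a species or sends you on to another couplet. A minimal sketch of that mechanism in Python, using hypothetical couplets and placeholder species names (not the actual Laphria key):

```python
# Each couplet maps to (first lead, its destination, second lead, its
# destination); a destination is either another couplet number (int)
# or a final determination (string). Couplet text here is illustrative.
key = {
    1: ("prominent death's-head pattern on thorax", "species A",
        "no such pattern", 2),
    2: ("large fly, 24 mm or more", "species B",
        "smaller fly, under 20 mm", "species C"),
}

def run_key(key, first_lead_fits, start=1):
    """Walk the key; first_lead_fits[n] is True when the FIRST lead
    of couplet n matches the specimen in hand."""
    node = start
    while isinstance(node, int):
        _lead1, dest1, _lead2, dest2 = key[node]
        node = dest1 if first_lead_fits[node] else dest2
    return node

print(run_key(key, {1: False, 2: True}))  # prints "species B"
```

Using a real key works the same way: start at couplet 1, pick whichever lead fits the specimen, and follow its destination until you land on a name.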