Find the exact values of x, y and a satisfying the following system of equations:
1/(a+1) = a - 1
x + y = 2a
x = ay
Find the sum of the series. Photocopiers can reduce from A3 to A4 without distorting the image. Explore the relationships between different paper sizes that make
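The three-equation system above can be solved exactly; a worked sketch (covering only the system, since the rest of the problem statement is truncated):

```latex
% From the first equation (assuming a \neq -1):
\frac{1}{a+1} = a - 1
  \;\Longrightarrow\; 1 = (a-1)(a+1) = a^2 - 1
  \;\Longrightarrow\; a^2 = 2, \quad a = \pm\sqrt{2}.
% Substituting x = ay into x + y = 2a:
ay + y = 2a
  \;\Longrightarrow\; y = \frac{2a}{a+1}, \qquad
  x = ay = \frac{2a^2}{a+1} = \frac{4}{a+1}.
% For a = \sqrt{2}:  x = 4(\sqrt{2}-1), \; y = 4 - 2\sqrt{2};
% for a = -\sqrt{2}: x = -4(\sqrt{2}+1), \; y = 4 + 2\sqrt{2}.
```

In both cases x + y = 2a can be checked directly, confirming the solutions.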
A cool early Earth John W Valley, William H Peck, Elizabeth M King, Simon A Wilde April 2002 Geology v. 30; no. 4; p. 351-354 This landmark paper describes the analysis of the oldest-known earth materials, 4.4 billion year old zircon grains. Studies of these zircons suggest that some continental crust formed as early as 4.4 Ga, 160 m.y. after accretion of the Earth, and that surface temperatures were low enough for liquid water. The hypothesis of a cool early Earth suggests long intervals of relatively temperate surface conditions from 4.4 to 4.0 Ga that were conducive to liquid-water oceans and possibly life. Meteorite impacts during this period may have been less frequent than previously thought.
Density Altitude Calculator
2. Relative humidity in percent.
3. Corrected barometric pressure in inches of mercury. This is the barometric pressure as reported by most local radio and TV stations; it is corrected to sea level.
4. Your location altitude in feet.
Then click the calculate button.
Density altitude in feet: the density altitude is the altitude at which the density of the International Standard Atmosphere (ISA) is the same as the density of the air being evaluated.
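The ISA relationship described above can be sketched in code. The following is a dry-air approximation only (the class and method names are illustrative, and it omits the humidity correction such a calculator applies): it computes air density from pressure and temperature, then inverts the ISA density profile to find the matching altitude.

```java
// Dry-air density-altitude sketch based on the International Standard Atmosphere.
// Ignores humidity; a real calculator would use virtual temperature instead.
public class DensityAltitude {

    static final double R_DRY = 287.058;     // J/(kg·K), gas constant for dry air
    static final double RHO0 = 1.225;        // kg/m^3, ISA sea-level density
    static final double T0_OVER_L = 44330.8; // m, ISA temperature / lapse-rate scale

    // Air density from station pressure (Pa) and temperature (K): rho = P / (R * T)
    static double airDensity(double pressurePa, double tempK) {
        return pressurePa / (R_DRY * tempK);
    }

    // Altitude (m) at which the ISA has the given density.
    static double densityAltitudeM(double rho) {
        return T0_OVER_L * (1.0 - Math.pow(rho / RHO0, 0.234969));
    }

    public static void main(String[] args) {
        double rhoStd = airDensity(101_325.0, 288.15); // ISA sea-level conditions
        double rhoHot = airDensity(101_325.0, 303.15); // same pressure, 30 °C day
        System.out.printf("standard day: %.0f m%n", densityAltitudeM(rhoStd));
        System.out.printf("hot day:      %.0f m%n", densityAltitudeM(rhoHot));
    }
}
```

On a standard day the density altitude comes out near zero; at the same pressure but 30 °C it rises by several hundred meters, which is why hot days degrade aircraft performance.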
HOW DO BABY TURTLES SURVIVE WINTERS? Questions about how various animals survive winter come across my desk (or rather my computer screen) rather frequently. Every species living in the temperate zone has to cope in some way with winter cold. Birds fly south as winter approaches. Mammals add a layer of body fat when cold weather arrives. Trees lose their leaves before they freeze. Turtles, one of the most conspicuous animals in warm weather, have special ways to deal with winter. What happened to the turtles you saw basking on logs or sun-warmed rocks during spring, summer, and fall? They have disappeared. Where did they go, and why? Turtles are reptiles, so their surroundings determine their body temperature. At body temperatures of about 40 to 50 degrees F, most reptiles become sluggish, stop eating, and seek hiding places to get safely through the winter. Many aquatic turtles go into the bottom mud or under the bank, where the water is cold but does not freeze. An advantage reptiles have over most mammals is that their metabolism drops with their body temperature, meaning that they require less oxygen. Some turtles can stay underwater for days at a time without taking a breath, as long as the water stays cold. Recently hatched baby turtles have a different strategy. Turtles lay their eggs on land, usually by digging a hole in dirt or sand and then covering the nest. Most turtle eggs hatch in autumn, but the hatchlings often do not leave the nest until the following spring, a year or more after the eggs are laid. This phenomenon, known as overwintering in the nest, occurs worldwide among many different kinds of turtles. Overwintering in the nest may sound like a reasonable way for a helpless baby turtle in mild-wintered Alabama or Florida to pass its first cold spells and avoid predators. But what do baby turtles do in Canada, Michigan, and Minnesota, where painted turtle hatchlings are entombed only a few inches beneath the soil for the winter months?
Even in an underground nest, soil temperatures drop as low as 25 degrees F. Most animals deal with these extremely low winter temperatures by seeking a warmer place. Not so baby painted turtles. Some hatchling turtles are also believed to be capable of producing antifreeze compounds. Hatchling painted turtles exposed to subfreezing temperatures produce significantly higher levels of glucose in the blood than do those kept at normal temperatures. The glucose and other body products may function as a form of antifreeze, although how the process works is unknown. An even more important discovery is that some baby turtles can survive when more than half their internal body water freezes. The painted turtle is one of the highest vertebrate life forms known in which the freezing of body fluids is tolerated during hibernation. This does not mean that other animals are incapable of surviving such an assault, only that scientists have not yet documented the phenomenon. If you go for a walk around the edge of a lake this winter, consider that adult turtles are lying dormant beneath the lake's surface and that baby turtles may be on land beneath your feet. Both the adults and hatchlings have a good chance of enduring anything winter has to offer, in the South as well as the North. Overwintering underwater by adult reptiles is not particularly uncommon, but the phenomenon of hatchlings overwintering in the nest is unusual behavior. Both methods of surviving winter are indicative of just how versatile and endlessly fascinating the natural history of native wildlife is. And we still have much to learn about how even the most common of animals around us survive in the natural world. If you have an environmental question or comment, email
YELLOWSTONE VOLCANO OBSERVATORY MONTHLY UPDATE Monday, March 1, 2010 2:47 PM MST (Monday, March 1, 2010 21:47 UTC) YELLOWSTONE VOLCANO (CAVW #1205-01-) 44°25'48" N 110°40'12" W, Summit Elevation 9203 ft (2805 m) Current Volcano Alert Level: NORMAL Current Aviation Color Code: GREEN February 2010 Yellowstone Seismicity Summary: During the first half of February 2010, Yellowstone continued to experience a large swarm of earthquakes on the Madison Plateau, near the northwest margin of the Yellowstone Caldera. Retrospective analysis shows that the swarm began on January 15, 2010 and picked up in intensity on the 17th of January. As of February 25, a total of 1,809 earthquakes had been automatically located for the entire swarm, including 14 with a magnitude greater than 3.0; 136 with M2.0-2.9; 1,119 with M1.0-1.9; and 540 with M0.0-0.9. By the end of February 2010, earthquake activity at Yellowstone had returned to near-background levels. Within the entire Yellowstone National Park region, 244 earthquakes received review by a seismologist during February. The largest event was a magnitude 3.1 on Feb. 2 at 7:31 PM MST. This earthquake was part of the Madison Plateau swarm and was located 7 miles SSE of Madison Junction, WY. In addition a small earthquake swarm of 17 earthquakes occurred on February 13, and was located about 12 miles NE of West Yellowstone, MT, with magnitudes ranging from -0.2 to 1.6. Some of the smallest events from the Madison Plateau swarm remain to be reviewed by a seismologist, and so the 244 earthquake tally is provisional. Ground Deformation Summary: Continuous GPS data show that uplift of the Yellowstone Caldera has slowed significantly. Uplift rates for YVO GPS stations are less than 2.5 cm per year. The WLWY station, located in the northeastern part of the caldera, underwent a total of ~23 cm of uplift between mid-2004 and mid-2009. 
Its record can be found at: The general uplift and subsidence of the Yellowstone caldera is of scientific importance and will continue to be monitored closely by YVO staff. An article on the recent uplift episode at Yellowstone and discussion of long-term ground deformation at Yellowstone and elsewhere can be found at: http://volcanoes.usgs.gov/yvo/publications/2007/upsanddowns.php The Yellowstone Volcano Observatory (YVO) is a partnership of the U.S. Geological Survey (USGS), Yellowstone National Park, and University of Utah to strengthen the long-term monitoring of volcanic and earthquake unrest in the Yellowstone National Park region. Yellowstone is the site of the largest and most diverse collection of natural thermal features in the world and the first National Park. YVO is one of the five USGS Volcano Observatories that monitor volcanoes within the United States for science and public safety. Jacob Lowenstern, USGS Scientist-in-Charge, Yellowstone Volcano Observatory Robert Smith, University of Utah Coordinating Scientist, YVO Henry Heasler, Yellowstone National Park Coordinating Scientist, YVO
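The magnitude breakdown in the seismicity summary above is internally consistent; a quick arithmetic check (bin labels as given in the report):

```java
// Consistency check of the February 2010 Madison Plateau swarm breakdown.
public class SwarmTally {
    // M >= 3.0, M2.0-2.9, M1.0-1.9, M0.0-0.9
    static final int[] BINS = {14, 136, 1_119, 540};

    static int total() {
        int sum = 0;
        for (int b : BINS) sum += b;
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(total()); // 1809, matching the reported swarm total
    }
}
```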
Here's that 100,000 rpm flagellum: Polar Flagellar Motility of the Vibrionaceae Linda L. McCarter* Interestingly, this one has more flagellin genes than usual, and no single one is crucial: Flagellar filaments, which act as propellers, consist of self-assembling protein subunits (flagellin) arranged in a helix and forming a hollow tube (reviewed in reference 135). Subunits move down the hollow core and are polymerized at the tip of the flagellum. The V. parahaemolyticus polar organelle is a complex flagellum. There are six polar flagellin genes, organized in two loci (86, 122). The flagellins are similar to each other: FlaB and FlaA are 78% identical, FlaB and FlaC are 68% identical, FlaB and FlaD are 99% identical, FlaB and FlaF are 69% identical, and FlaB and FlaE are 50% identical. Despite the great protein similarity of FlaB and FlaD, the flagellins migrate differently on sodium dodecyl sulfate-polyacrylamide gel electrophoresis analysis, suggesting the possibility of posttranslational modification, e.g., glycosylation, which has been observed for flagellins of many bacteria including some spirochetes, Campylobacter species, and P. aeruginosa (27, 56, 171, 196), or phosphorylation, which has been detected for flagellins of P. aeruginosa (85). Analysis of the protein composition of purified flagella from wild-type strains and strains with mutations in flagellin genes suggests that all of the flagellins can be incorporated into the organelle and that FlaA, FlaB, and FlaD are the major subunits (121, 122; L. McCarter, unpublished data). Nothing is known with respect to their spatial arrangement in the flagellum. Loss of function of a single flagellin gene has little or no effect on swimming motility or flagellar structure (waveform or length). Thus, none of the six flagellin genes is essential for filament formation.
Deletion of the flaFBA or the flaCD genes also has little effect on motility, but the deletion of both loci (flaFBA flaCD) completely abolishes motility (122). Why are there six flagellin genes? It is not clear why the organism possesses such an extraordinary number of flagellins. The similarity of the gene products and the dispensability of the genes suggest that there are no special structural requirements, although the filament structure and function could be more complex and adapted to specific circumstances than our laboratory tests can reveal. Bacteria are known to modulate the antigenicity of their flagellar filaments by expression of different flagellin genes or by recombination and rearrangement of flagellin genes (reviewed in reference 190). Therefore, the capacity for immune system evasion in a host organism might account for some of the diversity. The multiplicity of flagellin genes suggests a significant reservoir for antigenic or phase variation. Although the sheath covers the filament and might be thought to provide a disguise, electron microscopy suggests that it may be fragile (2, 164). Thus, the sheath may not protect the filament against the immune response of a host. In some respects the endoflagella of the spirochetes are similar, for these flagella can also be viewed as being sheathed, polar organelles (reviewed in reference 103). The spirochete flagella are normally found in the periplasm, between the outer membrane and the cell cylinder and attached near each cell pole. Purification of periplasmic flagella demonstrated that these filaments are also complex, with generally two to four different flagellin proteins, encoded by distinct genes, as well as containing an accessory, nonflagellin protein. And, the flagellum is apparently sheathed by the cell membrane: The flagellum is sheathed by an apparent extension of the cell membrane (2). The mechanism of how a sheathed flagellum rotates has not been elucidated. 
Potentially, the flagellar filament could rotate within the sheath or the two could rotate as a unit (50). Little is known about the composition, formation, or function of flagellar sheaths, which are found in many bacteria, including marine Vibrio species, V. cholerae, B. bacteriovorus, and Helicobacter pylori (reviewed in reference 164). Evidence from these organisms suggests that the sheath contains both lipopolysaccharide and proteins and that it may exist as a stable membrane domain distinct from the outer membrane (42, 51, 58, 69, 144). The lipid content of the sheath of B. bacteriovorus is distinct from that of the outer membrane, and the sheath appears to be a highly fluid, symmetric bilayer (179). How the sheath is formed remains essentially uninvestigated. It has been postulated that the sheath forms concomitantly with the elongation of the flagellar filament. However, it is provocative to note that "tubules" or structures that appear to be empty sheaths lacking filament have been observed, which suggests the interesting possibility of uncoupling of the flagellar core and the sheath assembly (2). One of three major sheath proteins of V. alginolyticus has been characterized. Genetic and biochemical evidence suggests that it is a lipoprotein (52). Another flagellar sheath protein that is a lipoprotein is HpaA of H. pylori (76, 144). There is some controversy about the role and cellular location of HpaA. Although HpaA has also been reported to be a cell surface adhesin (45), other groups have localized HpaA to the cytoplasm (144) or the flagellar sheath (76), and no adherence defect for hpaA mutants to eukaryotic cell lines has been demonstrated (76, 144). Experiments with V. anguillarum suggest the sheath is a virulence organelle. Mutants of V. anguillarum that lack a major flagellar sheath antigen are avirulent, even though the initial stages of infection are unaffected (137). 
Biochemical analysis indicates that this particular sheath antigen is lipopolysaccharide. Thus, the sheath may be important for specific interactions with the environment. And, the presence of the sheath appears to make the loss of the distal cap (HAP2, here) non-fatal to flagellar function. Implications of the Sheath for Filament Assembly: HAP Mutant Phenotypes The sheath seems to provide some variation to the pathway of flagellar assembly. The hook-basal-body complex forms a channel through which proteins can be exported. In fact, not only are structural elements of the flagellum exported through this channel, but also regulatory molecules, e.g., the flagellar anti-sigma factor FlgM, are secreted (70, 99, 139). For bacteria with unsheathed flagella, such as E. coli, mutants with defects in genes encoding three hook-associated proteins (HAPs) are nonmotile and secrete unpolymerized flagellin subunits (66). HAP1 and HAP3 are the connector proteins that join the filament to the hook. Without the ability to adapt flagellin subunits to the hook, the flagellins are secreted. HAP2 is also called the distal capping protein because its role is one of a cap or plug. Without this cap, flagellins are also secreted. Since purified flagellin subunits can assemble in vitro (9) and since an S. enterica serovar Typhimurium mutant lacking HAP2 can polymerize filaments if the concentration of flagellin in the external medium is high (67), the role of HAP2 has been viewed as capping the flagellar tip to retard subunit secretion sufficiently to increase the local concentration of flagellin and promote self-assembly. Recent work in Salmonella, analyzing cap-filament interactions by cryoelectron microscopy, suggests a model for the cap as a flat, disklike pentameric structure that acts as a processive chaperone, preventing the loss of flagellin monomers and actively catalyzing folding and insertion into the filament (200). HAP mutants of V.
parahaemolyticus display different phenotypes (122). The most striking difference is in the phenotypes of mutants with defects in the gene encoding HAP2. These mutants are competent for filament assembly and motile. Figure 5 compares the flagella of the wild type and mutants with defects in the gene that encodes HAP2, and the flagella seem mostly indistinguishable. This suggests that in the absence of HAP2 but in the presence of the flagellar sheath, the local concentrations of the flagellin monomers remain high enough to allow polymerization of subunits. HAP1 and HAP3 mutants of V. parahaemolyticus are nonmotile and nonflagellated; however, they produce detached, severely truncated filaments encased in a membrane (122). The sheath seems to act to retain flagellin monomers and allow subunit assembly. The polymerized flagellins cannot be connected to the hook and bleb off as abortive filaments surrounded by a membrane vesicle. Similar filamentless mutants that produce flagellin-containing membrane vesicles were isolated in V. alginolyticus, although the genetic lesions in these strains were not determined (136). Thus, the sheath itself appears to be able to substitute for the cap. 1. There are some required genes not found or required in E. coli (e.g. FlhH, FlhG, apparently used in flagellar polar placement) 2. Chemotaxis varies significantly: Chemotaxis Genes and Gene Organization The complement of chemotaxis (che) genes and their organization are different from those in E. coli. One of these genes is cheV, which encodes a hybrid CheY/CheW. CheV has been found in a number of organisms, including Bacillus species, Campylobacter jejuni, H. pylori, P. aeruginosa, and V. cholerae. In B. subtilis, genetic analysis suggests that CheV and CheW are functionally redundant (155). Three unusual ORFs occur within the che gene cluster of region 2. ORF1 encodes a protein that resembles Soj of B. subtilis and other ATPase proteins involved in chromosome partitioning (152, 161). 
The other ORFs encode potential polypeptides that do not resemble proteins of known function. It seems curious that a Soj-like protein exists within a flagellar/chemotaxis operon, and this particular arrangement is conserved in other bacteria, e.g., P. aeruginosa, P. putida, and V. cholerae. Perhaps these novel ORFs will prove key for understanding the linkage between cell division and flagellation or development. It should be remarked that many of the V. parahaemolyticus che genes located in the flagellar clusters (Table 2) were discovered by mutant analysis; i.e., these genes produce defects in chemotaxis when mutated. The V. cholerae genome, as well as those of Pseudomonas species, indicates additional complexity with respect to a multiplicity of potential che genes. These guys have two additional mot genes: MotX and MotY proteins are the unique components of sodium-type flagellar systems, and their specific roles are not known. Loss of function of either motX or motY produces a paralyzed mutant completely defective for swimming but competent for flagellar assembly (123, 124, 141). Both proteins possess single membrane-spanning domains. The C terminus of MotY contains an extended domain that shows striking homology to a number of outer membrane proteins known to interact with peptidoglycan, e.g., OmpA and peptidoglycan-associated lipoproteins (124). The simplest hypothesis for the role of MotY is that the polar flagellar motor possesses two elements for anchoring the force generator. Perhaps extremely precise alignment of the stator with the rotor is required for a motor that spins as fast as 100,000 rpm. The role of MotX remains mysterious. It is known that MotX recruits MotY to the membrane when the proteins are coexpressed in E. coli (123). Furthermore, overexpression of MotX is lethal to E. coli in proportion to the external Na+ concentration, and lethality can be reversed by the presence of the sodium channel blocker amiloride.
This suggests that the proteins may somehow participate in or modulate Na+ translocation. For example, MotX could act to modify or specify ion channel activity. Thus, it may be that all four proteins comprise and specify the sodium-type torque-generating unit. However, there is no existing evidence that places MotX and MotY in the physical context of MotA and MotB. It is possible that MotX and MotY may play a more distinct role in the generation of sodium-driven motility; e.g., they could participate in some other aspect of the sodium cycle. Lotsa Che genes: The genomes of V. cholerae and P. aeruginosa are tantalizing with respect to chemotaxis, for they suggest additional complexity over that of E. coli due to a multiplicity of che-like genes. For example, five cheY-like genes can be identified in V. cholerae.
Why does this galaxy emit such spectacular jets? No one is sure, but it is likely related to an active supermassive black hole at its center. The galaxy at the center of Hercules A appears to be a relatively normal elliptical galaxy in visible light. When imaged in radio waves, however, tremendous jets over one million light-years long appear. Detailed analyses indicate that the central galaxy is actually over 1,000 times more massive than our Milky Way Galaxy, and the central black hole is nearly 1,000 times more massive than the black hole at our Milky Way's center. The picture shown is a visible-light image obtained by the Earth-orbiting Hubble Space Telescope, superposed with a radio image taken by the recently upgraded Very Large Array (VLA) of radio telescopes. The physics that creates the jets remains a topic of research, with a likely energy source being infalling matter swirling toward the central black hole. Image credit: S. Baum & R. Perley (NRAO/AUI/NSF), and the Hubble Heritage Team
I am really confused with toString. Why does this program give an output of 10? There is no way in the program that I am referring to the method toString, so why does it execute the toString method? Mysterious strings... Take a look at the <code>java.io.PrintStream</code> class, an instance of which is referenced by <code>System.out</code>. There are several overloaded <code>println()</code> methods, but only two of them take a non-primitive argument: <code>println( String )</code> and <code>println( Object )</code>. Also, remember that any expression involving a <code>String</code> evaluates to a <code>String</code>. If such an expression has primitives, they too are converted to strings using the respective wrapper classes. If the expression has object references, the <code>toString()</code> method on such object(s) gets called. Hence, when you pass an object reference to the <code>println()</code> method, the <code>println(Object)</code> version gets called. This method calls the <code>toString()</code> method on the object that is passed in, to convert the object to its string representation. If the (runtime type of the) object being passed does not implement the <code>toString()</code> method, the <code>Object.toString()</code> method gets called due to polymorphism. Now let's look at your example. You are calling the <code>println(Object)</code> version, and so at runtime the <code>toString()</code> method on your object, i.e., the <code>TestClass</code> object, gets called. This returns the string <code>"TestClass.s = " + this.s</code>. Hence the result. To make this code more interesting, just comment out the <code>toString()</code> method in your <code>TestClass</code> and run the program. Observe what happens! Ajith [This message has been edited by Ajith Kallambella (edited November 03, 2000).] Open Group Certified Distinguished IT Architect. Open Group Certified Master IT Architect. Sun Certified Architect (SCEA).
Hi Shalini, Nothing mysterious about strings. You have overridden the toString() method in your class and are asking why it is getting executed. This is really fun. By the way, do you know when the JVM will invoke the toString() method? When you pass an object reference inside System.out.println(). Here you have passed the reference "this" inside System.out.println(), so the runtime invoked the toString() you overrode in the class. Remove your toString() and you will see different results. Try it. solaiappan
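The mechanism discussed in this thread can be reconstructed in a minimal sketch (the class and field names are guesses at the original program, which is not shown in the thread):

```java
// Reconstruction of the scenario discussed above: passing an object to
// println(Object) makes PrintStream call toString() on it.
public class TestClass {
    int s = 10; // hypothetical field; the original program is not shown

    @Override
    public String toString() {
        return "TestClass.s = " + this.s; // concatenation converts the int
    }

    public static void main(String[] args) {
        TestClass t = new TestClass();
        // println(Object) is chosen by overload resolution at compile time;
        // at runtime it calls t.toString().
        System.out.println(t); // prints "TestClass.s = 10"
    }
}
```

Commenting out the toString() override makes the inherited Object.toString() run instead, printing the class name followed by an @ sign and a hexadecimal hash code.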
Marine geologists studying ocean temperatures at the 2500 meter deep spreading center of the Galapagos Rift acquired their first definitive evidence for the existence of hydrothermal vents in May, 1976. A deep-tow vehicle deployed less than 40 meters above the volcanic ridge detected a buoyant plume of vent discharge. The vehicle, which originated from Scripps Institution of Oceanography's Marine Physical Laboratory, was equipped with a Conductivity Temperature Density device (CTD), sampling bottles, acoustic sensors, and cameras. Water samples and physical measurements helped identify the hydrothermal plume. While taking these samples, the vehicle also photographed the ocean floor. To everyone's surprise, these photographs revealed a dense, productive benthic community living in close proximity to the hydrothermal vent. Relying on these data, in 1977 Peter Lonsdale published the first scientific paper describing hydrothermal vent life. The paper describes a community including mussels, anemones, and crabs, as well as evidence of burrowing activity. Lonsdale speculated that increased food resources near vent plumes allowed the animals to flourish, and suggested that searching for these unusual communities might serve as the simplest method of detecting hydrothermal vents. In 1977 geologists returned to the Galapagos Rift, diving in Alvin, and had the first chance to see hydrothermal vent communities with their own eyes. Two years later a group of biologists, chemists, and geologists came back to the rift with a film crew from National Geographic in tow. A National Geographic documentary and an article by Robert Ballard and Fredrick Grassle (1979) resulted from the 1979 expedition, introducing the general public to the exotic world found at hydrothermal vents. Alvin brought back pictures of giant tube worms, foot-long clams, galatheid crabs, and other unusual creatures.
Though the discovery of vent communities came as a total surprise, the discovery of hydrothermal vents was not unanticipated. Based on measurements of heat flux, marine geologists had hypothesized about the existence of vents years before any were directly located (Lonsdale, 1977). Additionally, the composition of seawater itself indicated that unexplained chemical processes were taking place in the world's oceans. Seawater manganese concentrations were too high, while magnesium concentrations were too low, to be accounted for by mineral contributions from river runoff alone. Chemical analysis of vent waters demonstrated that the circulation of water through the ocean crust decreased magnesium levels and increased manganese concentrations in ocean water. Hydrothermal vents occur at ocean spreading centers, that is, at locations where tectonic plates are pulling apart, creating new ocean floor as volcanic material rises to fill in the space between the plates. At spreading centers, ocean water infiltrates the ocean floor, where it is heated by molten crust; the resulting hydrothermal fluids then rise back to the surface of the sea floor. As hydrothermal fluids return to the ocean floor, they exit through narrow chimneys known as white or black smokers (Metaxas, 2003). Exiting fluids range in temperature from 300 to 400 degrees C and are rich in hydrogen sulfide, heavy metals, and other elements. The high temperatures of vent fluids cause them to be more buoyant than ocean water. As hydrothermal fluids escape into ocean waters, they form buoyant plumes that rapidly mix with ambient seawater. The plume rises until the fluids mix sufficiently to reach a state of neutral buoyancy. At this point the plume spreads out horizontally: ocean currents then dictate further mixing and movement (Van Dover, 2000). The benthic zone surrounding hydrothermal vents is an extremely variable environment. Within this area, vent fluids and oceanic waters mix.
These two water types possess very different physical and chemical properties. Consequently, temperature and chemical gradients form within vent environments. Small distances can make a big difference in the characteristics of the water experienced by organisms that live near vents. As currents shift, water properties can change dramatically in a matter of minutes or seconds. Toxic substances precipitate from vent fluids. Living in such a unique physical and chemical environment can require a considerable amount of adaptability. Dr. C's Remarkable Ocean World (Dr. William Chamberlin, Fullerton College, North Orange County Community College District, 1830 W. Romneya Drive, Anaheim, CA 92801-181)
Niels Bohr was a Danish scientist who won the Nobel Prize for physics in 1922 for his work on understanding the structure of atoms. Bohr introduced the theory that electrons travel in an orbital path around the atom's nucleus. He also theorized that light could have properties of both a wave and a particle at the same time. Niels Bohr was a professor at the University of Copenhagen, and his research led others to develop theories about quantum mechanics. Niels Bohr became director of the Institute of Theoretical Physics in 1920. In 1943, Niels Bohr escaped being arrested by the German police and fled to Sweden. He then traveled to the United States, where he worked at the Los Alamos Laboratory in New Mexico on the Manhattan Project. At the Los Alamos Laboratory Niels Bohr was known as Nicholas Baker for reasons of security. Niels Bohr was a consultant on the project and believed the technology should be shared between nations within the international scientific community in order to speed up the results. Bohr tried to convince Winston Churchill of this idea, and Churchill opposed it. After World War II, Niels Bohr returned to Copenhagen and advocated for the peaceful use of nuclear energy. Bohr was born on October 7, 1885 in Copenhagen, Denmark and died on November 18, 1962. The element bohrium was named in his honor. Rumor Has It … Rumor has it that Niels Bohr used to give his little brother atomic wedgies as a youth. The atomic wedgie ceremonies were thought to be the springboard for Bohr's later work in atomic energy. A totally falsified, unsubstantiated and nearly defamatory account reports that one of Niels Bohr's favorite pastimes was to shave his dog and teach it to walk backwards. Written by Kevin Lepton
Suppose a square is inside a right triangle as shown in the picture. If the square is 1 inch on each side and the hypotenuse is inches long, find the length of the vertical (longer) leg of the triangle. [Problem submitted by Robert Hart, LACC Associate Professor of Computer Science.]
<urn:uuid:36da62a9-5b37-495d-8050-29cd01b1a4b5>
3.296875
68
Tutorial
Science & Tech.
54.558333
You have said that you use THIOSULPHATE as a reducing agent. What does a reducing agent do? And what are the properties of it? Basically I'm asking for as much information as possible about the chemical.

A reducing agent is a substance that works together with an oxidizing agent. The type of reaction is an oxidation-reduction reaction. "Redox" reactions are reactions that involve the transfer of electrons from one substance to another. One substance provides electrons for the other, so an oxidation (loss of electrons) cannot occur without a reduction (gain of electrons). Thiosulfate is an ion that has to be bonded to something else. Probably the most common thiosulfate is sodium thiosulfate, also known to photographers as "hypo." All the details of sodium thiosulfate can be found in a book called the "Merck Index," which should be in the reference section of any good library. Hope this helps with your question.

York High School
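As a concrete illustration (my example, not part of the original answer): in the classic iodometric titration, thiosulfate is the reducing agent and iodine the oxidizing agent. Each of two thiosulfate ions gives up one electron, and the pair couples into a tetrathionate ion:

```latex
% iodine is reduced (gains electrons); thiosulfate is oxidized (loses them)
I_2 + 2\,S_2O_3^{2-} \longrightarrow 2\,I^- + S_4O_6^{2-}
```

Here iodine gains the electrons (reduction) while sulfur in the thiosulfate loses them (oxidation), exactly the paired transfer the answer describes.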
<urn:uuid:3cd1a1e3-5280-48ed-af32-c1a113ecb517>
3.40625
235
Q&A Forum
Science & Tech.
44.78623
Light Speed and Theories

Name: James L.

I recently read an article (the article can be found at http://www.nature.com/nsu/000601/000601-5.html) which stated that by using evanescent waves people were able to transmit a wave faster than c for very short distances, c of course being the speed of light in a vacuum. How do the currently accepted theories of space-time and relativity account for this?

Current theories of relativity would not account for this. What must be realized is that ALL physics theories are approximations of reality. Newton's work applies to "medium" situations: objects made of many, many molecules moving much slower than the speed of light. In that range of reality, Newtonian physics is a very good approximation. Large objects moving very fast (close to the speed of light) don't obey Newton's laws. Einstein came up with an approximation called relativity to work with such objects. Relativity, however, is not a good approximation for individual particles. They require quantum physics. There is not yet any set of theories that works for everything. There is still a great deal we don't understand. Reality is not based on the theories. Rather, the theories are based on reality.

Dr. Ken Mellendorf
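As a side note (my own illustration, not part of the answer): the evanescent field beyond a totally internally reflecting surface dies off exponentially, which is why such experiments only work over very short distances. A sketch of the 1/e decay length, assuming glass of index 1.5 and green light at 500 nm:

```python
import math

def evanescent_decay_length(wavelength_m: float, n: float, theta_deg: float) -> float:
    """1/e decay length of the evanescent field beyond a totally internally
    reflecting interface (denser medium of index n, into vacuum or air)."""
    s = n * math.sin(math.radians(theta_deg))
    if s <= 1.0:
        raise ValueError("below the critical angle: no evanescent wave")
    kappa = (2 * math.pi / wavelength_m) * math.sqrt(s * s - 1)  # decay constant
    return 1.0 / kappa

# roughly 0.2 micrometres for glass at 45 degrees: the field is only
# appreciable over sub-wavelength gaps
print(evanescent_decay_length(500e-9, 1.5, 45.0))
```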
<urn:uuid:adfd69ec-d547-41e8-8cb4-4bafd2886b46>
3.4375
306
Comment Section
Science & Tech.
59.125036
Superman relies on the same illusion to protect his identity: thanks to a pair of glasses, a change of clothes and a different hairstyle, nobody in Metropolis realizes that he and Clark Kent are the same person. Gaze at the angry face (left) for about 30 seconds while looking around the face from the eyes to the mouth, to the nose, back to the eyes, and so on. Then look at the center face. It looks scared, right? Now look at the scared face (right) for 30 seconds and then look at the center face again. This time it is angry! In reality, the center face is a 50–50 blend of an angry and a scared face. Created by Andrea Butler and her colleagues at the University of British Columbia, this illusion shows that our visual-processing system adapts to an unchanging facial expression by temporarily becoming less responsive to it. As a result, the other facial expression dominates when you view the blend. This adaptation occurs in higher-level brain circuits, rather than in the retina, because the illusion works even if you view the left or right image with one eye only and then look at the center image with your other (unadapted) eye.
<urn:uuid:dac05d35-b7cc-4235-ab0d-2ff98cfa528c>
3.59375
248
Knowledge Article
Science & Tech.
52.998413
Polar Bear Births Could Plummet With Climate Change Edmonton, Canada (SPX) Feb 16, 2011 University of Alberta researchers Peter Molnar, Andrew Derocher and Mark Lewis studied the reproductive ecology of polar bears in Hudson Bay and have linked declining litter sizes with loss of sea ice. The researchers say projected reductions in the number of newborn cubs is a significant threat to the western Hudson Bay polar-bear population, and if climate change continues unabated the viability of the species across much of the Arctic will be in question. Using data collected since the 1990s researchers looked at the changing length of time Hudson Bay is frozen over (the polar bear's hunting season) and the amount of energy pregnant females can store up before hibernation and birthing. An early spring-ice breakup reduces the hunting season making it difficult for pregnant females to even support themselves, let alone give birth to and raise cubs. Pregnant polar bears take to a maternity den for up to eight months and during this time no food is available. In the early 1990s, researchers estimate, 28 per cent of energy-deprived pregnant polar bears in the Hudson Bay region failed to have even a single cub. Researchers say energy deprived pregnant females will either not enter a maternity den or they will naturally abort the birth. Using mathematical modeling to estimate the energetic impacts of a shortened hunting season, the research team calculated the following scenarios: If spring break up in Hudson Bay comes one month earlier than in the 1990s, 40 to 73 per cent of pregnant female polar bears will not reproduce. If the ice breaks up two months earlier than in the 1990s, 55 to a full 100 per cent of all pregnant female polar bears in western Hudson Bay will not have a cub. The polar-bear population of western Hudson Bay is currently estimated to be around 900 which is down from 1,200 bears in the past decade. 
The number of polar bears across the Arctic is estimated to be between 20,000 and 25,000. The research team says because the polar bears of Hudson Bay are the most southerly population they are the first to be affected by the global-warming trend. However, they say that if temperatures across the Arctic continue to rise, much of the global population of polar bears will be at risk.
<urn:uuid:cca4cba1-896a-4d6d-a45f-8dc80606ba8b>
3
691
Truncated
Science & Tech.
34.049925
Boffins get black hole double-vision Messier 22 springs astro-physics surprise The Messier 22 globular star cluster has yielded a messier-than-expected observation: instead of the black hole astronomers expected to find in its centre, there are two. Conducted by the University of Southampton’s Dr Tom Maccarone, Michigan State University assistant professor Jay Strader and others, and using the NSF’s Karl G Jansky Very Large Array in New Mexico, the researchers analysed observations of Messier 22 hoping to find an intermediate-mass black hole. Instead, they found two smaller black holes in a pas de deux some distance away from the cluster’s centre. They estimate that each of the objects is about ten-to-twenty times the mass of the Sun. Part of the Milky Way, Messier 22 is about 10,000 light-years from Earth. "We were searching for one large black hole in the middle of the cluster, but instead found two smaller black holes a little way out from the centre," study co-author James Miller-Jones from the International Centre for Radio Astronomy Research says. The find is surprising because mathematical simulations predict that of the black holes that might have once formed in a globular star cluster, only one should survive – not because it will absorb the others, but because in their “gravitational dance”, the others should have been thrown out of the cluster. In this release, via AlphaGalileo, the researchers propose a couple of possible mechanisms to keep two black holes in the cluster. “First, the black holes themselves may gradually work to puff up the central parts of the cluster, reducing the density and thus the rate at which black holes eject each other through their gravitational dance. Alternatively, the cluster may not be as far along in the process of contracting as previously thought, again reducing the density of the core”, the announcement states. 
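For a sense of scale (my own aside, not from the article): a non-rotating black hole's event-horizon radius follows from the Schwarzschild formula r_s = 2GM/c², so the ten-to-twenty-solar-mass objects in Messier 22 are only tens of kilometres in radius:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius_m(mass_kg: float) -> float:
    """Event-horizon radius of a non-rotating black hole, in metres."""
    return 2 * G * mass_kg / C**2

# the Messier 22 pair are estimated at 10-20 solar masses each
print(schwarzschild_radius_m(10 * M_SUN) / 1000)  # ~29.5 km
```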
The discovery marks a couple of other firsts as well: it's the first time black holes have been observed in a globular star cluster within the Milky Way, and they're the first black holes discovered by radio rather than X-ray observations. ®

Interesting stuff. M22 is magnificent in my 8" scope, as my son (10) also agreed when we observed it from France this summer. I will tell him he has seen the home of a black-hole binary system. I bet he will be excited.

Sounds like black-hole boffinry just got messier. Mine's the one with the Disney movie in the pocket.

Re: I saw what you did there. I do enjoy a multi-hole cluster.......
<urn:uuid:eeba46e4-c6dd-4fc6-9e16-3133bdbe3726>
2.859375
570
Comment Section
Science & Tech.
54.499527
A solar array is made up of many individual solar cells that generate a low current. Each individual cell is made up of two layers of different semiconductor materials: a p-type layer (containing free holes) overlain by an n-type layer (containing free electrons). Where the two layers are joined, the free electrons in the n-type layer will move over to fill the holes in the p-type layer. As a result, the p-type layer (which had no net charge before) now has a negative charge due to the acquisition of electrons, and the n-type layer (which also had no net charge before) has a positive charge due to the loss of electrons.

When sunlight hits the cell, electrons are knocked free of the crystal lattice. The free electrons move toward the positively-charged n-type layer, creating a current. The current closes through a circuit and returning electrons fill previously evacuated holes. High-speed ions that slam into the solar array also liberate electrons, thus degrading the semiconductor material and limiting the on-orbit lifetime of the solar array. Solar panels cannot be shielded from the damaging radiation as is usually done with other sensitive electronics. Any shielding would prevent the sunlight from entering the solar cells and cut off the spacecraft's power source.
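The current-voltage behaviour that results from this junction is commonly summarized by the single-diode model, I = I_L − I_0(e^{V/(nV_T)} − 1). A minimal sketch; the parameter values below are illustrative assumptions, not from the text:

```python
import math

def cell_current(v: float, i_light: float = 3.0, i_sat: float = 1e-9,
                 n: float = 1.0, v_thermal: float = 0.02585) -> float:
    """Terminal current (A) of an illuminated cell in the single-diode model:
    the light-generated current minus the diode's forward leakage."""
    return i_light - i_sat * (math.exp(v / (n * v_thermal)) - 1.0)

print(cell_current(0.0))  # short circuit: the full photocurrent, 3.0 A
print(cell_current(0.5))  # the diode term grows exponentially with voltage
```

At zero volts the cell delivers its full light-generated current; as the terminal voltage rises, the forward-biased junction leaks an exponentially growing current back through the diode, eventually cancelling the photocurrent at open circuit.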
<urn:uuid:4246aea3-3097-458f-8898-891cc7a5cb93>
4.28125
257
Knowledge Article
Science & Tech.
42.061295
The children learned about earthquakes that happen under the ocean, but not the old story about California falling into the ocean! In fact, our earthquakes are proof in and of themselves that we cannot fall into the ocean. Read the description under Epicenter of how an earthquake happens. Earthquakes only happen when a fault is pushed together by huge forces in the earth to create friction. Without those huge forces pushing the fault together, the fault would slide smoothly and not produce earthquakes. The image of the Earth swallowing us up during an earthquake (so well-beloved of novelists and movie makers) was encouraged by pictures from the great Alaskan earthquake of Good Friday 1964 (magnitude 9.2, far greater than anything we will see in California) showing great fissures in the ground with cars falling into them. Those fissures were caused by liquefaction of the soil, which caused the soil to flow downhill, and in fact existed only within the top few meters of the ground.
<urn:uuid:0cff79ab-1910-4687-8ac6-7ac47ee1fe7f>
3.53125
203
Knowledge Article
Science & Tech.
47.602533
What Is the Difference Between Weather and Climate?

It's a sweltering midsummer day. "It must be global warming," mutters someone. But is it the Earth's changing climate that has made the day so warm? Or is it just the weather that is so unbearable?

Weather is the mix of events that happen each day in our atmosphere, including temperature, rainfall and humidity. Weather is not the same everywhere. Perhaps it is hot, dry and sunny today where you live, but in other parts of the world it is cloudy, raining or even snowing. Every day, weather events are recorded and predicted by meteorologists worldwide.

Climate in your place on the globe controls the weather where you live. Climate is the average weather pattern in a place over many years. So the climate of Antarctica is quite different from the climate of a tropical island. Hot summer days are quite typical of climates in many regions of the world, even without the effects of global warming.

Climates are changing because our Earth is warming, according to the research of scientists. Does this contribute to a warm summer day? It may; however, global climate change is actually much more complicated than that, because a change in temperature can cause changes in other weather elements such as clouds or precipitation.

Explore weather and climate! Click on links to the left to explore how dynamic forces within the atmosphere change our weather and climate. Learn what causes weather events and climate change and how NCAR scientists are exploring our atmosphere through scientific research.
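The weather/climate distinction can be made concrete with a toy calculation (my own illustration, using synthetic numbers): weather is a single day's reading, while climate is the average over decades:

```python
import math
import random

random.seed(0)

# synthetic 30 years of daily temperatures: a seasonal cycle plus
# day-to-day "weather" noise around a 15 C annual mean
days = 30 * 365
temps = [15 + 10 * math.sin(2 * math.pi * d / 365) + random.gauss(0, 3)
         for d in range(days)]

weather_today = temps[-1]            # one day's reading: that is weather
climate_normal = sum(temps) / days   # the 30-year average: that is climate

print(round(climate_normal, 1))      # close to the underlying 15 C mean
```

Any single day can sit far from the long-term mean without saying much about climate; it is the slow drift of the 30-year average that signals climate change.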
<urn:uuid:68cc0d5b-57b9-4412-bf5c-46b0755daedc>
3.421875
318
Knowledge Article
Science & Tech.
45.733485
Paleobiology of Marine Benthic Communities History of Regional Outbreaks of Coral Disease on Caribbean The staghorn coral Acropora cervicornis was a dominant space-occupier and an important framework constructor of Caribbean coral reefs during the Pleistocene and Holocene. Populations of Acropora were killed throughout the region in the 1980s and 1990s by outbreaks of white-band disease, a presumed bacterial infection. On lagoonal reefs in Belize, the demise of Acropora led to dominance by the lettuce coral Agaricia tenuifolia. This Acropora-to-Agaricia transition produced a clear signature in the subsurface sediments in Belize, and analysis of cores extracted from these reefs showed this sequence of events to be unique in at least the last 3,000 years. Similar Agaricia-dominated communities are common today in Bahía Almirante, a coastal lagoon at Bocas del Toro in northwestern Panama, more than 1,000 km from the Belizean reefs. Supported by grants from the National Science Foundation, the National Geographic Society and the Smithsonian Institution, we conducted an intensive program of coring to determine whether the Panamanian reefs have a similar history to those in Belize. In Panama we discovered that the transition was from branching Porites to Agaricia, rather than from Acropora to Agaricia. As in Belize, the shift was unprecedented in the past several thousand years, but the cause was different. In Panama, biogeochemical studies suggest that shifting patterns of land use, related primarily to agricultural development, were responsible for the transition to Agaricia. Our current efforts are focused on the Pacific coast of Panama, where a severe El Niño event in 1982–83 wiped out vast populations of the branching coral Pocillopora damicornis. We are coring reefs in the Gulf of Panama and the Gulf of Chiriquí to determine whether that coral mass mortality was a unique historical event, or whether similar coral kills occurred in the past. 
Global Climate Change and the Evolutionary Ecology of Mollusks in Antarctica Global climate change late in the Eocene epoch (about 35 million years ago) had an important influence in Antarctica. This was the beginning of the transition from a cool-temperate climate in Antarctica to the polar climate as we know it today. The cooling trend strongly influenced the structure of shallow-water, Antarctic marine communities, and these effects are still evident in the peculiar ecological relationships among species living in modern Antarctic communities. Cooling late in the Eocene reduced the abundance of fish and crabs, which in turn reduced skeleton-crushing predation on invertebrates. Reduced predation allowed dense populations of ophiuroids (brittlestars) and crinoids (sea lilies) to appear in shallow-water settings at the end of the Eocene. These low-predation communities appear as dense fossil echinoderm assemblages in the upper portion of the late Eocene La Meseta Formation on Seymour Island, off the Antarctic Peninsula. Today, dense ophiuroid and crinoid populations are common in shallow-water habitats in Antarctica but generally have been eliminated by predators from similar habitats at temperate and tropical latitudes; their persistence in Antarctica to this day is an important ecological legacy of climatic cooling in the Eocene. Although the influence of declining predation on Antarctic ophiuroids and crinoids is now well-documented, the effects of cooling on the more abundant mollusks have not been investigated. During field expeditions to Seymour Island in 2000–2003 we collected material to examine the evolutionary ecology of gastropods and bivalves in Antarctica during the late Eocene. Along with colleagues from the University of Illinois, we are testing evolutionary hypotheses based on the predicted responses of mollusks to declining temperature and changing levels of predation. 
Seymour Island contains the only fossil outcrops readily accessible in Antarctica from this crucial period in Earth history. The La Meseta Formation on Seymour Island thus provides a unique opportunity to learn how climate change affected Antarctic marine communities. In practical terms, global climate change is warming the waters around the Antarctic Peninsula. Recent ecological evidence suggests that skeleton-crushing predators are in the process of reinvading subtidal habitats, which is cause for concern. Understanding the response of the La Meseta faunas to global cooling in the late Eocene will provide direct insight into the rapidly changing structure of modern benthic communities in Antarctica.

Macroecology Applied to Management of Coral Reefs
Can small-scale biological and physical processes revealed by experimental studies be reflected in long-term, regional dynamics? Supported by grants from NOAA's Sanctuaries and Reserves Division, we are conducting a long-term, biogeographic-scale program to track corals, sponges, algae and other sessile organisms in fully-protected zones (FPZs) and on reference reefs within the Florida Keys National Marine Sanctuary. Video and photographic records enable us to detect changes in coral cover, diversity, and recruitment success, and to determine the contributions of large- and small-scale disturbances to those changes. We are especially interested in the landscape- to regional-scale predictors of coral diversity and in the changeover from coral-dominated to algal-dominated reef communities. These topics are of special concern not only to ecologists, but to managers and

Ecosystem Development in Restored Salt Marshes in Alabama
Salt marshes provide ecosystem services that include critical habitat for commercially important crustaceans and fish, and energy export to adjacent estuarine habitats.
The goal of most marsh restoration efforts along the Atlantic and Gulf coasts has been to replant the smooth cordgrass, Spartina alterniflora, and monitor its subsequent re-establishment. The assumption has been that some approximation of natural ecosystem function will follow the provision of structure at the water's edge. This assumption generally has not been corroborated. The blue crab, Callinectes sapidus, is the basis of important commercial fisheries along the Gulf Coast, and it is the keystone predator of salt marshes in the southeastern United States. Although Callinectes and many of their mobile prey species rapidly colonize created/restored marsh habitats, it is not clear whether natural or near-natural trophic relationships become established. Likewise, it is unclear when restored marshes begin to provide significant prey resources to support Callinectes populations. The marsh periwinkle, Littoraria irrorata, is an abundant and conspicuous herbivore in Spartina marshes along the Gulf Coast. Callinectes is a generalist predator, and it is the primary predator of Littoraria. By controlling Littoraria populations, Callinectes prevents cascading ecosystem effects, which under some conditions include overgrazing and the loss of Spartina. The long-term success of restoration efforts thus depends in large part on the establishment and maintenance of trophic linkages such as this Callinectes-Littoraria interaction. The goal of this study is to compare the degree of ecosystem development in restored salt marshes of varying ages to the state of nearby reference marshes. We are assessing community structure as faunal abundance, biomass and diversity, using flume traps and pit traps for the mobile epifauna, and sediment cores for the infauna. We are measuring energy flux through analysis of crab gut contents and fecal material.
Predator-prey dynamics are being examined through a set of proven parameters of predation, which have been validated and positively correlated in pilot studies: (1) attacks on Littoraria in tethering experiments; (2) sublethal shell repair in Littoraria populations; (3) induced morphological defenses of Littoraria shells; and (4) the abundance of Callinectes. Physical/biological structure - i.e., Spartina density - will be measured, along with densities of Littoraria, using standard quadrat survey methods. We are using these densities in tandem with Callinectes densities to determine how Spartina itself influences predation on Littoraria. We have just begun this study, so results are not available at this time.

Moody, R. M. and R. B. Aronson. 2007. Trophic heterogeneity in salt marshes of the northern Gulf of Mexico. Marine Ecology Progress Series 331:49-65.
Aronson, R. B. and W. F. Precht. 2006. Conservation, precaution, and Caribbean reefs. Coral Reefs 25:441-450.
MacIntyre, I. G. and R. B. Aronson. 2006. Lithified and unlithified Mg-calcite precipitates in tropical reef environments. Journal of Sedimentary
Precht, W. F. and R. B. Aronson. 2006. Death and resurrection of Caribbean coral reefs: a paleoecological perspective. Pp. 40-77, In: I. Côté and J. Reynolds (Eds.), Coral Reef Conservation. Cambridge University Press, Cambridge.
Aronson, R. B., I. G. Macintyre, S. A. Lewis and N. L. Hilbun. 2005. Emergent zonation and geographic convergence of coral reefs. Ecology 86:2586-2600.
Aronson, R. B., I. G. Macintyre and W. F. Precht. 2005. Event preservation in lagoonal reef systems. Geology 33:717-720.
Aronson, R. B., W. F. Precht, T. J. T. Murdoch and M. L. Robbart. 2005. Long-term persistence of coral assemblages on the Flower Garden Banks, northwestern Gulf of Mexico: implications for science and management. Gulf of Mexico Science 23:84-94.
Precht, W. F. and R. B. Aronson. 2004. Climate flickers and range shifts of reef corals. Frontiers in Ecology and the Environment 2:307-314.
Aronson, R. B., I. G. Macintyre, C. M. Wapnick and M. W. O'Neill. 2004. Phase shifts, alternative states, and the unprecedented convergence of two reef systems. Ecology 85:1876-1891.
Wapnick, C. M., W. F. Precht and R. B. Aronson. 2004. Millennial-scale dynamics of staghorn coral at Discovery Bay, Jamaica. Ecology Letters 7:354-361.
Aronson, R. B., I. G. Macintyre, W. F. Precht, T. J. T. Murdoch and C. M. Wapnick. 2002. The expanding scale of species turnover events on coral reefs in Belize. Ecological Monographs 72:233-249.
Aronson, R. B., W. F. Precht, M. A. Toscano and K. H. Koltes. 2002. The 1998 bleaching event and its aftermath on a coral reef in Belize. Marine Biology 141:435-447.
Aronson, R. B. and W. F. Precht. 2001. White-band disease and the changing face of Caribbean coral reefs. Hydrobiologia
Aronson, R. B., K. L. Heck Jr. and J. F. Valentine. 2001. Measuring predation with tethering experiments. Marine Ecology Progress Series 214:311-312.
Aronson, R. B., W. F. Precht, I. G. Macintyre and T. J. T. Murdoch. 2000. Coral bleach-out in Belize. Nature 405:36.
Aronson, R. B. and W. F. Precht. 2000. Herbivory and algal dynamics on the coral reef at Discovery Bay, Jamaica. Limnology and Oceanography 45:251-255.
Murdoch, T. J. T. and R. B. Aronson. 1999. Scale-dependent spatial variability of coral assemblages along the Florida Reef Tract.
Aronson, R. B. and R. E. Plotnick. 1998. Scale-independent interpretations of macroevolutionary dynamics. Pages 430-450 in M. L. McKinney and J. A. Drake, eds. Biodiversity dynamics: turnover of populations, taxa and communities. Columbia University Press, New York.
Aronson, R. B., W. F. Precht and I. G. Macintyre. 1998. Succession and species replacement on a Holocene reef in the Belizean shelf lagoon. Coral Reefs 17:223-230.
Richardson, L. L., W. M. Goldberg, K. G. Kuta, R. B. Aronson, G. W. Smith, K. B. Ritchie, J. C. Halas, J. S. Feingold and S. L. Miller. 1998. Florida's mystery coral killer identified. Nature 392:557-558.
Aronson, R. B., D. B. Blake and T. Oji. 1997. Retrograde community structure in the late Eocene of Antarctica.
Aronson, R. B. and W. F. Precht. 1997. Stasis, biological disturbance, and community structure of a Holocene coral reef.
Aronson, R. B., P. J. Edmunds, W. F. Precht, D. W. Swanson and D. R. Levitan. 1994. Large-scale, long-term monitoring of Caribbean coral reefs: simple, quick, inexpensive techniques. Atoll Research Bulletin 421:1-19.

National Geographic Society (2005-2007); Land Use and Reef Development in Central America.
Alabama Center for Estuarine Studies (2006-2008); Impacts of Salt-Marsh Restoration on Ecosystem Function and Export to Estuarine Environments.
Alabama Department of Conservation and Natural Resources (2005-2007); Trophic Dynamics of a Created Salt Marsh in Coastal Alabama.
NOAA Coastal Ocean Program (2004-2006); Ecological Processes Driving Recovery of Coral Reefs in the Florida Keys.

Ryan Moody
<urn:uuid:ccde4f82-0c8b-49c8-8278-8ab16170ba61>
3.59375
3,327
Academic Writing
Science & Tech.
46.543571
A study released by NASA confirmed that 2005 was the warmest year on record, narrowly beating out 1998. That year a strong El Niño--a warm-water event in the eastern Pacific Ocean--added significant warmth to global temperatures. The new record was set without the help of an El Niño. This suggests that a very substantial warming trend is affecting the globe, and more "warmest years ever" will continue to occur in this decade--particularly if they are El Niño years.

Global warming since the middle 1970s is now about 0.6°C (1°F). Total warming in the past century is about 0.8°C (1.4°F). The five warmest years over the last century have occurred in the last eight years. Reliable instrument records of global temperatures extend back to about 1880, but the consensus scientific view is that the current level of warmth has been unmatched for at least the past 125,000 years.

Figure 1: (Top) Global annual surface temperature relative to the 1951-1980 mean, based on surface air measurements at meteorological stations and ship and satellite measurements of sea surface temperature. The blue segments represent the uncertainty of the measurements at the 95% level. (Bottom) Temperature anomaly for the 2005 calendar year. Image credit: NASA Goddard.

The plot of 2005 temperature anomalies shows that virtually all land areas across the globe were warmer than average in 2005. More warming was observed in the Northern Hemisphere than the Southern Hemisphere, and the U.S. had its 13th warmest year on record. The Arctic had the most warming, helping make the extent of summer ice coverage over the Arctic Ocean in 2005 the lowest ever measured. It's sobering to note that even the Antarctic showed a net warming for 2005. The Antarctic had been the only land area on the globe to have cooler-than-average temperatures over the past decade.
If 2005 signals an end to this Antarctic cooling trend, we can expect a higher rate of global sea level rise in coming years as Antarctic melting increases.
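The anomalies in Figure 1 are temperatures measured relative to a base-period mean. A minimal sketch of that bookkeeping, using made-up yearly values rather than the NASA data:

```python
# temperature anomaly = yearly value minus the mean of a fixed base period
yearly = {1951: 13.9, 1960: 13.95, 1980: 14.05, 1998: 14.55, 2005: 14.6}
base_years = [y for y in yearly if 1951 <= y <= 1980]
baseline = sum(yearly[y] for y in base_years) / len(base_years)

anomalies = {y: round(t - baseline, 2) for y, t in yearly.items()}
print(anomalies[2005])  # 2005 sits well above the 1951-1980 baseline
```

The choice of base period shifts every anomaly by the same constant, so it changes the zero line but not the shape of the warming trend.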
<urn:uuid:a8795493-ce0c-4d4f-8b0e-655190c0ee3f>
3.9375
420
Knowledge Article
Science & Tech.
54.514521
The principle of mathematical induction is based on the fifth Peano Postulate, which states that: If a statement holds for the positive integer 1, and if, whenever it holds for a positive integer, it also holds for that integer's successor, then the statement holds for all positive integers. This property is quite fundamental and is usually taken as a non-provable postulate about the natural numbers. It can be used to prove that a proposition P(n) is true for all n ∈ N. So, the principle of mathematical induction is: If P(1) is true, and if the truth of P(r) implies the truth of P(r + 1), then P(n) is true for all n, n = 1, 2, .... Of course, one doesn't have to start with 1; one could start with another integer. As an example, say that we wanted to prove that the nth triangular number, 1 + 2 + 3 + ... + n, is equal to ½n(n + 1). We could first verify the base case P(1): 1 = ½(1)(1 + 1). Then, assuming P(r) holds, so that 1 + 2 + ... + r = ½r(r + 1), adding (r + 1) to both sides gives 1 + 2 + ... + r + (r + 1) = ½r(r + 1) + (r + 1) = ½(r + 1)(r + 2), which is exactly P(r + 1). By the principle of induction, the formula holds for all positive integers n. Mathematical induction should not be confused with inductive reasoning, which is something different. Sources used: Elementary Number Theory, Second Edition by Underwood Dudley.
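A quick empirical check of the triangular-number identity (my own sketch; checking finitely many cases is of course not a proof — that is what the induction argument supplies):

```python
def triangular(n: int) -> int:
    """The n-th triangular number, 1 + 2 + ... + n, summed directly."""
    return sum(range(1, n + 1))

# closed form established by the induction proof: n(n + 1) / 2
assert all(triangular(n) == n * (n + 1) // 2 for n in range(1, 1001))
print("formula verified for n = 1..1000")
```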
<urn:uuid:c744011f-2623-4636-b965-05549cd69548>
3.953125
255
Knowledge Article
Science & Tech.
66.777208
The universe is a system with broken symmetries. In the very early universe, the strong, weak, and electromagnetic forces were indistinguishable. At some point as things cooled off, the symmetry between these forces broke and the three forces went their separate ways to become the three very different forces that, along with gravity, operate in our universe.

Today, at the microscopic scale, standard symmetries are usually present between antimatter and matter (charge-conjugation invariance or "C-symmetry") and between the two directions of time (time-reversal invariance or "T-symmetry"). Matter and antimatter interactions are subject to the same forces and look the same. Fundamental interactions look the same when run forward or backwards in time.

But we know that in the early universe some unknown processes broke the C-symmetry and slightly favored the production of matter over antimatter, leading to an excess of matter over antimatter of about one part per billion. As the universe evolved and cooled, and after almost all of the matter-antimatter annihilation was over, we were left with the surviving matter residue: lots of protons and electrons and almost no antiprotons and positrons. That broken C-symmetry of the early universe has made possible our matter-based world, and indeed our existence.

Despite the time-reversal invariance or T-symmetry of most of the fundamental interactions at the microscopic scale, our universe presents us with a built-in "arrow of time" that is quite obvious but has unknown origins. Think about a movie that can be run either forward or backwards showing some event. If the movie shows the collision and interactions of fundamental particles, in almost all cases (see below) there are no clues as to whether the movie was running forwards or backwards. But think of a movie showing some macroscopic event, an egg hitting the floor or a high dive into a swimming pool.
The backward-running version would be quite obvious, and would seem unphysical and contrary to experience. Eggs do not gather their liquid parts, assemble a shell around them, and leap upward. Water waves do not converge in a swimming pool to propel a diver up into the air. The arrow of time, at the psychological level, is also obvious. We can remember the past but not the future. We can take actions that can change the future but not the past. The broken T-symmetry of the macroscopic world also makes our existence possible: evolution cannot happen in a time-symmetric world. In addition to these symmetries applying to charge and time, there is a third symmetry, the symmetry of space. Just as T-symmetry is concerned with the reversal of the time direction, parity invariance or "P-symmetry" is concerned with phenomena that may change or appear different when the three space coordinate axes are reversed. When you view an object in a mirror, the image you see has a reversed coordinate axis in the direction perpendicular to the plane of the mirror. This is roughly equivalent to reversing all three spatial directions. In both cases the letters on a page are reversed, clockwise rotations become counterclockwise, and right-handed screw threads become left-handed. Until the 1950s, all physicists assumed that parity was a good symmetry, that all physical processes looked the same in mirror image as they did when viewed directly. Then the first blow to symmetry preservation arrived. It was discovered that for the weak interaction, the physical force that can change neutrons to protons or vice versa in the radioactive beta-decay process, there was a massive violation of P-symmetry or parity invariance. Spin-oriented nuclei emitted electrons in a preferred direction. Neutrinos are always emitted with a left-handed (clockwise) spin if viewed from the front. One could watch a movie of a beta-decay process and tell whether or not the images had been mirror-reversed.
For the weak force, nature had an intrinsic "handedness". It was noted in studying violations of P-symmetry, however, that this lack of mirror symmetry was reversed for beta-decaying systems involving the emission of antimatter positrons instead of matter electrons. Antineutrinos are always emitted with a right-handed (counterclockwise) spin if viewed from the front. Therefore, it was assumed that even if P-symmetry was violated, CP-symmetry, involving simultaneously reversing the space axes and converting matter to antimatter, was preserved. (There is a general theorem in theoretical physics, the CPT theorem, requiring that the combined operation of C, P, and T must remain a good symmetry.) The second blow to symmetry preservation arrived in 1964, when Val Fitch and Jim Cronin discovered a violation of CP-symmetry in the decays of neutral K mesons (which are quark-antiquark combinations involving a strange quark) into pi mesons. This is equivalent to finding a preferred time direction in the microscopic world. A movie of a K meson decay process would show an observable difference if it was running backwards instead of forward. Recent studies of processes involving B mesons (quark-antiquark combinations involving a bottom quark) have shown similar CP-symmetry violations. The CP violations that have been observed in these systems are, however, too weak to explain matter dominance. While hinting at a preference for matter over antimatter, they are not strong enough to have produced the part-per-billion dominance of matter over antimatter in the early universe. The nature of the forces that produced that matter dominance remains one of the major unsolved mysteries of physics. However, we now have a way of re-creating the conditions of the early universe in the laboratory, using the Relativistic Heavy Ion Collider (RHIC) facility at Brookhaven National Laboratory.
The RHIC facility brings gold (and lighter) nuclei into collision at energies of up to 200 GeV per nucleon, producing a relativistic fireball that replicates conditions in the early universe at about one microsecond after the Big Bang. The temperatures reached in RHIC collisions are several trillion degrees Celsius, about 250,000 times hotter than the central temperature of our Sun. At such temperatures, a strongly interacting phase of nuclear matter, a quark-gluon plasma, is expected. Further, the highly charged nuclei passing each other in RHIC collisions with a slight offset can produce extremely intense magnetic fields that can reach strengths of up to about 10^15 tesla. These conditions make it possible to look for possible symmetry breaking in strong interactions operating in this new and unprecedented environment. A new analysis of STAR data has studied collisions between gold nuclei and between copper nuclei at collision energies of 200 GeV per nucleon. At this collision energy, the two nuclei are heading toward each other at 99.9957% of the speed of light, only 4.32 parts in 100,000 below light speed. Not all such collisions are head-on, but one can distinguish the offset or "centrality" of the colliding systems by counting the number of neutrons that were non-participants and went straight ahead after the collision. In this way, the collisions can be broken up into eight centrality groups ranging from head-on collisions to near misses. In offset collisions there is a tendency for more particles to be produced in the "reaction plane", which includes the beam and the collision offset, than in the direction perpendicular to it. Since thousands of particles are produced in a typical RHIC collision, finding the preferred emission plane gives a good estimate of the reaction plane of each collision, and each particle can be characterized in terms of the angle perpendicular to the beam that it makes with the reaction plane.
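The quoted beam speed can be checked with a few lines of special relativity. The sketch below is my own illustration, not a calculation from the column; it assumes symmetric collider kinematics (100 GeV per nucleon in each beam for a 200 GeV per-nucleon-pair collision) and an average nucleon rest energy of about 0.9315 GeV.

```python
import math

# How close to light speed a RHIC gold beam travels.
# Assumptions (not from the column): each beam carries half the quoted
# per-nucleon collision energy, and the nucleon rest energy is ~1 u.
SQRT_S_NN = 200.0    # collision energy per nucleon pair, GeV
M_NUCLEON = 0.9315   # average rest energy per bound nucleon, GeV

def beam_speed_fraction(sqrt_s_nn=SQRT_S_NN, m=M_NUCLEON):
    """Return v/c for one beam in a symmetric collider."""
    gamma = (sqrt_s_nn / 2.0) / m          # Lorentz factor of each beam
    return math.sqrt(1.0 - 1.0 / gamma**2)

v_over_c = beam_speed_fraction()
deficit = 1.0 - v_over_c                   # how far below light speed
print(f"v/c = {v_over_c:.6f}, deficit = {deficit:.2e}")
```

The deficit comes out near 4.3 parts in 100,000, in line with the figure quoted in the text.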
Because collision events have randomly oriented reaction planes and magnetic field directions, most of the potentially observable effects of a hypothetical local parity violation are averaged out. However, the STAR Collaboration has looked for an event-by-event signal in the form of two-particle correlations between the emission angles, with respect to the reaction plane, of particles of the same sign of electric charge. To eliminate issues of how accurately the reaction plane was determined, they have moved to three-particle correlations that replace the reaction plane angle by the emission angle of all the other particles observed in the collision. The results show an unambiguous correlation in the emission of pairs of particles of the same electric charge. There is no similar correlation between pairs of particles with opposite electric charge. The collisions studied prefer to emit same-charge particles in the same direction, which is a strong indication of a local violation of P-symmetry or parity. The effect is present in both gold-gold and copper-copper collisions but stronger in the latter, and it is strongest when the collision offset is about half a nuclear diameter. Theoretical collision calculations that do not include any expectation of local parity violations predict only weak correlations having the opposite sign from those observed, and predict no difference in the correlations of same-charge and opposite-charge particle pairs. Thus, there is good evidence that local parity violations occur in RHIC collisions. As mentioned above, the theory that stimulated the STAR investigation of parity violations also suggested that there should be local violations of CP-symmetry created by the high-temperature gluon fields in the environment of RHIC collisions and in the conditions of the early universe. Can this be the missing key to understanding the dominance of matter over antimatter in our universe?
The sign of the possible local CP violations at STAR appears to be in the wrong direction and cannot, if taken at face value, explain the matter dominance of the universe. However, there are many questions raised by the initial observation that remain to be answered, and these should provide new insights into how such local symmetry violations occur, and into their implications for the universe as a whole. It is expected that the STAR results will be checked by other experiments, will be extended to lower energies at RHIC and to higher energies at the LHC, and will trigger more theoretical activity on the issues of local symmetry violation. We may be on the verge of answering one of the major questions about the nature of our universe: why is there more matter than antimatter? Watch this column for further results. Electronic reprints of about 150 "The Alternate View" columns by John G. Cramer are available online. References: I. Abelev et al., "Observation of charge-dependent azimuthal correlations and possible strong parity violations in heavy ion collisions", arXiv preprint (see also http://www.bnl.gov/bnlweb/pubaf/pr/PR_display.asp?prID=1073). E. Kharzeev, "Parity violation in hot QCD: why it can happen, and how to look for it", Physics Letters B633, 260-264 (2006), arXiv preprint. E. Kharzeev, L. D. McLerran, and H. J. Warringa, "The effects of topological charge change in heavy ion collisions: 'Event by event P and CP violation'", Nucl. Phys. A803, 227-253 (2008), arXiv preprint 0711.0950.
<urn:uuid:9b38ccdb-4f34-4778-bb8b-623177196425>
3.21875
2,515
Academic Writing
Science & Tech.
34.638
By Sarah Wagner, Institute for Collaborative Education (I.C.E.) Quick! Run! The zombie apocalypse is finally upon us. But it's much less scary when it's ants being zombified. In Brazil, four new types of fungi were found that take over ants' brains, turning them into zombies. The fungi, each of which attacks a specific type of ant, make their homes within the ants' bodies. Then the ant unwittingly does as the fungus commands. It leaves the nest (a very un-ant-like thing to do) and finds a leaf almost exactly 25 cm off the ground, where the humidity is 95%, perfect for the growth of the fungus. The ant is ordered to lock onto the main vein of the leaf... the ant's final resting place. At this point, the fungus grows over the ant's body and produces a stalk from the ant's head. This stalk then drops spores to the floor of the rain forest, infecting other ants as they pass. Two of the four types of fungus release a spore that, if it misses an ant, will grow a stalk on the forest floor to infect passing ants. But how does this happen? That's what Evans, Elliot, and Hughes are trying to figure out. They were the first to identify the fungi. So far, all anyone knows is that an unknown chemical is being released into the ants' brains to control their minds. However, no one may ever know how these fungi work, because of global warming: the particular environment the fungi need to grow is becoming scarcer. Not only does this mean that there will be an unanswered mystery, but also a possible spike in ant population... and who wants more ants? The earliest evidence of these zombie-making fungi was found in Germany, in a fossilized leaf that is 48 million years old. The fossilized leaf is scarred with a dumbbell shape created when a zombie ant gripped onto the leaf with its mandibles. The fungi managed to survive 48 million years of change, but will they be able to adapt to the current changes in the environment?
Who knows, but let's just hope this fungus doesn't develop a taste for humans. Sarah Wagner is completing her senior year internship at TalkingScience. She attends the Institute for Collaborative Education (I.C.E.) an alternative high school in Manhattan, NY. She loves the constant change of scientific discoveries, the unknown that is continually being redefined around us.
<urn:uuid:9f628977-519d-40e3-add3-aac409cf5e93>
3.171875
514
Personal Blog
Science & Tech.
67.532219
Chandra X-Ray Observatory The third of NASA's Great Observatories for Space Astrophysics The electromagnetic spectrum extends from radio waves to gamma rays, from very low frequencies to extremely high frequencies. Our ability to tune in the more exotic radio waves has grown in recent decades. In fact, it's only been in the 20th Century that we have used any radio, starting at the long-wave end of the spectrum. Today's AM broadcast stations transmit signals in the medium-wave portion of the spectrum. FM music stations use VHF transmitters. TV stations use VHF and UHF. Cooking ovens operate at microwave frequencies, as do police speed-trap radar transmitters and receivers. All of these frequencies are part of the electromagnetic, or energy, spectrum. The visible light that we receive with our eyes is flanked in the spectrum by infrared and ultraviolet light we can't see. We do, however, have the capability to capture infrared light, ultraviolet light and X-rays on photographic film. Deep Space Sensors. Only recently have we been able to make radio receivers and sensors (spectrometers), covering the UHF to gamma-ray part of the electromagnetic spectrum, small enough and sensitive enough to send to space as part of orbiting telescopes. NASA has developed its set of four Great Observatories In Space to extend mankind's knowledge of astronomy and life itself. Each observatory has its own specialized instruments to gather data from its assigned part of the electromagnetic spectrum. Chandra's Energy Spectrum. Chandra has given scientists their first view of some of the most violent and energetic activities in the Universe. Compared with the red-green-blue visible light, which carries an energy of about 2 electron-volts and is seen through optical telescopes, Chandra sees energy ranging from 50 to 10,000 eV. That permits astronomers to photograph extraordinary activities taking place at faraway places across the Universe. - Hubble sees visible light.
- Compton sees gamma rays. - Chandra detects x-rays. - The fourth, yet to be named commemoratively and launched, will see infrared energy. The NASA diagram below reveals the portion of the electromagnetic spectrum viewed by Chandra X-Ray Observatory:

Frequency and Wavelength of Energy in the Electromagnetic Spectrum

  Energy          Frequency in hertz         Wavelength in meters
  gamma-rays      10^20 - 10^24              < 10^-12 m
  x-rays          10^17 - 10^20              1 nm - 1 pm
  ultraviolet     10^15 - 10^17              400 nm - 1 nm
  visible         4 - 7.5 x 10^14            750 nm - 400 nm
  near-infrared   1 x 10^14 - 4 x 10^14      2.5 um - 750 nm
  infrared        10^13 - 10^14              25 um - 2.5 um
  microwaves      3 x 10^11 - 10^13          1 mm - 25 um
  radio waves     < 3 x 10^11                > 1 mm
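The energy and wavelength scales in the article are tied together by E = hc/λ. The short sketch below is my own illustration of that conversion (the constant hc ≈ 1239.84 eV·nm and the function name are assumptions of the example, not anything from the article):

```python
# Converting photon energy to wavelength via E[eV] * lambda[nm] = h*c,
# where h*c is approximately 1239.84 eV·nm.
HC_EV_NM = 1239.84

def energy_ev_to_wavelength_nm(energy_ev):
    """Photon wavelength in nanometres for a given energy in eV."""
    return HC_EV_NM / energy_ev

# Chandra's stated band, 50 eV to 10,000 eV:
soft_limit = energy_ev_to_wavelength_nm(50)      # about 24.8 nm
hard_limit = energy_ev_to_wavelength_nm(10_000)  # about 0.124 nm
visible = energy_ev_to_wavelength_nm(2)          # about 620 nm, red light
print(soft_limit, hard_limit, visible)
```

This recovers the article's numbers: a ~2 eV visible photon sits near 620 nm, while Chandra's 50-10,000 eV band spans roughly 25 nm down to 0.12 nm.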
<urn:uuid:61fcbd63-300e-448e-b300-fb8dd79284c7>
3.796875
642
Knowledge Article
Science & Tech.
48.508578
A Newton disc is a disc with segments in rainbow colours. When the disc is rotated, the colours fade to white; in this way Isaac Newton demonstrated that white light is a combination of the seven different colours found in a rainbow. A Newton disc can be created by painting a disc with the seven different colours. A combination of red, green and blue segments on the disc will yield the same result. The blending is due to the phenomenon called persistence of vision, by which an afterimage is thought to persist on the retina for approximately one twenty-fifth of a second. A Newton disc can easily be made at home using a piece of cardboard. The demonstration is based on the principles of dispersion of light.
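The whitening effect can be illustrated with a toy additive-mixing calculation. This is my own sketch, not anything from the article; it assumes the eye's time-average of the spinning red, green, and blue segments behaves like an additive sum of light, clipped to an 8-bit channel range:

```python
# Additive mixing of (r, g, b) colours: full-strength red, green and blue
# light sum channel-wise to white, which is what the eye perceives when
# the segments spin faster than persistence of vision can resolve.
def additive_mix(*colors):
    """Clip the channel-wise sum of (r, g, b) tuples to the 0-255 range."""
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)
print(additive_mix(red, green, blue))  # (255, 255, 255), i.e. white
```

A real painted disc reflects rather than emits light, so the result in practice is a light gray rather than pure white; the clipped sum is only an idealization.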
<urn:uuid:f8e00b7b-b041-4996-8e87-8223a844440a>
3.46875
278
Q&A Forum
Science & Tech.
54.6515
August 20, 2009 Just about every two years, the planet Mars makes its closest approach to Earth... around 36 million miles. That's when we pack our robotic emissaries off to meet it, timing their launches to spend the least effort to get there. Some fly around it... snapping pictures... Others land... to sample its surface... a few to crawl around its canyons and craters. These probes may pave the way for human explorers... and, perhaps, permanent settlers... who'll dig deeper still... in search of answers to our most pressing question: Did Mars develop far enough - and stay that way long enough - for life to arise? And, if so, does anything live now within Mars' dusty plains... beneath its ice caps... or maybe somewhere underground? Mars does not give up its secrets easily... it's almost as if the little planet is embarrassed. Over a century ago, a few observers thought they saw clues that Mars was inhabited. In 1877, the Italian astronomer Giovanni Schiaparelli noted markings... which he saw as a latticework of lines. He called them "canali" in Italian... meaning nothing more than "shallow channels". The American astronomer Percival Lowell could not resist the lure of these channels. He saw Schiaparelli's channels as artificial canals. He speculated that they carried melting snow from the poles to the dry interior. After all, on Earth, the Suez Canal had recently opened to ship traffic. The Panama Canal was beginning to be dug. The Martian canals, Lowell said, were built by a sophisticated society confronting an environmental catastrophe on the grandest of scales. The Martians, he thought, faced an urgent choice: move water across vast arid regions, or perish on an increasingly dry planet. As the 19th Century gave way to the 20th, Lowell took his case to the public in a series of three best-selling books. And the public responded with... questions. Who were these Martians, who had the means to remake an entire planet? Some offered schemes for making contact. Giant mirrors would flash greetings... Light beams...
Mental telepathy. Many astronomers grew deeply skeptical... but Lowell's vision of a harsh, yet Earth-like planet endured in the public's imagination. That vision was dealt a harsh blow in 1964. The Mariner IV spacecraft ventured in for a closer look... And what it saw looked like the Moon. Three more Mariners followed. They found huge dormant volcanoes... the deepest and longest canyon in the solar system... but not a trace of life, present or past. In the mid-1970's, two lander-orbiter robot teams, named Viking, took up residence at Mars. Maybe the Martians were just hiding, so the Vikings tested the soil for signs of life. But all the evidence from Viking told us... Mars is not only barren... but in fact hostile to life. It's no wonder. Martian air temperatures range from -20° Fahrenheit down to below -200°. It's also very, very dry. The Sahara Desert on Earth is a rainforest, by comparison. If all of the water vapor in Mars' thin atmosphere fell as snow, it would make a layer of frost no thicker than your fingernail. On Earth, impact craters erode over time from wind and water... and even volcanic activity. On Mars, they can linger for billions of years. But so can the imprint of riverbeds, lake bottoms and ocean shorelines... And the Viking orbiters saw a lot of them. It's not hard to believe that a great deal of water once flowed there. But where did all the water go? To find out, scientists needed to do real field-geology on Mars. They needed rovers... travelling robots with tools and instruments.
<urn:uuid:2fa5b583-24ef-4f43-8028-b9b21b7cd59d>
3.484375
847
Nonfiction Writing
Science & Tech.
69.440484
Based on their distinguished achievements in original research, three Caltech professors—Mike Brown, Ken Farley, and John Seinfeld—are among the 84 members and 21 foreign associates newly elected to the National Academy of Sciences. In 1969, an exploding fireball tore through the sky over Mexico, scattering thousands of pieces of meteorite across the state of Chihuahua. More than 40 years later, the Allende meteorite is still serving the scientific community as a rich source of information about the early stages of our solar system's evolution. Recently, scientists from the California Institute of Technology (Caltech) discovered a new mineral embedded in the space rock—one they believe to be among the oldest minerals formed in the solar system. For those who study earthquakes, one major challenge has been trying to understand all the physics of a fault—both during an earthquake and at times of "rest"—in order to know more about how a particular region may behave in the future. Now, researchers at Caltech have developed the first computer model of an earthquake-producing fault segment that reproduces, in a single physical framework, the available observations of both the fault's seismic (fast) and aseismic (slow) behavior. Hiroo Kanamori, the John E. and Hazel S. Smits Professor of Geophysics, Emeritus, at Caltech, has been elected one of 21 new foreign associates of the National Academy of Sciences. Eighty-four new members were also announced during the 149th annual meeting of the academy in Washington, D.C. By analyzing stalagmites, a team of Caltech researchers has determined that the climate signature in the tropics through four glacial cycles looks different in some ways and similar in others when compared to the climate signature at high latitudes. The results suggest that Earth's climate system might have two modes of responding to significant changes. The second-largest mass extinction in Earth's history coincided with a short but intense ice age. 
Although it has long been agreed that the so-called Late Ordovician mass extinction was related to climate change, exactly how the change produced the extinction has not been known. Now, a team led by Caltech scientists has determined that the majority of extinctions were caused by habitat loss due to falling sea levels and cooling of the tropical oceans. One of Caltech's oldest buildings, the Linde + Robinson Laboratory for Global Environmental Science, is the recipient of a 2012 Los Angeles Conservancy Preservation Award. The building, an astronomy lab built in 1932 that has undergone extensive renovations over the past two years, is the nation's first lab constructed in an existing historic building to earn LEED Platinum rating. The field of study of Andrew Thompson, assistant professor of environmental science and engineering at Caltech, presents not only theoretical challenges but logistical ones as well. That's because he is interested in the circulation and ecology of the Southern Ocean and the role it plays in global climate. The hostile environment of this area makes long-term research difficult, so he's part of a team that is seeking to monitor the region with autonomous underwater vehicles called gliders.
<urn:uuid:8e2bbc80-4984-4518-99aa-e7b7906fae85>
3.4375
636
Content Listing
Science & Tech.
30.152758
Chiral molecules have nonsuperimposable mirror images. As a general rule of thumb, chiral molecules must have one or more chiral centers -- that is, carbons that have four non-identical substituents around them. (There are, of course, exceptions to this rule.) A classic case of a simple chiral molecule is the following halogenated methane derivative: the central carbon atom has four non-identical substituents around it, making this carbon a chiral center, and as proof of its chirality the molecule has a non-superimposable mirror image. A fancy term used in textbooks and in the literature to describe molecules that are mirror images of each other is enantiomers, as in "the enantiomer of the left molecule above is the molecule on the right, its mirror image." To distinguish between enantiomers, chemists use the R and S classification system. Stereocenters (sometimes called chiral centers or stereogenic centers) are carbons that have four non-identical substituents on them, and each is designated as having either R or S stereochemistry. If a molecule has one stereocenter of R configuration, then in the mirror image of that molecule the stereocenter would be of S configuration, and vice versa. The assignment of R or S configuration can be carried out in three steps: 1. Order the substituents coming off the stereogenic carbon atom using the Cahn-Ingold-Prelog rules. 2. Rotate the molecule until the lowest-priority (number 4) substituent is in the back. 3. Draw a curve from the number 1 to the number 2 to the number 3 substituent. If the curve is clockwise, the stereocenter is of R configuration. If the curve is counterclockwise, the stereocenter is of S configuration. 1. Order the substituents: Order the substituents coming off the carbon stereocenter from 1 to 4, with 1 being the highest-priority substituent and 4 being the lowest. To assign priority using the Cahn-Ingold-Prelog rules, compare the first atoms of the substituents.
Give those substituents whose first atoms have higher atomic numbers a higher priority. In our example below, iodine would be 1, bromine 2, chlorine 3, and fluorine 4, because iodine has the highest atomic number (and is therefore highest priority) and fluorine has the lowest atomic number (and is therefore lowest priority). If the first atoms of two substituents happen to be identical, go to the next atoms and make the same comparison (e.g., an ethyl group would have higher priority than a methyl group). Assigning priorities for double bonds becomes a bit more challenging (this applies in the same fashion to carbonyls, C=O, and imines, C=N). A carbon with a double bond to another carbon is treated as a carbon singly bonded to two carbons, as shown below. This means that, for example, a vinyl substituent, R-CH=CH2, will have a higher priority than an ethyl substituent. Shown here is a vinyl substituent. By the Cahn-Ingold-Prelog rules for assigning R and S nomenclature, this vinyl group can be redrawn with each double-bond carbon singly bonded to an additional carbon bearing three "phantom ligands" that are ignored. The molecule can now be numbered as follows: 2. Rotate the molecule. There are a couple of different ways to go from the priority numbering to determining R or S configuration. One of the best methods taught by many undergraduate textbooks is to rotate the molecule until the number 4 priority substituent is in the back, as shown below. (Remember that dashed lines mean a bond going into the computer screen, and a solid wedged line indicates a bond coming out of the screen.) This takes some practice, especially if you are like most people and have difficulty visualizing molecules in three dimensions. 3. Draw the curve. A curve is then drawn from the 1 to the 2 to the 3 priority substituent, ignoring the 4th priority substituent (as shown below).
If that curve goes clockwise, then that stereocenter is of the R configuration. If the curve goes in a counterclockwise direction, then that stereocenter is of S configuration. In our example below, the curve goes counterclockwise, so the stereocenter is of S configuration. The name of the compound above, then, would be (S)-bromo, chloro, fluoro, iodomethane, and the name of its enantiomer, or its mirror image, would be (R)-bromo, chloro, fluoro, iodomethane. Click here for a short quiz on assigning R or S configuration.
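Step 1 of the procedure, comparing the first atoms of the substituents, can be sketched in a few lines of code. This is a hypothetical helper of my own, not a full Cahn-Ingold-Prelog implementation; ties between identical first atoms would require walking further out along each substituent:

```python
# Rank substituents by the atomic number of the first atom attached to
# the stereocenter (the comparison described in step 1 of the tutorial).
ATOMIC_NUMBER = {"H": 1, "C": 6, "N": 7, "O": 8, "F": 9,
                 "Cl": 17, "Br": 35, "I": 53}

def rank_substituents(first_atoms):
    """Return substituents ordered from priority 1 (highest) to 4 (lowest)."""
    return sorted(first_atoms, key=lambda a: ATOMIC_NUMBER[a], reverse=True)

# The halomethane example from the text: I > Br > Cl > F
print(rank_substituents(["F", "Cl", "Br", "I"]))
```

For the halomethane example this reproduces the ordering given above: iodine first, then bromine, chlorine, and fluorine.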
<urn:uuid:3a6d6276-ee28-4806-915b-5fcab1a033e6>
3.5625
1,107
Tutorial
Science & Tech.
38.862325
All materials consist of many small particles, the molecules: air, the earth, everything that surrounds us. The aggregation state of a material - solid, liquid or gaseous - depends upon how closely these molecules are bound together. Take water as an example. Water freezes below 0°C and thus becomes solid; the molecules move closer together and connect. Between 0°C (melting point) and 100°C (boiling point), water is liquid; the water molecules are in a somewhat looser grouping and move relative to each other. Above the boiling point, the water evaporates: it becomes gaseous. The molecules are now very far apart from each other and have almost no cohesion. They move about faster than in the liquid or solid state. This condition requires more energy, also called heat energy, which we measure with the temperature. As previously mentioned, this change in state changes the density, which depends on the "tightness" with which the molecules bind together. Solid bodies thus have a higher density than gaseous materials, because their molecules stay more closely together. In the atmosphere, most of the molecules are air molecules. Air protects the earth from the sun and from space. In the accompanying pictures one can see how the sunlight passes through the atmosphere and reaches the earth's surface. The molecules of the ozone layer filter out a part of the sunlight, which therefore does not arrive at the earth's surface. Most sunlight reaches the ground, however, since the density of air is small. At the earth's surface, the sunlight is to a large extent taken up (absorbed) by the ground. Thus, the ground warms up. A smaller part of the sunlight is thrown back (reflected) into the atmosphere. The air at ground level is now heated by the ground. Since the air molecules move apart when heated, the air expands and loses density - the heated air rises.
Since the pressure at the ground is higher than that in the upper atmosphere, ascending air can expand further with altitude. During this expansion, heat energy is lost: ascending warm air cools down. It hardly continues to warm up, due to its increased distance from the ground. Also, the reflected and partially long-wave radiation has less energy and does not heat the air as much as the incoming radiation. Furthermore, because of the expansion, the air contains fewer molecules per volume that could be met by the sunlight. Therefore, lower temperatures prevail in the higher layers of air (see the meteogram to the right). If it is cloudy, the water molecules in the cloud absorb a large part of the sunlight, and only a small part of it reaches the ground. Therefore, it is often cool under clouds, even with strong solar radiation. Air elevation by high temperature: if the solar radiation is very strong, the air warms up more rapidly at the ground and rises. It displaces cooler air from higher layers and expands at the same time, since the air pressure decreases with altitude. The rise of air temperature and the warming of higher atmospheric layers during the day are shown in an animated AirMeteogram to the right. If more and more air warms up and rises, a kind of "air bubble" develops, which can expand the whole troposphere (the lower layer of the atmosphere) upwards. This is one of the reasons that the troposphere is thicker in the Tropics than in the Polar zones. An inversion is a situation where the air temperature in higher layers is higher than in lower layers of the atmosphere. This is caused by the process of heating: the hot air rises and then stays above the cool air. For an inversion to happen, something must either prevent the heated air from cooling when it is in a higher layer, or something must cool the air below more rapidly than the air above.
One cause of such an inversion is the flow of cold air. An example of an inversion due to cold air flow can be seen in the sample meteogram (right): the flow of cold air from nearby mountains creates a cold air cushion above the valley bottom at 970 meters (green bar), at about 2000 meters altitude, above the surrounding terrain (brown line).
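The cooling of rising air described above can be put into rough numbers. The sketch below is my own illustration; it assumes the standard-atmosphere average lapse rate of about 6.5°C per kilometer, a figure that does not appear in the text, and it ignores inversions like the one just described:

```python
# Estimate air temperature aloft with a constant, assumed lapse rate of
# 6.5 °C per kilometer (0.0065 °C per meter). Real profiles vary, and
# inversions reverse the trend entirely.
LAPSE_RATE_C_PER_M = 0.0065

def temperature_at_altitude(t_ground_c, altitude_m,
                            lapse=LAPSE_RATE_C_PER_M):
    """Air temperature at a given height under the constant-lapse assumption."""
    return t_ground_c - lapse * altitude_m

# 20 °C at the valley floor, 2000 m higher up:
print(temperature_at_altitude(20.0, 2000.0))
```

With these assumed numbers, air at 20°C near the ground would be around 7°C two kilometers up, illustrating why lower temperatures prevail in higher layers.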
<urn:uuid:b38ce935-aa0e-4696-bc71-b58cc4b07725>
4.28125
863
Knowledge Article
Science & Tech.
53.495872
Many shrews have such uniformly grayish coats that separate species cannot easily be distinguished, but both the summer and winter coats of the Tundra Shrew are highly distinctive. Its summer pelage is tricolored: dark brown on the back, pale gray on the underparts, and brownish-gray or pale brown in between. Its longer winter fur is brown on the back and grayish on the sides and underparts. The Tundra Shrew is common, though limited in distribution, in Alaska and extreme northwestern Canada, where it inhabits hillsides and other well-drained areas with dense vegetation. Its food habits are not well known, but insects, earthworms, and parts of a small grass flower were found in the digestive tracts of some specimens. Embryo counts in a small sample of pregnant females averaged 10, which is high for shrews. Merriam, C.H., 1900. Proceedings of the Washington Academy of Science, 2:16. Mammal Species of the World
<urn:uuid:4bbf31c4-ce2c-4e7c-9a49-d2ec167d5e3b>
3.453125
209
Knowledge Article
Science & Tech.
56.06374
Sat Sep 13 10:38:03 BST 2008 by Pat Donnelly A distant cluster of galaxies is said to confirm the existence of undetectable energy. Not a single reference is made to the most powerful known force in the cosmos: electricity. Because the speed of light is used as a benchmark for defining cosmological distance calculations, the shifting of Fraunhofer lines into the red end of observed electromagnetic spectra determines "recessional velocity". As standard theories dictate, the faster an object recedes from our observation platforms, the further away it is, because the primordial Big Bang explosion imparted an initial impulse that is causing the universe to expand. Using these theoretical parameters, a faster recessional velocity means greater distance, which means an earlier time period. Astronomers made the disconcerting find ten years ago that the universe is expanding faster today than it did in the past. In order to accommodate anomalous redshift observations, the existence of a force that exerts negative pressure on gravitational fields was proposed and later called "dark energy" because it cannot be detected with any instrument. Enzo Brachini from the European Organization for Astronomical Research in the Southern Hemisphere (ESO) wrote: "This implies that one of two very different possibilities must hold true. Either the Universe is filled with a mysterious dark energy which produces a repulsive force that fights the gravitational brake from all the matter present in the Universe, or, our current theory of gravitation is not correct and needs to be modified, for example by adding extra dimensions to space." Two of the most pressing issues in the modern approach to understanding the universe are the adherence to redshift as the only tool for estimating distances and ages of stars and galaxies, and a lack of knowledge when it comes to electricity. 
First, in order to advance the catalogue of knowledge it often requires one's reputation and livelihood be placed on the block and the axe allowed to fall where it may. It takes real courage to buck the system and stand on one's convictions despite antagonism. Such is the case with Halton Arp, one of the grand masters in the field of astronomical research. Dr. Arp earned his place at the top of his field through years of research and many lonely hours on cold mountain peaks documenting far-flung celestial objects. As his galactic compendium grew, he noticed that there was something wrong with conventional time-speed-distance calculations: he found objects with higher redshift values in front of objects with lower redshift. Surely, such a conundrum should have immediately called into question the very nature of that "cosmological constant". If redshift is not an indicator of distance, J083026+524133 may not be so far away and therefore not so massive or bright. As Arp and his colleagues have repeatedly shown, taking in a wider field of view often reveals similar objects on the opposite side of a nearby active galaxy. Many of these high-redshift pairs are connected across the galaxy with a bridge of radiating material. Theories of an expanding universe, dark matter, and dark energy depend on XMM-Newton's (and other observatories') extremely narrow field of view and how the data is selected. The story of Halton Arp's experiences with the scientific community has been documented many times in these pages. Suffice to say, a respectful and open-minded reception from astronomers and astrophysicists was not to be the result of his discovery. Rather than accepting his observations, Dr. Arp's papers were barred from publication and his telescope time was canceled. 
He was shunned by colleagues and ignored by the community at large, one of the most shameful chapters in a book filled with instances of shoddy treatment and blind resentment. Second, by referring to material with a temperature of 100 million Kelvin as "hot gas", astrophysicists are highlighting their complete ignorance of plasma and its behavior. No atom can remain intact at such temperatures: electrons are stripped from the nuclei and powerful electrical fields develop. The gaseous matter becomes plasma, capable of conducting electricity and forming double layers. In 1986, Hannes Alfvén, in a NASA-sponsored conference on double layers in astrophysics, said: "Double layers in space should be classified as a new type of celestial object (one example is the double radio sources). It is tentatively suggested that x-ray and gamma ray bursts may be due to exploding double layers. In solar flares, [double layers]" Plasma is the first state of matter and makes up more than 99.99% of all that we observe in the universe. Cosmological redshift has been shown to be a property of matter and not one of velocity. It is far past time that scientists actually look at what they see with critical eyes. By Stephen Smith
<urn:uuid:de1cdca0-2f77-40af-9ad3-9fa96c501015>
3.078125
1,032
Comment Section
Science & Tech.
35.252083
Is Antarctica losing or gaining ice? What the science says... Satellite measurements show that Antarctica is gaining sea ice but losing land ice at an accelerating rate, which has implications for sea level rise. Skeptic arguments that Antarctica is gaining ice frequently hinge on an error of omission, namely ignoring the difference between land ice and sea ice. In glaciology, and particularly with respect to Antarctic ice, not all things are created equal. Let us consider the following differences. Antarctic land ice is the ice which has accumulated over thousands of years on the Antarctic landmass itself through snowfall. This land ice therefore is actually stored ocean water that once fell as precipitation. Sea ice in Antarctica is quite different, as it is generally considered to be ice which forms in salt water primarily during the winter months. In Antarctica, sea ice grows quite extensively during winter but nearly completely melts away during the summer (Figure 1). That is where the important difference between Antarctic and Arctic sea ice lies. Arctic sea ice lasts all year round: there are increases during the winter months and decreases during the summer months, but an ice cover does in fact remain in the North, which includes quite a bit of ice from previous years (Figure 1). Essentially, Arctic sea ice is more important for the earth's energy balance because when it melts, more sunlight is absorbed by the oceans, whereas Antarctic sea ice normally melts each summer, leaving the earth's energy balance largely unchanged. Figure 1: Coverage of sea ice in both the Arctic (Top) and Antarctica (Bottom) for both summer minimums and winter maximums. Source: National Snow and Ice Data Center. One must also be careful in interpreting trends in Antarctic sea ice. Currently this ice is increasing and has been for years, but is this the smoking gun against climate change? Not quite. 
Antarctic sea ice is gaining for many different reasons, but the most accepted recent explanations are listed below: i) Ozone levels over Antarctica have dropped, causing stratospheric cooling and increasing winds which lead to more areas of open water that can be frozen (Gillet 2003, Thompson 2002, Turner 2009). ii) The Southern Ocean is freshening because of increased rain, glacial run-off and snowfall. This changes the composition of the different layers in the ocean there, causing less mixing between warm and cold layers and thus less melting of sea ice (Zhang 2007). All the sea ice talk aside, it is quite clear that when it comes to Antarctic ice, sea ice is not the most important thing to measure. In Antarctica, the most important ice mass is the land ice sitting on the West Antarctic Ice Sheet and the East Antarctic Ice Sheet. So how is Antarctic land ice doing? Figure 2: Estimates of total Antarctic land ice changes and approximate sea level contributions using many different measurement techniques. Adapted from The Copenhagen Diagnosis. (CH = Chen et al. 2006, WH = Wingham et al. 2006, R = Rignot et al. 2008b, CZ = Cazenave et al. 2009 and V = Velicogna 2009.) Estimates of recent changes in Antarctic land ice (Figure 2) range from losing 100 Gt/year to over 300 Gt/year. Because 360 Gt/year represents an annual sea level rise of 1 mm/year, recent estimates indicate a contribution of between 0.27 mm/year and 0.83 mm/year coming from Antarctica. There is of course uncertainty in the estimation methods, but multiple different types of measurement techniques (explained here) all show the same thing: Antarctica is losing land ice as a whole, and these losses are accelerating quickly. Last updated on 4 May 2012 by John Cook.
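The rule of thumb above, that 360 Gt/year of land-ice loss corresponds to about 1 mm/year of sea level rise, can be checked with a back-of-the-envelope calculation (the ocean area and water density below are standard approximate values, not taken from the article):

```python
OCEAN_AREA_M2 = 3.6e14   # approximate global ocean surface area, m^2
WATER_DENSITY = 1000.0   # kg per cubic metre of meltwater

def sea_level_rise_mm(ice_loss_gt):
    """Sea level rise in mm from a land-ice loss given in gigatonnes."""
    volume_m3 = ice_loss_gt * 1e12 / WATER_DENSITY  # 1 Gt = 1e12 kg
    return volume_m3 / OCEAN_AREA_M2 * 1000.0       # metres -> mm

print(round(sea_level_rise_mm(360), 2))  # about 1 mm/year, as stated
```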
<urn:uuid:ccf1e401-2006-4aa4-a201-af4d3ebb4886>
3.625
768
Knowledge Article
Science & Tech.
45.58612
The Dark Ages Lunar Interferometer (DALI) Kurt Weiler (NRL) The Dark Ages Lunar Interferometer (DALI) is a Moon-based radio telescope concept aimed at imaging highly-redshifted neutral hydrogen signals from the first large scale structures forming during the Universe’s “Dark Ages” and “Epoch of Reionization.” The Universe’s Dark Ages consist of the interval after recombination until the formation of the first luminous objects, when the Universe was unlit by any stars. During the Dark Ages, baryons -- neutral hydrogen atoms -- were able to collapse into dark matter-dominated, overdense regions. As the H I gas accumulated in overdense regions, its excitation temperature decoupled from, and became lower than, the temperature of the cosmic microwave background (CMB). Observations of the highly-redshifted hyperfine (21-cm) transition should show a patchwork of absorption features from the first large-scale structures against the CMB. Observing these features would probe structure formation in the relatively simple linear regime, and the H I line may represent the only means of obtaining information about this cosmic epoch. Later, at redshifts z ~ 10, the first stars and black holes formed in these overdense regions, and their collective UV radiation led to the Universe becoming nearly fully ionized, a state in which it remains today. The Epoch of Reionization (EoR) marks this second transition, during which time the 21-cm line excitation temperature should have risen, eventually exceeding the CMB temperature, until essentially all of the hydrogen was ionized. Imaging the (redshifted) 21-cm line of H I at different wavelengths will construct a tomographic or 3-dimensional view of the Dark Ages and EoR. Operating at 1 - 30 meter wavelengths (10 - 300 MHz), probing redshifts 6 < z < 100, DALI would be located on the far side of the Moon, where it would be shielded from terrestrial emissions and, for half of the Moon’s orbit, from solar radio emissions. 
In order to have sufficient sensitivity, the array must have an effective collecting area of at least 10 km^2 (10^7 m^2). As a secondary science goal, DALI will target auroral radio bursts from extrasolar planets in order to study their magnetospheres, potentially helping to distinguish habitable and non-habitable terrestrial-mass planets in the solar neighborhood. We illustrate the notional DALI concept and identify areas of technology development that will be required over the next decade that would allow the deployment of DALI in following decades.
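The quoted 10 - 300 MHz band maps onto the quoted redshift range through the rest frequency of the H I hyperfine line, roughly 1420.4 MHz. A minimal sketch of the conversion (not part of the original abstract; the function name is illustrative):

```python
REST_FREQ_MHZ = 1420.40575  # rest frequency of the H I 21-cm line

def redshift_from_freq(observed_mhz):
    """Redshift at which the 21-cm line appears at the given frequency."""
    return REST_FREQ_MHZ / observed_mhz - 1.0

# 200 MHz corresponds to z of about 6, and 14 MHz to z of about 100,
# roughly matching the 6 < z < 100 range quoted above.
print(round(redshift_from_freq(200.0), 1))
print(round(redshift_from_freq(14.0), 1))
```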
<urn:uuid:641ad975-09e6-4244-9c09-75f2e1aee56a>
3.171875
557
Academic Writing
Science & Tech.
23.383896
Alternate Entry Name: _EPOST B: .epost( errnum, format, arg1, arg2, ... ); C: void _epost(int errnum, const char *format, ...); .EPOST "posts" an error message. This means that the message is formatted using the "format" string and arguments, then is saved in memory for later retrieval. For example, suppose that a program encounters an error but wants to perform some clean-up or error recovery operations before it issues an error message to the user. The program may use .EPOST to create and save an appropriate error message, then perform whatever clean-up actions are necessary. Later on, it can retrieve the message and actually output it. Formatting the message before cleaning up may let you create a more informative message, since the clean-up process may get rid of data which would be useful to include in the message. Only one message can be posted at a time. Thus each call to .EPOST overwrites the message posted by the previous call. There are several ways to retrieve a posted message: msg = .strer(errnum); assigns "msg" a pointer to a string holding the text of the currently posted message, provided that the specified "errnum" matches the error number of the posted message. Once you have obtained this message pointer, you can output it in the usual way (e.g. with PRINTF). outputs the "prefix" string followed by the posted error message, provided that the current value of "errno" matches the error number of the posted message. expl b lib .strer expl b lib io.err expl b lib .perror Copyright © 1996, Thinkage Ltd.
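The post/overwrite/retrieve semantics described above can be illustrated with a small Python model. (The real routines are C functions in the Thinkage library; the single-slot implementation and the names `epost`/`strer` below are just a sketch of the behaviour, not the library source.)

```python
# Illustrative model of .EPOST / .STRER: one message slot, formatted at
# post time, overwritten by each subsequent post.
_posted = {"errnum": None, "msg": None}

def epost(errnum, fmt, *args):
    """Format and save a message; overwrites any previously posted one."""
    _posted["errnum"] = errnum
    _posted["msg"] = fmt % args

def strer(errnum):
    """Return the posted message only if errnum matches, else None."""
    return _posted["msg"] if errnum == _posted["errnum"] else None

epost(13, "cannot open %s", "data.txt")
print(strer(13))  # the posted message
print(strer(7))   # None: wrong error number
```

Note how posting a second message makes the first unretrievable, matching the "only one message can be posted at a time" rule in the documentation.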
<urn:uuid:8fdd4c35-11b6-4cdb-8a83-73cf09da4e8d>
2.765625
369
Documentation
Software Dev.
57.346164
Narrator: This is Science Today. Earlier this year, an international team of astrophysicists discovered a distant, icy planet five times the size of Earth. The smallest extrasolar planet revealed outside of our solar system was found using a technique called microlensing that is based on an idea Albert Einstein came up with 70 years ago. Astronomer Ken Cook of the Lawrence Livermore National Laboratory, who was part of the group that made the discovery, says this happens when a foreground star passes very close to the line of sight of a more distant star. This creates a ring-like image. Cook: Now, this ring is so small on the sky that we can not tell it's a ring – all that we can tell is that suddenly the star is brighter. And when planets get near the ring, it perturbs the light coming, so it causes a little bump in the light curve that we see. As soon as this star moves off the line of sight, this star goes back to being dimmer and its normal self. Narrator: The new Earth-like planet, made of rock and ice, orbits a parent star every ten years at three times the distance from Earth to the sun. For Science Today, I'm Larissa Branin.
<urn:uuid:2af12d66-b487-4614-aeba-dc97f8a1663b>
4.0625
257
Audio Transcript
Science & Tech.
58.911275
Naturally occurring uranium contains more than 99% ²³⁸₉₂U, an isotope which decays to ²³⁴₉₀Th by α emission, as shown in Eq. (1) from Naturally Occurring Radioactivity. The product of this reaction is also radioactive, however, and undergoes β decay, as already shown in Eq. (3) from that section. The ²³⁴₉₁Pa produced in this second reaction also emits a β particle. These three reactions are only the first of 14 steps. After emission of eight α particles and six β particles, the isotope ²⁰⁶₈₂Pb is produced. It has a stable nucleus which does not disintegrate further. Summed over all 14 steps, the net reaction is

²³⁸₉₂U → ²⁰⁶₈₂Pb + 8 ⁴₂He + 6 ⁰₋₁e

Such a series of successive nuclear reactions is called a radioactive series. Two other radioactive series similar to the one just described occur in nature. One of these starts with the isotope ²³²₉₀Th and involves 10 successive stages, while the other starts with ²³⁵₉₂U and involves 11 stages. Each of the three series produces a different stable isotope of lead.

EXAMPLE 1 The first four stages in the uranium-actinium series involve the emission of an α particle from a ²³⁵₉₂U nucleus, followed successively by the emission of a β particle, a second α particle, and then a second β particle. Write out equations to describe all four nuclear reactions.

Solution The emission of an α particle lowers the atomic number by 2 (from 92 to 90). Since element 90 is thorium, we have

²³⁵₉₂U → ²³¹₉₀Th + ⁴₂He

The emission of a β particle now increases the atomic number by 1 to give an isotope of element 91, protactinium:

²³¹₉₀Th → ²³¹₉₁Pa + ⁰₋₁e

The next two stages follow similarly:

²³¹₉₁Pa → ²²⁷₈₉Ac + ⁴₂He

²²⁷₈₉Ac → ²²⁷₉₀Th + ⁰₋₁e

EXAMPLE 2 In the thorium series, ²³²₉₀Th loses a total of six α particles and four β particles in a 10-stage process. What isotope is finally produced in this series?

Solution The loss of six α particles and four β particles involves the total loss of 24 nucleons and 6 × 2 − 4 = 8 positive charges from the ²³²₉₀Th nucleus. The eventual result will be an isotope of mass number 232 − 24 = 208 and a nuclear charge of 90 − 8 = 82. Since element 82 is Pb, we can write

²³²₉₀Th → ²⁰⁸₈₂Pb + 6 ⁴₂He + 4 ⁰₋₁e
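The bookkeeping used in both examples, where each α emission removes 4 from the mass number and 2 from the atomic number while each β emission adds 1 to the atomic number, is easy to automate. A small sketch (the function name is just for illustration):

```python
def series_product(mass_number, atomic_number, alphas, betas):
    """Mass number and atomic number remaining after the given counts
    of alpha and beta emissions in a radioactive series."""
    return (mass_number - 4 * alphas,
            atomic_number - 2 * alphas + betas)

# Uranium series: U-238 (Z=92) after 8 alphas and 6 betas gives Pb-206 (Z=82)
print(series_product(238, 92, 8, 6))
# Thorium series (Example 2): Th-232 (Z=90) after 6 alphas and 4 betas
print(series_product(232, 90, 6, 4))
```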
<urn:uuid:3a4b3e2d-8c51-4c49-b695-424e316378a2>
3.90625
629
Tutorial
Science & Tech.
51.408347
This video is called Australian Antarctic season 2012-13. From Wildlife Extra: New acoustic devices to track and count Blue whales Eavesdropping on the elusive blue whale October 2012: Australian Antarctic scientists have successfully tested new acoustic technology to track and locate scores of blue whales hundreds of kilometres away by eavesdropping on the resonating song of these rare and elusive animals. By using sound rather than sight to initially detect the whales, the scientists significantly improved the likelihood of finding and counting whales in the vast Southern Ocean. The research is a core part of an Australian-led international project to estimate the abundance, distribution and behaviour of the species that was decimated in the early 1900s when industrial whaling killed approximately 250,000 animals. To test the technology, the team of Australian Antarctic Division scientists deployed directional sonobuoys in northern Bass Strait in January and March. Australian Environment Minister Tony Burke said, “Blue whales are under threat of extinction and improved scientific knowledge will help in the conservation and recovery of the species. This research reinforces Australia’s commitment to non-lethal research of whales. This contrasts with Japan’s so-called ‘scientific whaling’ where the alleged research begins with a harpoon. This breakthrough project again shows you don’t have to kill a whale to study it.” 103 sightings of Blue whales Leader of the Australian Marine Mammals Centre Dr Mike Double said that over 20 days on the voyage there were 103 sightings of blue whales over a 10,000 km2 area. “While blue whales are the largest animals on earth, growing up to 31 metres long, they’re still very difficult to find in a vast ocean and we know very little about them,” Dr Double said. 
“The real-time passive acoustic tracking system was highly effective at picking up their low frequency calls from hundreds of kilometres away, thus maximising our chance of locating them.” The sonobuoys allowed researchers to record more than 500 hours of audio including more than 20,000 blue whale vocalisations. “During the voyages 32 vocalising blue whales were detected via acoustic tracking and of these 29 sightings located one or more whales. That’s a 90% success rate!” Dr Double said. Once the whales were located they were photographed and biopsied for further identification. A prototype moored acoustic recorder was also trialled, with the equipment deployed for three days in a fixed position. While the results from this recorder are still being analysed, the fixed moorings could be used to listen for whale song for up to 15 months. Mr Burke said the acoustic technology will now be used in the Antarctic Blue Whale Project, which will estimate their abundance and migration patterns, in January next year. “The Blue Whale Project is an initiative of international Southern Ocean Research Partnership involving nine other countries,” he said. “When Australia attends the International Whaling Commission and some countries talk about so-called ‘scientific whaling’ we pursue programs like this.”
<urn:uuid:b48cf28f-e867-416f-b268-c1e0f6fd77b1>
3.53125
635
Personal Blog
Science & Tech.
36.313957
Protons are positively charged; electrons negatively. This is an attractive situation. Where do the shells come from that keep electrons from falling into the protons under this attraction? The question can be answered in many ways. Here is one, which follows from AWT: the rotating electron shakes the vacuum foam near the proton until it becomes so dense that it flows over the surface of the resulting dense blob of vacuum, like a bubble at the surface of a water droplet. Another answer can follow from the description of the electron orbital as a mercury droplet, separating the electron from the photon by surface tension. The breakthrough of the surface membrane by the electron would require the temporary formation of a strongly negatively curved surface with a negative, repulsive potential, which is not possible. In other words, the absence of electron fall into the atomic nucleus can be explained as a sort of buoyancy or surface-tension phenomenon. We could even invent an explanation based on the laws of optics of electron wave spreading and total reflection phenomena, among many others, but all these explanations remain based on inertia and Newtonian mechanics. It should be noted that inside neutron stars an electron can be pushed into a proton to form a neutron, due to the strong repulsive force of the other electrons. The same phenomenon would occur if you placed an electron and a proton into the dense vacuum near a black hole. The density of the vacuum would compensate the density of the field near the particles in such a way that these particles condense nearly seamlessly into a neutron or an even heavier particle. For the same reason, though, the resulting neutron would decompose easily into neutrino and gamma radiation.
<urn:uuid:ce104a14-3361-4b14-862a-d1e98679bd7a>
3.5
320
Comment Section
Science & Tech.
27.906165
Fossils and Living Species Fossils raised many questions about the origin of species--and not just for Darwin. Discoveries in geology had already challenged the idea that the world and all its species had been created at the same time a few thousand years ago. Fossils clearly showed that in past ages, the world had been inhabited by different species from those existing today. Old species had died out, and new species had appeared at many different times in Earth's history. Fossils also revealed another intriguing pattern: New species tended to appear where similar species had previously lived. Why would one species replace a similar one in the same location? Or perhaps, Darwin would eventually wonder, had the older species somehow given rise to the new ones? Back in London, the relationship between old and new species, as shown in fossils, would become one of the main lines of evidence leading to Darwin's theory of evolution.
<urn:uuid:f6f8d765-d8b1-4278-935c-0122c1bd7736>
3.875
186
Knowledge Article
Science & Tech.
42.21913
FORUM ON EDUCATION Energy - a Basic Physics Concept and a Social Value John L. Roeder Thirty years ago I began teaching at The Calhoun School in New York City. Soon after I arrived, the Arab Oil Embargo meant that the availability of gasoline at the corner service station could no longer be taken for granted, and before year's end I would pay in excess of a dollar for a gallon of it for the first time. The term "energy crisis" entered our vocabulary, and at Calhoun we decided to start a seminar about it. That seminar later led to more organized and systematic teaching about energy, first in a course on "Critical Social Issues" and later in a physical science course called "Energy for the Future." I got involved with the educational work of the National Energy Foundation, then headquartered in New York City, spent two summers working on NSTA's "Project for an Energy Enriched Curriculum," and became a Resource Agent for the New York Energy Education Project. Although my energy-focused physical science course gave way to Conceptual Physics and later Active Physics, after Paul Hewitt convinced me in 1989 that physics could and should be taught to ninth graders, only last year did I return to my earlier "life" as an energy educator and develop an Active Physics-formatted chapter on energy issues, in which the challenge was the same as the final exam of my former course: for students to plan their energy future without fossil fuels.1 It is the Second Law of Thermodynamics that makes energy an important concept in society. If we had only the First Law to worry about, we wouldn't have to worry: energy might not be created, but it isn't destroyed either. All the energy in the world today would continue to be available to us. But for energy to meet our needs, it must be transformed -- e.g., we need to increase the thermal energy in our homes in winter, and we need a lot of energy brought to our appliances by electrons in electric current if they are to operate. 
The Second Law of Thermodynamics tells us that when energy is transformed, some of it gets transformed to a form that is less useful (the most typical example of this is “waste heat”). Energy “sources” are more useful forms of energy that can be transformed to meet our needs. When we “produce” energy, what we are really doing is to transform useful energy from these energy "sources" to a form that meets our needs. When we “use” these energy “sources,” energy in a form that met our needs is transformed to a less useful form. When we “conserve” energy, we “use” the smallest amount of an energy “source” to accomplish a particular task. An important plan for any energy future is to “conserve” as much as we can, but “conserve” as much as it might, an industrial society still needs to “use” new “sources” of energy – to heat and cool its buildings, to run its appliances, to move its people, and to manufacture its goods. Because of their convenience, the "sources" of choice for more than a hundred years have been fossil fuels, the fuels I ask my students to plan their future without. Why? Not just because a shortage of fossil fuels got us into trouble in 1973 – and again in 1979. Not just because burning fossil fuels produces carbon dioxide which leads to global warming. More fundamentally, we're eventually going to run out of them. Their continued use to support an ever-increasing population is not "sustainable" -- in the sense that our use of them denies future generations the benefits of their use (and as a manufacturing material as well as an energy "source"). Twenty years after the 1973 Arab Oil Embargo I took a retrospective look at what our actions showed we had learned from it. I learned that US total energy "use" had declined in the years immediately following the energy crises of 1973 and 1979, that US energy use through 1990 had fallen below a host of predictions, but that most of the reduction was due to the industrial sector. 
But little had been done to wean us from our diet of fossil fuels. The Solar Energy Research Institute was charged at its founding in 1977 to meet 20% of US energy needs from renewable sources by 2000. It was renamed the National Renewable Energy Laboratory (NREL) in 1991. I thought that this 30-year anniversary of the Arab Oil Embargo might be a good time to find out whether this goal had been met. Data for US fossil fuel and total energy use are plotted on Figures 1 and 3. Both graphs show a decline following the energy crisis years of 1973 and 1979 and that both fossil fuel and total energy use had climbed back to their peak 1979 values a decade later and continue to climb. But, while fossil fuel use doubled from 1949 to 1968, it has increased by less than 50% over the 1968 level since then. And not until 2000 did petroleum use climb back to its 1979 peak. But the fact that we have put the brakes on increasing our petroleum use more than for other fossil fuels since the energy crises of the 1970s is no overt cause for rejoicing. For while imports still comprise only a small fraction of the coal (1.5%) and natural gas (20%) that we use, the fraction of petroleum imported passed 50% in 1990. M. King Hubbert, whose ability to forecast future fossil fuel production in terms of past data was legendary, wrote in the September 1971 Scientific American2 that "In the case of oil the period of peak production appears to be the present," and he was right. We've decreased the rate at which our use of energy in general and fossil fuels in particular has increased, but these uses are still increasing. Moreover, the time since the energy crises of the 1970s has seen a decline of US production of petroleum and continually increasing imports. How're we doing on renewables? Did NREL achieve the goal of 20% of US energy from renewable sources by 2000? Fig. 2 plots energy from conventional hydroelectricity, biomass, geothermal, and solar, and only since 1988 has solar gotten up off the t-axis on the graph. Most of our renewable energy continues to come from the two sources that have played the leading role even before renewable energy was fashionable: hydroelectricity and biomass. Geothermal has also started to make a more significant contribution since the energy crisis years, although it, too, had been around for a long time (see Fall 2002 issue). The total US energy use in Fig. 3 shows an increasing gap between total energy use and fossil fuel use. Although no new nuclear reactors have been erected since Three Mile Island in 1979, nuclear electricity continues to play an increasing role, and this has increased to be just a little greater than renewables. In 1979 the Ford Foundation-sponsored study, Energy: The Next Twenty Years, opened with the following statement: More than half a decade has passed since the oil crisis of 1973-1974 signaled a new era in U.S. and world history. The effort to develop a satisfactory policy response to what was once characterized as the "moral equivalent of war" has stretched out so long that weariness rather than vigor characterizes the national debate. . . . energy and environmental objectives seem irreconcilable; . . . a national consensus that solar energy is a good thing has yet to result in significant resource commitments, while support for nuclear energy, yesterday's hope for tomorrow, is eroding; and coal is marking time. Meanwhile, the slow, steady increase in the number of barrels of oil imported . . . provide[s] reminders that much needs to be done.3 I don't think it would stretch the imagination to replace "more than half a decade" in this statement with "three decades." In that time we have not learned the lessons of the energy crises, nor have we met the well-intentioned goal of 20% of our energy from renewable sources by 2000. 
In fact, at the World Summit on Sustainable Development in Johannesburg last year the leaders of the world could not agree to increase the percentage of the world's energy use from renewables to 15% by 2010. Last fall when I presented my ninth graders the challenge of the new Active Physics-formatted chapter I wrote on energy issues, I told them that I was asking them to do what the leaders of the world were unwilling to commit to: plan their energy future without fossil fuels. In the year 2010 those ninth graders will be graduating from college and begin to take their place in the world. If the leaders of the world, more preoccupied with the politics of the present when they should be framing a forward-looking vision of the future, haven't figured out how to produce 15% of the world's energy by renewable means by then, I hope that the next generation will be better trained to deal with this problem. 1. John L. Roeder, "Active Physics Chapters on Energy," AAPT Announcer, 32(2), 95 (Summer 2002) 2. M. King Hubbert, "The Energy Resources of the Earth," in Energy and Power (Freeman, San Francisco, 1971) 3. Hans H. Landsberg, et al., Energy: The Next Twenty Years (Ballinger, Cambridge, MA, 1979) (Note: The preceding article was excerpted from the author's talk of the same title at the American Association of Physics Teachers meeting in Madison, WI, 4 Aug 2003.) by John L. Roeder The Calhoun School New York, NY 10024
The kinetic model of matter is an important idea that can be tackled at a variety of levels. To link the animation to NC 'Scientific Enquiry' it would be useful to emphasise that the animation is trying to model how matter behaves, and as a model it will have flaws. As part of this discussion the pupils' attention could be drawn to the assumptions at the beginning of the animation. For example, it is unlikely that the water molecule would be spherical, and as the attraction between water molecules is quite strong (hydrogen bonding) the particles might not quite behave in the way shown in the animation. At a different level it might be noticed that the particles are almost all moving at the same speed, which is obviously incorrect. The questions are provided to promote discussion based upon the kinetic model.

The kinetic theory of matter states that matter is made up of small particles which are constantly in motion. The higher the temperature, the faster the particles move. In a solid the particles are close together, moving very little, and attract one another strongly. In a liquid the particles are further apart and move more freely, so the attraction between particles is weaker. In a gas the particles are very far apart, moving fast, and there is almost no attraction between them. The animation attempts to model this relationship.

Read the opening page and the assumptions on which the model is based. Open the animation and start it by clicking on the 'Increase Heat' button. Continue increasing the amount of heat and follow the rise in temperature by viewing the thermometer and the graph. What is happening to the matter when the temperature remains steady? Choose one of the questions from the end of the animation and discuss possible answers with other members of the class.
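The "higher temperature, faster particles" statement can be quantified with the Maxwell-Boltzmann mean-speed formula, which also exposes the animation's uniform-speed flaw (real molecules have a spread of speeds around this mean). A small Python sketch, added here for illustration; the constants are standard values:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_speed(temp_K, mass_kg):
    """Mean molecular speed from the Maxwell-Boltzmann distribution."""
    return math.sqrt(8 * k_B * temp_K / (math.pi * mass_kg))

# Mass of one water molecule: molar mass / Avogadro's number.
m_water = 18.015e-3 / 6.02214076e23  # kg

v_cold = mean_speed(273.15, m_water)  # ice point
v_hot = mean_speed(373.15, m_water)   # boiling point
```

The ratio of the two speeds is just the square root of the temperature ratio, about 1.17, so warming water from 0 °C to 100 °C speeds its molecules up by roughly 17% on average.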
Predicting the consequences of global warming is one of the most difficult tasks for the world's climate researchers. This is because the natural processes that cause rain, hail and snow storms, increases in sea level and other expected effects of global warming are dependent on many different factors. It is also difficult to predict the size of the emissions of greenhouse gases in the coming decades, as this is determined to a great extent by political decisions and technological breakthroughs. How might this affect Europe? Many of the effects of global warming have been well-documented, and observations from real life are consistent with earlier predictions. The effects that can be predicted include:

When the weather gets warmer, evaporation from both land and sea increases. This can cause drought in areas of the world where the increased evaporation is not compensated for by more precipitation. In some regions of the world this will result in crop failure and famine, especially in areas where temperatures are already high. The extra water vapour in the atmosphere will fall again as extra rain, which can cause flooding in other places in the world.

Worldwide, glaciers are shrinking rapidly at present. Ice appears to be melting faster than previously estimated. In areas that are dependent on meltwater from mountain areas, this can cause drought and lack of domestic water supply. According to the IPCC, up to a sixth of the world's population lives in areas that will be affected by meltwater reduction.

The warmer climate will probably cause more heatwaves, more violent rainfall and also an increase in the number and/or severity of storms.

Sea level rises because of melting ice and snow and because of the thermal expansion of the sea (water expands when warmed). Areas that are just above sea level now may become submerged. For example, some Pacific Island nations are expected to be partially or completely submerged by the end of the century.
Coastal and shallow marine plants and animals will be affected, for example mangroves and coral reefs. In countries with large areas of coastal lowland there will be a dual risk of river floods and coastal flooding, which will reduce the area for living and working. Coastal defences will need to be strengthened, and river levees will need to be developed. The increase in standing water may allow insects such as mosquitoes to multiply, along with the diseases spread by insects and ticks, such as Lyme disease. The most recent meetings of scientists (the 2009 Climate Change Summit in Copenhagen, COP15) suggest that the consequences of the temperature increase caused by the greenhouse effect may be more severe than previously thought.
Zelia developed from a convective cloud cluster in the near-equatorial trough to the southwest of Sumatra. Intensifying and moving to the southeast on the southern side of the middle-level ridge, it soon encountered low-level southeasterly flow following the passage of a cold front to the south. This weakened the system and rapidly reversed its path. Zelia then dissipated as a TC over water in a high-shear environment, but maintained its identity as a tropical depression for several more days. For more details see the TC Zelia Report (pdf).
Track and intensity
All times are in WST; subtract 8 hours to convert to UTC.
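The WST-to-UTC conversion in the note can be expressed directly with Python's standard library. The timestamp below is invented purely to show the 8-hour offset; the actual track times are in the PDF report:

```python
from datetime import datetime, timedelta, timezone

# Western Standard Time is UTC+8, hence "subtract 8 hours" for UTC.
WST = timezone(timedelta(hours=8), "WST")

obs_wst = datetime(1998, 12, 14, 18, 0, tzinfo=WST)  # hypothetical log time
obs_utc = obs_wst.astimezone(timezone.utc)           # same instant, clock 8 h earlier
```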
One of the delights of wandering around an undergraduate chemistry laboratory is discussing the unexpected, if not the outright impossible, with students. The >100% yield in a reaction is an example. This is sometimes encountered (albeit only briefly) when students attempt to recrystallise a product from cyclohexane, and get an abundant crop of crystals when they put their solution into an ice-bath to induce the crystallisation. Of the solvent of course! I should imagine 1000% yields are possible like this. What the students are not expecting is that cyclohexane has such a high melting point, higher than that of water! n-Octane for example melts at -57°C (and most of us have seen those travelogues in the Antarctic where the petrol tanks need to be warmed to prevent freezing), so why is that of cyclohexane so much higher? That it might be strange is shown by the melting points of the series:
- benzene, +5.5°C
- cyclohexadiene, -89°C
- cyclohexene, -97°C
- cyclohexane, +6.5°C
Benzene one might explain because it famously stacks in a herring-bone fashion, with the relatively electropositive hydrogen attracted to the π-cloud on the face. Clearly, this explanation cannot hold for cyclohexane, which has no π-face. What does the crystal look like? If one inspects the structure closely, one can find quite a few H…H contacts at about 2.4Å, and they are arranged in a particularly rigid three-dimensional manner. The maximum attractive force resulting from van der Waals, or dispersion, interactions between two hydrogens is thought to occur at ~2.4Å. Perhaps cyclohexane is a prime (possibly THE prime) example of the influence of this (under-rated) interaction? A molecule covered in Velcro no less. By the way, can you spot the connection with the previous post? Postscript: Below is a so-called non-covalent-interaction (NCI) analysis of cyclohexane as packed into a crystal lattice. The coordinates are obtained from a neutron diffraction structure.
The green regions indicate weakly attractive zones.
The Unusual Properties of Water Molecules

Water molecules have unusual chemical and physical properties. Water can exist in all three states of matter at the same time: liquid, gas, and solid. Imagine that you're sitting in your hot tub (filled with liquid water) watching the steam (gas) rise from the surface as you enjoy a cold drink from a glass filled with ice (solid) cubes. Very few other chemical substances can exist in all these physical states over such a narrow temperature range.

Water's unique properties

Following are some of the unique properties of water:
- In the solid state, the particles of matter are usually much closer together than they are in the liquid state. So if you put a solid into its corresponding liquid, it sinks. But this is not true of water. Its solid state is less dense than its liquid state, so it floats.
- Water's boiling point is unusually high. Other compounds similar in weight to water have a much lower boiling point.
- Another unique property of water is its ability to dissolve a large variety of chemical substances. It dissolves salts and other ionic compounds, as well as polar covalent compounds such as alcohols and organic acids. Water is sometimes called the universal solvent because it can dissolve so many things.
- It can also absorb a large amount of heat, which allows large bodies of water to help moderate the temperature on earth.

Water has many unusual properties because of its polar covalent bonds. Oxygen has a larger electronegativity than hydrogen, so the electron pairs are pulled in closer to the oxygen atom, giving it a partial negative charge. Subsequently, both of the hydrogen atoms take on a partial positive charge. The partial charges on the atoms created by the polar covalent bonds in water are shown in the following figure. Water is a dipole and acts like a magnet, with the oxygen end having a negative charge and the hydrogen end having a positive charge. These charged ends can attract other water molecules.
This attraction between the molecules is an intermolecular force (a force between different molecules). Intermolecular forces can be of three different types:

London force (or dispersion force). This weak type of attraction generally occurs between nonpolar covalent molecules, such as nitrogen, hydrogen, or methane. It results from the ebb and flow of the electron orbitals, giving a weak and brief charge separation around the bond.

Weak dipole-dipole interaction. This intermolecular force occurs when the positive end of one dipole molecule is attracted to the negative end of another dipole molecule. It's much stronger than a London force, but it's still pretty weak.

Extremely strong dipole-dipole interaction. This force occurs when a hydrogen atom is bonded to one of three extremely electronegative elements: O, N, or F. These three elements have a very strong attraction for the bonding pair of electrons, so the atoms involved in the bond take on a large amount of partial charge. This bond turns out to be highly polar, and the higher the polarity, the stronger the resulting attraction. When the O, N, or F on one molecule attracts the hydrogen of another molecule, the dipole-dipole interaction is very strong. This strong interaction is called a hydrogen bond. The hydrogen bond is the type of interaction that's present in water, as shown in the following illustration (caption: Hydrogen bonding in water).

Water molecules are stabilized by these hydrogen bonds, so breaking up (separating) the molecules is very hard. The hydrogen bonds account for water's high boiling point and ability to absorb heat. When water freezes, the hydrogen bonds lock water into an open framework that includes a lot of empty space. In liquid water, the molecules can get a little closer to each other, but when the solid forms, the hydrogen bonds result in a structure that contains large holes. The holes increase the volume and decrease the density.
This process explains why the density of ice is less than that of liquid water (the reason ice floats). The structure of ice is shown below, with the hydrogen bonds indicated by dotted lines.
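The density argument can be made quantitative with approximate handbook values, which I am supplying here; they are not from the text:

```python
# Approximate densities near 0 degrees C (handbook values).
density_ice = 0.917     # g/cm^3
density_water = 0.9998  # g/cm^3, liquid water at 0 degrees C

ice_floats = density_ice < density_water

# For a floating body, the submerged fraction equals the density ratio,
# which is why most of an iceberg sits below the waterline.
submerged_fraction = density_ice / density_water
```

The ratio comes out at roughly 0.92, i.e. about 92% of floating ice is under the surface.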
Contact: Christine Pulliam Harvard-Smithsonian Center for Astrophysics Caption: A giant gamma-ray structure was discovered by processing Fermi all-sky data at energies from 1 to 10 billion electron volts, shown here. The dumbbell-shaped feature (center) emerges from the galactic center and extends 50 degrees north and south from the plane of the Milky Way, spanning the sky from the constellation Virgo to the constellation Grus. Credit: NASA/DOE/Fermi LAT/D. Finkbeiner et al. Usage Restrictions: None Related news release: Astronomers find giant, previously unseen structure in our galaxy
In any geometric solid that is composed of flat surfaces, each flat surface is called a face. The line where two faces meet is called an edge. For example, the cube above has six faces, each of which is a square. Where two squares meet, a line is formed, which is called an edge. In the case of a cube, it has 12 such edges. (C) 2009 Copyright Math Open Reference. All rights reserved
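The edge and vertex counts quoted above can be checked mechanically. A small Python sketch, added for illustration, that enumerates a unit cube's vertices and derives its edges:

```python
from itertools import product

# Unit-cube vertices: all 0/1 coordinate triples -- 8 of them.
vertices = list(product((0, 1), repeat=3))

# Two vertices share an edge exactly when they differ in one coordinate.
edges = [(a, b) for i, a in enumerate(vertices)
         for b in vertices[i + 1:]
         if sum(x != y for x, y in zip(a, b)) == 1]
```

Counting the resulting lists reproduces the 8 vertices and 12 edges described in the text.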
A loop statement specifies the repeated execution of a statement sequence. It is terminated by the execution of any exit statement within that sequence.

LoopStatement = LOOP StatementSequence END.

LOOP
  IF t1^.key > x THEN t2 := t1^.left; p := TRUE
  ELSE t2 := t1^.right; p := FALSE
  END;
  IF t2 = NIL THEN EXIT END;
  t1 := t2
END

While, repeat, and for statements can be expressed by loop statements containing a single exit statement. Their use is recommended as they characterize the most frequently occurring situations where termination depends either on a single condition at either the beginning or end of the repeated statement sequence, or on reaching the limit of an arithmetic progression. The loop statement is, however, necessary to express the continuous repetition of cyclic processes, where no termination is specified. It is also useful to express situations such as the one exemplified above. Exit statements are contextually, although not syntactically, bound to the loop statement which contains them.
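For readers more used to mainstream languages: the LOOP/EXIT pattern corresponds to an unconditional loop with a break. A rough Python sketch of the same binary-tree descent; the Node class is invented scaffolding, not part of the original text:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def descend(t1, x):
    """Walk down a binary search tree, mirroring the LOOP/EXIT example."""
    p = False
    while True:                  # LOOP
        if t1.key > x:           # IF t1^.key > x THEN ... ELSE ... END
            t2, p = t1.left, True
        else:
            t2, p = t1.right, False
        if t2 is None:           # IF t2 = NIL THEN EXIT END
            break
        t1 = t2                  # t1 := t2
    return t1, p

root = Node(5, Node(3), Node(8))
leaf, went_left = descend(root, 4)
```

Note that the exit condition sits in the middle of the body, which is exactly the situation where a plain while or repeat loop would be awkward.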
NGC 1961 is a problematic galaxy. Its highly disturbed and asymmetric spiral arms would normally indicate an interaction or merger with another galaxy. However, no culprit is found to be the source of NGC 1961's angst. This galaxy is part of a group (of about 10 other smaller galaxies) around 171 million light years away. Given the apparent size and brightness of this galaxy, it must be one of the largest galaxies in our "local" universe. Astronomers have observed this galaxy from X-rays to radio wavelengths of light in order to unravel the mystery of this galaxy's morphology. One recent paper concludes that the shape of the galaxy may be due to its interaction with the gas in the cluster. Most of this galaxy is still producing young and massive stars that live short lives and die violent deaths as supernovae. The most recent explosion in this galaxy was observed in 2001. L R G B color production was used to create this image. Minimum credit line: Doug Matthews/Adam Block/NOAO/AURA/NSF
I’ve been trying to find out how many taste buds pigeons have after reading somewhere that they haven’t got many compared to humans. Well, after a lot of searching I found the original paper (Pigeon Pages by Pigeon Recovery) I had read which states that feral pigeons have only 57 taste buds compared to the 9000 humans have. Searches on the net have given me a result of 37 taste buds in feral pigeons, some stating between 27-59. None of the pigeon books I have state how many taste buds they have, nor could I find anything scientific on the net. If anyone has a good solid reference, please share it with me. I will be very grateful. Thank you! But I guess it is safe to say that pigeons haven’t got a lot of taste buds. Which makes me wonder what is it about certain foods that make Georgie go mad for them? If her taste range is so limited then popcorn and brioche must be really really delicious!! Maybe it is also the texture of these foods that she likes? A tad bit off subject: Interestingly, pigeons suck up their water instead of the usual ‘dip, tip and gulp’ you see in other bird species. The exception is the tooth-billed pigeon (Didunculus strigirostris), which scoops up its water because of its teeth. Pigeons have excellent eyesight and they can see all colours – including ultra-violet light (which humans cannot see). They can hear much lower frequencies than humans, as well as higher frequencies. As for their sense of smell, new research suggests that they use their sense of smell to find their way back home: Robin McKie, science editor The Observer, Sunday 6 August 2006 Scientists have discovered the secret of pigeons’ remarkable ability to navigate perfectly over journeys of several hundred miles. They do it by smell. Research found that pigeons create ‘odour’ maps of their neighbourhoods and use these to orient themselves. This replaces the idea that they exploited subtle variations in the Earth’s magnetic field to navigate. 
'This is important because it is the first time that magnetic sensing and smell have been tested side by side,' said Anna Gagliardo, of the University of Pisa, who led the research. The discovery that birds have an olfactory positioning system is the latest surprising discovery about bird migration. Birds know exactly when to binge on berries or insects to fatten themselves for long flights, and some species recognise constellations, which helps them to fly at night. Birds also travel immense distances: the average Manx shearwater travels five million miles during its life. Research into navigation has included an experiment in which robins were released with a patch over one eye – some on the right eye, some on the left. The left-eye-patched robins navigated well, but those with right-eye patches got hopelessly lost. 'It is a very strange finding,' said Graham Appleton, of the British Trust for Ornithology. 'It is clear the cues robins use to navigate are only detectable in one eye. Why that should be the case, I have no idea.' In the Pisa experiments, Gagliardo, working with Martin Wild of the University of Auckland, followed up experiments done in 2004, which showed that pigeons could detect magnetic fields. She argued that this did not mean they actually did. So in 24 young homing pigeons she cut the nerves that carried olfactory signals to their brains. In another 24 pigeons she cut the trigeminal nerve, which is linked to the part of the brain involved in detecting magnetic fields. The 48 birds were released 30 miles from their loft. All but one of those deprived of their ability to detect magnetic fields were home within 24 hours, indicating that it was not an ability that helped them to navigate. But those who had been deprived of their sense of smell fluttered all over the skies of northern Italy. Only four made it home. Gagliardo and her team conclude that pigeons read the landscape as a patchwork of odours.
Every spring, hundreds of millions of birds head north in order to exploit new resources. Gulls head to the Arctic to make use of the 24 hours of daylight prevailing there, while swallows and other birds leave Africa to exploit the British summertime. The navigation involved in these long journeys is still a cause of considerable debate among scientists. Among the main theories are suggestions that some birds remember visual maps of the terrain they fly over; that they follow the lines of Earth’s magnetic field; and that night-time flyers remember star maps of the sky. However, the discovery of pigeons’ prowess at exploiting smells is considered important because their navigational abilities are some of the most acute in the natural world. Pigeons excel at getting home when released in unfamiliar locations. That they achieve such accuracy using smell is all the more surprising.
Dec. 19, 2011 A new German-based project is setting out to rescue biodiversity data at risk of being lost, because they are not integrated in institutional databases, are kept in outdated digital storage systems, or are not properly documented. The project, run by the Botanic Garden and Botanical Museum Berlin-Dahlem, provides a good example for a GBIF recommendation to establish hosting centres for biodiversity data. This is one of a set of data management recommendations just published by GBIF. The team behind the German project called reBiND, or Biodiversity Needs Data, has started identifying threatened databases for archiving, and will make them accessible via the GBIF network. The focus is initially on specimen and observational data that are already digitized but that are not part of the documentation process of a museum or other institution. Examples include data from diplomas or PhD theses, generally stored on a computer hard drive or a disc and often in danger of getting lost because of lack of documentation. Examples of data being 'rescued' by the reBiND project are: - A private collection of observation data on beetles from meadow orchards in Southern Germany. The data had been stored by biologist Andreas Kohlbecker on a 1986 Mac 512 computer, using a Mac OS 6.8 operating system and making use of Filemaker II software from 1989. The data were rescued by running the operating system and software in Basilisk II Mac Emulator. - Extensive primary data from a PhD thesis on epiphytic moss vegetation in the Canary Islands. They had been stored in 1997 on obsolete 3.5-inch floppy discs using Excel files. They have been made readable using an external floppy drive and will be converted to XML format. These data are especially valuable because the study was the first to document moss communities on the islands taking microclimates and human impacts into account. 
The workflow developed by the reBiND project uses the Biological Collections Access Services (BioCASe) provider software package to transform data into XML files. BioCASe is one of the publishing tools through which data are published to the GBIF network. Repair software detects and corrects any errors introduced during the conversion process. ReBiND aims to enable users with a minimum of technical background knowledge to transform and archive their biodiversity data. At present, the XML files containing the rescued data are stored on the project's own server, in a database specifically designed for the purpose. The intention is to make the data discoverable and accessible globally through the GBIF network, and the team is working with GBIF Germany to bring this about. The project expects to take on data rescue work globally. The team is also working on a best practice handbook on the rescue and storage of threatened digital data. The three-year project is funded by the German Research Foundation (Deutsche Forschungsgemeinschaft). The GBIF position paper on data hosting infrastructure for primary biodiversity data looks at the rescuing and re-hosting of data stored in formats that are difficult to access. It emphasizes that the biodiversity community must adopt standards and develop tools to enable data discovery and thus help preserve data.
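As a toy illustration of the "transform to XML" step, here is a sketch using Python's standard library. The field names and values are invented; the project's real pipeline targets the far richer BioCASe/ABCD formats:

```python
import xml.etree.ElementTree as ET

# One rescued observation record (illustrative fields, not the ABCD schema).
record = {
    "taxon": "Carabus auratus",
    "locality": "Southern Germany",
    "year": "1986",
    "collector": "A. Kohlbecker",
}

unit = ET.Element("Unit")
for field, value in record.items():
    ET.SubElement(unit, field).text = value  # one child element per field

xml_text = ET.tostring(unit, encoding="unicode")
```

Serialising to a self-describing text format like this is exactly what makes such records robust against the obsolete-software problem the project describes.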
The preservation of living cells for transport and storage has become increasingly important as cell based treatments and living products have been developed for medical care. To treat a patient using living cells that can assist in wound repair or synthesize important chemicals in the body has tremendous medical potential; but this potential cannot be realised unless living cells can be stored and transported so they can get to the patient. Our research explores a new technique for preserving living cells in a dried state. If cells can be stabilised in room temperature glasses, then they can be easily stored and shipped, and then rehydrated for use at their point of need. We have dried and stored many types of cells including sperm, fibroblasts and hepatocytes, but there is a lot of research necessary to extend the lives of the dried cells and to extend this technology to many new and important cell types. Convective drying uses the flow of dried gas to remove water from the cells. Our work mostly focuses on this type of drying. It is a complex process because cells that are dried go through many possible phase transitions. By controlling the drying rates and by adding sugars to the cells we try to dry them into stable states; but experiments and models are continuously being developed and improved to create more uniform and controlled drying, so we can determine how best to keep the cells alive.
Let us now see how the Maxwell equations (17.2)–(17.5) predict the existence of electromagnetic waves. For simplicity we will consider a region of space and time in which there are no sources (i.e., we consider the propagation of electromagnetic waves in vacuum). Thus we set ρ = 0 = j in our space-time region of interest. Now all the Maxwell equations are linear and homogeneous.
Keywords: maxwell equation, space, time, electromagnetic, chapter 18
Torre, Charles G., "18 The Electromagnetic Wave Equation" (2012). Foundations of Wave Phenomena. Book 5.
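To make the abstract's goal concrete, here is the standard vacuum derivation these source-free equations lead to. This sketch is my addition, written in SI units; Torre's own equation numbering and unit conventions may differ:

```latex
% Source-free Maxwell equations combined into a wave equation (SI units)
\nabla \times (\nabla \times \vec{E})
  = -\,\frac{\partial}{\partial t}\left(\nabla \times \vec{B}\right)
  = -\,\mu_0 \epsilon_0 \frac{\partial^2 \vec{E}}{\partial t^2},
\qquad
\nabla \times (\nabla \times \vec{E})
  = \nabla\left(\nabla \cdot \vec{E}\right) - \nabla^2 \vec{E}
  = -\,\nabla^2 \vec{E}
```

Equating the two right-hand sides gives the wave equation \( \nabla^2 \vec{E} = \mu_0 \epsilon_0 \, \partial^2 \vec{E} / \partial t^2 \), describing waves that propagate at speed \( c = 1/\sqrt{\mu_0 \epsilon_0} \).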
Many soils of the arid west have a rich active azofying flora. This is due in no small measure to their composition. They are high in calcium and magnesium carbonate and contain a good supply of available phosphorus and potassium but have a low nitrogen content. They are poor in organic carbon; hence, their native supply of energy is limited. It is well-known that a liberal supply of rapidly decaying organic matter is beneficial, and this is being supplied to some soils in the form of manures. This will increase the nitrogen content of the soil. What effect will this increase have upon the nitrogen-fixing powers of the soil? It is the province of this bulletin to consider some of the results which have been obtained in seeking an answer to this question. Greaves, J. E. and Nelson, D. H., "Bulletin No. 185 - The Influence of Nitrogen in Soil on Azofication" (1923). UAES Bulletins. Paper 151.
In our last session we learnt about the header, footer, hgroup, figure and figure caption tags. One thing you can notice is that these tags are replacements for our traditional div tag in HTML 4, and are rather more specific and categorized. There are a few more tags before we move on to our next topic.
- <time> - Time tag is used to define a date or time.
- <section> - Section tag is used to represent a section within an article (behaves just like a subheading in an essay).
- <nav> - Nav tag is used for declaring a navigational section of an HTML document (similar to grouping multiple links at the bottom of the page).
- <progress> - Progress tag is used for representing the progress of a task (similar to a progress bar which shows how much you have downloaded or installed).
- <meter> - Meter tag is used for indicating a scalar measurement within a known range or a fractional value (representing data in parts or fractions).
Now open Notepad and try using the above tags. Save the file as index.html and open it in your browser. Have fun and stay tuned for the next tutorial. If you have any doubts, feel free to comment and we will get back to you. You can download the source file here. Stay tuned for more.
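A minimal index.html sketch that exercises all five tags; the page content and the progress/meter values are invented for illustration:

```html
<!DOCTYPE html>
<html>
<body>
  <nav>
    <a href="#home">Home</a> | <a href="#tutorials">Tutorials</a>
  </nav>
  <article>
    <section>
      <h2>Download status</h2>
      <p>Started on <time datetime="2013-05-01">1 May 2013</time>.</p>
      <p>Progress: <progress value="70" max="100">70%</progress></p>
      <p>Disk space used: <meter value="0.6" min="0" max="1">60%</meter></p>
    </section>
  </article>
</body>
</html>
```

Note how progress describes a changing task while meter describes a fixed measurement within a known range; that distinction is the main reason both tags exist.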
1. Click on the five different buttons in the applet (F=4, F=6, F=8, F=12, F=20). Can you name the five regular polyhedra?
2. Look at the top of the chart. How many faces does each polyhedron have? How many vertices?
3. Stop the rotation. Look at each of the polyhedra. What polygon do you see on the faces of each one?
4. Instead of using the stop button you can also try dragging the mouse on the figures to rotate them.
5. Click on the pull-down menu with choices color model, light model, metal model, wire model, and select wire model and view the cube. Can you count the edges? The vertices? Now change to the other four polyhedra and count the edges and vertices for all of them. Are there any patterns?
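One pattern worth looking for in question 5 (my addition; the worksheet itself does not name it) is Euler's polyhedron formula, V − E + F = 2, which holds for all five regular solids:

```python
# Face, edge, and vertex counts for the five regular (Platonic) solids.
solids = {
    "tetrahedron":  {"F": 4,  "E": 6,  "V": 4},
    "cube":         {"F": 6,  "E": 12, "V": 8},
    "octahedron":   {"F": 8,  "E": 12, "V": 6},
    "dodecahedron": {"F": 12, "E": 30, "V": 20},
    "icosahedron":  {"F": 20, "E": 30, "V": 12},
}

# V - E + F comes out the same for every solid in the table.
euler = {name: c["V"] - c["E"] + c["F"] for name, c in solids.items()}
```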
Assume you have a rigid body falling into the ocean at terminal velocity. Also assume that the rigid body does not break on impact. How could you figure out how much kinetic energy would be lost in the resulting splash? I promise this isn't a homework question, but an engineering thought experiment. I began to reason that water is incompressible, so the body has to displace the water. I know the volume of my object, so I could say I know how much water gets displaced. I also have a figure that the splash (radial jet) leaves at 20-30 times the impact speed. From there I can calculate a KE, but I don't believe it because this doesn't account for the surface tension. Any guidance would really be appreciated.

The final velocity in the fluid is zero if there is no gravity, so all kinetic energy is lost. If there is gravity, the body will sink with a velocity determined by the balance of gravity, the Archimedes (buoyancy) force, and the friction (drag) force. It has nothing to do with the initial velocity.
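For what it's worth, the back-of-envelope estimate the question describes can at least be made explicit. All numbers below are invented for illustration (the mass, the terminal speed, and a jet-speed multiplier taken from the quoted 20-30x range):

```python
mass = 80.0        # kg, assumed rigid body
v_terminal = 55.0  # m/s, assumed terminal velocity in air

# Kinetic energy carried into the water at impact.
ke_impact = 0.5 * mass * v_terminal ** 2  # joules

# If a radial jet leaves at k times the impact speed, a jet of mass m_jet
# carries 0.5 * m_jet * (k * v)^2.  Solving for the jet mass that would
# account for ALL of the impact energy shows how little water is needed:
k = 25.0
m_jet_for_all_ke = ke_impact / (0.5 * (k * v_terminal) ** 2)  # kg
```

With these numbers the jet mass comes out far below the body's mass, which is why the poster's estimate is so sensitive to the assumed jet speed; energy also goes into waves, cavity formation, sound, and (a small amount into) surface tension, none of which this sketch models.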
The way we control machines has changed dramatically over time. We have gone a long way from manual handles, knobs, strings and buttons to a great invention like the mind-control headsets, seen only in sci-fi movies not long ago. Well, it is no longer science fiction. The possibility of controlling things with the power of your thought is yet another scientific achievement that will become more and more available, until it becomes part of our daily lives. Think of it and it's done. Just like that. Futuristic ideas from sci-fi literature suggest that you will only have to "think" what you want and it will be instantly obeyed by machines that "read" your commands. It may seem abstract and distant, but this technology is already here and is becoming reality. How does it work? The human brain. A small "universe", confined within your skull. Some believers would call it God's greatest invention; scientists would call it nature's greatest achievement. It is our most complicated organ and the most complex thing in nature known so far. Modern scientists are on a quest to understand how this "bio-computer" neural network works. Although a lot has been discovered, we are still at a relatively early stage of understanding the brain. Neuroscience goes hand in hand with computer systems. Transmitting the brain's signals to a machine, and thus controlling the latter, opens up a world of possibilities. A lot of companies have presented their prototypes of "mind-reading" headsets. Basically, what most of them currently do is detect your brain waves' frequencies. Variations in those frequencies might be caused by changing mood, thoughts, emotions and much more. By interpreting such brain-wave fluctuations, headsets of the kind earlier used only in hospitals (for EEG) can make the connection to a machine, which could be put under the control of the brain. A true remote control is still not possible, though. However, a lot has been achieved in recent years.
People can now play simple games by focusing their attention and thoughts on a single point on the screen and allowing the device to translate simple commands. Controlling machines with a thought is just a small fragment of what this neurotechnology will be capable of. Monitoring the brain's activity and health status is another positive application of such devices; pre-emptive care for our mental health is a key to a good and long life. Some researchers have suggested that these headsets could influence a mind that is in a state of disorder. Healing psychological trauma would be a great feat, but we must consider that the same capability could be used for bad purposes. Tampering with someone's brain against his or her will would be an awful crime, so this is another thing scientists must keep in mind before going too far with mind-reading technologies. Exploring the brain could also lead us to mimic this organ's functionality and structure. This way we could create new types of computers based on its cortical structure. Implementing this natural design in artificial copies would lead to smarter, almost perfect computers. Playing God on a master level. There is almost no limit to how far this will go in the next decades. We are entering an exciting era in which this design will inevitably be improved further and further. Activities like turning your lights on and off, setting the oven's temperature, driving your car and many more, using only your brain, could be possible very soon if progress continues at this rate. Controlling helicopters with such devices, for instance, has already been demonstrated. We can now live the part of science-fiction writers' imagination where they could only dream of such "magical" gadgets. Read more about this great invention in Cnet's article.
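As a rough illustration of the frequency detection described above (every number here is an assumption of mine, not a detail from the article), an EEG-style device must first find the dominant frequency in a noisy voltage trace before it can map band activity to a command:

```python
import numpy as np

# Hypothetical sketch: find the dominant frequency of a synthetic
# "brain wave" trace, the basic step a headset performs before
# interpreting band activity (alpha, beta, ...) as commands.
np.random.seed(0)                 # reproducible noise
fs = 256                          # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
# a 10 Hz "alpha" oscillation buried in noise
signal = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
dominant = freqs[np.argmax(spectrum)]
print(round(dominant))            # → 10
```

A real headset would of course track several frequency bands at once and smooth over time, but the core operation is this spectral peak-picking.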
<urn:uuid:6bc0cd73-c891-4521-b2a2-f87cf62382ec>
2.71875
800
Truncated
Science & Tech.
51.583392
My hero helping to beat back the tide of stupidity and hysteria? Climate change theories face being debunked by Captain Cook and Darwin Weather reports made by famous explorers such as Captain James Cook and Charles Darwin are helping scientists to study climate change. Although there are numerous weather reports from the 18th and 19th centuries covering entire continents, the oceans have largely been uncharted territory. Now a new project has transcribed and digitised nearly 300 ships' logbooks dating back to the 1760s to help scientists fill in the gaps in the world's recent climate history. Until now they have been an untapped resource of scientific data. Some ship logs have already revealed evidence that climate change may not be as rapid as believed, with many charting little or no change in Arctic sea temperatures compared with today. However, the data from others may prove the opposite. Recordings taken by the HMS Isabella, which sought the Northwest Passage in 1818, show a significant decline in sea ice in Baffin Bay, Canada, when compared with today. This area of sea connects the Arctic and Atlantic Oceans. Each log contains accurate weather information, with daily and sometimes even hourly measurements of temperature, wind speed, air pressure and ice formation. Full article here.
<urn:uuid:66c8ccc8-cf0c-411d-8a05-5a197bd8edba>
2.765625
252
Personal Blog
Science & Tech.
35.983442
In a first for interplanetary exploration, NASA engineers said Wednesday that the $2.5 billion Curiosity robot rover successfully mined material from inside a rock on Mars, advancing the agency's search for conditions once favorable for microbial life. Latest Photos from NASA's Mars Rover Held in a scoop on the rover's mechanical arm, the tablespoon of pulverized gray rock offers planetary scientists their first sample from the planet's interior, where it may have been sheltered from the harsh surface chemistry and ultra-violet radiation. It may be several weeks before they can analyze the powder using the craft's onboard chemical test-kit. "This is the first time any robot has drilled in any rock on Mars" or anywhere else beyond Earth, said Louise Jandura, chief engineer for Curiosity's sample system at the Jet Propulsion Laboratory in Pasadena, Calif. The sample comes from a hole about 2.5 inches deep drilled on Feb. 8 in a flat plate of sedimentary bedrock that the space agency managers have named "John Klein" in memory of a Mars Science Laboratory deputy project manager who died in 2011. They picked the rock in Gale Crater, which the robot has been exploring since it landed there on Aug. 5, because they hope it holds evidence of wet environmental conditions long ago. "The rocks in this area have a really rich geological history," said sampling-system scientist Joel Hurowitz at JPL. "This is reason for us to be pretty excited here." To complete the drilling without mishap, the rover's operators on Earth had to devise new procedures to solve a series of computer software problems that affected the craft's motor's control systems, said JPL drill-systems engineer Scott McCloskey. They are also worried about potential problems with the welds that hold the scoop together. "None of these caused any harm to the rover," he said. Write to Robert Lee Hotz at email@example.com A version of this article appeared February 21, 2013, on page A7 in the U.S. 
edition of The Wall Street Journal, with the headline: Rover on Mars Extracts First Rock-Core Sample.
<urn:uuid:d6b6d764-68bb-425b-9ffb-4691c94aaf05>
3.1875
443
Truncated
Science & Tech.
48.62063
Comment: 17:47 - 19:12 (01:25) Source: Annenberg/CPB Resources - Earth Revealed - 21. Groundwater Keywords: pollution, groundwater, landfill, rainfall, "saturated zone", clay, leachate Our transcription: One way in which we pollute groundwater is through badly designed or improperly maintained landfills. When rain leaches the pollutants from a landfill into the saturated zone, a plume of polluted waste spreads out in the direction of groundwater flow. Fortunately, engineers are now developing more effective techniques to minimize the impact of landfills on groundwater quality. One of the techniques that has been developed recently is a landfill liner system. Its multiple layers act as a barrier between the garbage and the surrounding environment. The base of the liner system is simply a layer of impermeable clay, which is spread around the landfill and compacted by heavy equipment. While the clay itself might be adequate to prevent the seepage of polluted water, or "leachate," out of the landfill, extra precautions are taken. A synthetic high-density plastic liner is placed on top of the clay. Next, a half meter or so of permeable sand along with drainage pipes is laid across the plastic. Here the leachate accumulates and flows out from beneath the landfill for safe disposal elsewhere.
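The logic of the clay barrier can be put in rough numbers with Darcy's law, q = K * i. The conductivity values, gradient and porosity below are assumed textbook orders of magnitude, not figures from the transcript:

```python
# Hedged illustration: why compacted clay retards leachate.
# All parameter values are assumed round numbers.
SECONDS_PER_YEAR = 3.156e7

def traversal_years(K, thickness_m=1.0, gradient=1.0, porosity=0.4):
    """Approximate plug-flow time for leachate to cross a layer.

    K is hydraulic conductivity in m/s; Darcy flux q = K * gradient,
    and the seepage (pore) velocity is q / porosity."""
    seepage_velocity = K * gradient / porosity      # m/s
    return thickness_m / seepage_velocity / SECONDS_PER_YEAR

print(f"clay: {traversal_years(1e-9):.0f} years")   # → clay: 13 years
print(f"sand: {traversal_years(1e-4):.6f} years")   # hours, not years
```

Five orders of magnitude in conductivity turn hours of travel time into decades, which is why the clay layer, not the sand, is the barrier.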
<urn:uuid:21de66b7-e175-4e3e-82f6-242f18a62b03>
3.859375
321
Knowledge Article
Science & Tech.
34.375169
Over the past few years, we have made considerable progress in understanding the genetics of flowering time in maize. As a part of our studies, we have determined the knob composition of a number of inbred lines. These include some of the currently used inbreds, or those which have been used extensively in the present day maize germplasm in the U.S.A. The cytological analyses regarding the number, size and position of knobs were performed with acetocarmine squashes of microsporocytes mostly at pachytene stage during meiosis. With regard to size, the knobs were divided into six arbitrary classes. This classification was subjective since no attempt was made to measure the size of the knob. The data presented in Table 1 provide information on 35 inbred lines regarding the number, size and position of knobs together with some background information. The knob number ranged from 1 to 6 with most lines having 3 to 4 knobs. Of 23 possible locations, the knobs were found at 13 in the lines analyzed. The knob on the long arm of chromosome 7 (7L knob) was the most frequent, being present in all the lines (or races) except C103, F2, Apachito and Early Early Synthetic. Two other knobs were also found with high frequency; one on the long arm of chromosome 4 (4L) and the other on the short arm of chromosome 9 (9S). The knobs on chromosome 1 were the least frequent while no knob was observed on chromosome 10. Two Mexican flints (Apachito and Azu1) and the Argentine flint (Colorado Halidaisi majorado) have supernumerary B chromosomes in their genomes. Three flints, Wilbur's, Parker's and Tama (not shown in the table), were also analyzed, and as expected none had a cytologically observable knob. The Lancaster Sure Crop and its derivatives such as C103 and Mo17, the cold tolerant French inbred F2, and Early Early Synthetic (probably the earliest maturing line of maize) had only one knob. 
The inbreds related to the Oh43 family have the highest average knob number, while those related to Iowa Stiff Stalk Synthetic have an average knob number intermediate between those of the Lancaster derivatives and the Oh43 family. Table 1 (continued). Number, size and position of heterochromatic knobs in maize (Zea mays L.) inbreds and varieties. Sajjad R. Chughtai and Dale M. Steffensen
<urn:uuid:3f0b43ee-bff1-4344-a179-f01cbc59a635>
3.015625
553
Academic Writing
Science & Tech.
42.13809
Science Fair Project Encyclopedia An anti-radiation missile is a missile which is designed to detect and home in on the emissions of an enemy radar installation. Commonly carried by specialist "Wild Weasel" aircraft in the SEAD role, the primary purpose of this type of missile is to degrade enemy air defences in the first period of a conflict in order to increase the chances of survival for the following waves of strike aircraft. They can also be used to quickly shut down unexpected SAM sites during a raid. Aircraft which fly with strike aircraft to protect them from enemy air defences often also carry cluster bombs and are known as a SEAD escort. The cluster bombs can be used to ensure that after the ARM disables the SAM system's radar, the command post, missile launchers etc. are also destroyed to make sure the SAM site stays down. Early ARMs weren't particularly intelligent; they would simply home in on the source of radiation and explode when they got near. Smart SAM operators learned to turn their radar off when an ARM was fired at them, then turn it back on later, greatly reducing the missile's effectiveness. This led to the development of more advanced ARMs like the AGM-45 Shrike and AGM-88 HARM missiles, which have inertial guidance systems (INS) built in. This allows them to 'remember' where the radar last was if it is turned off and continue to home in on it. The missile is less likely to hit the radar in this case, as the earlier the radar is turned off (assuming it never turns back on), the more error is introduced into its course; however, the high speed of the HARM and its smokeless motor mean that it will probably close the distance significantly before anyone realises it has been fired, giving it a good chance of hitting even in this circumstance. Another design point of modern ARMs, other than range (which is hopefully greater than that of the SAM systems they will be targeted at), is speed.
Some SAM systems utilise huge missiles which are able to accelerate to incredible speeds (some as high as Mach 10), which means that if the ARM is to be useful in a 'duel' between an aircraft and a SAM site, the ARM should be able to fly to and hit the SAM site faster than the SAM can fly to and hit the aircraft. The AGM-88 HARM mostly succeeds in this area; its top speed of around Mach 4 is partly due to its altitude and speed advantage at launch over the SAM, which has to climb from rest at ground level, and partly due to its powerful rocket motor. This means that if the SAM launches a missile at the aircraft first, unless it is one of the very fastest SAM systems (SA-10/S-300 or SA-20/S-400), the aircraft is likely to win. The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
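The "duel" argument above is just a flight-time comparison. The article supplies Mach 4 for the HARM; the engagement range, the SAM's average speed and the flat Mach-to-m/s conversion below are my assumptions for illustration only:

```python
# Back-of-the-envelope duel timing. Only the HARM's Mach 4 figure
# comes from the text; everything else is an assumed round number.
MACH = 340.0                       # m/s, a rough sea-level conversion

def flight_time(range_m, avg_mach):
    """Time to cover range_m at a constant average Mach number."""
    return range_m / (avg_mach * MACH)

range_m = 40_000                   # assumed engagement distance
t_harm = flight_time(range_m, 4.0)    # ARM flying down to the radar
t_sam = flight_time(range_m, 2.5)     # SAM climbing out to the aircraft
print(f"HARM: {t_harm:.0f} s, SAM: {t_sam:.0f} s")   # → HARM: 29 s, SAM: 47 s
```

With these (invented) numbers the ARM arrives well before the SAM, which is the whole point of the speed requirement; a Mach 10 interceptor flips the result.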
<urn:uuid:d718d62f-9f12-4d6a-ae50-ddf5b30ed608>
2.984375
607
Knowledge Article
Science & Tech.
47.218551
Science Scoops: And…Reflections of Earth's Climate by Stephen James O'Meara Wondering how Earth's climate may be changing over the years? Well, how about keeping an eye on the Moon? Yup. That's what researchers did in the late 1920s, and that's what Philip R. Goode (New Jersey Institute of Technology) and his colleagues did more recently. Actually, Goode and his team are not directly watching the Moon. Instead, they're watching sunlight that has been reflected off the Earth and onto the Moon's “bright” side when it just happens to be in shadow, a phenomenon called “earthshine.” You see earthshine best when the Moon is in a crescent phase; it's the “old Moon in the new Moon's arms.” In essence, the scientists are using the Moon as a mirror, one that will reflect changes in Earth's atmosphere. Here's how it works. The amount of sunlight our planet bounces back into space reflects how much cloud, atmospheric dust, and snow is covering the Earth. Any radiation not being reflected is being absorbed. This means that if the Earth isn't being as reflective as normal, Earth's climate must be getting warmer. As reported in Sky & Telescope magazine, Goode says that on average the Earth reflects 30 percent of the sunlight hitting it. But recently our planet seems to be a bit brighter than it was in 1994–95. What does this mean? Is the Earth getting colder? Well, stay tuned, because the earthshine measurements will have to continue for many more years before the researchers can draw any conclusions. - In Lesson 2, you studied a diagram that showed the phases of the Moon as viewed from Earth. Why do you think that earthshine is brighter when the Moon is in a crescent phase? Write a sentence or two explaining your answer. [anno: Answers will vary but could include that because a smaller portion of the Moon is lighted during a crescent phase, a variation in that amount of light might be more noticeable.] 
- What are some of the other ways that people use or have used the phases of the Moon? List a few ways that people have used phases of the Moon. [anno: Answers will vary but could include that some people have tracked time using the phases of the Moon. Some people believe that the full Moon causes strange things to happen, and they might alter their plans because of a full Moon.]
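The article's 30 percent figure connects to climate through the standard energy-balance relation for a planet's effective blackbody temperature, T_e = (S(1 - A) / (4 sigma))^(1/4). This is a textbook relation, not a calculation from the article, and the solar-constant value is an assumed standard number:

```python
# Energy-balance sketch: a brighter Earth (higher albedo A) absorbs
# less sunlight, so its effective temperature drops.
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0             # solar constant, W m^-2 (assumed standard value)

def effective_temp(albedo):
    """Effective blackbody temperature of a planet with this albedo."""
    return (S * (1 - albedo) / (4 * SIGMA)) ** 0.25

print(round(effective_temp(0.30)))   # → 255
print(round(effective_temp(0.31)))   # a brighter Earth is colder
```

This is why a measured change of even a percent or two in earthshine brightness is climatically interesting, and why the researchers want many more years of data before drawing conclusions.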
<urn:uuid:f82ffa84-c59b-4eaa-ad7d-096d5300f804>
3.953125
515
Truncated
Science & Tech.
66.282231
What will happen when an F5 tornado hits a nuclear power plant? The most violent tornadoes, F5 category, have winds in excess of 300 MPH and can pick up automobiles and hurl them like missiles at speeds of over 100 MPH. Strong frame homes are ripped off their foundations and carried through the air like toys. The winds are powerful enough to strip the bark off the trunks of trees. Steel reinforced concrete buildings are heavily damaged. Can we design nuclear power plants that can safely survive this kind of storm, or would we end up with a radioactive tornado after a direct hit? This question is in the General Section. Responses must be helpful and on-topic.
<urn:uuid:6b0630e8-2fae-41d0-91e3-a0ec24e075e0>
3.15625
138
Q&A Forum
Science & Tech.
64.96375
These functions examine a character on the screen and return it, along with its rendition. These routines return the rendered complex character at the current position in the named window.
<urn:uuid:665d6ba6-f277-443d-a8fb-02901a372372>
2.9375
70
Documentation
Software Dev.
22.763333
The coupling laser is tuned to the energy difference between states 2 and 3, so it doesn't interact at all with the electrons in the ground state. Nothing happens. Then a second laser, called the "probe laser" that contains the information they want to capture is sent into the condensate. The probe laser is tuned to the energy difference between energy level 1 and 3. In the absence of the coupling laser beam, the probe laser pulse would be completely absorbed by the electrons in the ground state promoting them to the higher energy level and destroying the condensate. All information would be lost. Instead, the coupling laser prevents this absorption. The two laser beams shift the electrons into a quantum superposition of states 1 and 2, meaning that each individual electron is in both states at once. In a way, the two laser beams cancel each other out, like evenly matched competitors in a tug of war. This superposition state allows the light pulse to be imprinted in the atoms.
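In the standard three-level ("Lambda") picture of this effect, electromagnetically induced transparency, the superposition the two lasers create is the so-called dark state: a combination of states 1 and 2 that the light fields cannot couple to the absorbing level 3, which is exactly why the probe pulse is not absorbed. A hedged numerical check of that textbook picture (the Rabi frequencies are arbitrary; this is not the experiment's actual Hamiltonian):

```python
import numpy as np

# Three-level Lambda system, basis |1>, |2>, |3>.  The probe couples
# 1<->3 with Rabi frequency Wp; the coupling laser couples 2<->3 with Wc.
Wp, Wc = 0.3, 1.0                          # assumed Rabi frequencies
H = np.array([[0.0, 0.0, Wp],
              [0.0, 0.0, Wc],
              [Wp,  Wc,  0.0]])

# The "dark" superposition of states 1 and 2: the two excitation
# pathways to level 3 interfere destructively, so no amplitude ever
# reaches the absorbing state.
dark = np.array([Wc, -Wp, 0.0]) / np.hypot(Wp, Wc)

print(np.allclose(H @ dark, 0.0))          # → True
```

Each electron sitting in this state is in "both states at once" in the sense the passage describes, yet is invisible to both laser beams.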
<urn:uuid:231ceb7b-7bf2-4a4c-8333-aab3595ae126>
3.390625
199
Knowledge Article
Science & Tech.
51.164209
Antimatter is real stuff, not just science fiction. Antimatter is firmly in the realm of science, with some aspects even entering the technology realm. There is also a lot of speculation about what one might do with antimatter. What is Antimatter? Antimatter is matter with its electrical charge reversed. Anti-electrons, called "positrons," are like electrons but with a positive charge. Antiprotons are like protons with a negative charge. Positrons, antiprotons and other antiparticles can be routinely created at particle accelerator labs, such as CERN in Europe, and can even be trapped and stored for days or weeks at a time. And just last year, they made antihydrogen for the first time. It didn't last long, but they did it. Also, antimatter is NOT antigravity. Although it has not been experimentally confirmed, existing theory predicts that antimatter behaves the same under gravity as normal matter does. Technology is now being explored to make antimatter carrying cases, to consider using antimatter for medical purposes, and to consider how to make antimatter rockets. Right now it would cost about one hundred billion dollars to create one milligram of antimatter. One milligram is way beyond what is needed for research purposes, but that amount would be needed for large-scale applications. To be commercially viable, this price would have to drop by about a factor of ten thousand. And what about using antimatter for power generation? Not promising. It costs far more energy to create antimatter than the energy one could get back from an antimatter reaction. Right now standard nuclear reactors, which take advantage of the decay of radioactive substances, are far more promising as power-generating technology than antimatter. Something to keep in mind, too, is that antimatter reactions - where antimatter and normal matter collide and release energy - require the same safety precautions as nuclear reactions.
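The power-generation verdict above can be sanity-checked with E = mc². The arithmetic and the electricity price are mine, not the article's: annihilating 1 milligram of antimatter (with 1 milligram of ordinary matter) converts 2 milligrams of mass entirely to energy.

```python
# Rough energy-economics check (my arithmetic, not the article's).
c = 2.998e8                  # speed of light, m/s
mass_kg = 2e-6               # 1 mg antimatter + 1 mg ordinary matter

energy_j = mass_kg * c ** 2
kwh = energy_j / 3.6e6       # joules per kilowatt-hour
print(f"{energy_j:.2e} J  (~{kwh:,.0f} kWh)")   # → 1.80e+11 J  (~49,933 kWh)
```

At an assumed ~$0.10/kWh that energy is worth on the order of $5,000 of electricity, set against the quoted hundred-billion-dollar production cost, which is why antimatter power generation is "not promising."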
<urn:uuid:21c5bd62-fbb0-4b38-a256-5ff21da6eccd>
3.65625
411
Knowledge Article
Science & Tech.
33.651456
A North Atlantic Ocean circulation system weakened considerably in the late 1990s, compared to the 1970s and 1980s, according to a NASA study. Sirpa Hakkinen, lead author and researcher at NASA's Goddard Space Flight Center, Greenbelt, Md., and co-author Peter Rhines, an oceanographer at the University of Washington, Seattle, believe the slowing of this ocean current is an indication of dramatic changes in the North Atlantic Ocean climate. The study's results about the system, which moves water in a counterclockwise pattern from Ireland to Labrador, were published and can be found on the Science magazine website. The current, known as the subpolar gyre, has weakened in the past in connection with certain phases of a large-scale atmospheric pressure system known as the North Atlantic Oscillation (NAO). But the NAO switched phases twice in the 1990s, while the subpolar gyre current has continued to weaken. Whether the trend is part of a natural cycle or the result of other factors related to global warming is unknown. "It is a signal of large climate variability in the high latitudes," Hakkinen said. "If this trend continues, it could indicate reorganization of the ocean climate system, perhaps with changes in the whole climate system, but we need another good five to 10 years to say something like that is happening." Rhines said, "The subpolar zone of the Earth is a key site for studying the climate. It's like Grand Central Station there, as many of the major ocean water masses pass through from the Arctic and from warmer latitudes. They are modified in this basin. Computer models have shown the slowing and speeding up of the subpolar gyre can influence the entire ocean circulation system." Satellite data makes it possible to view the gyre over the entire North Atlantic basin. Measurements from deep in the ocean, using buoys, ships and new autonomous "robot" Seagliders, are important for validating and extending the satellite data.
Sea-surface height satellite data came from NASA's Seasat (July, August 1978), U.S. Navy's Geosat (1985 to 1988), and the European Space Agency's European Remote Sensing Satellite1/2 and NASA's TOPEX/Poseidon (1992 to present). Hakkinen and Rhines were able to reference earlier data to TOPEX/Poseidon data, and translate the satellite sea-surface height data to velocities of the subpolar gyre. The subpolar gyre can take 20 years to complete its route. Warm water runs northward through the Gulf Stream, past Ireland, before it turns westward near Iceland and the tip of Greenland. The current loses heat to the atmosphere as it moves north. Westerly winds pick up that lost heat, creating warmer, milder European winters. After frigid Labrador Sea winters, parts of the water mass in the current can become cold and dense enough to sink beneath the surface, and head slowly southward back to the equator. The cycle is sensitive to the paths of winter storms and to the buoyant fresh water from glacial melting and precipitation, all of which are experiencing great change. While previous studies have proposed winds resulting from the NAO have influenced the subpolar gyre's currents, this study found heat exchanges from the ocean to the atmosphere may be playing a bigger role in the weakening current. Using Topex/Poseidon sea-surface height data, the researchers inferred Labrador Sea water in the core of the gyre warmed during the 1990s. This warming reduces the contrast with water from warmer southern latitudes, which is part of the driving force for ocean circulation. The joint NASA-CNES (French Space Agency) Topex/Poseidon oceanography satellite provides high-precision data on the height of the world's ocean surfaces, a key measure of ocean circulation and heat storage in the ocean. 
NASA's Earth Science Enterprise is dedicated to understanding the Earth as an integrated system and applying Earth System Science to improve prediction of climate, weather and natural hazards using the unique vantage point of space. NASA, the National Oceanic and Atmospheric Administration, and the National Science Foundation funded the study.
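The step the article mentions, translating sea-surface-height data into gyre velocities, rests on geostrophic balance: a slope in the sea surface drives a current v = (g / f) * d(eta)/dx, where f is the Coriolis parameter. The slope used below is an assumed round number, not a value from the study:

```python
import math

# Geostrophic-balance sketch: how a sea-surface-height (SSH) slope
# maps to a surface current speed.
OMEGA = 7.2921e-5          # Earth's rotation rate, rad/s
g = 9.81                   # gravitational acceleration, m/s^2

def geostrophic_speed(d_eta_m, dx_m, lat_deg):
    """Surface current speed from an SSH difference d_eta over dx."""
    f = 2 * OMEGA * math.sin(math.radians(lat_deg))   # Coriolis parameter
    return (g / f) * (d_eta_m / dx_m)

# Assumed example: 10 cm of SSH drop across 500 km at 60 N
v = geostrophic_speed(0.10, 500e3, 60.0)
print(f"{v * 100:.1f} cm/s")     # → 1.6 cm/s
```

Centimetre-level height differences thus map to centimetres-per-second currents, which is why altimeters as precise as TOPEX/Poseidon were needed to monitor the gyre at all.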
<urn:uuid:f27ba744-6b04-4f4a-b0ea-20760fad38e5>
3.515625
860
Knowledge Article
Science & Tech.
37.902931
I would like some input into learning x86 assembly language on a PC. My question is - How do you write beginning assembly language programs without having to navigate the Windows OS? (a.k.a. Win32... just a minor word of caution - depending on what OS you're loading after your module has finished up, you will likely need to switch the CPU back into real mode for the hand-off to the OS to work... I'm done with the C part.. int i = 0; int j = 0; int z = 0; int length = 0; int carry = 0; printf("Enter first number: ");... This is the C/C++ board, so I assume that's the language. The assembler board can be found here. result_str = sum / 10 + '0'; result_str = sum % 10 + '0'; result_str = sum; You can work with numbers as large as you want by assigning them to strings. Then you simply work backwards through the strings, character by character performing your math tasks. Adding two... : What do you need help with, more specifically? i want to know what to use instead instead of int.. and also to know how to assign the sum by modifying the assembly code... What do you need help with, more specifically? can someone please help me with this? : : you cannot call INT functions from win32. win32 is protected mode, : : INT functions are real mode. : Thanks for your prompt : So is that means that... Hi. I'm new on these forums ^^, but I already got some Delphi experience and a little assembly experience too. I'd like to know how can i capture the packets sent by a process to a remote computer.... : You can easily load a bitmap using either of the WinAPI functions : LoadImage or LoadBitmap. Look for their specification on MSDN. : For jpgs I'd recommend that you find a library. I don't know... You can easily load a bitmap using either of the WinAPI functions LoadImage or LoadBitmap. Look for their specification on MSDN. For jpgs I'd recommend that you find a library. I don't know of any... Just wondering if anyone has played around with these libraries from ASM code? 
Sys_basher seems to now... : You can also make the whole thing in assembly. : Download fasm, and there's some examples of how to make win32 api And I am creating high level stuff for FASM too - it will be fun... : : I am looking for a message board for Win32 help. I use assembly. : : Thanks. : You could use one of the following board, whichever you find : I am looking for a message board for Win32 help. I use assembly. You could use one of the following board, whichever you find appropriate: : No probs.I got the solution. I have exactly the same problem. I wrote a Com Assembly with C#, registered it with regasm and pushed it into the gac with gacutil successful. So far so... : assemblyName is the incoming assembly name from the user. The problem here is, the assembly gets loaded to MyAppDomain and DeafaultAppdomain. This is... : hello, for some reason I cannot "step into" an MFC function in debug mode. My project is unicode enabled. What is the likely cause? You probably wanted to step into those 'magical' MFC functions...
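One thread above sketches adding arbitrarily large numbers held as strings, working backwards digit by digit with a carry. A self-contained sketch of that algorithm (names are mine; Python integers are already arbitrary-precision, so this exists purely to show the technique the posters describe):

```python
def add_big(a: str, b: str) -> str:
    """Add two non-negative decimal numbers stored as strings,
    right to left, carrying as described in the thread."""
    i, j = len(a) - 1, len(b) - 1
    carry, digits = 0, []
    while i >= 0 or j >= 0 or carry:
        da = ord(a[i]) - ord('0') if i >= 0 else 0
        db = ord(b[j]) - ord('0') if j >= 0 else 0
        carry, d = divmod(da + db + carry, 10)
        digits.append(chr(d + ord('0')))
        i, j = i - 1, j - 1
    # digits were produced least-significant first
    return ''.join(reversed(digits))

print(add_big("99999999999999999999", "1"))  # → 100000000000000000000
```

The `sum % 10 + '0'` / `sum / 10` lines in the quoted C fragment are the same per-digit step; the carry variable survives between iterations exactly as here.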
<urn:uuid:b51aafe8-9319-4484-b4c7-c47bb667d470>
2.734375
770
Comment Section
Software Dev.
77.052919
Context. Dwarf spheroidal (dSph) galaxies are the least luminous, least massive galaxies known. Recently, the number of observed galaxies in this class has greatly increased thanks to large surveys. Determining their properties, such as mass, luminosity and metallicity, provides key information in our understanding of galaxy formation and evolution. Aims. Our aim is to provide as clean and as complete a sample as possible of red giant branch stars that are members of the Hercules dSph galaxy. With this sample we explore the velocity dispersion and the metallicity of the system. Methods. Strömgren photometry and multi-fibre spectroscopy are combined to provide information about the evolutionary state of the stars (via the Strömgren c₁ index) and their radial velocities. Based on this information we have selected a clean sample of red giant branch stars, and show that foreground contamination by Milky Way dwarf stars can greatly distort the results. Results. Our final sample consists of 28 red giant branch stars in the Hercules dSph galaxy. Based on these stars we find a mean photometric metallicity of -2.35 ± 0.31 dex, which is consistent with previous studies. We find evidence for an abundance spread. Using those stars for which we have determined radial velocities we find a systemic velocity of 45.20 ± 1.09 km s⁻¹ with a dispersion of 3.72 km s⁻¹; this is lower than values found in the literature. Furthermore we identify the horizontal branch and estimate the mean magnitude of the horizontal branch of the Hercules dSph galaxy to be V₀ = 21.17 ± 0.05, which corresponds to a distance of 147 +8/-7 kpc. Conclusions. When studying sparsely populated and/or heavily foreground contaminated dSph galaxies it is necessary to include knowledge about the evolutionary stage of the stars. This can be done in several ways. Here we have explored the power of the c₁ index in Strömgren photometry.
This index is able to clearly identify red giant branch stars redder than the horizontal branch, enabling a separation of red giant branch dSph stars and foreground dwarf stars. It is also capable of correctly identifying both red and blue horizontal branch stars. We have shown that a proper cleaning of the sample results in a smaller value for the velocity dispersion of the system. This has implications for galaxy properties derived from such velocity dispersions. Copyright 2009 EDP Sciences. First published in Astronomy and Astrophysics, Vol 506, Issue 3, pp. 1147-1168, published by EDP Sciences. The original article can be found at http://dx.doi.org/10.1051/0004-6361/200912718.
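The quoted distance follows from the horizontal-branch magnitude via the standard distance modulus, d[pc] = 10^((m - M + 5) / 5). The absolute magnitude below is an assumed value for a metal-poor horizontal branch, chosen to show the relation, not a number taken from the paper:

```python
# Distance-modulus check (standard relation; M_hb is an assumption).
m_hb = 21.17          # dereddened HB magnitude from the abstract
M_hb = 0.33           # assumed absolute V magnitude of a metal-poor HB

d_pc = 10 ** ((m_hb - M_hb + 5) / 5)
d_kpc = d_pc / 1000
print(round(d_kpc))   # → 147
```

With this assumed M_hb the relation reproduces the abstract's ~147 kpc; the paper's own +8/-7 kpc error bar comes mostly from the uncertainties in both magnitudes.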
<urn:uuid:e527a706-e965-475b-b5b5-73c0ea84001e>
2.734375
610
Academic Writing
Science & Tech.
55.647352
This image shows a cross section through the earth's crust and upper mantle showing lithosphere plates (made of the crust layer and the top part of the mantle) moving over the asthenosphere (upper mantle). Old Rocks Give New Clues about Ancient Earth! News story originally written on July 11, 2002 Scientists have found 2.5 billion year old rocks in China that help us understand more about what the Earth was like long ago. These rocks formed when Earth was young, during the Archean Age. During the Archean, there were no oceans and no continents, and Earth was a very hot place. Earth cooled slowly and eventually it was cool enough for part of the upper mantle layer (the asthenosphere), below the Earth's crust, to flow just a little, moving the giant plates that lie above. This is called plate tectonics and it still happens today. In fact, plate tectonics is a very important process on Earth today, making mountains grow high, volcanoes spew lava, and earthquakes rumble. When did this important process begin to shape the surface of our Earth? Scientists have been unsure of when plate tectonics actually began. Until recently, there wasn't any evidence that plates moved before 2 billion years ago. We thought Earth was still too hot for rocky plates to form at the surface and slide on the asthenosphere. But the 2.5 billion year old rocks found in China tell a different story. The newly discovered rocks formed long ago in the upper part of the mantle layer, underneath an ancient seafloor. These rocks show evidence that, 2.5 billion years ago, the upper part of the mantle layer would melt a little bit and then flow a little bit. When the rocks in this layer did this, plates could move at the Earth's surface. So, these rocks show that the plates of the lithosphere were active and sliding around long before we thought they could!
<urn:uuid:484343c2-d3c5-47d7-a3f7-f0489fa88c89>
4.125
768
Content Listing
Science & Tech.
63.830631
This is a drawing of the evolution of the interior of a giant planet. Click on image for full size Image from: The New Solar System How the Interior of Uranus Formed The drawing shows a possible history of the inside of giant planets. As the planets drew material from the solar cloud, bits of heavy rock collected inside the forming planet, as shown in figure A. Once the planet finished forming, these heavy bits of rock fell into the middle of the planet, as illustrated in figure B. As shown in the picture, the gas part of the planet is much bigger than the rocky part. That is because the amount of gas and ice which came to Uranus in the beginning depended upon where Uranus was in the original solar cloud. Eventually, the heavy, rocky material at the center became a core, as illustrated in figure C. Leftover heat from this process of Uranus' forming may still influence the motions in Uranus' atmosphere.
<urn:uuid:1111c550-c642-45b6-baf0-5613fd140661>
3.96875
619
Knowledge Article
Science & Tech.
57.809759
Wednesday, October 03, 2012 One of the most fascinating and beautiful sites on the internet is the Astronomy Picture of the Day . It has been online almost since the beginning of the World Wide Web, and is still breathtaking in beauty and simplicity. Each day features a fabulous photograph and sometimes a video, along with a brief description packed with information. This kind of condensed informative and readable information is not easy to write, and the APOD writers make it look graceful and effortless. The scope of the pictures ranges from local to microscopic to cosmic. Astronomy Picture of the Day (APOD) is originated, written, coordinated, and edited since 1995 by Robert Nemiroff and Jerry Bonnell. The APOD archive contains the largest collection of annotated astronomical images on the internet. In real life, Bob and Jerry are two professional astronomers who spend most of their time researching the universe. Bob is a professor at Michigan Technological University in Houghton, Michigan, USA, while Jerry is a scientist at NASA's Goddard Space Flight Center in Greenbelt, Maryland USA. They are two married, mild and lazy guys who might appear relatively normal to an unsuspecting guest. Together, they have found new and unusual ways of annoying people such as staging astronomical debates. Most people are surprised to learn that they have developed the perfect random number generator. Here are a selection of recent pictures to give you a taste of the site. The caption follows each picture. It is well worth checking APOD on a daily basis for a bit of inspiration and to expand out from the usual narrow focus of daily life. M16: Pillars of Creation July 22, 2012 Image Credit: J. Hester, P. Scowen (ASU), HST, NASA Explanation: It was one of the most famous images of the 1990s. This image, taken with the Hubble Space Telescope in 1995, shows evaporating gaseous globules (EGGs) emerging from pillars of molecular hydrogen gas and dust. 
The giant pillars are light years in length and are so dense that interior gas contracts gravitationally to form stars. At each pillar's end, the intense radiation of bright young stars causes low density material to boil away, leaving stellar nurseries of dense EGGs exposed. The Eagle Nebula, associated with the open star cluster M16, lies about 7000 light years away. The pillars of creation were imaged again in 2007 by the orbiting Spitzer Space Telescope in infrared light, leading to the conjecture that the pillars may already have been destroyed by a local supernova, but light from that event has yet to reach the Earth. Be Honest: Have you seen this image before? An Ancient Stream Bank on Mars October 2, 2012 Image Credit: NASA, JPL-Caltech, MSSS Explanation: Fresh evidence of an ancient stream has been found on Mars. The robotic rover Curiosity has run across unusual surface features that carry a strong resemblance to stream banks on Earth. Visible in the above image, for example, is a small overhanging rock ledge that was quite possibly created by water erosion beneath. The texture of the ledge appears to be a sedimentary conglomerate, the dried remains of many smaller rocks stuck together. Beneath the ledge are numerous small pebbles, possibly made smooth by tumbling in and around the once-flowing stream. Pebbles in the streambed likely fell there as the bank eroded. Circled at the upper right is a larger rock possibly also made smooth by stream erosion. Curiosity has now discovered several indications of dried streambeds on Mars on its way to its present location where it will be exploring the unusual conjunction of three different types of landscape. A Solar Filament Erupts September 17, 2012 Image Credit: NASA's GSFC, SDO AIA Team Explanation: What's happened to our Sun? Nothing very unusual -- it just threw a filament. At the end of last month, a long standing solar filament suddenly erupted into space producing an energetic Coronal Mass Ejection (CME).
The filament had been held up for days by the Sun's ever-changing magnetic field and the timing of the eruption was unexpected. Watched closely by the Sun-orbiting Solar Dynamics Observatory, the resulting explosion shot electrons and ions into the Solar System, some of which arrived at Earth three days later and impacted Earth's magnetosphere, causing visible aurorae. Loops of plasma surrounding an active region can be seen above the erupting filament in the ultraviolet image. If you missed this auroral display, please do not despair -- over the next two years our Sun will be experiencing a solar maximum of activity which promises to produce more CMEs that induce more Earthly auroras. Other pictures you might enjoy are Hurricane Paths on Planet Earth, which shows the path of every known hurricane round the globe since 1851. Or see this dramatic view of a lightning storm around an erupting volcano: Ash and Lightning Above an Icelandic Volcano, taken during the 2010 volcanic eruption in Eyjafjallajokull glacier, Iceland. Also check out the Orion Nebula: The Hubble View and the Cat's Eye Nebula. The Astronomy Picture of the Day is always worth a visit. To learn more visit the Astronomy Picture of the Day's Educational Links for a variety of sources appealing to every level of interest in astronomy. To find books and DVDs on astronomy in the library look in the section with the Dewey numbers 520 - 529, especially in the 523 section. Wednesday, April 04, 2012 A gush of bird-song, a patter of dew, A cloud, and a rainbow's warning, Suddenly sunshine and perfect blue-- An April day in the morning. - Harriet Prescott Spofford, April After a brief taste of summer, Mistress Weather returned us to a more typical spring season. Gardeners across the region are getting their hands dirty preparing vegetable gardens for planting and finding early annuals to bring color to April's rainy days.
My husband and I still have a LOT of work to do with the landscaping and yard around the house we bought last year and the work will probably take us a few years to make it as we dream it can be. We do plan to have a small kitchen garden this year, though, and I have been working my way through books about smaller gardens to figure out the best crops for my space. Now I just have to hope for a few days without rain so I can get the planting started… Rain is not the only form of shower to watch for in April. The Lyrid Meteor Shower happens in the skies April 20 – April 21, with best viewing predicted for pre-dawn hours. Set your alarm for 2:00 am (or just make it a “star party” and stay up all night!), grab a blanket and a lawn chair (or an old camping mattress like my mom and I used to do) and head outdoors to take in the show. Think it might be too bright in your neighborhood to see anything? Join the National Capital Astronomers group at their first Exploring the Sky event for 2012 on April 21. If you miss this one, don’t worry—there are other meteor showers throughout the year, most notably the Perseids in August. Tuesday, March 15, 2011 After writing entries for this blog for over two years, I have learned to save links to possible sites of interest as I discover them. When blog time rolls around, I check my bookmarks and see if any themes emerge. Some things just don't fit anywhere, and this has left some oddments lingering in my files. Hey! Maybe that's a category in itself. So, just for fun, here are some sites I've come across in my travels through the Web and wanted to share. Ever wondered what would happen if you mashed up a famous science fiction book with a famous picture book? Here it is--Goodnight Dune. That is actually a good book compared to the one I'm now going to tell you about. Possibly the worst picture book ever written is Little Kettle-head by Helen Bannerman.
Yes, the same Helen Bannerman who wrote and illustrated the controversial book Little Black Sambo. At least Little Black Sambo had a coherent plot--this one is plain weird. Little Kettle-head should be given to everyone who thinks they can write a children's book as an example of what not to do--not ever, ever. It is so creepy that one doesn't know where to start to enumerate its failings. I didn't know whether to laugh or cry. (Okay, I admit it--there were screams of laughter emanating from my office, once I was able to get my jaw off the floor. But I have a sick sense of humor.) I think it's time to get back to the world of good books, now. Did you know that Tove Jansson of Moomintroll fame also illustrated The Hobbit by J.R.R. Tolkien? Click on the first picture to enlarge it and use the arrows to move through the slideshow. Speaking of Tolkien, The Lord of the Rings is number 10 in this list of Top Ten Most Overrated Novels. Maybe you don't agree with this list. Let them know--they appear to be still taking comments. I've had many hamsters during my career as a children's librarian, and currently I have hermit crabs in my office (don't ask) but, early on this year, The Library of Congress had a hawk take up residence in the main reading room. One thing no library I've worked in has had--zombies. But you never know these days. When confronted with a zombie outbreak is your library prepared? Here's a Zombie Emergency Preparedness Plan for libraries. Libraries have expanded the scope of their collections greatly over the years from new media formats to items such as puzzles and tools. This, however, takes the cake, although I'm vaguely disgusted to talk of food in the same breath as introducing you to the largest collection of belly button lint in the world. All together now: EWWWWWWWWWWWWWWWWWWWW! Way to go Graham Barker for further eroding the reputation of librarians.
Speaking of food and too much time on your hands (that was implied by the above example, right?) some people are creating animated MRIs of fruit to produce living fractals. Yeah, you heard me. Cool, huh? There's one at the beginning of this blog post. That's a watermelon, believe it or not. From the innerverse to the outerverse: Do you want to explore the universe and not leave the house? Try Celestia a free space simulation. Whew! I needed a break. There, we've gone from the ridiculous to the sublime; science fiction to real science. Aren't you glad? Montgomery County Public Libraries
<urn:uuid:e51dfe33-35b3-46c2-9309-23e65f62b606>
2.875
2,260
Personal Blog
Science & Tech.
54.38894
This paper presents the results of hydroacoustic noise research in three large European rivers: the Danube, the Sava, and the Tisa. Noise in these rivers was observed over a period of ten years, which covers the full annual variation in hydrological and meteorological conditions (flow rate, speed of flow, wind speed, etc.). Noise spectra are characterized by broad maxima at frequencies between 20 and 30 Hz, and a relatively constant slope toward higher frequencies. The spectral level of the noise varies in time within relatively wide limits. At low frequencies, below 100 Hz, the dynamics of the noise level are correlated with the dynamics of water flow and speed. At higher frequencies, noise spectra are mostly influenced by human activities on the river and on the riverbanks. The influence of wind on noise in rivers is complex due to the annual variation of the river surface, and it is less pronounced than in oceans, seas, and lakes.
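Noise spectra of the kind the abstract describes are commonly estimated by averaging windowed periodograms over segments of the recorded signal (a Welch-style estimate). The sketch below is illustrative only, not the authors' method: it builds a synthetic signal with extra energy near 25 Hz, mimicking the reported 20-30 Hz maximum, and recovers the spectral peak. All parameters (sampling rate, segment length, tone amplitude) are assumptions.

```python
import numpy as np

def averaged_periodogram(x, fs, seg_len=1024):
    """Welch-style PSD estimate: average windowed periodograms over
    non-overlapping segments of the signal x sampled at fs Hz."""
    n_seg = len(x) // seg_len
    window = np.hanning(seg_len)
    scale = fs * np.sum(window ** 2)          # PSD normalization factor
    psd = np.zeros(seg_len // 2 + 1)
    for i in range(n_seg):
        seg = x[i * seg_len:(i + 1) * seg_len] * window
        psd += np.abs(np.fft.rfft(seg)) ** 2 / scale
    psd /= n_seg
    psd[1:-1] *= 2                            # fold to a one-sided spectrum
    return np.fft.rfftfreq(seg_len, d=1.0 / fs), psd

# Synthetic "river noise": white background plus a component near 25 Hz,
# standing in for the low-frequency maximum described in the abstract.
fs = 1000.0
t = np.arange(0, 60, 1.0 / fs)
rng = np.random.default_rng(0)
x = rng.standard_normal(t.size) + 5.0 * np.sin(2 * np.pi * 25.0 * t)

freqs, psd = averaged_periodogram(x, fs)
print(f"spectral peak near {freqs[np.argmax(psd)]:.1f} Hz")
```

With a 1024-sample segment at 1 kHz sampling, the frequency resolution is about 1 Hz, enough to place the peak inside the 20-30 Hz band.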
<urn:uuid:041ecb79-b51a-459c-8d57-0186de43a4fb>
3.015625
186
Academic Writing
Science & Tech.
40.775092
Climate Change: News and Comments Full steam ahead for the real story of 20th Century warming Although It seems a strange thing to celebrate, the Titanic Festival in Belfast, where the ship was built, will very soon mark the 100th anniversary of the liner’s foundering on 15 April 1912 after hitting a south-wandering iceberg, with the loss of a multitude of passengers and crew. Comparing the £100-million Titanic complex newly built in Belfast with the Guggenheim Museum in Bilbao, the travel writer Simon Calder has commented, “There is a great shipbuilding heritage, it is a divided city, but the Guggenheim is great on the outside but rubbish on the inside – unlike the Titanic building.” What’s more, James Cameron’s movie “Titanic” has been remastered in 3D for the centenary. Time then for me to dig out some slides that I’ve used off and on in lectures since 1999 as an illustration of Henrik Svensmark’s cosmic rays in action, controlling our climate. But first, just to show that I’m not being kooky, here’s a graph from a 2000 paper by E. N. Lawrence of the UK Meteorological Office. “The Titanic disaster – a meteorologist’s perspective” related iceberg abundance at low latitudes to a scarcity of sunspots. And Steven Goddard recalls a much older article, from the Chicago Tribune in 1923, that also linked icebergs with sunspots The notion that the Sun is dimmer when there are few sunspots goes right back to William Herschel at the beginning of the 19th Century. The trouble is that the variations in solar brightness, as measured by satellites, are too small to explain the strong influence of the Sun on climate as recorded over thousands of years, and continuing into the 21st Century. That’s where Svensmark’s discovery of 16 years ago comes in, with the amplifier. Cosmic rays coming from the Galaxy are more intense when there are fewer sunspots and they increase the global cloud cover, so cooling the world. 
Some preliminary comments before showing my own slides about cosmic rays and the fate of Titanic. Of course the disaster also involved several elements of shameful seamanship, but the fact remains that large icebergs abounded much further south than usual in the spring of 1912. Secondly, I prepared the slides so long ago that I can't recall the data sources. If challenged, I expect I could dig them out, and I do remember that the picture is from the Illustrated London News. There was no direct recording of cosmic ray variations in those days. Indeed, Victor Hess was busy discovering them at that very time. So we have to make do with the geomagnetic activity index (called aa in the second slide) as an inverse indicator of cosmic ray influx, and with the counts of beryllium-10 and carbon-14, which are made by cosmic rays. Otherwise the slides should speak for themselves. The theme music of Cameron's film "Titanic" is entitled "Full Steam Ahead". Although the ship came to an abrupt halt, the same has not happened to Svensmark's theory. As plenty of other posts on this blog will show you, its bow wave keeps sweeping aside the attempts to falsify it. And fresh energy builds up more and more speed as all the pieces of the hypothesis fall into place, from quantum chemistry to the shape of the Milky Way Galaxy. It's a truly titanic idea, threatening disaster for the multitude who ignore the natural drivers of climate change, and shame for the misguided folk on the bridge who peer at computer screens instead of looking out of the window. E.N. Lawrence, Weather (Roy. Met. Soc.), Vol. 55, March 2000. See also this from NOAA.
<urn:uuid:fb9d6543-3353-46e5-96f4-334f39bddc69>
3.40625
821
Personal Blog
Science & Tech.
53.075293
A biosignature is any substance – such as an element, isotope, molecule, or phenomenon – that provides scientific evidence of past or present life. Measurable attributes of life include its complex physical and chemical structures and also its utilization of free energy and the production of biomass and wastes. Due to its unique characteristics, a biosignature can be interpreted as having been produced by living organisms; however, biosignatures should not be considered definitive because there is no way of knowing in advance which ones are universal to life and which ones are unique to the peculiar circumstances of life on Earth. In geomicrobiology The ancient record on Earth provides an opportunity to see what geochemical signatures are produced by microbial life and how these signatures are preserved over geologic time. Some related disciplines such as geochemistry, geobiology, and geomicrobiology often use biosignatures to determine if living organisms are or were present in a sample. These possible biosignatures include: (a) microfossils and stromatolites; (b) molecular structures (biomarkers) and isotopic compositions of carbon, nitrogen and hydrogen in organic matter; (c) multiple sulfur and oxygen isotope ratios of minerals; and (d) abundance relationships and isotopic compositions of redox sensitive metals (e.g., Fe, Mo, Cr, and rare earth elements). For example, the particular fatty acids measured in a sample can indicate which types of bacteria and archaea live in that environment. Another example is the long-chain fatty alcohols with more than 23 carbon atoms that are produced by planktonic bacteria. When used in this sense, geochemists often prefer the term biomarker. A further example is the presence of straight-chain lipids in the form of alkanes, alcohols and fatty acids with 20-36 carbon atoms in soils or sediments; in peat deposits, these are an indication of an origin in the epicuticular wax of higher plants.
Life processes may produce a range of biosignatures such as nucleic acids, lipids, proteins, amino acids, kerogen-like material and various morphological features that are detectable in rocks and sediments. Microbes often interact with geochemical processes, leaving features in the rock record indicative of biosignatures. For example, bacterial micrometer-sized pores in carbonate rocks resemble inclusions under transmitted light, but have distinct size, shapes and patterns (swirling or dendritic) and are distributed differently from common fluid inclusions. A potential biosignature is a phenomenon that may have been produced by life, but for which alternate abiotic origins may also be possible. In astrobiology Astrobiological exploration is founded upon the premise that biosignatures encountered in space will be recognizable as extraterrestrial life. The usefulness of a biosignature is determined, not only by the probability of life creating it, but also by the improbability of nonbiological (abiotic) processes producing it. An example of such a biosignature might be complex organic molecules and/or structures whose formation is virtually unachievable in the absence of life. For example, some categories of biosignatures can include the following: cellular and extracellular morphologies, biogenic substance in rocks, bio-organic molecular structures, chirality, biogenic minerals, biogenic stable isotope patterns in minerals and organic compounds, atmospheric gases, and remotely detectable features on planetary surfaces, such as photosynthetic pigments, etc. Biosignatures need not be chemical, however, and can also be suggested by a distinctive magnetic biosignature. Another possible biosignature might be morphology since the shape and size of certain objects may potentially indicate the presence of past or present life. 
For example, microscopic magnetite crystals in the Martian meteorite ALH84001 were the longest-debated of several potential biosignatures in that specimen because it was believed until recently that only bacteria could create crystals of their specific shape. However, anomalous features discovered that are "possible biosignatures" for life forms would be investigated as well. Such features constitute a working hypothesis, not a confirmation of detection of life. Concluding that evidence of an extraterrestrial life form (past or present) has been discovered, requires proving that a possible biosignature was produced by the activities or remains of life. For example, the possible biomineral studied in the Martian ALH84001 meteorite includes putative microbial fossils, tiny rock-like structures whose shape was a potential biosignature because it resembled known bacteria. Most scientists ultimately concluded that these were far too small to be fossilized cells. A consensus that has emerged from these discussions, and is now seen as a critical requirement, is the demand for further lines of evidence in addition to any morphological data that supports such extraordinary claims. Scientific observations include the possible identification of biosignatures through indirect observation. For example, electromagnetic information through infrared radiation telescopes, radio-telescopes, space telescopes, etc. From this discipline, the hypothetical electromagnetic radio signatures that SETI scans for would be a biosignature, since a message from intelligent aliens would certainly demonstrate the existence of extraterrestrial life. Over billions of years, the processes of life on a planet would result in a mixture of chemicals unlike anything that could form in an ordinary chemical equilibrium. For example, large amounts of oxygen and small amounts of methane are generated by life on Earth. 
The presence of methane in the atmosphere of Mars indicates that there must be an active source on the planet, as it is an unstable gas. Furthermore, current photochemical models cannot explain the presence of methane in the atmosphere of Mars and its reported rapid variations in space and time. Neither its fast appearance nor disappearance can be explained yet. To rule out a biogenic origin for the methane, a future probe or lander hosting a mass spectrometer will be needed, as the isotopic proportions of carbon-12 to carbon-14 in methane could distinguish between a biogenic and non-biogenic origin. In June, 2012, scientists reported that measuring the ratio of hydrogen and methane levels on Mars may help determine the likelihood of life on Mars. According to the scientists, "...low H2/CH4 ratios (less than approximately 40) indicate that life is likely present and active." Other scientists have recently reported methods of detecting hydrogen and methane in extraterrestrial atmospheres. The planned ExoMars Trace Gas Orbiter to be launched in 2016 to Mars, will study atmospheric trace gases and will attempt to characterize potential biochemical and geochemical processes at work. The Viking missions to Mars The Viking missions to Mars in the 1970s conducted the first experiments which were explicitly designed to look for biosignatures on another planet. Each of the two Viking landers carried three life-detection experiments which looked for signs of metabolism; however, the results were declared 'inconclusive'. The Mars Science Laboratory mission is currently investigating habitability of the Martian environment and is attempting to detect biosignatures on Mars. The future ExoMars mission has similar objectives. See also - Steele, A., Beaty; et al. (September 26, 2006). "Final report of the MEPAG Astrobiology Field Laboratory Science Steering Group (AFL-SSG)" (.doc). In David Beaty. The Astrobiology Field Laboratory. 
U.S.A.: the Mars Exploration Program Analysis Group (MEPAG) - NASA. p. 72. Retrieved 2009-07-22. - "Biosignature - definition". Science Dictionary. 2011. Retrieved 2011-01-12. - Carol Cleland; Gamelyn Dykstra, Ben Pageler (2003). "Philosophical Issues in Astrobiology". NASA Astrobiology Institute. Retrieved 2011-04-15. - "SIGNATURES OF LIFE FROM EARTH AND BEYOND". Penn State Astrobiology Research Center (PSARC). Penn State. 2009. Retrieved 2011-01-14. - "Reading Archaean Biosignatures". NASA. July 30, 2008. Retrieved 2011-01-14. - Fatty alcohols - Beegle, Luther W.; et al. (August 2007). "A Concept for NASA's Mars 2016 Astrobiology Field Laboratory". Astrobiology 7 (4): 545–577. Bibcode:2007AsBio...7..545B. doi:10.1089/ast.2007.0153. PMID 17723090. Retrieved 2009-07-20. - Bosak, Tanja; Virginia Souza-Egipsy, Frank A. Corsetti and Dianne K. Newman (May 18, 2004). "Micrometer-scale porosity as a biosignature in carbonate crusts". Geology 32 (9): 781–784. Bibcode:2004Geo....32..781B. doi:10.1130/G20681.1. Retrieved 2011-01-14. - Crenson, Matt (2006-08-06). "After 10 years, few believe life on Mars". Associated Press (on usatoday.com). Retrieved 2009-12-06. - McKay, David S.; et al. (1996). "Search for Past Life on Mars: Possible Relic Biogenic Activity in Martian Meteorite ALH84001". Science 273 (5277): 924–930. Bibcode:1996Sci...273..924M. doi:10.1126/science.273.5277.924. PMID 8688069. - Rothschild, Lynn (September 2003). "Understand the evolutionary mechanisms and environmental limits of life". NASA. Retrieved 2009-07-13. - Wall, Mike (13 December 2011). "Mars Life Hunt Could Look for Magnetic Clues". Space.com. Retrieved 2011-12-15. - Gardner, James N. (February 28, 2006). "The Physical Constants as Biosignature: An anthropic retrodiction of the Selfish Biocosm Hypothesis". Kurzweil. Retrieved 2011-01-14. - "Astrobiology". Biology Cabinet. September 26, 2006. Retrieved 2011-01-17.
- "Artificial Life Shares Biosignature With Terrestrial Cousins". The Physics arXiv Blog. MIT. 10 January 2011. Retrieved 2011-01-14. - Mars Trace Gas Mission (September 10, 2009) - Remote Sensing Tutorial, Section 19-13a - Missions to Mars during the Third Millennium, Nicholas M. Short, Sr., et al., NASA - Oze, Christopher; Jones, Camille; Goldsmith, Jonas I.; Rosenbauer, Robert J. (June 7, 2012). "Differentiating biotic from abiotic methane genesis in hydrothermally active planetary surfaces". PNAS 109 (25): 9750–9754. Bibcode:2012PNAS..109.9750O. doi:10.1073/pnas.1205223109. Retrieved June 27, 2012. - Staff (June 25, 2012). "Mars Life Could Leave Traces in Red Planet's Air: Study". Space.com. Retrieved June 27, 2012. - Brogi, Matteo; Snellen, Ignas A. G.; de Krok, Remco J.; Albrecht, Simon; Birkby, Jayne; de Mooij, Ernest J. W. (June 28, 2012). "The signature of orbital motion from the dayside of the planet t Boötis b". Nature 486: 502–504. arXiv:1206.6109. Bibcode:2012Natur.486..502B. doi:10.1038/nature11161. Retrieved June 28, 2012. - Mann, Adam (June 27, 2012). "New View of Exoplanets Will Aid Search for E.T.". Wired (magazine). Retrieved June 28, 2012. - "2016 ESA/NASA ExoMars Trace Gas Orbiter", MEPAG June 2011, Jet Propulsion Laboratory, June 16, 2011, retrieved 2011-06-29 (PDF) - Levin, G and P. Straaf. 1976. Viking Labeled Release Biology Experiment: Interim Results. Science: vol: 194. pp: 1322-1329. - Chambers, Paul (1999). Life on Mars; The Complete Story. London: Blandford. ISBN 0-7137-2747-0. - Klein, Harold P.; Levin, Gilbert V. (1976-10-01). "The Viking Biological Investigation: Preliminary Results". Science 194 (4260): 99–105. Bibcode:1976Sci...194...99K. doi:10.1126/science.194.4260.99. PMID 17793090. Retrieved 2008-08-15. - ExoMars rover - "Mars Science Laboratory: Mission". NASA/JPL. Retrieved 2010-03-12.
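The H2/CH4 criterion quoted in the article (Oze et al. 2012) can be sketched as a toy classifier. Only the threshold of approximately 40 comes from the text; the function name and the example abundances are hypothetical, and a real analysis would of course involve far more than a single ratio.

```python
# Toy version of the H2/CH4 biosignature criterion quoted above:
# ratios below ~40 were reported to indicate that "life is likely
# present and active". Function name and inputs are illustrative.

def methane_origin_hint(h2, ch4, threshold=40.0):
    """Interpret an atmospheric H2/CH4 abundance ratio (same units for both)."""
    if ch4 <= 0:
        raise ValueError("CH4 abundance must be positive")
    ratio = h2 / ch4
    verdict = ("life likely present and active" if ratio < threshold
               else "abiotic production plausible")
    return ratio, verdict

ratio, verdict = methane_origin_hint(h2=300.0, ch4=10.0)
print(f"H2/CH4 = {ratio:.0f} -> {verdict}")  # H2/CH4 = 30 -> life likely present and active
```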
<urn:uuid:e769a3b3-609f-4d96-9f35-9c2658e32784>
3.8125
2,702
Knowledge Article
Science & Tech.
42.406782
In the years after Columbus' voyage, burning of New World forests and fields diminished significantly – a phenomenon some have attributed to decimation of native populations by European diseases. But a new University of Utah-led study suggests global cooling resulted in fewer fires because both preceded Columbus in many regions worldwide. For decades, scientists have known that the effects of global climate change could have a potentially devastating impact across the globe, but Harvard researchers say there is now evidence that it may also have a dramatic impact on public health. Today, the U.S. Environmental Protection Agency’s (EPA) Energy Star program launched the 2012 National Building Competition: Battle of the Buildings with a record 3,200 buildings across the country going head to head to improve energy efficiency, lower utility costs and protect health and the environment. For several days this month, Greenland's surface ice cover melted over a larger area than at any time in more than 30 years of satellite observations. Researchers have found a way to use GPS to measure short-term changes in the rate of ice loss on Greenland -- and reveal a surprising link between the ice and the atmosphere above it. The greatest climate change the world has seen in the last 100,000 years was the transition from the ice age to the warm interglacial period. Sulfur has traditionally been portrayed as a secondary factor in regulating atmospheric oxygen, with most of the heavy lifting done by carbon. However, new findings that appeared this week in Science suggest that sulfur's role may have been underestimated. Scientists from the University of Toronto and the University of California Santa Cruz are shedding light on one potential cause of the cooling trend of the past 45 million years that has everything to do with the chemistry of the world's oceans. 
Global emissions of carbon dioxide (CO2) -- the main cause of global warming -- increased by 3% last year, reaching an all-time high of 34 billion tonnes in 2011. “The first half of 2012 was dry for most of the Northeast. New York, Massachusetts, Pennsylvania, New Jersey, and West Virginia were below normal. Maryland and Connecticut were much below normal, and Delaware had its driest on record.” Until now, scientists who study air pollution using satellite imagery have been limited by weather. Clouds, in particular, provide much less information than a sunny day. No matter how you drill it, using natural gas as an energy source is a smart move in the battle against global climate change and a good transition step on the road toward low-carbon energy from wind, solar and nuclear power. Elevated levels of atmospheric carbon dioxide accelerate carbon cycling and soil carbon loss in forests, new research led by an Indiana University biologist has found. Satellite measurements show that nitrogen dioxide in the lower atmosphere over parts of Europe and the US has fallen over the past decade. An international team that includes scientists from Johannes Gutenberg University Mainz (JGU) has published a reconstruction of the climate in northern Europe over the last 2,000 years based on the information provided by tree-rings. Studies by U.S. Department of Agriculture (USDA) scientists show some no-till management systems can lower atmospheric levels of PM10—soil particles and other material 10 microns or less in diameter that degrade air quality—that are eroded from crop fields via the wind. A new study led by the University of Colorado Boulder indicates air pollution in the form of nitrogen compounds emanating from power plants, automobiles and agriculture is changing the alpine vegetation in Rocky Mountain National Park.
The U.S. Environmental Protection Agency is proposing to approve Arizona's air quality plan to control sulfur dioxide and soot at three power plants in the state. Some coral reef fish may be better prepared to cope with rising CO2 in the world's oceans -- thanks to their parents. For eastern Pacific populations of leatherback turtles, the 21st century could be the last. New research suggests that climate change could exacerbate existing threats and nearly wipe out the population.
<urn:uuid:523e9443-bdad-4dd9-bf91-f36847b3ceac>
3.265625
816
Content Listing
Science & Tech.
36.347674
2.2. How much dark matter?
An important recent development is that Ω_DM can now be constrained to a value around 0.25 by several independent lines of evidence:
(i) One of the most ingenious and convincing arguments comes from noting that baryonic matter in clusters - in galaxies, and in intracluster gas - amounts to 0.15 - 0.2 of the inferred virial mass (White et al. 1993). If clusters were a fair sample of the universe, this would then be essentially the same as the cosmic ratio of baryonic to total mass. Such an argument could not be applied to an individual galaxy, because baryons segregate towards the centre. However, there is no such segregation on the much larger scale of clusters: only a small correction is necessary to allow for baryons expelled during the cluster formation process.
(ii) Very distant galaxies appear distorted, owing to gravitational lensing by intervening galaxies and clusters. Detailed modelling of the mass-distributions needed to cause the observed distortions yields a similar estimate. This is a straight measurement of DM which (unlike (i)) does not involve assumptions about Ω_b, though it does depend on having an accurate measure of the clustering amplitude.
(iii) Another argument is based on the way density contrasts grow during the cosmic expansion: in a low density universe, the expansion kinetic energy overwhelms gravity, and the growth of structure saturates at recent epochs. The existence of conspicuous clusters of galaxies with redshifts as large as z = 1 is hard to reconcile with the rapid recent growth of structure that would be expected if Ω_DM were unity. More generally, numerical simulations based on the cold dark matter (CDM) model are a better fit to the present-day structure for this value of Ω_DM (partly because the initial fluctuation spectrum has too little long-wavelength power if Ω_DM is unity).
Other methods will soon offer independent estimates. For instance, Ω_DM can be estimated from the deviations from the Hubble flow induced by large-scale irregularities in the mass distribution on supercluster scales.
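The arithmetic behind argument (i) can be made concrete with round numbers. Only the 0.15 - 0.2 cluster baryon fraction comes from the text; the baryon density Ω_b ≈ 0.04 (a typical nucleosynthesis value) is an assumption added here for illustration:

```python
# Illustrative arithmetic for the cluster baryon-fraction argument.
# Assumed input (not from the text): Omega_b ~ 0.04 from big-bang nucleosynthesis.
omega_b = 0.04   # cosmic baryon density parameter (assumed)
f_b = 0.16       # baryon fraction measured in clusters (0.15-0.2 in the text)

# If clusters are a fair sample of the universe, Omega_b / Omega_m = f_b, so:
omega_m = omega_b / f_b
print(round(omega_m, 2))  # -> 0.25, matching the quoted constraint
```

With these inputs the total matter density comes out at 0.25, which is exactly the value the several independent lines of evidence converge on.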
<urn:uuid:08bdc5f7-30be-4cc5-b79d-a811d8384609>
2.90625
428
Academic Writing
Science & Tech.
39.265928
Dip your toe into the fascinating topic of genetics. From Mendel's theories to some cutting edge experimental techniques, this article gives an insight into some of the processes underlying. . . .
Explore the properties of this different sort of differential
Get further into power series using the fascinating Bessel's equation.
How much energy has gone into warming the planet?
See how enormously large quantities can cancel out to give a good approximation to the factorial function.
Explore the power of aeroplanes, spaceships and horses.
We all know that smoking poses a long term health risk and has the potential to cause cancer. But what actually happens when you light up a cigarette, place it to your mouth, take a tidal breath. . . .
Given the equation for the path followed by the back wheel of a bike, can you solve to find the equation followed by the front
By exploring the concept of scale invariance, find the probability that a random piece of real data begins with a 1.
Which parts of these framework bridges are in tension and which parts are in compression?
How fast would you have to throw a ball upwards so that it would
Ever wondered what it would be like to vaporise a diamond? Find out
Read all about electromagnetism in our interactive article.
Work with numbers big and small to estimate and calculate various quantities in biological contexts.
Work with numbers big and small to estimate and calculate various quantities in physical contexts.
Read about the mathematics behind the measuring devices used in
Unearth the beautiful mathematics of symmetry whilst investigating the properties of crystal lattices
Can you deduce why common salt isn't NaCl_2?
An introduction to a useful tool to check the validity of an equation.
Fancy learning a bit more about rates of reaction, but don't know where to look? Come inside and find out more...
Build up the concept of the Taylor series
Look at the advanced way of viewing sin and cos through their power series.
We think this 3x3 version of the game is often harder than the 5x5 version. Do you agree? If so, why do you think that might be?
How much peel does an apple have?
Have you got the Mach knack? Discover the mathematics behind exceeding the sound barrier.
There has been a murder on the Stevenson estate. Use your analytical chemistry skills to assess the crime scene and identify the cause of death...
Investigate x to the power n plus 1 over x to the power n when x plus 1 over x equals 1.
This article (the first of two) contains ideas for investigations.
Space-time, the curvature of space and topology are introduced with some fascinating problems to explore.
On a "move" a stone is removed from two of the circles and placed in the third circle. Here are five of the ways that 27 stones could
Investigate constructible images which contain rational areas.
Formulate and investigate a simple mathematical model for the design of a table mat.
Get some practice using big and small numbers in chemistry.
Two perpendicular lines lie across each other and the end points are joined to form a quadrilateral. Eight ratios are defined, three are given but five need to be found.
Two polygons fit together so that the exterior angle at each end of their shared side is 81 degrees. If both shapes now have to be regular could the angle still be 81 degrees?
Work out the numerical values for these physical quantities.
An article demonstrating mathematically how various physical modelling assumptions affect the solution to the seemingly simple problem of the projectile.
Is the age of this very old man statistically believable?
Some of our more advanced investigations
Can you find some Pythagorean Triples where the two smaller numbers differ by 1?
What's the chance of a pair of lists of numbers having sample correlation exactly equal to zero?
A simplified account of special relativity and the twins paradox.
When is a knot invertible?
All types of mathematical problems serve a useful purpose in mathematics teaching, but different types of problem will achieve different learning objectives. In general more open-ended problems have. . . .
Explore the properties of combinations of trig functions in this open investigation.
Looking at small values of functions. Motivating the existence of the Taylor expansion.
Could nanotechnology be used to see if an artery is blocked? Or is this just science fiction?
What functions can you make using the function machines RECIPROCAL and PRODUCT and the operator machines DIFF and INT?
Take any pair of numbers, say 9 and 14. Take the larger number, fourteen, and count up in 14s. Then divide each of those values by the 9, and look at the remainders.
Investigations and activities for you to enjoy on pattern in
Draw three equal line segments in a unit circle to divide the circle into four parts of equal area.
<urn:uuid:91fc959a-eeba-41d2-84be-2ffa4721cc34>
3.546875
1,053
Content Listing
Science & Tech.
51.400163
Thunder is the crash and rumble associated with lightning. It is caused by the explosive expansion and contraction of air heated by the stroke of lightning. This results in sound waves that can be heard easily six to seven miles (9.7 to 11.3 kilometers) away. Occasionally such rumbles can be heard as far away as 20 miles (32.2 kilometers). The sound of great claps of thunder is produced when intense heat and the ionizing effect of repeated lightning occur in a previously heated air path. This creates a shock wave that moves at the speed of sound.
Article source: http://www.scribd.com/doc/33786529/The-Handy-Weather-Answer-Book
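Because the flash arrives essentially instantly while the thunder travels at the speed of sound (roughly 343 m/s in warm air), the delay between seeing the lightning and hearing the thunder gives the distance to the stroke. This flash-to-bang rule is an addition for illustration, not something stated in the passage above:

```python
# Estimate the distance to a lightning stroke from the flash-to-thunder delay.
# Assumes a sound speed of ~343 m/s (about 1 mile per 5 seconds);
# the light's travel time is negligible by comparison.
SPEED_OF_SOUND_M_S = 343.0

def lightning_distance_km(delay_seconds):
    """Distance to the stroke, in kilometers, for a given delay in seconds."""
    return SPEED_OF_SOUND_M_S * delay_seconds / 1000.0

print(round(lightning_distance_km(10), 2))  # ~3.43 km for a 10-second delay
```

A 30-second delay under this assumption corresponds to about 10 km, consistent with thunder being easily audible at the six-to-seven-mile range quoted above.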
<urn:uuid:f40fd636-a780-448b-8bb7-1adb95eed8ba>
3.375
145
Truncated
Science & Tech.
72.621883
Symbols in Ocamlyacc grammars represent the grammatical classifications of the language. A terminal symbol (also known as a token type) represents a class of syntactically equivalent tokens. You use the symbol in grammar rules to mean that a token in that class is allowed. The symbol is represented in the Ocamlyacc parser by a value of a variant type, and the lexer function returns a token type to indicate what kind of token has been read. A nonterminal symbol stands for a class of syntactically equivalent groupings. The symbol name is used in writing grammar rules and should start with a lower-case letter. Symbol names can contain letters, digits (not at the beginning) and underscores. Each terminal symbol in the grammar is a token type, which is a constructor of a variant type in OCaml, so it must start with an upper-case letter. Each such name must be defined with an Ocamlyacc %token declaration. See Token Type Names. The value returned by the lexer function is always one of the terminal symbols. Each token type becomes an OCaml value of the variant type in the parser file, so the lexer function can return one. Because the lexer function is defined in a separate file, you need to arrange for the token-type definitions to be available there. Invoking "ocamlyacc filename.mly" generates the file filename.mli, which contains the token-type definitions and is used by the lexer function. The symbol error is a terminal symbol reserved for error recovery (see Error Recovery); you shouldn't use it for any other purpose.
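A minimal grammar fragment illustrating the conventions above. The token names INT, PLUS and EOF, the nonterminals main and expr, and the file name calc.mly are all invented for this sketch:

```ocaml
/* calc.mly -- hypothetical example; token and rule names are illustrative.
   Terminal symbols (token types) are declared with %token and must start
   with an upper-case letter; nonterminals such as expr are lower-case. */
%token <int> INT        /* a token carrying an OCaml int value */
%token PLUS EOF
%left PLUS              /* resolve the expr PLUS expr ambiguity */
%start main
%type <int> main
%%
main:
    expr EOF            { $1 }
;
expr:
    INT                 { $1 }
  | expr PLUS expr      { $1 + $3 }   /* left operand plus right operand */
;
```

Running `ocamlyacc calc.mly` would produce `calc.ml` and `calc.mli`; the generated `calc.mli` declares the variant type `token` (here `INT of int`, `PLUS` and `EOF`), which is exactly what the separately defined lexer function imports so that it can return values of that type.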
<urn:uuid:47745ff6-4ed8-4103-9308-14bf67ecd56e>
3.09375
337
Documentation
Software Dev.
43.232198
Specify a warp factor and the form will then calculate the speed in multiples of the speed of light and the time to travel the specified distances. You may also specify your own distance in light years. The velocity values of warp factors are calculated for standard conditions found within the Milky Way galaxy. Actual vessel performance may vary considerably, depending on factors such as interstellar gas and dust density, electric and magnetic field strength, and fluctuations within the subspace domain.
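The calculation such a form performs can be sketched under one commonly cited convention, v = w³ · c (the original-series cubic rule). That formula is an assumption here: the page does not state which warp scale it uses, and it explicitly warns that actual vessel performance may vary:

```python
# Sketch of a warp-speed calculator. The cubic rule v = w**3 * c is an
# assumed convention (original-series style); the form's actual formula
# may differ, as it warns that vessel performance varies considerably.

def warp_speed_multiple_of_c(warp_factor):
    """Speed as a multiple of the speed of light under the cubic rule."""
    return warp_factor ** 3

def travel_time_years(warp_factor, distance_light_years):
    """Travel time in years for a given distance in light years."""
    return distance_light_years / warp_speed_multiple_of_c(warp_factor)

print(warp_speed_multiple_of_c(2))    # warp 2 -> 8x lightspeed
print(travel_time_years(2, 4.0))      # 4 ly at warp 2 -> 0.5 years
```

Since distances are given in light years, the time in years is simply the distance divided by the speed multiple, so no physical constants are needed.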
<urn:uuid:286bca6c-22a7-4b32-be29-91633176ba07>
2.703125
98
Documentation
Science & Tech.
35.88125
Published by the American Geological Institute of the Earth Sciences
A Galápagos tortoise observes Jim Stimac backpacking to the rim of Alcedo caldera. All photos courtesy of Fraser Goff.
In early 1995, colleagues and I ventured to the Galápagos for six weeks in search of magmatic tritium. Extensive analysis of gases from two of the volcanoes in the archipelago was part of a long-term research project that looked at more than 10 volcanoes worldwide to find evidence (or non-evidence) for cold fusion in the deep Earth.
Isabela, the largest island, consists of six coalesced basaltic shields resembling upside-down soup bowls 30 to 60 kilometers across and more than one kilometer high. Two of these shields, Sierra Negra and Alcedo, are noted for their vigorous fumarolic activity. We were the first to sample for tritium in the Galápagos and also the first to determine if the fumaroles were volcanic or geothermal. Although sampling the gases of these fumaroles was our main objective, our trip became a goat-eating, burro-sacrificing, volcano-exploring adventure.
The Galápagos islands are remote and their special ecology, geology and history are preserved as an Ecuadorian national park. The climate is hot, people are scarce and drinking water is scarcer. While most tourists stay within one or two kilometers of the ocean, my colleagues Gary M. McMurtry of the University of Hawaii, Alfredo Roldán-Manzo of Instituto Nacional de Energia in Guatemala and then post-doc Jim A. Stimac of Los Alamos and I planned to camp on the caldera rims and make day trips to intracaldera sites.
On Sunday, Jan. 22, we boarded a fishing boat from Santa Cruz for an overnight voyage to Villamil, the only village on Isabela. We were greeted the next morning with a typical tropical island view: palm trees, waves breaking on black lava, a few sandy beaches and a dark, distant volcano draped in green. We horse-packed through a rainstorm to the southwest rim of Sierra Negra and set up camp.
The next several days we spent hiking into the caldera to the fumarole field of Mina Azufral (a sulfur mine). We donned gas masks and stood several hours in clouds of acid gases sampling fumaroles as hot as 210 degrees Celsius. These true volcanic emissions contained steam, CO2, SO2, H2S, HCl and minor trace components. Sulfur, white mineral incrustations, altered basalt and small flows of molten sulfur occurred everywhere.
After 10 days at Sierra Negra, we boarded a small boat sailing north toward Alcedo Volcano. The next morning we disembarked on a dry and desolate coast with our gear, three guides and 25 plastic jugs — each filled with five gallons of water. The captain stated he’d be back in 11 days and sailed away on his boat El Pirate.
The northwest flank of Alcedo is blanketed with white pumice erupted from small rhyolite vents inside the caldera. Backpacking to the crater rim took us most of the day. Halfway up, we saw our first giant Galápagos tortoise grazing peacefully on grass. Although tortoises frequented sporadic mud holes, there were no springs or streams. The guides hauled the jugs of water from the beach, but it was not always enough. The hot and humid days made us incredibly thirsty.
During our visit, Ecuador and Peru were at war, and when our Ecuadorian guides caught a burro they named him Fujimori after Peru’s president Alberto Fujimori. One guide carried a rifle to shoot goats, and the meat greatly improved our freeze-dried meals.
The team standing on Alcedo caldera rim. Left to right: Alfredo, Pepi (with rifle), Giovanni, Eduardo, Gary, Fraser and Jim.
Gary McMurtry and Fraser Goff sample gas from a fumarole in Alcedo caldera. Two new fumaroles steam in the background.
We camped on the southeast rim and again made long day trips to fumaroles distributed along a fault zone inside the western caldera. Surprisingly, these fumaroles were extremely water-rich and discharged at the boiling point.
Some were also very noisy and issued from phreatic explosion craters. We didn’t need gas masks. The gases contained minor CO2 and H2S, but virtually no SO2 or HCl. Hydrothermally altered rocks, incrustation and sulfur deposits were comparatively minor. These were geothermal fumaroles, not volcanic fumaroles, so we had come a long way to sample the wrong kinds of features for our project.
After returning to the beach for pick-up, our guides dispatched Fujimori with a bullet. The boat showed up as planned and the captain brought us cold beer. On the voyage back to Santa Cruz, the captain landed a yellow-fin tuna and sliced thick strips of sashimi from the back of its neck.
Did we find tritium? Our samples contained minor amounts of tritium, but all of meteoric origin (rain, snow or groundwater). We found no magmatic tritium in Sierra Negra and Alcedo volcanoes. But since the fluids of these Galápagos volcanoes have never been extensively sampled, in June we published a comparison of their hydrothermal characteristics (Goff, F., et al., 2000, Bulletin of Volcanology, v. 62, p. 34-52).
The magmatic tritium project originated from the “cold fusion” controversy of the late 1980s. One of the more legitimate proponents of the theory that cold fusion occurs in Earth’s interior was Steve Jones of Brigham Young University. Jones postulated that one consequence of cold fusion in Earth’s interior might be excess tritium coming out of the deep earth in volcanic emissions. I had made a few measurements of tritium in magmatic fluids at Mount St. Helens, and Jones soon contacted me to pursue the study. The tritium I had sampled merely showed contamination of magmatic fluids with young meteoric water (rain, snow or groundwater). Still, the U.S. Department of Energy granted a small amount of seed money to Gary M. McMurtry of the University of Hawaii and me to further investigate anomalous tritium in magmatic fluids at St. Helens and Kilauea.
Early results from investigations at these two volcanoes were ambiguous and attempts to publish preliminary results were shut down by scientific journals. We then obtained additional funding through a proposal from Los Alamos’ Laboratory of Directed Research and Development to enlarge the scope of the project. I argued that consistent results were required from several volcanoes of different compositions and tectonic environments to settle the issue. Using these funds, McMurtry and I visited several more volcanoes from 1992 to 1996, collecting high-temperature magmatic fluids and analyzing them for tritium and other isotope and chemical constituents. We found that tritium in such fluids was derived from mixing with meteoric waters or, in some cases, from seawater. Generally speaking, tritium content is inversely correlated with increasing temperature and geochemical constituents enriched in magmatic fluids (Goff, F., and McMurtry, G.M., 2000, Tritium and stable isotopes in magmatic waters: Journal of Volcanology and Geothermal Research, v. 97, p. 347-396).
Mount Oyama, which sits in the center of the small island of Miyakejima about 200 kilometers south-southwest of Tokyo, apparently became active June 27 following all-night evacuations of 2,500 residents. Scientists suspected an eruption on the west flank of the island, but discolored seawater and steam indicated an eruption on the submarine west flank about 1.8 kilometers from the coast. An afternoon earthquake on July 1 of magnitude 6.1 jarred the Izu islands south of Japan and left one person dead from a landslide. The quake was centered 15 kilometers below the surface near Kozushima island.
Homemade hot air balloons in Brazil, used illegally during June festivals, ignited 16 separate fires that destroyed at least 88 acres of Rio’s rapidly diminishing Atlantic Rainforest — the largest single loss in the past two decades, said Major Fabio Meirelles, deputy commander of Rio’s Rainforest Firefighting Group.
Hot air from burning, fuel-soaked cotton propels the balloons, which are traditionally used to honor Catholic saints and often carry hanging lanterns or fireworks. Associate Editor Christina Reed compiles
<urn:uuid:684454e2-4191-4559-8995-3005e2dea3d4>
3.4375
1,870
Knowledge Article
Science & Tech.
44.27925
You would be forgiven for thinking that shrinking space budgets are a total disaster. Certainly, they're not good news but the irony is that having less money may force us into missions that might previously have been overlooked, yet still have the potential to revolutionise our understanding of the universe. In March this year, the European Space Agency issued a call for small mission ideas. This is something of a departure for the ESA, which in the past has only used smaller missions to test technology, such as with Smart-1 and Proba. The idea is to produce a credible science mission for launch in 2017 costing the ESA just €50m. Additional funds of up to €100m can be sought from national funding bodies. More than 60 letters of intent were received, a number of them designed to test aspects of gravity that could lead to a fundamental breakthrough in physics. The beauty of gravitational physics missions is that they are generally cheap to build. The spacecraft doesn't necessarily need to do much, just move as gravity dictates. Earth-bound scientists then track the movement and compare it with theoretical predictions. If they find a discrepancy, they have discovered a clue to physics beyond Einstein, and the game is really on. Hints of unexplainable spacecraft motion have shown up for many years, the most famous being the Pioneer anomaly. Although it may now have been explained as the recoil from heat being released from the spacecraft's radioactive power source, there is another odd motion stepping into the limelight: the flyby anomaly. Some spacecraft appear to pick up more speed than expected when they fly by Earth to boost their velocity. The amount of extra energy is variable and not every flyby produces this effect. Puzzling. Is it real or just tracking errors? 
One small mission proposal, by Orfeu Bertolami, of the University of Porto in Portugal, is designed to find out by using a small spacecraft that would constantly determine its position using the ESA's Galileo satellite navigation system.
Another proposal, by Ignazio Ciufolini, of the University of Lecce in Italy, would follow up his Italian mission, the Laser Relativity Satellite (Lares). Launched in February this year, Lares hit headlines as the high-tech disco ball that could dethrone Einstein. It is studying an aspect of general relativity called frame dragging. It cost less than €10m and is expected to produce a measurement within one percentage point of relativity's prediction.
Galileo Galilei (GG), by Anna Nobili, of the University of Pisa in Italy, would follow up an independent French mission, Microscope (Micro-Satellite à traînée Compensée pour l'Observation du Principe d'Equivalence), currently being built. Both aim to test the equivalence principle, a cornerstone of general relativity, to increasing sensitivities.
The equivalence principle states that masses respond to gravity in the same way they do to other forces. In other words, if I push an object along, it behaves in the same way as when I drop it. But why should that be true? Small deviations from this behaviour are predicted by the string theories of physics, which seek to unite our understanding of gravity with the other forces of nature. Intriguingly, or frustratingly depending upon your outlook, no test has yet found a discrepancy. Either string theories are wrong, or we need to look at gravity in more detail.
There are other proposals on the ESA small missions list intended to probe gravity in other ways. Ironically, simple gravitational missions, which have perhaps the greatest chance of creating a scientific revolution, are among the cheapest that can be imagined. Yet neither the ESA nor Nasa has been inclined towards them before.
Could budget restrictions on both sides of the Atlantic remedy this? The ESA will evaluate the small mission proposals between July and October and announce which one it will fund. Although there are many excellent other ideas in the letters of intent, from solar storm monitors to searches for habitable planets around other stars, my fingers are crossed for a gravity mission.
<urn:uuid:09ac3c09-76d6-4481-9001-f6df898b3f2f>
3.375
834
Nonfiction Writing
Science & Tech.
39.422931
One particularly baffling aspect of planetary wanderings was the periods of retrograde motion. A planet such as Mars would spend much of the year moving slowly eastward against the background of fixed stars. Then, to everyone's surprise, it would change direction and slide westward for a couple of months or so before stopping again and returning to its easterly path. This is retrograde motion (retro meaning backward). The image below traces the positions of the planet Mars as it executed a retrograde loop in 1995. As you might imagine, these cosmic loop-de-loops seemed pretty weird to ancient astronomers.
The retrograde motion of the planets was particularly bothersome to the ancient Greek philosophers/scientists. They had established the tradition of demanding a physical model for whatever they were studying. They wanted something they could picture in their heads that was logical and self-consistent. As for the planets, the Greeks wanted a model for the solar system that could replicate all the motions seen in the sky, including the wacky retrograde loops. Most Greeks believed the Earth - and not the Sun - lay at the center of the solar system. While the Greeks did manage to develop an Earth-centered model for the solar system, it was not simple, but had little orbits laid on top of bigger ones. That bothered some aesthetically minded astronomers who believed that Nature in both appearance and plan should be beautiful and therefore simple. Developing a simple picture for the real paths of planets, one that put the Sun at the solar system's center, was a job that waited almost two millennia.
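A quick way to see retrograde motion emerge from a Sun-centred picture is to track Mars' apparent direction as seen from a moving Earth. The circular, coplanar orbits below are simplifying assumptions (real orbits are elliptical), but they reproduce the effect: near opposition the faster Earth overtakes Mars, and Mars appears to slide backward.

```python
import math

# Toy heliocentric model with circular, coplanar orbits (an assumption).
# Radii in AU, orbital periods in years.
R_EARTH, T_EARTH = 1.0, 1.0
R_MARS, T_MARS = 1.524, 1.881

def geocentric_longitude(t):
    """Apparent ecliptic longitude of Mars as seen from Earth at time t (years).
    Both planets start aligned at angle 0, i.e. at opposition."""
    ex = R_EARTH * math.cos(2 * math.pi * t / T_EARTH)
    ey = R_EARTH * math.sin(2 * math.pi * t / T_EARTH)
    mx = R_MARS * math.cos(2 * math.pi * t / T_MARS)
    my = R_MARS * math.sin(2 * math.pi * t / T_MARS)
    return math.atan2(my - ey, mx - ex)

# Near opposition the apparent longitude *decreases*: Mars drifts westward
# (retrograde) purely because the faster Earth is overtaking it.
print(geocentric_longitude(0.02) < geocentric_longitude(0.0))  # True
```

No epicycles are needed: the loop is an artifact of viewing one orbiting planet from another, which is exactly the simple Sun-centred picture the closing sentence alludes to.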
<urn:uuid:88cedca8-89d8-49be-b7d8-a5ac1a84fc3c>
3.875
320
Knowledge Article
Science & Tech.
45.006667
Web edition: June 6, 2008 Readers share their thoughts on "Down with carbon" (SN: 5/10/08, p. 18), which describes carbon dioxide sequestration: The article repeatedly mentions liquid CO2, which has to be under high pressure to become a liquid. Has the CO2 released from burning fuel to run the necessary compressors and pumps been considered, or would those be powered with wind or solar energy? If so, why not just use those sources directly to replace fossil fuels and make less CO2 to begin with? Why keep devising complex technological schemes to fix problems rather than simply avoiding the technologies that cause the problems? BRUCE NOVAK, NEEDHAM, MASS. Stop burning carbon It is less economical to patch a broken system with an after-damage repair than to eliminate the problem in the first place—in this case, the use of combustion to generate power. For a smaller investment, and in less time, we can ramp up energy production using proven, non-combustive technologies for stationary power generation from wind, tides, nuclear fission and direct capture of solar energy. These technologies exist and are relatively uneconomical in the United States now merely due to scale, limited engineering commitment and lack of public support. Let’s get real. The sooner we stop burning carbon to make electricity, the better. We can make a big dent with alternative power before we even invent carbon capture from smokestacks. DAVID P. VERNON, TUCSON, ARIZ. Thank you for a thorough article. The idea of digging trenches to bury trees seems extremely work-intensive and destructive. Wood waste could be buried under the overburden in strip mining operations, or sunk to anaerobic depths in deep lakes or off the continental shelf. There are numerous logs being recovered from the deep zones in Lake Superior. JOHN BRODEMUS, OSWEGO, ILL. Logs and trees drowned beneath man-made reservoirs suffer little or no degradation in the low-oxygen environments. 
Bogs and peat lands are also excellent preservers: Kauri trees that fell into New Zealand peat lands and were buried 50,000 years ago are preserved so well that they’re now being unearthed and sold to furniture makers and other woodworkers. —SID PERKINS Send communications to: Editor, Science News 1719 N Street, NW, Washington, D.C. 20036 All letters subject to editing.
<urn:uuid:e021f92c-5163-40e3-9d2b-08f92e848cfb>
3.203125
521
Comment Section
Science & Tech.
45.629375
This month’s Rakudo development work has already seen us switch to the new QRegex grammar engine for parsing Perl 6 source, unifying it with the mechanism for user-space grammars and regexes. A week and a bit on, another major improvement in this space has also landed: alternations now participate in Longest Token Matching, as per spec.
What does this mean? To give a simple example:
    > say "beer" ~~ /be|.*/
    q[beer]
Here, the second branch of the alternation wins – because it matches more characters than the first branch. This is in contrast to sequential alternation (which you are likely more used to), which is done with the || operator in Perl 6:
    > say "beer" ~~ /be||.*/
    q[be]
The || may remind you of the short-circuiting “or” operator, which is exactly what a sequential alternation in a regex does: we try the possibilities in order and pick the first one that matches. On the other hand, the | is a reminder of the “any” junction constructor, which is analogous to what happens in a regex too: we process all of the branches with a parallel NFA, trimming impossible options as we go, and the one that matches most characters will win. If multiple match, we take them in order of length until one matches.
Note that – just like with protoregexes – the thing we actually use the NFA on is the declarative prefix. Perl 6 regexes are a mixture of declarative and procedural; the switch between them is seamless. The declarative bits are amenable to processing with an NFA.
Longest token matching is not only a Perl 6 user-space feature, but also used when parsing Perl 6 – and this goes for alternations too. In fact, the ability to quickly decide which branch to take out of a bunch of possible options is also important for parsing performance. STD, the standard grammar, is written so that trying things sequentially will usually give a correct parse. However, there are exceptions, and up until now they have been problematic.
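The "branch with the longest match wins" rule can be sketched outside of Perl 6. This is only the observable semantics for the simple anchored case, not Rakudo's actual parallel-NFA implementation, and it ignores the declarative-prefix subtlety discussed above:

```python
import re

def longest_token_alternation(branches, text):
    """Pick the alternation branch whose match at the start of `text`
    is longest; earlier branches win ties (declaration order)."""
    best = None
    for pattern in branches:
        m = re.match(pattern, text)  # anchored at position 0
        if m and (best is None or len(m.group(0)) > len(best)):
            best = m.group(0)
    return best

# /be|.*/ against "beer": the second branch matches more, so it wins.
print(longest_token_alternation([r"be", r".*"], "beer"))  # beer
# Sequential (||) semantics would instead stop at the first branch's "be".
```

The strict-greater-than comparison is what encodes tie-breaking by declaration order: a later branch only displaces an earlier one by matching strictly more characters.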
With this work, we now come closer to parsing things the way the standard grammar does. In fact, a lot of the tweaks I had to make in order to get the Perl 6 grammar to parse things correctly again after implementing longest token matching for alternations were a case of aligning it more closely with STD, which is decidedly encouraging.
So, the branches in NQP and Rakudo containing this work have landed. Once again, it was a fairly deep and significant change, and pulling it off has involved various other improvements along the way (such as making tie-breaking by declaration order work reliably). Happily, the improvements we’ve made because we dogfood the grammar engine to parse Perl 6 source will also make things better for those writing grammars in Rakudo. I merged it this evening, with no regressions in the spectests or in module space tests.
While I’ve put in most of the commits on this work, it certainly wasn’t a one person effort. pmichaud++ is once again to thank for the excellent design work behind this, and moritz++, tadzik++ and kboga++ have all helped with testing, fixing tests that had bad assumptions about LTM semantics and fixing Pod parsing to work with the new alternation semantics.
The next release is still two weeks off. I expect to spend my tuits, which should be in reasonable supply, on various follow-up tweaks as a result of the regex engine work, pre-compilation improvements and diving into the QAST work, which I’m hopeful will land in time for the July release. Meanwhile, stay tuned: I expect pmichaud++ will have some nice news about what he’s been cooking up for the June release coming up soon. :-)
<urn:uuid:e39f8a6e-912b-403f-bd9c-863403da1607>
2.953125
820
Personal Blog
Software Dev.
48.79179
File this under the "Remember, it's not designed" category: Since the best city planners around the world have not been able to end traffic jams, scientists are looking to a new group of experts: slime mold. . . . . The scientists let the mold organize itself and spread out around these nutrients, and found that it built a pattern very similar to the real-world train system connecting those cities around Tokyo. And in some ways, the amoeba solution was more efficient. What's more, the slime mold built its network without a control center that could oversee and direct the whole enterprise; rather, it reinforced routes that were working and eliminated redundant channels, constantly adapting and adjusting for maximum efficiency. . . . . "The model captures the basic dynamics of network adaptability through interaction of local rules, and produces networks with properties comparable to or better than those of real-world infrastructure networks," Wolfgang Marwan of Otto von Guericke University in Germany, who was not involved in the project, wrote in an accompanying essay in the same issue of Science. "The work of Tero and colleagues provides a fascinating and convincing example that biologically inspired pure mathematical models can lead to completely new, highly efficient algorithms able to provide technical systems with essential features of living systems," Marwan said. HT: Telic Thoughts
<urn:uuid:76140525-9d00-4343-a96d-de8070125016>
2.96875
273
Personal Blog
Science & Tech.
35.800642
The figure shows a right triangle ACB, BH is the altitude drawn to the hypotenuse, HE is perpendicular to BC, HF is perpendicular to AC, and EM and FN are perpendicular to AB. If BC = a, AC = b, AB = c, and CH = h, prove that (5) HM = HN, and (6) twice the geometric mean of AN and BM.
<urn:uuid:092a3060-91e3-4795-93cc-58b5f0319644>
3.375
92
Tutorial
Science & Tech.
73.837931
By coupling a nonlinear system, such as an atom, to the electromagnetic field, it is possible to create Fock states (eigenstates of the harmonic oscillator). (Top) Brune et al. send atoms (left) into a cavity (center). Before they enter the cavity, the atoms are prepared by a pulse in a superposition of two atomic states. The relative phase between these states, which a second pulse converts into probability amplitudes for the two states when the atoms exit the cavity, depends on the number of photons in the cavity. (Bottom) In place of a cavity, Wang et al. create an electromagnetic field in a microwave resonator (blue). A superconducting qubit, acting as an artificial atom, couples to the center conductor.
<urn:uuid:1cb9b277-00ce-4a5b-9752-a42fb1826511>
3.703125
153
Knowledge Article
Science & Tech.
35.80601
"Phonons -- are some sort of vibratinal lattice quarks." Not true, phonons are not quarks but quasiparticles, more here. Phonons are also bosons. Wikipedia misleadingly suggests that a phonon is not an elementary particle or a boson, but my lecturer and the Wikipedia article about bosons support the bosonness. Please note that the term "quasiparticle" differs between contexts. According to Wikipedia, "These fictitious particles are typically called "quasiparticles" if they are fermions (like electrons and holes), and called "collective excitations" if they are bosons". This is about elementary particles, so do not mix them up. I understand this so that a phonon is not an elementary particle, so the above definitions do not hold for it. Anyway, notice that the same word "quasiparticle" is confusingly used in similar contexts with totally different meanings! A phonon is a quantum of vibrational energy in a lattice ("fononi = kiteen värähtelyenergian kvantti", i.e. "phonon = a quantum of the crystal's vibrational energy", ~p.163). This quantization is eventually a result of $E=hf$. I don't use the word "particle" because a phonon is not a traditional elementary particle like an electron, a proton or a quark. It is some sort of fictitious particle that appears during interaction with other materials. For example, light travelling as photons does not contain phonons, but when high-energy light hits a wall, lattice vibrations (phonons) emerge.

Example: Silica and Diamond

Silica and diamond have the same structure but silica does not conduct heat. Please note that electrical conductance does not imply heat conductance; neither silica nor diamond conducts electricity, because there are no mobile electrons. In order to explain the heat conductance of diamond, one needs to consider frequencies and hence energies. In layman's terms, the diamond lattice is able to sustain vibrational frequencies of higher energy while silica is not. This may be different at extreme temperatures, such as close to absolute zero. The heat conductivity is a feature of the phonon lattice.
The lattice vibrates according to Hamilton's equations (partial differential equations; see here). The Hamiltonian consists of kinetic energy and rotational energy for each particle; there is essentially no potential energy between the particles because it is very close to zero (the gravitational effect is very small). To me, the Hamiltonian equations look similar to the wave equation here, but the earlier Hamiltonian looks different. ERR I am probably messing something up here, cannot yet see how the two different Hamiltonians are the same... checking; the picture source is here.

Bullet points from Wikipedia:
I. "The thermodynamic properties of a solid are directly related to its phonon structure."
II. "At absolute zero temperature, a crystal lattice lies in its ground state, and contains no phonons"
II.I. "If more than one ground state exists, they are said to be degenerate."

I. Is a phonon a particle?
II. What does it look like?
III. Can you see a phonon with a microscope?

A phonon is a boson, more here and a picture here. 'These fictitious particles are typically called "quasiparticles" if they are fermions (like electrons and holes), and called "collective excitations" if they are bosons (like phonons and plasmons),1 although the precise distinction is not universally agreed.' Wikipedia
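As a quick illustration of the $E=hf$ quantization mentioned above, here is a minimal Python sketch; the 1 THz lattice-mode frequency is just an assumed example value, not from the post:

```python
# Energy of a single phonon (one vibrational quantum) via E = h*f.
h = 6.62607015e-34  # Planck constant, J*s (exact by SI definition)

def phonon_energy(f_hz):
    """Energy in joules of one quantum of a lattice mode at frequency f (Hz)."""
    return h * f_hz

E = phonon_energy(1e12)        # an assumed 1 THz lattice mode
print(E)                       # ~6.6e-22 J
print(E / 1.602176634e-19)     # same energy in eV, ~4 meV
```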
<urn:uuid:2260c8d9-9a94-459b-bbb7-0c2c775c2c47>
2.921875
746
Q&A Forum
Science & Tech.
30.618235
Oh man. I don't even know where to start. That's partly because there is not one but, I dunno, five or so string theories. I'll assume you know a bit about the elementary particles. Here's what a string theory, in a nutshell, is (and I like it for that). So we built accelerators 'n stuff, and greatly expanded our horizon from the three subatomic particles most people know about. In fact, it went a bit overboard and now there are more subatomic particles than there are elements in a periodic table. That's a lot, mind you! The only logical thing for a physicist to think (note: there is no hard proof from which to conclude this, it's kind of a "this is how it worked out so far" thingy) is that there has to be some simpler underlying mechanism which gives birth to such complexity. Compare it to language; there are only so many signs in a writing system but you have no problem expressing much more information by simply lining those signs together. So the scientists assumed that subatomic particles are a tad too big of an alphabet and instead assumed they are in fact words. The natural thing to do was to look for 'the alphabet' of subatomic particles then. And that's basically what strings are. So here's an overused analogy of what strings are. Imagine a string. Stretch it and pluck it. It will make a sound. Now stretch it harder. It makes a higher-pitched sound now. The sounds in this analogy are what we see as elementary particles. The strings are what actually creates them, depending on how tense they are, in other words, how fast they vibrate. So obviously, one string can play an entire array of elementary particles. That makes the atom an orchestra. Fun, right? The only thing left to do is push this analogy into as many dimensions as you need to make mathematical sense out of what we observe as the real world and voilà, string theory.

p.s. the last paragraph is what gave birth to several string theories. As you may or may not know, the equations of motion are differential equations and you solve them by choosing the boundary conditions (just some numbers, nothing scary). But what happens is, depending on your choice of boundary conditions, you get different sets of solutions. It's just a mathematical guessing game. Also, you get to choose the dimensionality of space. So far, popular (as in, solved) spaces are 10- and 26-dimensional. In both spaces there are several legit solutions, but this is all math. It's not fit for storytelling.
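The plucked-string analogy can be made concrete with ordinary classical physics (to be clear: this is the classical guitar-string picture only, not actual string theory). An ideal stretched string has natural frequencies f_n = (n / 2L)·sqrt(T/μ), so more tension means a higher pitch. A small Python sketch; all the numbers are made-up example values:

```python
import math

def string_frequency(n, length_m, tension_n, mass_per_length):
    """n-th harmonic of an ideal stretched string: f_n = (n / 2L) * sqrt(T / mu)."""
    return (n / (2.0 * length_m)) * math.sqrt(tension_n / mass_per_length)

low  = string_frequency(1, 0.65, 60.0, 0.005)   # fundamental at 60 N tension
high = string_frequency(1, 0.65, 120.0, 0.005)  # double the tension...
print(high / low)  # ...and the pitch rises by a factor of sqrt(2)
```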
<urn:uuid:834dea01-4bba-4e43-bc1b-e95b9e696b7b>
3.1875
539
Personal Blog
Science & Tech.
61.355648
XHTML 1.0 is a reformulation of HTML 4.0. What this really means is that learning XHTML is basically the same as learning HTML. The main difference is a few simple rules, as XHTML is stricter than standard HTML. The reality is that HTML is not going away. While there are many advantages to having a very flexible system where tags can be easily added and changed in the DTD, most Web developers won't need or want their own tags. Why rewrite the way the <p> tag works, when it's already in the document definition?

Converting an HTML Document

There are a few basic rules you need to apply to convert an HTML 4 document to XHTML.
- Stricter adherence to the HTML specification. Many browsers are very lax in how they interpret HTML. This leads to incongruities in how the pages are displayed, and XHTML doesn't allow that. The best way to correct this is to use an XHTML validator.
- Write well formed documents. What this generally means is avoiding overlapping elements. Nested code such as <p>Some <em>emphasized</em> text</p> is acceptable, because the <em> tag is opened and closed within the <p> tag. However, <p>Some <em>emphasized</p> text</em> is not allowed, because the <em> tag overlaps the <p> tag.
- Write tags and attributes in lowercase. XHTML is a case-sensitive markup language, so <LI> and <li> are potentially two different tags.
- End tags are required. In HTML, some tags which actually contain elements do not require the end tag. The most common of these is the <p> tag; XHTML requires that the closing </p> tag be used. For singleton tags, you should include a trailing slash in the tag itself, e.g. <br /> to get a line break.
- Attributes must be quoted and include values. What this means is that non-quoted attributes are invalid, and attributes which used to stand alone must now be written as name="value" pairs. For example, <hr noshade="noshade" /> adds the noshade attribute to the <hr /> tag.
- Do a second validation. The last step is to validate your XHTML again. This will tell you of any additional problems or issues with your code.
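Well-formedness as described above is exactly what an XML parser enforces, so you can sanity-check a fragment with Python's standard library. A minimal sketch (the two snippets are illustrative examples of nested versus overlapping tags):

```python
import xml.etree.ElementTree as ET

nested      = '<p>Some <em>emphasized</em> text</p>'   # <em> closed inside <p>: fine
overlapping = '<p>Some <em>emphasized</p> text</em>'   # <em> overlaps <p>: invalid

def is_well_formed(fragment):
    """Return True if the fragment parses as well-formed XML (and thus XHTML)."""
    try:
        ET.fromstring(fragment)
        return True
    except ET.ParseError:
        return False

print(is_well_formed(nested))       # True
print(is_well_formed(overlapping))  # False
```

Note that this checks well-formedness only; full XHTML validation against the DTD still needs a validator.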
<urn:uuid:925756a8-3952-4c2d-a903-cd51e89abd56>
3.71875
489
Tutorial
Software Dev.
68.303894
In an effort to stop global-scale warming, the Framework Convention on Climate Change was concluded in 1992 at the Earth Summit held in Brazil, and it went into effect in 1994. At present, a total of 187 countries including Japan, as well as the European Community, have ratified the convention. In order to achieve the goal of this convention, a protocol making it mandatory to reduce emissions of six types of greenhouse gases, including carbon dioxide, was adopted in 1997 at the Third Session of the Conference of the Parties to the United Nations Framework Convention on Climate Change (COP3) held in Kyoto. The Kyoto Protocol finally comes into effect in February 2005, following its ratification by Russia in November 2004. This protocol obliges developed countries to reduce greenhouse gas emissions by specified percentages relative to 1990 levels (6% for Japan, 7% for the U.S., and 8% for the EU) during the period 2008 to 2012. In Japan, the central and local governments as well as private corporations and citizens have been working on introducing systems to achieve that reduction target, cooperating internationally through measures such as emissions trading. However, greenhouse gas emissions in Japan are actually on the increase, so that a reduction of 14% has become necessary to achieve the said target. Against this background, besides reducing the amount of carbon dioxide emitted, which is a matter of course, absorption and fixation of carbon dioxide by plants, namely forests, is expected to be highly effective, and concrete measures have already started. In this special report, the current situation surrounding environmental conservation technologies that use plants as a means of absorbing carbon dioxide will be explained.

Incidentally, plants, while expected to serve as a means of absorbing carbon dioxide, must be properly maintained to function, enabling humans and nature to live together and conserving biological diversity, all in an effort to sustain the healthy state of our country. Accordingly, this special report also covers efforts to conserve primitive landscapes, to restore places where nature has been destroyed, and the status of and thinking behind the application of countermeasure technologies at city parks and green areas in factories and schools, in addition to activities to create and maintain forests as a means of absorbing carbon dioxide.
<urn:uuid:03d13a74-9d69-4001-966c-65be0057eb08>
3.375
464
Knowledge Article
Science & Tech.
23.504793
Notional lox/LH2 rocket engine, 101,988 kN. Study 1964. Isp = 459 s. Used on the Rombus launch vehicle.

Thrust (sl): 79,768.100 kN (17,932,582 lbf) = 8,134,205 kgf. Status: Study 1964.
Thrust: 101,988.00 kN (22,927,814 lbf). Specific impulse: 459 s. Specific impulse, sea level: 359 s. Burn time: 215 s.

Associated Launch Vehicles

Rombus - American SSTO VTOVL orbital launch vehicle. Bono's original design for a ballistic single-stage-to-orbit (not quite - it dropped liquid hydrogen tanks on the way up) heavy-lift launch vehicle. The recoverable vehicle would re-enter, using its actively cooled plug nozzle as a heat shield.

Lox/LH2 - Liquid oxygen was the earliest, cheapest, safest, and eventually the preferred oxidiser for large space launchers. Its main drawback is that it is moderately cryogenic, and therefore not suitable for military uses where storage of the fuelled missile and quick launch are required. Liquid hydrogen was identified by all the leading rocket visionaries as the theoretically ideal rocket fuel. It had big drawbacks, however - it was highly cryogenic, and it had a very low density, making for large tanks. The United States mastered hydrogen technology for the highly classified Lockheed CL-400 Suntan reconnaissance aircraft in the mid-1950s. The technology was transferred to the Centaur rocket stage program, and by the mid-1960s the United States was flying the Centaur and Saturn upper stages using the fuel. It was adopted for the core of the space shuttle, and Centaur stages still fly today.

Rombus stage - Lox/LH2 propellant rocket stage. Loaded/empty mass 5,102,041/306,175 kg. Thrust 101,900.00 kN. Vacuum specific impulse 455 seconds. 36 x plug-nozzle engines (20 atm chamber pressure, 7:1 mixture ratio).
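Specific impulse in seconds converts directly to effective exhaust velocity via v_e = Isp · g0. A quick Python check against the Isp figures listed above:

```python
G0 = 9.80665  # standard gravity, m/s^2

def exhaust_velocity(isp_seconds):
    """Effective exhaust velocity (m/s) from specific impulse given in seconds."""
    return isp_seconds * G0

print(exhaust_velocity(459))  # vacuum Isp    -> ~4501 m/s
print(exhaust_velocity(359))  # sea-level Isp -> ~3521 m/s
```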
<urn:uuid:c5e0c2fc-20f8-4386-9d1b-145b57411e6b>
2.796875
469
Knowledge Article
Science & Tech.
71.770254
Although spotted owls may be more well-known, they are not the only animals that rely on the dwindling old-growth forests of the Pacific Northwest... By Donald W. Thomas WHAT DO BATS AND NORTHERN spotted owls have in common? Both are nocturnal and secretive and both depend on old-growth forests for their survival in the Pacific Northwest. Bats and spotted owls are in good company. At least 14 species of vascular plants, 16 species of birds, 6 species of non-bat mammals, and 11 amphibians either depend upon or reach their peak abundances in old-growth forests of the West Coast. Yet old-growth forests are disappearing. Place yourself on just about any mountain top from California to British Columbia and look out over the facing slopes. The odds are that any forest you see will be a patchwork of clearcuts and regrowth less than 100 years old. The original stands of massive 200-year-old Douglas fir, 5 to 10 feet in diameter and stretching up over 100 feet, are mostly gone. One hundred years ago there were about 30 million acres of old-growth forest in California, Oregon, and Washington. Today, only about 17 percent remains, and almost all of it is on public lands controlled by the U.S. Forest Service. In the early 1980s old-growth was disappearing so rapidly that biologists predicted it would be gone by the turn of the century. Many questions were raised. What would happen if old-growth simply ceased to exist? Was old-growth a specific wildlife habitat or simply an age category of forest? How many species depend on old-growth as critical habitat in the rich plant and animal communities of the western slopes of the Cascades, Coast Ranges, and Sierra Nevadas? Would we witness widespread extinctions with the removal of old-growth? At that time, no one could be sure.
Given our lack of any fundamental knowledge about the relationships between plants and animals and their forest habitats, the answers would only come from careful study. Lobbying by concerned biologists stimulated the Forest Service to propose, and Congress to fund, a study specifically focusing on old-growth habitat. Beginning in 1983, the Old-Growth Forest Wildlife Habitat Program (OGFWHP) was faced with the daunting task of determining just how plants and animals made use of Douglas-fir forests of different ages and whether they required undisturbed old-growth to ensure their long-term survival. Fortunately, far-sighted planners included bats in the study. When I was called in to run the OGFWHP bat study in 1984, I was struck by how little we knew about bats in natural habitats. Bats are widespread and often seen, but along with most bat biologists I would first look for them in buildings. Virtually nothing was known about the types of roosts that bats typically use in undeveloped forest habitats as opposed to rural landscapes. For the common and widespread little brown bat (Myotis lucifugus), only two descriptions of natural roosts had ever been published. And even less was known about most of the 11 other species that we were likely to encounter. MY FIRST CHALLENGE WAS to develop a sampling method that would allow us to observe and identify the various bat species in forest stands, regardless of their roosting preferences or the age and structure of the forest. Searching for roosts would be difficult and it seemed inefficient. Direct observations were obviously out. Capturing bats either with specialized harp or Tuttle traps or with fine nylon bird nets* was also fraught with problems. What if the bats flew in or above the forest canopy? With their sophisticated echolocation system, would all bats be equally prone to capture? Would feeding bats pay more attention and be less likely to be caught? If so, we could possibly overlook important feeding habitats. 
A possible means of detecting and even identifying bats was to eavesdrop on their echolocation calls. Most bats continually betray their presence by emitting relatively loud echolocation calls as they navigate through forests or hunt for small insects. If moths and certain other nocturnal insects can use their specialized ears to detect bats at great distances (and thus avoid becoming a meal), why couldn't we? With microphones sensitive to the high-frequency calls of bats, and electronic circuitry to reduce frequencies to the range audible to the human ear, we could simply listen in on bats as they went about their normal activities. The passive detecting system of a bat detector had several features that lent itself to the type of habitat-use survey that the OGFWHP study required. First, most insectivorous bats echolocate continuously as they commute or hunt, so if they are present they will be heard. Bat detectors sample a volume of air 30 or more feet in radius and are likely to pick up the sounds of far more bats than traps or nets would ever catch. Microphones can also be raised high above the ground to listen deep into, and even above, the forest canopy where nets or traps could never be hoisted. And unlike nets or traps, detectors do not rely on bats making navigational errors and so are less affected by the bats' attentiveness or agility. Finally, bat detectors can provide information not only on the presence of a bat, but also on its identity and what it is doing. Many species have their own specific patterns of echolocation calls that allow us to identify them much as bird-watchers do with bird calls [BATS, Summer 1991]. For example, little brown bats sweep from over 70 kHz (70,000 cycles per second) down to almost exactly 40 kHz and they do so in 5-7 milliseconds. Big brown bats (Eptesicus fuscus) sweep from about 40 kHz to 25 kHz, stretching this low-frequency part of the call out over 8-10 milliseconds. 
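The two call signatures described here are enough to sketch the kind of rule-based classification a detector survey relies on. A minimal Python illustration; the sweep endpoints come from the frequencies quoted in the text, but the tolerance bands around them are my own assumptions, and a real system would need far more care:

```python
def classify_call(start_khz, end_khz, duration_ms):
    """Crude species guess from an FM sweep, using the two signatures in the text."""
    # Little brown bat: sweeps from over 70 kHz down to ~40 kHz in 5-7 ms.
    if start_khz >= 70 and 38 <= end_khz <= 42 and 5 <= duration_ms <= 7:
        return "little brown bat (Myotis lucifugus)"
    # Big brown bat: sweeps from ~40 kHz to ~25 kHz over 8-10 ms.
    if 38 <= start_khz <= 42 and 23 <= end_khz <= 27 and 8 <= duration_ms <= 10:
        return "big brown bat (Eptesicus fuscus)"
    return "unidentified"

print(classify_call(72, 40, 6))   # little brown bat
print(classify_call(40, 25, 9))   # big brown bat
```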
By capturing bats, recording their calls, and then analyzing the calls in the laboratory we found that we could easily recognize the echolocation signatures of some bats, but that others could not be separated. For instance, the California, northern long-eared, western long-eared, and western small-footed bats (Myotis californicus, M. evotis, M. ciliolabrum) had similar echolocation calls. Despite these limitations, we were able to assign the 12 species of bats present in the Pacific Northwest to one of seven groups. When bats are simply commuting, say from their day roost to a feeding site, they send out calls at a relatively low rate of about 1-2 pulses per second. This allows them to emit their signals, receive the echo, and have sufficient information to allow them to navigate down paths or through forests, avoiding large obstacles. When bats are hunting, however, they require considerably more information. They must be able to capture small insects when the combined speed of both hunter and prey covers a distance 6-11 yards per second. Once an insect is detected, most bats dramatically increase their pulse repetition rate to over 100 per second in order to get a more precise bearing on the insect and determine its position, relative speed, direction, and maybe even size and surface features. This high repetition rate "feeding buzz" may last less than a half second, but when we hear it over a bat detector, it is a solid indication that a bat is finding insects and trying to capture them. So, by eavesdropping on bat echolocation calls I could determine not only that a bat was present, but also what kind it was and whether or not it was trying to feed. This was all the information that I needed to determine whether old-growth forests were important for bats. 
To get the answer, my first step was to build a series of automated bat detectors that would turn on in the evening, record both bat calls and the time on a small cassette recorder, and then turn off in the morning to save battery power. OVER THE SUMMERS OF 1984 and 1985, we sampled bat activity in 90 different Douglas-fir forest stands in the Cascade Mountains and Coast Ranges of Washington and Oregon. From the 3,000 bats we detected in Washington and the 6,000 that we detected in Oregon, several important trends became clear. Bats were far more common in old-growth forests than they were in forests that had been disturbed either by logging or by fires. In Washington, all seven Myotis species were three to six times more abundant in old-growth than they were in disturbed forests. The same pattern held in Oregon where the Myotis species were three to four times and silver-haired bats (Lasionycteris noctivagans) were 10 times more common in old-growth forests. The fact that all nine bat species we commonly encountered showed a clear association with old-growth forests in Oregon or Washington is a strong indication that old-growth is important habitat for bats. But what does old-growth offer that disturbed and younger forest can't? When I examined the data carefully I found that most bats didn't remain in the forests to feed. There was a peak of activity for 15 minutes as bats left their day roosts, but through the rest of the night the stands were almost quiet. Feeding buzzes were concentrated elsewhere; they were over 10 times more common above streams and ponds than they were in forests. This made sense because in a parallel study we showed that the small insects that most bats hunt were far more abundant over water than they were inside forest stands. The pattern that we observed indicates that old-growth forests offer critical roosting habitat for most of the bat species that inhabit the Pacific Northwest. 
We know that apart from the impressive size of the trees, old-growth forests are characterized by an abundance of old or dead trees that have had the time to develop the broken tops, cracks, hollows, and scaling bark that can serve as roosting sites for bats. Without old-growth forests, I believe that we would witness a dramatic decline in populations of not just one species of bat, but of almost all of the species currently found in the Pacific Northwest. There is reason for both pessimism and optimism when considering the future of old-growth forests and bats on the West Coast. On the pessimistic side, old-growth harvesting continues, albeit at a reduced pace. Harvesting in the National Forests and on land controlled by the Bureau of Land Management is slowly reducing old-growth to isolated tiny patches. In Oregon and southern Washington, 39 to 50 percent of the old-growth patches are 30 acres or less. Wind penetrates to the center of patches this small and, over time, will blow down the majority of damaged and dead trees. Because these are precisely the trees that bats are likely to use as roosting sites, this fragmentation of old-growth forests may dramatically reduce the value of remaining old-growth as bat habitat. On the optimistic side, the rate of old-growth harvest has dropped dramatically over the past few years. One of the reasons is that the Forest Service and Bureau of Land Management, who control approximately 80 percent of the remaining old-growth, are obliged by law to ensure that adequate habitat remains for the conservation of all plant and animal species. Although bats still may not be able to stimulate the public pressure required to set aside large tracts of valuable old-growth, concern for the conservation of the northern spotted owl has done just that. 
Pressure from environmental groups and good government planning has resulted in the protection of sizeable tracts of old-growth Douglas-fir forests throughout Washington and Oregon to ensure adequate breeding habitat for spotted owls. While bats and other forest species can undoubtedly benefit from efforts to ensure the survival of the spotted owl, it is unfortunate that our perception of conservation issues is so often limited to protection of single high-visibility species. It is tempting to believe that low-visibility species, such as bats, can ride on the coat-tails of high-profile conservation movements. There are two problems, however, with this thinking. When we focus our attention on a single species like the northern spotted owl, we risk becoming complacent. If future studies show owl populations to be stable and healthy, what arguments will we use to push for continued monitoring of the low-visibility species like bats? We must also remember that bats are not owls and they almost certainly have different requirements. Setting aside tracts of forest that have been identified as good owl habitat does not necessarily mean that we have acted wisely to ensure the well-being of bat populations into the 21st century. The Old-Growth Forest Wildlife Habitat Program is an example of good management planning, but it is just the beginning. We showed that old-growth forests are critical habitat for many other species of animals in the Pacific Northwest, but we know little about why this is so. It will be up to future studies to answer the remaining questions and provide a sound foundation for long-term planning and management. *Two methods are commonly used to capture and study bats, the harp or Tuttle trap and mist nets. The trap, perfected by Merlin Tuttle, resembles an upright bedspring strung vertically with monofilament fishing line. When bats strike the line, they fall into special collecting bag below from which they cannot escape. 
Mist nets made of extremely fine nylon mesh have long been used to capture flying birds, but they are also excellent to capture bats. Both methods allow biologists to capture, study, and release bats without harming them. In Oregon, silver-haired bats, a tree-dwelling species, were discovered to be 10 times more abundant in old-growth forests than in forests that had been logged. The cracks, hollows, and scaling bark of aging or dead trees provide ideal roosts for many species of tree-dwelling bats. Stands of old-growth forest can be 200 or more years old in the Pacific Northwest. But today few of these giants remain and they too are disappearing rapidly, spelling trouble for the many species, including bats, that need these ancient forests for their homes. The mosaic of clearcuts is now a familiar sight throughout the Pacific Northwest. The fragmented forests and relatively young regrowth severely limits use by the animals that traditionally make the old forest their home. Forestry management must ensure that an adequate mix of old and younger forest is maintained if wildlife is to survive. Above: Bat detectors helped the research team discover how and when bats use old-growth forests. They learned that bats quickly left their day roosts in the forest to feed over nearby streams and ponds where the characteristic "feeding buzzes" of hunting bats, such as this silver-haired bat, were commonly heard. Although little brown bats (left) and western long-eared bats (right) are today more often encountered in buildings, their natural homes likely are in old-growth forests. The study indicates that all seven Myotis species found in the Pacific Northwest are far more abundant in old growth. Donald Thomas is director of the Groupe de Recherche en Écologie, Nutrition, et Énergétique and Professeur Agrégé in the Département de Biologie, Université de Sherbrooke in Québec. 
His research on bat and bird ecology and energetics has taken him to Central and South America, Africa, Polynesia, and the rainy
<urn:uuid:e3646955-048c-41c1-a185-e7924c142e0e>
3.59375
3,230
Academic Writing
Science & Tech.
46.034588
aliphatic compound

aliphatic compound, any chemical compound belonging to the organic class in which the atoms are not linked together to form a ring. One of the major structural groups of organic molecules, the aliphatic compounds include the alkanes, alkenes, and alkynes, and substances derived from them, actually or in principle, by replacing one or more hydrogen atoms by atoms of other elements or groups of atoms.
<urn:uuid:5dddc4fd-d59a-401e-b853-ed321baf98d5>
2.875
111
Knowledge Article
Science & Tech.
33.496429
The self-ionization of water (also autoionization of water, and autodissociation of water) is the chemical reaction in which two water molecules react to produce a hydronium (H3O+) and a hydroxide ion (OH−): It is an example of autoprotolysis, and relies on the amphoteric nature of water. In Computer science, ACID ( Atomicity Consistency Isolation Durability) is a set of properties that guarantee that Database transactions are In Chemistry, a base is most commonly thought of as an aqueous substance that can accept Protons This refers to the Brønsted-Lowry theory of acids and Acid-base extraction is a procedure using sequential Liquid-liquid extractions to purify Acids and bases from mixtures based on their chemical properties Acid-base homeostasis is the part of Human homeostasis concerning the proper balance between Acids and bases, in other words the PH. An acidity function is a measure of the Acidity of a medium or solvent system usually expressed in terms of its ability to donate protons to (or accept protons from a For an individual weak acid or weak base component see Buffering agent. pH is the measure of the acidity or alkalinity of a Solution. The proton affinity, E pa of a Anion or of a neutral Atom or Molecule is a measure of its gas-phase basicity. 
In Computer science, ACID ( Atomicity Consistency Isolation Durability) is a set of properties that guarantee that Database transactions are A mineral acid is an Acid derived by Chemical reaction from inorganic Minerals as opposed to Organic acids These have Hydrogen An organic acid is an Organic compound with Acidic properties A Strong acid is an Acid that Ionizes completely in an Aqueous solution (not in the case of Sulfuric acid as it is diprotic A superacid is an Acid with an Acidity greater than that of 100% Sulfuric acid, which has a Hammett acidity function ( H 0 A weak acid is an Acid that does not completely donate all of its hydrogens when dissolved in water In Chemistry, a base is most commonly thought of as an aqueous substance that can accept Protons This refers to the Brønsted-Lowry theory of acids and An organic base is an Organic compound which acts as a base. Organic bases are usually but not always proton acceptors In Chemistry, a base is most commonly thought of as an aqueous substance that can accept Protons This refers to the Brønsted-Lowry theory of acids and In Chemistry, a superbase is an extremely strong base. There is no commonly accepted definition for what qualifies as a superbase but most chemists would accept As the name suggests a non-nucleophilic base is an organic base that is a very Strong base but at the same time a poor Nucleophile. In chemistry a weak base is a Chemical base that does not Ionize fully in an Aqueous solution. In Chemistry, hydronium is the obsolete name for the Cation H 3 O + derived from Protonation of Water In Chemistry, hydroxide is the most common name for the diatomic Anion OH− consisting of Oxygen and Hydrogen Water, however pure, is not a simple collection of H2O molecules. Even in "pure" water, sensitive equipment can detect a very slight electrical conductivity of 0. Electrical conductivity or specific conductivity is a measure of a material's ability to conduct an Electric current. 055 µS·cm-1. 
According to the theories of Svante Arrhenius, this conductivity must be due to the presence of ions. The self-ionization reaction has a chemical equilibrium constant of Keq = [H3O+][OH−] / [H2O]² = 3.23 × 10−18. The corresponding acidity constant is Ka = Keq × [H2O] = [H3O+][OH−] / [H2O] = 1.8 × 10−16. For reactions in water (or dilute aqueous solutions), the molarity of water, [H2O], is practically constant, and by convention it is omitted from the acidity constant expression. The resulting equilibrium constant is called the ionization constant, dissociation constant, self-ionization constant, or ion product of water, and is symbolized by Kw. At standard ambient temperature and pressure (SATP), about 25 °C (298 K), Kw = [H3O+][OH−] = 1.0×10−14. Pure water ionizes (dissociates) into equal amounts of H3O+ and OH−, so their molarities are equal: [H3O+] = [OH−]. At SATP both concentrations are very low, 1.0 × 10−7 mol/L, and the ions are only rarely produced: a randomly selected water molecule will dissociate roughly once every 10 hours. Since the concentration of water molecules in water is largely unaffected by dissociation and [H2O] is approximately 56 mol/L, it follows that for every 5.6×108 water molecules, one pair exists as ions. Any solution in which the H3O+ and OH− concentrations equal each other is considered a neutral solution.
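The arithmetic connecting these constants can be checked with a short script. The sketch below (Python) uses only the values quoted above: Kw = 1.0×10−14 and [H2O] ≈ 56 mol/L.

```python
# Reproduce the equilibrium numbers quoted in the text.
import math

Kw = 1.0e-14          # ion product of water at SATP, (mol/L)^2
H2O = 56.0            # approximate molarity of pure water, mol/L

H3O = math.sqrt(Kw)   # neutral water: [H3O+] = [OH-] = sqrt(Kw)
Ka = Kw / H2O         # acidity constant Ka = [H3O+][OH-] / [H2O]
Keq = Kw / H2O**2     # full equilibrium constant, with [H2O]^2 in the denominator

print(f"[H3O+] = {H3O:.1e} mol/L")                  # 1.0e-07
print(f"Ka     = {Ka:.1e}")                         # 1.8e-16
print(f"Keq    = {Keq:.1e}")                        # 3.2e-18
print(f"molecules per ion pair = {H2O / H3O:.1e}")  # 5.6e+08
```

Note how dividing Kw by [H2O] once gives Ka, and dividing twice gives Keq, matching the constants in the text.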
Absolutely pure water is neutral, although even trace amounts of impurities can affect these ion concentrations, and the water may then no longer be neutral. Kw is sensitive to both pressure and temperature; it increases when either increases. Deionized water (also called DI water) is water that has had most impurity ions common in tap water or natural water sources (such as Na+ and Cl−) removed by distillation or some other water purification method. Removal of all ions from water is next to impossible, since water self-ionizes quickly to reach equilibrium. By definition, pKw = −log10 Kw. At SATP, pKw = −log10 (1.0×10−14) = 14.0. The value of pKw varies with temperature: as temperature increases, pKw decreases, and as temperature decreases, pKw increases (for temperatures up to about 250 °C). This means that the ionization of water typically increases with temperature. There is also a (usually small) dependence on pressure; ionization increases with increasing pressure. The dependence of water ionization on temperature and pressure has been well investigated, and a standard formulation exists. pH is a logarithmic measure of the acidity (or alkalinity) of an aqueous solution. By definition, pH = −log10 [H3O+]. Since [H3O+] = [OH−] in a neutral solution, a neutral aqueous solution has pH = 7 at SATP. Self-ionization is the process that determines the pH of water.
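The pKw and pH definitions above are both instances of the same p-notation, a negative base-10 logarithm. A minimal sketch (Python):

```python
# p-notation: p(x) = -log10(x), as in pH and pKw.
import math

def p(x: float) -> float:
    """Return -log10(x)."""
    return -math.log10(x)

Kw = 1.0e-14                   # ion product of water at SATP
pKw = p(Kw)                    # pKw = 14.0
pH_neutral = p(math.sqrt(Kw))  # neutral water: [H3O+] = sqrt(Kw)

print(f"pKw = {pKw:.1f}, neutral pH = {pH_neutral:.1f}")  # pKw = 14.0, neutral pH = 7.0
```

Since neutrality means [H3O+] = [OH−] = sqrt(Kw), the neutral pH is always exactly half of pKw, which is why it is 7 only at the temperature where pKw = 14.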
Since the concentration of hydronium at SATP (approximately 25 °C) is 1.0×10−7 mol/L, the pH of pure liquid water at this temperature is 7. Since Kw increases as temperature increases, hot water has a higher concentration of hydronium than cold water (and hence a lower pH), but this does not mean it is more acidic, as the hydroxide concentration is higher by the same amount. Geissler et al. have determined that electric field fluctuations in liquid water cause molecular dissociation. They propose the following sequence of events, which takes place in about 150 fs: the system begins in a neutral state; random fluctuations in molecular motion occasionally (about once every 10 hours per water molecule) produce an electric field strong enough to break an oxygen–hydrogen bond, resulting in a hydroxide ion (OH−) and a hydronium ion (H3O+); the proton of the hydronium ion travels along water molecules by the Grotthuss mechanism; and a change in the hydrogen-bond network in the solvent isolates the two ions, which are stabilized by solvation. Within about 1 picosecond, however, a second reorganization of the hydrogen-bond network allows rapid proton transfer down the electric potential difference and subsequent recombination of the ions. This timescale is consistent with the time it takes for hydrogen bonds to reorient themselves in water.
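The point that hot water has a lower pH yet remains neutral can be made concrete with a few pKw values at different temperatures. The figures below are approximate values assumed from standard tables, not taken from this article; the neutral pH at each temperature is simply pKw / 2, since neutrality means [H3O+] = [OH−] = sqrt(Kw).

```python
# Assumed approximate pKw values (temperature in deg C -> pKw).
pKw_by_T = {0: 14.94, 25: 14.00, 50: 13.26, 100: 12.25}

for T in sorted(pKw_by_T):
    pKw = pKw_by_T[T]
    # Neutral pH is half of pKw; at every temperature [H3O+] = [OH-].
    print(f"{T:>3} deg C: pKw ~ {pKw:.2f}, neutral pH ~ {pKw / 2:.2f}")
```

By this reckoning, neutral water near boiling sits around pH 6.1, yet it is no more acidic than water at pH 7 and 25 °C, because its hydroxide concentration has risen in step.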